| Field | Type | Range |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1 to 900k |
| metadata | stringlengths | 2 to 438k |
| id | stringlengths | 5 to 122 |
| last_modified | null | |
| tags | sequencelengths | 1 to 1.84k |
| sha | null | |
| created_at | stringlengths | 25 to 25 |
| arxiv | sequencelengths | 0 to 201 |
| languages | sequencelengths | 0 to 1.83k |
| tags_str | stringlengths | 17 to 9.34k |
| text_str | stringlengths | 0 to 389k |
| text_lists | sequencelengths | 0 to 722 |
| processed_texts | sequencelengths | 1 to 723 |
text2text-generation
transformers
# ke-t5 base Pretrained T5 model on Korean and English. See the [GitHub](https://github.com/AIRC-KETI/ke-t5) repository, the [paper](https://aclanthology.org/2021.findings-emnlp.33/), and the [Korean paper](https://koreascience.kr/article/CFKO202130060717834.pdf) for more details. ## How to use ```python from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("KETI-AIR/ke-t5-small-newslike") tokenizer = AutoTokenizer.from_pretrained("KETI-AIR/ke-t5-small-newslike") ``` ## BibTeX entry and citation info ```bibtex @inproceedings{kim-etal-2021-model-cross, title = "A Model of Cross-Lingual Knowledge-Grounded Response Generation for Open-Domain Dialogue Systems", author = "Kim, San and Jang, Jin Yea and Jung, Minyoung and Shin, Saim", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021", month = nov, year = "2021", address = "Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-emnlp.33", doi = "10.18653/v1/2021.findings-emnlp.33", pages = "352--365", abstract = "Research on open-domain dialogue systems that allow free topics is challenging in the field of natural language processing (NLP). The performance of the dialogue system has been improved recently by the method utilizing dialogue-related knowledge; however, non-English dialogue systems suffer from reproducing the performance of English dialogue systems because securing knowledge in the same language with the dialogue system is relatively difficult. Through experiments with a Korean dialogue system, this paper proves that the performance of a non-English dialogue system can be improved by utilizing English knowledge, highlighting the system uses cross-lingual knowledge. For the experiments, we 1) constructed a Korean version of the Wizard of Wikipedia dataset, 2) built Korean-English T5 (KE-T5), a language model pre-trained with Korean and English corpus, and 3) developed a knowledge-grounded Korean dialogue model based on KE-T5. We observed the performance improvement in the open-domain Korean dialogue model even only English knowledge was given. The experimental results showed that the knowledge inherent in cross-lingual language models can be helpful for generating responses in open dialogue systems.", } ```
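The card loads only the bare encoder-decoder; for actual text2text generation the conditional-generation head is the usual choice. A minimal sketch, assuming the checkpoint ships LM-head weights; the masked Korean sentence is adapted from the card's widget example:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("KETI-AIR/ke-t5-small-newslike")
model = T5ForConditionalGeneration.from_pretrained("KETI-AIR/ke-t5-small-newslike")

# KE-T5 is pretrained with T5's span-corruption objective, so the model
# fills sentinel tokens such as <extra_id_0> rather than following instructions.
text = "아버지가 <extra_id_0> 들어가신다."
input_ids = tokenizer(text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```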
{"language": ["ko", "en"], "license": "apache-2.0", "tags": ["t5"], "eos_token": "</s>", "widget": [{"text": "\uc544\ubc84\uc9c0\uac00 \ubc29\uc5d0 \ub4e4\uc5b4\uac00\uc2e0\ub2e4.</s>"}]}
KETI-AIR/ke-t5-small-newslike
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "t5", "text2text-generation", "ko", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ko", "en" ]
TAGS #transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #ko #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# ke-t5 base Pretrained T5 model on Korean and English. See the GitHub repository, the paper, and the Korean paper for more details. ## How to use ## BibTeX entry and citation info
[ "# ke-t5 base\n\nPretrained T5 model on Korean and English. See the GitHub repository, the paper, and the Korean paper for more details.", "## How to use", "## BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #ko #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# ke-t5 base\n\nPretrained T5 model on Korean and English. See the GitHub repository, the paper, and the Korean paper for more details.", "## How to use", "## BibTeX entry and citation info" ]
text2text-generation
transformers
# ke-t5 base Pretrained T5 model on Korean and English. See the [GitHub](https://github.com/AIRC-KETI/ke-t5) repository, the [paper](https://aclanthology.org/2021.findings-emnlp.33/), and the [Korean paper](https://koreascience.kr/article/CFKO202130060717834.pdf) for more details. ## How to use ```python from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("KETI-AIR/ke-t5-small") tokenizer = AutoTokenizer.from_pretrained("KETI-AIR/ke-t5-small") ``` ## BibTeX entry and citation info ```bibtex @inproceedings{kim-etal-2021-model-cross, title = "A Model of Cross-Lingual Knowledge-Grounded Response Generation for Open-Domain Dialogue Systems", author = "Kim, San and Jang, Jin Yea and Jung, Minyoung and Shin, Saim", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021", month = nov, year = "2021", address = "Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-emnlp.33", doi = "10.18653/v1/2021.findings-emnlp.33", pages = "352--365", abstract = "Research on open-domain dialogue systems that allow free topics is challenging in the field of natural language processing (NLP). The performance of the dialogue system has been improved recently by the method utilizing dialogue-related knowledge; however, non-English dialogue systems suffer from reproducing the performance of English dialogue systems because securing knowledge in the same language with the dialogue system is relatively difficult. Through experiments with a Korean dialogue system, this paper proves that the performance of a non-English dialogue system can be improved by utilizing English knowledge, highlighting the system uses cross-lingual knowledge. For the experiments, we 1) constructed a Korean version of the Wizard of Wikipedia dataset, 2) built Korean-English T5 (KE-T5), a language model pre-trained with Korean and English corpus, and 3) developed a knowledge-grounded Korean dialogue model based on KE-T5. We observed the performance improvement in the open-domain Korean dialogue model even only English knowledge was given. The experimental results showed that the knowledge inherent in cross-lingual language models can be helpful for generating responses in open dialogue systems.", } ```
{"language": ["en", "ko"], "license": "apache-2.0", "tags": ["t5"], "eos_token": "</s>", "widget": [{"text": "\uc544\ubc84\uc9c0\uac00 \ubc29\uc5d0 \ub4e4\uc5b4\uac00\uc2e0\ub2e4.</s>"}]}
KETI-AIR/ke-t5-small
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "t5", "text2text-generation", "en", "ko", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en", "ko" ]
TAGS #transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #en #ko #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# ke-t5 base Pretrained T5 model on Korean and English. See the GitHub repository, the paper, and the Korean paper for more details. ## How to use ## BibTeX entry and citation info
[ "# ke-t5 base\n\nPretrained T5 model on Korean and English. See the GitHub repository, the paper, and the Korean paper for more details.", "## How to use", "## BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #en #ko #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# ke-t5 base\n\nPretrained T5 model on Korean and English. See the GitHub repository, the paper, and the Korean paper for more details.", "## How to use", "## BibTeX entry and citation info" ]
text-generation
transformers
# Clever bot DialoGPT Model
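The card names the model but gives no usage snippet; below is a minimal sketch following the standard DialoGPT single-turn pattern (the prompt and generation settings are illustrative assumptions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("KOSTAS/DialoGPT-small-Cleverbot")
model = AutoModelForCausalLM.from_pretrained("KOSTAS/DialoGPT-small-Cleverbot")

# Encode one user turn terminated by the EOS token, then generate the bot's reply.
input_ids = tokenizer.encode("Hello, how are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```

The same pattern applies to the other DialoGPT checkpoints listed below, substituting the model id.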
{"tags": ["conversational"]}
KOSTAS/DialoGPT-small-Cleverbot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Clever bot DialoGPT Model
[ "# Clever bot DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Clever bot DialoGPT Model" ]
text-generation
transformers
# RickBot built for [Chai](https://chai.ml/) Make your own [here](https://colab.research.google.com/drive/1o5LxBspm-C28HQvXN-PRQavapDbm5WjG?usp=sharing)
{"tags": ["conversational"]}
KP2500/KPBot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# RickBot built for Chai Make your own here
[ "# RickBot built for Chai\nMake your own here" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# RickBot built for Chai\nMake your own here" ]
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
Kai0857/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[ "# Harry Potter DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Harry Potter DialoGPT Model" ]
text-generation
transformers
# Peralta DialoGPT Model
{"tags": ["conversational"]}
Kail91/DialoGPT-small-PeraltaBot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Peralta DialoGPT Model
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
# Rick DialoGPT model
{"tags": ["conversational"]}
Kairu/DialoGPT-small-Rick
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Rick DialoGPT model
[ "# Rick DialoGPT model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Rick DialoGPT model" ]
text-generation
transformers
# Rick bot chat
{"tags": ["conversational"]}
Kairu/RICKBOT
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Rick bot chat
[ "# Rick bot chat" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Rick bot chat" ]
text-generation
transformers
# my awesome model
{"tags": ["conversational"]}
KakoSi/Smolmm3
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# my awesome model
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
# My Awesome Model
{"tags": ["conversational"]}
KakoSi/opaazzi
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# My Awesome Model
[ "# My Awesome Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# My Awesome Model" ]
text-generation
transformers
# Dona Julia DialoGPT Model
{"tags": ["conversational"]}
Kaledmgo/DialoGPT-small-donajulia
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Dona Julia DialoGPT Model
[ "# Dona Julia DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Dona Julia DialoGPT Model" ]
fill-mask
transformers
### Overview SinBerto is a small language model trained on a small news corpus. SinBerto is trained on Sinhala, a low-resource language compared to other languages. ### Model Specifications Model: [RoBERTa](https://arxiv.org/abs/1907.11692) vocab_size=52_000, max_position_embeddings=514, num_attention_heads=12, num_hidden_layers=6, type_vocab_size=1 ### How to use from the Transformers library from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("Kalindu/SinBerto") model = AutoModelForMaskedLM.from_pretrained("Kalindu/SinBerto") ### Or clone the model repo git lfs install git clone https://huggingface.co/Kalindu/SinBerto
{"language": "si", "tags": ["SinBERTo", "Sinhala", "roberta"]}
Kalindu/SinBerto
null
[ "transformers", "pytorch", "roberta", "fill-mask", "SinBERTo", "Sinhala", "si", "arxiv:1907.11692", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1907.11692" ]
[ "si" ]
TAGS #transformers #pytorch #roberta #fill-mask #SinBERTo #Sinhala #si #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #region-us
### Overview SinBerto is a small language model trained on a small news corpus. SinBerto is trained on Sinhala, a low-resource language compared to other languages. ### Model Specifications Model: RoBERTa vocab_size=52_000, max_position_embeddings=514, num_attention_heads=12, num_hidden_layers=6, type_vocab_size=1 ### How to use from the Transformers library from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("Kalindu/SinBerto") model = AutoModelForMaskedLM.from_pretrained("Kalindu/SinBerto") ### Or clone the model repo git lfs install git clone URL
[ "### Overview\n\nSinBerto is a small language model trained on a small news corpus. SinBerto is trained on Sinhala, a low-resource language compared to other languages.", "### Model Specifications\n\nModel: RoBERTa\n\nvocab_size=52_000,\nmax_position_embeddings=514,\nnum_attention_heads=12,\nnum_hidden_layers=6,\ntype_vocab_size=1", "### How to use from the Transformers library\n\nfrom transformers import AutoTokenizer, AutoModelForMaskedLM\n \ntokenizer = AutoTokenizer.from_pretrained(\"Kalindu/SinBerto\")\n\nmodel = AutoModelForMaskedLM.from_pretrained(\"Kalindu/SinBerto\")", "### Or clone the model repo\n\ngit lfs install\n\ngit clone URL" ]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #SinBERTo #Sinhala #si #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #region-us \n", "### Overview\n\nSinBerto is a small language model trained on a small news corpus. SinBerto is trained on Sinhala, a low-resource language compared to other languages.", "### Model Specifications\n\nModel: RoBERTa\n\nvocab_size=52_000,\nmax_position_embeddings=514,\nnum_attention_heads=12,\nnum_hidden_layers=6,\ntype_vocab_size=1", "### How to use from the Transformers library\n\nfrom transformers import AutoTokenizer, AutoModelForMaskedLM\n \ntokenizer = AutoTokenizer.from_pretrained(\"Kalindu/SinBerto\")\n\nmodel = AutoModelForMaskedLM.from_pretrained(\"Kalindu/SinBerto\")", "### Or clone the model repo\n\ngit lfs install\n\ngit clone URL" ]
null
null
demo file
{}
KalyanM/demo
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
demo file
[]
[ "TAGS\n#region-us \n" ]
null
null
Dummy model
{}
KalyanM/dummy
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
Dummy model
[]
[ "TAGS\n#region-us \n" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0604 - Precision: 0.9271 - Recall: 0.9381 - F1: 0.9326 - Accuracy: 0.9836 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2324 | 1.0 | 878 | 0.0688 | 0.9146 | 0.9264 | 0.9205 | 0.9816 | | 0.0517 | 2.0 | 1756 | 0.0620 | 0.9207 | 0.9329 | 0.9268 | 0.9829 | | 0.0301 | 3.0 | 2634 | 0.0604 | 0.9271 | 0.9381 | 0.9326 | 0.9836 | ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
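The card reports evaluation metrics but omits an inference example; a minimal sketch (the sample sentence is an illustrative assumption):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="KamSut/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```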
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9836370279759162}}]}]}
KamSut/distilbert-base-uncased-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-ner ===================================== This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset. It achieves the following results on the evaluation set: * Loss: 0.0604 * Precision: 0.9271 * Recall: 0.9381 * F1: 0.9326 * Accuracy: 0.9836 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.9.1 * Pytorch 1.9.0+cu102 * Datasets 1.11.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
fill-mask
transformers
AIOX Lab and SI2M Lab INSEA have joined forces to offer researchers, industrialists, and the NLP (Natural Language Processing) community the first intelligent open-source system that understands the Moroccan dialectal language "Darija". **DarijaBERT** is the first BERT model for the Moroccan Arabic dialect called “Darija”. It is based on the same architecture as BERT-base, but without the Next Sentence Prediction (NSP) objective. This model is the Arabizi-specific version of DarijaBERT: it was trained on a total of ~4.6 million sequences of the Darija dialect written in Latin letters. The model was trained on a dataset built from YouTube comments. More details about DarijaBERT are available in the dedicated GitHub [repository](https://github.com/AIOXLABS/DBert) **Loading the model** The model can be loaded directly using the Hugging Face library: ```python from transformers import AutoTokenizer, AutoModel DarijaBERT_tokenizer = AutoTokenizer.from_pretrained("SI2M-Lab/DarijaBERT-arabizi") DarijaBert_model = AutoModel.from_pretrained("SI2M-Lab/DarijaBERT-arabizi") ``` **Citation** If you use our models for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows (to be updated): ``` @article{gaanoun2023darijabert, title={Darijabert: a Step Forward in Nlp for the Written Moroccan Dialect}, author={Gaanoun, Kamel and Naira, Abdou Mohamed and Allak, Anass and Benelallam, Imade}, year={2023} } ``` **Acknowledgments** We gratefully acknowledge Google’s TensorFlow Research Cloud (TRC) program for providing us with free Cloud TPUs. <font size=2>**Warning** Because this model was trained on texts from social networks, it can unfortunately generate toxic outputs reflecting part of the training data.</font>
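A minimal masked-prediction sketch using one of the widget examples from the card's metadata:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="SI2M-Lab/DarijaBERT-arabizi")

# Widget example from the model card: Darija written in Latin script (Arabizi).
for prediction in fill_mask("Mchit njib [MASK] ."):
    print(prediction["token_str"], prediction["score"])
```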
{"language": "ar", "widget": [{"text": " Mchit njib [MASK] ."}, {"text": " Yak nta li [MASK] lih dik lhedra."}, {"text": " Ach [MASK] daba."}, {"text": " Lmghrib ajmal [MASK] fl3alam."}]}
SI2M-Lab/DarijaBERT-arabizi
null
[ "transformers", "pytorch", "safetensors", "bert", "fill-mask", "ar", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ar" ]
TAGS #transformers #pytorch #safetensors #bert #fill-mask #ar #autotrain_compatible #endpoints_compatible #region-us
AIOX Lab and SI2M Lab INSEA have joined forces to offer researchers, industrialists, and the NLP (Natural Language Processing) community the first intelligent open-source system that understands the Moroccan dialectal language "Darija". DarijaBERT is the first BERT model for the Moroccan Arabic dialect called “Darija”. It is based on the same architecture as BERT-base, but without the Next Sentence Prediction (NSP) objective. This model is the Arabizi-specific version of DarijaBERT: it was trained on a total of ~4.6 million sequences of the Darija dialect written in Latin letters. The model was trained on a dataset built from YouTube comments. More details about DarijaBERT are available in the dedicated GitHub repository Loading the model The model can be loaded directly using the Hugging Face library: Citation If you use our models for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows (to be updated): Acknowledgments We gratefully acknowledge Google’s TensorFlow Research Cloud (TRC) program for providing us with free Cloud TPUs. <font size=2>Warning Because this model was trained on texts from social networks, it can unfortunately generate toxic outputs reflecting part of the training data.</font>
[]
[ "TAGS\n#transformers #pytorch #safetensors #bert #fill-mask #ar #autotrain_compatible #endpoints_compatible #region-us \n" ]
fill-mask
transformers
AIOX Lab and SI2M Lab INSEA have joined forces to offer researchers, industrialists, and the NLP (Natural Language Processing) community the first intelligent open-source system that understands the Moroccan dialectal language "Darija". **DarijaBERT** is the first BERT model for the Moroccan Arabic dialect called “Darija”. It is based on the same architecture as BERT-base, but without the Next Sentence Prediction (NSP) objective. This model was trained on a total of ~3 million sequences of the Darija dialect, representing 691MB of text or a total of ~100M tokens. The model was trained on a dataset built from three different sources: * Stories written in Darija scraped from a dedicated website * YouTube comments from 40 different Moroccan channels * Tweets crawled based on a list of Darija keywords. More details about DarijaBERT are available in the dedicated GitHub [repository](https://github.com/AIOXLABS/DBert) **Loading the model** The model can be loaded directly using the Hugging Face library: ```python from transformers import AutoTokenizer, AutoModel DarijaBERT_tokenizer = AutoTokenizer.from_pretrained("SI2M-Lab/DarijaBERT") DarijaBert_model = AutoModel.from_pretrained("SI2M-Lab/DarijaBERT") ``` **Citation** If you use our models for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows (to be updated): ``` @article{gaanoun2023darijabert, title={Darijabert: a Step Forward in Nlp for the Written Moroccan Dialect}, author={Gaanoun, Kamel and Naira, Abdou Mohamed and Allak, Anass and Benelallam, Imade}, year={2023} } ``` **Acknowledgments** We gratefully acknowledge Google’s TensorFlow Research Cloud (TRC) program for providing us with free Cloud TPUs.
{"language": "ar", "widget": [{"text": " \u062c\u0627\u0628 \u0644\u064a\u0627 [MASK] ."}, {"text": "\u0645\u0634\u064a\u062a \u0646\u062c\u064a\u0628[MASK] \u0641\u0627\u0644\u0641\u0631\u0645\u0627\u0633\u064a\u0627\u0646 ."}]}
SI2M-Lab/DarijaBERT
null
[ "transformers", "pytorch", "bert", "fill-mask", "ar", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ar" ]
TAGS #transformers #pytorch #bert #fill-mask #ar #autotrain_compatible #endpoints_compatible #region-us
AIOX Lab and SI2M Lab INSEA have joined forces to offer researchers, industrialists, and the NLP (Natural Language Processing) community the first intelligent open-source system that understands the Moroccan dialectal language "Darija". DarijaBERT is the first BERT model for the Moroccan Arabic dialect called “Darija”. It is based on the same architecture as BERT-base, but without the Next Sentence Prediction (NSP) objective. This model was trained on a total of ~3 million sequences of the Darija dialect, representing 691MB of text or a total of ~100M tokens. The model was trained on a dataset built from three different sources: * Stories written in Darija scraped from a dedicated website * YouTube comments from 40 different Moroccan channels * Tweets crawled based on a list of Darija keywords. More details about DarijaBERT are available in the dedicated GitHub repository Loading the model The model can be loaded directly using the Hugging Face library: Citation If you use our models for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows (to be updated): Acknowledgments We gratefully acknowledge Google’s TensorFlow Research Cloud (TRC) program for providing us with free Cloud TPUs.
[]
[ "TAGS\n#transformers #pytorch #bert #fill-mask #ar #autotrain_compatible #endpoints_compatible #region-us \n" ]
text2text-generation
transformers
# MArSum: Moroccan Articles Summarization dataset - [Description](#description) - [Dataset](#dataset) - [Citation](#citation) - [License](#license) ## Description This dataset contains **19,806** news articles written in Moroccan Arabic dialect along with their titles. The articles were crawled from the [Goud.ma](http://www.goud.ma) website between 01/01/2018 and 12/31/2020. The articles are written mainly in Moroccan Arabic dialect (Darija) but some of them contain Modern Standard Arabic (MSA) passages. All the titles are written in Darija. The following table summarizes some statistics on the MArSum Dataset. <table class="tg"> <thead> <tr> <th class="tg-0pky" rowspan="2">Size</th> <th class="tg-0pky" colspan="3">Titles length</th> <th class="tg-0pky" colspan="3">Articles length</th> </tr> <tr> <th class="tg-lqy6">Min.</th> <th class="tg-lqy6">Max.</th> <th class="tg-lqy6">Avg.</th> <th class="tg-lqy6">Min.</th> <th class="tg-lqy6">Max.</th> <th class="tg-0lax">Avg.</th> </tr> </thead> <tbody> <tr> <td class="tg-dvpl">19,806</td> <td class="tg-dvpl">2</td> <td class="tg-dvpl">74</td> <td class="tg-dvpl">14.6</td> <td class="tg-dvpl">30</td> <td class="tg-dvpl">2964</td> <td class="tg-0pky">140.7</td> </tr> </tbody> </table> The following figure describes the creation process of MArSum: ![alt text](MArSum_schema_Color1.png) You may refer to our paper, cited below, for more details on this process. ## Dataset The dataset is split into Train/Test subsets using a 90/10 split strategy. Both subsets are available for direct [download](https://github.com/KamelGaanoun/MoroccanSummarization). ## Citation Please cite the following paper if you decide to use the dataset: Gaanoun, K., Naira, A. M., Allak, A., & Benelallam, I. (2022). Automatic Text Summarization for Moroccan Arabic Dialect Using an Artificial Intelligence Approach. In International Conference on Business Intelligence (pp. 158-177). Springer, Cham. ## License The dataset is distributed under the CC BY 4.0 license.
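The card describes the dataset but gives no usage snippet for the summarization model itself; a minimal sketch (the generation settings are illustrative assumptions, and `article` stands in for a Darija news article such as the one in the card's widget):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Kamel/t5-darija-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("Kamel/t5-darija-summarization")

article = "..."  # a Darija news article to summarize
inputs = tokenizer(article, return_tensors="pt", truncation=True)
summary_ids = model.generate(inputs.input_ids, num_beams=4, max_length=80)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```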
{"language": "ar", "widget": [{"text": " \u0643\u0634\u0641 \u0627\u0644\u0645\u0644\u064a\u0627\u0631\u062f\u064a\u0631 \u0627\u0644\u0645\u064a\u0631\u064a\u0643\u0627\u0646\u064a \u0648\u0645\u0624\u0633\u0633 \u0634\u0631\u0643\u0629 \u201c\u0645\u0627\u064a\u0643\u0631\u0648\u0633\u0648\u0641\u062a\u201d\u060c \u0628\u064a\u0644 \u0643\u064e\u064a\u062a\u0633\u060c \u0628\u0644\u0644\u064a \u0645\u0627\u0639\u0646\u062f\u0648\u0634 \u062d\u062a\u0649 \u0641\u0644\u0648\u0633 \u0631\u0642\u0645\u064a\u0629\u060c \u0648\u0643\u064a\u0641\u0636\u0644 \u064a\u0633\u062a\u062b\u0645\u0631 \u0641\u0644\u0648\u0633\u0648 \u0641\u0627\u0644\u0623\u0634\u064a\u0627\u0621 \u0627\u0644\u0644\u064a \u0639\u0646\u062f\u0647\u0627 \u0642\u064a\u0645\u0629\u060c \u062d\u0633\u0628 \u0643\u0644\u0627\u0645\u0648. \u062c\u0631\u064a\u062f\u0629 \u201c\u0628\u0631\u064a\u0637\u0627\u0646\u064a\u0629 \u0642\u0627\u0644\u062a \u0623\u0646 \u062a\u0635\u0631\u064a\u062d\u0627\u062a \u0643\u064e\u064a\u062a\u0633 \u0639\u0644\u0649 \u0627\u0644\u0639\u0645\u0644\u0627\u062a \u0627\u0644\u0645\u0634\u0641\u0631\u0629 \u0643\u0627\u0646\u062a \u0628\u0645\u0646\u0627\u0633\u0628\u0629 \u062d\u062f\u062b \u201c\u0633\u0648\u0644\u0646\u064a \u0639\u0644\u0649 \u0623\u064a \u062d\u0627\u062c\u0629\u201d\u060c \u0627\u0644\u0644\u064a \u062a\u0646\u0638\u0645 \u0639\u0644\u0649 \u0645\u0648\u0642\u0639 \u201c\u0631\u064a\u062f\u064a\u062a\u201d \u0627\u0644\u0634\u0647\u064a\u0631.\u0628\u064a\u0644 \u0643\u064e\u064a\u062a\u0633 \u0627\u0644\u0644\u064a \u0648\u0627\u0635\u0644\u0629 \u0644\u0627\u0641\u0648\u0631\u062a\u064a\u0646 \u062f\u064a\u0627\u0644\u0648 \u0644116 \u0645\u0644\u064a\u0627\u0631 \u062f\u0648\u0644\u0627\u0631\u060c \u0648\u0647\u0648 \u0631\u0627\u0628\u0639 \u0623\u063a\u0646\u0649 \u0631\u062c\u0644 \u0641\u0627\u0644\u0639\u0627\u0644\u0645\u060c \u062c\u0627\u062a \u062a\u0635\u0631\u064a\u062d\u0627\u062a\u0648 \u0628\u0627\u0644\u062a\u0632\u0627\u0645\u0646 \u0645\u0639 \u062e\u0633\u0627\u0631\u0629 \u0627\u0644\u0639\u0645\u0644\u0627\u062a \u0627\u0644\u0631\u0642\u0645\u064a\u0629 \u0644\u062a\u0631\u064a\u0644\u064a\u0648\u0646 \u062f\u0648\u0644\u0627\u0631 \u0645\u0646 \u0642\u064a\u0645\u062a\u0647\u0627 \u0641\u0639\u0627\u0645 2022\u060c \u0648\u0636\u0627\u0639\u062a \u0641\u062d\u0648\u0627\u0644\u064a 200 \u0645\u0644\u064a\u0627\u0631 \u062f\u0648\u0644\u0627\u0631 \u0645\u0646 \u0642\u064a\u0645\u062a\u0647\u0627 \u064124 \u0633\u0627\u0639\u0629 \u0641\u0642\u0637 \u0641\u0648\u0642\u062a \u0633\u0627\u0628\u0642 \u0645\u0646 \u0647\u0630\u0627 \u0627\u0644\u0634\u0647\u0631."}]}
Kamel/t5-darija-summarization
null
[ "transformers", "pytorch", "t5", "text2text-generation", "ar", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ar" ]
TAGS #transformers #pytorch #t5 #text2text-generation #ar #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# MArSum: Moroccan Articles Summarization dataset - Description - Dataset - Citation - License ## Description This dataset contains 19,806 news articles written in Moroccan Arabic dialect along with their titles. The articles were crawled from the URL website between 01/01/2018 and 12/31/2020. The articles are written mainly in Moroccan Arabic dialect (Darija) but some of them contain Modern Standard Arabic (MSA) passages. All the titles are written in Darija. The following table summarizes some statistics on the MArSum Dataset. <table class="tg"> <thead> <tr> <th class="tg-0pky" rowspan="2">Size</th> <th class="tg-0pky" colspan="3">Titles length</th> <th class="tg-0pky" colspan="3">Articles length</th> </tr> <tr> <th class="tg-lqy6">Min.</th> <th class="tg-lqy6">Max.</th> <th class="tg-lqy6">Avg.</th> <th class="tg-lqy6">Min.</th> <th class="tg-lqy6">Max.</th> <th class="tg-0lax">Avg.</th> </tr> </thead> <tbody> <tr> <td class="tg-dvpl">19,806</td> <td class="tg-dvpl">2</td> <td class="tg-dvpl">74</td> <td class="tg-dvpl">14.6</td> <td class="tg-dvpl">30</td> <td class="tg-dvpl">2964</td> <td class="tg-0pky">140.7</td> </tr> </tbody> </table> The following figure describes the creation process of MArSum: !alt text You may refer to our paper, cited below, for more details on this process. ## Dataset The dataset is split into Train/Test subsets using a 90/10 split strategy. Both subsets are available for direct download. Please cite the following paper if you decide to use the dataset: Gaanoun, K., Naira, A. M., Allak, A., & Benelallam, I. (2022). Automatic Text Summarization for Moroccan Arabic Dialect Using an Artificial Intelligence Approach. In International Conference on Business Intelligence (pp. 158-177). Springer, Cham. ## License The dataset is distributed under the CC BY 4.0 license.
[ "# MArSum: Moroccan Articles Summarization dataset\n- Description\n- Dataset\n- Citation\n- License", "## Description\n\nThis dataset contains 19,806 news articles written in Moroccan Arabic dialect along with their titles. The articles were crawled from the URL website between 01/01/2018 and 12/31/2020. \nThe articles are written mainly in Moroccan Arabic dialect (Darija) but some of them contain Modern Standard Arabic (MSA) passages. All the titles are written in Darija. \nThe following table summarizes some statistics on the MArSum Dataset.\n\n\n<table class=\"tg\">\n<thead>\n <tr>\n <th class=\"tg-0pky\" rowspan=\"2\">Size</th>\n <th class=\"tg-0pky\" colspan=\"3\">Titles length</th>\n <th class=\"tg-0pky\" colspan=\"3\">Articles length</th>\n </tr>\n <tr>\n <th class=\"tg-lqy6\">Min.</th>\n <th class=\"tg-lqy6\">Max.</th>\n <th class=\"tg-lqy6\">Avg.</th>\n <th class=\"tg-lqy6\">Min.</th>\n <th class=\"tg-lqy6\">Max.</th>\n <th class=\"tg-0lax\">Avg.</th>\n </tr>\n</thead>\n<tbody>\n <tr>\n <td class=\"tg-dvpl\">19,806</td>\n <td class=\"tg-dvpl\">2</td>\n <td class=\"tg-dvpl\">74</td>\n <td class=\"tg-dvpl\">14.6</td>\n <td class=\"tg-dvpl\">30</td>\n <td class=\"tg-dvpl\">2964</td>\n <td class=\"tg-0pky\">140.7</td>\n </tr>\n</tbody>\n</table>\n\nThe following figure describes the creation process of MArSum:\n\n!alt text\n\nYou may refer to our paper, cited below, for more details on this process.", "## Dataset\n\nThe dataset is split into Train/Test subsets using a 90/10 split strategy. Both subsets are available for direct download.\n \nPlease cite the following paper if you decide to use the dataset:\n\n Gaanoun, K., Naira, A. M., Allak, A., & Benelallam, I. (2022). Automatic Text Summarization for Moroccan Arabic Dialect\n Using an Artificial Intelligence Approach. In International Conference on Business Intelligence (pp. 158-177). Springer, Cham.", "## License\nThe dataset is distributed under the CC BY 4.0 license." ]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #ar #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# MArSum: Moroccan Articles Summarization dataset\n- Description\n- Dataset\n- Citation\n- License", "## Description\n\nThis dataset contains 19,806 news articles written in Moroccan Arabic dialect along with their titles. The articles were crawled from the URL website between 01/01/2018 and 12/31/2020. \nThe articles are written mainly in Moroccan Arabic dialect (Darija) but some of them contain Modern Standard Arabic (MSA) passages. All the titles are written in Darija. \nThe following table summarizes some statistics on the MArSum Dataset.\n\n\n<table class=\"tg\">\n<thead>\n <tr>\n <th class=\"tg-0pky\" rowspan=\"2\">Size</th>\n <th class=\"tg-0pky\" colspan=\"3\">Titles length</th>\n <th class=\"tg-0pky\" colspan=\"3\">Articles length</th>\n </tr>\n <tr>\n <th class=\"tg-lqy6\">Min.</th>\n <th class=\"tg-lqy6\">Max.</th>\n <th class=\"tg-lqy6\">Avg.</th>\n <th class=\"tg-lqy6\">Min.</th>\n <th class=\"tg-lqy6\">Max.</th>\n <th class=\"tg-0lax\">Avg.</th>\n </tr>\n</thead>\n<tbody>\n <tr>\n <td class=\"tg-dvpl\">19,806</td>\n <td class=\"tg-dvpl\">2</td>\n <td class=\"tg-dvpl\">74</td>\n <td class=\"tg-dvpl\">14.6</td>\n <td class=\"tg-dvpl\">30</td>\n <td class=\"tg-dvpl\">2964</td>\n <td class=\"tg-0pky\">140.7</td>\n </tr>\n</tbody>\n</table>\n\nThe following figure describes the creation process of MArSum:\n\n!alt text\n\nYou may refer to our paper, cited below, for more details on this process.", "## Dataset\n\nThe dataset is split into Train/Test subsets using a 90/10 split strategy. Both subsets are available for direct download.\n \nPlease cite the following paper if you decide to use the dataset:\n\n Gaanoun, K., Naira, A. M., Allak, A., & Benelallam, I. (2022). Automatic Text Summarization for Moroccan Arabic Dialect\n Using an Artificial Intelligence Approach. In International Conference on Business Intelligence (pp. 158-177). Springer, Cham.", "## License\nThe dataset is distributed under the CC BY 4.0 license." ]
text-classification
transformers
samyarn-bert-base-multilingual-cased kao
{}
Kao/samyarn-bert-base-multilingual-cased
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
samyarn-bert-base-multilingual-cased kao
[]
[ "TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-generation
transformers
# randombot DialoGPT Model
{"tags": ["conversational"]}
Kargan/DialoGPT-small-randombot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# randombot DialoGPT Model
[ "# randombot DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# randombot DialoGPT Model" ]
null
null
this is a test. How do you write a paper?
{}
Katiejdarby/test1
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
this is a test. How do you write a paper?
[]
[ "TAGS\n#region-us \n" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8229 - Accuracy: 0.54 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 7 | 0.7709 | 0.74 | | No log | 2.0 | 14 | 0.7048 | 0.72 | | No log | 3.0 | 21 | 0.8728 | 0.46 | | No log | 4.0 | 28 | 0.7849 | 0.64 | | No log | 5.0 | 35 | 0.8229 | 0.54 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
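For reproduction, the hyperparameters listed above map onto `TrainingArguments` roughly as follows (a sketch: `output_dir` is an assumption, and the Adam betas/epsilon shown in the card are the library defaults):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08 (the library defaults).
)
```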
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned", "results": []}]}
Katsiaryna/distilbert-base-uncased-finetuned
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned ================================= This model is a fine-tuned version of distilbert-base-uncased on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.8229 * Accuracy: 0.54 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.12.5 * Pytorch 1.10.0+cu111 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned_9th This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2826 - Accuracy: 0.4462 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2357 | 1.0 | 569 | 0.2277 | 0.3474 | | 0.2237 | 2.0 | 1138 | 0.2316 | 0.3474 | | 0.1847 | 3.0 | 1707 | 0.2456 | 0.3712 | | 0.1302 | 4.0 | 2276 | 0.2763 | 0.4602 | | 0.0863 | 5.0 | 2845 | 0.2826 | 0.4462 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned_9th", "results": []}]}
Katsiaryna/distilbert-base-uncased-finetuned_9th
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned\_9th ====================================== This model is a fine-tuned version of distilbert-base-uncased on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.2826 * Accuracy: 0.4462 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.12.5 * Pytorch 1.10.0+cu111 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
text-generation
transformers
# Joshua Dialogue Model
{"tags": ["conversational"]}
KaydenSou/Joshua
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Joshua Dialogue Model
[ "# Joshua Dialogue Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Joshua Dialogue Model" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-complaints-product This model was trained on the [CFBP](https://www.consumerfinance.gov/data-research/consumer-complaints/) dataset, also made available on the Hugging Face Datasets library. This model predicts the type of financial complaint based on the text provided. ## Model description A DistilBert Text Classification Model, with 18 possible classes to determine the nature of a financial customer complaint. ## Intended uses & limitations This model is used as part of a demonstration for E2E Machine Learning Projects focused on Contact Centre Automation: - **Infrastructure:** Terraform - **ML Ops:** Hugging Face (Datasets, Hub, Transformers) - **ML Explainability:** SHAP - **Cloud:** AWS - Model Hosting: Lambda - DB Backend: DynamoDB - Orchestration: Step-Functions - UI Hosting: EC2 - Routing: API Gateway - **UI:** Budibase ## Training and evaluation data consumer_complaints dataset ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Framework versions - Transformers 4.16.1 - Pytorch 1.10.0+cu111 - Datasets 1.18.2 - Tokenizers 0.11.0
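A minimal inference sketch (the sample complaint text is an illustrative assumption):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Kayvane/distilbert-complaints-product")
print(classifier("I was charged twice for my mortgage payment and cannot reach anyone to fix it."))
```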
{"tags": ["generated_from_trainer"], "datasets": ["consumer_complaints"], "model-index": [{"name": "distilbert-complaints-product", "results": []}]}
Kayvane/distilbert-complaints-product
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:consumer_complaints", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-consumer_complaints #autotrain_compatible #endpoints_compatible #region-us
# distilbert-complaints-product This model was trained on the CFBP dataset, also made available on the Hugging Face Datasets library. This model predicts the type of financial complaint based on the text provided. ## Model description A DistilBert Text Classification Model, with 18 possible classes to determine the nature of a financial customer complaint. ## Intended uses & limitations This model is used as part of a demonstration for E2E Machine Learning Projects focused on Contact Centre Automation: - Infrastructure: Terraform - ML Ops: Hugging Face (Datasets, Hub, Transformers) - ML Explainability: SHAP - Cloud: AWS - Model Hosting: Lambda - DB Backend: DynamoDB - Orchestration: Step-Functions - UI Hosting: EC2 - Routing: API Gateway - UI: Budibase ## Training and evaluation data consumer_complaints dataset ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Framework versions - Transformers 4.16.1 - Pytorch 1.10.0+cu111 - Datasets 1.18.2 - Tokenizers 0.11.0
[ "# distilbert-complaints-product\n\nThis model was trained on the CFBP dataset, also made available on the Hugging Face Datasets library. This model predicts the type of financial complaint based on the text provided.", "## Model description\n\nA DistilBert Text Classification Model, with 18 possible classes to determine the nature of a financial customer complaint.", "## Intended uses & limitations\n\nThis model is used as part of a demonstration for E2E Machine Learning Projects focused on Contact Centre Automation: \n\n- Infrastructure: Terraform\n- ML Ops: Hugging Face (Datasets, Hub, Transformers) \n- ML Explainability: SHAP \n- Cloud: AWS \n - Model Hosting: Lambda \n - DB Backend: DynamoDB\n - Orchestration: Step-Functions\n - UI Hosting: EC2\n - Routing: API Gateway \n- UI: Budibase", "## Training and evaluation data\n\nconsumer_complaints dataset", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 32\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 3", "### Framework versions\n\n- Transformers 4.16.1\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.2\n- Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-consumer_complaints #autotrain_compatible #endpoints_compatible #region-us \n", "# distilbert-complaints-product\n\nThis model was trained on the CFBP dataset, also made available on the Hugging Face Datasets library. This model predicts the type of financial complaint based on the text provided.", "## Model description\n\nA DistilBert Text Classification Model, with 18 possible classes to determine the nature of a financial customer complaint.", "## Intended uses & limitations\n\nThis model is used as part of a demonstration for E2E Machine Learning Projects focused on Contact Centre Automation: \n\n- Infrastructure: Terraform\n- ML Ops: Hugging Face (Datasets, Hub, Transformers) \n- ML Explainability: SHAP \n- Cloud: AWS \n - Model Hosting: Lambda \n - DB Backend: DynamoDB\n - Orchestration: Step-Functions\n - UI Hosting: EC2\n - Routing: API Gateway \n- UI: Budibase", "## Training and evaluation data\n\nconsumer_complaints dataset", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 32\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 3", "### Framework versions\n\n- Transformers 4.16.1\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.2\n- Tokenizers 0.11.0" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-undersampled-noweights This model was trained from scratch on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 33 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
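As a sketch, the hyperparameters listed above map onto Hugging Face `TrainingArguments` roughly as follows; the output directory is a placeholder, and the Adam betas/epsilon quoted in the card are the library defaults, so they need no explicit setting.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-undersampled-noweights",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=33,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=5,
    fp16=True,  # "mixed_precision_training: Native AMP"
    # Adam betas=(0.9,0.999) and epsilon=1e-08 are the defaults.
)
```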
{"tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-undersampled-noweights", "results": []}]}
Kayvane/distilbert-undersampled-noweights
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #distilbert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
# distilbert-undersampled-noweights This model was trained from scratch on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 33 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
[ "# distilbert-undersampled-noweights\n\nThis model was trained from scratch on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 33\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 5\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "# distilbert-undersampled-noweights\n\nThis model was trained from scratch on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 33\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 5\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-undersampled This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0826 - Accuracy: 0.9811 - F1: 0.9810 - Recall: 0.9811 - Precision: 0.9812 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 33 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------:|:---------:| | 0.0959 | 0.2 | 2000 | 0.0999 | 0.9651 | 0.9628 | 0.9651 | 0.9655 | | 0.0618 | 0.41 | 4000 | 0.0886 | 0.9717 | 0.9717 | 0.9717 | 0.9731 | | 0.159 | 0.61 | 6000 | 0.0884 | 0.9719 | 0.9720 | 0.9719 | 0.9728 | | 0.0513 | 0.81 | 8000 | 0.0785 | 0.9782 | 0.9782 | 0.9782 | 0.9788 | | 0.0219 | 1.01 | 10000 | 0.0680 | 0.9779 | 0.9779 | 0.9779 | 0.9783 | | 0.036 | 1.22 | 12000 | 0.0745 | 0.9787 | 0.9787 | 0.9787 | 0.9792 | | 0.0892 | 1.42 | 14000 | 0.0675 | 0.9786 | 0.9786 | 0.9786 | 0.9789 | | 0.0214 | 1.62 | 16000 | 0.0760 | 0.9799 | 0.9798 | 0.9799 | 0.9801 | | 0.0882 | 1.83 | 18000 | 0.0800 | 0.9800 | 0.9800 | 0.9800 | 0.9802 | | 0.0234 | 2.03 | 20000 | 0.0720 | 0.9813 | 0.9813 | 0.9813 | 0.9815 | | 0.0132 | 2.23 | 22000 | 0.0738 | 0.9803 | 0.9803 | 0.9803 | 0.9805 | | 0.0136 | 2.43 | 24000 | 0.0847 | 0.9804 | 0.9804 | 0.9804 | 0.9806 | | 0.0119 | 2.64 | 26000 | 0.0826 | 0.9811 | 0.9810 | 0.9811 | 0.9812 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
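The four columns in the results table can be produced by a `Trainer` `compute_metrics` callback along these lines; the weighted averaging is an assumption, since the card does not state how the multi-class scores were aggregated.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    # Weighted averaging assumed for the multi-class precision/recall/F1.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted"
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "recall": recall,
        "precision": precision,
    }
```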
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "recall", "precision"], "model-index": [{"name": "distilbert-undersampled", "results": []}]}
Kayvane/distilbert-undersampled
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilbert-undersampled ======================= This model is a fine-tuned version of distilbert-base-uncased on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.0826 * Accuracy: 0.9811 * F1: 0.9810 * Recall: 0.9811 * Precision: 0.9812 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 3e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 33 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 100 * num\_epochs: 5 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 33\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 33\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
text-classification
transformers
# Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 13522454 ## Validation Metrics - Loss: 0.31450966000556946 - Accuracy: 0.8461538461538461 - Precision: 0.8181818181818182 - Recall: 0.782608695652174 - AUC: 0.9369259032455604 - F1: 0.8 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Kceilord/autonlp-tc-13522454 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Kceilord/autonlp-tc-13522454", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Kceilord/autonlp-tc-13522454", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
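Continuing the Python snippet above, a short sketch for turning the raw `outputs` into a predicted label, assuming the checkpoint's `id2label` mapping is populated:

```python
import torch

# Convert logits from the forward pass above into a label and probability.
probs = torch.softmax(outputs.logits, dim=-1)
pred_id = int(probs.argmax(dim=-1))
print(model.config.id2label[pred_id], float(probs[0, pred_id]))
```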
{"language": "en", "tags": "autonlp", "datasets": ["Kceilord/autonlp-data-tc"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]}
Kceilord/autonlp-tc-13522454
null
[ "transformers", "pytorch", "distilbert", "text-classification", "autonlp", "en", "dataset:Kceilord/autonlp-data-tc", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #distilbert #text-classification #autonlp #en #dataset-Kceilord/autonlp-data-tc #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 13522454 ## Validation Metrics - Loss: 0.31450966000556946 - Accuracy: 0.8461538461538461 - Precision: 0.8181818181818182 - Recall: 0.782608695652174 - AUC: 0.9369259032455604 - F1: 0.8 ## Usage You can use cURL to access this model: Or Python API:
[ "# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 13522454", "## Validation Metrics\n\n- Loss: 0.31450966000556946\n- Accuracy: 0.8461538461538461\n- Precision: 0.8181818181818182\n- Recall: 0.782608695652174\n- AUC: 0.9369259032455604\n- F1: 0.8", "## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:" ]
[ "TAGS\n#transformers #pytorch #distilbert #text-classification #autonlp #en #dataset-Kceilord/autonlp-data-tc #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 13522454", "## Validation Metrics\n\n- Loss: 0.31450966000556946\n- Accuracy: 0.8461538461538461\n- Precision: 0.8181818181818182\n- Recall: 0.782608695652174\n- AUC: 0.9369259032455604\n- F1: 0.8", "## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:" ]
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
Keen/DialoGPT-small-potter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
# Rick3 DialoGPT Model
{"tags": ["conversational"]}
KekLord/DialoGPT-small-rick3
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Rick3 DialoGPT Model
[ "# Rick3 DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Rick3 DialoGPT Model" ]
text-generation
transformers
# Siesta
{"tags": ["conversational"]}
Keqing/Keqing-Siesta
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Siesta
[ "# Siesta" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Siesta" ]
text-generation
transformers
# Spamton G. Spamton DialoGPT Model
{"tags": ["conversational"]}
Keqipig/DialoGPT-small-spamton
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Spamton G. Spamton DialoGPT Model
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # koelectra-sts-v0.4 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3368 - Pearson: 0.9303 - Spearmanr: 0.9287 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:| | 0.0345 | 1.0 | 730 | 0.3368 | 0.9303 | 0.9287 | | 0.0343 | 2.0 | 1460 | 0.3368 | 0.9303 | 0.9287 | | 0.0337 | 3.0 | 2190 | 0.3368 | 0.9303 | 0.9287 | | 0.0345 | 4.0 | 2920 | 0.3368 | 0.9303 | 0.9287 | | 0.0347 | 5.0 | 3650 | 0.3368 | 0.9303 | 0.9287 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.10.1+cu113 - Datasets 1.17.0 - Tokenizers 0.10.3
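A minimal scoring sketch for this STS model, assuming the standard single-logit regression head used for STS fine-tunes; the Korean sentence pair is illustrative only.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "Ketzu/koelectra-sts-v0.4"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# Score a sentence pair; higher = more similar (scale follows the training labels).
inputs = tokenizer("오늘 날씨가 좋다", "오늘은 날씨가 맑다", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```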
{"tags": ["generated_from_trainer"], "metrics": ["spearmanr"]}
Ketzu/koelectra-sts-v0.4
null
[ "transformers", "pytorch", "tensorboard", "electra", "text-classification", "generated_from_trainer", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #electra #text-classification #generated_from_trainer #model-index #autotrain_compatible #endpoints_compatible #region-us
koelectra-sts-v0.4 ================== This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.3368 * Pearson: 0.9303 * Spearmanr: 0.9287 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.10.0 * Pytorch 1.10.1+cu113 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #electra #text-classification #generated_from_trainer #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-finetuned-pubmed This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the pub_med_summarization_dataset dataset. It achieves the following results on the evaluation set: - Loss: 2.0277 - Rouge1: 9.3963 - Rouge2: 4.0473 - Rougel: 8.4526 - Rougelsum: 8.9659 - Gen Len: 20.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 2.3706 | 1.0 | 4000 | 2.1245 | 9.1644 | 3.8264 | 8.2223 | 8.718 | 20.0 | | 2.2246 | 2.0 | 8000 | 2.0811 | 9.023 | 3.7716 | 8.1453 | 8.5998 | 20.0 | | 2.1034 | 3.0 | 12000 | 2.0469 | 9.4412 | 4.0783 | 8.4949 | 8.9977 | 20.0 | | 2.0137 | 4.0 | 16000 | 2.0390 | 9.2261 | 3.9307 | 8.3154 | 8.7937 | 20.0 | | 1.9288 | 5.0 | 20000 | 2.0277 | 9.3963 | 4.0473 | 8.4526 | 8.9659 | 20.0 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
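A minimal usage sketch; the input below stands in for a full PubMed-style article, and the generation lengths are illustrative.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Kevincp560/bart-base-finetuned-pubmed")

article = (
    "Background: We conducted a randomized controlled trial to assess whether "
    "early mobilisation after hip surgery reduces length of hospital stay..."
)
print(summarizer(article, max_length=64, min_length=16, do_sample=False))
```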
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["pub_med_summarization_dataset"], "metrics": ["rouge"], "model-index": [{"name": "bart-base-finetuned-pubmed", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "pub_med_summarization_dataset", "type": "pub_med_summarization_dataset", "args": "document"}, "metrics": [{"type": "rouge", "value": 9.3963, "name": "Rouge1"}]}]}]}
Kevincp560/bart-base-finetuned-pubmed
null
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "dataset:pub_med_summarization_dataset", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #dataset-pub_med_summarization_dataset #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
bart-base-finetuned-pubmed ========================== This model is a fine-tuned version of facebook/bart-base on the pub\_med\_summarization\_dataset dataset. It achieves the following results on the evaluation set: * Loss: 2.0277 * Rouge1: 9.3963 * Rouge2: 4.0473 * Rougel: 8.4526 * Rougelsum: 8.9659 * Gen Len: 20.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 2 * eval\_batch\_size: 2 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.11.6
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #dataset-pub_med_summarization_dataset #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-finetuned-pubmed This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the pub_med_summarization_dataset dataset. It achieves the following results on the evaluation set: - Loss: 1.8416 - Rouge1: 40.4866 - Rouge2: 16.7472 - Rougel: 24.9831 - Rougelsum: 36.4002 - Gen Len: 142.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | 1.932 | 1.0 | 4000 | 1.8110 | 38.1151 | 15.2255 | 23.4286 | 34.2521 | 141.8905 | | 1.7001 | 2.0 | 8000 | 1.7790 | 39.8217 | 16.3042 | 24.649 | 35.831 | 142.0 | | 1.5 | 3.0 | 12000 | 1.7971 | 40.6108 | 17.0446 | 25.1977 | 36.5556 | 141.9865 | | 1.3316 | 4.0 | 16000 | 1.8106 | 40.0466 | 16.4851 | 24.7094 | 36.0998 | 141.9335 | | 1.1996 | 5.0 | 20000 | 1.8416 | 40.4866 | 16.7472 | 24.9831 | 36.4002 | 142.0 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
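A sketch of the ROUGE computation behind the table above, using `datasets.load_metric` as was current for the Datasets version listed; the prediction/reference strings are toy examples.

```python
from datasets import load_metric

rouge = load_metric("rouge")
predictions = ["the trial showed the treatment reduced mortality"]
references = ["the randomized controlled trial showed reduced mortality in the treatment arm"]
result = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
# Report mid F-measures on a 0-100 scale, as in the table above.
print({k: round(v.mid.fmeasure * 100, 4) for k, v in result.items()})
```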
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["pub_med_summarization_dataset"], "metrics": ["rouge"], "model-index": [{"name": "bart-large-cnn-finetuned-pubmed", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "pub_med_summarization_dataset", "type": "pub_med_summarization_dataset", "args": "document"}, "metrics": [{"type": "rouge", "value": 40.4866, "name": "Rouge1"}]}]}]}
Kevincp560/bart-large-cnn-finetuned-pubmed
null
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "dataset:pub_med_summarization_dataset", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #dataset-pub_med_summarization_dataset #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
bart-large-cnn-finetuned-pubmed =============================== This model is a fine-tuned version of facebook/bart-large-cnn on the pub\_med\_summarization\_dataset dataset. It achieves the following results on the evaluation set: * Loss: 1.8416 * Rouge1: 40.4866 * Rouge2: 16.7472 * Rougel: 24.9831 * Rougelsum: 36.4002 * Gen Len: 142.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 2 * eval\_batch\_size: 2 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.11.6
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #dataset-pub_med_summarization_dataset #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-finetuned-pubmed This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the pub_med_summarization_dataset dataset. It achieves the following results on the evaluation set: - Loss: 1.8135 - Rouge1: 10.946 - Rouge2: 5.0933 - Rougel: 9.5608 - Rougelsum: 10.4259 - Gen Len: 19.0495 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:| | 2.0861 | 1.0 | 4000 | 1.8909 | 8.7344 | 3.6919 | 7.8804 | 8.3305 | 20.0 | | 1.8996 | 2.0 | 8000 | 1.8261 | 10.2124 | 4.6212 | 8.9842 | 9.7417 | 17.632 | | 1.7459 | 3.0 | 12000 | 1.8160 | 9.4933 | 4.4117 | 8.3977 | 9.0758 | 16.4775 | | 1.6258 | 4.0 | 16000 | 1.8136 | 10.8248 | 5.0335 | 9.4286 | 10.3123 | 18.724 | | 1.5214 | 5.0 | 20000 | 1.8135 | 10.946 | 5.0933 | 9.5608 | 10.4259 | 19.0495 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
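The short Gen Len here (about 19–20 tokens, versus 142 for the CNN-initialised variant above) most plausibly reflects this checkpoint's default generation settings; a sketch of overriding them at inference time, with illustrative values:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "Kevincp560/bart-large-finetuned-pubmed"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

text = "Background: early mobilisation after hip surgery may reduce hospital stay."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
# Override the default generation length (the ~20-token Gen Len above).
ids = model.generate(**inputs, num_beams=4, min_length=40, max_length=142)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```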
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["pub_med_summarization_dataset"], "metrics": ["rouge"], "model-index": [{"name": "bart-large-finetuned-pubmed", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "pub_med_summarization_dataset", "type": "pub_med_summarization_dataset", "args": "document"}, "metrics": [{"type": "rouge", "value": 10.946, "name": "Rouge1"}]}]}]}
Kevincp560/bart-large-finetuned-pubmed
null
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "dataset:pub_med_summarization_dataset", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #dataset-pub_med_summarization_dataset #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
bart-large-finetuned-pubmed =========================== This model is a fine-tuned version of facebook/bart-large on the pub\_med\_summarization\_dataset dataset. It achieves the following results on the evaluation set: * Loss: 1.8135 * Rouge1: 10.946 * Rouge2: 5.0933 * Rougel: 9.5608 * Rougelsum: 10.4259 * Gen Len: 19.0495 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 2 * eval\_batch\_size: 2 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.11.6
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #dataset-pub_med_summarization_dataset #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6" ]
text-generation
transformers
# Model for a chatbot to talk like Tony Stark
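A single-turn chat sketch, assuming the standard DialoGPT conversation format of EOS-terminated turns; the prompt and generation settings are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("KhanAdeeb/model-tony-stark")
model = AutoModelForCausalLM.from_pretrained("KhanAdeeb/model-tony-stark")

# One DialoGPT-style turn: user text terminated by EOS, reply decoded after it.
user_input = "Who are you?"
input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```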
{"tags": ["conversational"]}
KhanAdeeb/model-tony-stark
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model for a chatbot to talk like Tony Stark
[ "# Model for chat bot to talk like tony stark" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model for chat bot to talk like tony stark" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-multilingual-cased-finetuned-squad This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4919 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1782 | 1.0 | 579 | 0.5258 | | 0.4938 | 2.0 | 1158 | 0.4639 | | 0.32 | 3.0 | 1737 | 0.4919 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
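A minimal extractive-QA sketch for this checkpoint; the question and context are toy inputs.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Khanh/bert-base-multilingual-cased-finetuned-squad",
)
result = qa(
    question="What architecture was fine-tuned?",
    context="A multilingual BERT model was fine-tuned on a SQuAD-style dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```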
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-multilingual-cased-finetuned-squad", "results": []}]}
Khanh/bert-base-multilingual-cased-finetuned-squad
null
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
bert-base-multilingual-cased-finetuned-squad ============================================ This model is a fine-tuned version of bert-base-multilingual-cased on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.4919 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-multilingual-cased-finetuned-viquad This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9815 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 65 | 2.5534 | | No log | 2.0 | 130 | 2.1165 | | No log | 3.0 | 195 | 1.9815 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-multilingual-cased-finetuned-viquad", "results": []}]}
Khanh/bert-base-multilingual-cased-finetuned-viquad
null
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
bert-base-multilingual-cased-finetuned-viquad ============================================= This model is a fine-tuned version of bert-base-multilingual-cased on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.9815 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-multilingual-cased-finetuned-squad This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6587 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.923 | 1.0 | 579 | 0.8439 | | 0.8479 | 2.0 | 1158 | 0.6784 | | 0.6148 | 3.0 | 1737 | 0.6587 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-multilingual-cased-finetuned-squad", "results": []}]}
Khanh/distilbert-base-multilingual-cased-finetuned-squad
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
distilbert-base-multilingual-cased-finetuned-squad ================================================== This model is a fine-tuned version of distilbert-base-multilingual-cased on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.6587 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-multilingual-cased-finetuned-viquad This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.4241 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 65 | 4.0975 | | No log | 2.0 | 130 | 3.9315 | | No log | 3.0 | 195 | 3.6742 | | No log | 4.0 | 260 | 3.4878 | | No log | 5.0 | 325 | 3.4241 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-multilingual-cased-finetuned-viquad", "results": []}]}
Khanh/distilbert-base-multilingual-cased-finetuned-viquad
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
distilbert-base-multilingual-cased-finetuned-viquad =================================================== This model is a fine-tuned version of distilbert-base-multilingual-cased on the None dataset. It achieves the following results on the evaluation set: * Loss: 3.4241 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-squad This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5539 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.7665 | 1.0 | 2295 | 0.5231 | | 0.5236 | 2.0 | 4590 | 0.5539 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "xlm-roberta-base-finetuned-squad", "results": []}]}
Khanh/xlm-roberta-base-finetuned-squad
null
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "question-answering", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #xlm-roberta #question-answering #generated_from_trainer #license-mit #endpoints_compatible #region-us
xlm-roberta-base-finetuned-squad ================================ This model is a fine-tuned version of xlm-roberta-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.5539 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #question-answering #generated_from_trainer #license-mit #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-viquad This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.3761 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 259 | 2.9945 | | 3.3665 | 2.0 | 518 | 2.3761 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "xlm-roberta-base-finetuned-viquad", "results": []}]}
Khanh/xlm-roberta-base-finetuned-viquad
null
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "question-answering", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #xlm-roberta #question-answering #generated_from_trainer #license-mit #endpoints_compatible #region-us
xlm-roberta-base-finetuned-viquad ================================= This model is a fine-tuned version of xlm-roberta-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 2.3761 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #question-answering #generated_from_trainer #license-mit #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
null
null
Vietnamese QA model based on a custom dataset.
{}
KhoiNXM/KhoiNXM_Vietnamese_QA
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
Vietnamese QA model based on a custom dataset.
[]
[ "TAGS\n#region-us \n" ]
text-classification
transformers
# CLOG Assessment generator model
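A minimal TensorFlow inference sketch for this checkpoint (the record's tags indicate a TF BERT classifier); the input sentence is illustrative and the `id2label` mapping is assumed to be populated.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "Khu1998/clog-assessment-model"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

# Classify one assessment-related input sentence.
inputs = tokenizer("Describe the learning objective to assess.", return_tensors="tf")
logits = model(**inputs).logits
pred_id = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[pred_id])
```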
{}
Khu1998/clog-assessment-model
null
[ "transformers", "tf", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #tf #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
# CLOG Assessment generator model
[ "# CLOG Assessment generator model" ]
[ "TAGS\n#transformers #tf #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n", "# CLOG Assessment generator model" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5327 - Matthews Correlation: 0.5233 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5314 | 1.0 | 535 | 0.4955 | 0.4270 | | 0.3545 | 2.0 | 1070 | 0.5327 | 0.5233 | | 0.2418 | 3.0 | 1605 | 0.6180 | 0.5132 | | 0.1722 | 4.0 | 2140 | 0.7344 | 0.5158 | | 0.1243 | 5.0 | 2675 | 0.8581 | 0.5196 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
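## How to use

A minimal inference sketch with the `text-classification` pipeline. The sentences are illustrative, and unless the saved config maps label names, the outputs show generic `LABEL_0`/`LABEL_1` for CoLA's unacceptable/acceptable classes:

```python
from transformers import pipeline

# Linguistic-acceptability classifier fine-tuned on GLUE CoLA
classifier = pipeline("text-classification", model="Kien/distilbert-base-uncased-finetuned-cola")

# One grammatical sentence and one scrambled one (both invented for illustration)
print(classifier("The book was read by the students."))
print(classifier("Books the read students was by."))
```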
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5232819075279987, "name": "Matthews Correlation"}]}]}]}
Kien/distilbert-base-uncased-finetuned-cola
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-cola ====================================== This model is a fine-tuned version of distilbert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.5327 * Matthews Correlation: 0.5233 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-cola

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1037
- Matthews Correlation: 0.9719

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.2094 | 1.0 | 525 | 0.1069 | 0.9607 |
| 0.0483 | 2.0 | 1050 | 0.0878 | 0.9719 |
| 0.0296 | 3.0 | 1575 | 0.1263 | 0.9664 |
| 0.0108 | 4.0 | 2100 | 0.1037 | 0.9719 |
| 0.0096 | 5.0 | 2625 | 0.1065 | 0.9719 |

### Framework versions

- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
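## How to use

A minimal sketch of direct (non-pipeline) inference, assuming the checkpoint id shown in this repository; since the card does not name the fine-tuning dataset, the meaning of each label index is left open:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "Kieran/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)
model.eval()

# Score a single illustrative sentence and print class probabilities
inputs = tokenizer("An example sentence to score.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(torch.softmax(logits, dim=-1))
```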
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["matthews_correlation"], "model_index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "metric": {"name": "Matthews Correlation", "type": "matthews_correlation", "value": 0.9719066462260881}}]}]}
Kieran/distilbert-base-uncased-finetuned-cola
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-cola
======================================

This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:

* Loss: 0.1037
* Matthews Correlation: 0.9719

Model description
-----------------

More information needed

Intended uses & limitations
---------------------------

More information needed

Training and evaluation data
----------------------------

More information needed

Training procedure
------------------

### Training hyperparameters

The following hyperparameters were used during training:

* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5

### Training results

### Framework versions

* Transformers 4.9.2
* Pytorch 1.9.0+cu102
* Datasets 1.11.0
* Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2224 - Accuracy: 0.9225 - F1: 0.9228 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.84 | 1.0 | 250 | 0.3133 | 0.909 | 0.9070 | | 0.2459 | 2.0 | 500 | 0.2224 | 0.9225 | 0.9228 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
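## How to use

A minimal inference sketch with the `text-classification` pipeline; the input sentence is illustrative and the displayed label depends on the mapping stored in the saved config (the `emotion` dataset has six classes: sadness, joy, love, anger, fear, surprise):

```python
from transformers import pipeline

# Emotion classifier fine-tuned on the `emotion` dataset
classifier = pipeline("text-classification", model="Kiran146/distilbert-base-uncased-finetuned-emotion")

# Returns the top-scoring label for the illustrative input
print(classifier("I can't wait to see you again!"))
```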
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9225, "name": "Accuracy"}, {"type": "f1", "value": 0.9227765339978083, "name": "F1"}]}]}]}
Kiran146/distilbert-base-uncased-finetuned-emotion
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-emotion ========================================= This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset. It achieves the following results on the evaluation set: * Loss: 0.2224 * Accuracy: 0.9225 * F1: 0.9228 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 64 * eval\_batch\_size: 64 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.9.1 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
null
null
This is my ReadMe.
{}
KiranM/someNewModel
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
This is my ReadMe.
[]
[ "TAGS\n#region-us \n" ]
text2text-generation
transformers
### 📝 Description

MBart for Russian summarization, fine-tuned for **dialogue** summarization.


This model was first fine-tuned by [Ilya Gusev](https://hf.co/IlyaGusev) on the [Gazeta dataset](https://huggingface.co/datasets/IlyaGusev/gazeta). We then **fine-tuned** that model on the [SamSum dataset](https://huggingface.co/datasets/samsum) **translated to Russian** using GoogleTranslateAPI

🤗 Moreover! We have implemented a **! telegram bot [@summarization_bot](https://t.me/summarization_bot) !** with the inference of this model. Add it to the chat and get summaries instead of dozens of spam messages! 🤗

### ❓ How to use with code

```python
from transformers import AutoTokenizer, MBartForConditionalGeneration

# Download model and tokenizer (AutoTokenizer resolves to the matching MBart tokenizer class)
model_name = "Kirili4ik/mbart_ruDialogSum"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = MBartForConditionalGeneration.from_pretrained(model_name)
model.eval()

article_text = "..."

input_ids = tokenizer(
    [article_text],
    max_length=600,
    padding="max_length",
    truncation=True,
    return_tensors="pt",
)["input_ids"]

output_ids = model.generate(
    input_ids=input_ids,
    top_k=0,
    num_beams=3,
    no_repeat_ngram_size=3
)[0]

summary = tokenizer.decode(output_ids, skip_special_tokens=True)
print(summary)
```
{"language": ["ru"], "license": "cc", "tags": ["mbart"], "datasets": ["IlyaGusev/gazeta", "samsum", "samsum_(translated_into_Russian)"], "inference": {"parameters": {"no_repeat_ngram_size": "4,", "num_beams": 5}}, "widget": [{"text": "\u0414\u0436\u0435\u0444\u0444: \u041c\u043e\u0433\u0443 \u043b\u0438 \u044f \u043e\u0431\u0443\u0447\u0438\u0442\u044c \u043c\u043e\u0434\u0435\u043b\u044c \ud83e\udd17 Transformers \u043d\u0430 Amazon SageMaker? \n\u0424\u0438\u043b\u0438\u043f\u043f: \u041a\u043e\u043d\u0435\u0447\u043d\u043e, \u0432\u044b \u043c\u043e\u0436\u0435\u0442\u0435 \u0438\u0441\u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u0442\u044c \u043d\u043e\u0432\u044b\u0439 \u043a\u043e\u043d\u0442\u0435\u0439\u043d\u0435\u0440 \u0434\u043b\u044f \u0433\u043b\u0443\u0431\u043e\u043a\u043e\u0433\u043e \u043e\u0431\u0443\u0447\u0435\u043d\u0438\u044f HuggingFace. \n\u0414\u0436\u0435\u0444\u0444: \u0425\u043e\u0440\u043e\u0448\u043e.\n\u0414\u0436\u0435\u0444\u0444: \u0438 \u043a\u0430\u043a \u044f \u043c\u043e\u0433\u0443 \u043d\u0430\u0447\u0430\u0442\u044c? \n\u0414\u0436\u0435\u0444\u0444: \u0433\u0434\u0435 \u044f \u043c\u043e\u0433\u0443 \u043d\u0430\u0439\u0442\u0438 \u0434\u043e\u043a\u0443\u043c\u0435\u043d\u0442\u0430\u0446\u0438\u044e? \n\u0424\u0438\u043b\u0438\u043f\u043f: \u043e\u043a, \u043e\u043a, \u0437\u0434\u0435\u0441\u044c \u043c\u043e\u0436\u043d\u043e \u043d\u0430\u0439\u0442\u0438 \u0432\u0441\u0435: https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face\n"}], "model-index": [{"name": "mbart_ruDialogSum", "results": [{"task": {"type": "abstractive-text-summarization", "name": "Abstractive Dialogue Summarization"}, "dataset": {"name": "SAMSum Corpus (translated to Russian)", "type": "samsum"}, "metrics": [{"type": "rogue-1", "value": 34.5, "name": "Validation ROGUE-1"}, {"type": "rogue-l", "value": 33, "name": "Validation ROGUE-L"}, {"type": "rogue-1", "value": 31, "name": "Test ROGUE-1"}, {"type": "rogue-l", "value": 28, "name": "Test ROGUE-L"}]}]}]}
Kirili4ik/mbart_ruDialogSum
null
[ "transformers", "pytorch", "mbart", "text2text-generation", "ru", "license:cc", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ru" ]
TAGS #transformers #pytorch #mbart #text2text-generation #ru #license-cc #model-index #autotrain_compatible #endpoints_compatible #region-us
### Description MBart for Russian summarization fine-tuned for dialogues summarization. This model was firstly fine-tuned by Ilya Gusev on Gazeta dataset. We have fine tuned that model on SamSum dataset translated to Russian using GoogleTranslateAPI Moreover! We have implemented a ! telegram bot @summarization_bot ! with the inference of this model. Add it to the chat and get summaries instead of dozens spam messages!   ### How to use with code
[ "### Description\n\nMBart for Russian summarization fine-tuned for dialogues summarization.\n\n\nThis model was firstly fine-tuned by Ilya Gusev on Gazeta dataset. We have fine tuned that model on SamSum dataset translated to Russian using GoogleTranslateAPI\n\n Moreover! We have implemented a ! telegram bot @summarization_bot ! with the inference of this model. Add it to the chat and get summaries instead of dozens spam messages!", "### How to use with code" ]
[ "TAGS\n#transformers #pytorch #mbart #text2text-generation #ru #license-cc #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Description\n\nMBart for Russian summarization fine-tuned for dialogues summarization.\n\n\nThis model was firstly fine-tuned by Ilya Gusev on Gazeta dataset. We have fine tuned that model on SamSum dataset translated to Russian using GoogleTranslateAPI\n\n Moreover! We have implemented a ! telegram bot @summarization_bot ! with the inference of this model. Add it to the chat and get summaries instead of dozens spam messages!", "### How to use with code" ]
text-generation
transformers
### 📝 Description

DialoGPT trained on the Russian language and fine-tuned on my Telegram chat.


This model was created by [sberbank-ai](https://hf.co/sberbank-ai) and trained on Russian forums (see [Grossmend's model](https://hf.co/Grossmend/rudialogpt3_medium_based_on_gpt2)). You can find info about how it has been trained on [habr](https://habr.com/ru/company/icl_services/blog/548244/) (in Russian). I have created a **simple pipeline** and **fine-tuned** that model on my own **exported telegram chat** (~30mb json). It is in fact very easy to get the data from telegram and fine-tune a model. Therefore, I made a **colab tutorial** for it: https://colab.research.google.com/drive/1fnAVURjyZRK9VQg1Co_-SKUQnRES8l9R?usp=sharing

⚠️ Due to the specifics of the data, the Hosted inference API may not work properly ⚠️

🤗 To try it, use my [Spaces demo](https://huggingface.co/spaces/Kirili4ik/chat-with-Kirill) 🤗

### ❓ How to use with code

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Download model and tokenizer
checkpoint = "Kirili4ik/ruDialoGpt3-medium-finetuned-telegram"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
model.eval()


# util function to get expected len after tokenizing
def get_length_param(text: str, tokenizer) -> str:
    tokens_count = len(tokenizer.encode(text))
    if tokens_count <= 15:
        len_param = '1'
    elif tokens_count <= 50:
        len_param = '2'
    elif tokens_count <= 256:
        len_param = '3'
    else:
        len_param = '-'
    return len_param


# util function to get next person number (1/0) for Machine or Human in the dialogue
def get_user_param(text: dict, machine_name_in_chat: str) -> str:
    if text['from'] == machine_name_in_chat:
        return '1'  # machine
    else:
        return '0'  # human


chat_history_ids = torch.zeros((1, 0), dtype=torch.int)

while True:

    next_who = input("Who's phrase?\t")  #input("H / G?")  # Human or GPT

    # In case Human
    if next_who == "H" or next_who == "Human":
        input_user = input("===> Human: ")

        # encode the new user input, add parameters and return a tensor in Pytorch
        new_user_input_ids = tokenizer.encode(f"|0|{get_length_param(input_user, tokenizer)}|" \
                                              + input_user + tokenizer.eos_token, return_tensors="pt")
        # append the new user input tokens to the chat history
        chat_history_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1)

    if next_who == "G" or next_who == "GPT":

        next_len = input("Phrase len? 1/2/3/-\t")  #input("Exp. len?(-/1/2/3): ")
        # encode the new user input, add parameters and return a tensor in Pytorch
        new_user_input_ids = tokenizer.encode(f"|1|{next_len}|", return_tensors="pt")
        # append the new user input tokens to the chat history
        chat_history_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1)

        # print(tokenizer.decode(chat_history_ids[-1])) # uncomment to see full gpt input

        # save previous len
        input_len = chat_history_ids.shape[-1]
        # generate a response; PS you can read about the parameters at hf.co/blog/how-to-generate
        chat_history_ids = model.generate(
            chat_history_ids,
            num_return_sequences=1,  # use for more variants, but have to print [i]
            max_length=512,
            no_repeat_ngram_size=3,
            do_sample=True,
            top_k=50,
            top_p=0.9,
            temperature=0.6,  # 0 for greedy
            eos_token_id=tokenizer.eos_token_id,
            pad_token_id=tokenizer.pad_token_id,
        )

        # pretty print last output tokens from bot
        print(f"===> GPT-3:  {tokenizer.decode(chat_history_ids[:, input_len:][0], skip_special_tokens=True)}")
```
{"language": ["ru", "ru-RU"], "tags": ["conversational"]}
Kirili4ik/ruDialoGpt3-medium-finetuned-telegram
null
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ru", "ru-RU" ]
TAGS #transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
### Description DialoGPT trained on Russian language and fine tuned on my telegram chat. This model was created by sberbank-ai and trained on Russian forums (see Grossmend's model). You can find info about how it has been trained on habr (in Russian). I have created a simple pipeline and fine tuned that model on my own exported telegram chat (~30mb json). It is in fact very easy to get the data from telegram and fine tune a model. Therefore, I made a colab tutorial for it: URL ️ Due to specifics of the data Hosted inference API may not work properly ️ To try it use my Spaces demo ### How to use with code
### Description

DialoGPT trained on the Russian language and fine-tuned on my Telegram chat.


This model was created by sberbank-ai and trained on Russian forums (see Grossmend's model). You can find info about how it has been trained on habr (in Russian). I have created a simple pipeline and fine-tuned that model on my own exported telegram chat (~30mb json). It is in fact very easy to get the data from telegram and fine-tune a model. Therefore, I made a colab tutorial for it: URL

Due to the specifics of the data, the Hosted inference API may not work properly

To try it, use my Spaces demo

### How to use with code
[ "### Description\n\nDialoGPT trained on the Russian language and fine-tuned on my Telegram chat.\n\n\nThis model was created by sberbank-ai and trained on Russian forums (see Grossmend's model). You can find info about how it has been trained on habr (in Russian). I have created a simple pipeline and fine-tuned that model on my own exported telegram chat (~30mb json). It is in fact very easy to get the data from telegram and fine-tune a model. Therefore, I made a colab tutorial for it: URL\n\nDue to the specifics of the data, the Hosted inference API may not work properly\n\nTo try it, use my Spaces demo", "### How to use with code" ]
text2text-generation
transformers
T5-base fine-tuned on SQuAD and CoQA datasets for question generation

language:
- en-us

tags:
- question-generation

license:
- MIT

datasets:
- SQuAD 2.0
- CoQA
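A minimal usage sketch, assuming the checkpoint id `Kithogue/T5_Question_Generation` and a plain-text input; the exact prompt format used during fine-tuning (for example, whether an answer span must be marked) is not documented here, so the call below is illustrative only:

```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="Kithogue/T5_Question_Generation")

# Illustrative context; the real prompt scheme may differ from plain text
context = "The Eiffel Tower was completed in 1889 and is located in Paris."
print(generator(context, max_length=64))
```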
{}
Kithogue/T5_Question_Generation
null
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
T5-base fine-tuned on SQuAD and CoQA datasets for question generation

language:
- en-us

tags:
- question-generation

license:
- MIT

datasets:
- SQuAD 2.0
- CoQA
[]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Wangchanberta-Depress-Finetuned This model is a fine-tuned version of [airesearch/wangchanberta-base-att-spm-uncased](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased) on the wisesight_sentiment dataset. It achieves the following results on the evaluation set: - Loss: 0.5910 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 400 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.0114 | 0.08 | 200 | 0.9538 | | 0.8617 | 0.15 | 400 | 0.8280 | | 0.7882 | 0.23 | 600 | 0.7472 | | 0.7132 | 0.3 | 800 | 0.7264 | | 0.7226 | 0.38 | 1000 | 0.7265 | | 0.6854 | 0.45 | 1200 | 0.6792 | | 0.621 | 0.53 | 1400 | 0.6451 | | 0.6093 | 0.61 | 1600 | 0.6364 | | 0.6099 | 0.68 | 1800 | 0.6128 | | 0.5766 | 0.76 | 2000 | 0.6388 | | 0.6033 | 0.83 | 2200 | 0.6148 | | 0.5966 | 0.91 | 2400 | 0.6440 | | 0.6208 | 0.98 | 2600 | 0.5910 | | 0.5178 | 1.06 | 2800 | 0.6340 | | 0.4863 | 1.13 | 3000 | 0.7177 | | 0.4852 | 1.21 | 3200 | 0.6766 | | 0.4711 | 1.29 | 3400 | 0.6739 | | 0.5203 | 1.36 | 3600 | 0.6429 | | 0.5167 | 1.44 | 3800 | 0.6539 | | 0.5053 | 1.51 | 4000 | 0.6172 | | 0.5076 | 1.59 | 4200 | 0.6053 | | 0.4704 | 1.66 | 4400 | 0.6474 | | 0.4807 | 1.74 | 4600 | 0.6225 | | 0.4792 | 1.82 | 4800 | 0.6282 | | 0.5177 | 1.89 | 5000 | 0.6011 | | 0.4839 | 1.97 | 5200 | 0.6231 | | 0.4155 | 2.04 | 5400 | 0.6668 | | 0.3923 | 2.12 | 5600 | 0.6886 | | 0.3713 | 2.19 | 5800 | 0.6895 | | 0.364 | 2.27 | 6000 | 0.6886 | | 0.3774 | 2.34 | 6200 | 0.7117 | | 0.4001 | 2.42 | 6400 | 0.7081 | | 0.3531 | 2.5 | 6600 | 0.7465 | | 0.3768 | 2.57 | 6800 | 0.7706 | | 0.3324 | 2.65 | 7000 | 0.7456 | | 0.3597 | 2.72 | 7200 | 0.7507 | | 0.3868 | 2.8 | 7400 | 0.7542 | | 0.4141 | 2.87 | 7600 | 0.7223 | | 0.3701 | 2.95 | 7800 | 0.7374 | | 0.3175 | 3.03 | 8000 | 0.7615 | | 0.2951 | 3.1 | 8200 | 0.7880 | | 0.2885 | 3.18 | 8400 | 0.8158 | | 0.2913 | 3.25 | 8600 | 0.8565 | | 0.2815 | 3.33 | 8800 | 0.8649 | | 0.2748 | 3.4 | 9000 | 0.8783 | | 0.2776 | 3.48 | 9200 | 0.8851 | | 0.2982 | 3.56 | 9400 | 0.8922 | | 0.2939 | 3.63 | 9600 | 0.8796 | | 0.2712 | 3.71 | 9800 | 0.8873 | | 0.2918 | 3.78 | 10000 | 0.8973 | | 0.3144 | 3.86 | 10200 | 0.8978 | | 0.2988 | 3.93 | 10400 | 0.8951 | ### Framework versions - Transformers 4.11.2 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.10.3
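## How to use

A minimal inference sketch, assuming the checkpoint id `Kittipot/Wangchanberta-Depress-Finetuned`; the Thai input is illustrative and the label mapping follows whatever the saved config contains (wisesight_sentiment distinguishes positive, neutral, negative, and question):

```python
from transformers import pipeline

# Thai sentiment classifier fine-tuned on wisesight_sentiment
# (the tokenizer requires the sentencepiece package)
classifier = pipeline("text-classification", model="Kittipot/Wangchanberta-Depress-Finetuned")
print(classifier("วันนี้รู้สึกแย่มากเลย"))  # "I feel really bad today" (illustrative input)
```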
{"tags": ["generated_from_trainer"], "datasets": ["wisesight_sentiment"], "model-index": [{"name": "Wangchanberta-Depress-Finetuned", "results": []}]}
Kittipot/Wangchanberta-Depress-Finetuned
null
[ "transformers", "pytorch", "tensorboard", "camembert", "text-classification", "generated_from_trainer", "dataset:wisesight_sentiment", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #camembert #text-classification #generated_from_trainer #dataset-wisesight_sentiment #autotrain_compatible #endpoints_compatible #region-us
Wangchanberta-Depress-Finetuned =============================== This model is a fine-tuned version of airesearch/wangchanberta-base-att-spm-uncased on the wisesight\_sentiment dataset. It achieves the following results on the evaluation set: * Loss: 0.5910 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 400 * num\_epochs: 4 ### Training results ### Framework versions * Transformers 4.11.2 * Pytorch 1.11.0+cu113 * Datasets 2.1.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 400\n* num\\_epochs: 4", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.2\n* Pytorch 1.11.0+cu113\n* Datasets 2.1.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #camembert #text-classification #generated_from_trainer #dataset-wisesight_sentiment #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 400\n* num\\_epochs: 4", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.2\n* Pytorch 1.11.0+cu113\n* Datasets 2.1.0\n* Tokenizers 0.10.3" ]
text-generation
transformers
# MORTY!!!
{"tags": ["conversational"]}
KnutZuidema/DialoGPT-small-morty
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# MORTY!!!
[ "# MORTY!!!" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# MORTY!!!" ]
text-generation
transformers
# GPT-J 6B - Janeway
## Model Description
GPT-J 6B-Janeway is a finetune created using EleutherAI's GPT-J 6B model.
## Training data
The training data contains around 2210 ebooks, mostly in the sci-fi and fantasy genres. The dataset is based on the same dataset used by GPT-Neo-2.7B-Picard, with 20% more data in various genres.
Some parts of the dataset have been prepended with the following text: `[Genre: <genre1>,<genre2>]`
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/GPT-J-6B-Janeway')
>>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50)

[{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\nIt\'s all right," Janeway said. "I\'m certain that you\'re doing your best to keep me informed of what\'s going on."'}]
```
### Limitations and Biases
The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
### BibTeX entry and citation info
The model uses the following model as base:
```bibtex
@misc{gpt-j,
  author = {Wang, Ben and Komatsuzaki, Aran},
  title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
  howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
  year = 2021,
  month = May
}
```
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.
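Since parts of the training data were prepended with `[Genre: ...]` tags, a prompt can be primed with the same convention; this is an illustrative sketch (the genre names and opening line are invented):

```python
from transformers import pipeline

generator = pipeline('text-generation', model='KoboldAI/GPT-J-6B-Janeway')

# Prime generation with the genre-tag convention described under "Training data"
prompt = "[Genre: sci-fi, adventure]\nThe shuttle dropped out of warp above an unfamiliar moon."
print(generator(prompt, do_sample=True, max_new_tokens=50)[0]["generated_text"])
```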
{"language": "en", "license": "mit"}
KoboldAI/GPT-J-6B-Janeway
null
[ "transformers", "pytorch", "gptj", "text-generation", "en", "arxiv:2101.00027", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2101.00027" ]
[ "en" ]
TAGS #transformers #pytorch #gptj #text-generation #en #arxiv-2101.00027 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
# GPT-J 6B - Janeway ## Model Description GPT-J 6B-Janeway is a finetune created using EleutherAI's GPT-J 6B model. ## Training data The training data contains around 2210 ebooks, mostly in the sci-fi and fantasy genres. The dataset is based on the same dataset used by GPT-Neo-2.7B-Picard, with 20% more data in various genres. Some parts of the dataset have been prepended using the following text: '[Genre: <genre1>,<genre2>]' ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ### Limitations and Biases The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output. GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ### BibTeX entry and citation info The model uses the following model as base: ## Acknowledgements This project would not have been possible without compute generously provided by Google through the TPU Research Cloud, as well as the Cloud TPU team for providing early access to the Cloud TPU VM Alpha.
[ "# GPT-J 6B - Janeway", "## Model Description\r\nGPT-J 6B-Janeway is a finetune created using EleutherAI's GPT-J 6B model.", "## Training data\r\nThe training data contains around 2210 ebooks, mostly in the sci-fi and fantasy genres. The dataset is based on the same dataset used by GPT-Neo-2.7B-Picard, with 20% more data in various genres.\r\nSome parts of the dataset have been prepended using the following text: '[Genre: <genre1>,<genre2>]'", "### How to use\r\nYou can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:", "### Limitations and Biases\r\n\r\nThe core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most \"accurate\" text. Never depend upon GPT-J to produce factually accurate output.\r\n\r\nGPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.\r\n\r\nAs with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.", "### BibTeX entry and citation info\r\nThe model uses the following model as base:", "## Acknowledgements\r\n\r\nThis project would not have been possible without compute generously provided by Google through the\r\nTPU Research Cloud, as well as the Cloud TPU team for providing early access to the Cloud TPU VM Alpha." ]
[ "TAGS\n#transformers #pytorch #gptj #text-generation #en #arxiv-2101.00027 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# GPT-J 6B - Janeway", "## Model Description\r\nGPT-J 6B-Janeway is a finetune created using EleutherAI's GPT-J 6B model.", "## Training data\r\nThe training data contains around 2210 ebooks, mostly in the sci-fi and fantasy genres. The dataset is based on the same dataset used by GPT-Neo-2.7B-Picard, with 20% more data in various genres.\r\nSome parts of the dataset have been prepended using the following text: '[Genre: <genre1>,<genre2>]'", "### How to use\r\nYou can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:", "### Limitations and Biases\r\n\r\nThe core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most \"accurate\" text. Never depend upon GPT-J to produce factually accurate output.\r\n\r\nGPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.\r\n\r\nAs with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.", "### BibTeX entry and citation info\r\nThe model uses the following model as base:", "## Acknowledgements\r\n\r\nThis project would not have been possible without compute generously provided by Google through the\r\nTPU Research Cloud, as well as the Cloud TPU team for providing early access to the Cloud TPU VM Alpha." ]
text-generation
transformers
# GPT-J 6B - Shinen
## Model Description
GPT-J 6B-Shinen is a finetune created using EleutherAI's GPT-J 6B model. Compared to GPT-Neo-2.7-Horni, this model is much heavier on the sexual content.
**Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**
## Training data
The training data contains user-generated stories from sexstories.com. All stories are tagged in the following way:
```
[Theme: <theme1>, <theme2>, <theme3>]
<Story goes here>
```
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/GPT-J-6B-Shinen')
>>> generator("She was staring at me", do_sample=True, min_length=50)

[{'generated_text': 'She was staring at me with a look that said it all. She wanted me so badly tonight that I wanted'}]
```
### Limitations and Biases

The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.

GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.

As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.

### BibTeX entry and citation info
The model uses the following model as base:
```bibtex
@misc{gpt-j,
  author = {Wang, Ben and Komatsuzaki, Aran},
  title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
  howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
  year = 2021,
  month = May
}
```

## Acknowledgements

This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.
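Because the training data is tagged with `[Theme: ...]` headers, generation can be primed with the same convention; the sketch below is illustrative (theme names are invented):

```python
from transformers import pipeline

generator = pipeline('text-generation', model='KoboldAI/GPT-J-6B-Shinen')

# Prime generation with the theme-tag convention used in the training data
prompt = "[Theme: romance, drama]\n"
print(generator(prompt, do_sample=True, max_new_tokens=40)[0]["generated_text"])
```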
{"language": "en", "license": "mit"}
KoboldAI/GPT-J-6B-Shinen
null
[ "transformers", "pytorch", "gptj", "text-generation", "en", "arxiv:2101.00027", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2101.00027" ]
[ "en" ]
TAGS #transformers #pytorch #gptj #text-generation #en #arxiv-2101.00027 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
# GPT-J 6B - Shinen ## Model Description GPT-J 6B-Shinen is a finetune created using EleutherAI's GPT-J 6B model. Compared to GPT-Neo-2.7-Horni, this model is much heavier on the sexual content. Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content. ## Training data The training data contains user-generated stories from URL. All stories are tagged using the following way: ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ### Limitations and Biases The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output. GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ### BibTeX entry and citation info The model uses the following model as base: ## Acknowledgements This project would not have been possible without compute generously provided by Google through the TPU Research Cloud, as well as the Cloud TPU team for providing early access to the Cloud TPU VM Alpha.
[ "# GPT-J 6B - Shinen", "## Model Description\r\nGPT-J 6B-Shinen is a finetune created using EleutherAI's GPT-J 6B model. Compared to GPT-Neo-2.7-Horni, this model is much heavier on the sexual content.\r\nWarning: THIS model is NOT suitable for use by minors. The model will output X-rated content.", "## Training data\r\nThe training data contains user-generated stories from URL. All stories are tagged using the following way:", "### How to use\r\nYou can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:", "### Limitations and Biases\r\n\r\nThe core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most \"accurate\" text. Never depend upon GPT-J to produce factually accurate output.\r\n\r\nGPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.\r\n\r\nAs with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.", "### BibTeX entry and citation info\r\nThe model uses the following model as base:", "## Acknowledgements\r\n\r\nThis project would not have been possible without compute generously provided by Google through the\r\nTPU Research Cloud, as well as the Cloud TPU team for providing early access to the Cloud TPU VM Alpha." ]
[ "TAGS\n#transformers #pytorch #gptj #text-generation #en #arxiv-2101.00027 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# GPT-J 6B - Shinen", "## Model Description\r\nGPT-J 6B-Shinen is a finetune created using EleutherAI's GPT-J 6B model. Compared to GPT-Neo-2.7-Horni, this model is much heavier on the sexual content.\r\nWarning: THIS model is NOT suitable for use by minors. The model will output X-rated content.", "## Training data\r\nThe training data contains user-generated stories from URL. All stories are tagged using the following way:", "### How to use\r\nYou can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:", "### Limitations and Biases\r\n\r\nThe core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most \"accurate\" text. Never depend upon GPT-J to produce factually accurate output.\r\n\r\nGPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.\r\n\r\nAs with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.", "### BibTeX entry and citation info\r\nThe model uses the following model as base:", "## Acknowledgements\r\n\r\nThis project would not have been possible without compute generously provided by Google through the\r\nTPU Research Cloud, as well as the Cloud TPU team for providing early access to the Cloud TPU VM Alpha." ]
text-generation
transformers
# Model Card for GPT-J-6B-Skein

# Model Details

## Model Description

- **Developed by:** KoboldAI
- **Shared by [Optional]:** KoboldAI
- **Model type:** Text Generation
- **Language(s) (NLP):** English
- **License:** Apache License 2.0
- **Related Models:** [GPT-J 6B](https://huggingface.co/EleutherAI/gpt-j-6B?text=My+name+is+Mariama%2C+my+favorite)
- **Parent Model:** GPT-J
- **Resources for more information:**
 - [GitHub Repo](https://github.com/kingoflolz/mesh-transformer-jax)
 - [Associated Model Doc](https://huggingface.co/docs/transformers/main/en/model_doc/gptj#transformers.GPTJForCausalLM)

# Uses

## Direct Use

This model is designed for creative story generation. It can understand both free-form text and text written in interactive fiction style with actions starting with "> You", such as:

```
You become aware of her breathing -- the slight expansion of her ribs, the soft exhalation -- natural, and yet somehow studied. "Ah -- by the way," she says, in a way that utterly fails to be casual, "have you seen the artist out there? -- My artist, that is."

"No," you respond, uneasy. You open your mouth and close it again.

> You ask about the experience of waking up
```

## Downstream Use [Optional]

More information needed

## Out-of-Scope Use

The model should not be used to intentionally create hostile or alienating environments for people.

# Bias, Risks, and Limitations

The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. See the [GPT-J 6B model card](https://huggingface.co/EleutherAI/gpt-j-6B?text=My+name+is+Mariama%2C+my+favorite) for more information.

## Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

# Training Details

## Training Data

The data are mostly comprised of light novels from the dataset of the [KoboldAI/GPT-Neo-2.7B-Horni-LN](https://huggingface.co/KoboldAI/GPT-Neo-2.7B-Horni-LN) model and assorted interactive fiction. The dataset uses `[Themes: <comma-separated list of genres>]` for tagging, which means that if similar text is placed in the context, the model will attempt to generate text in the specified style(s). For more details about the dataset, consult [this document](https://wandb.ai/ve-forbryderne/skein/runs/files/files/datasets/README.txt).

## Training Procedure

### Preprocessing

The data were preprocessed using the Python package ftfy to eliminate as much as possible non-ASCII punctuation characters and possible encoding errors.
The interactive fiction in the dataset also underwent deduplication since interactive fiction logs often contain duplicate text from, for example, visiting the same in-game area several times. spaCy was used for grammatical analysis with the purpose of reformatting the actions commonly found in old text adventure games into more complete sentences. There was also some manual elimination of things such as "thank you for playing" messages and title messages. ### Speeds, Sizes, Times Training took approximately 14 hours in total, with the average speed being 5265 tokens per second. # Evaluation ## Testing Data, Factors & Metrics ### Testing Data More information needed ### Factors ### Metrics More information needed ## Results More information needed # Model Examination More information needed # Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** More information needed - **Hours used:** More information needed - **Cloud Provider:** More information needed - **Compute Region:** More information needed - **Carbon Emitted:** More information needed # Technical Specifications [optional] ## Model Architecture and Objective More information needed ## Compute Infrastructure More information needed ### Hardware More information needed ### Software https://github.com/kingoflolz/mesh-transformer-jax # Citation **BibTeX:** ``` @misc{mesh-transformer-jax, author = {Wang, Ben}, title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}}, howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}}, year = 2021, month = May } ``` # Glossary [optional] More information needed # More Information [optional] More information needed # Model Card Authors [optional] KoboldAI in collaboration with Ezi Ozoani and the Hugging Face team # Model Card Contact More information needed # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("KoboldAI/GPT-J-6B-Skein") model = AutoModelForCausalLM.from_pretrained("KoboldAI/GPT-J-6B-Skein") ``` </details>
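Building on the loader above, a generation sketch in the interactive fiction style described under Direct Use; the prompt text and sampling settings are illustrative assumptions, not part of the original card:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("KoboldAI/GPT-J-6B-Skein")
model = AutoModelForCausalLM.from_pretrained("KoboldAI/GPT-J-6B-Skein")

# Interactive-fiction style prompt using the "> You" action convention
prompt = (
    "You are standing in a dimly lit archive, shelves of ledgers rising into the dark.\n"
    "> You take a ledger from the nearest shelf\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(
        **inputs,
        do_sample=True,
        max_new_tokens=60,
        pad_token_id=tokenizer.eos_token_id,  # GPT-J has no dedicated pad token
    )
print(tokenizer.decode(output[0], skip_special_tokens=True))
```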
{"tags": ["text-generation"]}
KoboldAI/GPT-J-6B-Skein
null
[ "transformers", "pytorch", "gptj", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1910.09700" ]
[]
TAGS #transformers #pytorch #gptj #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #has_space #region-us
# Model Card for GPT-J-6B-Skein # Model Details ## Model Description - Developed by: KoboldAI - Shared by [Optional]: KoboldAI - Model type: Text Generation - Language(s) (NLP): English - License: Apache License 2.0 - Related Models: GPT-J 6B - Parent Model: GPT-J - Resources for more information: - GitHub Repo - Associated Model Doc # Uses ## Direct Use This model is designed for creative story generation. It can understand both free-form text and text written in interactive fiction style with actions starting with "> You", such as: ## Downstream Use [Optional] More information needed ## Out-of-Scope Use The model should not be used to intentionally create hostile or alienating environments for people. # Bias, Risks, and Limitations The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output. GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. See the GPT-J 6B model card for more information. ## Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. # Training Details ## Training Data The data are mostly comprised of light novels from the dataset of the KoboldAI/GPT-Neo-2.7B-Horni-LN model and assorted interactive fiction. The dataset uses '[Themes: <comma-separated list of genres>]' for tagging, which means that if similar text is placed in the context, the model will attempt to generate text in the specified style(s). For more details about the dataset, consult this document. ## Training Procedure ### Preprocessing The data were preprocessed using the Python package ftfy to eliminate as much as possible non-ASCII punctuation characters and possible encoding errors. The interactive fiction in the dataset also underwent deduplication since interactive fiction logs often contain duplicate text from, for example, visiting the same in-game area several times. spaCy was used for grammatical analysis with the purpose of reformatting the actions commonly found in old text adventure games into more complete sentences. There was also some manual elimination of things such as "thank you for playing" messages and title messages. ### Speeds, Sizes, Times Training took approximately 14 hours in total, with the average speed being 5265 tokens per second. # Evaluation ## Testing Data, Factors & Metrics ### Testing Data More information needed ### Factors ### Metrics More information needed ## Results More information needed # Model Examination More information needed # Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). 
- Hardware Type: More information needed - Hours used: More information needed - Cloud Provider: More information needed - Compute Region: More information needed - Carbon Emitted: More information needed # Technical Specifications [optional] ## Model Architecture and Objective More information needed ## Compute Infrastructure More information needed ### Hardware More information needed ### Software URL BibTeX: # Glossary [optional] More information needed # More Information [optional] More information needed # Model Card Authors [optional] KoboldAI in collaboration with Ezi Ozoani and the Hugging Face team # Model Card Contact More information needed # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> </details>
[ "# Model Card for GPT-J-6B-Skein", "# Model Details", "## Model Description\n \n \n- Developed by: KoboldAI\n- Shared by [Optional]: KoboldAI\n- Model type: Text Generation\n- Language(s) (NLP): English\n- License: Apache License 2.0\n- Related Models: GPT-J 6B\n - Parent Model: GPT-J\n- Resources for more information: \n - GitHub Repo\n - Associated Model Doc", "# Uses", "## Direct Use\n \nThis model is designed for creative story generation. It can understand both free-form text and text written in interactive fiction style with actions starting with \"> You\", such as:", "## Downstream Use [Optional]\n \nMore information needed", "## Out-of-Scope Use\n \nThe model should not be used to intentionally create hostile or alienating environments for people.", "# Bias, Risks, and Limitations\nThe core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most \"accurate\" text. Never depend upon GPT-J to produce factually accurate output.\nGPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.\nAs with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.\n \nSee the GPT-J 6B model card for more information.", "## Recommendations\n \nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "# Training Details", "## Training Data\n \nThe data are mostly comprised of light novels from the dataset of the KoboldAI/GPT-Neo-2.7B-Horni-LN model and assorted interactive fiction. The dataset uses '[Themes: <comma-separated list of genres>]' for tagging, which means that if similar text is placed in the context, the model will attempt to generate text in the specified style(s). For more details about the dataset, consult this document.", "## Training Procedure", "### Preprocessing\n \nThe data were preprocessed using the Python package ftfy to eliminate as much as possible non-ASCII punctuation characters and possible encoding errors. The interactive fiction in the dataset also underwent deduplication since interactive fiction logs often contain duplicate text from, for example, visiting the same in-game area several times. spaCy was used for grammatical analysis with the purpose of reformatting the actions commonly found in old text adventure games into more complete sentences. 
There was also some manual elimination of things such as \"thank you for playing\" messages and title messages.", "### Speeds, Sizes, Times\n \nTraining took approximately 14 hours in total, with the average speed being 5265 tokens per second.", "# Evaluation", "## Testing Data, Factors & Metrics", "### Testing Data\n \nMore information needed", "### Factors", "### Metrics\n \nMore information needed", "## Results \n \nMore information needed", "# Model Examination\n \nMore information needed", "# Environmental Impact\n \n \nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n \n- Hardware Type: More information needed\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed", "# Technical Specifications [optional]", "## Model Architecture and Objective\n \nMore information needed", "## Compute Infrastructure\n \nMore information needed", "### Hardware\n \nMore information needed", "### Software\nURL\n \nBibTeX:", "# Glossary [optional]\nMore information needed", "# More Information [optional]\n \nMore information needed", "# Model Card Authors [optional]\n \n \nKoboldAI in collaboration with Ezi Ozoani and the Hugging Face team", "# Model Card Contact\n \nMore information needed", "# How to Get Started with the Model\n \nUse the code below to get started with the model.\n \n<details>\n<summary> Click to expand </summary>\n\n\n</details>" ]
[ "TAGS\n#transformers #pytorch #gptj #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Model Card for GPT-J-6B-Skein", "# Model Details", "## Model Description\n \n \n- Developed by: KoboldAI\n- Shared by [Optional]: KoboldAI\n- Model type: Text Generation\n- Language(s) (NLP): English\n- License: Apache License 2.0\n- Related Models: GPT-J 6B\n - Parent Model: GPT-J\n- Resources for more information: \n - GitHub Repo\n - Associated Model Doc", "# Uses", "## Direct Use\n \nThis model is designed for creative story generation. It can understand both free-form text and text written in interactive fiction style with actions starting with \"> You\", such as:", "## Downstream Use [Optional]\n \nMore information needed", "## Out-of-Scope Use\n \nThe model should not be used to intentionally create hostile or alienating environments for people.", "# Bias, Risks, and Limitations\nThe core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most \"accurate\" text. Never depend upon GPT-J to produce factually accurate output.\nGPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.\nAs with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.\n \nSee the GPT-J 6B model card for more information.", "## Recommendations\n \nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "# Training Details", "## Training Data\n \nThe data are mostly comprised of light novels from the dataset of the KoboldAI/GPT-Neo-2.7B-Horni-LN model and assorted interactive fiction. The dataset uses '[Themes: <comma-separated list of genres>]' for tagging, which means that if similar text is placed in the context, the model will attempt to generate text in the specified style(s). For more details about the dataset, consult this document.", "## Training Procedure", "### Preprocessing\n \nThe data were preprocessed using the Python package ftfy to eliminate as much as possible non-ASCII punctuation characters and possible encoding errors. The interactive fiction in the dataset also underwent deduplication since interactive fiction logs often contain duplicate text from, for example, visiting the same in-game area several times. spaCy was used for grammatical analysis with the purpose of reformatting the actions commonly found in old text adventure games into more complete sentences. 
There was also some manual elimination of things such as \"thank you for playing\" messages and title messages.", "### Speeds, Sizes, Times\n \nTraining took approximately 14 hours in total, with the average speed being 5265 tokens per second.", "# Evaluation", "## Testing Data, Factors & Metrics", "### Testing Data\n \nMore information needed", "### Factors", "### Metrics\n \nMore information needed", "## Results \n \nMore information needed", "# Model Examination\n \nMore information needed", "# Environmental Impact\n \n \nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n \n- Hardware Type: More information needed\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed", "# Technical Specifications [optional]", "## Model Architecture and Objective\n \nMore information needed", "## Compute Infrastructure\n \nMore information needed", "### Hardware\n \nMore information needed", "### Software\nURL\n \nBibTeX:", "# Glossary [optional]\nMore information needed", "# More Information [optional]\n \nMore information needed", "# Model Card Authors [optional]\n \n \nKoboldAI in collaboration with Ezi Ozoani and the Hugging Face team", "# Model Card Contact\n \nMore information needed", "# How to Get Started with the Model\n \nUse the code below to get started with the model.\n \n<details>\n<summary> Click to expand </summary>\n\n\n</details>" ]
text-generation
transformers
# GPT-Neo-125M-AID
This model was finetuned by Henk717 on Google Colab. It contains text adventure tuning and is the smallest 'Adventure' model of its kind.
Because of its limited size, its behavior is mostly suitable for testing text adventure game modes at fast speeds; for a coherent adventure you are better off using one of the 2.7B models.
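
The card ships without a usage snippet; a minimal sketch with the standard transformers text-generation pipeline is shown below. The prompt and sampling settings are illustrative assumptions, not recommendations from the author.

```python
# Minimal sketch (not from the original card); prompt and settings are arbitrary.
from transformers import pipeline

generator = pipeline("text-generation", model="KoboldAI/GPT-Neo-125M-AID")
result = generator("> You enter the cave", do_sample=True, max_new_tokens=40)
print(result[0]["generated_text"])
```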
{}
KoboldAI/GPT-Neo-125M-AID
null
[ "transformers", "pytorch", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #region-us
# GPT-Neo-125M-AID
This model was finetuned by Henk717 on Google Colab. It contains text adventure tuning and is the smallest 'Adventure' model of its kind.
Because of its limited size, its behavior is mostly suitable for testing text adventure game modes at fast speeds; for a coherent adventure you are better off using one of the 2.7B models.
[ "# GPT-Neo-125M-AID\nThis model was finetuned by Henk717 on Google Colab, it contains text adventure tuning and its the smallest 'Adventure' model of its size.\nBecause of its limited size the behavior is mostly suitable for testing text adventure gamemodes at fast speeds, for a coherent adventure you are better off using one of the 2.7B models." ]
[ "TAGS\n#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #region-us \n", "# GPT-Neo-125M-AID\nThis model was finetuned by Henk717 on Google Colab, it contains text adventure tuning and its the smallest 'Adventure' model of its size.\nBecause of its limited size the behavior is mostly suitable for testing text adventure gamemodes at fast speeds, for a coherent adventure you are better off using one of the 2.7B models." ]
text-generation
transformers
# GPT-Neo 2.7B - Janeway

## Model Description

GPT-Neo 2.7B-Janeway is a finetune created using EleutherAI's GPT-Neo 2.7B model.

## Training data

The training data contains around 2210 ebooks, mostly in the sci-fi and fantasy genres. The dataset is based on the same dataset used by GPT-Neo-2.7B-Picard, with 20% more data in various genres.

Some parts of the dataset have been prepended using the following text: `[Genre: <genre1>,<genre2>]`

### How to use

You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:

```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/GPT-Neo-2.7B-Janeway')
>>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50)

[{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\nIt\'s all right," Janeway said. "I\'m certain that you\'re doing your best to keep me informed of what\'s going on."'}]
```

### Limitations and Biases

GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.

GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your use case GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.

As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.

### BibTeX entry and citation info

The model is made using the following software:

```bibtex
@software{gpt-neo,
  author = {Black, Sid and Leo, Gao and Wang, Phil and Leahy, Connor and Biderman, Stella},
  title = {{GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow}},
  month = mar,
  year = 2021,
  note = {{If you use this software, please cite it using these metadata.}},
  publisher = {Zenodo},
  version = {1.0},
  doi = {10.5281/zenodo.5297715},
  url = {https://doi.org/10.5281/zenodo.5297715}
}
```
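
One practical implication of the genre tagging described above is that generations can be steered by prefixing the prompt with a genre header. A hedged sketch follows; the genre names and sampling settings are illustrative choices, not values specified by the card.

```python
# Sketch of steering output via the dataset's genre tags (illustrative values).
from transformers import pipeline

generator = pipeline('text-generation', model='KoboldAI/GPT-Neo-2.7B-Janeway')
prompt = "[Genre: science fiction,adventure]\nThe shuttle dropped out of warp"
print(generator(prompt, do_sample=True, max_new_tokens=60)[0]['generated_text'])
```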
{"language": "en", "license": "mit"}
KoboldAI/GPT-Neo-2.7B-Janeway
null
[ "transformers", "pytorch", "gpt_neo", "text-generation", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #gpt_neo #text-generation #en #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
# GPT-Neo 2.7B - Janeway ## Model Description GPT-Neo 2.7B-Janeway is a finetune created using EleutherAI's GPT-Neo 2.7B model. ## Training data The training data contains around 2210 ebooks, mostly in the sci-fi and fantasy genres. The dataset is based on the same dataset used by GPT-Neo-2.7B-Picard, with 20% more data in various genres. Some parts of the dataset have been prepended using the following text: '[Genre: <genre1>,<genre2>]' ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ### Limitations and Biases GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ### BibTeX entry and citation info The model is made using the following software:
[ "# GPT-Neo 2.7B - Janeway", "## Model Description\r\nGPT-Neo 2.7B-Janeway is a finetune created using EleutherAI's GPT-Neo 2.7B model.", "## Training data\r\nThe training data contains around 2210 ebooks, mostly in the sci-fi and fantasy genres. The dataset is based on the same dataset used by GPT-Neo-2.7B-Picard, with 20% more data in various genres.\r\nSome parts of the dataset have been prepended using the following text: '[Genre: <genre1>,<genre2>]'", "### How to use\r\nYou can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:", "### Limitations and Biases\r\nGPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.\r\nGPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.\r\nAs with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.", "### BibTeX entry and citation info\r\nThe model is made using the following software:" ]
[ "TAGS\n#transformers #pytorch #gpt_neo #text-generation #en #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# GPT-Neo 2.7B - Janeway", "## Model Description\r\nGPT-Neo 2.7B-Janeway is a finetune created using EleutherAI's GPT-Neo 2.7B model.", "## Training data\r\nThe training data contains around 2210 ebooks, mostly in the sci-fi and fantasy genres. The dataset is based on the same dataset used by GPT-Neo-2.7B-Picard, with 20% more data in various genres.\r\nSome parts of the dataset have been prepended using the following text: '[Genre: <genre1>,<genre2>]'", "### How to use\r\nYou can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:", "### Limitations and Biases\r\nGPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.\r\nGPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.\r\nAs with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.", "### BibTeX entry and citation info\r\nThe model is made using the following software:" ]
text-generation
transformers
# GPT-Neo 2.7B - Picard

## Model Description

GPT-Neo 2.7B-Picard is a finetune created using EleutherAI's GPT-Neo 2.7B model.

## Training data

The training data contains around 1800 ebooks, mostly in the sci-fi and fantasy genres.

### How to use

You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:

```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/GPT-Neo-2.7B-Picard')
>>> generator("Jean-Luc Picard", do_sample=True, min_length=50)

[{'generated_text': 'Jean-Luc Picard, the captain of a Federation starship in command of one of Starfleet\'s few fulltime scientists.'}]
```

### Limitations and Biases

GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.

GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your use case GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.

As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.

### BibTeX entry and citation info

The model is made using the following software:

```bibtex
@software{gpt-neo,
  author = {Black, Sid and Leo, Gao and Wang, Phil and Leahy, Connor and Biderman, Stella},
  title = {{GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow}},
  month = mar,
  year = 2021,
  note = {{If you use this software, please cite it using these metadata.}},
  publisher = {Zenodo},
  version = {1.0},
  doi = {10.5281/zenodo.5297715},
  url = {https://doi.org/10.5281/zenodo.5297715}
}
```
{"language": "en", "license": "mit"}
KoboldAI/GPT-Neo-2.7B-Picard
null
[ "transformers", "pytorch", "safetensors", "gpt_neo", "text-generation", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #safetensors #gpt_neo #text-generation #en #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
# GPT-Neo 2.7B - Picard ## Model Description GPT-Neo 2.7B-Picard is a finetune created using EleutherAI's GPT-Neo 2.7B model. ## Training data The training data contains around 1800 ebooks, mostly in the sci-fi and fantasy genres. ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ### Limitations and Biases GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ### BibTeX entry and citation info The model is made using the following software:
[ "# GPT-Neo 2.7B - Picard", "## Model Description\nGPT-Neo 2.7B-Picard is a finetune created using EleutherAI's GPT-Neo 2.7B model.", "## Training data\nThe training data contains around 1800 ebooks, mostly in the sci-fi and fantasy genres.", "### How to use\nYou can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:", "### Limitations and Biases\nGPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.\nGPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.\nAs with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.", "### BibTeX entry and citation info\nThe model is made using the following software:" ]
[ "TAGS\n#transformers #pytorch #safetensors #gpt_neo #text-generation #en #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# GPT-Neo 2.7B - Picard", "## Model Description\nGPT-Neo 2.7B-Picard is a finetune created using EleutherAI's GPT-Neo 2.7B model.", "## Training data\nThe training data contains around 1800 ebooks, mostly in the sci-fi and fantasy genres.", "### How to use\nYou can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:", "### Limitations and Biases\nGPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.\nGPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.\nAs with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.", "### BibTeX entry and citation info\nThe model is made using the following software:" ]
text-generation
transformers
# GPT-Neo 2.7B - Shinen

## Model Description

GPT-Neo 2.7B-Shinen is a finetune created using EleutherAI's GPT-Neo 2.7B model. Compared to GPT-Neo-2.7-Horni, this model is much heavier on the sexual content.

**Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**

## Training data

The training data contains user-generated stories from sexstories.com. All stories are tagged in the following way:

```
[Theme: <theme1>, <theme2>, <theme3>]
<Story goes here>
```

### How to use

You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:

```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/GPT-Neo-2.7B-Shinen')
>>> generator("She was staring at me", do_sample=True, min_length=50)

[{'generated_text': 'She was staring at me with a look that said it all. She wanted me so badly tonight that I wanted'}]
```

### Limitations and Biases

GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.

GPT-Neo-Shinen was trained on a dataset known to contain profanity, lewd, and otherwise abrasive language. GPT-Neo-Shinen *WILL* produce socially unacceptable text without warning.

As with all language models, it is hard to predict in advance how GPT-Neo-Shinen will respond to particular prompts, and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.

### BibTeX entry and citation info

The model is made using the following software:

```bibtex
@software{gpt-neo,
  author = {Black, Sid and Leo, Gao and Wang, Phil and Leahy, Connor and Biderman, Stella},
  title = {{GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow}},
  month = mar,
  year = 2021,
  note = {{If you use this software, please cite it using these metadata.}},
  publisher = {Zenodo},
  version = {1.0},
  doi = {10.5281/zenodo.5297715},
  url = {https://doi.org/10.5281/zenodo.5297715}
}
```
{"language": "en", "license": "mit"}
KoboldAI/GPT-Neo-2.7B-Shinen
null
[ "transformers", "pytorch", "gpt_neo", "text-generation", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #gpt_neo #text-generation #en #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
# GPT-Neo 2.7B - Shinen

## Model Description

GPT-Neo 2.7B-Shinen is a finetune created using EleutherAI's GPT-Neo 2.7B model. Compared to GPT-Neo-2.7-Horni, this model is much heavier on the sexual content.

Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.

## Training data

The training data contains user-generated stories from URL. All stories are tagged in the following way:

### How to use

You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:

### Limitations and Biases

GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.

GPT-Neo-Shinen was trained on a dataset known to contain profanity, lewd, and otherwise abrasive language. GPT-Neo-Shinen *WILL* produce socially unacceptable text without warning.

As with all language models, it is hard to predict in advance how GPT-Neo-Shinen will respond to particular prompts, and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.

### BibTeX entry and citation info

The model is made using the following software:
[ "# GPT-Neo 2.7B - Shinen", "## Model Description\nGPT-Neo 2.7B-Shinen is a finetune created using EleutherAI's GPT-Neo 2.7B model. Compared to GPT-Neo-2.7-Horni, this model is much heavier on the sexual content.\n\nWarning: THIS model is NOT suitable for use by minors. The model will output X-rated content.", "## Training data\nThe training data contains user-generated stories from URL. All stories are tagged using the following way:", "### How to use\nYou can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:", "### Limitations and Biases\nGPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.\nGPT-Neo-Shinen was trained on a dataset known to contain profanity, lewd, and otherwise abrasive language. GPT-Neo-Shinen *WILL* produce socially unacceptable text without warning.\nGPT-Neo-Shinen will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.", "### BibTeX entry and citation info\nThe model is made using the following software:" ]
[ "TAGS\n#transformers #pytorch #gpt_neo #text-generation #en #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# GPT-Neo 2.7B - Shinen", "## Model Description\nGPT-Neo 2.7B-Shinen is a finetune created using EleutherAI's GPT-Neo 2.7B model. Compared to GPT-Neo-2.7-Horni, this model is much heavier on the sexual content.\n\nWarning: THIS model is NOT suitable for use by minors. The model will output X-rated content.", "## Training data\nThe training data contains user-generated stories from URL. All stories are tagged using the following way:", "### How to use\nYou can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:", "### Limitations and Biases\nGPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.\nGPT-Neo-Shinen was trained on a dataset known to contain profanity, lewd, and otherwise abrasive language. GPT-Neo-Shinen *WILL* produce socially unacceptable text without warning.\nGPT-Neo-Shinen will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.", "### BibTeX entry and citation info\nThe model is made using the following software:" ]
text-generation
transformers
This is a Hugging Face transformers-compatible conversion of the original dense 1.3B-parameter model from the paper "[Efficient Large Scale Language Modeling with Mixtures of Experts](https://arxiv.org/abs/2112.10684)" by Artetxe et al. Please refer to the original model card, which can be found at https://github.com/facebookresearch/fairseq/blob/main/examples/moe_lm/model_card.md.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KoboldAI__fairseq-dense-1.3B).

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 31.66 |
| ARC (25-shot) | 31.14 |
| HellaSwag (10-shot) | 58.39 |
| MMLU (5-shot) | 24.98 |
| TruthfulQA (0-shot) | 37.43 |
| Winogrande (5-shot) | 59.04 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 10.6 |
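
The card does not show how to run the converted checkpoint; a minimal sketch using the transformers text-generation pipeline follows. The prompt and sampling settings are illustrative assumptions, not part of the original conversion notes.

```python
# Minimal sketch (not from the original card); prompt and settings are arbitrary.
from transformers import pipeline

generator = pipeline("text-generation", model="KoboldAI/fairseq-dense-1.3B")
result = generator("The meaning of life is", do_sample=True, max_new_tokens=30)
print(result[0]["generated_text"])
```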
{"language": "en"}
KoboldAI/fairseq-dense-1.3B
null
[ "transformers", "pytorch", "safetensors", "xglm", "text-generation", "en", "arxiv:2112.10684", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2112.10684" ]
[ "en" ]
TAGS #transformers #pytorch #safetensors #xglm #text-generation #en #arxiv-2112.10684 #autotrain_compatible #endpoints_compatible #has_space #region-us
This is a Hugging Face transformers-compatible conversion of the original dense 1.3B-parameter model from the paper "Efficient Large Scale Language Modeling with Mixtures of Experts" from Artetxe et al. Please refer to the original model card, which can be found at URL Open LLM Leaderboard Evaluation Results ======================================= Detailed results can be found here
[]
[ "TAGS\n#transformers #pytorch #safetensors #xglm #text-generation #en #arxiv-2112.10684 #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
text-generation
transformers
This is a Hugging Face transformers-compatible conversion of the original dense 125M-parameter model from the paper "[Efficient Large Scale Language Modeling with Mixtures of Experts](https://arxiv.org/abs/2112.10684)" by Artetxe et al. Please refer to the original model card, which can be found at https://github.com/facebookresearch/fairseq/blob/main/examples/moe_lm/model_card.md.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KoboldAI__fairseq-dense-125M).

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 26.0 |
| ARC (25-shot) | 24.06 |
| HellaSwag (10-shot) | 34.14 |
| MMLU (5-shot) | 23.98 |
| TruthfulQA (0-shot) | 43.72 |
| Winogrande (5-shot) | 50.59 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 5.5 |
{"language": "en"}
KoboldAI/fairseq-dense-125M
null
[ "transformers", "pytorch", "safetensors", "xglm", "text-generation", "en", "arxiv:2112.10684", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2112.10684" ]
[ "en" ]
TAGS #transformers #pytorch #safetensors #xglm #text-generation #en #arxiv-2112.10684 #autotrain_compatible #endpoints_compatible #has_space #region-us
This is a Hugging Face transformers-compatible conversion of the original dense 125M-parameter model from the paper "Efficient Large Scale Language Modeling with Mixtures of Experts" from Artetxe et al. Please refer to the original model card, which can be found at URL Open LLM Leaderboard Evaluation Results ======================================= Detailed results can be found here
[]
[ "TAGS\n#transformers #pytorch #safetensors #xglm #text-generation #en #arxiv-2112.10684 #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
text-generation
transformers
This is a Hugging Face transformers-compatible conversion of the original dense 13B-parameter model from the paper "[Efficient Large Scale Language Modeling with Mixtures of Experts](https://arxiv.org/abs/2112.10684)" by Artetxe et al. Please refer to the original model card, which can be found at https://github.com/facebookresearch/fairseq/blob/main/examples/moe_lm/model_card.md.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KoboldAI__fairseq-dense-13B).

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 37.53 |
| ARC (25-shot) | 40.36 |
| HellaSwag (10-shot) | 75.51 |
| MMLU (5-shot) | 27.07 |
| TruthfulQA (0-shot) | 32.83 |
| Winogrande (5-shot) | 67.96 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 18.96 |
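
At 13 billion parameters, the checkpoint is unlikely to fit on a single consumer GPU in full precision. One common pattern, sketched below as an assumption rather than guidance from the card, is to load the weights in float16 with automatic device placement (which additionally requires the accelerate package):

```python
# Minimal sketch (assumed usage, not from the card): memory-conscious loading.
# torch_dtype=torch.float16 halves memory; device_map="auto" requires accelerate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("KoboldAI/fairseq-dense-13B")
model = AutoModelForCausalLM.from_pretrained(
    "KoboldAI/fairseq-dense-13B",
    torch_dtype=torch.float16,
    device_map="auto",
)
```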
{"language": "en"}
KoboldAI/fairseq-dense-13B
null
[ "transformers", "pytorch", "xglm", "text-generation", "en", "arxiv:2112.10684", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2112.10684" ]
[ "en" ]
TAGS #transformers #pytorch #xglm #text-generation #en #arxiv-2112.10684 #autotrain_compatible #endpoints_compatible #has_space #region-us
This is a Hugging Face transformers-compatible conversion of the original dense 13B-parameter model from the paper "Efficient Large Scale Language Modeling with Mixtures of Experts" from Artetxe et al. Please refer to the original model card, which can be found at URL Open LLM Leaderboard Evaluation Results ======================================= Detailed results can be found here
[]
[ "TAGS\n#transformers #pytorch #xglm #text-generation #en #arxiv-2112.10684 #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
text-generation
transformers
# Fairseq-dense 2.7B - Janeway

## Model Description

Fairseq-dense 2.7B-Janeway is a finetune created using Fairseq's MoE dense model.

## Training data

The training data contains around 2210 ebooks, mostly in the sci-fi and fantasy genres. The dataset is identical to the dataset used by GPT-Neo-2.7B-Janeway.

Some parts of the dataset have been prepended using the following text: `[Genre: <genre1>,<genre2>]`

### How to use

You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:

```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/fairseq-dense-2.7B-Janeway')
>>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50)

[{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\nIt\'s all right," Janeway said. "I\'m certain that you\'re doing your best to keep me informed of what\'s going on."'}]
```

### Limitations and Biases

Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion).

### BibTeX entry and citation info

```
Artetxe et al. (2021): Efficient Large Scale Language Modeling with Mixtures of Experts
```
{"language": "en", "license": "mit"}
KoboldAI/fairseq-dense-2.7B-Janeway
null
[ "transformers", "pytorch", "xglm", "text-generation", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #xglm #text-generation #en #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
# Fairseq-dense 2.7B - Janeway

## Model Description

Fairseq-dense 2.7B-Janeway is a finetune created using Fairseq's MoE dense model.

## Training data

The training data contains around 2210 ebooks, mostly in the sci-fi and fantasy genres. The dataset is identical to the dataset used by GPT-Neo-2.7B-Janeway.

Some parts of the dataset have been prepended using the following text: '[Genre: <genre1>,<genre2>]'

### How to use

You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:

### Limitations and Biases

Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion).

### BibTeX entry and citation info
[ "# Fairseq-dense 2.7B - Janeway", "## Model Description\r\nFairseq-dense 2.7B-Janeway is a finetune created using Fairseq's MoE dense model.", "## Training data\r\nThe training data contains around 2210 ebooks, mostly in the sci-fi and fantasy genres. The dataset is identical as dataset used by GPT-Neo-2.7B-Janeway.\r\nSome parts of the dataset have been prepended using the following text: '[Genre: <genre1>,<genre2>]'", "### How to use\r\nYou can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:", "### Limitations and Biases\r\nBased on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion).", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #xglm #text-generation #en #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Fairseq-dense 2.7B - Janeway", "## Model Description\r\nFairseq-dense 2.7B-Janeway is a finetune created using Fairseq's MoE dense model.", "## Training data\r\nThe training data contains around 2210 ebooks, mostly in the sci-fi and fantasy genres. The dataset is identical as dataset used by GPT-Neo-2.7B-Janeway.\r\nSome parts of the dataset have been prepended using the following text: '[Genre: <genre1>,<genre2>]'", "### How to use\r\nYou can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:", "### Limitations and Biases\r\nBased on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion).", "### BibTeX entry and citation info" ]
text-generation
transformers
This is a Hugging Face transformers-compatible conversion of the original dense 2.7B-parameter model from the paper "[Efficient Large Scale Language Modeling with Mixtures of Experts](https://arxiv.org/abs/2112.10684)" by Artetxe et al. Please refer to the original model card, which can be found at https://github.com/facebookresearch/fairseq/blob/main/examples/moe_lm/model_card.md.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KoboldAI__fairseq-dense-2.7B).

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 33.67 |
| ARC (25-shot) | 33.79 |
| HellaSwag (10-shot) | 65.74 |
| MMLU (5-shot) | 26.44 |
| TruthfulQA (0-shot) | 34.57 |
| Winogrande (5-shot) | 63.93 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 11.24 |
{"language": "en"}
KoboldAI/fairseq-dense-2.7B
null
[ "transformers", "pytorch", "safetensors", "xglm", "text-generation", "en", "arxiv:2112.10684", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2112.10684" ]
[ "en" ]
TAGS #transformers #pytorch #safetensors #xglm #text-generation #en #arxiv-2112.10684 #autotrain_compatible #endpoints_compatible #has_space #region-us
This is a Hugging Face transformers-compatible conversion of the original dense 2.7B-parameter model from the paper "Efficient Large Scale Language Modeling with Mixtures of Experts" from Artetxe et al. Please refer to the original model card, which can be found at URL Open LLM Leaderboard Evaluation Results ======================================= Detailed results can be found here
[]
[ "TAGS\n#transformers #pytorch #safetensors #xglm #text-generation #en #arxiv-2112.10684 #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
text-generation
transformers
This is a Hugging Face transformers-compatible conversion of the original dense 355M-parameter model from the paper "[Efficient Large Scale Language Modeling with Mixtures of Experts](https://arxiv.org/abs/2112.10684)" by Artetxe et al. Please refer to the original model card, which can be found at https://github.com/facebookresearch/fairseq/blob/main/examples/moe_lm/model_card.md.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KoboldAI__fairseq-dense-355M).

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 27.99 |
| ARC (25-shot) | 25.43 |
| HellaSwag (10-shot) | 46.67 |
| MMLU (5-shot) | 25.3 |
| TruthfulQA (0-shot) | 39.19 |
| Winogrande (5-shot) | 52.88 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 6.48 |
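
For a checkpoint this small, it is straightforward to drive generation directly with the tokenizer and `generate`; the sketch below shows one assumed usage pattern, with an arbitrary prompt and length:

```python
# Minimal sketch (assumed usage): tokenize, generate, decode.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("KoboldAI/fairseq-dense-355M")
model = AutoModelForCausalLM.from_pretrained("KoboldAI/fairseq-dense-355M")

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```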
{"language": "en"}
KoboldAI/fairseq-dense-355M
null
[ "transformers", "pytorch", "safetensors", "xglm", "text-generation", "en", "arxiv:2112.10684", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2112.10684" ]
[ "en" ]
TAGS #transformers #pytorch #safetensors #xglm #text-generation #en #arxiv-2112.10684 #autotrain_compatible #endpoints_compatible #has_space #region-us
This is a Hugging Face transformers-compatible conversion of the original dense 355M-parameter model from the paper "Efficient Large Scale Language Modeling with Mixtures of Experts" from Artetxe et al. Please refer to the original model card, which can be found at URL Open LLM Leaderboard Evaluation Results ======================================= Detailed results can be found here
[]
[ "TAGS\n#transformers #pytorch #safetensors #xglm #text-generation #en #arxiv-2112.10684 #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
text-generation
transformers
This is a Hugging Face transformers-compatible conversion of the original dense 6.7B-parameter model from the paper "[Efficient Large Scale Language Modeling with Mixtures of Experts](https://arxiv.org/abs/2112.10684)" by Artetxe et al. Please refer to the original model card, which can be found at https://github.com/facebookresearch/fairseq/blob/main/examples/moe_lm/model_card.md.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KoboldAI__fairseq-dense-6.7B).

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 36.09 |
| ARC (25-shot) | 39.42 |
| HellaSwag (10-shot) | 71.26 |
| MMLU (5-shot) | 26.91 |
| TruthfulQA (0-shot) | 32.73 |
| Winogrande (5-shot) | 65.27 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 17.05 |
{"language": "en"}
KoboldAI/fairseq-dense-6.7B
null
[ "transformers", "pytorch", "xglm", "text-generation", "en", "arxiv:2112.10684", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2112.10684" ]
[ "en" ]
TAGS #transformers #pytorch #xglm #text-generation #en #arxiv-2112.10684 #autotrain_compatible #endpoints_compatible #has_space #region-us
This is a Hugging Face transformers-compatible conversion of the original dense 6.7B-parameter model from the paper "Efficient Large Scale Language Modeling with Mixtures of Experts" from Artetxe et al. Please refer to the original model card, which can be found at URL Open LLM Leaderboard Evaluation Results ======================================= Detailed results can be found here
[]
[ "TAGS\n#transformers #pytorch #xglm #text-generation #en #arxiv-2112.10684 #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
token-classification
transformers
[![Current PyPI packages](https://badge.fury.io/py/suparkanbun.svg)](https://pypi.org/project/suparkanbun/)

# SuPar-Kanbun

Tokenizer, POS-Tagger and Dependency-Parser for Classical Chinese Texts (漢文/文言文) with [spaCy](https://spacy.io), [Transformers](https://huggingface.co/transformers/) and [SuPar](https://github.com/yzhangcs/parser).

## Basic usage

```py
>>> import suparkanbun
>>> nlp=suparkanbun.load()
>>> doc=nlp("不入虎穴不得虎子")
>>> print(type(doc))
<class 'spacy.tokens.doc.Doc'>
>>> print(suparkanbun.to_conllu(doc))
# text = 不入虎穴不得虎子
1 不 不 ADV v,副詞,否定,無界 Polarity=Neg 2 advmod _ Gloss=not|SpaceAfter=No
2 入 入 VERB v,動詞,行為,移動 _ 0 root _ Gloss=enter|SpaceAfter=No
3 虎 虎 NOUN n,名詞,主体,動物 _ 4 nmod _ Gloss=tiger|SpaceAfter=No
4 穴 穴 NOUN n,名詞,固定物,地形 Case=Loc 2 obj _ Gloss=cave|SpaceAfter=No
5 不 不 ADV v,副詞,否定,無界 Polarity=Neg 6 advmod _ Gloss=not|SpaceAfter=No
6 得 得 VERB v,動詞,行為,得失 _ 2 parataxis _ Gloss=get|SpaceAfter=No
7 虎 虎 NOUN n,名詞,主体,動物 _ 8 nmod _ Gloss=tiger|SpaceAfter=No
8 子 子 NOUN n,名詞,人,関係 _ 6 obj _ Gloss=child|SpaceAfter=No
>>> import deplacy
>>> deplacy.render(doc)
不 ADV <════╗ advmod
入 VERB ═══╗═╝═╗ ROOT
虎 NOUN <╗ ║ ║ nmod
穴 NOUN ═╝<╝ ║ obj
不 ADV <════╗ ║ advmod
得 VERB ═══╗═╝<╝ parataxis
虎 NOUN <╗ ║ nmod
子 NOUN ═╝<╝ obj
```

`suparkanbun.load()` has two options, `BERT` and `Danku`; the defaults are `suparkanbun.load(BERT="roberta-classical-chinese-base-char",Danku=False)`. With the option `Danku=True` the pipeline tries to segment sentences automatically. Available `BERT` options are:

* `BERT="roberta-classical-chinese-base-char"` utilizes [roberta-classical-chinese-base-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-char) (default)
* `BERT="roberta-classical-chinese-large-char"` utilizes [roberta-classical-chinese-large-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-large-char)
* `BERT="guwenbert-base"` utilizes [GuwenBERT-base](https://huggingface.co/ethanyt/guwenbert-base)
* `BERT="guwenbert-large"` utilizes [GuwenBERT-large](https://huggingface.co/ethanyt/guwenbert-large)
* `BERT="sikubert"` utilizes [SikuBERT](https://huggingface.co/SIKU-BERT/sikubert)
* `BERT="sikuroberta"` utilizes [SikuRoBERTa](https://huggingface.co/SIKU-BERT/sikuroberta)

## Installation for Linux

```sh
pip3 install suparkanbun --user
```

## Installation for Cygwin64

Make sure to get `python37-devel` `python37-pip` `python37-cython` `python37-numpy` `python37-wheel` `gcc-g++` `mingw64-x86_64-gcc-g++` `git` `curl` `make` `cmake` packages, and then:

```sh
curl -L https://raw.githubusercontent.com/KoichiYasuoka/CygTorch/master/installer/supar.sh | sh
pip3.7 install suparkanbun --no-build-isolation
```

## Installation for Jupyter Notebook (Google Colaboratory)

```py
!pip install suparkanbun
```

Try [notebook](https://colab.research.google.com/github/KoichiYasuoka/SuPar-Kanbun/blob/main/suparkanbun.ipynb) for Google Colaboratory.

## Author

Koichi Yasuoka (安岡孝一)
{"language": ["lzh"], "license": "mit", "tags": ["classical chinese", "literary chinese", "ancient chinese", "token-classification", "pos"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u4e0d\u5165\u864e\u7a74\u4e0d\u5f97\u864e\u5b50"}]}
KoichiYasuoka/SuPar-Kanbun
null
[ "transformers", "pytorch", "roberta", "token-classification", "classical chinese", "literary chinese", "ancient chinese", "pos", "lzh", "dataset:universal_dependencies", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "lzh" ]
TAGS #transformers #pytorch #roberta #token-classification #classical chinese #literary chinese #ancient chinese #pos #lzh #dataset-universal_dependencies #license-mit #autotrain_compatible #endpoints_compatible #region-us
![Current PyPI packages](URL # SuPar-Kanbun Tokenizer, POS-Tagger and Dependency-Parser for Classical Chinese Texts (漢文/文言文) with spaCy, Transformers and SuPar. ## Basic usage 'URL()' has two options 'URL(BERT="roberta-classical-chinese-base-char",Danku=False)'. With the option 'Danku=True' the pipeline tries to segment sentences automatically. Available 'BERT' options are: * 'BERT="roberta-classical-chinese-base-char"' utilizes roberta-classical-chinese-base-char (default) * 'BERT="roberta-classical-chinese-large-char"' utilizes roberta-classical-chinese-large-char * 'BERT="guwenbert-base"' utilizes GuwenBERT-base * 'BERT="guwenbert-large"' utilizes GuwenBERT-large * 'BERT="sikubert"' utilizes SikuBERT * 'BERT="sikuroberta"' utilizes SikuRoBERTa ## Installation for Linux ## Installation for Cygwin64 Make sure to get 'python37-devel' 'python37-pip' 'python37-cython' 'python37-numpy' 'python37-wheel' 'gcc-g++' 'mingw64-x86_64-gcc-g++' 'git' 'curl' 'make' 'cmake' packages, and then: ## Installation for Jupyter Notebook (Google Colaboratory) Try notebook for Google Colaboratory. ## Author Koichi Yasuoka (安岡孝一)
[ "# SuPar-Kanbun\n\nTokenizer, POS-Tagger and Dependency-Parser for Classical Chinese Texts (漢文/文言文) with spaCy, Transformers and SuPar.", "## Basic usage\n\n\n\n'URL()' has two options 'URL(BERT=\"roberta-classical-chinese-base-char\",Danku=False)'. With the option 'Danku=True' the pipeline tries to segment sentences automatically. Available 'BERT' options are:\n\n* 'BERT=\"roberta-classical-chinese-base-char\"' utilizes roberta-classical-chinese-base-char (default)\n* 'BERT=\"roberta-classical-chinese-large-char\"' utilizes roberta-classical-chinese-large-char\n* 'BERT=\"guwenbert-base\"' utilizes GuwenBERT-base\n* 'BERT=\"guwenbert-large\"' utilizes GuwenBERT-large\n* 'BERT=\"sikubert\"' utilizes SikuBERT\n* 'BERT=\"sikuroberta\"' utilizes SikuRoBERTa", "## Installation for Linux", "## Installation for Cygwin64\n\nMake sure to get 'python37-devel' 'python37-pip' 'python37-cython' 'python37-numpy' 'python37-wheel' 'gcc-g++' 'mingw64-x86_64-gcc-g++' 'git' 'curl' 'make' 'cmake' packages, and then:", "## Installation for Jupyter Notebook (Google Colaboratory)\n\n\n\nTry notebook for Google Colaboratory.", "## Author\n\nKoichi Yasuoka (安岡孝一)" ]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #classical chinese #literary chinese #ancient chinese #pos #lzh #dataset-universal_dependencies #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# SuPar-Kanbun\n\nTokenizer, POS-Tagger and Dependency-Parser for Classical Chinese Texts (漢文/文言文) with spaCy, Transformers and SuPar.", "## Basic usage\n\n\n\n'URL()' has two options 'URL(BERT=\"roberta-classical-chinese-base-char\",Danku=False)'. With the option 'Danku=True' the pipeline tries to segment sentences automatically. Available 'BERT' options are:\n\n* 'BERT=\"roberta-classical-chinese-base-char\"' utilizes roberta-classical-chinese-base-char (default)\n* 'BERT=\"roberta-classical-chinese-large-char\"' utilizes roberta-classical-chinese-large-char\n* 'BERT=\"guwenbert-base\"' utilizes GuwenBERT-base\n* 'BERT=\"guwenbert-large\"' utilizes GuwenBERT-large\n* 'BERT=\"sikubert\"' utilizes SikuBERT\n* 'BERT=\"sikuroberta\"' utilizes SikuRoBERTa", "## Installation for Linux", "## Installation for Cygwin64\n\nMake sure to get 'python37-devel' 'python37-pip' 'python37-cython' 'python37-numpy' 'python37-wheel' 'gcc-g++' 'mingw64-x86_64-gcc-g++' 'git' 'curl' 'make' 'cmake' packages, and then:", "## Installation for Jupyter Notebook (Google Colaboratory)\n\n\n\nTry notebook for Google Colaboratory.", "## Author\n\nKoichi Yasuoka (安岡孝一)" ]
fill-mask
transformers
# bert-base-japanese-char-extended

## Model Description

This is a BERT model pre-trained on Japanese Wikipedia texts, derived from [bert-base-japanese-char-v2](https://huggingface.co/cl-tohoku/bert-base-japanese-char-v2). Its character embeddings are extended, via BertTokenizerFast, to cover all 常用漢字/人名用漢字 characters. You can fine-tune `bert-base-japanese-char-extended` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/bert-base-japanese-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/bert-base-japanese-wikipedia-ud-head), and so on.

## How to Use

```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-japanese-char-extended")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/bert-base-japanese-char-extended")
```
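For a quick end-to-end check of the masked-LM head, the generic Transformers `fill-mask` pipeline can be wrapped around the model; this is a sketch rather than part of the original card, and the input sentence is the card's own widget example:

```py
# Minimal fill-mask sketch; the sentence is the widget example
# from the model card ("charge the oxygen cylinder").
from transformers import pipeline

unmasker = pipeline("fill-mask", model="KoichiYasuoka/bert-base-japanese-char-extended")
for r in unmasker("酸素ボンベを充[MASK]する。"):
    print(r["token_str"], r["score"])  # candidate character and its score
```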
{"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "masked-lm", "wikipedia"], "pipeline_tag": "fill-mask", "mask_token": "[MASK]", "widget": [{"text": "\u9178\u7d20\u30dc\u30f3\u30d9\u3092\u5145[MASK]\u3059\u308b\u3002"}]}
KoichiYasuoka/bert-base-japanese-char-extended
null
[ "transformers", "pytorch", "bert", "fill-mask", "japanese", "masked-lm", "wikipedia", "ja", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja" ]
TAGS #transformers #pytorch #bert #fill-mask #japanese #masked-lm #wikipedia #ja #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# bert-base-japanese-char-extended ## Model Description This is a BERT model pre-trained on Japanese Wikipedia texts, derived from bert-base-japanese-char-v2. Character-embeddings are enhanced to include all 常用漢字/人名用漢字 characters using BertTokenizerFast. You can fine-tune 'bert-base-japanese-char-extended' for downstream tasks, such as POS-tagging, dependency-parsing, and so on. ## How to Use
[ "# bert-base-japanese-char-extended", "## Model Description\n\nThis is a BERT model pre-trained on Japanese Wikipedia texts, derived from bert-base-japanese-char-v2. Character-embeddings are enhanced to include all 常用漢字/人名用漢字 characters using BertTokenizerFast. You can fine-tune 'bert-base-japanese-char-extended' for downstream tasks, such as POS-tagging, dependency-parsing, and so on.", "## How to Use" ]
[ "TAGS\n#transformers #pytorch #bert #fill-mask #japanese #masked-lm #wikipedia #ja #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# bert-base-japanese-char-extended", "## Model Description\n\nThis is a BERT model pre-trained on Japanese Wikipedia texts, derived from bert-base-japanese-char-v2. Character-embeddings are enhanced to include all 常用漢字/人名用漢字 characters using BertTokenizerFast. You can fine-tune 'bert-base-japanese-char-extended' for downstream tasks, such as POS-tagging, dependency-parsing, and so on.", "## How to Use" ]
token-classification
transformers
# bert-base-japanese-luw-upos ## Model Description This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-base-japanese-char-extended](https://huggingface.co/KoichiYasuoka/bert-base-japanese-char-extended). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/). ## How to Use ```py import torch from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-japanese-luw-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-japanese-luw-upos") s="国境の長いトンネルを抜けると雪国であった。" p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]] print(list(zip(s,p))) ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/bert-base-japanese-luw-upos") print(nlp("国境の長いトンネルを抜けると雪国であった。")) ``` ## Reference 安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8. ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
{"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "token-classification", "pos", "wikipedia", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u56fd\u5883\u306e\u9577\u3044\u30c8\u30f3\u30cd\u30eb\u3092\u629c\u3051\u308b\u3068\u96ea\u56fd\u3067\u3042\u3063\u305f\u3002"}]}
KoichiYasuoka/bert-base-japanese-luw-upos
null
[ "transformers", "pytorch", "bert", "token-classification", "japanese", "pos", "wikipedia", "dependency-parsing", "ja", "dataset:universal_dependencies", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja" ]
TAGS #transformers #pytorch #bert #token-classification #japanese #pos #wikipedia #dependency-parsing #ja #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
# bert-base-japanese-luw-upos ## Model Description This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from bert-base-japanese-char-extended. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech) and FEATS. ## How to Use or ## Reference 安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8. ## See Also esupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
[ "# bert-base-japanese-luw-upos", "## Model Description\n\nThis is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from bert-base-japanese-char-extended. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech) and FEATS.", "## How to Use\n\n\n\nor", "## Reference\n\n安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
[ "TAGS\n#transformers #pytorch #bert #token-classification #japanese #pos #wikipedia #dependency-parsing #ja #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# bert-base-japanese-luw-upos", "## Model Description\n\nThis is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from bert-base-japanese-char-extended. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech) and FEATS.", "## How to Use\n\n\n\nor", "## Reference\n\n安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
token-classification
transformers
# bert-base-japanese-unidic-luw-upos ## Model Description This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-base-japanese-v2](https://huggingface.co/cl-tohoku/bert-base-japanese-v2). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py import torch from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-japanese-unidic-luw-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-japanese-unidic-luw-upos") s="国境の長いトンネルを抜けると雪国であった。" t=tokenizer.tokenize(s) p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]] print(list(zip(t,p))) ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/bert-base-japanese-unidic-luw-upos") print(nlp("国境の長いトンネルを抜けると雪国であった。")) ``` [fugashi](https://pypi.org/project/fugashi) and [unidic-lite](https://pypi.org/project/unidic-lite) are required. ## Reference 安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8. ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
{"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "token-classification", "pos", "wikipedia", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u56fd\u5883\u306e\u9577\u3044\u30c8\u30f3\u30cd\u30eb\u3092\u629c\u3051\u308b\u3068\u96ea\u56fd\u3067\u3042\u3063\u305f\u3002"}]}
KoichiYasuoka/bert-base-japanese-unidic-luw-upos
null
[ "transformers", "pytorch", "bert", "token-classification", "japanese", "pos", "wikipedia", "dependency-parsing", "ja", "dataset:universal_dependencies", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja" ]
TAGS #transformers #pytorch #bert #token-classification #japanese #pos #wikipedia #dependency-parsing #ja #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
# bert-base-japanese-unidic-luw-upos ## Model Description This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from bert-base-japanese-v2. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech). ## How to Use or fugashi and unidic-lite are required. ## Reference 安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8. ## See Also esupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
[ "# bert-base-japanese-unidic-luw-upos", "## Model Description\n\nThis is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from bert-base-japanese-v2. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech).", "## How to Use\n\n\n\nor\n\n\n\nfugashi and unidic-lite are required.", "## Reference\n\n安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
[ "TAGS\n#transformers #pytorch #bert #token-classification #japanese #pos #wikipedia #dependency-parsing #ja #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# bert-base-japanese-unidic-luw-upos", "## Model Description\n\nThis is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from bert-base-japanese-v2. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech).", "## How to Use\n\n\n\nor\n\n\n\nfugashi and unidic-lite are required.", "## Reference\n\n安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
token-classification
transformers
# bert-base-japanese-upos ## Model Description This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-base-japanese-char-extended](https://huggingface.co/KoichiYasuoka/bert-base-japanese-char-extended). Every short-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py import torch from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-japanese-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-japanese-upos") s="国境の長いトンネルを抜けると雪国であった。" p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]] print(list(zip(s,p))) ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/bert-base-japanese-upos") print(nlp("国境の長いトンネルを抜けると雪国であった。")) ``` ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
{"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "token-classification", "pos", "wikipedia", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u56fd\u5883\u306e\u9577\u3044\u30c8\u30f3\u30cd\u30eb\u3092\u629c\u3051\u308b\u3068\u96ea\u56fd\u3067\u3042\u3063\u305f\u3002"}]}
KoichiYasuoka/bert-base-japanese-upos
null
[ "transformers", "pytorch", "bert", "token-classification", "japanese", "pos", "wikipedia", "dependency-parsing", "ja", "dataset:universal_dependencies", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja" ]
TAGS #transformers #pytorch #bert #token-classification #japanese #pos #wikipedia #dependency-parsing #ja #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# bert-base-japanese-upos ## Model Description This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from bert-base-japanese-char-extended. Every short-unit-word is tagged by UPOS (Universal Part-Of-Speech). ## How to Use or ## See Also esupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
[ "# bert-base-japanese-upos", "## Model Description\n\nThis is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from bert-base-japanese-char-extended. Every short-unit-word is tagged by UPOS (Universal Part-Of-Speech).", "## How to Use\n\n\n\nor", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
[ "TAGS\n#transformers #pytorch #bert #token-classification #japanese #pos #wikipedia #dependency-parsing #ja #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# bert-base-japanese-upos", "## Model Description\n\nThis is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from bert-base-japanese-char-extended. Every short-unit-word is tagged by UPOS (Universal Part-Of-Speech).", "## How to Use\n\n\n\nor", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
token-classification
transformers
# bert-base-thai-upos ## Model Description This is a BERT model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-base-th-cased](https://huggingface.co/Geotrend/bert-base-th-cased). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-thai-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-thai-upos") ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/bert-base-thai-upos") ``` ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
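To actually tag a sentence rather than just instantiate the model, a `TokenClassificationPipeline` can be wrapped around the load snippet above, mirroring the usage shown in the sibling Japanese cards; this is a sketch, and the Thai proverb is the widget example from this card:

```py
# Minimal tagging sketch built on the load snippet above.
from transformers import AutoTokenizer, AutoModelForTokenClassification, TokenClassificationPipeline

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-thai-upos")
model = AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-thai-upos")
nlp = TokenClassificationPipeline(tokenizer=tokenizer, model=model, aggregation_strategy="simple")
print(nlp("หลายหัวดีกว่าหัวเดียว"))  # "many heads are better than one"
```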
{"language": ["th"], "license": "apache-2.0", "tags": ["thai", "token-classification", "pos", "wikipedia", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u0e2b\u0e25\u0e32\u0e22\u0e2b\u0e31\u0e27\u0e14\u0e35\u0e01\u0e27\u0e48\u0e32\u0e2b\u0e31\u0e27\u0e40\u0e14\u0e35\u0e22\u0e27"}]}
KoichiYasuoka/bert-base-thai-upos
null
[ "transformers", "pytorch", "bert", "token-classification", "thai", "pos", "wikipedia", "dependency-parsing", "th", "dataset:universal_dependencies", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "th" ]
TAGS #transformers #pytorch #bert #token-classification #thai #pos #wikipedia #dependency-parsing #th #dataset-universal_dependencies #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# bert-base-thai-upos ## Model Description This is a BERT model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from bert-base-th-cased. Every word is tagged by UPOS (Universal Part-Of-Speech). ## How to Use or ## See Also esupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
[ "# bert-base-thai-upos", "## Model Description\n\nThis is a BERT model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from bert-base-th-cased. Every word is tagged by UPOS (Universal Part-Of-Speech).", "## How to Use\n\n\n\nor", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
[ "TAGS\n#transformers #pytorch #bert #token-classification #thai #pos #wikipedia #dependency-parsing #th #dataset-universal_dependencies #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# bert-base-thai-upos", "## Model Description\n\nThis is a BERT model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from bert-base-th-cased. Every word is tagged by UPOS (Universal Part-Of-Speech).", "## How to Use\n\n\n\nor", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
fill-mask
transformers
# bert-large-japanese-char-extended

## Model Description

This is a BERT model pre-trained on Japanese Wikipedia texts, derived from [bert-large-japanese-char](https://huggingface.co/cl-tohoku/bert-large-japanese-char). Its character embeddings are extended, via BertTokenizerFast, to cover all 常用漢字/人名用漢字 characters. You can fine-tune `bert-large-japanese-char-extended` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/bert-large-japanese-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/bert-large-japanese-wikipedia-ud-head), and so on.

## How to Use

```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-large-japanese-char-extended")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/bert-large-japanese-char-extended")
```
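As with the base-sized variant above, a hedged fill-mask sketch can exercise the masked-LM head; the sentence is this card's widget example:

```py
# Minimal fill-mask sketch for the large variant.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="KoichiYasuoka/bert-large-japanese-char-extended")
print(unmasker("酸素ボンベを充[MASK]する。")[0])  # top prediction for the masked character
```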
{"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "masked-lm", "wikipedia"], "pipeline_tag": "fill-mask", "mask_token": "[MASK]", "widget": [{"text": "\u9178\u7d20\u30dc\u30f3\u30d9\u3092\u5145[MASK]\u3059\u308b\u3002"}]}
KoichiYasuoka/bert-large-japanese-char-extended
null
[ "transformers", "pytorch", "bert", "fill-mask", "japanese", "masked-lm", "wikipedia", "ja", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja" ]
TAGS #transformers #pytorch #bert #fill-mask #japanese #masked-lm #wikipedia #ja #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
# bert-large-japanese-char-extended ## Model Description This is a BERT model pre-trained on Japanese Wikipedia texts, derived from bert-large-japanese-char. Character-embeddings are enhanced to include all 常用漢字/人名用漢字 characters using BertTokenizerFast. You can fine-tune 'bert-large-japanese-char-extended' for downstream tasks, such as POS-tagging, dependency-parsing, and so on. ## How to Use
[ "# bert-large-japanese-char-extended", "## Model Description\n\nThis is a BERT model pre-trained on Japanese Wikipedia texts, derived from bert-large-japanese-char. Character-embeddings are enhanced to include all 常用漢字/人名用漢字 characters using BertTokenizerFast. You can fine-tune 'bert-large-japanese-char-extended' for downstream tasks, such as POS-tagging, dependency-parsing, and so on.", "## How to Use" ]
[ "TAGS\n#transformers #pytorch #bert #fill-mask #japanese #masked-lm #wikipedia #ja #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# bert-large-japanese-char-extended", "## Model Description\n\nThis is a BERT model pre-trained on Japanese Wikipedia texts, derived from bert-large-japanese-char. Character-embeddings are enhanced to include all 常用漢字/人名用漢字 characters using BertTokenizerFast. You can fine-tune 'bert-large-japanese-char-extended' for downstream tasks, such as POS-tagging, dependency-parsing, and so on.", "## How to Use" ]
token-classification
transformers
# bert-large-japanese-luw-upos ## Model Description This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-large-japanese-char-extended](https://huggingface.co/KoichiYasuoka/bert-large-japanese-char-extended). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/). ## How to Use ```py import torch from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-large-japanese-luw-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-large-japanese-luw-upos") s="国境の長いトンネルを抜けると雪国であった。" p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]] print(list(zip(s,p))) ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/bert-large-japanese-luw-upos") print(nlp("国境の長いトンネルを抜けると雪国であった。")) ``` ## Reference 安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8. ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
{"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "token-classification", "pos", "wikipedia", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u56fd\u5883\u306e\u9577\u3044\u30c8\u30f3\u30cd\u30eb\u3092\u629c\u3051\u308b\u3068\u96ea\u56fd\u3067\u3042\u3063\u305f\u3002"}]}
KoichiYasuoka/bert-large-japanese-luw-upos
null
[ "transformers", "pytorch", "bert", "token-classification", "japanese", "pos", "wikipedia", "dependency-parsing", "ja", "dataset:universal_dependencies", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja" ]
TAGS #transformers #pytorch #bert #token-classification #japanese #pos #wikipedia #dependency-parsing #ja #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
# bert-large-japanese-luw-upos ## Model Description This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from bert-large-japanese-char-extended. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech) and FEATS. ## How to Use or ## Reference 安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8. ## See Also esupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
[ "# bert-large-japanese-luw-upos", "## Model Description\n\nThis is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from bert-large-japanese-char-extended. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech) and FEATS.", "## How to Use\n\n\n\nor", "## Reference\n\n安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
[ "TAGS\n#transformers #pytorch #bert #token-classification #japanese #pos #wikipedia #dependency-parsing #ja #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# bert-large-japanese-luw-upos", "## Model Description\n\nThis is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from bert-large-japanese-char-extended. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech) and FEATS.", "## How to Use\n\n\n\nor", "## Reference\n\n安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
token-classification
transformers
# bert-large-japanese-unidic-luw-upos ## Model Description This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-large-japanese](https://huggingface.co/cl-tohoku/bert-large-japanese). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py import torch from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-large-japanese-unidic-luw-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-large-japanese-unidic-luw-upos") s="国境の長いトンネルを抜けると雪国であった。" t=tokenizer.tokenize(s) p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]] print(list(zip(t,p))) ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/bert-large-japanese-unidic-luw-upos") print(nlp("国境の長いトンネルを抜けると雪国であった。")) ``` [fugashi](https://pypi.org/project/fugashi) and [unidic-lite](https://pypi.org/project/unidic-lite) are required. ## Reference 安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8. ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
{"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "token-classification", "pos", "wikipedia", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u56fd\u5883\u306e\u9577\u3044\u30c8\u30f3\u30cd\u30eb\u3092\u629c\u3051\u308b\u3068\u96ea\u56fd\u3067\u3042\u3063\u305f\u3002"}]}
KoichiYasuoka/bert-large-japanese-unidic-luw-upos
null
[ "transformers", "pytorch", "bert", "token-classification", "japanese", "pos", "wikipedia", "dependency-parsing", "ja", "dataset:universal_dependencies", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja" ]
TAGS #transformers #pytorch #bert #token-classification #japanese #pos #wikipedia #dependency-parsing #ja #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
# bert-large-japanese-unidic-luw-upos ## Model Description This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from bert-large-japanese. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech). ## How to Use or fugashi and unidic-lite are required. ## Reference 安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8. ## See Also esupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
[ "# bert-large-japanese-unidic-luw-upos", "## Model Description\n\nThis is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from bert-large-japanese. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech).", "## How to Use\n\n\n\nor\n\n\n\nfugashi and unidic-lite are required.", "## Reference\n\n安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
[ "TAGS\n#transformers #pytorch #bert #token-classification #japanese #pos #wikipedia #dependency-parsing #ja #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# bert-large-japanese-unidic-luw-upos", "## Model Description\n\nThis is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from bert-large-japanese. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech).", "## How to Use\n\n\n\nor\n\n\n\nfugashi and unidic-lite are required.", "## Reference\n\n安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
token-classification
transformers
# bert-large-japanese-upos ## Model Description This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-large-japanese-char-extended](https://huggingface.co/KoichiYasuoka/bert-large-japanese-char-extended). Every short-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py import torch from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-large-japanese-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-large-japanese-upos") s="国境の長いトンネルを抜けると雪国であった。" p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]] print(list(zip(s,p))) ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/bert-large-japanese-upos") print(nlp("国境の長いトンネルを抜けると雪国であった。")) ``` ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
{"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "token-classification", "pos", "wikipedia", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u56fd\u5883\u306e\u9577\u3044\u30c8\u30f3\u30cd\u30eb\u3092\u629c\u3051\u308b\u3068\u96ea\u56fd\u3067\u3042\u3063\u305f\u3002"}]}
KoichiYasuoka/bert-large-japanese-upos
null
[ "transformers", "pytorch", "bert", "token-classification", "japanese", "pos", "wikipedia", "dependency-parsing", "ja", "dataset:universal_dependencies", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja" ]
TAGS #transformers #pytorch #bert #token-classification #japanese #pos #wikipedia #dependency-parsing #ja #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
# bert-large-japanese-upos ## Model Description This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from bert-large-japanese-char-extended. Every short-unit-word is tagged by UPOS (Universal Part-Of-Speech). ## How to Use or ## See Also esupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
[ "# bert-large-japanese-upos", "## Model Description\n\nThis is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from bert-large-japanese-char-extended. Every short-unit-word is tagged by UPOS (Universal Part-Of-Speech).", "## How to Use\n\n\n\nor", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
[ "TAGS\n#transformers #pytorch #bert #token-classification #japanese #pos #wikipedia #dependency-parsing #ja #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# bert-large-japanese-upos", "## Model Description\n\nThis is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from bert-large-japanese-char-extended. Every short-unit-word is tagged by UPOS (Universal Part-Of-Speech).", "## How to Use\n\n\n\nor", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
token-classification
transformers
# chinese-bert-wwm-ext-upos ## Model Description This is a BERT model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from [chinese-bert-wwm-ext](https://huggingface.co/hfl/chinese-bert-wwm-ext). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/chinese-bert-wwm-ext-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/chinese-bert-wwm-ext-upos") ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/chinese-bert-wwm-ext-upos") ``` ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
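A minimal tagging sketch using the generic `token-classification` pipeline around the load snippet above; the input sentence is an arbitrary example, not from the card:

```py
# Word/UPOS pairs via the standard Transformers pipeline helper.
from transformers import pipeline

nlp = pipeline("token-classification",
               model="KoichiYasuoka/chinese-bert-wwm-ext-upos",
               aggregation_strategy="simple")
print([(t["word"], t["entity_group"]) for t in nlp("我把这本书看完了")])
```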
{"language": ["zh"], "license": "apache-2.0", "tags": ["chinese", "token-classification", "pos", "wikipedia", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification"}
KoichiYasuoka/chinese-bert-wwm-ext-upos
null
[ "transformers", "pytorch", "bert", "token-classification", "chinese", "pos", "wikipedia", "dependency-parsing", "zh", "dataset:universal_dependencies", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "zh" ]
TAGS #transformers #pytorch #bert #token-classification #chinese #pos #wikipedia #dependency-parsing #zh #dataset-universal_dependencies #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# chinese-bert-wwm-ext-upos ## Model Description This is a BERT model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from chinese-bert-wwm-ext. Every word is tagged by UPOS (Universal Part-Of-Speech). ## How to Use or ## See Also esupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
[ "# chinese-bert-wwm-ext-upos", "## Model Description\n\nThis is a BERT model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from chinese-bert-wwm-ext. Every word is tagged by UPOS (Universal Part-Of-Speech).", "## How to Use\n\n\n\nor", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
[ "TAGS\n#transformers #pytorch #bert #token-classification #chinese #pos #wikipedia #dependency-parsing #zh #dataset-universal_dependencies #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# chinese-bert-wwm-ext-upos", "## Model Description\n\nThis is a BERT model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from chinese-bert-wwm-ext. Every word is tagged by UPOS (Universal Part-Of-Speech).", "## How to Use\n\n\n\nor", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
token-classification
transformers
# chinese-roberta-base-upos ## Model Description This is a BERT model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from [chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/chinese-roberta-base-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/chinese-roberta-base-upos") ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/chinese-roberta-base-upos") ``` ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
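The esupar route from the snippet above can be run end to end as in the sibling Japanese cards; the sentence is an arbitrary example, not from the card:

```py
# Minimal esupar sketch, mirroring the usage shown for the
# Japanese UPOS models in this collection.
import esupar

nlp = esupar.load("KoichiYasuoka/chinese-roberta-base-upos")
print(nlp("我把这本书看完了"))  # tokens with UPOS tags and dependency heads
```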
{"language": ["zh"], "license": "apache-2.0", "tags": ["chinese", "token-classification", "pos", "wikipedia", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification"}
KoichiYasuoka/chinese-roberta-base-upos
null
[ "transformers", "pytorch", "bert", "token-classification", "chinese", "pos", "wikipedia", "dependency-parsing", "zh", "dataset:universal_dependencies", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "zh" ]
TAGS #transformers #pytorch #bert #token-classification #chinese #pos #wikipedia #dependency-parsing #zh #dataset-universal_dependencies #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# chinese-roberta-base-upos ## Model Description This is a BERT model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from chinese-roberta-wwm-ext. Every word is tagged by UPOS (Universal Part-Of-Speech). ## How to Use or ## See Also esupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
[ "# chinese-roberta-base-upos", "## Model Description\n\nThis is a BERT model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from chinese-roberta-wwm-ext. Every word is tagged by UPOS (Universal Part-Of-Speech).", "## How to Use\n\n\n\nor", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
[ "TAGS\n#transformers #pytorch #bert #token-classification #chinese #pos #wikipedia #dependency-parsing #zh #dataset-universal_dependencies #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# chinese-roberta-base-upos", "## Model Description\n\nThis is a BERT model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from chinese-roberta-wwm-ext. Every word is tagged by UPOS (Universal Part-Of-Speech).", "## How to Use\n\n\n\nor", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
token-classification
transformers
# chinese-roberta-large-upos ## Model Description This is a BERT model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from [chinese-roberta-wwm-ext-large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/chinese-roberta-large-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/chinese-roberta-large-upos") ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/chinese-roberta-large-upos") ``` ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
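Since the card notes coverage of both simplified and traditional texts, here is a hedged sketch tagging the same arbitrary sentence in both scripts:

```py
# Tag a simplified and a traditional rendering of one sentence.
from transformers import pipeline

nlp = pipeline("token-classification",
               model="KoichiYasuoka/chinese-roberta-large-upos",
               aggregation_strategy="simple")
for s in ["我把这本书看完了", "我把這本書看完了"]:  # simplified vs. traditional
    print([(t["word"], t["entity_group"]) for t in nlp(s)])
```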
{"language": ["zh"], "license": "apache-2.0", "tags": ["chinese", "token-classification", "pos", "wikipedia", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification"}
KoichiYasuoka/chinese-roberta-large-upos
null
[ "transformers", "pytorch", "bert", "token-classification", "chinese", "pos", "wikipedia", "dependency-parsing", "zh", "dataset:universal_dependencies", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "zh" ]
TAGS #transformers #pytorch #bert #token-classification #chinese #pos #wikipedia #dependency-parsing #zh #dataset-universal_dependencies #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# chinese-roberta-large-upos ## Model Description This is a BERT model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from chinese-roberta-wwm-ext-large. Every word is tagged by UPOS (Universal Part-Of-Speech). ## How to Use or ## See Also esupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
[ "# chinese-roberta-large-upos", "## Model Description\n\nThis is a BERT model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from chinese-roberta-wwm-ext-large. Every word is tagged by UPOS (Universal Part-Of-Speech).", "## How to Use\n\n\n\nor", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
[ "TAGS\n#transformers #pytorch #bert #token-classification #chinese #pos #wikipedia #dependency-parsing #zh #dataset-universal_dependencies #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# chinese-roberta-large-upos", "## Model Description\n\nThis is a BERT model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from chinese-roberta-wwm-ext-large. Every word is tagged by UPOS (Universal Part-Of-Speech).", "## How to Use\n\n\n\nor", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
token-classification
transformers
# roberta-base-english-upos ## Model Description This is a RoBERTa model pre-trained with [UD_English](https://universaldependencies.org/en/) for POS-tagging and dependency-parsing, derived from [roberta-base](https://huggingface.co/roberta-base). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-english-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-english-upos") ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/roberta-base-english-upos") ``` ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
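A minimal end-to-end sketch around the load snippet above; the sentence is an arbitrary English example, not from the card:

```py
# Word/UPOS pairs via the standard Transformers pipeline helper.
from transformers import pipeline

nlp = pipeline("token-classification",
               model="KoichiYasuoka/roberta-base-english-upos",
               aggregation_strategy="simple")
print([(t["word"], t["entity_group"]) for t in nlp("It is always darkest before dawn.")])
```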
{"language": ["en"], "license": "cc-by-sa-4.0", "tags": ["english", "token-classification", "pos", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification"}
KoichiYasuoka/roberta-base-english-upos
null
[ "transformers", "pytorch", "roberta", "token-classification", "english", "pos", "dependency-parsing", "en", "dataset:universal_dependencies", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #roberta #token-classification #english #pos #dependency-parsing #en #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
# roberta-base-english-upos ## Model Description This is a RoBERTa model pre-trained with UD_English for POS-tagging and dependency-parsing, derived from roberta-base. Every word is tagged by UPOS (Universal Part-Of-Speech). ## How to Use or ## See Also esupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
[ "# roberta-base-english-upos", "## Model Description\n\nThis is a RoBERTa model pre-trained with UD_English for POS-tagging and dependency-parsing, derived from roberta-base. Every word is tagged by UPOS (Universal Part-Of-Speech).", "## How to Use\n\n\n\nor", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #english #pos #dependency-parsing #en #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# roberta-base-english-upos", "## Model Description\n\nThis is a RoBERTa model pre-trained with UD_English for POS-tagging and dependency-parsing, derived from roberta-base. Every word is tagged by UPOS (Universal Part-Of-Speech).", "## How to Use\n\n\n\nor", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
fill-mask
transformers
# roberta-base-japanese-aozora-char ## Model Description This is a RoBERTa model pre-trained on 青空文庫 texts with character tokenizer. You can fine-tune `roberta-base-japanese-aozora-char` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-japanese-char-luw-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-base-japanese-aozora-ud-head), and so on. ## How to Use ```py from transformers import AutoTokenizer,AutoModelForMaskedLM tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-japanese-aozora-char") model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-japanese-aozora-char") ``` ## Reference 安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.
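A fill-mask sketch using this card's widget sentence ("When you arrive in Japan, visit [MASK]."); since this is the character-tokenized variant, the mask covers a single character:

```py
# Minimal fill-mask sketch with the card's widget sentence.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="KoichiYasuoka/roberta-base-japanese-aozora-char")
for r in unmasker("日本に着いたら[MASK]を訪ねなさい。"):
    print(r["token_str"], round(r["score"], 3))  # candidate character and score
```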
{"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "masked-lm"], "pipeline_tag": "fill-mask", "mask_token": "[MASK]", "widget": [{"text": "\u65e5\u672c\u306b\u7740\u3044\u305f\u3089[MASK]\u3092\u8a2a\u306d\u306a\u3055\u3044\u3002"}]}
KoichiYasuoka/roberta-base-japanese-aozora-char
null
[ "transformers", "pytorch", "roberta", "fill-mask", "japanese", "masked-lm", "ja", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja" ]
TAGS #transformers #pytorch #roberta #fill-mask #japanese #masked-lm #ja #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
# roberta-base-japanese-aozora-char ## Model Description This is a RoBERTa model pre-trained on 青空文庫 texts with character tokenizer. You can fine-tune 'roberta-base-japanese-aozora-char' for downstream tasks, such as POS-tagging, dependency-parsing, and so on. ## How to Use ## Reference 安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.
[ "# roberta-base-japanese-aozora-char", "## Model Description\n\nThis is a RoBERTa model pre-trained on 青空文庫 texts with character tokenizer. You can fine-tune 'roberta-base-japanese-aozora-char' for downstream tasks, such as POS-tagging, dependency-parsing, and so on.", "## How to Use", "## Reference\n\n安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8." ]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #japanese #masked-lm #ja #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# roberta-base-japanese-aozora-char", "## Model Description\n\nThis is a RoBERTa model pre-trained on 青空文庫 texts with character tokenizer. You can fine-tune 'roberta-base-japanese-aozora-char' for downstream tasks, such as POS-tagging, dependency-parsing, and so on.", "## How to Use", "## Reference\n\n安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8." ]
fill-mask
transformers
# roberta-base-japanese-aozora ## Model Description This is a RoBERTa model pre-trained on 青空文庫 texts with [Japanese-LUW-Tokenizer](https://github.com/KoichiYasuoka/Japanese-LUW-Tokenizer). You can fine-tune `roberta-base-japanese-aozora` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-japanese-luw-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-base-japanese-aozora-ud-goeswith), and so on. ## How to Use ```py from transformers import AutoTokenizer,AutoModelForMaskedLM tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-japanese-aozora") model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-japanese-aozora") ``` ## Reference 安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.
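The same fill-mask check works for this LUW-tokenized variant; `top_k` here is a standard pipeline knob rather than anything model-specific, and the sentence is again the widget example:

```py
# Minimal fill-mask sketch, limited to the three top candidates.
from transformers import pipeline

unmasker = pipeline("fill-mask",
                    model="KoichiYasuoka/roberta-base-japanese-aozora",
                    top_k=3)
print(unmasker("日本に着いたら[MASK]を訪ねなさい。"))
```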
{"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "masked-lm"], "pipeline_tag": "fill-mask", "mask_token": "[MASK]", "widget": [{"text": "\u65e5\u672c\u306b\u7740\u3044\u305f\u3089[MASK]\u3092\u8a2a\u306d\u306a\u3055\u3044\u3002"}]}
KoichiYasuoka/roberta-base-japanese-aozora
null
[ "transformers", "pytorch", "roberta", "fill-mask", "japanese", "masked-lm", "ja", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja" ]
TAGS #transformers #pytorch #roberta #fill-mask #japanese #masked-lm #ja #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
# roberta-base-japanese-aozora ## Model Description This is a RoBERTa model pre-trained on 青空文庫 texts with Japanese-LUW-Tokenizer. You can fine-tune 'roberta-base-japanese-aozora' for downstream tasks, such as POS-tagging, dependency-parsing, and so on. ## How to Use ## Reference 安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.
[ "# roberta-base-japanese-aozora", "## Model Description\n\nThis is a RoBERTa model pre-trained on 青空文庫 texts with Japanese-LUW-Tokenizer. You can fine-tune 'roberta-base-japanese-aozora' for downstream tasks, such as POS-tagging, dependency-parsing, and so on.", "## How to Use", "## Reference\n\n安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8." ]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #japanese #masked-lm #ja #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# roberta-base-japanese-aozora", "## Model Description\n\nThis is a RoBERTa model pre-trained on 青空文庫 texts with Japanese-LUW-Tokenizer. You can fine-tune 'roberta-base-japanese-aozora' for downstream tasks, such as POS-tagging, dependency-parsing, and so on.", "## How to Use", "## Reference\n\n安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8." ]
token-classification
transformers
# roberta-base-japanese-char-luw-upos ## Model Description This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [roberta-base-japanese-aozora-char](https://huggingface.co/KoichiYasuoka/roberta-base-japanese-aozora-char). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/). ## How to Use ```py from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-japanese-char-luw-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-japanese-char-luw-upos") pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple") nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)] print(nlp("国境の長いトンネルを抜けると雪国であった。")) ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/roberta-base-japanese-char-luw-upos") print(nlp("国境の長いトンネルを抜けると雪国であった。")) ``` ## Reference 安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8. ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
{"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "token-classification", "pos", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u56fd\u5883\u306e\u9577\u3044\u30c8\u30f3\u30cd\u30eb\u3092\u629c\u3051\u308b\u3068\u96ea\u56fd\u3067\u3042\u3063\u305f\u3002"}]}
KoichiYasuoka/roberta-base-japanese-char-luw-upos
null
[ "transformers", "pytorch", "roberta", "token-classification", "japanese", "pos", "dependency-parsing", "ja", "dataset:universal_dependencies", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja" ]
TAGS #transformers #pytorch #roberta #token-classification #japanese #pos #dependency-parsing #ja #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
# roberta-base-japanese-char-luw-upos ## Model Description This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from roberta-base-japanese-aozora-char. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech) and FEATS. ## How to Use or ## Reference 安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8. ## See Also esupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
[ "# roberta-base-japanese-char-luw-upos", "## Model Description\n\nThis is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from roberta-base-japanese-aozora-char. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech) and FEATS.", "## How to Use\n\n\n\nor", "## Reference\n\n安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #japanese #pos #dependency-parsing #ja #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# roberta-base-japanese-char-luw-upos", "## Model Description\n\nThis is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from roberta-base-japanese-aozora-char. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech) and FEATS.", "## How to Use\n\n\n\nor", "## Reference\n\n安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
token-classification
transformers
# roberta-base-japanese-luw-upos ## Model Description This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [roberta-base-japanese-aozora](https://huggingface.co/KoichiYasuoka/roberta-base-japanese-aozora). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-japanese-luw-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-japanese-luw-upos") pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple") nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)] print(nlp("国境の長いトンネルを抜けると雪国であった。")) ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/roberta-base-japanese-luw-upos") print(nlp("国境の長いトンネルを抜けると雪国であった。")) ``` ## Reference 安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8. ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
{"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "token-classification", "pos", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u56fd\u5883\u306e\u9577\u3044\u30c8\u30f3\u30cd\u30eb\u3092\u629c\u3051\u308b\u3068\u96ea\u56fd\u3067\u3042\u3063\u305f\u3002"}]}
KoichiYasuoka/roberta-base-japanese-luw-upos
null
[ "transformers", "pytorch", "roberta", "token-classification", "japanese", "pos", "dependency-parsing", "ja", "dataset:universal_dependencies", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja" ]
TAGS #transformers #pytorch #roberta #token-classification #japanese #pos #dependency-parsing #ja #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
# roberta-base-japanese-luw-upos ## Model Description This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from roberta-base-japanese-aozora. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech). ## How to Use or ## Reference 安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8. ## See Also esupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
[ "# roberta-base-japanese-luw-upos", "## Model Description\n\nThis is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from roberta-base-japanese-aozora. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech).", "## How to Use\n\n\n\nor", "## Reference\n\n安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #japanese #pos #dependency-parsing #ja #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# roberta-base-japanese-luw-upos", "## Model Description\n\nThis is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from roberta-base-japanese-aozora. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech).", "## How to Use\n\n\n\nor", "## Reference\n\n安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
token-classification
transformers
# roberta-base-thai-char-upos

## Model Description

This is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from [roberta-base-thai-char](https://huggingface.co/KoichiYasuoka/roberta-base-thai-char). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).

## How to Use

```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-char-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-thai-char-upos")
s="หลายหัวดีกว่าหัวเดียว"
t=tokenizer.tokenize(s)
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(t,p)))
```

or

```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-base-thai-char-upos")
print(nlp("หลายหัวดีกว่าหัวเดียว"))
```

## See Also

[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
{"language": ["th"], "license": "apache-2.0", "tags": ["thai", "token-classification", "pos", "wikipedia", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u0e2b\u0e25\u0e32\u0e22\u0e2b\u0e31\u0e27\u0e14\u0e35\u0e01\u0e27\u0e48\u0e32\u0e2b\u0e31\u0e27\u0e40\u0e14\u0e35\u0e22\u0e27"}]}
KoichiYasuoka/roberta-base-thai-char-upos
null
[ "transformers", "pytorch", "roberta", "token-classification", "thai", "pos", "wikipedia", "dependency-parsing", "th", "dataset:universal_dependencies", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "th" ]
TAGS #transformers #pytorch #roberta #token-classification #thai #pos #wikipedia #dependency-parsing #th #dataset-universal_dependencies #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# roberta-base-thai-char-upos ## Model Description This is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from roberta-base-thai-char. Every word is tagged by UPOS (Universal Part-Of-Speech). ## How to Use or ## See Also esupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
[ "# roberta-base-thai-char-upos", "## Model Description\n\nThis is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from roberta-base-thai-char. Every word is tagged by UPOS (Universal Part-Of-Speech).", "## How to Use\n\n\n\nor", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #thai #pos #wikipedia #dependency-parsing #th #dataset-universal_dependencies #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# roberta-base-thai-char-upos", "## Model Description\n\nThis is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from roberta-base-thai-char. Every word is tagged by UPOS (Universal Part-Of-Speech).", "## How to Use\n\n\n\nor", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
fill-mask
transformers
# roberta-base-thai-char

## Model Description

This is a RoBERTa model pre-trained on Thai Wikipedia texts, with character-wise embeddings so that BertTokenizerFast can be used. You can fine-tune `roberta-base-thai-char` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-thai-char-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-base-thai-char-ud-goeswith), and so on.

## How to Use

```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-char")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-thai-char")
```
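As a quick sanity check of the masked-LM head, the sketch below masks one character position and prints the top-5 predictions. This decoding loop is not part of the original card: the sentence, the masked position, and the top-5 readout are all illustrative choices.

```py
import torch
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-char")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-thai-char")
# illustrative sentence; with a character-wise tokenizer each Thai character is one token
inputs=tokenizer("หลายหัวดีกว่าหัวเดียว",return_tensors="pt")
i=inputs["input_ids"].shape[1]//2  # mask the middle token, an arbitrary illustrative choice
inputs["input_ids"][0,i]=tokenizer.mask_token_id
with torch.no_grad():
    logits=model(**inputs).logits
top5=torch.topk(logits[0,i],5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top5))  # candidate characters for the masked slot
```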
{"language": ["th"], "license": "apache-2.0", "tags": ["thai", "masked-lm", "wikipedia"], "pipeline_tag": "fill-mask", "mask_token": "[MASK]"}
KoichiYasuoka/roberta-base-thai-char
null
[ "transformers", "pytorch", "roberta", "fill-mask", "thai", "masked-lm", "wikipedia", "th", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "th" ]
TAGS #transformers #pytorch #roberta #fill-mask #thai #masked-lm #wikipedia #th #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# roberta-base-thai-char ## Model Description This is a RoBERTa model pre-trained on Thai Wikipedia texts with character-wise embeddings to use BertTokenizerFast. You can fine-tune 'roberta-base-thai-char' for downstream tasks, such as POS-tagging, dependency-parsing, and so on. ## How to Use
[ "# roberta-base-thai-char", "## Model Description\n\nThis is a RoBERTa model pre-trained on Thai Wikipedia texts with character-wise embeddings to use BertTokenizerFast. You can fine-tune 'roberta-base-thai-char' for downstream tasks, such as POS-tagging, dependency-parsing, and so on.", "## How to Use" ]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #thai #masked-lm #wikipedia #th #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# roberta-base-thai-char", "## Model Description\n\nThis is a RoBERTa model pre-trained on Thai Wikipedia texts with character-wise embeddings to use BertTokenizerFast. You can fine-tune 'roberta-base-thai-char' for downstream tasks, such as POS-tagging, dependency-parsing, and so on.", "## How to Use" ]
token-classification
transformers
# roberta-base-thai-spm-upos

## Model Description

This is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from [roberta-base-thai-spm](https://huggingface.co/KoichiYasuoka/roberta-base-thai-spm). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).

## How to Use

```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-spm-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-thai-spm-upos")
s="หลายหัวดีกว่าหัวเดียว"
t=tokenizer.tokenize(s)
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(t,p)))
```

or

```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-base-thai-spm-upos")
print(nlp("หลายหัวดีกว่าหัวเดียว"))
```

## See Also

[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
{"language": ["th"], "license": "apache-2.0", "tags": ["thai", "token-classification", "pos", "wikipedia", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u0e2b\u0e25\u0e32\u0e22\u0e2b\u0e31\u0e27\u0e14\u0e35\u0e01\u0e27\u0e48\u0e32\u0e2b\u0e31\u0e27\u0e40\u0e14\u0e35\u0e22\u0e27"}]}
KoichiYasuoka/roberta-base-thai-spm-upos
null
[ "transformers", "pytorch", "roberta", "token-classification", "thai", "pos", "wikipedia", "dependency-parsing", "th", "dataset:universal_dependencies", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "th" ]
TAGS #transformers #pytorch #roberta #token-classification #thai #pos #wikipedia #dependency-parsing #th #dataset-universal_dependencies #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# roberta-base-thai-spm-upos ## Model Description This is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from roberta-base-thai-spm. Every word is tagged by UPOS (Universal Part-Of-Speech). ## How to Use or ## See Also esupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
[ "# roberta-base-thai-spm-upos", "## Model Description\n\nThis is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from roberta-base-thai-spm. Every word is tagged by UPOS (Universal Part-Of-Speech).", "## How to Use\n\n\n\nor", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #thai #pos #wikipedia #dependency-parsing #th #dataset-universal_dependencies #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# roberta-base-thai-spm-upos", "## Model Description\n\nThis is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from roberta-base-thai-spm. Every word is tagged by UPOS (Universal Part-Of-Speech).", "## How to Use\n\n\n\nor", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
fill-mask
transformers
# roberta-base-thai-spm ## Model Description This is a RoBERTa model pre-trained on Thai Wikipedia texts. You can fine-tune `roberta-base-thai-spm` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-thai-spm-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-base-thai-spm-ud-head), and so on. ## How to Use ```py from transformers import AutoTokenizer,AutoModelForMaskedLM tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-spm") model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-thai-spm") ```
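To exercise the masked-LM head beyond loading, here is a minimal sketch that masks one SentencePiece position and prints the top-5 predictions; the sentence and the masked position are illustrative choices, not the card author's prescribed usage.

```py
import torch
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-spm")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-thai-spm")
# illustrative sentence; the SentencePiece tokenizer splits it into subword pieces
inputs=tokenizer("หลายหัวดีกว่าหัวเดียว",return_tensors="pt")
i=inputs["input_ids"].shape[1]//2  # mask the middle piece, an arbitrary illustrative choice
inputs["input_ids"][0,i]=tokenizer.mask_token_id
with torch.no_grad():
    logits=model(**inputs).logits
top5=torch.topk(logits[0,i],5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top5))  # candidate pieces for the masked slot
```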
{"language": ["th"], "license": "apache-2.0", "tags": ["thai", "masked-lm", "wikipedia"], "pipeline_tag": "fill-mask", "mask_token": "[MASK]"}
KoichiYasuoka/roberta-base-thai-spm
null
[ "transformers", "pytorch", "roberta", "fill-mask", "thai", "masked-lm", "wikipedia", "th", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "th" ]
TAGS #transformers #pytorch #roberta #fill-mask #thai #masked-lm #wikipedia #th #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# roberta-base-thai-spm ## Model Description This is a RoBERTa model pre-trained on Thai Wikipedia texts. You can fine-tune 'roberta-base-thai-spm' for downstream tasks, such as POS-tagging, dependency-parsing, and so on. ## How to Use
[ "# roberta-base-thai-spm", "## Model Description\n\nThis is a RoBERTa model pre-trained on Thai Wikipedia texts. You can fine-tune 'roberta-base-thai-spm' for downstream tasks, such as POS-tagging, dependency-parsing, and so on.", "## How to Use" ]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #thai #masked-lm #wikipedia #th #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# roberta-base-thai-spm", "## Model Description\n\nThis is a RoBERTa model pre-trained on Thai Wikipedia texts. You can fine-tune 'roberta-base-thai-spm' for downstream tasks, such as POS-tagging, dependency-parsing, and so on.", "## How to Use" ]
token-classification
transformers
# roberta-base-thai-syllable-upos

## Model Description

This is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from [roberta-base-thai-syllable](https://huggingface.co/KoichiYasuoka/roberta-base-thai-syllable). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).

## How to Use

```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-syllable-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-thai-syllable-upos")
s="หลายหัวดีกว่าหัวเดียว"
t=tokenizer.tokenize(s)
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(t,p)))
```

or

```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-base-thai-syllable-upos")
print(nlp("หลายหัวดีกว่าหัวเดียว"))
```

## See Also

[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
{"language": ["th"], "license": "apache-2.0", "tags": ["thai", "token-classification", "pos", "wikipedia", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u0e2b\u0e25\u0e32\u0e22\u0e2b\u0e31\u0e27\u0e14\u0e35\u0e01\u0e27\u0e48\u0e32\u0e2b\u0e31\u0e27\u0e40\u0e14\u0e35\u0e22\u0e27"}]}
KoichiYasuoka/roberta-base-thai-syllable-upos
null
[ "transformers", "pytorch", "roberta", "token-classification", "thai", "pos", "wikipedia", "dependency-parsing", "th", "dataset:universal_dependencies", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "th" ]
TAGS #transformers #pytorch #roberta #token-classification #thai #pos #wikipedia #dependency-parsing #th #dataset-universal_dependencies #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# roberta-base-thai-syllable-upos ## Model Description This is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from roberta-base-thai-syllable. Every word is tagged by UPOS (Universal Part-Of-Speech). ## How to Use or ## See Also esupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
[ "# roberta-base-thai-syllable-upos", "## Model Description\n\nThis is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from roberta-base-thai-syllable. Every word is tagged by UPOS (Universal Part-Of-Speech).", "## How to Use\n\n\n\nor", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #thai #pos #wikipedia #dependency-parsing #th #dataset-universal_dependencies #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# roberta-base-thai-syllable-upos", "## Model Description\n\nThis is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from roberta-base-thai-syllable. Every word is tagged by UPOS (Universal Part-Of-Speech).", "## How to Use\n\n\n\nor", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
fill-mask
transformers
# roberta-base-thai-syllable ## Model Description This is a RoBERTa model pre-trained on Thai Wikipedia texts, derived from [wangchanberta-base-wiki-syllable](https://huggingface.co/airesearch/wangchanberta-base-wiki-syllable). Character-embeddings are modified to use BertTokenizerFast. You can fine-tune `roberta-base-thai-syllable` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-thai-syllable-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-base-thai-syllable-ud-goeswith), and so on. ## How to Use ```py from transformers import AutoTokenizer,AutoModelForMaskedLM tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-syllable") model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-thai-syllable") ```
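A minimal fill-mask sketch, using the transformers `pipeline` helper rather than the card's manual loading; the example sentence (with its `<mask>` token) is taken from the widget declared in this card's metadata.

```py
from transformers import pipeline
unmasker=pipeline("fill-mask",model="KoichiYasuoka/roberta-base-thai-syllable")
# sentence from the widget example in the metadata below
for r in unmasker("แผนกนี้กำลัง<mask>กับความท้าทายใหม่"):
    print(r["token_str"],round(r["score"],3))  # candidate syllables with scores
```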
{"language": ["th"], "license": "apache-2.0", "tags": ["thai", "masked-lm", "wikipedia"], "pipeline_tag": "fill-mask", "mask_token": "<mask>", "widget": [{"text": "\u0e41\u0e1c\u0e19\u0e01\u0e19\u0e35\u0e49\u0e01\u0e33\u0e25\u0e31\u0e07<mask>\u0e01\u0e31\u0e1a\u0e04\u0e27\u0e32\u0e21\u0e17\u0e49\u0e32\u0e17\u0e32\u0e22\u0e43\u0e2b\u0e21\u0e48"}]}
KoichiYasuoka/roberta-base-thai-syllable
null
[ "transformers", "pytorch", "roberta", "fill-mask", "thai", "masked-lm", "wikipedia", "th", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "th" ]
TAGS #transformers #pytorch #roberta #fill-mask #thai #masked-lm #wikipedia #th #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# roberta-base-thai-syllable ## Model Description This is a RoBERTa model pre-trained on Thai Wikipedia texts, derived from wangchanberta-base-wiki-syllable. Character-embeddings are modified to use BertTokenizerFast. You can fine-tune 'roberta-base-thai-syllable' for downstream tasks, such as POS-tagging, dependency-parsing, and so on. ## How to Use
[ "# roberta-base-thai-syllable", "## Model Description\n\nThis is a RoBERTa model pre-trained on Thai Wikipedia texts, derived from wangchanberta-base-wiki-syllable. Character-embeddings are modified to use BertTokenizerFast. You can fine-tune 'roberta-base-thai-syllable' for downstream tasks, such as POS-tagging, dependency-parsing, and so on.", "## How to Use" ]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #thai #masked-lm #wikipedia #th #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# roberta-base-thai-syllable", "## Model Description\n\nThis is a RoBERTa model pre-trained on Thai Wikipedia texts, derived from wangchanberta-base-wiki-syllable. Character-embeddings are modified to use BertTokenizerFast. You can fine-tune 'roberta-base-thai-syllable' for downstream tasks, such as POS-tagging, dependency-parsing, and so on.", "## How to Use" ]
fill-mask
transformers
# roberta-classical-chinese-base-char

## Model Description

This is a RoBERTa model pre-trained on Classical Chinese texts, derived from [GuwenBERT-base](https://huggingface.co/ethanyt/guwenbert-base). Character-embeddings are enhanced to cover both traditional and simplified characters. You can fine-tune `roberta-classical-chinese-base-char` for downstream tasks, such as [sentence-segmentation](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-sentence-segmentation), [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-ud-goeswith), and so on.

## How to Use

```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-char")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-char")
```

## See Also

[SuPar-Kanbun](https://github.com/KoichiYasuoka/SuPar-Kanbun): Tokenizer POS-tagger and Dependency-parser for Classical Chinese
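A minimal fill-mask sketch; the `pipeline` helper is a choice of this sketch rather than the card's, and the example sentence 孟子[MASK]梁惠王 is the widget text from this card's metadata.

```py
from transformers import pipeline
unmasker=pipeline("fill-mask",model="KoichiYasuoka/roberta-classical-chinese-base-char")
# widget sentence from the metadata: "Mencius [MASK] King Hui of Liang"
for r in unmasker("孟子[MASK]梁惠王"):
    print(r["token_str"],round(r["score"],3))  # candidate characters with scores
```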
{"language": ["lzh"], "license": "apache-2.0", "tags": ["classical chinese", "literary chinese", "ancient chinese", "masked-lm"], "pipeline_tag": "fill-mask", "mask_token": "[MASK]", "widget": [{"text": "\u5b5f\u5b50[MASK]\u6881\u60e0\u738b"}]}
KoichiYasuoka/roberta-classical-chinese-base-char
null
[ "transformers", "pytorch", "roberta", "fill-mask", "classical chinese", "literary chinese", "ancient chinese", "masked-lm", "lzh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "lzh" ]
TAGS #transformers #pytorch #roberta #fill-mask #classical chinese #literary chinese #ancient chinese #masked-lm #lzh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# roberta-classical-chinese-base-char ## Model Description This is a RoBERTa model pre-trained on Classical Chinese texts, derived from GuwenBERT-base. Character-embeddings are enhanced into traditional/simplified characters. You can fine-tune 'roberta-classical-chinese-base-char' for downstream tasks, such as sentence-segmentation, POS-tagging, dependency-parsing, and so on. ## How to Use ## See Also SuPar-Kanbun: Tokenizer POS-tagger and Dependency-parser for Classical Chinese
[ "# roberta-classical-chinese-base-char", "## Model Description\n\nThis is a RoBERTa model pre-trained on Classical Chinese texts, derived from GuwenBERT-base. Character-embeddings are enhanced into traditional/simplified characters. You can fine-tune 'roberta-classical-chinese-base-char' for downstream tasks, such as sentence-segmentation, POS-tagging, dependency-parsing, and so on.", "## How to Use", "## See Also\n\nSuPar-Kanbun: Tokenizer POS-tagger and Dependency-parser for Classical Chinese" ]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #classical chinese #literary chinese #ancient chinese #masked-lm #lzh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# roberta-classical-chinese-base-char", "## Model Description\n\nThis is a RoBERTa model pre-trained on Classical Chinese texts, derived from GuwenBERT-base. Character-embeddings are enhanced into traditional/simplified characters. You can fine-tune 'roberta-classical-chinese-base-char' for downstream tasks, such as sentence-segmentation, POS-tagging, dependency-parsing, and so on.", "## How to Use", "## See Also\n\nSuPar-Kanbun: Tokenizer POS-tagger and Dependency-parser for Classical Chinese" ]
token-classification
transformers
# roberta-classical-chinese-base-sentence-segmentation

## Model Description

This is a RoBERTa model pre-trained on Classical Chinese texts for sentence segmentation, derived from [roberta-classical-chinese-base-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-char). Every segmented sentence begins with token-class "B" and ends with token-class "E" (except for a single-character sentence, which gets token-class "S").

## How to Use

```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-sentence-segmentation")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-sentence-segmentation")
s="子曰學而時習之不亦説乎有朋自遠方來不亦樂乎人不知而不慍不亦君子乎"
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print("".join(c+"。" if q=="E" or q=="S" else c for c,q in zip(s,p)))
```

## Reference

Koichi Yasuoka: [Sentence Segmentation of Classical Chinese Texts Using Transformers and BERT/RoBERTa Models](http://hdl.handle.net/2433/266539), IPSJ Symposium Series, Vol.2021, No.1 (December 2021), pp.104-109.
{"language": ["lzh"], "license": "apache-2.0", "tags": ["classical chinese", "literary chinese", "ancient chinese", "sentence segmentation", "token-classification"], "pipeline_tag": "token-classification", "widget": [{"text": "\u5b50\u66f0\u5b78\u800c\u6642\u7fd2\u4e4b\u4e0d\u4ea6\u8aac\u4e4e\u6709\u670b\u81ea\u9060\u65b9\u4f86\u4e0d\u4ea6\u6a02\u4e4e\u4eba\u4e0d\u77e5\u800c\u4e0d\u614d\u4e0d\u4ea6\u541b\u5b50\u4e4e"}]}
KoichiYasuoka/roberta-classical-chinese-base-sentence-segmentation
null
[ "transformers", "pytorch", "roberta", "token-classification", "classical chinese", "literary chinese", "ancient chinese", "sentence segmentation", "lzh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "lzh" ]
TAGS #transformers #pytorch #roberta #token-classification #classical chinese #literary chinese #ancient chinese #sentence segmentation #lzh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# roberta-classical-chinese-base-sentence-segmentation ## Model Description This is a RoBERTa model pre-trained on Classical Chinese texts for sentence segmentation, derived from roberta-classical-chinese-base-char. Every segmented sentence begins with token-class "B" and ends with token-class "E" (except for single-character sentence with token-class "S"). ## How to Use ## Reference Koichi Yasuoka: Sentence Segmentation of Classical Chinese Texts Using Transformers and BERT/RoBERTa Models, IPSJ Symposium Series, Vol.2021, No.1 (December 2021), pp.104-109.
[ "# roberta-classical-chinese-base-sentence-segmentation", "## Model Description\n\nThis is a RoBERTa model pre-trained on Classical Chinese texts for sentence segmentation, derived from roberta-classical-chinese-base-char. Every segmented sentence begins with token-class \"B\" and ends with token-class \"E\" (except for single-character sentence with token-class \"S\").", "## How to Use", "## Reference\n\nKoichi Yasuoka: Sentence Segmentation of Classical Chinese Texts Using Transformers and BERT/RoBERTa Models, IPSJ Symposium Series, Vol.2021, No.1 (December 2021), pp.104-109." ]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #classical chinese #literary chinese #ancient chinese #sentence segmentation #lzh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# roberta-classical-chinese-base-sentence-segmentation", "## Model Description\n\nThis is a RoBERTa model pre-trained on Classical Chinese texts for sentence segmentation, derived from roberta-classical-chinese-base-char. Every segmented sentence begins with token-class \"B\" and ends with token-class \"E\" (except for single-character sentence with token-class \"S\").", "## How to Use", "## Reference\n\nKoichi Yasuoka: Sentence Segmentation of Classical Chinese Texts Using Transformers and BERT/RoBERTa Models, IPSJ Symposium Series, Vol.2021, No.1 (December 2021), pp.104-109." ]
token-classification
transformers
# roberta-classical-chinese-base-upos

## Model Description

This is a RoBERTa model pre-trained on Classical Chinese texts for POS-tagging and dependency-parsing, derived from [roberta-classical-chinese-base-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-char). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).

## How to Use

```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-upos")
```

or

```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-classical-chinese-base-upos")
```

A tagging example in the style of the Thai UPOS cards above is sketched after this card.

## Reference

Koichi Yasuoka: [Universal Dependencies Treebank of the Four Books in Classical Chinese](http://hdl.handle.net/2433/245217), DADH2019: 10th International Conference of Digital Archives and Digital Humanities (December 2019), pp.20-28.

## See Also

[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
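The snippet above only loads the tagger; a decoding loop adapted from the Thai UPOS cards earlier in this collection might look as follows. The sentence is the opening of this card's widget example, and the per-character zip relies on this model's character-level tokenization.

```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-upos")
s="子曰學而時習之不亦説乎"  # opening of the widget example in the metadata
t=tokenizer.tokenize(s)
# strip the special tokens at both ends, then pair each character with its predicted label
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(t,p)))
```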
{"language": ["lzh"], "license": "apache-2.0", "tags": ["classical chinese", "literary chinese", "ancient chinese", "token-classification", "pos", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u5b50\u66f0\u5b78\u800c\u6642\u7fd2\u4e4b\u4e0d\u4ea6\u8aac\u4e4e\u6709\u670b\u81ea\u9060\u65b9\u4f86\u4e0d\u4ea6\u6a02\u4e4e\u4eba\u4e0d\u77e5\u800c\u4e0d\u614d\u4e0d\u4ea6\u541b\u5b50\u4e4e"}]}
KoichiYasuoka/roberta-classical-chinese-base-upos
null
[ "transformers", "pytorch", "roberta", "token-classification", "classical chinese", "literary chinese", "ancient chinese", "pos", "dependency-parsing", "lzh", "dataset:universal_dependencies", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "lzh" ]
TAGS #transformers #pytorch #roberta #token-classification #classical chinese #literary chinese #ancient chinese #pos #dependency-parsing #lzh #dataset-universal_dependencies #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# roberta-classical-chinese-base-upos ## Model Description This is a RoBERTa model pre-trained on Classical Chinese texts for POS-tagging and dependency-parsing, derived from roberta-classical-chinese-base-char. Every word is tagged by UPOS (Universal Part-Of-Speech) and FEATS. ## How to Use or ## Reference Koichi Yasuoka: Universal Dependencies Treebank of the Four Books in Classical Chinese, DADH2019: 10th International Conference of Digital Archives and Digital Humanities (December 2019), pp.20-28. ## See Also esupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
[ "# roberta-classical-chinese-base-upos", "## Model Description\n\nThis is a RoBERTa model pre-trained on Classical Chinese texts for POS-tagging and dependency-parsing, derived from roberta-classical-chinese-base-char. Every word is tagged by UPOS (Universal Part-Of-Speech) and FEATS.", "## How to Use\n\n\n\nor", "## Reference\n\nKoichi Yasuoka: Universal Dependencies Treebank of the Four Books in Classical Chinese, DADH2019: 10th International Conference of Digital Archives and Digital Humanities (December 2019), pp.20-28.", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #classical chinese #literary chinese #ancient chinese #pos #dependency-parsing #lzh #dataset-universal_dependencies #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# roberta-classical-chinese-base-upos", "## Model Description\n\nThis is a RoBERTa model pre-trained on Classical Chinese texts for POS-tagging and dependency-parsing, derived from roberta-classical-chinese-base-char. Every word is tagged by UPOS (Universal Part-Of-Speech) and FEATS.", "## How to Use\n\n\n\nor", "## Reference\n\nKoichi Yasuoka: Universal Dependencies Treebank of the Four Books in Classical Chinese, DADH2019: 10th International Conference of Digital Archives and Digital Humanities (December 2019), pp.20-28.", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
fill-mask
transformers
# roberta-classical-chinese-large-char

## Model Description

This is a RoBERTa model pre-trained on Classical Chinese texts, derived from [GuwenBERT-large](https://huggingface.co/ethanyt/guwenbert-large). Character-embeddings are enhanced to cover both traditional and simplified characters. You can fine-tune `roberta-classical-chinese-large-char` for downstream tasks, such as [sentence-segmentation](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-large-sentence-segmentation), [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-large-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-large-ud-goeswith), and so on.

## How to Use

```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-char")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-char")
```

## See Also

[SuPar-Kanbun](https://github.com/KoichiYasuoka/SuPar-Kanbun): Tokenizer POS-tagger and Dependency-parser for Classical Chinese
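As with the base model, a minimal fill-mask sketch; the `pipeline` helper is a choice of this sketch, and the example sentence 孟子[MASK]梁惠王 is the widget text from this card's metadata.

```py
from transformers import pipeline
unmasker=pipeline("fill-mask",model="KoichiYasuoka/roberta-classical-chinese-large-char")
# widget sentence from the metadata: "Mencius [MASK] King Hui of Liang"
for r in unmasker("孟子[MASK]梁惠王"):
    print(r["token_str"],round(r["score"],3))  # candidate characters with scores
```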
{"language": ["lzh"], "license": "apache-2.0", "tags": ["classical chinese", "literary chinese", "ancient chinese", "masked-lm"], "pipeline_tag": "fill-mask", "mask_token": "[MASK]", "widget": [{"text": "\u5b5f\u5b50[MASK]\u6881\u60e0\u738b"}]}
KoichiYasuoka/roberta-classical-chinese-large-char
null
[ "transformers", "pytorch", "roberta", "fill-mask", "classical chinese", "literary chinese", "ancient chinese", "masked-lm", "lzh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "lzh" ]
TAGS #transformers #pytorch #roberta #fill-mask #classical chinese #literary chinese #ancient chinese #masked-lm #lzh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# roberta-classical-chinese-large-char ## Model Description This is a RoBERTa model pre-trained on Classical Chinese texts, derived from GuwenBERT-large. Character-embeddings are enhanced into traditional/simplified characters. You can fine-tune 'roberta-classical-chinese-large-char' for downstream tasks, such as sentence-segmentation, POS-tagging, dependency-parsing, and so on. ## How to Use ## See Also SuPar-Kanbun: Tokenizer POS-tagger and Dependency-parser for Classical Chinese
[ "# roberta-classical-chinese-large-char", "## Model Description\n\nThis is a RoBERTa model pre-trained on Classical Chinese texts, derived from GuwenBERT-large. Character-embeddings are enhanced into traditional/simplified characters. You can fine-tune 'roberta-classical-chinese-large-char' for downstream tasks, such as sentence-segmentation, POS-tagging, dependency-parsing, and so on.", "## How to Use", "## See Also\n\nSuPar-Kanbun: Tokenizer POS-tagger and Dependency-parser for Classical Chinese" ]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #classical chinese #literary chinese #ancient chinese #masked-lm #lzh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# roberta-classical-chinese-large-char", "## Model Description\n\nThis is a RoBERTa model pre-trained on Classical Chinese texts, derived from GuwenBERT-large. Character-embeddings are enhanced into traditional/simplified characters. You can fine-tune 'roberta-classical-chinese-large-char' for downstream tasks, such as sentence-segmentation, POS-tagging, dependency-parsing, and so on.", "## How to Use", "## See Also\n\nSuPar-Kanbun: Tokenizer POS-tagger and Dependency-parser for Classical Chinese" ]
token-classification
transformers
# roberta-classical-chinese-large-sentence-segmentation

## Model Description

This is a RoBERTa model pre-trained on Classical Chinese texts for sentence segmentation, derived from [roberta-classical-chinese-large-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-large-char). Every segmented sentence begins with token-class "B" and ends with token-class "E" (except for a single-character sentence, which gets token-class "S").

## How to Use

```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-sentence-segmentation")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-sentence-segmentation")
s="子曰學而時習之不亦説乎有朋自遠方來不亦樂乎人不知而不慍不亦君子乎"
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print("".join(c+"。" if q=="E" or q=="S" else c for c,q in zip(s,p)))
```

## Reference

Koichi Yasuoka: [Sentence Segmentation of Classical Chinese Texts Using Transformers and BERT/RoBERTa Models](http://hdl.handle.net/2433/266539), IPSJ Symposium Series, Vol.2021, No.1 (December 2021), pp.104-109.
{"language": ["lzh"], "license": "apache-2.0", "tags": ["classical chinese", "literary chinese", "ancient chinese", "sentence segmentation", "token-classification"], "pipeline_tag": "token-classification", "widget": [{"text": "\u5b50\u66f0\u5b78\u800c\u6642\u7fd2\u4e4b\u4e0d\u4ea6\u8aac\u4e4e\u6709\u670b\u81ea\u9060\u65b9\u4f86\u4e0d\u4ea6\u6a02\u4e4e\u4eba\u4e0d\u77e5\u800c\u4e0d\u614d\u4e0d\u4ea6\u541b\u5b50\u4e4e"}]}
KoichiYasuoka/roberta-classical-chinese-large-sentence-segmentation
null
[ "transformers", "pytorch", "roberta", "token-classification", "classical chinese", "literary chinese", "ancient chinese", "sentence segmentation", "lzh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "lzh" ]
TAGS #transformers #pytorch #roberta #token-classification #classical chinese #literary chinese #ancient chinese #sentence segmentation #lzh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# roberta-classical-chinese-large-sentence-segmentation ## Model Description This is a RoBERTa model pre-trained on Classical Chinese texts for sentence segmentation, derived from roberta-classical-chinese-large-char. Every segmented sentence begins with token-class "B" and ends with token-class "E" (except for single-character sentence with token-class "S"). ## How to Use ## Reference Koichi Yasuoka: Sentence Segmentation of Classical Chinese Texts Using Transformers and BERT/RoBERTa Models, IPSJ Symposium Series, Vol.2021, No.1 (December 2021), pp.104-109.
[ "# roberta-classical-chinese-large-sentence-segmentation", "## Model Description\n\nThis is a RoBERTa model pre-trained on Classical Chinese texts for sentence segmentation, derived from roberta-classical-chinese-large-char. Every segmented sentence begins with token-class \"B\" and ends with token-class \"E\" (except for single-character sentence with token-class \"S\").", "## How to Use", "## Reference\n\nKoichi Yasuoka: Sentence Segmentation of Classical Chinese Texts Using Transformers and BERT/RoBERTa Models, IPSJ Symposium Series, Vol.2021, No.1 (December 2021), pp.104-109." ]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #classical chinese #literary chinese #ancient chinese #sentence segmentation #lzh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# roberta-classical-chinese-large-sentence-segmentation", "## Model Description\n\nThis is a RoBERTa model pre-trained on Classical Chinese texts for sentence segmentation, derived from roberta-classical-chinese-large-char. Every segmented sentence begins with token-class \"B\" and ends with token-class \"E\" (except for single-character sentence with token-class \"S\").", "## How to Use", "## Reference\n\nKoichi Yasuoka: Sentence Segmentation of Classical Chinese Texts Using Transformers and BERT/RoBERTa Models, IPSJ Symposium Series, Vol.2021, No.1 (December 2021), pp.104-109." ]
token-classification
transformers
# roberta-classical-chinese-large-upos ## Model Description This is a RoBERTa model pre-trained on Classical Chinese texts for POS-tagging and dependency-parsing, derived from [roberta-classical-chinese-large-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-large-char). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/). ## How to Use ```py from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-upos") ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/roberta-classical-chinese-large-upos") ``` ## Reference Koichi Yasuoka: [Universal Dependencies Treebank of the Four Books in Classical Chinese](http://hdl.handle.net/2433/245217), DADH2019: 10th International Conference of Digital Archives and Digital Humanities (December 2019), pp.20-28. ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
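As with the base UPOS model, the card stops at loading; a decoding loop adapted from the Thai UPOS cards earlier in this collection might look as follows, with the sentence taken from the opening of this card's widget example.

```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-upos")
s="子曰學而時習之不亦説乎"  # opening of the widget example in the metadata
t=tokenizer.tokenize(s)
# strip the special tokens at both ends, then pair each character with its predicted label
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(t,p)))
```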
{"language": ["lzh"], "license": "apache-2.0", "tags": ["classical chinese", "literary chinese", "ancient chinese", "token-classification", "pos", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u5b50\u66f0\u5b78\u800c\u6642\u7fd2\u4e4b\u4e0d\u4ea6\u8aac\u4e4e\u6709\u670b\u81ea\u9060\u65b9\u4f86\u4e0d\u4ea6\u6a02\u4e4e\u4eba\u4e0d\u77e5\u800c\u4e0d\u614d\u4e0d\u4ea6\u541b\u5b50\u4e4e"}]}
KoichiYasuoka/roberta-classical-chinese-large-upos
null
[ "transformers", "pytorch", "roberta", "token-classification", "classical chinese", "literary chinese", "ancient chinese", "pos", "dependency-parsing", "lzh", "dataset:universal_dependencies", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "lzh" ]
TAGS #transformers #pytorch #roberta #token-classification #classical chinese #literary chinese #ancient chinese #pos #dependency-parsing #lzh #dataset-universal_dependencies #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# roberta-classical-chinese-large-upos ## Model Description This is a RoBERTa model pre-trained on Classical Chinese texts for POS-tagging and dependency-parsing, derived from roberta-classical-chinese-large-char. Every word is tagged by UPOS (Universal Part-Of-Speech) and FEATS. ## How to Use or ## Reference Koichi Yasuoka: Universal Dependencies Treebank of the Four Books in Classical Chinese, DADH2019: 10th International Conference of Digital Archives and Digital Humanities (December 2019), pp.20-28. ## See Also esupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
[ "# roberta-classical-chinese-large-upos", "## Model Description\n\nThis is a RoBERTa model pre-trained on Classical Chinese texts for POS-tagging and dependency-parsing, derived from roberta-classical-chinese-large-char. Every word is tagged by UPOS (Universal Part-Of-Speech) and FEATS.", "## How to Use\n\n\nor", "## Reference\n\nKoichi Yasuoka: Universal Dependencies Treebank of the Four Books in Classical Chinese, DADH2019: 10th International Conference of Digital Archives and Digital Humanities (December 2019), pp.20-28.", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #classical chinese #literary chinese #ancient chinese #pos #dependency-parsing #lzh #dataset-universal_dependencies #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# roberta-classical-chinese-large-upos", "## Model Description\n\nThis is a RoBERTa model pre-trained on Classical Chinese texts for POS-tagging and dependency-parsing, derived from roberta-classical-chinese-large-char. Every word is tagged by UPOS (Universal Part-Of-Speech) and FEATS.", "## How to Use\n\n\nor", "## Reference\n\nKoichi Yasuoka: Universal Dependencies Treebank of the Four Books in Classical Chinese, DADH2019: 10th International Conference of Digital Archives and Digital Humanities (December 2019), pp.20-28.", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
token-classification
transformers
# roberta-large-english-upos ## Model Description This is a RoBERTa model pre-trained with [UD_English](https://universaldependencies.org/en/) for POS-tagging and dependency-parsing, derived from [roberta-large](https://huggingface.co/roberta-large). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-english-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-large-english-upos") ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/roberta-large-english-upos") ``` ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
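For a quick tagging call, the `TokenClassificationPipeline` pattern from the Japanese LUW cards above carries over directly; the English sentence here is only an illustration.

```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-english-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-large-english-upos")
nlp=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
# each aggregated span carries its predicted UPOS tag in "entity_group"
print([(r["word"],r["entity_group"]) for r in nlp("My friend lives in London.")])
```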
{"language": ["en"], "license": "cc-by-sa-4.0", "tags": ["english", "token-classification", "pos", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification"}
KoichiYasuoka/roberta-large-english-upos
null
[ "transformers", "pytorch", "roberta", "token-classification", "english", "pos", "dependency-parsing", "en", "dataset:universal_dependencies", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #roberta #token-classification #english #pos #dependency-parsing #en #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
# roberta-large-english-upos ## Model Description This is a RoBERTa model pre-trained with UD_English for POS-tagging and dependency-parsing, derived from roberta-large. Every word is tagged by UPOS (Universal Part-Of-Speech). ## How to Use or ## See Also esupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
[ "# roberta-large-english-upos", "## Model Description\n\nThis is a RoBERTa model pre-trained with UD_English for POS-tagging and dependency-parsing, derived from roberta-large. Every word is tagged by UPOS (Universal Part-Of-Speech).", "## How to Use\n\n\n\nor", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #english #pos #dependency-parsing #en #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# roberta-large-english-upos", "## Model Description\n\nThis is a RoBERTa model pre-trained with UD_English for POS-tagging and dependency-parsing, derived from roberta-large. Every word is tagged by UPOS (Universal Part-Of-Speech).", "## How to Use\n\n\n\nor", "## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models" ]
fill-mask
transformers
# roberta-large-japanese-aozora-char ## Model Description This is a RoBERTa model pre-trained on 青空文庫 texts with character tokenizer. You can fine-tune `roberta-large-japanese-aozora-char` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-large-japanese-char-luw-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-large-japanese-aozora-ud-head), and so on. ## How to Use ```py from transformers import AutoTokenizer,AutoModelForMaskedLM tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-japanese-aozora-char") model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-large-japanese-aozora-char") ``` ## Reference 安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.
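A minimal fill-mask sketch to exercise the pre-training head before any fine-tuning; the `pipeline` call is a choice of this sketch, and the example sentence is the widget text from this card's metadata.

```py
from transformers import pipeline
unmasker=pipeline("fill-mask",model="KoichiYasuoka/roberta-large-japanese-aozora-char")
# widget sentence from the metadata: "When you arrive in Japan, visit [MASK]."
for r in unmasker("日本に着いたら[MASK]を訪ねなさい。"):
    print(r["token_str"],round(r["score"],3))  # candidate characters with scores
```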
{"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "masked-lm"], "pipeline_tag": "fill-mask", "mask_token": "[MASK]", "widget": [{"text": "\u65e5\u672c\u306b\u7740\u3044\u305f\u3089[MASK]\u3092\u8a2a\u306d\u306a\u3055\u3044\u3002"}]}
KoichiYasuoka/roberta-large-japanese-aozora-char
null
[ "transformers", "pytorch", "roberta", "fill-mask", "japanese", "masked-lm", "ja", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja" ]
TAGS #transformers #pytorch #roberta #fill-mask #japanese #masked-lm #ja #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
# roberta-large-japanese-aozora-char ## Model Description This is a RoBERTa model pre-trained on 青空文庫 texts with character tokenizer. You can fine-tune 'roberta-large-japanese-aozora-char' for downstream tasks, such as POS-tagging, dependency-parsing, and so on. ## How to Use ## Reference 安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.
[ "# roberta-large-japanese-aozora-char", "## Model Description\n\nThis is a RoBERTa model pre-trained on 青空文庫 texts with character tokenizer. You can fine-tune 'roberta-large-japanese-aozora-char' for downstream tasks, such as POS-tagging, dependency-parsing, and so on.", "## How to Use", "## Reference\n\n安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8." ]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #japanese #masked-lm #ja #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# roberta-large-japanese-aozora-char", "## Model Description\n\nThis is a RoBERTa model pre-trained on 青空文庫 texts with character tokenizer. You can fine-tune 'roberta-large-japanese-aozora-char' for downstream tasks, such as POS-tagging, dependency-parsing, and so on.", "## How to Use", "## Reference\n\n安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8." ]