Dataset schema (one row per model card):

| Column | Type | Range |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1–900k |
| metadata | stringlengths | 2–438k |
| id | stringlengths | 5–122 |
| last_modified | null | – |
| tags | sequencelengths | 1–1.84k |
| sha | null | – |
| created_at | stringlengths | 25–25 |
| arxiv | sequencelengths | 0–201 |
| languages | sequencelengths | 0–1.83k |
| tags_str | stringlengths | 17–9.34k |
| text_str | stringlengths | 0–389k |
| text_lists | sequencelengths | 0–722 |
| processed_texts | sequencelengths | 1–723 |
| tokens_length | sequencelengths | 1–723 |
| input_texts | sequencelengths | 1–1 |
fill-mask | transformers |
# CamemBERT pretrained on french trade directories from the XIXth century
This model is part of the material of the paper
> Abadie, N., Carlinet, E., Chazalon, J., Duménieu, B. (2022). A
> Benchmark of Named Entity Recognition Approaches in Historical
> Documents. Application to 19th Century French Directories. In: Uchida,
> S., Barney, E., Eglin, V. (eds) Document Analysis Systems. DAS 2022.
> Lecture Notes in Computer Science, vol 13237. Springer, Cham.
> https://doi.org/10.1007/978-3-031-06555-2_30
The source code to train this model is available on the [GitHub repository](https://github.com/soduco/paper-ner-bench-das22) of the paper as a Jupyter notebook in `src/ner/10-camembert_pretraining.ipynb`.
## Model description
This model further pre-trains [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on a set of ~845k entries from Paris trade directories of the XIXth century, extracted with OCR.
Trade directory entries are short, strongly structured texts that give the name, activity and location of a person or business, e.g.:
```
Peynaud, R. de la Vieille Bouclerie, 18. Richard, Joullain et comp., (commission- —Phéâtre Français. naire, (entrepôt), au port de la Rapée-
```
## Intended uses & limitations
This model is intended for reproducibility of the NER evaluation published in the DAS2022 paper.
Several derived models trained for NER on trade directories are available on HuggingFace, each trained on a different dataset:
- [das22-10-camembert_pretrained_finetuned_ref](): trained for NER on ~6,000 manually corrected directory entries.
- [das22-10-camembert_pretrained_finetuned_pero](): trained for NER on ~6,000 directory entries extracted with PERO-OCR.
- [das22-10-camembert_pretrained_finetuned_tess](): trained for NER on ~6,000 directory entries extracted with Tesseract.
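As a quick sanity check of the pretrained checkpoint itself (rather than the NER fine-tunes), the masked-LM head can be queried through the standard `fill-mask` pipeline. The sketch below is illustrative; the masked entry is invented:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="HueyNemud/das22-10-camembert_pretrained")

# CamemBERT uses "<mask>" as its mask token.
for pred in fill_mask("Dupont, rue de la <mask>, 12."):
    print(f"{pred['token_str']!r} (score={pred['score']:.4f})")
```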
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 1.9603 | 1.0 | 100346 | 1.8005 |
| 1.7032 | 2.0 | 200692 | 1.6460 |
| 1.5879 | 3.0 | 301038 | 1.5570 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "CamemBERT pretrained on french trade directories from the XIXth century", "results": []}]} | HueyNemud/das22-10-camembert_pretrained | null | [
"transformers",
"pytorch",
"camembert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #camembert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| CamemBERT pretrained on french trade directories from the XIXth century
=======================================================================
This model is part of the material of the paper
>
> Abadie, N., Carlinet, E., Chazalon, J., Duménieu, B. (2022). A
> Benchmark of Named Entity Recognition Approaches in Historical
> Documents. Application to 19th Century French Directories. In: Uchida,
> S., Barney, E., Eglin, V. (eds) Document Analysis Systems. DAS 2022.
> Lecture Notes in Computer Science, vol 13237. Springer, Cham.
> URL
>
>
>
The source code to train this model is available on the GitHub repository of the paper as a Jupyter notebook in 'src/ner/10-camembert\_pretraining.ipynb'.
Model description
-----------------
This model further pre-trains Jean-Baptiste/camembert-ner on a set of ~845k entries from Paris trade directories of the XIXth century, extracted with OCR.
Trade directory entries are short, strongly structured texts that give the name, activity and location of a person or business, e.g.:
Intended uses & limitations
---------------------------
This model is intended for reproducibility of the NER evaluation published in the DAS2022 paper.
Several derived models trained for NER on trade directories are available on HuggingFace, each trained on a different dataset:
* das22-10-camembert\_pretrained\_finetuned\_ref: trained for NER on ~6,000 manually corrected directory entries.
* das22-10-camembert\_pretrained\_finetuned\_pero: trained for NER on ~6,000 directory entries extracted with PERO-OCR.
* das22-10-camembert\_pretrained\_finetuned\_tess: trained for NER on ~6,000 directory entries extracted with Tesseract.
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.1+cu102
* Datasets 1.17.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #camembert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] | [
36,
103,
5,
47
] | [
"TAGS\n#transformers #pytorch #camembert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0### Training results### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
null | sentence-transformers |
# KLUE RoBERTa base model for Sentence Embeddings
This is the `sentence-klue-roberta-base` model. The sentence-transformers repository allows you to train and use Transformer models for generating sentence and text embeddings.
The model is described in the paper [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084)
## Usage (Sentence-Transformers)
Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
import torch
from sentence_transformers import SentenceTransformer, util
model = SentenceTransformer("Huffon/sentence-klue-roberta-base")
docs = [
    "1992년 7월 8일 손흥민은 강원도 춘천시 후평동에서 아버지 손웅정과 어머니 길은자의 차남으로 태어나 그곳에서 자랐다.",
    "형은 손흥윤이다.",
    "춘천 부안초등학교를 졸업했고, 춘천 후평중학교에 입학한 후 2학년때 원주 육민관중학교 축구부에 들어가기 위해 전학하여 졸업하였으며, 2008년 당시 FC 서울의 U-18팀이었던 동북고등학교 축구부에서 선수 활동 중 대한축구협회 우수선수 해외유학 프로젝트에 선발되어 2008년 8월 독일 분데스리가의 함부르크 유소년팀에 입단하였다.",
    "함부르크 유스팀 주전 공격수로 2008년 6월 네덜란드에서 열린 4개국 경기에서 4게임에 출전, 3골을 터뜨렸다.",
    "1년간의 유학 후 2009년 8월 한국으로 돌아온 후 10월에 개막한 FIFA U-17 월드컵에 출전하여 3골을 터트리며 한국을 8강으로 이끌었다.",
    "그해 11월 함부르크의 정식 유소년팀 선수 계약을 체결하였으며 독일 U-19 리그 4경기 2골을 넣고 2군 리그에 출전을 시작했다.",
    "독일 U-19 리그에서 손흥민은 11경기 6골, 2부 리그에서는 6경기 1골을 넣으며 재능을 인정받아 2010년 6월 17세의 나이로 함부르크의 1군 팀 훈련에 참가, 프리시즌 활약으로 함부르크와 정식 계약을 한 후 10월 18세의 함부르크 1군 소속으로 독일 분데스리가에 데뷔하였다.",
]
document_embeddings = model.encode(docs)

query = "손흥민은 어린 나이에 유럽에 진출하였다."
query_embedding = model.encode(query)

top_k = min(5, len(docs))
cos_scores = util.pytorch_cos_sim(query_embedding, document_embeddings)[0]
top_results = torch.topk(cos_scores, k=top_k)

print(f"입력 문장: {query}")
print(f"<입력 문장과 유사한 {top_k} 개의 문장>")
for i, (score, idx) in enumerate(zip(top_results[0], top_results[1])):
    print(f"{i+1}: {docs[idx]} {'(유사도: {:.4f})'.format(score)}")
``` | {"language": "ko", "tags": ["roberta", "sentence-transformers"], "datasets": ["klue"]} | Huffon/sentence-klue-roberta-base | null | [
"sentence-transformers",
"pytorch",
"roberta",
"ko",
"dataset:klue",
"arxiv:1908.10084",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1908.10084"
] | [
"ko"
] | TAGS
#sentence-transformers #pytorch #roberta #ko #dataset-klue #arxiv-1908.10084 #has_space #region-us
|
# KLUE RoBERTa base model for Sentence Embeddings
This is the 'sentence-klue-roberta-base' model. The sentence-transformers repository allows you to train and use Transformer models for generating sentence and text embeddings.
The model is described in the paper Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
## Usage (Sentence-Transformers)
Using this model becomes more convenient when you have sentence-transformers installed:
Then you can use the model like this:
| [
"# KLUE RoBERTa base model for Sentence Embeddings\n\nThis is the 'sentence-klue-roberta-base' model. The sentence-transformers repository allows to train and use Transformer models for generating sentence and text embeddings.\n\nThe model is described in the paper Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes more convenient when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:"
] | [
"TAGS\n#sentence-transformers #pytorch #roberta #ko #dataset-klue #arxiv-1908.10084 #has_space #region-us \n",
"# KLUE RoBERTa base model for Sentence Embeddings\n\nThis is the 'sentence-klue-roberta-base' model. The sentence-transformers repository allows to train and use Transformer models for generating sentence and text embeddings.\n\nThe model is described in the paper Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes more convenient when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:"
] | [
39,
74,
31
] | [
"TAGS\n#sentence-transformers #pytorch #roberta #ko #dataset-klue #arxiv-1908.10084 #has_space #region-us \n# KLUE RoBERTa base model for Sentence Embeddings\n\nThis is the 'sentence-klue-roberta-base' model. The sentence-transformers repository allows to train and use Transformer models for generating sentence and text embeddings.\n\nThe model is described in the paper Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks## Usage (Sentence-Transformers)\n\nUsing this model becomes more convenient when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:"
] |
sentence-similarity | sentence-transformers |
# Humair/all-mpnet-base-v2-finetuned-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Humair/all-mpnet-base-v2-finetuned-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Humair/all-mpnet-base-v2-finetuned-v2')
model = AutoModel.from_pretrained('Humair/all-mpnet-base-v2-finetuned-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
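To actually compare the two sentences, the usual follow-up is cosine similarity over the pooled embeddings; a small sketch reusing `sentence_embeddings` from the block above:
```python
import torch.nn.functional as F

# L2-normalise the embeddings; the dot product then equals cosine similarity.
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
print(f"cosine similarity: {(normalized[0] @ normalized[1]).item():.4f}")
```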
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Humair/all-mpnet-base-v2-finetuned-v2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
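Put together, the configuration above corresponds to a `fit()` call along the following lines. This is a hedged sketch: the base checkpoint and the two example pairs are assumptions, since the card does not publish the training data:
```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")  # assumed base checkpoint

# Invented positive pairs; MultipleNegativesRankingLoss treats the other
# in-batch sentences as negatives.
train_examples = [
    InputExample(texts=["How do I reset my password?", "Steps to reset a password"]),
    InputExample(texts=["Delivery time for orders", "When will my order arrive?"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=128)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=2,
    warmup_steps=10000,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```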
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 32, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | Humair/all-mpnet-base-v2-finetuned-v2 | null | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#sentence-transformers #pytorch #mpnet #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# Humair/all-mpnet-base-v2-finetuned-v2
This is a sentence-transformers model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 1 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# Humair/all-mpnet-base-v2-finetuned-v2\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 1 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #mpnet #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# Humair/all-mpnet-base-v2-finetuned-v2\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 1 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
32,
55,
30,
58,
26,
72,
5,
5
] | [
"TAGS\n#sentence-transformers #pytorch #mpnet #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n# Humair/all-mpnet-base-v2-finetuned-v2\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 1 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:## Full Model Architecture## Citing & Authors"
] |
null | null | Model checkpoint for English-Vietnamese cross-lingual paraphrase detection, based on XLM-R in MT-DNN.
MT-DNN: github.com/namisan/mt-dnn | {} | HungVo/mt-dnn-ev-mrpc | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| Model checkpoint for English-Vietnamese cross-lingual paraphrase detection, based on XLM-R in MT-DNN.
MT-DNN: URL | [] | [
"TAGS\n#region-us \n"
] | [
5
] | [
"TAGS\n#region-us \n"
] |
text-generation | transformers |
#DwightSchrute DialoGPT-Model
#TheOffice | {"tags": ["conversational"]} | HypNyx/DialoGPT-small-DwightBot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#DwightSchrute DialoGPT-Model
#TheOffice | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
39
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
#Thanos DialoGPT Model | {"tags": ["conversational"]} | HypNyx/DialoGPT-small-Thanos | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#Thanos DialoGPT Model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
39
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Peter from Your Boyfriend Game.
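The card ships no usage snippet; below is a minimal multi-turn chat sketch, assuming the standard DialoGPT-style interface used by `conversational` GPT-2 checkpoints (the prompts are illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HypedKid/PeterBot")
model = AutoModelForCausalLM.from_pretrained("HypedKid/PeterBot")

chat_history_ids = None
for step in range(3):
    # Append the EOS token so the model sees a completed user turn.
    user_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = user_ids if chat_history_ids is None else torch.cat([chat_history_ids, user_ids], dim=-1)
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    print("Peter:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))
```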
| {"tags": ["conversational"]} | HypedKid/PeterBot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Peter from Your Boyfriend Game.
| [
"# Peter from Your Boyfriend Game."
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Peter from Your Boyfriend Game."
] | [
43,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n# Peter from Your Boyfriend Game."
] |
null | transformers | # Erlangshen-MegatronBert-1.3B
- Main Page:[Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
## Brief Introduction
It topped the FewCLUE and ZeroCLUE benchmarks in 2021, solving NLU tasks, and was the largest Chinese BERT when publicly released.
## Model Taxonomy
| Demand | Task | Series | Model | Parameter | Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| General | NLU | Erlangshen | MegatronBERT | 1.3B | Chinese |
## Model Information
A bidirectional language model based on the Encoder structure, focusing on solving various NLU tasks.
We follow [Megatron-LM](https://github.com/NVIDIA/Megatron-LM), using 32 A100s and spending 14 days training a billion-level BERT on WuDao Corpora (180 GB version). Given Chinese grammar and the difficulty of large-scale training, we use four pre-training procedures to improve BERT: 1) Whole Word Masking (WWM), 2) Knowledge-based Dynamic Masking (KDM), 3) Sentence Order Prediction (SOP), 4) Pre-layer Normalization (Pre-LN).
### Achievements
1. On November 10, 2021, Erlangshen-MegatronBert-1.3B topped the FewCLUE benchmark. Among them, our Erlangshen outperformed human performance in CHIDF (idiom fill-in-the-blank) and TNEWS (news classification) subtasks. In addition, our Erlangshen ranked the top in CHIDF (idiom fill-in-the-blank), CSLDCP (subject literature classification), and OCNLI (natural language inference) tasks.
2. On January 24, 2022, Erlangshen-MegatronBert-1.3B topped the ZeroCLUE benchmark. For each of these tasks, we rank the top ones in CSLDCP (Subject Literature Classification), TNEWS (News Classification), IFLYTEK (Application Description Classification), CSL (Abstract Keyword Recognition), and CLUEWSC (Referential Disambiguation) tasks.
3. Erlangshen-MegatronBert-1.3B topped the CLUE benchmark semantic matching task on July 10, 2022.
### Performance
| Model | afqmc | tnews | iflytek | ocnli | cmnli | wsc | csl |
| :--------: | :-----: | :----: | :-----: | :----: | :----: | :----: | :----: |
| roberta-wwm-ext-large | 0.7514 | 0.5872 | 0.6152 | 0.777 | 0.814 | 0.8914 | 0.86 |
| Erlangshen-MegatronBert-1.3B | 0.7608 | 0.5996 | 0.6234 | 0.7917 | 0.81 | 0.9243 | 0.872 |
## Usage
``` python
from transformers import MegatronBertConfig, MegatronBertModel
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Erlangshen-MegatronBert-1.3B")
config = MegatronBertConfig.from_pretrained("IDEA-CCNL/Erlangshen-MegatronBert-1.3B")
model = MegatronBertModel.from_pretrained("IDEA-CCNL/Erlangshen-MegatronBert-1.3B")
```
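Beyond the bare encoder, the checkpoint can be probed for masked-token prediction. A hedged sketch, assuming the repository ships masked-LM head weights (`MegatronBertForMaskedLM` is the standard `transformers` head for this architecture, and the probe sentence is invented):
```python
import torch
from transformers import BertTokenizer, MegatronBertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Erlangshen-MegatronBert-1.3B")
model = MegatronBertForMaskedLM.from_pretrained("IDEA-CCNL/Erlangshen-MegatronBert-1.3B")
model.eval()

inputs = tokenizer("生活的真谛是[MASK]。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the masked position and list the five most likely fills.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
top = torch.topk(logits[0, mask_pos], k=5)
print(tokenizer.convert_ids_to_tokens(top.indices.tolist()))
```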
## Citation
If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` | {"language": ["zh"], "license": "apache-2.0", "tags": ["bert", "NLU", "FewCLUE", "ZeroCLUE"], "inference": true} | IDEA-CCNL/Erlangshen-MegatronBert-1.3B | null | [
"transformers",
"pytorch",
"megatron-bert",
"bert",
"NLU",
"FewCLUE",
"ZeroCLUE",
"zh",
"arxiv:2209.02970",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2209.02970"
] | [
"zh"
] | TAGS
#transformers #pytorch #megatron-bert #bert #NLU #FewCLUE #ZeroCLUE #zh #arxiv-2209.02970 #license-apache-2.0 #endpoints_compatible #region-us
| Erlangshen-MegatronBert-1.3B
============================
* Main Page:Fengshenbang
* Github: Fengshenbang-LM
Brief Introduction
---------------------
It topped the FewCLUE and ZeroCLUE benchmarks in 2021, solving NLU tasks, and was the largest Chinese BERT when publicly released.
Model Taxonomy
-------------------
Model Information
----------------------
A bidirectional language model based on the Encoder structure, focusing on solving various NLU tasks.
We follow Megatron-LM, using 32 A100s and spending 14 days training a billion-level BERT on WuDao Corpora (180 GB version). Given Chinese grammar and the difficulty of large-scale training, we use four pre-training procedures to improve BERT: 1) Whole Word Masking (WWM), 2) Knowledge-based Dynamic Masking (KDM), 3) Sentence Order Prediction (SOP), 4) Pre-layer Normalization (Pre-LN).
### Achievements
1. On November 10, 2021, Erlangshen-MegatronBert-1.3B topped the FewCLUE benchmark. Among them, our Erlangshen outperformed human performance in CHIDF (idiom fill-in-the-blank) and TNEWS (news classification) subtasks. In addition, our Erlangshen ranked the top in CHIDF (idiom fill-in-the-blank), CSLDCP (subject literature classification), and OCNLI (natural language inference) tasks.
2. On January 24, 2022, Erlangshen-MegatronBert-1.3B topped the ZeroCLUE benchmark. For each of these tasks, we rank the top ones in CSLDCP (Subject Literature Classification), TNEWS (News Classification), IFLYTEK (Application Description Classification), CSL (Abstract Keyword Recognition), and CLUEWSC (Referential Disambiguation) tasks.
3. Erlangshen-MegatronBert-1.3B topped the CLUE benchmark semantic matching task on July 10, 2022.
### Performance
Usage
--------
Citation
-----------
If you are using the resource for your work, please cite our paper:
You can also cite our website:
| [
"### ๆๅฐฑ Achievements\n\n\n1.2021ๅนด11ๆ10ๆฅ๏ผErlangshen-MegatronBert-1.3BๅจFewCLUEไธๅๅพ็ฌฌไธใๅ
ถไธญ๏ผๅฎๅจCHIDF(ๆ่ฏญๅกซ็ฉบ)ๅTNEWS(ๆฐ้ปๅ็ฑป)ๅญไปปๅกไธญ็่กจ็ฐไผไบไบบ็ฑป่กจ็ฐใๆญคๅค๏ผๅฎๅจCHIDF(ๆ่ฏญๅกซ็ฉบ), CSLDCP(ๅญฆ็งๆ็ฎๅ็ฑป), OCNLI(่ช็ถ่ฏญ่จๆจ็)ไปปๅกไธญๅๅๅๅ่
ใ \n\n2.2022ๅนด1ๆ24ๆฅ๏ผErlangshen-MegatronBert-1.3BๅจCLUEๅบๅๆต่ฏไธญ็ZeroCLUEไธญๅๅพ็ฌฌไธใๅ
ทไฝๅฐๅญไปปๅก๏ผๆไปฌๅจCSLDCP(ไธป้ขๆ็ฎๅ็ฑป), TNEWS(ๆฐ้ปๅ็ฑป), IFLYTEK(ๅบ็จๆ่ฟฐๅ็ฑป), CSL(ๆฝ่ฑกๅ
ณ้ฎๅญ่ฏๅซ)ๅCLUEWSC(ๅ่ๆถๆญง)ไปปๅกไธญๅๅพ็ฌฌไธใ \n\n3.ๅจ2022ๅนด7ๆ10ๆฅ๏ผErlangshen-MegatronBert-1.3BๅจCLUEๅบๅ็่ฏญไนๅน้
ไปปๅกไธญๅๅพ็ฌฌไธใ\n\n\n1.On November 10, 2021, Erlangshen-MegatronBert-1.3B topped the FewCLUE benchmark. Among them, our Erlangshen outperformed human performance in CHIDF (idiom fill-in-the-blank) and TNEWS (news classification) subtasks. In addition, our Erlangshen ranked the top in CHIDF (idiom fill-in-the-blank), CSLDCP (subject literature classification), and OCNLI (natural language inference) tasks. \n\n2.On January 24, 2022, Erlangshen-MegatronBert-1.3B topped the ZeroCLUE benchmark. For each of these tasks, we rank the top ones in CSLDCP (Subject Literature Classification), TNEWS (News Classification), IFLYTEK (Application Description Classification), CSL (Abstract Keyword Recognition), and CLUEWSC (Referential Disambiguation) tasks. \n\n3.Erlangshen-MegatronBert-1.3B topped the CLUE benchmark semantic matching task on July 10, 2022.",
"### ไธๆธธๆๆ Performance\n\n\n\nไฝฟ็จ Usage\n--------\n\n\nๅผ็จ Citation\n-----------\n\n\nๅฆๆๆจๅจๆจ็ๅทฅไฝไธญไฝฟ็จไบๆไปฌ็ๆจกๅ๏ผๅฏไปฅๅผ็จๆไปฌ็่ฎบๆ๏ผ\n\n\nIf you are using the resource for your work, please cite the our paper:\n\n\nไนๅฏไปฅๅผ็จๆไปฌ็็ฝ็ซ:\n\n\nYou can also cite our website:"
] | [
"TAGS\n#transformers #pytorch #megatron-bert #bert #NLU #FewCLUE #ZeroCLUE #zh #arxiv-2209.02970 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### ๆๅฐฑ Achievements\n\n\n1.2021ๅนด11ๆ10ๆฅ๏ผErlangshen-MegatronBert-1.3BๅจFewCLUEไธๅๅพ็ฌฌไธใๅ
ถไธญ๏ผๅฎๅจCHIDF(ๆ่ฏญๅกซ็ฉบ)ๅTNEWS(ๆฐ้ปๅ็ฑป)ๅญไปปๅกไธญ็่กจ็ฐไผไบไบบ็ฑป่กจ็ฐใๆญคๅค๏ผๅฎๅจCHIDF(ๆ่ฏญๅกซ็ฉบ), CSLDCP(ๅญฆ็งๆ็ฎๅ็ฑป), OCNLI(่ช็ถ่ฏญ่จๆจ็)ไปปๅกไธญๅๅๅๅ่
ใ \n\n2.2022ๅนด1ๆ24ๆฅ๏ผErlangshen-MegatronBert-1.3BๅจCLUEๅบๅๆต่ฏไธญ็ZeroCLUEไธญๅๅพ็ฌฌไธใๅ
ทไฝๅฐๅญไปปๅก๏ผๆไปฌๅจCSLDCP(ไธป้ขๆ็ฎๅ็ฑป), TNEWS(ๆฐ้ปๅ็ฑป), IFLYTEK(ๅบ็จๆ่ฟฐๅ็ฑป), CSL(ๆฝ่ฑกๅ
ณ้ฎๅญ่ฏๅซ)ๅCLUEWSC(ๅ่ๆถๆญง)ไปปๅกไธญๅๅพ็ฌฌไธใ \n\n3.ๅจ2022ๅนด7ๆ10ๆฅ๏ผErlangshen-MegatronBert-1.3BๅจCLUEๅบๅ็่ฏญไนๅน้
ไปปๅกไธญๅๅพ็ฌฌไธใ\n\n\n1.On November 10, 2021, Erlangshen-MegatronBert-1.3B topped the FewCLUE benchmark. Among them, our Erlangshen outperformed human performance in CHIDF (idiom fill-in-the-blank) and TNEWS (news classification) subtasks. In addition, our Erlangshen ranked the top in CHIDF (idiom fill-in-the-blank), CSLDCP (subject literature classification), and OCNLI (natural language inference) tasks. \n\n2.On January 24, 2022, Erlangshen-MegatronBert-1.3B topped the ZeroCLUE benchmark. For each of these tasks, we rank the top ones in CSLDCP (Subject Literature Classification), TNEWS (News Classification), IFLYTEK (Application Description Classification), CSL (Abstract Keyword Recognition), and CLUEWSC (Referential Disambiguation) tasks. \n\n3.Erlangshen-MegatronBert-1.3B topped the CLUE benchmark semantic matching task on July 10, 2022.",
"### ไธๆธธๆๆ Performance\n\n\n\nไฝฟ็จ Usage\n--------\n\n\nๅผ็จ Citation\n-----------\n\n\nๅฆๆๆจๅจๆจ็ๅทฅไฝไธญไฝฟ็จไบๆไปฌ็ๆจกๅ๏ผๅฏไปฅๅผ็จๆไปฌ็่ฎบๆ๏ผ\n\n\nIf you are using the resource for your work, please cite the our paper:\n\n\nไนๅฏไปฅๅผ็จๆไปฌ็็ฝ็ซ:\n\n\nYou can also cite our website:"
] | [
57,
513,
95
] | [
"TAGS\n#transformers #pytorch #megatron-bert #bert #NLU #FewCLUE #ZeroCLUE #zh #arxiv-2209.02970 #license-apache-2.0 #endpoints_compatible #region-us \n### ๆๅฐฑ Achievements\n\n\n1.2021ๅนด11ๆ10ๆฅ๏ผErlangshen-MegatronBert-1.3BๅจFewCLUEไธๅๅพ็ฌฌไธใๅ
ถไธญ๏ผๅฎๅจCHIDF(ๆ่ฏญๅกซ็ฉบ)ๅTNEWS(ๆฐ้ปๅ็ฑป)ๅญไปปๅกไธญ็่กจ็ฐไผไบไบบ็ฑป่กจ็ฐใๆญคๅค๏ผๅฎๅจCHIDF(ๆ่ฏญๅกซ็ฉบ), CSLDCP(ๅญฆ็งๆ็ฎๅ็ฑป), OCNLI(่ช็ถ่ฏญ่จๆจ็)ไปปๅกไธญๅๅๅๅ่
ใ \n\n2.2022ๅนด1ๆ24ๆฅ๏ผErlangshen-MegatronBert-1.3BๅจCLUEๅบๅๆต่ฏไธญ็ZeroCLUEไธญๅๅพ็ฌฌไธใๅ
ทไฝๅฐๅญไปปๅก๏ผๆไปฌๅจCSLDCP(ไธป้ขๆ็ฎๅ็ฑป), TNEWS(ๆฐ้ปๅ็ฑป), IFLYTEK(ๅบ็จๆ่ฟฐๅ็ฑป), CSL(ๆฝ่ฑกๅ
ณ้ฎๅญ่ฏๅซ)ๅCLUEWSC(ๅ่ๆถๆญง)ไปปๅกไธญๅๅพ็ฌฌไธใ \n\n3.ๅจ2022ๅนด7ๆ10ๆฅ๏ผErlangshen-MegatronBert-1.3BๅจCLUEๅบๅ็่ฏญไนๅน้
ไปปๅกไธญๅๅพ็ฌฌไธใ\n\n\n1.On November 10, 2021, Erlangshen-MegatronBert-1.3B topped the FewCLUE benchmark. Among them, our Erlangshen outperformed human performance in CHIDF (idiom fill-in-the-blank) and TNEWS (news classification) subtasks. In addition, our Erlangshen ranked the top in CHIDF (idiom fill-in-the-blank), CSLDCP (subject literature classification), and OCNLI (natural language inference) tasks. \n\n2.On January 24, 2022, Erlangshen-MegatronBert-1.3B topped the ZeroCLUE benchmark. For each of these tasks, we rank the top ones in CSLDCP (Subject Literature Classification), TNEWS (News Classification), IFLYTEK (Application Description Classification), CSL (Abstract Keyword Recognition), and CLUEWSC (Referential Disambiguation) tasks. \n\n3.Erlangshen-MegatronBert-1.3B topped the CLUE benchmark semantic matching task on July 10, 2022.### ไธๆธธๆๆ Performance\n\n\n\nไฝฟ็จ Usage\n--------\n\n\nๅผ็จ Citation\n-----------\n\n\nๅฆๆๆจๅจๆจ็ๅทฅไฝไธญไฝฟ็จไบๆไปฌ็ๆจกๅ๏ผๅฏไปฅๅผ็จๆไปฌ็่ฎบๆ๏ผ\n\n\nIf you are using the resource for your work, please cite the our paper:\n\n\nไนๅฏไปฅๅผ็จๆไปฌ็็ฝ็ซ:\n\n\nYou can also cite our website:"
] |
text2text-generation | transformers | # Randeng-MegatronT5-770M
- Main Page:[Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
## Brief Introduction
Good at solving NLT tasks; a Chinese T5-large.
## Model Taxonomy
| Demand | Task | Series | Model | Parameter | Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| General | NLT | Randeng | MegatronT5 | 770M | Chinese |
## Model Information
To get a large-scale Chinese T5, we use [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) and WuDao Corpora (180 GB version) for pre-training. Specifically, the pre-training phase cost about 14 days with 16 A100 GPUs.
## Usage
Since there is no structure of Randeng-MegatronT5-770M in the [transformers library](https://github.com/huggingface/transformers), you can find the structure of Randeng-MegatronT5-770M and run the code in [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
```
### Loading Models
```python
from fengshen import T5ForConditionalGeneration
from fengshen import T5Config
from fengshen import T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained('IDEA-CCNL/Randeng-MegatronT5-770M')
config = T5Config.from_pretrained('IDEA-CCNL/Randeng-MegatronT5-770M')
model = T5ForConditionalGeneration.from_pretrained('IDEA-CCNL/Randeng-MegatronT5-770M')
```
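On top of the loading code, a generation sketch. This is hedged: it assumes the fengshen `T5ForConditionalGeneration` and `T5Tokenizer` mirror the Hugging Face T5 API, including `generate()` and T5-style sentinel tokens, and the prompt is invented:
```python
# Hypothetical span-infilling prompt; "<extra_id_0>" marks the span to predict,
# assuming the tokenizer vocabulary includes T5-style sentinel tokens.
inputs = tokenizer("北京是中国的<extra_id_0>。", return_tensors="pt")
outputs = model.generate(inputs.input_ids, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```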
## Citation
If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
| {"language": ["zh"], "license": "apache-2.0", "inference": false} | IDEA-CCNL/Randeng-MegatronT5-770M | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"zh",
"arxiv:2209.02970",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2209.02970"
] | [
"zh"
] | TAGS
#transformers #pytorch #t5 #text2text-generation #zh #arxiv-2209.02970 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us
| Randeng-MegatronT5-770M
=======================
* Main Page:Fengshenbang
* Github: Fengshenbang-LM
Brief Introduction
---------------------
Good at solving NLT tasks; a Chinese T5-large.
Model Taxonomy
-------------------
Model Information
----------------------
To get a large-scale Chinese T5, we use Megatron-LM and WuDao Corpora (180 GB version) for pre-training. Specifically, the pre-training phase cost about 14 days with 16 A100 GPUs.
Usage
--------
Since there is no structure of Randeng-MegatronT5-770M in the transformers library, you can find the structure of Randeng-MegatronT5-770M and run the code in Fengshenbang-LM.
### Loading Models
Citation
-----------
If you are using the resource for your work, please cite our paper:
You can also cite our website:
| [
"### ๅ ่ฝฝๆจกๅ Loading Models\n\n\nๅผ็จ Citation\n-----------\n\n\nๅฆๆๆจๅจๆจ็ๅทฅไฝไธญไฝฟ็จไบๆไปฌ็ๆจกๅ๏ผๅฏไปฅๅผ็จๆไปฌ็่ฎบๆ๏ผ\n\n\nIf you are using the resource for your work, please cite the our paper:\n\n\nไนๅฏไปฅๅผ็จๆไปฌ็็ฝ็ซ:\n\n\nYou can also cite our website:"
] | [
"TAGS\n#transformers #pytorch #t5 #text2text-generation #zh #arxiv-2209.02970 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n",
"### ๅ ่ฝฝๆจกๅ Loading Models\n\n\nๅผ็จ Citation\n-----------\n\n\nๅฆๆๆจๅจๆจ็ๅทฅไฝไธญไฝฟ็จไบๆไปฌ็ๆจกๅ๏ผๅฏไปฅๅผ็จๆไปฌ็่ฎบๆ๏ผ\n\n\nIf you are using the resource for your work, please cite the our paper:\n\n\nไนๅฏไปฅๅผ็จๆไปฌ็็ฝ็ซ:\n\n\nYou can also cite our website:"
] | [
54,
85
] | [
"TAGS\n#transformers #pytorch #t5 #text2text-generation #zh #arxiv-2209.02970 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n### ๅ ่ฝฝๆจกๅ Loading Models\n\n\nๅผ็จ Citation\n-----------\n\n\nๅฆๆๆจๅจๆจ็ๅทฅไฝไธญไฝฟ็จไบๆไปฌ็ๆจกๅ๏ผๅฏไปฅๅผ็จๆไปฌ็่ฎบๆ๏ผ\n\n\nIf you are using the resource for your work, please cite the our paper:\n\n\nไนๅฏไปฅๅผ็จๆไปฌ็็ฝ็ซ:\n\n\nYou can also cite our website:"
] |
text-generation | transformers |
# Wenzhong-GPT2-3.5B
- Main Page:[Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
## Brief Introduction
Focused on handling NLG tasks, the current largest Chinese GPT2.
## Model Taxonomy
| Demand | Task | Series | Model | Parameter | Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| General | NLG | Wenzhong | GPT2 | 3.5B | Chinese |
## Model Information
To obtain a robust unidirectional language model, we adopt the GPT model structure and apply it to the Chinese corpus. Specifically, this model has 30 decoder layers and 3.5 billion parameters, which is larger than the original GPT2-XL. We pre-train it on 100G of Chinese corpus, which consumes 32 NVIDIA A100 GPUs for about 28 hours. To the best of our knowledge, it is the largest Chinese GPT model currently available.
## Usage
### Loading Models
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('IDEA-CCNL/Wenzhong-GPT2-3.5B')
model = GPT2Model.from_pretrained('IDEA-CCNL/Wenzhong-GPT2-3.5B')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Usage Examples
```python
from transformers import pipeline, set_seed
set_seed(55)
generator = pipeline('text-generation', model='IDEA-CCNL/Wenzhong-GPT2-3.5B')
generator("ๅไบฌไฝไบ", max_length=30, num_return_sequences=1)
```
## Citation
If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` | {"language": ["zh"], "license": "apache-2.0", "inference": {"parameters": {"max_new_tokens": 128, "do_sample": true}}} | IDEA-CCNL/Wenzhong-GPT2-3.5B | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"zh",
"arxiv:2209.02970",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2209.02970"
] | [
"zh"
] | TAGS
#transformers #pytorch #gpt2 #text-generation #zh #arxiv-2209.02970 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Wenzhong-GPT2-3.5B
==================
* Main Page:Fengshenbang
* Github: Fengshenbang-LM
Brief Introduction
---------------------
Focused on handling NLG tasks, the current largest Chinese GPT2.
Model Taxonomy
-------------------
Model Information
----------------------
To obtain a robust unidirectional language model, we adopt the GPT model structure and apply it to the Chinese corpus. Specifically, this model has 30 decoder layers and 3.5 billion parameters, which is larger than the original GPT2-XL. We pre-train it on 100G of Chinese corpus, which consumes 32 NVIDIA A100 GPUs for about 28 hours. To the best of our knowledge, it is the largest Chinese GPT model currently available.
Usage
--------
### Loading Models
### Usage Examples
Citation
-----------
If you are using the resource for your work, please cite our paper:
You can also cite our website:
| [
"### ๅ ่ฝฝๆจกๅ Loading Models",
"### ไฝฟ็จ็คบไพ Usage Examples\n\n\nๅผ็จ Citation\n-----------\n\n\nๅฆๆๆจๅจๆจ็ๅทฅไฝไธญไฝฟ็จไบๆไปฌ็ๆจกๅ๏ผๅฏไปฅๅผ็จๆไปฌ็่ฎบๆ๏ผ\n\n\nIf you are using the resource for your work, please cite the our paper:\n\n\nไนๅฏไปฅๅผ็จๆไปฌ็็ฝ็ซ:\n\n\nYou can also cite our website:"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #zh #arxiv-2209.02970 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### ๅ ่ฝฝๆจกๅ Loading Models",
"### ไฝฟ็จ็คบไพ Usage Examples\n\n\nๅผ็จ Citation\n-----------\n\n\nๅฆๆๆจๅจๆจ็ๅทฅไฝไธญไฝฟ็จไบๆไปฌ็ๆจกๅ๏ผๅฏไปฅๅผ็จๆไปฌ็่ฎบๆ๏ผ\n\n\nIf you are using the resource for your work, please cite the our paper:\n\n\nไนๅฏไปฅๅผ็จๆไปฌ็็ฝ็ซ:\n\n\nYou can also cite our website:"
] | [
58,
9,
85
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #zh #arxiv-2209.02970 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### ๅ ่ฝฝๆจกๅ Loading Models### ไฝฟ็จ็คบไพ Usage Examples\n\n\nๅผ็จ Citation\n-----------\n\n\nๅฆๆๆจๅจๆจ็ๅทฅไฝไธญไฝฟ็จไบๆไปฌ็ๆจกๅ๏ผๅฏไปฅๅผ็จๆไปฌ็่ฎบๆ๏ผ\n\n\nIf you are using the resource for your work, please cite the our paper:\n\n\nไนๅฏไปฅๅผ็จๆไปฌ็็ฝ็ซ:\n\n\nYou can also cite our website:"
] |
text-generation | transformers |
# Yuyuan-GPT2-3.5B
- Main Page:[Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
## Brief Introduction
The currently largest generative language model GPT2 in the medical domain.
## Model Taxonomy
| Demand | Task | Series | Model | Parameter | Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| Special | Domain | Yuyuan | GPT2 | 3.5B | - |
## Model Information
We adopt the same architecture as Wenzhong-GPT2-3.5B and pre-train it on a 50 GB medical (PubMed) corpus. We use 32 NVIDIA A100 GPUs for about 7 days. Our Yuyuan-GPT2-3.5B is the largest open-source GPT2 model in the medical domain. We further allow the model to judge facts by computing perplexity (PPL). To accomplish question-and-answer functionality, we transform the phrase pattern from interrogative to declarative.
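The perplexity-based fact judging mentioned above can be sketched as follows; a hedged illustration in which the two candidate statements are invented for the example:
```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("IDEA-CCNL/Yuyuan-GPT2-3.5B")
model = GPT2LMHeadModel.from_pretrained("IDEA-CCNL/Yuyuan-GPT2-3.5B")
model.eval()

def perplexity(text: str) -> float:
    # PPL is the exponential of the mean token-level cross-entropy under the LM.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# Illustrative fact check: the statement with the lower perplexity is preferred.
for claim in ["Insulin lowers blood glucose.", "Insulin raises blood glucose."]:
    print(f"{claim} PPL={perplexity(claim):.2f}")
```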
## Usage
### Loading Models
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('IDEA-CCNL/Yuyuan-GPT2-3.5B')
model = GPT2Model.from_pretrained('IDEA-CCNL/Yuyuan-GPT2-3.5B')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Usage Examples
```python
from transformers import pipeline, set_seed
set_seed(55)
generator = pipeline('text-generation', model='IDEA-CCNL/Yuyuan-GPT2-3.5B')
generator("Diabetics should not eat", max_length=30, num_return_sequences=1)
```
## Citation
If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
| {"language": ["en"], "license": "apache-2.0", "inference": false} | IDEA-CCNL/Yuyuan-GPT2-3.5B | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"en",
"arxiv:2209.02970",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2209.02970"
] | [
"en"
] | TAGS
#transformers #pytorch #gpt2 #text-generation #en #arxiv-2209.02970 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us
| Yuyuan-GPT2-3.5B
================
* Main Page:Fengshenbang
* Github: Fengshenbang-LM
Brief Introduction
---------------------
The currently largest generative language model GPT2 in the medical domain.
Model Taxonomy
-------------------
Model Information
----------------------
We adopt the same architecture as Wenzhong-GPT2-3.5B and pre-train it on a 50 GB medical (PubMed) corpus. We use 32 NVIDIA A100 GPUs for about 7 days. Our Yuyuan-GPT2-3.5B is the largest open-source GPT2 model in the medical domain. We further allow the model to judge facts by computing perplexity (PPL). To accomplish question-and-answer functionality, we transform the phrase pattern from interrogative to declarative.
Usage
--------
### Loading Models
### Usage Examples
Citation
-----------
If you are using the resource for your work, please cite our paper:
You can also cite our website:
| [
"### ๅ ่ฝฝๆจกๅ Loading Models",
"### ไฝฟ็จ็คบไพ Usage Examples\n\n\nๅผ็จ Citation\n-----------\n\n\nๅฆๆๆจๅจๆจ็ๅทฅไฝไธญไฝฟ็จไบๆไปฌ็ๆจกๅ๏ผๅฏไปฅๅผ็จๆไปฌ็่ฎบๆ๏ผ\n\n\nIf you are using the resource for your work, please cite the our paper:\n\n\nไนๅฏไปฅๅผ็จๆไปฌ็็ฝ็ซ:\n\n\nYou can also cite our website:"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #en #arxiv-2209.02970 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n",
"### ๅ ่ฝฝๆจกๅ Loading Models",
"### ไฝฟ็จ็คบไพ Usage Examples\n\n\nๅผ็จ Citation\n-----------\n\n\nๅฆๆๆจๅจๆจ็ๅทฅไฝไธญไฝฟ็จไบๆไปฌ็ๆจกๅ๏ผๅฏไปฅๅผ็จๆไปฌ็่ฎบๆ๏ผ\n\n\nIf you are using the resource for your work, please cite the our paper:\n\n\nไนๅฏไปฅๅผ็จๆไปฌ็็ฝ็ซ:\n\n\nYou can also cite our website:"
] | [
52,
9,
85
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #en #arxiv-2209.02970 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n### ๅ ่ฝฝๆจกๅ Loading Models### ไฝฟ็จ็คบไพ Usage Examples\n\n\nๅผ็จ Citation\n-----------\n\n\nๅฆๆๆจๅจๆจ็ๅทฅไฝไธญไฝฟ็จไบๆไปฌ็ๆจกๅ๏ผๅฏไปฅๅผ็จๆไปฌ็่ฎบๆ๏ผ\n\n\nIf you are using the resource for your work, please cite the our paper:\n\n\nไนๅฏไปฅๅผ็จๆไปฌ็็ฝ็ซ:\n\n\nYou can also cite our website:"
] |
null | transformers | # Zhouwenwang-Unified-1.3B
- Main Page:[Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
## Brief Introduction
The Chinese unified model explored in cooperation with Zhuiyi Technology; an encoder-structure model with 1.3B parameters.
## Model Taxonomy
| Demand | Task | Series | Model | Parameter | Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| Special | Exploration | Zhouwenwang | TBD | 1.3B | Chinese |
## ๆจกๅไฟกๆฏ Model Information
IDEA็ ็ฉถ้ข่ฎค็ฅ่ฎก็ฎไธญๅฟ่ๅ่ฟฝไธ็งๆๆ้ๅฌๅธๆๅบ็ๅทๆๆฐ็ปๆ็ๅคงๆจกๅใ่ฏฅๆจกๅๅจ้ข่ฎญ็ป้ถๆฎตๆถ่่็ปไธLMๅMLM็ไปปๅก๏ผ่ฟ่ฎฉๅถๅๆถๅทๅค็ๆๅ็่งฃ็่ฝๅ๏ผๅนถไธๅขๅ ไบๆ่ฝฌไฝ็ฝฎ็ผ็ ๆๆฏใ็ฎๅๅทฒๆ13ไบฟๅๆฐ็Zhouwenwang-Unified-1.3Bๅคงๆจกๅ๏ผๆฏไธญๆ้ขๅไธญๅฏไปฅๅๆถๅLMๅMLMไปปๅก็ๆๅคง็ๆจกๅใๆไปฌๅ็ปญไผๆ็ปญๅจๆจกๅ่งๆจกใ็ฅ่ฏ่ๅฅใ็็ฃ่พๅฉไปปๅก็ญๆนๅไธๆญไผๅใ
A large-scale model (Zhouwenwang-Unified-1.3B) with a new structure proposed by IDEA CCNL and Zhuiyi Technology. The model considers the task of unifying LM (Language Modeling) and MLM (Masked Language Modeling) during the pre-training phase, which gives it both generative and comprehension capabilities, and applies rotary position encoding. At present, Zhouwenwang-Unified-1.3B with 1.3B parameters is the largest Chinese model that can do both LM and MLM tasks. In the future, we will continue to optimize it in the direction of model size, knowledge incorporation, and supervisory assistance tasks.
### ไธๆธธไปปๅก Performance
ไธๆธธไธญๆไปปๅก็ๅพๅ๏ผๆฒกๆๅไปปไฝๆฐๆฎๅขๅผบ๏ผใ
Scores on downstream Chinese tasks (without any data augmentation)
| ๆจกๅ Model | afqmc | tnews | iflytek | ocnli | cmnli | wsc | csl |
| :--------: | :-----: | :----: | :-----: | :----: | :----: | :----: | :----: |
| roberta-wwm-ext-large | 0.7514 | 0.5872 | 0.6152 | 0.7770 | 0.8140 | 0.8914 | 0.8600 |
| Zhouwenwang-Unified-1.3B | 0.7463 | 0.6036 | 0.6288 | 0.7654 | 0.7741 | 0.8849 | 0.8777 |
## ไฝฟ็จ Usage
ๅ ไธบ[transformers](https://github.com/huggingface/transformers)ๅบไธญๆฏๆฒกๆ Zhouwenwang-Unified-1.3B็ธๅณ็ๆจกๅ็ปๆ็๏ผๆไปฅไฝ ๅฏไปฅๅจๆไปฌ็[Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)ไธญๆพๅฐๅนถไธ่ฟ่กไปฃ็ ใ
Since there is no structure of Zhouwenwang-Unified-1.3B in the [transformers library](https://github.com/huggingface/transformers), you can find the structure of Zhouwenwang-Unified-1.3B and run the code in [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
```
### ๅ ่ฝฝๆจกๅ Loading Models
```python
from fengshen import RoFormerModel
from fengshen import RoFormerConfig
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-1.3B")
config = RoFormerConfig.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-1.3B")
model = RoFormerModel.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-1.3B")
```
### ไฝฟ็จ็คบไพ Usage Examples
ไฝ ๅฏไปฅไฝฟ็จ่ฏฅๆจกๅ่ฟ่ก็ปญๅไปปๅกใ
You can use the model for continuation writing tasks.
```python
from fengshen import RoFormerModel
from transformers import AutoTokenizer
import torch
import numpy as np
sentence = 'ๆธๅๅคงๅญฆไฝไบ'
max_length = 32
tokenizer = AutoTokenizer.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-1.3B")
model = RoFormerModel.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-1.3B")
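# Autoregressive sampling loop: each step re-encodes the growing prefix
# (with a [CLS] token prepended) and samples the next token from the
# model's distribution at the final position.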
for i in range(max_length):
encode = torch.tensor(
[[tokenizer.cls_token_id]+tokenizer.encode(sentence, add_special_tokens=False)]).long()
logits = model(encode)[0]
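    # RoFormerModel returns hidden states only; project them onto the tied
    # word-embedding matrix to recover vocabulary logits, then softmax.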
logits = torch.nn.functional.linear(
logits, model.embeddings.word_embeddings.weight)
logits = torch.nn.functional.softmax(
logits, dim=-1).cpu().detach().numpy()[0]
sentence = sentence + \
tokenizer.decode(int(np.random.choice(logits.shape[1], p=logits[-1])))
if sentence[-1] == 'ใ':
break
print(sentence)
```
## ๅผ็จ Citation
ๅฆๆๆจๅจๆจ็ๅทฅไฝไธญไฝฟ็จไบๆไปฌ็ๆจกๅ๏ผๅฏไปฅๅผ็จๆไปฌ็[่ฎบๆ](https://arxiv.org/abs/2209.02970)๏ผ
If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
ไนๅฏไปฅๅผ็จๆไปฌ็[็ฝ็ซ](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` | {"language": ["zh"], "license": "apache-2.0", "widget": [{"text": "\u751f\u6d3b\u7684\u771f\u8c1b\u662f[MASK]\u3002"}]} | IDEA-CCNL/Zhouwenwang-Unified-1.3B | null | [
"transformers",
"pytorch",
"megatron-bert",
"zh",
"arxiv:2209.02970",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2209.02970"
] | [
"zh"
] | TAGS
#transformers #pytorch #megatron-bert #zh #arxiv-2209.02970 #license-apache-2.0 #endpoints_compatible #region-us
| Zhouwenwang-Unified-1.3B
========================
* Main Page:Fengshenbang
* Github: Fengshenbang-LM
็ฎไป Brief Introduction
---------------------
ไธ่ฟฝไธ็งๆๅไฝๆข็ดข็ไธญๆ็ปไธๆจกๅ๏ผ13ไบฟๅๆฐ็็ผ็ ๅจ็ปๆๆจกๅใ
The Chinese unified model explored in cooperation with Zhuiyi Technology, the encoder structure model with 1.3B parameters.
ๆจกๅๅ็ฑป Model Taxonomy
-------------------
ๆจกๅไฟกๆฏ Model Information
----------------------
IDEA็ ็ฉถ้ข่ฎค็ฅ่ฎก็ฎไธญๅฟ่ๅ่ฟฝไธ็งๆๆ้ๅฌๅธๆๅบ็ๅทๆๆฐ็ปๆ็ๅคงๆจกๅใ่ฏฅๆจกๅๅจ้ข่ฎญ็ป้ถๆฎตๆถ่่็ปไธLMๅMLM็ไปปๅก๏ผ่ฟ่ฎฉๅถๅๆถๅทๅค็ๆๅ็่งฃ็่ฝๅ๏ผๅนถไธๅขๅ ไบๆ่ฝฌไฝ็ฝฎ็ผ็ ๆๆฏใ็ฎๅๅทฒๆ13ไบฟๅๆฐ็Zhouwenwang-Unified-1.3Bๅคงๆจกๅ๏ผๆฏไธญๆ้ขๅไธญๅฏไปฅๅๆถๅLMๅMLMไปปๅก็ๆๅคง็ๆจกๅใๆไปฌๅ็ปญไผๆ็ปญๅจๆจกๅ่งๆจกใ็ฅ่ฏ่ๅฅใ็็ฃ่พๅฉไปปๅก็ญๆนๅไธๆญไผๅใ
A large-scale model (Zhouwenwang-Unified-1.3B) with a new structure proposed by IDEA CCNL and Zhuiyi Technology. The model considers the task of unifying LM (Language Modeling) and MLM (Masked Language Modeling) during the pre-training phase, which gives it both generative and comprehension capabilities, and applies rotary position encoding. At present, Zhouwenwang-Unified-1.3B with 1.3B parameters is the largest Chinese model that can do both LM and MLM tasks. In the future, we will continue to optimize it in the direction of model size, knowledge incorporation, and supervisory assistance tasks.
### ไธๆธธไปปๅก Performance
ไธๆธธไธญๆไปปๅก็ๅพๅ๏ผๆฒกๆๅไปปไฝๆฐๆฎๅขๅผบ๏ผใ
Scores on downstream Chinese tasks (without any data augmentation)
ไฝฟ็จ Usage
--------
ๅ ไธบtransformersๅบไธญๆฏๆฒกๆ Zhouwenwang-Unified-1.3B็ธๅณ็ๆจกๅ็ปๆ็๏ผๆไปฅไฝ ๅฏไปฅๅจๆไปฌ็Fengshenbang-LMไธญๆพๅฐๅนถไธ่ฟ่กไปฃ็ ใ
Since there is no structure of Zhouwenwang-Unified-1.3B in the transformers library, you can find the structure of Zhouwenwang-Unified-1.3B and run the code in Fengshenbang-LM.
### ๅ ่ฝฝๆจกๅ Loading Models
### ไฝฟ็จ็คบไพ Usage Examples
ไฝ ๅฏไปฅไฝฟ็จ่ฏฅๆจกๅ่ฟ่ก็ปญๅไปปๅกใ
You can use the model for continuation writing tasks.
ๅผ็จ Citation
-----------
ๅฆๆๆจๅจๆจ็ๅทฅไฝไธญไฝฟ็จไบๆไปฌ็ๆจกๅ๏ผๅฏไปฅๅผ็จๆไปฌ็่ฎบๆ๏ผ
If you are using the resource for your work, please cite our paper:
ไนๅฏไปฅๅผ็จๆไปฌ็็ฝ็ซ:
You can also cite our website:
| [
"### ไธๆธธไปปๅก Performance\n\n\nไธๆธธไธญๆไปปๅก็ๅพๅ๏ผๆฒกๆๅไปปไฝๆฐๆฎๅขๅผบ๏ผใ\n\n\nScores on downstream chinese tasks (without any data augmentation)\n\n\n\nไฝฟ็จ Usage\n--------\n\n\nๅ ไธบtransformersๅบไธญๆฏๆฒกๆ Zhouwenwang-Unified-1.3B็ธๅ
ณ็ๆจกๅ็ปๆ็๏ผๆไปฅไฝ ๅฏไปฅๅจๆไปฌ็Fengshenbang-LMไธญๆพๅฐๅนถไธ่ฟ่กไปฃ็ ใ\n\n\nSince there is no structure of Zhouwenwang-Unified-1.3B in transformers library, you can find the structure of Zhouwenwang-Unified-1.3B and run the codes in Fengshenbang-LM.",
"### ๅ ่ฝฝๆจกๅ Loading Models",
"### ไฝฟ็จ็คบไพ Usage Examples\n\n\nไฝ ๅฏไปฅไฝฟ็จ่ฏฅๆจกๅ่ฟ่ก็ปญๅไปปๅกใ\n\n\nYou can use the model for continuation writing tasks.\n\n\nๅผ็จ Citation\n-----------\n\n\nๅฆๆๆจๅจๆจ็ๅทฅไฝไธญไฝฟ็จไบๆไปฌ็ๆจกๅ๏ผๅฏไปฅๅผ็จๆไปฌ็่ฎบๆ๏ผ\n\n\nIf you are using the resource for your work, please cite the our paper:\n\n\nไนๅฏไปฅๅผ็จๆไปฌ็็ฝ็ซ:\n\n\nYou can also cite our website:"
] | [
"TAGS\n#transformers #pytorch #megatron-bert #zh #arxiv-2209.02970 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### ไธๆธธไปปๅก Performance\n\n\nไธๆธธไธญๆไปปๅก็ๅพๅ๏ผๆฒกๆๅไปปไฝๆฐๆฎๅขๅผบ๏ผใ\n\n\nScores on downstream chinese tasks (without any data augmentation)\n\n\n\nไฝฟ็จ Usage\n--------\n\n\nๅ ไธบtransformersๅบไธญๆฏๆฒกๆ Zhouwenwang-Unified-1.3B็ธๅ
ณ็ๆจกๅ็ปๆ็๏ผๆไปฅไฝ ๅฏไปฅๅจๆไปฌ็Fengshenbang-LMไธญๆพๅฐๅนถไธ่ฟ่กไปฃ็ ใ\n\n\nSince there is no structure of Zhouwenwang-Unified-1.3B in transformers library, you can find the structure of Zhouwenwang-Unified-1.3B and run the codes in Fengshenbang-LM.",
"### ๅ ่ฝฝๆจกๅ Loading Models",
"### ไฝฟ็จ็คบไพ Usage Examples\n\n\nไฝ ๅฏไปฅไฝฟ็จ่ฏฅๆจกๅ่ฟ่ก็ปญๅไปปๅกใ\n\n\nYou can use the model for continuation writing tasks.\n\n\nๅผ็จ Citation\n-----------\n\n\nๅฆๆๆจๅจๆจ็ๅทฅไฝไธญไฝฟ็จไบๆไปฌ็ๆจกๅ๏ผๅฏไปฅๅผ็จๆไปฌ็่ฎบๆ๏ผ\n\n\nIf you are using the resource for your work, please cite the our paper:\n\n\nไนๅฏไปฅๅผ็จๆไปฌ็็ฝ็ซ:\n\n\nYou can also cite our website:"
] | [
44,
155,
9,
110
] | [
"TAGS\n#transformers #pytorch #megatron-bert #zh #arxiv-2209.02970 #license-apache-2.0 #endpoints_compatible #region-us \n### ไธๆธธไปปๅก Performance\n\n\nไธๆธธไธญๆไปปๅก็ๅพๅ๏ผๆฒกๆๅไปปไฝๆฐๆฎๅขๅผบ๏ผใ\n\n\nScores on downstream chinese tasks (without any data augmentation)\n\n\n\nไฝฟ็จ Usage\n--------\n\n\nๅ ไธบtransformersๅบไธญๆฏๆฒกๆ Zhouwenwang-Unified-1.3B็ธๅ
ณ็ๆจกๅ็ปๆ็๏ผๆไปฅไฝ ๅฏไปฅๅจๆไปฌ็Fengshenbang-LMไธญๆพๅฐๅนถไธ่ฟ่กไปฃ็ ใ\n\n\nSince there is no structure of Zhouwenwang-Unified-1.3B in transformers library, you can find the structure of Zhouwenwang-Unified-1.3B and run the codes in Fengshenbang-LM.### ๅ ่ฝฝๆจกๅ Loading Models### ไฝฟ็จ็คบไพ Usage Examples\n\n\nไฝ ๅฏไปฅไฝฟ็จ่ฏฅๆจกๅ่ฟ่ก็ปญๅไปปๅกใ\n\n\nYou can use the model for continuation writing tasks.\n\n\nๅผ็จ Citation\n-----------\n\n\nๅฆๆๆจๅจๆจ็ๅทฅไฝไธญไฝฟ็จไบๆไปฌ็ๆจกๅ๏ผๅฏไปฅๅผ็จๆไปฌ็่ฎบๆ๏ผ\n\n\nIf you are using the resource for your work, please cite the our paper:\n\n\nไนๅฏไปฅๅผ็จๆไปฌ็็ฝ็ซ:\n\n\nYou can also cite our website:"
] |
null | transformers |
# Zhouwenwang-Unified-110M
- Main Page:[Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
## ็ฎไป Brief Introduction
ไธ่ฟฝไธ็งๆๅไฝๆข็ดข็ไธญๆ็ปไธๆจกๅ๏ผ1.1ไบฟๅๆฐ็็ผ็ ๅจ็ปๆๆจกๅใ
The Chinese unified model explored in cooperation with Zhuiyi Technology, the encoder structure model with 110M parameters.
## ๆจกๅๅ็ฑป Model Taxonomy
| ้ๆฑ Demand | ไปปๅก Task | ็ณปๅ Series | ๆจกๅ Model | ๅๆฐ Parameter | ้ขๅค Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| ็นๆฎ Special | ๆข็ดข Exploration | ๅจๆ็ Zhouwenwang | ๅพๅฎ TBD | 110M | ไธญๆ Chinese |
## ๆจกๅไฟกๆฏ Model Information
IDEA็ ็ฉถ้ข่ฎค็ฅ่ฎก็ฎไธญๅฟ่ๅ่ฟฝไธ็งๆๆ้ๅฌๅธๆๅบ็ๅทๆๆฐ็ปๆ็ๅคงๆจกๅใ่ฏฅๆจกๅๅจ้ข่ฎญ็ป้ถๆฎตๆถ่่็ปไธLMๅMLM็ไปปๅก๏ผ่ฟ่ฎฉๅถๅๆถๅทๅค็ๆๅ็่งฃ็่ฝๅ๏ผๅนถไธๅขๅ ไบๆ่ฝฌไฝ็ฝฎ็ผ็ ๆๆฏใๆไปฌๅ็ปญไผๆ็ปญๅจๆจกๅ่งๆจกใ็ฅ่ฏ่ๅฅใ็็ฃ่พๅฉไปปๅก็ญๆนๅไธๆญไผๅใ
A large-scale model (Zhouwenwang-Unified-110M) with a new structure proposed by IDEA CCNL and Zhuiyi Technology. The model considers the task of unifying LM (Language Modeling) and MLM (Masked Language Modeling) during the pre-training phase, which gives it both generative and comprehension capabilities, and applies rotary position encoding. In the future, we will continue to optimize it in the direction of model size, knowledge incorporation, and supervisory assistance tasks.
## ไฝฟ็จ Usage
ๅ ไธบ[transformers](https://github.com/huggingface/transformers)ๅบไธญๆฏๆฒกๆ Zhouwenwang-Unified-110M็ธๅณ็ๆจกๅ็ปๆ็๏ผๆไปฅไฝ ๅฏไปฅๅจๆไปฌ็[Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)ไธญๆพๅฐๅนถไธ่ฟ่กไปฃ็ ใ
Since there is no structure of Zhouwenwang-Unified-110M in the [transformers library](https://github.com/huggingface/transformers), you can find the structure of Zhouwenwang-Unified-110M and run the code in [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
```
### ๅ ่ฝฝๆจกๅ Loading Models
```python
from fengshen import RoFormerModel
from fengshen import RoFormerConfig
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-110M")
config = RoFormerConfig.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-110M")
model = RoFormerModel.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-110M")
```
### ไฝฟ็จ็คบไพ Usage Examples
ไฝ ๅฏไปฅไฝฟ็จ่ฏฅๆจกๅ่ฟ่ก็ปญๅไปปๅกใ
You can use the model for continuation writing tasks.
```python
from fengshen import RoFormerModel
from transformers import AutoTokenizer
import torch
import numpy as np
sentence = 'ๆธๅๅคงๅญฆไฝไบ'
max_length = 32
tokenizer = AutoTokenizer.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-110M")
model = RoFormerModel.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-110M")
for i in range(max_length):
encode = torch.tensor(
[[tokenizer.cls_token_id]+tokenizer.encode(sentence, add_special_tokens=False)]).long()
logits = model(encode)[0]
logits = torch.nn.functional.linear(
logits, model.embeddings.word_embeddings.weight)
logits = torch.nn.functional.softmax(
logits, dim=-1).cpu().detach().numpy()[0]
sentence = sentence + \
tokenizer.decode(int(np.random.choice(logits.shape[1], p=logits[-1])))
if sentence[-1] == 'ใ':
break
print(sentence)
```
## ๅผ็จ Citation
ๅฆๆๆจๅจๆจ็ๅทฅไฝไธญไฝฟ็จไบๆไปฌ็ๆจกๅ๏ผๅฏไปฅๅผ็จๆไปฌ็[่ฎบๆ](https://arxiv.org/abs/2209.02970)๏ผ
If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
ไนๅฏไปฅๅผ็จๆไปฌ็[็ฝ็ซ](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
| {"language": ["zh"], "license": "apache-2.0", "widget": [{"text": "\u751f\u6d3b\u7684\u771f\u8c1b\u662f[MASK]\u3002"}]} | IDEA-CCNL/Zhouwenwang-Unified-110M | null | [
"transformers",
"pytorch",
"megatron-bert",
"zh",
"arxiv:2209.02970",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2209.02970"
] | [
"zh"
] | TAGS
#transformers #pytorch #megatron-bert #zh #arxiv-2209.02970 #license-apache-2.0 #endpoints_compatible #region-us
| Zhouwenwang-Unified-110M
========================
* Main Page:Fengshenbang
* Github: Fengshenbang-LM
็ฎไป Brief Introduction
---------------------
ไธ่ฟฝไธ็งๆๅไฝๆข็ดข็ไธญๆ็ปไธๆจกๅ๏ผ1.1ไบฟๅๆฐ็็ผ็ ๅจ็ปๆๆจกๅใ
The Chinese unified model explored in cooperation with Zhuiyi Technology, the encoder structure model with 110M parameters.
ๆจกๅๅ็ฑป Model Taxonomy
-------------------
ๆจกๅไฟกๆฏ Model Information
----------------------
IDEA็ ็ฉถ้ข่ฎค็ฅ่ฎก็ฎไธญๅฟ่ๅ่ฟฝไธ็งๆๆ้ๅฌๅธๆๅบ็ๅทๆๆฐ็ปๆ็ๅคงๆจกๅใ่ฏฅๆจกๅๅจ้ข่ฎญ็ป้ถๆฎตๆถ่่็ปไธLMๅMLM็ไปปๅก๏ผ่ฟ่ฎฉๅถๅๆถๅทๅค็ๆๅ็่งฃ็่ฝๅ๏ผๅนถไธๅขๅ ไบๆ่ฝฌไฝ็ฝฎ็ผ็ ๆๆฏใๆไปฌๅ็ปญไผๆ็ปญๅจๆจกๅ่งๆจกใ็ฅ่ฏ่ๅฅใ็็ฃ่พๅฉไปปๅก็ญๆนๅไธๆญไผๅใ
A large-scale model (Zhouwenwang-Unified-110M) with a new structure proposed by IDEA CCNL and Zhuiyi Technology. The model considers the task of unifying LM (Language Modeling) and MLM (Masked Language Modeling) during the pre-training phase, which gives it both generative and comprehension capabilities, and applies rotary position encoding. In the future, we will continue to optimize it in the direction of model size, knowledge incorporation, and supervisory assistance tasks.
ไฝฟ็จ Usage
--------
ๅ ไธบtransformersๅบไธญๆฏๆฒกๆ Zhouwenwang-Unified-110M็ธๅณ็ๆจกๅ็ปๆ็๏ผๆไปฅไฝ ๅฏไปฅๅจๆไปฌ็Fengshenbang-LMไธญๆพๅฐๅนถไธ่ฟ่กไปฃ็ ใ
Since there is no structure of Zhouwenwang-Unified-110M in the transformers library, you can find the structure of Zhouwenwang-Unified-110M and run the code in Fengshenbang-LM.
### ๅ ่ฝฝๆจกๅ Loading Models
### ไฝฟ็จ็คบไพ Usage Examples
ไฝ ๅฏไปฅไฝฟ็จ่ฏฅๆจกๅ่ฟ่ก็ปญๅไปปๅกใ
You can use the model for continuation writing tasks.
ๅผ็จ Citation
-----------
ๅฆๆๆจๅจๆจ็ๅทฅไฝไธญไฝฟ็จไบๆไปฌ็ๆจกๅ๏ผๅฏไปฅๅผ็จๆไปฌ็่ฎบๆ๏ผ
If you are using the resource for your work, please cite our paper:
ไนๅฏไปฅๅผ็จๆไปฌ็็ฝ็ซ:
You can also cite our website:
| [
"### ๅ ่ฝฝๆจกๅ Loading Models",
"### ไฝฟ็จ็คบไพ Usage Examples\n\n\nไฝ ๅฏไปฅไฝฟ็จ่ฏฅๆจกๅ่ฟ่ก็ปญๅไปปๅกใ\n\n\nYou can use the model for continuation writing tasks.\n\n\nๅผ็จ Citation\n-----------\n\n\nๅฆๆๆจๅจๆจ็ๅทฅไฝไธญไฝฟ็จไบๆไปฌ็ๆจกๅ๏ผๅฏไปฅๅผ็จๆไปฌ็่ฎบๆ๏ผ\n\n\nIf you are using the resource for your work, please cite the our paper:\n\n\nไนๅฏไปฅๅผ็จๆไปฌ็็ฝ็ซ:\n\n\nYou can also cite our website:"
] | [
"TAGS\n#transformers #pytorch #megatron-bert #zh #arxiv-2209.02970 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### ๅ ่ฝฝๆจกๅ Loading Models",
"### ไฝฟ็จ็คบไพ Usage Examples\n\n\nไฝ ๅฏไปฅไฝฟ็จ่ฏฅๆจกๅ่ฟ่ก็ปญๅไปปๅกใ\n\n\nYou can use the model for continuation writing tasks.\n\n\nๅผ็จ Citation\n-----------\n\n\nๅฆๆๆจๅจๆจ็ๅทฅไฝไธญไฝฟ็จไบๆไปฌ็ๆจกๅ๏ผๅฏไปฅๅผ็จๆไปฌ็่ฎบๆ๏ผ\n\n\nIf you are using the resource for your work, please cite the our paper:\n\n\nไนๅฏไปฅๅผ็จๆไปฌ็็ฝ็ซ:\n\n\nYou can also cite our website:"
] | [
44,
9,
110
] | [
"TAGS\n#transformers #pytorch #megatron-bert #zh #arxiv-2209.02970 #license-apache-2.0 #endpoints_compatible #region-us \n### ๅ ่ฝฝๆจกๅ Loading Models### ไฝฟ็จ็คบไพ Usage Examples\n\n\nไฝ ๅฏไปฅไฝฟ็จ่ฏฅๆจกๅ่ฟ่ก็ปญๅไปปๅกใ\n\n\nYou can use the model for continuation writing tasks.\n\n\nๅผ็จ Citation\n-----------\n\n\nๅฆๆๆจๅจๆจ็ๅทฅไฝไธญไฝฟ็จไบๆไปฌ็ๆจกๅ๏ผๅฏไปฅๅผ็จๆไปฌ็่ฎบๆ๏ผ\n\n\nIf you are using the resource for your work, please cite the our paper:\n\n\nไนๅฏไปฅๅผ็จๆไปฌ็็ฝ็ซ:\n\n\nYou can also cite our website:"
] |
text-generation | transformers |
# Rick And Morty DialoGPT Model | {"tags": ["conversational"]} | ILoveThatLady/DialoGPT-small-rickandmorty | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick And Morty DialoGPT Model | [
"# Rick And Morty DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick And Morty DialoGPT Model"
] | [
39,
9
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Rick And Morty DialoGPT Model"
] |
fill-mask | transformers | # Slovak RoBERTA Masked Language Model
### 83Mil Parameters in small model
Medium and Large models coming soon!
RoBERTA pretrained tokenizer vocab and merges included.
---
## Training params:
- **Dataset**:
8GB Slovak Monolingual dataset including ParaCrawl (monolingual), OSCAR, and several gigs of my own findings and cleaning.
- **Preprocessing**:
Tokenized with a pretrained ByteLevelBPETokenizer trained on the same dataset. Uncased, with s, pad, /s, unk, and mask special tokens.
- **Evaluation results**:
- Mnoho ľudí tu MASK
- žije.
- žijú.
- je.
- trpí.
- Ako sa MASK
- máte
- máš
- má
- hovorí
- Plážová sezóna pod Zoborom patrí medzi MASK obdobia.
- ročné
- najkrajšie
- najobľúbenejšie
- najnáročnejšie
- **Limitations**:
The current model is fairly small, although it works very well. This model is meant to be finetuned on downstream tasks e.g. Part-of-Speech tagging, Question Answering, anything in GLUE or SUPERGLUE.
- **Credit**:
If you use this or any of my models in research or professional work, please credit me - Christopher Brousseau in said work. | {} | IMJONEZZ/SlovenBERTcina | null | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| # Slovak RoBERTA Masked Language Model
### 83Mil Parameters in small model
Medium and Large models coming soon!
RoBERTA pretrained tokenizer vocab and merges included.
---
## Training params:
- Dataset:
8GB Slovak Monolingual dataset including ParaCrawl (monolingual), OSCAR, and several gigs of my own findings and cleaning.
- Preprocessing:
Tokenized with a pretrained ByteLevelBPETokenizer trained on the same dataset. Uncased, with s, pad, /s, unk, and mask special tokens.
- Evaluation results:
- Mnoho ľudí tu MASK
- žije.
- žijú.
- je.
- trpí.
- Ako sa MASK
- máte
- máš
- má
- hovorí
- Plážová sezóna pod Zoborom patrí medzi MASK obdobia.
- ročné
- najkrajšie
- najobľúbenejšie
- najnáročnejšie
- Limitations:
The current model is fairly small, although it works very well. This model is meant to be finetuned on downstream tasks e.g. Part-of-Speech tagging, Question Answering, anything in GLUE or SUPERGLUE.
- Credit:
If you use this or any of my models in research or professional work, please credit me - Christopher Brousseau in said work. | [] | [
"TAGS\n#transformers #pytorch #safetensors #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
32
] | [
"TAGS\n#transformers #pytorch #safetensors #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification | transformers |
# Hate Speech Classifier for Social Media Content in English Language
A monolingual model for hate speech classification of social media content in English language. The model was trained on 103190 YouTube comments and tested on an independent test set of 20554 YouTube comments. It is based on English BERT base pre-trained language model.
## Please cite:
Kralj Novak, P., Scantamburlo, T., Pelicon, A., Cinelli, M., Mozetič, I., & Zollo, F. (2022, July). __Handling disagreement in hate speech modelling__. In International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (pp. 681-695). Cham: Springer International Publishing.
https://link.springer.com/chapter/10.1007/978-3-031-08974-9_54
## Tokenizer
During training the text was preprocessed using the original English BERT base tokenizer. We suggest the same tokenizer is used for inference.
## Model output
The model classifies each input into one of four distinct classes:
* 0 - acceptable
* 1 - inappropriate
* 2 - offensive
* 3 - violent
Details on data acquisition and labeling including the Annotation guidelines:
http://imsypp.ijs.si/wp-content/uploads/2021/12/IMSyPP_D2.2_multilingual-dataset.pdf
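For reference, inference can look like the following sketch using the `transformers` pipeline API; the raw label names `LABEL_0` to `LABEL_3` are assumed to mirror the class ids above:

```python
from transformers import pipeline

# The card recommends the original English BERT base tokenizer;
# it ships with the checkpoint, so loading by id is sufficient.
classifier = pipeline(
    "text-classification",
    model="IMSyPP/hate_speech_en",
    tokenizer="IMSyPP/hate_speech_en",
)

# Assumed mapping from raw label names to the documented classes.
id2class = {
    "LABEL_0": "acceptable",
    "LABEL_1": "inappropriate",
    "LABEL_2": "offensive",
    "LABEL_3": "violent",
}

pred = classifier("My name is Mark and I live in London.")[0]
print(id2class.get(pred["label"], pred["label"]), pred["score"])
```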
| {"language": ["en"], "license": "mit", "widget": [{"text": "My name is Mark and I live in London. I am a postgraduate student at Queen Mary University."}]} | IMSyPP/hate_speech_en | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #bert #text-classification #en #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Hate Speech Classifier for Social Media Content in English Language
A monolingual model for hate speech classification of social media content in English language. The model was trained on 103190 YouTube comments and tested on an independent test set of 20554 YouTube comments. It is based on English BERT base pre-trained language model.
## Please cite:
Kralj Novak, P., Scantamburlo, T., Pelicon, A., Cinelli, M., Mozetič, I., & Zollo, F. (2022, July). __Handling disagreement in hate speech modelling__. In International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (pp. 681-695). Cham: Springer International Publishing.
URL
## Tokenizer
During training the text was preprocessed using the original English BERT base tokenizer. We suggest the same tokenizer is used for inference.
## Model output
The model classifies each input into one of four distinct classes:
* 0 - acceptable
* 1 - inappropriate
* 2 - offensive
* 3 - violent
Details on data acquisition and labeling including the Annotation guidelines:
URL
| [
"# Hate Speech Classifier for Social Media Content in English Language\n\nA monolingual model for hate speech classification of social media content in English language. The model was trained on 103190 YouTube comments and tested on an independent test set of 20554 YouTube comments. It is based on English BERT base pre-trained language model.",
"## Please cite:\nKralj Novak, P., Scantamburlo, T., Pelicon, A., Cinelli, M., Mozetiฤ, I., & Zollo, F. (2022, July). __Handling disagreement in hate speech modelling__. In International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (pp. 681-695). Cham: Springer International Publishing.\nURL",
"## Tokenizer\n\nDuring training the text was preprocessed using the original English BERT base tokenizer. We suggest the same tokenizer is used for inference.",
"## Model output\n\nThe model classifies each input into one of four distinct classes:\n* 0 - acceptable\n* 1 - inappropriate\n* 2 - offensive\n* 3 - violent\n\n\nDetails on data acquisition and labeling including the Annotation guidelines: \nURL"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #en #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Hate Speech Classifier for Social Media Content in English Language\n\nA monolingual model for hate speech classification of social media content in English language. The model was trained on 103190 YouTube comments and tested on an independent test set of 20554 YouTube comments. It is based on English BERT base pre-trained language model.",
"## Please cite:\nKralj Novak, P., Scantamburlo, T., Pelicon, A., Cinelli, M., Mozetiฤ, I., & Zollo, F. (2022, July). __Handling disagreement in hate speech modelling__. In International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (pp. 681-695). Cham: Springer International Publishing.\nURL",
"## Tokenizer\n\nDuring training the text was preprocessed using the original English BERT base tokenizer. We suggest the same tokenizer is used for inference.",
"## Model output\n\nThe model classifies each input into one of four distinct classes:\n* 0 - acceptable\n* 1 - inappropriate\n* 2 - offensive\n* 3 - violent\n\n\nDetails on data acquisition and labeling including the Annotation guidelines: \nURL"
] | [
38,
65,
102,
33,
48
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #en #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n# Hate Speech Classifier for Social Media Content in English Language\n\nA monolingual model for hate speech classification of social media content in English language. The model was trained on 103190 YouTube comments and tested on an independent test set of 20554 YouTube comments. It is based on English BERT base pre-trained language model.## Please cite:\nKralj Novak, P., Scantamburlo, T., Pelicon, A., Cinelli, M., Mozetiฤ, I., & Zollo, F. (2022, July). __Handling disagreement in hate speech modelling__. In International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (pp. 681-695). Cham: Springer International Publishing.\nURL## Tokenizer\n\nDuring training the text was preprocessed using the original English BERT base tokenizer. We suggest the same tokenizer is used for inference.## Model output\n\nThe model classifies each input into one of four distinct classes:\n* 0 - acceptable\n* 1 - inappropriate\n* 2 - offensive\n* 3 - violent\n\n\nDetails on data acquisition and labeling including the Annotation guidelines: \nURL"
] |
text-classification | transformers |
# Hate Speech Classifier for Social Media Content in Italian Language
A monolingual model for hate speech classification of social media content in Italian language. The model was trained on 119,670 YouTube comments and tested on an independent test set of 21,072 YouTube comments. It is based on Italian ALBERTO pre-trained language model.
## Please cite:
Kralj Novak, P., Scantamburlo, T., Pelicon, A., Cinelli, M., Mozetič, I., & Zollo, F. (2022, July). __Handling disagreement in hate speech modelling__. In International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (pp. 681-695). Cham: Springer International Publishing.
https://link.springer.com/chapter/10.1007/978-3-031-08974-9_54
## Tokenizer
During training the text was preprocessed using the original Italian ALBERTO tokenizer. We suggest the same tokenizer is used for inference.
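A minimal inference sketch (loading by hub id picks up the recommended ALBERTO-based tokenizer; class ids follow the Model output section below):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("IMSyPP/hate_speech_it")
model = AutoModelForSequenceClassification.from_pretrained("IMSyPP/hate_speech_it")

inputs = tokenizer("Ciao, mi chiamo Marcantonio.", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
# Index i corresponds to class i in the Model output section below.
print(probs.tolist())
```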
## Model output
The model classifies each input into one of four distinct classes:
* 0 - acceptable
* 1 - inappropriate
* 2 - offensive
* 3 - violent | {"language": ["it"], "license": "mit", "widget": [{"text": "Ciao, mi chiamo Marcantonio, sono di Roma. Studio informatica all'Universit\u00e0 di Roma."}]} | IMSyPP/hate_speech_it | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"it",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"it"
] | TAGS
#transformers #pytorch #bert #text-classification #it #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# Hate Speech Classifier for Social Media Content in Italian Language
A monolingual model for hate speech classification of social media content in Italian language. The model was trained on 119,670 YouTube comments and tested on an independent test set of 21,072 YouTube comments. It is based on Italian ALBERTO pre-trained language model.
## Please cite:
Kralj Novak, P., Scantamburlo, T., Pelicon, A., Cinelli, M., Mozetič, I., & Zollo, F. (2022, July). __Handling disagreement in hate speech modelling__. In International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (pp. 681-695). Cham: Springer International Publishing.
URL
## Tokenizer
During training the text was preprocessed using the original Italian ALBERTO tokenizer. We suggest the same tokenizer is used for inference.
## Model output
The model classifies each input into one of four distinct classes:
* 0 - acceptable
* 1 - inappropriate
* 2 - offensive
* 3 - violent | [
"# Hate Speech Classifier for Social Media Content in Italian Language\n\nA monolingual model for hate speech classification of social media content in Italian language. The model was trained on 119,670 YouTube comments and tested on an independent test set of 21,072 YouTube comments. It is based on Italian ALBERTO pre-trained language model.",
"## Please cite:\nKralj Novak, P., Scantamburlo, T., Pelicon, A., Cinelli, M., Mozetiฤ, I., & Zollo, F. (2022, July). __Handling disagreement in hate speech modelling__. In International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (pp. 681-695). Cham: Springer International Publishing.\nURL",
"## Tokenizer\n\nDuring training the text was preprocessed using the original Italian ALBERTO tokenizer. We suggest the same tokenizer is used for inference.",
"## Model output\n\nThe model classifies each input into one of four distinct classes:\n\n* 0 - acceptable\n\n* 1 - inappropriate\n\n* 2 - offensive\n\n* 3 - violent"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #it #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# Hate Speech Classifier for Social Media Content in Italian Language\n\nA monolingual model for hate speech classification of social media content in Italian language. The model was trained on 119,670 YouTube comments and tested on an independent test set of 21,072 YouTube comments. It is based on Italian ALBERTO pre-trained language model.",
"## Please cite:\nKralj Novak, P., Scantamburlo, T., Pelicon, A., Cinelli, M., Mozetiฤ, I., & Zollo, F. (2022, July). __Handling disagreement in hate speech modelling__. In International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (pp. 681-695). Cham: Springer International Publishing.\nURL",
"## Tokenizer\n\nDuring training the text was preprocessed using the original Italian ALBERTO tokenizer. We suggest the same tokenizer is used for inference.",
"## Model output\n\nThe model classifies each input into one of four distinct classes:\n\n* 0 - acceptable\n\n* 1 - inappropriate\n\n* 2 - offensive\n\n* 3 - violent"
] | [
34,
66,
102,
32,
33
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #it #license-mit #autotrain_compatible #endpoints_compatible #region-us \n# Hate Speech Classifier for Social Media Content in Italian Language\n\nA monolingual model for hate speech classification of social media content in Italian language. The model was trained on 119,670 YouTube comments and tested on an independent test set of 21,072 YouTube comments. It is based on Italian ALBERTO pre-trained language model.## Please cite:\nKralj Novak, P., Scantamburlo, T., Pelicon, A., Cinelli, M., Mozetiฤ, I., & Zollo, F. (2022, July). __Handling disagreement in hate speech modelling__. In International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (pp. 681-695). Cham: Springer International Publishing.\nURL## Tokenizer\n\nDuring training the text was preprocessed using the original Italian ALBERTO tokenizer. We suggest the same tokenizer is used for inference.## Model output\n\nThe model classifies each input into one of four distinct classes:\n\n* 0 - acceptable\n\n* 1 - inappropriate\n\n* 2 - offensive\n\n* 3 - violent"
] |
text-classification | transformers |
# Hate Speech Classifier for Social Media Content in Dutch
A monolingual model for hate speech classification of social media content in Dutch. The model was trained on 20000 social media posts (YouTube, Twitter, Facebook) and tested on an independent test set of 2000 posts. It is based on the pre-trained language model [BERTje](https://huggingface.co/wietsedv/bert-base-dutch-cased).
## Tokenizer
During training the text was preprocessed using the BERTje tokenizer. We suggest the same tokenizer is used for inference.
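A batched-inference sketch under the same assumptions (class ids are documented in the Model output section below):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("IMSyPP/hate_speech_nl")
model = AutoModelForSequenceClassification.from_pretrained("IMSyPP/hate_speech_nl")

posts = ["Wat een mooie dag!", "Dit is een test."]  # illustrative inputs
# Pad the batch so posts of different lengths can be scored together.
batch = tokenizer(posts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    pred_ids = model(**batch).logits.argmax(dim=-1)
print(pred_ids.tolist())  # each id maps to a class in Model output below
```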
## Model output
The model classifies each input into one of four distinct classes:
* 0 - acceptable
* 1 - inappropriate
* 2 - offensive
* 3 - violent | {"language": ["nl"], "license": "mit"} | IMSyPP/hate_speech_nl | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"nl",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"nl"
] | TAGS
#transformers #pytorch #bert #text-classification #nl #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# Hate Speech Classifier for Social Media Content in Dutch
A monolingual model for hate speech classification of social media content in Dutch. The model was trained on 20000 social media posts (YouTube, Twitter, Facebook) and tested on an independent test set of 2000 posts. It is based on the pre-trained language model BERTje.
## Tokenizer
During training the text was preprocessed using the BERTje tokenizer. We suggest the same tokenizer is used for inference.
## Model output
The model classifies each input into one of four distinct classes:
* 0 - acceptable
* 1 - inappropriate
* 2 - offensive
* 3 - violent | [
"# Hate Speech Classifier for Social Media Content in Dutch\n\nA monolingual model for hate speech classification of social media content in Dutch. The model was trained on 20000 social media posts (youtube, twitter, facebook) and tested on an independent test set of 2000 posts. It is based on thepre-trained language model BERTje.",
"## Tokenizer\n\nDuring training the text was preprocessed using the BERTje tokenizer. We suggest the same tokenizer is used for inference.",
"## Model output\n\nThe model classifies each input into one of four distinct classes:\n* 0 - acceptable\n* 1 - inappropriate\n* 2 - offensive\n* 3 - violent"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #nl #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# Hate Speech Classifier for Social Media Content in Dutch\n\nA monolingual model for hate speech classification of social media content in Dutch. The model was trained on 20000 social media posts (youtube, twitter, facebook) and tested on an independent test set of 2000 posts. It is based on thepre-trained language model BERTje.",
"## Tokenizer\n\nDuring training the text was preprocessed using the BERTje tokenizer. We suggest the same tokenizer is used for inference.",
"## Model output\n\nThe model classifies each input into one of four distinct classes:\n* 0 - acceptable\n* 1 - inappropriate\n* 2 - offensive\n* 3 - violent"
] | [
34,
68,
31,
33
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #nl #license-mit #autotrain_compatible #endpoints_compatible #region-us \n# Hate Speech Classifier for Social Media Content in Dutch\n\nA monolingual model for hate speech classification of social media content in Dutch. The model was trained on 20000 social media posts (youtube, twitter, facebook) and tested on an independent test set of 2000 posts. It is based on thepre-trained language model BERTje.## Tokenizer\n\nDuring training the text was preprocessed using the BERTje tokenizer. We suggest the same tokenizer is used for inference.## Model output\n\nThe model classifies each input into one of four distinct classes:\n* 0 - acceptable\n* 1 - inappropriate\n* 2 - offensive\n* 3 - violent"
] |
text-classification | transformers |
# Hate Speech Classifier for Social Media Content in Slovenian Language
A monolingual model for hate speech classification of social media content in Slovenian language. The model was trained on 50,000 Twitter comments and tested on an independent test set of 10,000 Twitter comments. It is based on multilingual CroSloEngual BERT pre-trained language model.
## Please cite:
Kralj Novak, P., Scantamburlo, T., Pelicon, A., Cinelli, M., Mozetič, I., & Zollo, F. (2022, July). __Handling disagreement in hate speech modelling__. In International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (pp. 681-695). Cham: Springer International Publishing.
https://link.springer.com/chapter/10.1007/978-3-031-08974-9_54
## Tokenizer
During training the text was preprocessed using the original CroSloEngual BERT tokenizer. We suggest the same tokenizer is used for inference.
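An inference sketch (the recommended CroSloEngual BERT tokenizer ships with the checkpoint; class ids are listed under Model output below):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("IMSyPP/hate_speech_slo")
model = AutoModelForSequenceClassification.from_pretrained("IMSyPP/hate_speech_slo")

inputs = tokenizer("Sem Mark in živim v Ljubljani.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(int(logits.argmax(dim=-1)))  # 0-3 per the Model output section
```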
## Model output
The model classifies each input into one of four distinct classes:
* 0 - acceptable
* 1 - inappropriate
* 2 - offensive
* 3 - violent | {"language": ["sl"], "license": "mit", "pipeline_tag": "text-classification", "inference": true, "widget": [{"text": "Sem Mark in \u017eivim v Ljubljani. Sem doktorski \u0161tudent na Mednarodni podiplomski \u0161oli Jo\u017eefa Stefana."}]} | IMSyPP/hate_speech_slo | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"sl",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sl"
] | TAGS
#transformers #pytorch #bert #text-classification #sl #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# Hate Speech Classifier for Social Media Content in Slovenian Language
A monolingual model for hate speech classification of social media content in Slovenian language. The model was trained on 50,000 Twitter comments and tested on an independent test set of 10,000 Twitter comments. It is based on multilingual CroSloEngual BERT pre-trained language model.
## Please cite:
Kralj Novak, P., Scantamburlo, T., Pelicon, A., Cinelli, M., Mozetič, I., & Zollo, F. (2022, July). __Handling disagreement in hate speech modelling__. In International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (pp. 681-695). Cham: Springer International Publishing.
URL
## Tokenizer
During training the text was preprocessed using the original CroSloEngual BERT tokenizer. We suggest the same tokenizer is used for inference.
## Model output
The model classifies each input into one of four distinct classes:
* 0 - acceptable
* 1 - inappropriate
* 2 - offensive
* 3 - violent | [
"# Hate Speech Classifier for Social Media Content in Slovenian Language\n\nA monolingual model for hate speech classification of social media content in Slovenian language. The model was trained on 50,000 Twitter comments and tested on an independent test set of 10,000 Twitter comments. It is based on multilingual CroSloEngual BERT pre-trained language model.",
"## Please cite:\nKralj Novak, P., Scantamburlo, T., Pelicon, A., Cinelli, M., Mozetiฤ, I., & Zollo, F. (2022, July). __Handling disagreement in hate speech modelling__. In International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (pp. 681-695). Cham: Springer International Publishing.\nURL",
"## Tokenizer\n\nDuring training the text was preprocessed using the original CroSloEngual BERT tokenizer. We suggest the same tokenizer is used for inference.",
"## Model output\n\nThe model classifies each input into one of four distinct classes:\n* 0 - acceptable\n* 1 - inappropriate\n* 2 - offensive\n* 3 - violent"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #sl #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# Hate Speech Classifier for Social Media Content in Slovenian Language\n\nA monolingual model for hate speech classification of social media content in Slovenian language. The model was trained on 50,000 Twitter comments and tested on an independent test set of 10,000 Twitter comments. It is based on multilingual CroSloEngual BERT pre-trained language model.",
"## Please cite:\nKralj Novak, P., Scantamburlo, T., Pelicon, A., Cinelli, M., Mozetiฤ, I., & Zollo, F. (2022, July). __Handling disagreement in hate speech modelling__. In International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (pp. 681-695). Cham: Springer International Publishing.\nURL",
"## Tokenizer\n\nDuring training the text was preprocessed using the original CroSloEngual BERT tokenizer. We suggest the same tokenizer is used for inference.",
"## Model output\n\nThe model classifies each input into one of four distinct classes:\n* 0 - acceptable\n* 1 - inappropriate\n* 2 - offensive\n* 3 - violent"
] | [
34,
72,
102,
36,
33
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #sl #license-mit #autotrain_compatible #endpoints_compatible #region-us \n# Hate Speech Classifier for Social Media Content in Slovenian Language\n\nA monolingual model for hate speech classification of social media content in Slovenian language. The model was trained on 50,000 Twitter comments and tested on an independent test set of 10,000 Twitter comments. It is based on multilingual CroSloEngual BERT pre-trained language model.## Please cite:\nKralj Novak, P., Scantamburlo, T., Pelicon, A., Cinelli, M., Mozetiฤ, I., & Zollo, F. (2022, July). __Handling disagreement in hate speech modelling__. In International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (pp. 681-695). Cham: Springer International Publishing.\nURL## Tokenizer\n\nDuring training the text was preprocessed using the original CroSloEngual BERT tokenizer. We suggest the same tokenizer is used for inference.## Model output\n\nThe model classifies each input into one of four distinct classes:\n* 0 - acceptable\n* 1 - inappropriate\n* 2 - offensive\n* 3 - violent"
] |
text-generation | transformers |
# Cyber Bones DialoGPT Model | {"tags": ["conversational"]} | ITNODove/DialoGPT-medium-cyberbones | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Cyber Bones DialoGPT Model | [
"# Cyber Bones DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Cyber Bones DialoGPT Model"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Cyber Bones DialoGPT Model"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on a dataset of Shakespeare's plays.
## Model description
The model is the original gpt-2 model fine-tuned on a custom dataset.
## Intended uses & limitations
The model can be used to generate Shakespearean-style text. Note that, because the training data comes from plays, their typographical structure may be reproduced in the output.
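A generation sketch follows; the sampling settings are illustrative rather than taken from the card:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Iacopo/Shakespear-GPT2")
model = AutoModelForCausalLM.from_pretrained("Iacopo/Shakespear-GPT2")

prompt = "HAMLET.\n"  # play-style prompts suit the training data
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=80,
    do_sample=True,  # sample rather than greedy-decode for variety
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```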
## Training and evaluation data
Trained with Shakespeare's plays corpus.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
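For reference, these roughly correspond to the following `TrainingArguments` sketch (`output_dir` is illustrative; the Adam betas and epsilon above are the library defaults):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="output",            # illustrative path
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2.0,
)
```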
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.11.0
| {"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "output", "results": []}]} | Iacopo/Shakespear-GPT2 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# output
This model is a fine-tuned version of gpt2 on a dataset of Shakespeare's plays.
## Model description
The model is the original gpt-2 model fine-tuned on a custom dataset.
## Intended uses & limitations
The model can be used to generate Shakespearean-style text. Note that, because the training data comes from plays, their typographical structure may be reproduced in the output.
## Training and evaluation data
Trained with Shakespeare's plays corpus.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.11.0
| [
"# output\n\nThis model is a fine-tuned version of gpt2 on a dataset of Shakespeare's plays.",
"## Model description\n\nThe model is the original gpt-2 model fine-tuned on a custom dataset.",
"## Intended uses & limitations\n\nThe model can be used to generate Shakespearean-like text. Consider that because it comes from plays, such a typographical structure might be reproduced.",
"## Training and evaluation data\n\nTrained with Shakespeare's plays corpus.",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.0\n- Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# output\n\nThis model is a fine-tuned version of gpt2 on a dataset of Shakespeare's plays.",
"## Model description\n\nThe model is the original gpt-2 model fine-tuned on a custom dataset.",
"## Intended uses & limitations\n\nThe model can be used to generate Shakespearean-like text. Consider that because it comes from plays, such a typographical structure might be reproduced.",
"## Training and evaluation data\n\nTrained with Shakespeare's plays corpus.",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.0\n- Tokenizers 0.11.0"
] | [
46,
24,
23,
38,
14,
4,
95,
5,
47
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# output\n\nThis model is a fine-tuned version of gpt2 on a dataset of Shakespeare's plays.## Model description\n\nThe model is the original gpt-2 model fine-tuned on a custom dataset.## Intended uses & limitations\n\nThe model can be used to generate Shakespearean-like text. Consider that because it comes from plays, such a typographical structure might be reproduced.## Training and evaluation data\n\nTrained with Shakespeare's plays corpus.## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2.0### Training results### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.0\n- Tokenizers 0.11.0"
] |
text-generation | transformers |
# Hank Hill DialoGPT Model | {"tags": ["conversational"]} | Icemiser/chat-test | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Hank Hill DialoGPT Model | [
"# Hank Hill DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Hank Hill DialoGPT Model"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Hank Hill DialoGPT Model"
] |
text2text-generation | transformers | @inproceedings{adebara-abdul-mageed-2021-improving,
title = "Improving Similar Language Translation With Transfer Learning",
author = "Adebara, Ife and
Abdul-Mageed, Muhammad",
booktitle = "Proceedings of the Sixth Conference on Machine Translation",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.wmt-1.27",
pages = "273--278",
abstract = "We investigate transfer learning based on pre-trained neural machine translation models to translate between (low-resource) similar languages. This work is part of our contribution to the WMT 2021 Similar Languages Translation Shared Task where we submitted models for different language pairs, including French-Bambara, Spanish-Catalan, and Spanish-Portuguese in both directions. Our models for Catalan-Spanish (82.79 BLEU)and Portuguese-Spanish (87.11 BLEU) rank top 1 in the official shared task evaluation, and we are the only team to submit models for the French-Bambara pairs.",
} | {"language": ["bm", "fr"]} | Ife/BM-FR | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"bm",
"fr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"bm",
"fr"
] | TAGS
#transformers #pytorch #marian #text2text-generation #bm #fr #autotrain_compatible #endpoints_compatible #region-us
| @inproceedings{adebara-abdul-mageed-2021-improving,
title = "Improving Similar Language Translation With Transfer Learning",
author = "Adebara, Ife and
Abdul-Mageed, Muhammad",
booktitle = "Proceedings of the Sixth Conference on Machine Translation",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "URL
pages = "273--278",
abstract = "We investigate transfer learning based on pre-trained neural machine translation models to translate between (low-resource) similar languages. This work is part of our contribution to the WMT 2021 Similar Languages Translation Shared Task where we submitted models for different language pairs, including French-Bambara, Spanish-Catalan, and Spanish-Portuguese in both directions. Our models for Catalan-Spanish (82.79 BLEU)and Portuguese-Spanish (87.11 BLEU) rank top 1 in the official shared task evaluation, and we are the only team to submit models for the French-Bambara pairs.",
} | [] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #bm #fr #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
35
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #bm #fr #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text2text-generation | transformers | # Similar-Languages-MT | {} | Ife/CA-ES | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #marian #text2text-generation #autotrain_compatible #endpoints_compatible #region-us
| # Similar-Languages-MT | [
"# Similar-Languages-MT"
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n",
"# Similar-Languages-MT"
] | [
30,
6
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n# Similar-Languages-MT"
] |
question-answering | transformers | A distilbert model fine-tuned for question answering. | {"language": ["en"], "datasets": ["squad_v2", "wiki_qa"], "metrics": ["accuracy"], "pipeline_tag": "question-answering"} | Ifenna/dbert-3epoch | null | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"question-answering",
"en",
"dataset:squad_v2",
"dataset:wiki_qa",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #safetensors #distilbert #question-answering #en #dataset-squad_v2 #dataset-wiki_qa #endpoints_compatible #region-us
| A distilbert model fine-tuned for question answering. | [] | [
"TAGS\n#transformers #pytorch #safetensors #distilbert #question-answering #en #dataset-squad_v2 #dataset-wiki_qa #endpoints_compatible #region-us \n"
] | [
48
] | [
"TAGS\n#transformers #pytorch #safetensors #distilbert #question-answering #en #dataset-squad_v2 #dataset-wiki_qa #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
A fun little thing for the Discord))00)) https://discord.gg/HpeadKH
Offers
[email protected] | {"tags": ["ru", "4ulan"]} | Ifromspace/GRIEFSOFT-walr | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"ru",
"4ulan",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #ru #4ulan #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
A fun little thing for the Discord))00)) URL
Offers
work@URL | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #ru #4ulan #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
42
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #ru #4ulan #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
**Fork of https://huggingface.co/sberbank-ai/rugpt3large_based_on_gpt2**
A fun little toy for our Discord server))00))
ROADMAP:
- Collecting a small dataset from portal-fantasy ("popadantsy") books. <------------------------- Currently here.
- Fine-tuning the model.
- Shipping it to the Discord server.
https://discord.gg/HpeadKH | {"language": ["ru"], "tags": ["PyTorch", "Transformers", "4ulan"]} | Ifromspace/GRIEFSOFT | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"PyTorch",
"Transformers",
"4ulan",
"ru",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru"
] | TAGS
#transformers #pytorch #gpt2 #text-generation #PyTorch #Transformers #4ulan #ru #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Fork of URL
A fun little toy for our Discord server))00))
ROADMAP:
- Collecting a small dataset from portal-fantasy ("popadantsy") books. <------------------------- Currently here.
- Fine-tuning the model.
- Shipping it to the Discord server.
URL | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #PyTorch #Transformers #4ulan #ru #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
49
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #PyTorch #Transformers #4ulan #ru #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
summarization | transformers |
# MBARTRuSumGazeta
## Model description
This is a ported version of the [fairseq model](https://www.dropbox.com/s/fijtntnifbt9h0k/gazeta_mbart_v2_fairseq.tar.gz).
For more details, please see [Dataset for Automatic Summarization of Russian News](https://arxiv.org/abs/2006.11063).
## Intended uses & limitations
#### How to use
Colab: [link](https://colab.research.google.com/drive/1wdo_nPZPk6dWAn1J8nGx4Z5Ef82jCCob)
```python
from transformers import MBartTokenizer, MBartForConditionalGeneration
model_name = "IlyaGusev/mbart_ru_sum_gazeta"
tokenizer = MBartTokenizer.from_pretrained(model_name)
model = MBartForConditionalGeneration.from_pretrained(model_name)
article_text = "..."
input_ids = tokenizer(
[article_text],
max_length=600,
padding="max_length",
truncation=True,
return_tensors="pt",
)["input_ids"]
output_ids = model.generate(
input_ids=input_ids,
no_repeat_ngram_size=4
)[0]
summary = tokenizer.decode(output_ids, skip_special_tokens=True)
print(summary)
```
#### Limitations and bias
- The model should work well with Gazeta.ru articles, but for articles from any other outlet it may suffer from domain shift
## Training data
- Dataset: [Gazeta](https://huggingface.co/datasets/IlyaGusev/gazeta)
## Training procedure
- Fairseq training script: [train.sh](https://github.com/IlyaGusev/summarus/blob/master/external/bart_scripts/train.sh)
- Porting: [Colab link](https://colab.research.google.com/drive/13jXOlCpArV-lm4jZQ0VgOpj6nFBYrLAr)
## Eval results
* Train dataset: **Gazeta v1 train**
* Test dataset: **Gazeta v1 test**
* Source max_length: **600**
* Target max_length: **200**
* no_repeat_ngram_size: **4**
* num_beams: **5**
| Model | R-1-f | R-2-f | R-L-f | chrF | METEOR | BLEU | Avg char length |
|:--------------------------|:------|:------|:------|:-------|:-------|:-----|:-----|
| [mbart_ru_sum_gazeta](https://huggingface.co/IlyaGusev/mbart_ru_sum_gazeta) | **32.4** | 14.3 | 28.0 | 39.7 | **26.4** | 12.1 | 371 |
| [rut5_base_sum_gazeta](https://huggingface.co/IlyaGusev/rut5_base_sum_gazeta) | 32.2 | **14.4** | **28.1** | **39.8** | 25.7 | **12.3** | 330 |
| [rugpt3medium_sum_gazeta](https://huggingface.co/IlyaGusev/rugpt3medium_sum_gazeta) | 26.2 | 7.7 | 21.7 | 33.8 | 18.2 | 4.3 | 244 |
* Train dataset: **Gazeta v1 train**
* Test dataset: **Gazeta v2 test**
* Source max_length: **600**
* Target max_length: **200**
* no_repeat_ngram_size: **4**
* num_beams: **5**
| Model | R-1-f | R-2-f | R-L-f | chrF | METEOR | BLEU | Avg char length |
|:--------------------------|:------|:------|:------|:-------|:-------|:-----|:-----|
| [mbart_ru_sum_gazeta](https://huggingface.co/IlyaGusev/mbart_ru_sum_gazeta) | **28.7** | **11.1** | 24.4 | **37.3** | **22.7** | **9.4** | 373 |
| [rut5_base_sum_gazeta](https://huggingface.co/IlyaGusev/rut5_base_sum_gazeta) | 28.6 | **11.1** | **24.5** | 37.2 | 22.0 | **9.4** | 331 |
| [rugpt3medium_sum_gazeta](https://huggingface.co/IlyaGusev/rugpt3medium_sum_gazeta) | 24.1 | 6.5 | 19.8 | 32.1 | 16.3 | 3.6 | 242 |
Predicting all summaries:
```python
import json
import torch
from transformers import MBartTokenizer, MBartForConditionalGeneration
from datasets import load_dataset
def gen_batch(inputs, batch_size):
batch_start = 0
while batch_start < len(inputs):
yield inputs[batch_start: batch_start + batch_size]
batch_start += batch_size
def predict(
model_name,
input_records,
output_file,
max_source_tokens_count=600,
batch_size=4
):
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = MBartTokenizer.from_pretrained(model_name)
model = MBartForConditionalGeneration.from_pretrained(model_name).to(device)
predictions = []
    for batch in gen_batch(input_records, batch_size):
        texts = [r["text"] for r in batch]
        input_ids = tokenizer(
            texts,
return_tensors="pt",
padding="max_length",
truncation=True,
max_length=max_source_tokens_count
)["input_ids"].to(device)
output_ids = model.generate(
input_ids=input_ids,
no_repeat_ngram_size=4
)
summaries = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
for s in summaries:
print(s)
predictions.extend(summaries)
with open(output_file, "w") as w:
for p in predictions:
w.write(p.strip().replace("\n", " ") + "\n")
gazeta_test = load_dataset('IlyaGusev/gazeta', script_version="v1.0")["test"]
predict("IlyaGusev/mbart_ru_sum_gazeta", list(gazeta_test), "mbart_predictions.txt")
```
Evaluation: https://github.com/IlyaGusev/summarus/blob/master/evaluate.py
Flags: --language ru --tokenize-after --lower
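For a quick sanity check without the full script, here is a hedged sketch that scores the predictions file with sacrebleu (BLEU and chrF only; it lowercases as with `--lower`, but relies on sacrebleu's own tokenization rather than the exact `--tokenize-after` behaviour, and the `summary` field name is taken from the Gazeta dataset schema):

```python
# Rough stand-in for evaluate.py: corpus BLEU and chrF on mbart_predictions.txt.
# This approximates, but does not reproduce, the official metric pipeline.
import sacrebleu
from datasets import load_dataset

gazeta_test = load_dataset("IlyaGusev/gazeta", script_version="v1.0")["test"]
references = [r["summary"].lower() for r in gazeta_test]
with open("mbart_predictions.txt") as f:
    hypotheses = [line.strip().lower() for line in f]

assert len(hypotheses) == len(references)
print(sacrebleu.corpus_bleu(hypotheses, [references]))
print(sacrebleu.corpus_chrf(hypotheses, [references]))
```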
### BibTeX entry and citation info
```bibtex
@InProceedings{10.1007/978-3-030-59082-6_9,
author="Gusev, Ilya",
editor="Filchenkov, Andrey and Kauttonen, Janne and Pivovarova, Lidia",
title="Dataset for Automatic Summarization of Russian News",
booktitle="Artificial Intelligence and Natural Language",
year="2020",
publisher="Springer International Publishing",
address="Cham",
pages="122--134",
isbn="978-3-030-59082-6"
}
```
| {"language": ["ru"], "license": "apache-2.0", "tags": ["summarization", "mbart"], "datasets": ["IlyaGusev/gazeta"], "inference": {"parameters": {"no_repeat_ngram_size": 4}}, "widget": [{"text": "\u0412\u044b\u0441\u043e\u0442\u0430 \u0431\u0430\u0448\u043d\u0438 \u0441\u043e\u0441\u0442\u0430\u0432\u043b\u044f\u0435\u0442 324 \u043c\u0435\u0442\u0440\u0430 (1063 \u0444\u0443\u0442\u0430), \u043f\u0440\u0438\u043c\u0435\u0440\u043d\u043e \u0442\u0430\u043a\u0430\u044f \u0436\u0435 \u0432\u044b\u0441\u043e\u0442\u0430, \u043a\u0430\u043a \u0443 81-\u044d\u0442\u0430\u0436\u043d\u043e\u0433\u043e \u0437\u0434\u0430\u043d\u0438\u044f, \u0438 \u0441\u0430\u043c\u043e\u0435 \u0432\u044b\u0441\u043e\u043a\u043e\u0435 \u0441\u043e\u043e\u0440\u0443\u0436\u0435\u043d\u0438\u0435 \u0432 \u041f\u0430\u0440\u0438\u0436\u0435. \u0415\u0433\u043e \u043e\u0441\u043d\u043e\u0432\u0430\u043d\u0438\u0435 \u043a\u0432\u0430\u0434\u0440\u0430\u0442\u043d\u043e, \u0440\u0430\u0437\u043c\u0435\u0440\u043e\u043c 125 \u043c\u0435\u0442\u0440\u043e\u0432 (410 \u0444\u0443\u0442\u043e\u0432) \u0441 \u043b\u044e\u0431\u043e\u0439 \u0441\u0442\u043e\u0440\u043e\u043d\u044b. \u0412\u043e \u0432\u0440\u0435\u043c\u044f \u0441\u0442\u0440\u043e\u0438\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u0430 \u042d\u0439\u0444\u0435\u043b\u0435\u0432\u0430 \u0431\u0430\u0448\u043d\u044f \u043f\u0440\u0435\u0432\u0437\u043e\u0448\u043b\u0430 \u043c\u043e\u043d\u0443\u043c\u0435\u043d\u0442 \u0412\u0430\u0448\u0438\u043d\u0433\u0442\u043e\u043d\u0430, \u0441\u0442\u0430\u0432 \u0441\u0430\u043c\u044b\u043c \u0432\u044b\u0441\u043e\u043a\u0438\u043c \u0438\u0441\u043a\u0443\u0441\u0441\u0442\u0432\u0435\u043d\u043d\u044b\u043c \u0441\u043e\u043e\u0440\u0443\u0436\u0435\u043d\u0438\u0435\u043c \u0432 \u043c\u0438\u0440\u0435, \u0438 \u044d\u0442\u043e\u0442 \u0442\u0438\u0442\u0443\u043b \u043e\u043d\u0430 \u0443\u0434\u0435\u0440\u0436\u0438\u0432\u0430\u043b\u0430 \u0432 \u0442\u0435\u0447\u0435\u043d\u0438\u0435 41 \u0433\u043e\u0434\u0430 \u0434\u043e \u0437\u0430\u0432\u0435\u0440\u0448\u0435\u043d\u0438\u044f \u0441\u0442\u0440\u043e\u0438\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u043e \u0437\u0434\u0430\u043d\u0438\u044f \u041a\u0440\u0430\u0439\u0441\u043b\u0435\u0440 \u0432 \u041d\u044c\u044e-\u0419\u043e\u0440\u043a\u0435 \u0432 1930 \u0433\u043e\u0434\u0443. \u042d\u0442\u043e \u043f\u0435\u0440\u0432\u043e\u0435 \u0441\u043e\u043e\u0440\u0443\u0436\u0435\u043d\u0438\u0435 \u043a\u043e\u0442\u043e\u0440\u043e\u0435 \u0434\u043e\u0441\u0442\u0438\u0433\u043b\u043e \u0432\u044b\u0441\u043e\u0442\u044b 300 \u043c\u0435\u0442\u0440\u043e\u0432. \u0418\u0437-\u0437\u0430 \u0434\u043e\u0431\u0430\u0432\u043b\u0435\u043d\u0438\u044f \u0432\u0435\u0449\u0430\u0442\u0435\u043b\u044c\u043d\u043e\u0439 \u0430\u043d\u0442\u0435\u043d\u043d\u044b \u043d\u0430 \u0432\u0435\u0440\u0448\u0438\u043d\u0435 \u0431\u0430\u0448\u043d\u0438 \u0432 1957 \u0433\u043e\u0434\u0443 \u043e\u043d\u0430 \u0441\u0435\u0439\u0447\u0430\u0441 \u0432\u044b\u0448\u0435 \u0437\u0434\u0430\u043d\u0438\u044f \u041a\u0440\u0430\u0439\u0441\u043b\u0435\u0440 \u043d\u0430 5,2 \u043c\u0435\u0442\u0440\u0430 (17 \u0444\u0443\u0442\u043e\u0432). 
\u0417\u0430 \u0438\u0441\u043a\u043b\u044e\u0447\u0435\u043d\u0438\u0435\u043c \u043f\u0435\u0440\u0435\u0434\u0430\u0442\u0447\u0438\u043a\u043e\u0432, \u042d\u0439\u0444\u0435\u043b\u0435\u0432\u0430 \u0431\u0430\u0448\u043d\u044f \u044f\u0432\u043b\u044f\u0435\u0442\u0441\u044f \u0432\u0442\u043e\u0440\u043e\u0439 \u0441\u0430\u043c\u043e\u0439 \u0432\u044b\u0441\u043e\u043a\u043e\u0439 \u043e\u0442\u0434\u0435\u043b\u044c\u043d\u043e \u0441\u0442\u043e\u044f\u0449\u0435\u0439 \u0441\u0442\u0440\u0443\u043a\u0442\u0443\u0440\u043e\u0439 \u0432\u043e \u0424\u0440\u0430\u043d\u0446\u0438\u0438 \u043f\u043e\u0441\u043b\u0435 \u0432\u0438\u0430\u0434\u0443\u043a\u0430 \u041c\u0438\u0439\u043e.", "example_title": "\u0412\u0438\u043a\u0438\u043f\u0435\u0434\u0438\u044f"}, {"text": "\u0421 1 \u0441\u0435\u043d\u0442\u044f\u0431\u0440\u044f \u0432 \u0420\u043e\u0441\u0441\u0438\u0438 \u0432\u0441\u0442\u0443\u043f\u0430\u044e\u0442 \u0432 \u0441\u0438\u043b\u0443 \u043f\u043e\u043f\u0440\u0430\u0432\u043a\u0438 \u0432 \u0437\u0430\u043a\u043e\u043d \u00ab\u041e \u0431\u0430\u043d\u043a\u0440\u043e\u0442\u0441\u0442\u0432\u0435\u00bb \u2014 \u0442\u0435\u043f\u0435\u0440\u044c \u0434\u043e\u043b\u0436\u043d\u0438\u043a\u0438 \u0441\u043c\u043e\u0433\u0443\u0442 \u043e\u0441\u0432\u043e\u0431\u043e\u0436\u0434\u0430\u0442\u044c\u0441\u044f \u043e\u0442 \u043d\u0435\u043f\u043e\u0441\u0438\u043b\u044c\u043d\u044b\u0445 \u043e\u0431\u044f\u0437\u0430\u0442\u0435\u043b\u044c\u0441\u0442\u0432 \u0432\u043e \u0432\u043d\u0435\u0441\u0443\u0434\u0435\u0431\u043d\u043e\u043c \u043f\u043e\u0440\u044f\u0434\u043a\u0435, \u0435\u0441\u043b\u0438 \u0441\u0443\u043c\u043c\u0430 \u0437\u0430\u0434\u043e\u043b\u0436\u0435\u043d\u043d\u043e\u0441\u0442\u0438 \u0441\u043e\u0441\u0442\u0430\u0432\u043b\u044f\u0435\u0442 \u043d\u0435 \u043c\u0435\u043d\u0435\u0435 50 \u0442\u044b\u0441. \u0440\u0443\u0431\u043b\u0435\u0439 \u0438 \u043d\u0435 \u043f\u0440\u0435\u0432\u044b\u0448\u0430\u0435\u0442 500 \u0442\u044b\u0441. \u0440\u0443\u0431\u043b\u0435\u0439 \u0431\u0435\u0437 \u0443\u0447\u0435\u0442\u0430 \u0448\u0442\u0440\u0430\u0444\u043e\u0432, \u043f\u0435\u043d\u0438, \u043f\u0440\u043e\u0446\u0435\u043d\u0442\u043e\u0432 \u0437\u0430 \u043f\u0440\u043e\u0441\u0440\u043e\u0447\u043a\u0443 \u043f\u043b\u0430\u0442\u0435\u0436\u0430 \u0438 \u043f\u0440\u043e\u0447\u0438\u0445 \u0438\u043c\u0443\u0449\u0435\u0441\u0442\u0432\u0435\u043d\u043d\u044b\u0445 \u0438\u043b\u0438 \u0444\u0438\u043d\u0430\u043d\u0441\u043e\u0432\u044b\u0445 \u0441\u0430\u043d\u043a\u0446\u0438\u0439. 
\u0423 \u0444\u0438\u0437\u043b\u0438\u0446 \u0438 \u0438\u043d\u0434\u0438\u0432\u0438\u0434\u0443\u0430\u043b\u044c\u043d\u044b\u0445 \u043f\u0440\u0435\u0434\u043f\u0440\u0438\u043d\u0438\u043c\u0430\u0442\u0435\u043b\u0435\u0439 \u043f\u043e\u044f\u0432\u0438\u043b\u0430\u0441\u044c \u0432\u043e\u0437\u043c\u043e\u0436\u043d\u043e\u0441\u0442\u044c \u043f\u0440\u043e\u0439\u0442\u0438 \u043f\u0440\u043e\u0446\u0435\u0434\u0443\u0440\u0443 \u0431\u0430\u043d\u043a\u0440\u043e\u0442\u0441\u0442\u0432\u0430 \u0431\u0435\u0437 \u0443\u0447\u0430\u0441\u0442\u0438\u044f \u0441\u0443\u0434\u0430 \u0438 \u0444\u0438\u043d\u0430\u043d\u0441\u043e\u0432\u043e\u0433\u043e \u0443\u043f\u0440\u0430\u0432\u043b\u044f\u044e\u0449\u0435\u0433\u043e \u2014 \u0434\u043e\u0441\u0442\u0430\u0442\u043e\u0447\u043d\u043e \u043f\u043e\u0434\u0430\u0442\u044c \u0441\u043e\u043e\u0442\u0432\u0435\u0442\u0441\u0442\u0432\u0443\u044e\u0449\u0435\u0435 \u0437\u0430\u044f\u0432\u043b\u0435\u043d\u0438\u0435 \u0447\u0435\u0440\u0435\u0437 \u041c\u0424\u0426. \u0421\u0443\u043c\u043c\u0443 \u0437\u0430\u0434\u043e\u043b\u0436\u0435\u043d\u043d\u043e\u0441\u0442\u0438 \u0438 \u0441\u043f\u0438\u0441\u043e\u043a \u0432\u0441\u0435\u0445 \u0438\u0437\u0432\u0435\u0441\u0442\u043d\u044b\u0445 \u0437\u0430\u044f\u0432\u0438\u0442\u0435\u043b\u044e \u043a\u0440\u0435\u0434\u0438\u0442\u043e\u0440\u043e\u0432 \u043d\u0443\u0436\u043d\u043e \u043f\u0440\u0435\u0434\u043e\u0441\u0442\u0430\u0432\u0438\u0442\u044c \u0441\u0430\u043c\u043e\u0441\u0442\u043e\u044f\u0442\u0435\u043b\u044c\u043d\u043e. \u0415\u0441\u043b\u0438 \u0432\u0441\u0435 \u0443\u0441\u043b\u043e\u0432\u0438\u044f \u0441\u043e\u0431\u043b\u044e\u0434\u0435\u043d\u044b, \u0441\u0432\u0435\u0434\u0435\u043d\u0438\u044f \u0432\u043d\u0435\u0441\u0443\u0442 \u0432 \u0415\u0434\u0438\u043d\u044b\u0439 \u0444\u0435\u0434\u0435\u0440\u0430\u043b\u044c\u043d\u044b\u0439 \u0440\u0435\u0435\u0441\u0442\u0440 \u0432 \u0442\u0435\u0447\u0435\u043d\u0438\u0435 \u0442\u0440\u0435\u0445 \u0440\u0430\u0431\u043e\u0447\u0438\u0445 \u0434\u043d\u0435\u0439. \u041f\u0440\u0438 \u044d\u0442\u043e\u043c \u043d\u0430 \u043c\u043e\u043c\u0435\u043d\u0442 \u043f\u043e\u0434\u0430\u0447\u0438 \u0437\u0430\u044f\u0432\u043b\u0435\u043d\u0438\u044f \u0432 \u043e\u0442\u043d\u043e\u0448\u0435\u043d\u0438\u0438 \u0437\u0430\u044f\u0432\u0438\u0442\u0435\u043b\u044f \u0434\u043e\u043b\u0436\u043d\u043e \u0431\u044b\u0442\u044c \u043e\u043a\u043e\u043d\u0447\u0435\u043d\u043e \u0438\u0441\u043f\u043e\u043b\u043d\u0438\u0442\u0435\u043b\u044c\u043d\u043e\u0435 \u043f\u0440\u043e\u0438\u0437\u0432\u043e\u0434\u0441\u0442\u0432\u043e \u0441 \u0432\u043e\u0437\u0432\u0440\u0430\u0449\u0435\u043d\u0438\u0435\u043c \u0438\u0441\u043f\u043e\u043b\u043d\u0438\u0442\u0435\u043b\u044c\u043d\u043e\u0433\u043e \u0434\u043e\u043a\u0443\u043c\u0435\u043d\u0442\u0430 \u0432\u0437\u044b\u0441\u043a\u0430\u0442\u0435\u043b\u044e. \u042d\u0442\u043e \u0437\u043d\u0430\u0447\u0438\u0442, \u0447\u0442\u043e \u0443 \u043f\u043e\u0442\u0435\u043d\u0446\u0438\u0430\u043b\u044c\u043d\u043e\u0433\u043e \u0431\u0430\u043d\u043a\u0440\u043e\u0442\u0430 \u043d\u0435 \u0434\u043e\u043b\u0436\u043d\u043e \u0431\u044b\u0442\u044c \u0438\u043c\u0443\u0449\u0435\u0441\u0442\u0432\u0430, \u043a\u043e\u0442\u043e\u0440\u043e\u0435 \u043c\u043e\u0436\u043d\u043e \u0432\u0437\u044b\u0441\u043a\u0430\u0442\u044c. 
\u041a\u0440\u043e\u043c\u0435 \u0442\u043e\u0433\u043e, \u0432 \u043e\u0442\u043d\u043e\u0448\u0435\u043d\u0438\u0438 \u0433\u0440\u0430\u0436\u0434\u0430\u043d\u0438\u043d\u0430 \u043d\u0435 \u0434\u043e\u043b\u0436\u043d\u043e \u0431\u044b\u0442\u044c \u0432\u043e\u0437\u0431\u0443\u0436\u0434\u0435\u043d\u043e \u0434\u0440\u0443\u0433\u043e\u0435 \u0438\u0441\u043f\u043e\u043b\u043d\u0438\u0442\u0435\u043b\u044c\u043d\u043e\u0435 \u043f\u0440\u043e\u0438\u0437\u0432\u043e\u0434\u0441\u0442\u0432\u043e. \u0412 \u043f\u0435\u0440\u0438\u043e\u0434 \u0432\u0441\u0435\u0439 \u043f\u0440\u043e\u0446\u0435\u0434\u0443\u0440\u044b \u0437\u0430\u044f\u0432\u0438\u0442\u0435\u043b\u044c \u043d\u0435 \u0441\u043c\u043e\u0436\u0435\u0442 \u0431\u0440\u0430\u0442\u044c \u0437\u0430\u0439\u043c\u044b, \u043a\u0440\u0435\u0434\u0438\u0442\u044b, \u0432\u044b\u0434\u0430\u0432\u0430\u0442\u044c \u043f\u043e\u0440\u0443\u0447\u0438\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u0430, \u0441\u043e\u0432\u0435\u0440\u0448\u0430\u0442\u044c \u0438\u043d\u044b\u0435 \u043e\u0431\u0435\u0441\u043f\u0435\u0447\u0438\u0442\u0435\u043b\u044c\u043d\u044b\u0435 \u0441\u0434\u0435\u043b\u043a\u0438. \u0412\u043d\u0435\u0441\u0443\u0434\u0435\u0431\u043d\u043e\u0435 \u0431\u0430\u043d\u043a\u0440\u043e\u0442\u0441\u0442\u0432\u043e \u0431\u0443\u0434\u0435\u0442 \u0434\u043b\u0438\u0442\u044c\u0441\u044f \u0448\u0435\u0441\u0442\u044c \u043c\u0435\u0441\u044f\u0446\u0435\u0432, \u0432 \u0442\u0435\u0447\u0435\u043d\u0438\u0435 \u043a\u043e\u0442\u043e\u0440\u044b\u0445 \u0442\u0430\u043a\u0436\u0435 \u0431\u0443\u0434\u0435\u0442 \u0434\u0435\u0439\u0441\u0442\u0432\u043e\u0432\u0430\u0442\u044c \u043c\u043e\u0440\u0430\u0442\u043e\u0440\u0438\u0439 \u043d\u0430 \u0443\u0434\u043e\u0432\u043b\u0435\u0442\u0432\u043e\u0440\u0435\u043d\u0438\u0435 \u0442\u0440\u0435\u0431\u043e\u0432\u0430\u043d\u0438\u0439 \u043a\u0440\u0435\u0434\u0438\u0442\u043e\u0440\u043e\u0432, \u043e\u0442\u043c\u0435\u0447\u0435\u043d\u043d\u044b\u0445 \u0432 \u0437\u0430\u044f\u0432\u043b\u0435\u043d\u0438\u0438 \u0434\u043e\u043b\u0436\u043d\u0438\u043a\u0430, \u0438 \u043c\u043e\u0440\u0430\u0442\u043e\u0440\u0438\u0439 \u043e\u0431 \u0443\u043f\u043b\u0430\u0442\u0435 \u043e\u0431\u044f\u0437\u0430\u0442\u0435\u043b\u044c\u043d\u044b\u0445 \u043f\u043b\u0430\u0442\u0435\u0436\u0435\u0439. \u041a\u0440\u043e\u043c\u0435 \u0442\u043e\u0433\u043e, \u043f\u0440\u0435\u043a\u0440\u0430\u0449\u0430\u0435\u0442\u0441\u044f \u043d\u0430\u0447\u0438\u0441\u043b\u0435\u043d\u0438\u0435 \u043d\u0435\u0443\u0441\u0442\u043e\u0435\u043a \u0438 \u0438\u043d\u044b\u0445 \u0444\u0438\u043d\u0430\u043d\u0441\u043e\u0432\u044b\u0445 \u0441\u0430\u043d\u043a\u0446\u0438\u0439; \u0438\u043c\u0443\u0449\u0435\u0441\u0442\u0432\u0435\u043d\u043d\u044b\u0435 \u0432\u0437\u044b\u0441\u043a\u0430\u043d\u0438\u044f (\u043a\u0440\u043e\u043c\u0435 \u0430\u043b\u0438\u043c\u0435\u043d\u0442\u043e\u0432) \u0442\u0430\u043a\u0436\u0435 \u0431\u0443\u0434\u0443\u0442 \u043f\u0440\u0438\u043e\u0441\u0442\u0430\u043d\u043e\u0432\u043b\u0435\u043d\u044b. 
\u041f\u043e \u0437\u0430\u0432\u0435\u0440\u0448\u0435\u043d\u0438\u044e \u043f\u0440\u043e\u0446\u0435\u0434\u0443\u0440\u044b \u0437\u0430\u044f\u0432\u0438\u0442\u0435\u043b\u044f \u043e\u0441\u0432\u043e\u0431\u043e\u0434\u044f\u0442 \u043e\u0442 \u0434\u0430\u043b\u044c\u043d\u0435\u0439\u0448\u0435\u0433\u043e \u0432\u044b\u043f\u043e\u043b\u043d\u0435\u043d\u0438\u044f \u0442\u0440\u0435\u0431\u043e\u0432\u0430\u043d\u0438\u0439 \u043a\u0440\u0435\u0434\u0438\u0442\u043e\u0440\u043e\u0432, \u0443\u043a\u0430\u0437\u0430\u043d\u043d\u044b\u0445 \u0432 \u0437\u0430\u044f\u0432\u043b\u0435\u043d\u0438\u0438 \u043e \u043f\u0440\u0438\u0437\u043d\u0430\u043d\u0438\u0438 \u0435\u0433\u043e \u0431\u0430\u043d\u043a\u0440\u043e\u0442\u043e\u043c, \u0430 \u044d\u0442\u0430 \u0437\u0430\u0434\u043e\u043b\u0436\u0435\u043d\u043d\u043e\u0441\u0442\u044c \u043f\u0440\u0438\u0437\u043d\u0430\u0435\u0442\u0441\u044f \u0431\u0435\u0437\u043d\u0430\u0434\u0435\u0436\u043d\u043e\u0439. \u0412 \u043f\u0440\u043e\u0448\u043b\u043e\u043c \u043c\u0435\u0441\u044f\u0446\u0435 \u0441\u0442\u0430\u043b\u043e \u0438\u0437\u0432\u0435\u0441\u0442\u043d\u043e, \u0447\u0442\u043e \u0437\u0430 \u043f\u0435\u0440\u0432\u043e\u0435 \u043f\u043e\u043b\u0443\u0433\u043e\u0434\u0438\u0435 2020 \u0433\u043e\u0434\u0430 \u0440\u043e\u0441\u0441\u0438\u0439\u0441\u043a\u0438\u0435 \u0441\u0443\u0434\u044b \u043f\u0440\u0438\u0437\u043d\u0430\u043b\u0438 \u0431\u0430\u043d\u043a\u0440\u043e\u0442\u0430\u043c\u0438 42,7 \u0442\u044b\u0441. \u0433\u0440\u0430\u0436\u0434\u0430\u043d (\u0432 \u0442\u043e\u043c \u0447\u0438\u0441\u043b\u0435 \u0438\u043d\u0434\u0438\u0432\u0438\u0434\u0443\u0430\u043b\u044c\u043d\u044b\u0445 \u043f\u0440\u0435\u0434\u043f\u0440\u0438\u043d\u0438\u043c\u0430\u0442\u0435\u043b\u0435\u0439) \u2014 \u043f\u043e \u0434\u0430\u043d\u043d\u044b\u043c \u0435\u0434\u0438\u043d\u043e\u0433\u043e \u0440\u0435\u0435\u0441\u0442\u0440\u0430 \u00ab\u0424\u0435\u0434\u0440\u0435\u0441\u0443\u0440\u0441\u00bb, \u044d\u0442\u043e \u043d\u0430 47,2% \u0431\u043e\u043b\u044c\u0448\u0435 \u043f\u043e\u043a\u0430\u0437\u0430\u0442\u0435\u043b\u044f \u0430\u043d\u0430\u043b\u043e\u0433\u0438\u0447\u043d\u043e\u0433\u043e \u043f\u0435\u0440\u0438\u043e\u0434\u0430 2019 \u0433\u043e\u0434\u0430. 
\u0420\u043e\u0441\u0442 \u0447\u0438\u0441\u043b\u0430 \u043e\u0431\u0430\u043d\u043a\u0440\u043e\u0442\u0438\u0432\u0448\u0438\u0445\u0441\u044f \u0433\u0440\u0430\u0436\u0434\u0430\u043d \u0432\u043e \u0432\u0442\u043e\u0440\u043e\u043c \u043a\u0432\u0430\u0440\u0442\u0430\u043b\u0435 \u043f\u043e \u0441\u0440\u0430\u0432\u043d\u0435\u043d\u0438\u044e \u0441 \u043f\u0435\u0440\u0432\u044b\u043c \u0437\u0430\u043c\u0435\u0434\u043b\u0438\u043b\u0441\u044f \u2014 \u0442\u0430\u043a\u0430\u044f \u0434\u0438\u043d\u0430\u043c\u0438\u043a\u0430 \u043e\u0431\u0443\u0441\u043b\u043e\u0432\u043b\u0435\u043d\u0430 \u0442\u0435\u043c, \u0447\u0442\u043e \u0432 \u043f\u0435\u0440\u0438\u043e\u0434 \u043e\u0433\u0440\u0430\u043d\u0438\u0447\u0435\u043d\u0438\u0439 \u0441 19 \u043c\u0430\u0440\u0442\u0430 \u043f\u043e 11 \u043c\u0430\u044f \u0441\u0443\u0434\u044b \u0440\u0435\u0434\u043a\u043e \u0440\u0430\u0441\u0441\u043c\u0430\u0442\u0440\u0438\u0432\u0430\u043b\u0438 \u0431\u0430\u043d\u043a\u0440\u043e\u0442\u043d\u044b\u0435 \u0434\u0435\u043b\u0430 \u043a\u043e\u043c\u043f\u0430\u043d\u0438\u0439 \u0438 \u043c\u0435\u043d\u044c\u0448\u0435, \u0447\u0435\u043c \u043e\u0431\u044b\u0447\u043d\u043e, \u0432 \u043e\u0442\u043d\u043e\u0448\u0435\u043d\u0438\u0438 \u0433\u0440\u0430\u0436\u0434\u0430\u043d, \u043e\u0431\u044a\u044f\u0441\u043d\u044f\u043b \u0440\u0443\u043a\u043e\u0432\u043e\u0434\u0438\u0442\u0435\u043b\u044c \u043f\u0440\u043e\u0435\u043a\u0442\u0430 \u00ab\u0424\u0435\u0434\u0440\u0435\u0441\u0443\u0440\u0441\u00bb \u0410\u043b\u0435\u043a\u0441\u0435\u0439 \u042e\u0445\u043d\u0438\u043d. \u041e\u043d \u043f\u0440\u043e\u0433\u043d\u043e\u0437\u0438\u0440\u0443\u0435\u0442, \u0447\u0442\u043e \u0432\u043e \u0432\u0442\u043e\u0440\u043e\u043c \u043f\u043e\u043b\u0443\u0433\u043e\u0434\u0438\u0438 \u043c\u044b \u0443\u0432\u0438\u0434\u0438\u043c \u0440\u043e\u0441\u0442 \u043f\u043e\u043a\u0430\u0437\u0430\u0442\u0435\u043b\u044f, \u043a\u043e\u0433\u0434\u0430 \u0441\u0443\u0434\u044b \u0440\u0430\u0441\u0441\u043c\u043e\u0442\u0440\u044f\u0442 \u0432\u0441\u0435 \u0434\u0435\u043b\u0430, \u0447\u0442\u043e \u043d\u0435 \u0441\u043c\u043e\u0433\u043b\u0438 \u0440\u0430\u043d\u0435\u0435 \u0432 \u0440\u0435\u0436\u0438\u043c\u0435 \u043e\u0433\u0440\u0430\u043d\u0438\u0447\u0435\u043d\u0438\u0439. \u041f\u043e \u0435\u0433\u043e \u0434\u0430\u043d\u043d\u044b\u043c, \u0443\u0436\u0435 \u0432 \u0438\u044e\u043d\u0435 \u0447\u0438\u0441\u043b\u043e \u043b\u0438\u0447\u043d\u044b\u0445 \u0431\u0430\u043d\u043a\u0440\u043e\u0442\u0441\u0442\u0432 \u0432\u044b\u0440\u043e\u0441\u043b\u043e \u0434\u043e 11,5 \u0442\u044b\u0441., \u0447\u0442\u043e \u0432 \u0434\u0432\u0430 \u0440\u0430\u0437\u0430 \u043f\u0440\u0435\u0432\u044b\u0448\u0430\u0435\u0442 \u043f\u043e\u043a\u0430\u0437\u0430\u0442\u0435\u043b\u044c \u0430\u043d\u0430\u043b\u043e\u0433\u0438\u0447\u043d\u043e\u0433\u043e \u043f\u0435\u0440\u0438\u043e\u0434\u0430 2019 \u0433\u043e\u0434\u0430.", "example_title": "\u041d\u043e\u0432\u043e\u0441\u0442\u0438"}, {"text": "\u0410\u043a\u0442\u0443\u0430\u043b\u044c\u043d\u043e\u0441\u0442\u044c \u043f\u0440\u043e\u0431\u043b\u0435\u043c\u044b. 
\u042d\u043b\u0435\u043a\u0442\u0440\u043e\u043d\u043d\u0430\u044f \u0438\u043d\u0444\u043e\u0440\u043c\u0430\u0446\u0438\u044f \u0438\u0433\u0440\u0430\u0435\u0442 \u0432\u0441\u0435 \u0431\u043e\u043b\u044c\u0448\u0443\u044e \u0440\u043e\u043b\u044c \u0432\u043e \u0432\u0441\u0435\u0445 \u0441\u0444\u0435\u0440\u0430\u0445 \u0436\u0438\u0437\u043d\u0438 \u0441\u043e\u0432\u0440\u0435\u043c\u0435\u043d\u043d\u043e\u0433\u043e \u043e\u0431\u0449\u0435\u0441\u0442\u0432\u0430. \u0412 \u043f\u043e\u0441\u043b\u0435\u0434\u043d\u0438\u0435 \u0433\u043e\u0434\u044b \u043e\u0431\u044a\u0435\u043c \u043d\u0430\u0443\u0447\u043d\u043e-\u0442\u0435\u0445\u043d\u0438\u0447\u0435\u0441\u043a\u043e\u0439 \u0442\u0435\u043a\u0441\u0442\u043e\u0432\u043e\u0439 \u0438\u043d\u0444\u043e\u0440\u043c\u0430\u0446\u0438\u0438 \u0432 \u044d\u043b\u0435\u043a\u0442\u0440\u043e\u043d\u043d\u043e\u043c \u0432\u0438\u0434\u0435 \u0432\u043e\u0437\u0440\u043e\u0441 \u043d\u0430\u0441\u0442\u043e\u043b\u044c\u043a\u043e, \u0447\u0442\u043e \u0432\u043e\u0437\u043d\u0438\u043a\u0430\u0435\u0442 \u0443\u0433\u0440\u043e\u0437\u0430 \u043e\u0431\u0435\u0441\u0446\u0435\u043d\u0438\u0432\u0430\u043d\u0438\u044f \u044d\u0442\u043e\u0439 \u0438\u043d\u0444\u043e\u0440\u043c\u0430\u0446\u0438\u0438 \u0432 \u0441\u0432\u044f\u0437\u0438 \u0441 \u0442\u0440\u0443\u0434\u043d\u043e\u0441\u0442\u044f\u043c\u0438 \u043f\u043e\u0438\u0441\u043a\u0430 \u043d\u0435\u043e\u0431\u0445\u043e\u0434\u0438\u043c\u044b\u0445 \u0441\u0432\u0435\u0434\u0435\u043d\u0438\u0439 \u0441\u0440\u0435\u0434\u0438 \u043c\u043d\u043e\u0436\u0435\u0441\u0442\u0432\u0430 \u0434\u043e\u0441\u0442\u0443\u043f\u043d\u044b\u0445 \u0442\u0435\u043a\u0441\u0442\u043e\u0432. \u0420\u0430\u0437\u0432\u0438\u0442\u0438\u0435 \u0438\u043d\u0444\u043e\u0440\u043c\u0430\u0446\u0438\u043e\u043d\u043d\u044b\u0445 \u0440\u0435\u0441\u0443\u0440\u0441\u043e\u0432 \u0418\u043d\u0442\u0435\u0440\u043d\u0435\u0442 \u043c\u043d\u043e\u0433\u043e\u043a\u0440\u0430\u0442\u043d\u043e \u0443\u0441\u0443\u0433\u0443\u0431\u0438\u043b\u043e \u043f\u0440\u043e\u0431\u043b\u0435\u043c\u0443 \u0438\u043d\u0444\u043e\u0440\u043c\u0430\u0446\u0438\u043e\u043d\u043d\u043e\u0439 \u043f\u0435\u0440\u0435\u0433\u0440\u0443\u0437\u043a\u0438. \u0412 \u044d\u0442\u043e\u0439 \u0441\u0438\u0442\u0443\u0430\u0446\u0438\u0438 \u043e\u0441\u043e\u0431\u0435\u043d\u043d\u043e \u0430\u043a\u0442\u0443\u0430\u043b\u044c\u043d\u044b\u043c\u0438 \u0441\u0442\u0430\u043d\u043e\u0432\u044f\u0442\u0441\u044f \u043c\u0435\u0442\u043e\u0434\u044b \u0430\u0432\u0442\u043e\u043c\u0430\u0442\u0438\u0437\u0430\u0446\u0438\u0438 \u0440\u0435\u0444\u0435\u0440\u0438\u0440\u043e\u0432\u0430\u043d\u0438\u044f \u0442\u0435\u043a\u0441\u0442\u043e\u0432\u043e\u0439 \u0438\u043d\u0444\u043e\u0440\u043c\u0430\u0446\u0438\u0438, \u0442\u043e \u0435\u0441\u0442\u044c \u043c\u0435\u0442\u043e\u0434\u044b \u043f\u043e\u043b\u0443\u0447\u0435\u043d\u0438\u044f \u0441\u0436\u0430\u0442\u043e\u0433\u043e \u043f\u0440\u0435\u0434\u0441\u0442\u0430\u0432\u043b\u0435\u043d\u0438\u044f \u0442\u0435\u043a\u0441\u0442\u043e\u0432\u044b\u0445 \u0434\u043e\u043a\u0443\u043c\u0435\u043d\u0442\u043e\u0432\u2013\u0440\u0435\u0444\u0435\u0440\u0430\u0442\u043e\u0432 (\u0430\u043d\u043d\u043e\u0442\u0430\u0446\u0438\u0439). 
\u041f\u043e\u0441\u0442\u0430\u043d\u043e\u0432\u043a\u0430 \u043f\u0440\u043e\u0431\u043b\u0435\u043c\u044b \u0430\u0432\u0442\u043e\u043c\u0430\u0442\u0438\u0447\u0435\u0441\u043a\u043e\u0433\u043e \u0440\u0435\u0444\u0435\u0440\u0438\u0440\u043e\u0432\u0430\u043d\u0438\u044f \u0442\u0435\u043a\u0441\u0442\u0430 \u0438 \u0441\u043e\u043e\u0442\u0432\u0435\u0442\u0441\u0442\u0432\u0435\u043d\u043d\u043e \u043f\u043e\u043f\u044b\u0442\u043a\u0438 \u0435\u0435 \u0440\u0435\u0448\u0435\u043d\u0438\u044f \u0441 \u0438\u0441\u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u043d\u0438\u0435\u043c \u0440\u0430\u0437\u043b\u0438\u0447\u043d\u044b\u0445 \u043f\u043e\u0434\u0445\u043e\u0434\u043e\u0432 \u043f\u0440\u0435\u0434\u043f\u0440\u0438\u043d\u0438\u043c\u0430\u043b\u0438\u0441\u044c \u043c\u043d\u043e\u0433\u0438\u043c\u0438 \u0438\u0441\u0441\u043b\u0435\u0434\u043e\u0432\u0430\u0442\u0435\u043b\u044f\u043c\u0438. \u0418\u0441\u0442\u043e\u0440\u0438\u044f \u043f\u0440\u0438\u043c\u0435\u043d\u0435\u043d\u0438\u044f \u0432\u044b\u0447\u0438\u0441\u043b\u0438\u0442\u0435\u043b\u044c\u043d\u043e\u0439 \u0442\u0435\u0445\u043d\u0438\u043a\u0438 \u0434\u043b\u044f \u0440\u0435\u0444\u0435\u0440\u0438\u0440\u043e\u0432\u0430\u043d\u0438\u044f \u043d\u0430\u0441\u0447\u0438\u0442\u044b\u0432\u0430\u0435\u0442 \u0443\u0436\u0435 \u0431\u043e\u043b\u0435\u0435 50 \u043b\u0435\u0442 \u0438 \u0441\u0432\u044f\u0437\u0430\u043d\u0430 \u0441 \u0438\u043c\u0435\u043d\u0430\u043c\u0438 \u0442\u0430\u043a\u0438\u0445 \u0438\u0441\u0441\u043b\u0435\u0434\u043e\u0432\u0430\u0442\u0435\u043b\u0435\u0439, \u043a\u0430\u043a \u0413.\u041f. \u041b\u0443\u043d, \u0412.\u0415. \u0411\u0435\u0440\u0437\u043e\u043d, \u0418.\u041f. C\u0435\u0432\u0431\u043e, \u042d.\u0424. \u0421\u043a\u043e\u0440\u043e\u0445\u043e\u0434\u044c\u043a\u043e, \u0414.\u0413. \u041b\u0430\u0445\u0443\u0442\u0438, \u0420.\u0413. \u041f\u0438\u043e\u0442\u0440\u043e\u0432\u0441\u043a\u0438\u0439 \u0438 \u0434\u0440. 
\u0417\u0430 \u044d\u0442\u0438 \u0433\u043e\u0434\u044b \u0432\u044b\u0440\u0430\u0431\u043e\u0442\u0430\u043d\u044b \u043c\u043d\u043e\u0433\u043e\u0447\u0438\u0441\u043b\u0435\u043d\u043d\u044b\u0435 \u043f\u043e\u0434\u0445\u043e\u0434\u044b \u043a \u0440\u0435\u0448\u0435\u043d\u0438\u044e \u0434\u0430\u043d\u043d\u043e\u0439 \u043f\u0440\u043e\u0431\u043b\u0435\u043c\u044b, \u043a\u043e\u0442\u043e\u0440\u044b\u0435 \u0434\u043e\u0441\u0442\u0430\u0442\u043e\u0447\u043d\u043e \u0447\u0435\u0442\u043a\u043e \u043f\u043e\u0434\u0440\u0430\u0437\u0434\u0435\u043b\u044f\u044e\u0442\u0441\u044f \u043d\u0430 \u0434\u0432\u0430 \u043d\u0430\u043f\u0440\u0430\u0432\u043b\u0435\u043d\u0438\u044f: \u0430\u0432\u0442\u043e\u043c\u0430\u0442\u0438\u0447\u0435\u0441\u043a\u043e\u0435 \u0440\u0435\u0444\u0435\u0440\u0438\u0440\u043e\u0432\u0430\u043d\u0438\u0435, \u043e\u0441\u043d\u043e\u0432\u0430\u043d\u043d\u043e\u0435 \u043d\u0430 \u044d\u043a\u0441\u0442\u0440\u0430\u0433\u0438\u0440\u043e\u0432\u0430\u043d\u0438\u0438 \u0438\u0437 \u043f\u0435\u0440\u0432\u0438\u0447\u043d\u044b\u0445 \u0434\u043e\u043a\u0443\u043c\u0435\u043d\u0442\u043e\u0432 \u0441 \u043f\u043e\u043c\u043e\u0449\u044c\u044e \u043e\u043f\u0440\u0435\u0434\u0435\u043b\u0435\u043d\u043d\u044b\u0445 \u0444\u043e\u0440\u043c\u0430\u043b\u044c\u043d\u044b\u0445 \u043f\u0440\u0438\u0437\u043d\u0430\u043a\u043e\u0432 \u00ab\u043d\u0430\u0438\u0431\u043e\u043b\u0435\u0435 \u0438\u043d\u0444\u043e\u0440\u043c\u0430\u0442\u0438\u0432\u043d\u044b\u0445\u00bb \u0444\u0440\u0430\u0437 (\u0444\u0440\u0430\u0433\u043c\u0435\u043d\u0442\u043e\u0432), \u0441\u043e\u0432\u043e\u043a\u0443\u043f\u043d\u043e\u0441\u0442\u044c \u043a\u043e\u0442\u043e\u0440\u044b\u0445 \u043e\u0431\u0440\u0430\u0437\u0443\u0435\u0442 \u043d\u0435\u043a\u043e\u0442\u043e\u0440\u044b\u0439 \u044d\u043a\u0441\u0442\u0440\u0430\u043a\u0442; \u0430\u0432\u0442\u043e\u043c\u0430\u0442\u0438\u0447\u0435\u0441\u043a\u043e\u0435 \u0440\u0435\u0444\u0435\u0440\u0438\u0440\u043e\u0432\u0430\u043d\u0438\u0435, \u043e\u0441\u043d\u043e\u0432\u0430\u043d\u043d\u043e\u0435 \u043d\u0430 \u0432\u044b\u0434\u0435\u043b\u0435\u043d\u0438\u0438 \u0438\u0437 \u0442\u0435\u043a\u0441\u0442\u043e\u0432 \u0441 \u043f\u043e\u043c\u043e\u0449\u044c\u044e \u0441\u043f\u0435\u0446\u0438\u0430\u043b\u044c\u043d\u044b\u0445 \u0438\u043d\u0444\u043e\u0440\u043c\u0430\u0446\u0438\u043e\u043d\u043d\u044b\u0445 \u044f\u0437\u044b\u043a\u043e\u0432 \u043d\u0430\u0438\u0431\u043e\u043b\u0435\u0435 \u0441\u0443\u0449\u0435\u0441\u0442\u0432\u0435\u043d\u043d\u043e\u0439 \u0438\u043d\u0444\u043e\u0440\u043c\u0430\u0446\u0438\u0438 \u0438 \u043f\u043e\u0440\u043e\u0436\u0434\u0435\u043d\u0438\u0438 \u043d\u043e\u0432\u044b\u0445 \u0442\u0435\u043a\u0441\u0442\u043e\u0432 (\u0440\u0435\u0444\u0435\u0440\u0430\u0442\u043e\u0432), \u0441\u043e\u0434\u0435\u0440\u0436\u0430\u0442\u0435\u043b\u044c\u043d\u043e \u043e\u0431\u043e\u0431\u0449\u0430\u044e\u0449\u0438\u0445 \u043f\u0435\u0440\u0432\u0438\u0447\u043d\u044b\u0435 \u0434\u043e\u043a\u0443\u043c\u0435\u043d\u0442\u044b.", "example_title": "\u041d\u0430\u0443\u0447\u043d\u0430\u044f \u0441\u0442\u0430\u0442\u044c\u044f"}]} | IlyaGusev/mbart_ru_sum_gazeta | null | [
"transformers",
"pytorch",
"safetensors",
"mbart",
"text2text-generation",
"summarization",
"ru",
"dataset:IlyaGusev/gazeta",
"arxiv:2006.11063",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2006.11063"
] | [
"ru"
] | TAGS
#transformers #pytorch #safetensors #mbart #text2text-generation #summarization #ru #dataset-IlyaGusev/gazeta #arxiv-2006.11063 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| MBARTRuSumGazeta
================
Model description
-----------------
This is a ported version of the fairseq model.
For more details, please see Dataset for Automatic Summarization of Russian News.
Intended uses & limitations
---------------------------
#### How to use
Colab: link
#### Limitations and bias
* The model should work well with URL articles, but for articles from any other outlet it may suffer from domain shift
Training data
-------------
* Dataset: Gazeta
Training procedure
------------------
* Fairseq training script: URL
* Porting: Colab link
Eval results
------------
* Train dataset: Gazeta v1 train
* Test dataset: Gazeta v1 test
* Source max\_length: 600
* Target max\_length: 200
* no\_repeat\_ngram\_size: 4
* num\_beams: 5
* Train dataset: Gazeta v1 train
* Test dataset: Gazeta v2 test
* Source max\_length: 600
* Target max\_length: 200
* no\_repeat\_ngram\_size: 4
* num\_beams: 5
Predicting all summaries:
Evaluation: URL
Flags: --language ru --tokenize-after --lower
### BibTeX entry and citation info
| [
"#### How to use\n\n\nColab: link",
"#### Limitations and bias\n\n\n* The model should work well with URL articles, but for any other agencies it can suffer from domain shift\n\n\nTraining data\n-------------\n\n\n* Dataset: Gazeta\n\n\nTraining procedure\n------------------\n\n\n* Fairseq training script: URL\n* Porting: Colab link\n\n\nEval results\n------------\n\n\n* Train dataset: Gazeta v1 train\n* Test dataset: Gazeta v1 test\n* Source max\\_length: 600\n* Target max\\_length: 200\n* no\\_repeat\\_ngram\\_size: 4\n* num\\_beams: 5\n\n\n\n* Train dataset: Gazeta v1 train\n* Test dataset: Gazeta v2 test\n* Source max\\_length: 600\n* Target max\\_length: 200\n* no\\_repeat\\_ngram\\_size: 4\n* num\\_beams: 5\n\n\n\nPredicting all summaries:\n\n\nEvaluation: URL\n\n\nFlags: --language ru --tokenize-after --lower",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #safetensors #mbart #text2text-generation #summarization #ru #dataset-IlyaGusev/gazeta #arxiv-2006.11063 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"#### How to use\n\n\nColab: link",
"#### Limitations and bias\n\n\n* The model should work well with URL articles, but for any other agencies it can suffer from domain shift\n\n\nTraining data\n-------------\n\n\n* Dataset: Gazeta\n\n\nTraining procedure\n------------------\n\n\n* Fairseq training script: URL\n* Porting: Colab link\n\n\nEval results\n------------\n\n\n* Train dataset: Gazeta v1 train\n* Test dataset: Gazeta v1 test\n* Source max\\_length: 600\n* Target max\\_length: 200\n* no\\_repeat\\_ngram\\_size: 4\n* num\\_beams: 5\n\n\n\n* Train dataset: Gazeta v1 train\n* Test dataset: Gazeta v2 test\n* Source max\\_length: 600\n* Target max\\_length: 200\n* no\\_repeat\\_ngram\\_size: 4\n* num\\_beams: 5\n\n\n\nPredicting all summaries:\n\n\nEvaluation: URL\n\n\nFlags: --language ru --tokenize-after --lower",
"### BibTeX entry and citation info"
] | [
74,
11,
242,
10
] | [
"TAGS\n#transformers #pytorch #safetensors #mbart #text2text-generation #summarization #ru #dataset-IlyaGusev/gazeta #arxiv-2006.11063 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n#### How to use\n\n\nColab: link#### Limitations and bias\n\n\n* The model should work well with URL articles, but for any other agencies it can suffer from domain shift\n\n\nTraining data\n-------------\n\n\n* Dataset: Gazeta\n\n\nTraining procedure\n------------------\n\n\n* Fairseq training script: URL\n* Porting: Colab link\n\n\nEval results\n------------\n\n\n* Train dataset: Gazeta v1 train\n* Test dataset: Gazeta v1 test\n* Source max\\_length: 600\n* Target max\\_length: 200\n* no\\_repeat\\_ngram\\_size: 4\n* num\\_beams: 5\n\n\n\n* Train dataset: Gazeta v1 train\n* Test dataset: Gazeta v2 test\n* Source max\\_length: 600\n* Target max\\_length: 200\n* no\\_repeat\\_ngram\\_size: 4\n* num\\_beams: 5\n\n\n\nPredicting all summaries:\n\n\nEvaluation: URL\n\n\nFlags: --language ru --tokenize-after --lower### BibTeX entry and citation info"
] |
null | transformers |
# NewsTgRuBERT
Training script: https://github.com/dialogue-evaluation/Russian-News-Clustering-and-Headline-Generation/blob/main/train_mlm.py | {"language": ["ru"], "license": "apache-2.0"} | IlyaGusev/news_tg_rubert | null | [
"transformers",
"pytorch",
"ru",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru"
] | TAGS
#transformers #pytorch #ru #license-apache-2.0 #endpoints_compatible #region-us
|
# NewsTgRuBERT
Training script: URL | [
"# NewsTgRuBERT\n\nTraining script: URL"
] | [
"TAGS\n#transformers #pytorch #ru #license-apache-2.0 #endpoints_compatible #region-us \n",
"# NewsTgRuBERT\n\nTraining script: URL"
] | [
27,
11
] | [
"TAGS\n#transformers #pytorch #ru #license-apache-2.0 #endpoints_compatible #region-us \n# NewsTgRuBERT\n\nTraining script: URL"
] |
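The NewsTgRuBERT card above only links its MLM training script; here is a minimal sketch, assuming the checkpoint loads as a standard BERT masked-language model (the card does not state the architecture explicitly):

```python
# Hedged usage sketch for IlyaGusev/news_tg_rubert, assuming a BERT MLM head.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="IlyaGusev/news_tg_rubert")
# "Today a [MASK] took place in Moscow."
for prediction in fill_mask("Сегодня в Москве прошёл [MASK]."):
    print(prediction["token_str"], prediction["score"])
```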
token-classification | transformers |
# RuBERTExtSumGazeta
## Model description
Model for extractive summarization based on [rubert-base-cased](DeepPavlov/rubert-base-cased)
## Intended uses & limitations
#### How to use
Colab: [link](https://colab.research.google.com/drive/1Q8_v3H-kxdJhZIiyLYat7Kj02qDq7M1L)
```python
import razdel
import torch
from transformers import AutoTokenizer, BertForTokenClassification
model_name = "IlyaGusev/rubert_ext_sum_gazeta"
tokenizer = AutoTokenizer.from_pretrained(model_name)
sep_token = tokenizer.sep_token
sep_token_id = tokenizer.sep_token_id
model = BertForTokenClassification.from_pretrained(model_name)
article_text = "..."
sentences = [s.text for s in razdel.sentenize(article_text)]
article_text = sep_token.join(sentences)
inputs = tokenizer(
[article_text],
max_length=500,
padding="max_length",
truncation=True,
return_tensors="pt",
)
sep_mask = inputs["input_ids"][0] == sep_token_id
# Fix token_type_ids
current_token_type_id = 0
for pos, input_id in enumerate(inputs["input_ids"][0]):
inputs["token_type_ids"][0][pos] = current_token_type_id
if input_id == sep_token_id:
current_token_type_id = 1 - current_token_type_id
# Infer model
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits[0, :, 1]
# Choose sentences
logits = logits[sep_mask]
logits, indices = logits.sort(descending=True)
logits, indices = logits.cpu().tolist(), indices.cpu().tolist()
pairs = list(zip(logits, indices))
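# keep the 3 highest-scoring sentences; raise or lower this cutoff to change summary length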
pairs = pairs[:3]
indices = list(sorted([idx for _, idx in pairs]))
summary = " ".join([sentences[idx] for idx in indices])
print(summary)
```
#### Limitations and bias
- The model should work well with Gazeta.ru articles, but for articles from any other outlet it may suffer from domain shift
## Training data
- Dataset: [Gazeta](https://huggingface.co/datasets/IlyaGusev/gazeta)
## Training procedure
TBD
## Eval results
TBD
Evaluation: https://github.com/IlyaGusev/summarus/blob/master/evaluate.py
Flags: --language ru --tokenize-after --lower
| {"language": ["ru"], "license": "apache-2.0", "tags": ["summarization", "token-classification", "t5"], "datasets": ["IlyaGusev/gazeta"], "inference": false, "widget": [{"text": "\u0421 1 \u0441\u0435\u043d\u0442\u044f\u0431\u0440\u044f \u0432 \u0420\u043e\u0441\u0441\u0438\u0438 \u0432\u0441\u0442\u0443\u043f\u0430\u044e\u0442 \u0432 \u0441\u0438\u043b\u0443 \u043f\u043e\u043f\u0440\u0430\u0432\u043a\u0438 \u0432 \u0437\u0430\u043a\u043e\u043d \u00ab\u041e \u0431\u0430\u043d\u043a\u0440\u043e\u0442\u0441\u0442\u0432\u0435\u00bb \u2014 \u0442\u0435\u043f\u0435\u0440\u044c \u0434\u043e\u043b\u0436\u043d\u0438\u043a\u0438 \u0441\u043c\u043e\u0433\u0443\u0442 \u043e\u0441\u0432\u043e\u0431\u043e\u0436\u0434\u0430\u0442\u044c\u0441\u044f \u043e\u0442 \u043d\u0435\u043f\u043e\u0441\u0438\u043b\u044c\u043d\u044b\u0445 \u043e\u0431\u044f\u0437\u0430\u0442\u0435\u043b\u044c\u0441\u0442\u0432 \u0432\u043e \u0432\u043d\u0435\u0441\u0443\u0434\u0435\u0431\u043d\u043e\u043c \u043f\u043e\u0440\u044f\u0434\u043a\u0435, \u0435\u0441\u043b\u0438 \u0441\u0443\u043c\u043c\u0430 \u0437\u0430\u0434\u043e\u043b\u0436\u0435\u043d\u043d\u043e\u0441\u0442\u0438 \u0441\u043e\u0441\u0442\u0430\u0432\u043b\u044f\u0435\u0442 \u043d\u0435 \u043c\u0435\u043d\u0435\u0435 50 \u0442\u044b\u0441. \u0440\u0443\u0431\u043b\u0435\u0439 \u0438 \u043d\u0435 \u043f\u0440\u0435\u0432\u044b\u0448\u0430\u0435\u0442 500 \u0442\u044b\u0441. \u0440\u0443\u0431\u043b\u0435\u0439 \u0431\u0435\u0437 \u0443\u0447\u0435\u0442\u0430 \u0448\u0442\u0440\u0430\u0444\u043e\u0432, \u043f\u0435\u043d\u0438, \u043f\u0440\u043e\u0446\u0435\u043d\u0442\u043e\u0432 \u0437\u0430 \u043f\u0440\u043e\u0441\u0440\u043e\u0447\u043a\u0443 \u043f\u043b\u0430\u0442\u0435\u0436\u0430 \u0438 \u043f\u0440\u043e\u0447\u0438\u0445 \u0438\u043c\u0443\u0449\u0435\u0441\u0442\u0432\u0435\u043d\u043d\u044b\u0445 \u0438\u043b\u0438 \u0444\u0438\u043d\u0430\u043d\u0441\u043e\u0432\u044b\u0445 \u0441\u0430\u043d\u043a\u0446\u0438\u0439.[SEP]\u0423 \u0444\u0438\u0437\u043b\u0438\u0446 \u0438 \u0438\u043d\u0434\u0438\u0432\u0438\u0434\u0443\u0430\u043b\u044c\u043d\u044b\u0445 \u043f\u0440\u0435\u0434\u043f\u0440\u0438\u043d\u0438\u043c\u0430\u0442\u0435\u043b\u0435\u0439 \u043f\u043e\u044f\u0432\u0438\u043b\u0430\u0441\u044c \u0432\u043e\u0437\u043c\u043e\u0436\u043d\u043e\u0441\u0442\u044c \u043f\u0440\u043e\u0439\u0442\u0438 \u043f\u0440\u043e\u0446\u0435\u0434\u0443\u0440\u0443 \u0431\u0430\u043d\u043a\u0440\u043e\u0442\u0441\u0442\u0432\u0430 \u0431\u0435\u0437 \u0443\u0447\u0430\u0441\u0442\u0438\u044f \u0441\u0443\u0434\u0430 \u0438 \u0444\u0438\u043d\u0430\u043d\u0441\u043e\u0432\u043e\u0433\u043e \u0443\u043f\u0440\u0430\u0432\u043b\u044f\u044e\u0449\u0435\u0433\u043e \u2014 \u0434\u043e\u0441\u0442\u0430\u0442\u043e\u0447\u043d\u043e \u043f\u043e\u0434\u0430\u0442\u044c \u0441\u043e\u043e\u0442\u0432\u0435\u0442\u0441\u0442\u0432\u0443\u044e\u0449\u0435\u0435 \u0437\u0430\u044f\u0432\u043b\u0435\u043d\u0438\u0435 \u0447\u0435\u0440\u0435\u0437 \u041c\u0424\u0426.[SEP]\u0421\u0443\u043c\u043c\u0443 \u0437\u0430\u0434\u043e\u043b\u0436\u0435\u043d\u043d\u043e\u0441\u0442\u0438 \u0438 \u0441\u043f\u0438\u0441\u043e\u043a \u0432\u0441\u0435\u0445 \u0438\u0437\u0432\u0435\u0441\u0442\u043d\u044b\u0445 \u0437\u0430\u044f\u0432\u0438\u0442\u0435\u043b\u044e \u043a\u0440\u0435\u0434\u0438\u0442\u043e\u0440\u043e\u0432 \u043d\u0443\u0436\u043d\u043e \u043f\u0440\u0435\u0434\u043e\u0441\u0442\u0430\u0432\u0438\u0442\u044c 
\u0441\u0430\u043c\u043e\u0441\u0442\u043e\u044f\u0442\u0435\u043b\u044c\u043d\u043e.[SEP]\u0415\u0441\u043b\u0438 \u0432\u0441\u0435 \u0443\u0441\u043b\u043e\u0432\u0438\u044f \u0441\u043e\u0431\u043b\u044e\u0434\u0435\u043d\u044b, \u0441\u0432\u0435\u0434\u0435\u043d\u0438\u044f \u0432\u043d\u0435\u0441\u0443\u0442 \u0432 \u0415\u0434\u0438\u043d\u044b\u0439 \u0444\u0435\u0434\u0435\u0440\u0430\u043b\u044c\u043d\u044b\u0439 \u0440\u0435\u0435\u0441\u0442\u0440 \u0432 \u0442\u0435\u0447\u0435\u043d\u0438\u0435 \u0442\u0440\u0435\u0445 \u0440\u0430\u0431\u043e\u0447\u0438\u0445 \u0434\u043d\u0435\u0439.[SEP]\u041f\u0440\u0438 \u044d\u0442\u043e\u043c \u043d\u0430 \u043c\u043e\u043c\u0435\u043d\u0442 \u043f\u043e\u0434\u0430\u0447\u0438 \u0437\u0430\u044f\u0432\u043b\u0435\u043d\u0438\u044f \u0432 \u043e\u0442\u043d\u043e\u0448\u0435\u043d\u0438\u0438 \u0437\u0430\u044f\u0432\u0438\u0442\u0435\u043b\u044f \u0434\u043e\u043b\u0436\u043d\u043e \u0431\u044b\u0442\u044c \u043e\u043a\u043e\u043d\u0447\u0435\u043d\u043e \u0438\u0441\u043f\u043e\u043b\u043d\u0438\u0442\u0435\u043b\u044c\u043d\u043e\u0435 \u043f\u0440\u043e\u0438\u0437\u0432\u043e\u0434\u0441\u0442\u0432\u043e \u0441 \u0432\u043e\u0437\u0432\u0440\u0430\u0449\u0435\u043d\u0438\u0435\u043c \u0438\u0441\u043f\u043e\u043b\u043d\u0438\u0442\u0435\u043b\u044c\u043d\u043e\u0433\u043e \u0434\u043e\u043a\u0443\u043c\u0435\u043d\u0442\u0430 \u0432\u0437\u044b\u0441\u043a\u0430\u0442\u0435\u043b\u044e.[SEP]\u042d\u0442\u043e \u0437\u043d\u0430\u0447\u0438\u0442, \u0447\u0442\u043e \u0443 \u043f\u043e\u0442\u0435\u043d\u0446\u0438\u0430\u043b\u044c\u043d\u043e\u0433\u043e \u0431\u0430\u043d\u043a\u0440\u043e\u0442\u0430 \u043d\u0435 \u0434\u043e\u043b\u0436\u043d\u043e \u0431\u044b\u0442\u044c \u0438\u043c\u0443\u0449\u0435\u0441\u0442\u0432\u0430, \u043a\u043e\u0442\u043e\u0440\u043e\u0435 \u043c\u043e\u0436\u043d\u043e \u0432\u0437\u044b\u0441\u043a\u0430\u0442\u044c.[SEP]\u041a\u0440\u043e\u043c\u0435 \u0442\u043e\u0433\u043e, \u0432 \u043e\u0442\u043d\u043e\u0448\u0435\u043d\u0438\u0438 \u0433\u0440\u0430\u0436\u0434\u0430\u043d\u0438\u043d\u0430 \u043d\u0435 \u0434\u043e\u043b\u0436\u043d\u043e \u0431\u044b\u0442\u044c \u0432\u043e\u0437\u0431\u0443\u0436\u0434\u0435\u043d\u043e \u0434\u0440\u0443\u0433\u043e\u0435 \u0438\u0441\u043f\u043e\u043b\u043d\u0438\u0442\u0435\u043b\u044c\u043d\u043e\u0435 \u043f\u0440\u043e\u0438\u0437\u0432\u043e\u0434\u0441\u0442\u0432\u043e.[SEP]\u0412 \u043f\u0435\u0440\u0438\u043e\u0434 \u0432\u0441\u0435\u0439 \u043f\u0440\u043e\u0446\u0435\u0434\u0443\u0440\u044b \u0437\u0430\u044f\u0432\u0438\u0442\u0435\u043b\u044c \u043d\u0435 \u0441\u043c\u043e\u0436\u0435\u0442 \u0431\u0440\u0430\u0442\u044c \u0437\u0430\u0439\u043c\u044b, \u043a\u0440\u0435\u0434\u0438\u0442\u044b, \u0432\u044b\u0434\u0430\u0432\u0430\u0442\u044c \u043f\u043e\u0440\u0443\u0447\u0438\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u0430, \u0441\u043e\u0432\u0435\u0440\u0448\u0430\u0442\u044c \u0438\u043d\u044b\u0435 \u043e\u0431\u0435\u0441\u043f\u0435\u0447\u0438\u0442\u0435\u043b\u044c\u043d\u044b\u0435 \u0441\u0434\u0435\u043b\u043a\u0438.[SEP]\u0412\u043d\u0435\u0441\u0443\u0434\u0435\u0431\u043d\u043e\u0435 \u0431\u0430\u043d\u043a\u0440\u043e\u0442\u0441\u0442\u0432\u043e \u0431\u0443\u0434\u0435\u0442 \u0434\u043b\u0438\u0442\u044c\u0441\u044f \u0448\u0435\u0441\u0442\u044c \u043c\u0435\u0441\u044f\u0446\u0435\u0432, \u0432 \u0442\u0435\u0447\u0435\u043d\u0438\u0435 \u043a\u043e\u0442\u043e\u0440\u044b\u0445 
\u0442\u0430\u043a\u0436\u0435 \u0431\u0443\u0434\u0435\u0442 \u0434\u0435\u0439\u0441\u0442\u0432\u043e\u0432\u0430\u0442\u044c \u043c\u043e\u0440\u0430\u0442\u043e\u0440\u0438\u0439 \u043d\u0430 \u0443\u0434\u043e\u0432\u043b\u0435\u0442\u0432\u043e\u0440\u0435\u043d\u0438\u0435 \u0442\u0440\u0435\u0431\u043e\u0432\u0430\u043d\u0438\u0439 \u043a\u0440\u0435\u0434\u0438\u0442\u043e\u0440\u043e\u0432, \u043e\u0442\u043c\u0435\u0447\u0435\u043d\u043d\u044b\u0445 \u0432 \u0437\u0430\u044f\u0432\u043b\u0435\u043d\u0438\u0438 \u0434\u043e\u043b\u0436\u043d\u0438\u043a\u0430, \u0438 \u043c\u043e\u0440\u0430\u0442\u043e\u0440\u0438\u0439 \u043e\u0431 \u0443\u043f\u043b\u0430\u0442\u0435 \u043e\u0431\u044f\u0437\u0430\u0442\u0435\u043b\u044c\u043d\u044b\u0445 \u043f\u043b\u0430\u0442\u0435\u0436\u0435\u0439.[SEP]\u041a\u0440\u043e\u043c\u0435 \u0442\u043e\u0433\u043e, \u043f\u0440\u0435\u043a\u0440\u0430\u0449\u0430\u0435\u0442\u0441\u044f \u043d\u0430\u0447\u0438\u0441\u043b\u0435\u043d\u0438\u0435 \u043d\u0435\u0443\u0441\u0442\u043e\u0435\u043a \u0438 \u0438\u043d\u044b\u0445 \u0444\u0438\u043d\u0430\u043d\u0441\u043e\u0432\u044b\u0445 \u0441\u0430\u043d\u043a\u0446\u0438\u0439; \u0438\u043c\u0443\u0449\u0435\u0441\u0442\u0432\u0435\u043d\u043d\u044b\u0435 \u0432\u0437\u044b\u0441\u043a\u0430\u043d\u0438\u044f (\u043a\u0440\u043e\u043c\u0435 \u0430\u043b\u0438\u043c\u0435\u043d\u0442\u043e\u0432) \u0442\u0430\u043a\u0436\u0435 \u0431\u0443\u0434\u0443\u0442 \u043f\u0440\u0438\u043e\u0441\u0442\u0430\u043d\u043e\u0432\u043b\u0435\u043d\u044b.[SEP]\u041f\u043e \u0437\u0430\u0432\u0435\u0440\u0448\u0435\u043d\u0438\u044e \u043f\u0440\u043e\u0446\u0435\u0434\u0443\u0440\u044b \u0437\u0430\u044f\u0432\u0438\u0442\u0435\u043b\u044f \u043e\u0441\u0432\u043e\u0431\u043e\u0434\u044f\u0442 \u043e\u0442 \u0434\u0430\u043b\u044c\u043d\u0435\u0439\u0448\u0435\u0433\u043e \u0432\u044b\u043f\u043e\u043b\u043d\u0435\u043d\u0438\u044f \u0442\u0440\u0435\u0431\u043e\u0432\u0430\u043d\u0438\u0439 \u043a\u0440\u0435\u0434\u0438\u0442\u043e\u0440\u043e\u0432, \u0443\u043a\u0430\u0437\u0430\u043d\u043d\u044b\u0445 \u0432 \u0437\u0430\u044f\u0432\u043b\u0435\u043d\u0438\u0438 \u043e \u043f\u0440\u0438\u0437\u043d\u0430\u043d\u0438\u0438 \u0435\u0433\u043e \u0431\u0430\u043d\u043a\u0440\u043e\u0442\u043e\u043c, \u0430 \u044d\u0442\u0430 \u0437\u0430\u0434\u043e\u043b\u0436\u0435\u043d\u043d\u043e\u0441\u0442\u044c \u043f\u0440\u0438\u0437\u043d\u0430\u0435\u0442\u0441\u044f \u0431\u0435\u0437\u043d\u0430\u0434\u0435\u0436\u043d\u043e\u0439.[SEP]\u0412 \u043f\u0440\u043e\u0448\u043b\u043e\u043c \u043c\u0435\u0441\u044f\u0446\u0435 \u0441\u0442\u0430\u043b\u043e \u0438\u0437\u0432\u0435\u0441\u0442\u043d\u043e, \u0447\u0442\u043e \u0437\u0430 \u043f\u0435\u0440\u0432\u043e\u0435 \u043f\u043e\u043b\u0443\u0433\u043e\u0434\u0438\u0435 2020 \u0433\u043e\u0434\u0430 \u0440\u043e\u0441\u0441\u0438\u0439\u0441\u043a\u0438\u0435 \u0441\u0443\u0434\u044b \u043f\u0440\u0438\u0437\u043d\u0430\u043b\u0438 \u0431\u0430\u043d\u043a\u0440\u043e\u0442\u0430\u043c\u0438 42,7 \u0442\u044b\u0441. 
\u0433\u0440\u0430\u0436\u0434\u0430\u043d (\u0432 \u0442\u043e\u043c \u0447\u0438\u0441\u043b\u0435 \u0438\u043d\u0434\u0438\u0432\u0438\u0434\u0443\u0430\u043b\u044c\u043d\u044b\u0445 \u043f\u0440\u0435\u0434\u043f\u0440\u0438\u043d\u0438\u043c\u0430\u0442\u0435\u043b\u0435\u0439) \u2014 \u043f\u043e \u0434\u0430\u043d\u043d\u044b\u043c \u0435\u0434\u0438\u043d\u043e\u0433\u043e \u0440\u0435\u0435\u0441\u0442\u0440\u0430 \u00ab\u0424\u0435\u0434\u0440\u0435\u0441\u0443\u0440\u0441\u00bb, \u044d\u0442\u043e \u043d\u0430 47,2% \u0431\u043e\u043b\u044c\u0448\u0435 \u043f\u043e\u043a\u0430\u0437\u0430\u0442\u0435\u043b\u044f \u0430\u043d\u0430\u043b\u043e\u0433\u0438\u0447\u043d\u043e\u0433\u043e \u043f\u0435\u0440\u0438\u043e\u0434\u0430 2019 \u0433\u043e\u0434\u0430.[SEP]\u0420\u043e\u0441\u0442 \u0447\u0438\u0441\u043b\u0430 \u043e\u0431\u0430\u043d\u043a\u0440\u043e\u0442\u0438\u0432\u0448\u0438\u0445\u0441\u044f \u0433\u0440\u0430\u0436\u0434\u0430\u043d \u0432\u043e \u0432\u0442\u043e\u0440\u043e\u043c \u043a\u0432\u0430\u0440\u0442\u0430\u043b\u0435 \u043f\u043e \u0441\u0440\u0430\u0432\u043d\u0435\u043d\u0438\u044e \u0441 \u043f\u0435\u0440\u0432\u044b\u043c \u0437\u0430\u043c\u0435\u0434\u043b\u0438\u043b\u0441\u044f \u2014 \u0442\u0430\u043a\u0430\u044f \u0434\u0438\u043d\u0430\u043c\u0438\u043a\u0430 \u043e\u0431\u0443\u0441\u043b\u043e\u0432\u043b\u0435\u043d\u0430 \u0442\u0435\u043c, \u0447\u0442\u043e \u0432 \u043f\u0435\u0440\u0438\u043e\u0434 \u043e\u0433\u0440\u0430\u043d\u0438\u0447\u0435\u043d\u0438\u0439 \u0441 19 \u043c\u0430\u0440\u0442\u0430 \u043f\u043e 11 \u043c\u0430\u044f \u0441\u0443\u0434\u044b \u0440\u0435\u0434\u043a\u043e \u0440\u0430\u0441\u0441\u043c\u0430\u0442\u0440\u0438\u0432\u0430\u043b\u0438 \u0431\u0430\u043d\u043a\u0440\u043e\u0442\u043d\u044b\u0435 \u0434\u0435\u043b\u0430 \u043a\u043e\u043c\u043f\u0430\u043d\u0438\u0439 \u0438 \u043c\u0435\u043d\u044c\u0448\u0435, \u0447\u0435\u043c \u043e\u0431\u044b\u0447\u043d\u043e, \u0432 \u043e\u0442\u043d\u043e\u0448\u0435\u043d\u0438\u0438 \u0433\u0440\u0430\u0436\u0434\u0430\u043d, \u043e\u0431\u044a\u044f\u0441\u043d\u044f\u043b \u0440\u0443\u043a\u043e\u0432\u043e\u0434\u0438\u0442\u0435\u043b\u044c \u043f\u0440\u043e\u0435\u043a\u0442\u0430 \u00ab\u0424\u0435\u0434\u0440\u0435\u0441\u0443\u0440\u0441\u00bb \u0410\u043b\u0435\u043a\u0441\u0435\u0439 \u042e\u0445\u043d\u0438\u043d.[SEP]", "example_title": "\u041d\u043e\u0432\u043e\u0441\u0442\u0438"}]} | IlyaGusev/rubert_ext_sum_gazeta | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"summarization",
"t5",
"ru",
"dataset:IlyaGusev/gazeta",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru"
] | TAGS
#transformers #pytorch #bert #token-classification #summarization #t5 #ru #dataset-IlyaGusev/gazeta #license-apache-2.0 #autotrain_compatible #region-us
|
# RuBERTExtSumGazeta
## Model description
Model for extractive summarization based on rubert-base-cased
## Intended uses & limitations
#### How to use
Colab: link
#### Limitations and bias
- The model should work well with URL articles, but for articles from any other outlet it may suffer from domain shift
## Training data
- Dataset: Gazeta
## Training procedure
TBD
## Eval results
TBD
Evaluation: URL
Flags: --language ru --tokenize-after --lower
| [
"# RuBERTExtSumGazeta",
"## Model description\n\nModel for extractive summarization based on rubert-base-cased",
"## Intended uses & limitations",
"#### How to use\n\nColab: link",
"#### Limitations and bias\n\n- The model should work well with URL articles, but for any other agencies it can suffer from domain shift",
"## Training data\n\n- Dataset: Gazeta",
"## Training procedure\n\nTBD",
"## Eval results\n\nTBD\n\nEvaluation: URL\n\nFlags: --language ru --tokenize-after --lower"
] | [
"TAGS\n#transformers #pytorch #bert #token-classification #summarization #t5 #ru #dataset-IlyaGusev/gazeta #license-apache-2.0 #autotrain_compatible #region-us \n",
"# RuBERTExtSumGazeta",
"## Model description\n\nModel for extractive summarization based on rubert-base-cased",
"## Intended uses & limitations",
"#### How to use\n\nColab: link",
"#### Limitations and bias\n\n- The model should work well with URL articles, but for any other agencies it can suffer from domain shift",
"## Training data\n\n- Dataset: Gazeta",
"## Training procedure\n\nTBD",
"## Eval results\n\nTBD\n\nEvaluation: URL\n\nFlags: --language ru --tokenize-after --lower"
] | [
51,
8,
20,
6,
11,
29,
10,
6,
26
] | [
"TAGS\n#transformers #pytorch #bert #token-classification #summarization #t5 #ru #dataset-IlyaGusev/gazeta #license-apache-2.0 #autotrain_compatible #region-us \n# RuBERTExtSumGazeta## Model description\n\nModel for extractive summarization based on rubert-base-cased## Intended uses & limitations#### How to use\n\nColab: link#### Limitations and bias\n\n- The model should work well with URL articles, but for any other agencies it can suffer from domain shift## Training data\n\n- Dataset: Gazeta## Training procedure\n\nTBD## Eval results\n\nTBD\n\nEvaluation: URL\n\nFlags: --language ru --tokenize-after --lower"
] |
summarization | transformers |
# RuBertTelegramHeadlines
## Model description
Example model for the [Headline generation competition](https://competitions.codalab.org/competitions/29905)

Based on the [RuBERT](http://docs.deeppavlov.ai/en/master/features/models/bert.html) model
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, EncoderDecoderModel
model_name = "IlyaGusev/rubert_telegram_headlines"
tokenizer = AutoTokenizer.from_pretrained(model_name, do_lower_case=False, do_basic_tokenize=False, strip_accents=False)
model = EncoderDecoderModel.from_pretrained(model_name)
article_text = "..."
input_ids = tokenizer(
[article_text],
add_special_tokens=True,
max_length=256,
padding="max_length",
truncation=True,
return_tensors="pt",
)["input_ids"]
output_ids = model.generate(
input_ids=input_ids,
max_length=64,
no_repeat_ngram_size=3,
num_beams=10,
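    # note: top_p below only takes effect with do_sample=True; with pure beam search it is ignored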
top_p=0.95
)[0]
headline = tokenizer.decode(output_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(headline)
```
## Training data
- Dataset: [ru_all_split.tar.gz](https://www.dropbox.com/s/ykqk49a8avlmnaf/ru_all_split.tar.gz)
## Training procedure
```python
import random
import torch
from torch.utils.data import Dataset
from tqdm.notebook import tqdm
from transformers import BertTokenizer, EncoderDecoderModel, Trainer, TrainingArguments, logging
def convert_to_tensors(
tokenizer,
text,
max_text_tokens_count,
    max_title_tokens_count=None,
    title=None
):
inputs = tokenizer(
text,
add_special_tokens=True,
max_length=max_text_tokens_count,
padding="max_length",
truncation=True
)
result = {
"input_ids": torch.tensor(inputs["input_ids"]),
"attention_mask": torch.tensor(inputs["attention_mask"]),
}
if title is not None:
outputs = tokenizer(
title,
add_special_tokens=True,
max_length=max_title_tokens_count,
padding="max_length",
truncation=True
)
decoder_input_ids = torch.tensor(outputs["input_ids"])
decoder_attention_mask = torch.tensor(outputs["attention_mask"])
labels = decoder_input_ids.clone()
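        # padded positions get label -100 so the loss ignores them (PyTorch's default ignore index)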
labels[decoder_attention_mask == 0] = -100
result.update({
"labels": labels,
"decoder_input_ids": decoder_input_ids,
"decoder_attention_mask": decoder_attention_mask
})
return result
class GetTitleDataset(Dataset):
def __init__(
self,
original_records,
sample_rate,
tokenizer,
max_text_tokens_count,
max_title_tokens_count
):
self.original_records = original_records
self.sample_rate = sample_rate
self.tokenizer = tokenizer
self.max_text_tokens_count = max_text_tokens_count
self.max_title_tokens_count = max_title_tokens_count
self.records = []
for record in tqdm(original_records):
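            # subsampling: keep each record with probability sample_rate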
if random.random() > self.sample_rate:
continue
tensors = convert_to_tensors(
tokenizer=tokenizer,
title=record["title"],
text=record["text"],
max_title_tokens_count=self.max_title_tokens_count,
max_text_tokens_count=self.max_text_tokens_count
)
self.records.append(tensors)
def __len__(self):
return len(self.records)
def __getitem__(self, index):
return self.records[index]
def train(
train_records,
val_records,
pretrained_model_path,
train_sample_rate=1.0,
val_sample_rate=1.0,
output_model_path="models",
checkpoint=None,
max_text_tokens_count=256,
max_title_tokens_count=64,
batch_size=8,
logging_steps=1000,
eval_steps=10000,
save_steps=10000,
learning_rate=0.00003,
warmup_steps=2000,
num_train_epochs=3
):
logging.set_verbosity_info()
tokenizer = BertTokenizer.from_pretrained(
pretrained_model_path,
do_lower_case=False,
do_basic_tokenize=False,
strip_accents=False
)
train_dataset = GetTitleDataset(
train_records,
train_sample_rate,
tokenizer,
max_text_tokens_count=max_text_tokens_count,
max_title_tokens_count=max_title_tokens_count
)
val_dataset = GetTitleDataset(
val_records,
val_sample_rate,
tokenizer,
max_text_tokens_count=max_text_tokens_count,
max_title_tokens_count=max_title_tokens_count
)
model = EncoderDecoderModel.from_encoder_decoder_pretrained(pretrained_model_path, pretrained_model_path)
training_args = TrainingArguments(
output_dir=output_model_path,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
do_train=True,
do_eval=True,
overwrite_output_dir=False,
logging_steps=logging_steps,
eval_steps=eval_steps,
evaluation_strategy="steps",
save_steps=save_steps,
learning_rate=learning_rate,
warmup_steps=warmup_steps,
num_train_epochs=num_train_epochs,
max_steps=-1,
save_total_limit=1,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=val_dataset
)
trainer.train(checkpoint)
model.save_pretrained(output_model_path)
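# Hypothetical invocation (assumed data: lists of {"title": ..., "text": ...}
# dicts, as consumed by GetTitleDataset above; the encoder path is illustrative):
# train(train_records, val_records, "DeepPavlov/rubert-base-cased")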
``` | {"language": ["ru"], "license": "apache-2.0", "tags": ["summarization"], "inference": {"parameters": {"no_repeat_ngram_size": 4}}} | IlyaGusev/rubert_telegram_headlines | null | [
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"summarization",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru"
] | TAGS
#transformers #pytorch #encoder-decoder #text2text-generation #summarization #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# RuBertTelegramHeadlines
## Model description
Example model for Headline generation competition
Based on RuBERT model
## Intended uses & limitations
#### How to use
## Training data
- Dataset: ru_all_split.URL
## Training procedure
| [
"# RuBertTelegramHeadlines",
"## Model description\n\nExample model for Headline generation competition\n\nBased on RuBERT model",
"## Intended uses & limitations",
"#### How to use",
"## Training data\n\n- Dataset: ru_all_split.URL",
"## Training procedure"
] | [
"TAGS\n#transformers #pytorch #encoder-decoder #text2text-generation #summarization #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# RuBertTelegramHeadlines",
"## Model description\n\nExample model for Headline generation competition\n\nBased on RuBERT model",
"## Intended uses & limitations",
"#### How to use",
"## Training data\n\n- Dataset: ru_all_split.URL",
"## Training procedure"
] | [
53,
8,
15,
6,
7,
16,
4
] | [
"TAGS\n#transformers #pytorch #encoder-decoder #text2text-generation #summarization #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n# RuBertTelegramHeadlines## Model description\n\nExample model for Headline generation competition\n\nBased on RuBERT model## Intended uses & limitations#### How to use## Training data\n\n- Dataset: ru_all_split.URL## Training procedure"
] |
text-classification | transformers |
# RuBERTConv Toxic Classifier
## Model description
Based on [rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model
## Intended uses & limitations
#### How to use
Colab: [link](https://colab.research.google.com/drive/1veKO9hke7myxKigZtZho_F-UM2fD9kp8)
```python
from transformers import pipeline
model_name = "IlyaGusev/rubertconv_toxic_clf"
pipe = pipeline("text-classification", model=model_name, tokenizer=model_name, framework="pt")
text = "ะขั ะฟัะธะดััะพะบ ะธะท ะธะฝัะตัะฝะตัะฐ"
pipe([text])
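# -> a list like [{"label": ..., "score": ...}]; the exact label names depend on the model config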
```
## Training data
Datasets:
- [2ch](https://www.kaggle.com/blackmoon/russian-language-toxic-comments)
- [Odnoklassniki](https://www.kaggle.com/alexandersemiletov/toxic-russian-comments)
- [Toloka Persona Chat Rus](https://toloka.ai/ru/datasets)
- [Koziev's Conversations](https://github.com/Koziev/NLP_Datasets/blob/master/Conversations/Data) with [toxic words vocabulary](https://www.dropbox.com/s/ou6lx03b10yhrfl/bad_vocab.txt.tar.gz)
Augmentations:
- ё -> е
- Remove or add "?" or "!"
- Fix CAPS
- Concatenate toxic and non-toxic texts
- Concatenate two non-toxic texts
- Add toxic words from vocabulary
- Add typos
- Mask toxic words with "*", "@", "$"
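A minimal sketch of two of the augmentations above (the whitespace tokenization, the vocabulary format, and the example are illustrative assumptions, not the project's actual preprocessing):
```python
import random

def yo_to_ye(text):
    # the "ё -> е" augmentation
    return text.replace("ё", "е")

def mask_toxic_words(text, toxic_vocab, mask_chars=("*", "@", "$")):
    # replace each toxic word with a run of mask characters of the same length;
    # naive whitespace tokenization is used here for illustration only
    words = []
    for word in text.split():
        if word.lower() in toxic_vocab:
            words.append(random.choice(mask_chars) * len(word))
        else:
            words.append(word)
    return " ".join(words)

print(mask_toxic_words(yo_to_ye("ты придурок"), {"придурок"}))  # e.g. "ты ********"
```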
## Training procedure
TBA | {"language": ["ru"], "license": "apache-2.0", "tags": ["text-classification"]} | IlyaGusev/rubertconv_toxic_clf | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru"
] | TAGS
#transformers #pytorch #bert #text-classification #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# RuBERTConv Toxic Classifier
## Model description
Based on rubert-base-cased-conversational model
## Intended uses & limitations
#### How to use
Colab: link
## Training data
Datasets:
- 2ch
- Odnoklassniki
- Toloka Persona Chat Rus
- Koziev's Conversations with toxic words vocabulary
Augmentations:
- ё -> е
- Remove or add "?" or "!"
- Fix CAPS
- Concatenate toxic and non-toxic texts
- Concatenate two non-toxic texts
- Add toxic words from vocabulary
- Add typos
- Mask toxic words with "*", "@", "$"
## Training procedure
TBA | [
"# RuBERTConv Toxic Classifier",
"## Model description\n\nBased on rubert-base-cased-conversational model",
"## Intended uses & limitations",
"#### How to use\n\nColab: link",
"## Training data\n\nDatasets:\n- 2ch\n- Odnoklassniki\n- Toloka Persona Chat Rus\n- Koziev's Conversations with toxic words vocabulary\n\nAugmentations:\n- ั -> ะต\n- Remove or add \"?\" or \"!\"\n- Fix CAPS\n- Concatenate toxic and non-toxic texts\n- Concatenate two non-toxic texts\n- Add toxic words from vocabulary\n- Add typos\n- Mask toxic words with \"*\", \"@\", \"$\"",
"## Training procedure\n\nTBA"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# RuBERTConv Toxic Classifier",
"## Model description\n\nBased on rubert-base-cased-conversational model",
"## Intended uses & limitations",
"#### How to use\n\nColab: link",
"## Training data\n\nDatasets:\n- 2ch\n- Odnoklassniki\n- Toloka Persona Chat Rus\n- Koziev's Conversations with toxic words vocabulary\n\nAugmentations:\n- ั -> ะต\n- Remove or add \"?\" or \"!\"\n- Fix CAPS\n- Concatenate toxic and non-toxic texts\n- Concatenate two non-toxic texts\n- Add toxic words from vocabulary\n- Add typos\n- Mask toxic words with \"*\", \"@\", \"$\"",
"## Training procedure\n\nTBA"
] | [
38,
8,
17,
6,
11,
106,
6
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# RuBERTConv Toxic Classifier## Model description\n\nBased on rubert-base-cased-conversational model## Intended uses & limitations#### How to use\n\nColab: link## Training data\n\nDatasets:\n- 2ch\n- Odnoklassniki\n- Toloka Persona Chat Rus\n- Koziev's Conversations with toxic words vocabulary\n\nAugmentations:\n- ั -> ะต\n- Remove or add \"?\" or \"!\"\n- Fix CAPS\n- Concatenate toxic and non-toxic texts\n- Concatenate two non-toxic texts\n- Add toxic words from vocabulary\n- Add typos\n- Mask toxic words with \"*\", \"@\", \"$\"## Training procedure\n\nTBA"
] |
token-classification | transformers |
# RuBERTConv Toxic Editor
## Model description
Tagging model for detoxification based on [rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational).
4 possible classes:
- Equal = save tokens
- Replace = replace tokens with mask
- Delete = remove tokens
- Insert = insert mask before tokens
Use it in tandem with the [mask filler](https://huggingface.co/IlyaGusev/sber_rut5_filler).
## Intended uses & limitations
#### How to use
Colab: [link](https://colab.research.google.com/drive/1NUSO1QGlDgD-IWXa2SpeND089eVxrCJW)
```python
import torch
from transformers import AutoTokenizer, pipeline
tagger_model_name = "IlyaGusev/rubertconv_toxic_editor"
device = "cuda" if torch.cuda.is_available() else "cpu"
device_num = 0 if device == "cuda" else -1
tagger_pipe = pipeline(
"token-classification",
model=tagger_model_name,
tokenizer=tagger_model_name,
framework="pt",
device=device_num,
aggregation_strategy="max"
)
text = "..."
tagger_predictions = tagger_pipe([text], batch_size=1)
sample_predictions = tagger_predictions[0]
print(sample_predictions)
```
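The tags still have to be turned into a masked text for the filler model. Below is a minimal, hypothetical sketch of that step: the label names (`Equal`, `Replace`, `Delete`, `Insert`) and the `[MASK]` placeholder are assumptions based on the class list above, so check the model config and the filler's expected mask token before relying on it.
```python
def apply_tags(text, predictions, mask_token="[MASK]"):
    parts = []
    for p in predictions:
        span = text[p["start"]:p["end"]]
        label = p["entity_group"]
        if label == "Equal":
            parts.append(span)
        elif label == "Replace":
            parts.append(mask_token)
        elif label == "Insert":
            parts.append(mask_token + " " + span)
        # "Delete" spans are dropped entirely
    return " ".join(parts)

masked_text = apply_tags(text, sample_predictions)
print(masked_text)  # feed this into the mask filler model
```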
## Training data
- Dataset: [russe_detox_2022](https://github.com/skoltech-nlp/russe_detox_2022/tree/main/data)
## Training procedure
- Parallel corpus conversion: [compute_tags.py](https://github.com/IlyaGusev/rudetox/blob/main/rudetox/marker/compute_tags.py)
- Training script: [train.py](https://github.com/IlyaGusev/rudetox/blob/main/rudetox/marker/train.py)
- Pipeline step: [dvc.yaml, train_marker](https://github.com/IlyaGusev/rudetox/blob/main/dvc.yaml#L367)
## Eval results
TBA | {"language": ["ru"], "license": "apache-2.0", "tags": ["token-classification"], "widget": [{"text": "\u0401\u043f\u0442\u0430, \u043c\u0435\u043d\u044f \u0437\u043e\u0432\u0443\u0442 \u043f\u0440\u0438\u0434\u0443\u0440\u043e\u043a \u0438 \u044f \u0436\u0438\u0432\u0443 \u0432 \u0436\u043e\u043f\u0435"}]} | IlyaGusev/rubertconv_toxic_editor | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru"
] | TAGS
#transformers #pytorch #bert #token-classification #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# RuBERTConv Toxic Editor
## Model description
Tagging model for detoxification based on rubert-base-cased-conversational.
4 possible classes:
- Equal = save tokens
- Replace = replace tokens with mask
- Delete = remove tokens
- Insert = insert mask before tokens
Use it in tandem with the mask filler.
## Intended uses & limitations
#### How to use
Colab: link
## Training data
- Dataset: russe_detox_2022
## Training procedure
- Parallel corpus conversion: compute_tags.py
- Training script: URL
- Pipeline step: URL, train_marker
## Eval results
TBA | [
"# RuBERTConv Toxic Editor",
"## Model description\n\nTagging model for detoxification based on rubert-base-cased-conversational.\n\n4 possible classes:\n- Equal = save tokens\n- Replace = replace tokens with mask\n- Delete = remove tokens\n- Insert = insert mask before tokens\n\nUse in pair with mask filler.",
"## Intended uses & limitations",
"#### How to use\n\nColab: link",
"## Training data\n\n- Dataset: russe_detox_2022",
"## Training procedure\n\n- Parallel corpus convertion: compute_tags.py\n- Training script: URL\n- Pipeline step: URL, train_marker",
"## Eval results\n\nTBA"
] | [
"TAGS\n#transformers #pytorch #bert #token-classification #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# RuBERTConv Toxic Editor",
"## Model description\n\nTagging model for detoxification based on rubert-base-cased-conversational.\n\n4 possible classes:\n- Equal = save tokens\n- Replace = replace tokens with mask\n- Delete = remove tokens\n- Insert = insert mask before tokens\n\nUse in pair with mask filler.",
"## Intended uses & limitations",
"#### How to use\n\nColab: link",
"## Training data\n\n- Dataset: russe_detox_2022",
"## Training procedure\n\n- Parallel corpus convertion: compute_tags.py\n- Training script: URL\n- Pipeline step: URL, train_marker",
"## Eval results\n\nTBA"
] | [
38,
7,
65,
6,
11,
16,
32,
7
] | [
"TAGS\n#transformers #pytorch #bert #token-classification #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# RuBERTConv Toxic Editor## Model description\n\nTagging model for detoxification based on rubert-base-cased-conversational.\n\n4 possible classes:\n- Equal = save tokens\n- Replace = replace tokens with mask\n- Delete = remove tokens\n- Insert = insert mask before tokens\n\nUse in pair with mask filler.## Intended uses & limitations#### How to use\n\nColab: link## Training data\n\n- Dataset: russe_detox_2022## Training procedure\n\n- Parallel corpus convertion: compute_tags.py\n- Training script: URL\n- Pipeline step: URL, train_marker## Eval results\n\nTBA"
] |
summarization | transformers |
# RuGPT3MediumSumGazeta
## Model description
This is a model for abstractive summarization in Russian, based on [rugpt3medium_based_on_gpt2](https://huggingface.co/sberbank-ai/rugpt3medium_based_on_gpt2).
## Intended uses & limitations
#### How to use
Colab: [link](https://colab.research.google.com/drive/1eR-ev0Y5ISWIwGnzYYoHyGMaSIUz8GTN)
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "IlyaGusev/rugpt3medium_sum_gazeta"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
article_text = "..."
text_tokens = tokenizer(
article_text,
max_length=600,
add_special_tokens=False,
padding=False,
truncation=True
)["input_ids"]
input_ids = text_tokens + [tokenizer.sep_token_id]
input_ids = torch.LongTensor([input_ids])
output_ids = model.generate(
input_ids=input_ids,
no_repeat_ngram_size=4
)
summary = tokenizer.decode(output_ids[0], skip_special_tokens=False)
summary = summary.split(tokenizer.sep_token)[1]
summary = summary.split(tokenizer.eos_token)[0]
print(summary)
```
## Training data
- Dataset: [Gazeta](https://huggingface.co/datasets/IlyaGusev/gazeta)
## Training procedure
- Training script: [train.py](https://github.com/IlyaGusev/summarus/blob/master/external/hf_scripts/train.py)
- Config: [gpt_training_config.json](https://github.com/IlyaGusev/summarus/blob/master/external/hf_scripts/configs/gpt_training_config.json)
## Eval results
* Train dataset: **Gazeta v1 train**
* Test dataset: **Gazeta v1 test**
* Source max_length: **600**
* Target max_length: **200**
* no_repeat_ngram_size: **4**
* num_beams: **5**
| Model | R-1-f | R-2-f | R-L-f | chrF | METEOR | BLEU | Avg char length |
|:--------------------------|:------|:------|:------|:-------|:-------|:-----|:-----|
| [mbart_ru_sum_gazeta](https://huggingface.co/IlyaGusev/mbart_ru_sum_gazeta) | **32.4** | 14.3 | 28.0 | 39.7 | **26.4** | 12.1 | 371 |
| [rut5_base_sum_gazeta](https://huggingface.co/IlyaGusev/rut5_base_sum_gazeta) | 32.2 | **14.4** | **28.1** | **39.8** | 25.7 | **12.3** | 330 |
| [rugpt3medium_sum_gazeta](https://huggingface.co/IlyaGusev/rugpt3medium_sum_gazeta) | 26.2 | 7.7 | 21.7 | 33.8 | 18.2 | 4.3 | 244 |
* Train dataset: **Gazeta v1 train**
* Test dataset: **Gazeta v2 test**
* Source max_length: **600**
* Target max_length: **200**
* no_repeat_ngram_size: **4**
* num_beams: **5**
| Model | R-1-f | R-2-f | R-L-f | chrF | METEOR | BLEU | Avg char length |
|:--------------------------|:------|:------|:------|:-------|:-------|:-----|:-----|
| [mbart_ru_sum_gazeta](https://huggingface.co/IlyaGusev/mbart_ru_sum_gazeta) | **28.7** | **11.1** | 24.4 | **37.3** | **22.7** | **9.4** | 373 |
| [rut5_base_sum_gazeta](https://huggingface.co/IlyaGusev/rut5_base_sum_gazeta) | 28.6 | **11.1** | **24.5** | 37.2 | 22.0 | **9.4** | 331 |
| [rugpt3medium_sum_gazeta](https://huggingface.co/IlyaGusev/rugpt3medium_sum_gazeta) | 24.1 | 6.5 | 19.8 | 32.1 | 16.3 | 3.6 | 242 |
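Predicting all summaries — a sketch mirroring the batch-prediction script from the [rut5_base_sum_gazeta](https://huggingface.co/IlyaGusev/rut5_base_sum_gazeta) card. It reuses exactly the generation recipe shown above and processes one record at a time to sidestep padding for the causal LM:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from datasets import load_dataset

def predict(model_name, input_records, output_file, max_source_tokens_count=600):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).to(device)
    predictions = []
    for record in input_records:
        text_tokens = tokenizer(
            record["text"],
            max_length=max_source_tokens_count,
            add_special_tokens=False,
            truncation=True
        )["input_ids"]
        # the text is followed by a separator token, as in the usage example above
        input_ids = torch.LongTensor([text_tokens + [tokenizer.sep_token_id]]).to(device)
        output_ids = model.generate(input_ids=input_ids, no_repeat_ngram_size=4)
        summary = tokenizer.decode(output_ids[0], skip_special_tokens=False)
        summary = summary.split(tokenizer.sep_token)[1]
        summary = summary.split(tokenizer.eos_token)[0]
        predictions.append(summary)
    with open(output_file, "w") as w:
        for p in predictions:
            w.write(p.strip().replace("\n", " ") + "\n")

gazeta_test = load_dataset("IlyaGusev/gazeta", script_version="v1.0")["test"]
predict("IlyaGusev/rugpt3medium_sum_gazeta", list(gazeta_test), "gpt_predictions.txt")
```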
Evaluation script: [evaluate.py](https://github.com/IlyaGusev/summarus/blob/master/evaluate.py)
Flags: --language ru --tokenize-after --lower
| {"language": ["ru"], "license": ["apache-2.0"], "tags": ["causal-lm", "summarization"], "datasets": ["IlyaGusev/gazeta"], "inference": false, "widget": [{"text": "\u0412\u044b\u0441\u043e\u0442\u0430 \u0431\u0430\u0448\u043d\u0438 \u0441\u043e\u0441\u0442\u0430\u0432\u043b\u044f\u0435\u0442 324 \u043c\u0435\u0442\u0440\u0430 (1063 \u0444\u0443\u0442\u0430), \u043f\u0440\u0438\u043c\u0435\u0440\u043d\u043e \u0442\u0430\u043a\u0430\u044f \u0436\u0435 \u0432\u044b\u0441\u043e\u0442\u0430, \u043a\u0430\u043a \u0443 81-\u044d\u0442\u0430\u0436\u043d\u043e\u0433\u043e \u0437\u0434\u0430\u043d\u0438\u044f, \u0438 \u0441\u0430\u043c\u043e\u0435 \u0432\u044b\u0441\u043e\u043a\u043e\u0435 \u0441\u043e\u043e\u0440\u0443\u0436\u0435\u043d\u0438\u0435 \u0432 \u041f\u0430\u0440\u0438\u0436\u0435. \u0415\u0433\u043e \u043e\u0441\u043d\u043e\u0432\u0430\u043d\u0438\u0435 \u043a\u0432\u0430\u0434\u0440\u0430\u0442\u043d\u043e, \u0440\u0430\u0437\u043c\u0435\u0440\u043e\u043c 125 \u043c\u0435\u0442\u0440\u043e\u0432 (410 \u0444\u0443\u0442\u043e\u0432) \u0441 \u043b\u044e\u0431\u043e\u0439 \u0441\u0442\u043e\u0440\u043e\u043d\u044b. \u0412\u043e \u0432\u0440\u0435\u043c\u044f \u0441\u0442\u0440\u043e\u0438\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u0430 \u042d\u0439\u0444\u0435\u043b\u0435\u0432\u0430 \u0431\u0430\u0448\u043d\u044f \u043f\u0440\u0435\u0432\u0437\u043e\u0448\u043b\u0430 \u043c\u043e\u043d\u0443\u043c\u0435\u043d\u0442 \u0412\u0430\u0448\u0438\u043d\u0433\u0442\u043e\u043d\u0430, \u0441\u0442\u0430\u0432 \u0441\u0430\u043c\u044b\u043c \u0432\u044b\u0441\u043e\u043a\u0438\u043c \u0438\u0441\u043a\u0443\u0441\u0441\u0442\u0432\u0435\u043d\u043d\u044b\u043c \u0441\u043e\u043e\u0440\u0443\u0436\u0435\u043d\u0438\u0435\u043c \u0432 \u043c\u0438\u0440\u0435, \u0438 \u044d\u0442\u043e\u0442 \u0442\u0438\u0442\u0443\u043b \u043e\u043d\u0430 \u0443\u0434\u0435\u0440\u0436\u0438\u0432\u0430\u043b\u0430 \u0432 \u0442\u0435\u0447\u0435\u043d\u0438\u0435 41 \u0433\u043e\u0434\u0430 \u0434\u043e \u0437\u0430\u0432\u0435\u0440\u0448\u0435\u043d\u0438\u044f \u0441\u0442\u0440\u043e\u0438\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u043e \u0437\u0434\u0430\u043d\u0438\u044f \u041a\u0440\u0430\u0439\u0441\u043b\u0435\u0440 \u0432 \u041d\u044c\u044e-\u0419\u043e\u0440\u043a\u0435 \u0432 1930 \u0433\u043e\u0434\u0443. \u042d\u0442\u043e \u043f\u0435\u0440\u0432\u043e\u0435 \u0441\u043e\u043e\u0440\u0443\u0436\u0435\u043d\u0438\u0435 \u043a\u043e\u0442\u043e\u0440\u043e\u0435 \u0434\u043e\u0441\u0442\u0438\u0433\u043b\u043e \u0432\u044b\u0441\u043e\u0442\u044b 300 \u043c\u0435\u0442\u0440\u043e\u0432. \u0418\u0437-\u0437\u0430 \u0434\u043e\u0431\u0430\u0432\u043b\u0435\u043d\u0438\u044f \u0432\u0435\u0449\u0430\u0442\u0435\u043b\u044c\u043d\u043e\u0439 \u0430\u043d\u0442\u0435\u043d\u043d\u044b \u043d\u0430 \u0432\u0435\u0440\u0448\u0438\u043d\u0435 \u0431\u0430\u0448\u043d\u0438 \u0432 1957 \u0433\u043e\u0434\u0443 \u043e\u043d\u0430 \u0441\u0435\u0439\u0447\u0430\u0441 \u0432\u044b\u0448\u0435 \u0437\u0434\u0430\u043d\u0438\u044f \u041a\u0440\u0430\u0439\u0441\u043b\u0435\u0440 \u043d\u0430 5,2 \u043c\u0435\u0442\u0440\u0430 (17 \u0444\u0443\u0442\u043e\u0432). 
\u0417\u0430 \u0438\u0441\u043a\u043b\u044e\u0447\u0435\u043d\u0438\u0435\u043c \u043f\u0435\u0440\u0435\u0434\u0430\u0442\u0447\u0438\u043a\u043e\u0432, \u042d\u0439\u0444\u0435\u043b\u0435\u0432\u0430 \u0431\u0430\u0448\u043d\u044f \u044f\u0432\u043b\u044f\u0435\u0442\u0441\u044f \u0432\u0442\u043e\u0440\u043e\u0439 \u0441\u0430\u043c\u043e\u0439 \u0432\u044b\u0441\u043e\u043a\u043e\u0439 \u043e\u0442\u0434\u0435\u043b\u044c\u043d\u043e \u0441\u0442\u043e\u044f\u0449\u0435\u0439 \u0441\u0442\u0440\u0443\u043a\u0442\u0443\u0440\u043e\u0439 \u0432\u043e \u0424\u0440\u0430\u043d\u0446\u0438\u0438 \u043f\u043e\u0441\u043b\u0435 \u0432\u0438\u0430\u0434\u0443\u043a\u0430 \u041c\u0438\u0439\u043e.<s>", "example_title": "\u0412\u0438\u043a\u0438\u043f\u0435\u0434\u0438\u044f"}]} | IlyaGusev/rugpt3medium_sum_gazeta | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"causal-lm",
"summarization",
"ru",
"dataset:IlyaGusev/gazeta",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru"
] | TAGS
#transformers #pytorch #gpt2 #text-generation #causal-lm #summarization #ru #dataset-IlyaGusev/gazeta #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us
| RuGPT3MediumSumGazeta
=====================
Model description
-----------------
This is a model for abstractive summarization in Russian, based on rugpt3medium\_based\_on\_gpt2.
Intended uses & limitations
---------------------------
#### How to use
Colab: link
Training data
-------------
* Dataset: Gazeta
Training procedure
------------------
* Training script: URL
* Config: gpt\_training\_config.json
Eval results
------------
* Train dataset: Gazeta v1 train
* Test dataset: Gazeta v1 test
* Source max\_length: 600
* Target max\_length: 200
* no\_repeat\_ngram\_size: 4
* num\_beams: 5
* Train dataset: Gazeta v1 train
* Test dataset: Gazeta v2 test
* Source max\_length: 600
* Target max\_length: 200
* no\_repeat\_ngram\_size: 4
* num\_beams: 5
Evaluation script: URL
Flags: --language ru --tokenize-after --lower
| [
"#### How to use\n\n\nColab: link\n\n\nTraining data\n-------------\n\n\n* Dataset: Gazeta\n\n\nTraining procedure\n------------------\n\n\n* Training script: URL\n* Config: gpt\\_training\\_config.json\n\n\nEval results\n------------\n\n\n* Train dataset: Gazeta v1 train\n* Test dataset: Gazeta v1 test\n* Source max\\_length: 600\n* Target max\\_length: 200\n* no\\_repeat\\_ngram\\_size: 4\n* num\\_beams: 5\n\n\n\n* Train dataset: Gazeta v1 train\n* Test dataset: Gazeta v2 test\n* Source max\\_length: 600\n* Target max\\_length: 200\n* no\\_repeat\\_ngram\\_size: 4\n* num\\_beams: 5\n\n\n\nEvaluation script: URL\n\n\nFlags: --language ru --tokenize-after --lower"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #causal-lm #summarization #ru #dataset-IlyaGusev/gazeta #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n",
"#### How to use\n\n\nColab: link\n\n\nTraining data\n-------------\n\n\n* Dataset: Gazeta\n\n\nTraining procedure\n------------------\n\n\n* Training script: URL\n* Config: gpt\\_training\\_config.json\n\n\nEval results\n------------\n\n\n* Train dataset: Gazeta v1 train\n* Test dataset: Gazeta v1 test\n* Source max\\_length: 600\n* Target max\\_length: 200\n* no\\_repeat\\_ngram\\_size: 4\n* num\\_beams: 5\n\n\n\n* Train dataset: Gazeta v1 train\n* Test dataset: Gazeta v2 test\n* Source max\\_length: 600\n* Target max\\_length: 200\n* no\\_repeat\\_ngram\\_size: 4\n* num\\_beams: 5\n\n\n\nEvaluation script: URL\n\n\nFlags: --language ru --tokenize-after --lower"
] | [
61,
227
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #causal-lm #summarization #ru #dataset-IlyaGusev/gazeta #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n#### How to use\n\n\nColab: link\n\n\nTraining data\n-------------\n\n\n* Dataset: Gazeta\n\n\nTraining procedure\n------------------\n\n\n* Training script: URL\n* Config: gpt\\_training\\_config.json\n\n\nEval results\n------------\n\n\n* Train dataset: Gazeta v1 train\n* Test dataset: Gazeta v1 test\n* Source max\\_length: 600\n* Target max\\_length: 200\n* no\\_repeat\\_ngram\\_size: 4\n* num\\_beams: 5\n\n\n\n* Train dataset: Gazeta v1 train\n* Test dataset: Gazeta v2 test\n* Source max\\_length: 600\n* Target max\\_length: 200\n* no\\_repeat\\_ngram\\_size: 4\n* num\\_beams: 5\n\n\n\nEvaluation script: URL\n\n\nFlags: --language ru --tokenize-after --lower"
] |
summarization | transformers |
# RuT5TelegramHeadlines
## Model description
Based on [rut5-base](https://huggingface.co/cointegrated/rut5-base) model
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
model_name = "IlyaGusev/rut5_base_headline_gen_telegram"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
article_text = "..."
input_ids = tokenizer(
[article_text],
max_length=600,
add_special_tokens=True,
padding="max_length",
truncation=True,
return_tensors="pt"
)["input_ids"]
output_ids = model.generate(
input_ids=input_ids
)[0]
headline = tokenizer.decode(output_ids, skip_special_tokens=True)
print(headline)
```
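A minimal sketch of batched generation (the batching itself is illustrative; the card only prescribes single-text usage, and decoding parameters are left at their defaults):
```python
texts = ["...", "..."]
batch = tokenizer(
    texts,
    max_length=600,
    add_special_tokens=True,
    padding=True,
    truncation=True,
    return_tensors="pt"
)
output_ids = model.generate(
    input_ids=batch["input_ids"],
    attention_mask=batch["attention_mask"]
)
headlines = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
print(headlines)
```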
## Training data
- Dataset: [ru_all_split.tar.gz](https://www.dropbox.com/s/ykqk49a8avlmnaf/ru_all_split.tar.gz)
## Training procedure
- Training script: [train.py](https://github.com/IlyaGusev/summarus/blob/master/external/hf_scripts/train.py) | {"language": ["ru"], "license": "apache-2.0", "tags": ["summarization"], "widget": [{"text": "\u041a\u043e\u043c\u0438\u0441\u0441\u0438\u044f \u0421\u043e\u0432\u0435\u0442\u0430 \u0424\u0435\u0434\u0435\u0440\u0430\u0446\u0438\u0438 \u043f\u043e \u0438\u043d\u0444\u043e\u0440\u043c\u0430\u0446\u0438\u043e\u043d\u043d\u043e\u0439 \u043f\u043e\u043b\u0438\u0442\u0438\u043a\u0435 \u0438 \u0432\u0437\u0430\u0438\u043c\u043e\u0434\u0435\u0439\u0441\u0442\u0432\u0438\u044e \u0441\u043e \u0421\u041c\u0418 \u0441\u043e\u0432\u043c\u0435\u0441\u0442\u043d\u043e \u0441 \u0437\u0430\u0438\u043d\u0442\u0435\u0440\u0435\u0441\u043e\u0432\u0430\u043d\u043d\u044b\u043c\u0438 \u0432\u0435\u0434\u043e\u043c\u0441\u0442\u0432\u0430\u043c\u0438 \u0434\u0443\u043c\u0430\u0435\u0442 \u043d\u0430\u0434 \u0440\u0430\u0437\u0440\u0430\u0431\u043e\u0442\u043a\u043e\u0439 \u043d\u0430\u0446\u0438\u043e\u043d\u0430\u043b\u044c\u043d\u043e\u0433\u043e \u0437\u0430\u043a\u043e\u043d\u043e\u0434\u0430\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u0430 \u0432 \u043e\u0431\u043b\u0430\u0441\u0442\u0438 \u043d\u0430\u043b\u043e\u0433\u043e\u043e\u0431\u043b\u043e\u0436\u0435\u043d\u0438\u044f \u0433\u043b\u043e\u0431\u0430\u043b\u044c\u043d\u044b\u0445 \u0438\u043d\u0442\u0435\u0440\u043d\u0435\u0442-\u043a\u043e\u043c\u043f\u0430\u043d\u0438\u0439, \u0442\u0430\u043a\u0438\u0445 \u043a\u0430\u043a Google \u0438 Facebook. \u041e\u0431 \u044d\u0442\u043e\u043c \u0441\u043e\u043e\u0431\u0449\u0438\u043b \u0422\u0410\u0421\u0421 \u043f\u0440\u0435\u0434\u0441\u0435\u0434\u0430\u0442\u0435\u043b\u044c \u043a\u043e\u043c\u0438\u0441\u0441\u0438\u0438 \u0410\u043b\u0435\u043a\u0441\u0435\u0439 \u041f\u0443\u0448\u043a\u043e\u0432. \u00ab\u0412 \u043d\u0430\u0441\u0442\u043e\u044f\u0449\u0435\u0435 \u0432\u0440\u0435\u043c\u044f \u043f\u043e \u043b\u0438\u043d\u0438\u0438 \u041e\u042d\u0421\u0420 [\u041e\u0440\u0433\u0430\u043d\u0438\u0437\u0430\u0446\u0438\u044f \u044d\u043a\u043e\u043d\u043e\u043c\u0438\u0447\u0435\u0441\u043a\u043e\u0433\u043e \u0441\u043e\u0442\u0440\u0443\u0434\u043d\u0438\u0447\u0435\u0441\u0442\u0432\u0430 \u0438 \u0440\u0430\u0437\u0432\u0438\u0442\u0438\u044f] \u0432\u0435\u0434\u0435\u0442\u0441\u044f \u0440\u0430\u0437\u0440\u0430\u0431\u043e\u0442\u043a\u0430 \u043c\u0435\u0436\u0434\u0443\u043d\u0430\u0440\u043e\u0434\u043d\u043e\u0439 \u043a\u043e\u043d\u0432\u0435\u043d\u0446\u0438\u0438, \u043e\u0434\u043d\u0430\u043a\u043e \u0440\u0430\u0431\u043e\u0442\u0430 \u043d\u0430\u0434 \u043d\u0435\u0439 \u0435\u0449\u0435 \u043d\u0435 \u0437\u0430\u0432\u0435\u0440\u0448\u0435\u043d\u0430. \u0412 \u044d\u0442\u0438\u0445 \u0443\u0441\u043b\u043e\u0432\u0438\u044f\u0445 \u043c\u044b \u0438\u0441\u0445\u043e\u0434\u0438\u043c \u0438\u0437 \u0442\u043e\u0433\u043e, \u0447\u0442\u043e \u0441\u0430\u043c\u0430\u044f \u0440\u0430\u0437\u0443\u043c\u043d\u0430\u044f \u043f\u043e\u0437\u0438\u0446\u0438\u044f - \u043d\u0430\u0447\u0430\u0442\u044c \u0440\u0430\u0437\u0440\u0430\u0431\u043e\u0442\u043a\u0443 \u043d\u0430\u0446\u0438\u043e\u043d\u0430\u043b\u044c\u043d\u043e\u0433\u043e \u0437\u0430\u043a\u043e\u043d\u043e\u0434\u0430\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u0430, \u043d\u0435 \u0434\u043e\u0436\u0438\u0434\u0430\u044f\u0441\u044c \u043a\u043e\u043d\u0432\u0435\u043d\u0446\u0438\u0438\u00bb, \u2014 \u043f\u043e\u044f\u0441\u043d\u0438\u043b \u0441\u0435\u043d\u0430\u0442\u043e\u0440. 
\u041f\u0443\u0448\u043a\u043e\u0432 \u043e\u0442\u043c\u0435\u0442\u0438\u043b, \u0447\u0442\u043e \u043f\u043e \u0442\u0430\u043a\u043e\u043c\u0443 \u043f\u0443\u0442\u0438 \u043f\u043e\u0448\u043b\u0438 \u0435\u0449\u0435 \u043d\u0435\u0441\u043a\u043e\u043b\u044c\u043a\u043e \u0441\u0442\u0440\u0430\u043d, \u0432 \u0447\u0438\u0441\u043b\u0435 \u043a\u043e\u0442\u043e\u0440\u044b\u0445 \u0424\u0440\u0430\u043d\u0446\u0438\u044f, \u0410\u0432\u0441\u0442\u0440\u0430\u043b\u0438\u044f \u0438 \u0422\u0443\u0440\u0446\u0438\u044f. \u041f\u043e \u0435\u0433\u043e \u0441\u043b\u043e\u0432\u0430\u043c, \u0432 \u0420\u043e\u0441\u0441\u0438\u0438 \u0432\u0430\u0436\u043d\u043e \u0437\u0430\u0434\u0435\u0439\u0441\u0442\u0432\u043e\u0432\u0430\u0442\u044c \u0432 \u044d\u0442\u043e\u0439 \u0440\u0430\u0431\u043e\u0442\u0435 \u041c\u0438\u043d\u0444\u0438\u043d, \u0424\u041d\u0421, \u041c\u0418\u0414 \u0420\u0424 \u0438 \u0420\u043e\u0441\u043a\u043e\u043c\u043d\u0430\u0434\u0437\u043e\u0440. \u00ab\u0418\u043d\u0442\u0435\u0440\u043d\u0435\u0442-\u043f\u043b\u0430\u0442\u0444\u043e\u0440\u043c\u044b \u043d\u0435 \u0444\u0438\u0433\u0443\u0440\u0438\u0440\u0443\u044e\u0442 \u0443 \u043d\u0430\u0441 \u0441\u0435\u0439\u0447\u0430\u0441 \u043a\u0430\u043a \u043e\u0442\u0434\u0435\u043b\u044c\u043d\u044b\u0439 \u043e\u0431\u044a\u0435\u043a\u0442 \u043d\u0430\u043b\u043e\u0433\u043e\u043e\u0431\u043b\u043e\u0436\u0435\u043d\u0438\u044f. \u041a\u043e\u0433\u0434\u0430 \u043e\u043d\u0438 \u043e\u0442\u043a\u0440\u043e\u044e\u0442 \u0432 \u0420\u043e\u0441\u0441\u0438\u0438 \u0441\u0432\u043e\u0438 \u043f\u0440\u0435\u0434\u0441\u0442\u0430\u0432\u0438\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u0430 \u0432 \u0440\u0430\u043c\u043a\u0430\u0445 \u0437\u0430\u043a\u043e\u043d\u0430 \u043e \u00ab\u043f\u0440\u0438\u0437\u0435\u043c\u043b\u0435\u043d\u0438\u0438\u00bb, \u0432\u043e\u0437\u043d\u0438\u043a\u043d\u0435\u0442 \u0432\u043e\u043f\u0440\u043e\u0441: \u043a\u0430\u043a \u0438\u0445 \u043e\u0444\u0438\u0446\u0438\u0430\u043b\u044c\u043d\u043e\u0435 \u043f\u0440\u0438\u0441\u0443\u0442\u0441\u0442\u0432\u0438\u0435 \u043d\u0430 \u0442\u0435\u0440\u0440\u0438\u0442\u043e\u0440\u0438\u0438 \u0420\u043e\u0441\u0441\u0438\u0438, \u043a\u043e\u0442\u043e\u0440\u043e\u0433\u043e \u0441\u0435\u0439\u0447\u0430\u0441 \u043d\u0435\u0442, \u0431\u0443\u0434\u0435\u0442 \u0441\u043e\u043e\u0442\u043d\u043e\u0441\u0438\u0442\u044c\u0441\u044f \u0441 \u043d\u0430\u0448\u0438\u043c \u043d\u0430\u043b\u043e\u0433\u043e\u0432\u044b\u043c \u0440\u0435\u0436\u0438\u043c\u043e\u043c. 
\u041c\u044b \u0441\u0435\u0439\u0447\u0430\u0441 \u043f\u0440\u043e\u0434\u0443\u043c\u044b\u0432\u0430\u0435\u043c, \u043a\u0430\u043a \u0443\u0441\u0442\u0430\u043d\u043e\u0432\u0438\u0442\u044c \u044d\u0442\u0443 \u0432\u0437\u0430\u0438\u043c\u043e\u0441\u0432\u044f\u0437\u044c\u00bb, \u2014 \u0441\u043a\u0430\u0437\u0430\u043b \u041f\u0443\u0448\u043a\u043e\u0432, \u0434\u043e\u0431\u0430\u0432\u043b\u044f\u044f, \u0447\u0442\u043e \u0432\u043e\u043f\u0440\u043e\u0441 \u0432\u043d\u0435\u0441\u0435\u043d\u0438\u044f \u0438\u0437\u043c\u0435\u043d\u0435\u043d\u0438\u0439 \u0432 \u0440\u043e\u0441\u0441\u0438\u0439\u0441\u043a\u043e\u0435 \u0437\u0430\u043a\u043e\u043d\u043e\u0434\u0430\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u043e \u0432 \u0447\u0430\u0441\u0442\u0438 \u043d\u0430\u043b\u043e\u0433\u043e\u043e\u0431\u043b\u043e\u0436\u0435\u043d\u0438\u044f \u043a\u0440\u0443\u043f\u043d\u044b\u0445 IT-\u043a\u043e\u043c\u043f\u0430\u043d\u0438\u0439 \u043d\u0430\u0445\u043e\u0434\u0438\u0442\u0441\u044f \u00ab\u043d\u0430 \u043f\u0435\u0440\u0432\u043e\u0439 \u0441\u0442\u0430\u0434\u0438\u0438 \u0438\u0437\u0443\u0447\u0435\u043d\u0438\u044f\u00bb. \u0421\u0430\u043c \u0441\u0435\u043d\u0430\u0442\u043e\u0440 \u0432\u044b\u0441\u0442\u0443\u043f\u0430\u0435\u0442 \u0437\u0430 \u0432\u0432\u0435\u0434\u0435\u043d\u0438\u0435 \u043f\u0440\u043e\u0433\u0440\u0435\u0441\u0441\u0438\u0432\u043d\u043e\u0439 \u0441\u0442\u0430\u0432\u043a\u0438 \u043d\u0430\u043b\u043e\u0433\u0430 \u0432 \u0437\u0430\u0432\u0438\u0441\u0438\u043c\u043e\u0441\u0442\u0438 \u043e\u0442 \u043f\u0440\u0438\u0431\u044b\u043b\u0438 IT-\u043a\u043e\u043c\u043f\u0430\u043d\u0438\u0439 \u043d\u0430 \u0442\u0435\u0440\u0440\u0438\u0442\u043e\u0440\u0438\u0438 \u0441\u0442\u0440\u0430\u043d\u044b. \u041f\u0440\u0438 \u044d\u0442\u043e\u043c, \u043f\u043e\u0434\u0447\u0435\u0440\u043a\u043d\u0443\u043b \u043e\u043d, \u043e\u0434\u043d\u0430 \u0438\u0437 \u0437\u0430\u0434\u0430\u0447 \u043d\u0430\u0446\u0438\u043e\u043d\u0430\u043b\u044c\u043d\u043e\u0439 \u0441\u0438\u0441\u0442\u0435\u043c\u044b \u043d\u0430\u043b\u043e\u0433\u043e\u043e\u0431\u043b\u043e\u0436\u0435\u043d\u0438\u044f \u0431\u0443\u0434\u0435\u0442 \u0437\u0430\u043a\u043b\u044e\u0447\u0430\u0442\u044c\u0441\u044f \u0432 \u043f\u043e\u0434\u0441\u0447\u0435\u0442\u0435 \u043d\u0430\u043b\u043e\u0433\u043e\u043e\u0431\u043b\u0430\u0433\u0430\u0435\u043c\u043e\u0439 \u0431\u0430\u0437\u044b. \u0421\u0435\u0439\u0447\u0430\u0441 \u043a\u0440\u0443\u043f\u043d\u044b\u0435 \u0418\u0422-\u043a\u043e\u043c\u043f\u0430\u043d\u0438\u0438 \u0441\u0430\u043c\u043e\u0441\u0442\u043e\u044f\u0442\u0435\u043b\u044c\u043d\u043e \u043e\u0442\u0447\u0438\u0442\u044b\u0432\u0430\u044e\u0442\u0441\u044f \u043e \u0441\u0432\u043e\u0435\u0439 \u043f\u0440\u0438\u0431\u044b\u043b\u0438. 
\u041e\u0434\u043d\u0430\u043a\u043e \u0420\u043e\u0441\u0441\u0438\u0438 \u043d\u0443\u0436\u043d\u0430 \u0441\u043e\u0431\u0441\u0442\u0432\u0435\u043d\u043d\u0430\u044f \u0441\u0438\u0441\u0442\u0435\u043c\u0430 \u043f\u043e\u0434\u0441\u0447\u0435\u0442\u0430 \u0438\u0445 \u0434\u043e\u0445\u043e\u0434\u043e\u0432, \u043a\u043e\u0442\u043e\u0440\u0430\u044f \u043f\u043e\u0437\u0432\u043e\u043b\u0438\u0442 \u043e\u043f\u0440\u0435\u0434\u0435\u043b\u0438\u0442\u044c \u0438\u0445 \u00ab\u0440\u0435\u0430\u043b\u044c\u043d\u0443\u044e \u043d\u0430\u043b\u043e\u0433\u043e\u043e\u0431\u043b\u0430\u0433\u0430\u0435\u043c\u0443\u044e \u0431\u0430\u0437\u0443\u00bb, \u0441\u0447\u0438\u0442\u0430\u0435\u0442 \u041f\u0443\u0448\u043a\u043e\u0432. (https://www.gazeta.ru/tech/news/2021/12/17/n_17024239.shtml)", "example_title": "\u041d\u043e\u0432\u043e\u0441\u0442\u044c \u043f\u0440\u043e \u043d\u0430\u043b\u043e\u0433\u0438 \u0432 IT"}, {"text": "\u041f\u0435\u0440\u0432\u0443\u044e \u043c\u043d\u043e\u0433\u043e\u043d\u043e\u0436\u043a\u0443, \u0443 \u043a\u043e\u0442\u043e\u0440\u043e\u0439 \u0431\u043e\u043b\u0435\u0435 \u0442\u044b\u0441\u044f\u0447\u0438 \u043d\u043e\u0433, \u043e\u0431\u043d\u0430\u0440\u0443\u0436\u0438\u043b\u0438 \u0432 \u0430\u0432\u0441\u0442\u0440\u0430\u043b\u0438\u0439\u0441\u043a\u0438\u0445 \u043f\u0435\u0449\u0435\u0440\u0430\u0445 \u0431\u0438\u043e\u043b\u043e\u0433\u0438, \u0438\u0437\u0443\u0447\u0430\u0432\u0448\u0438\u0435 \u0442\u0430\u043c \u043f\u043e\u0434\u0437\u0435\u043c\u043d\u044b\u0435 \u0432\u043e\u0434\u044b. \u041f\u0440\u0435\u0434\u044b\u0434\u0443\u0449\u0435\u0439 \u0440\u0435\u043a\u043e\u0440\u0434\u0441\u043c\u0435\u043d\u043a\u043e\u0439 \u043f\u043e \u043a\u043e\u043b\u0438\u0447\u0435\u0441\u0442\u0432\u0443 \u043d\u043e\u0433 \u0431\u044b\u043b\u0430 700-\u043d\u043e\u0433\u0430\u044f \u043c\u043d\u043e\u0433\u043e\u043d\u043e\u0436\u043a\u0430. \u041d\u043e\u0432\u044b\u0439 \u0432\u0438\u0434 \u0438\u043c\u0435\u0435\u0442 \u0434\u043b\u0438\u043d\u043d\u043e\u0435 \u0442\u043e\u043d\u043a\u043e\u0435 \u0442\u0435\u043b\u043e, \u043f\u043e\u0445\u043e\u0436\u0435\u0435 \u043d\u0430 \u043d\u0438\u0442\u044c, \u0438 \u0431\u043e\u043b\u044c\u0448\u043e\u0435 \u043a\u043e\u043b\u0438\u0447\u0435\u0441\u0442\u0432\u043e \u043a\u043e\u043d\u0435\u0447\u043d\u043e\u0441\u0442\u0435\u0439, \u043f\u043e-\u0432\u0438\u0434\u0438\u043c\u043e\u043c\u0443, \u0434\u0430\u0435\u0442 \u043f\u0440\u0435\u0438\u043c\u0443\u0449\u0435\u0441\u0442\u0432\u0430 \u0434\u043b\u044f \u0431\u044b\u0441\u0442\u0440\u043e\u0433\u043e \u043f\u0435\u0440\u0435\u043c\u0435\u0449\u0435\u043d\u0438\u044f \u0438 \u043f\u0440\u043e\u043d\u0438\u043a\u043d\u043e\u0432\u0435\u043d\u0438\u044f \u0432 \u0442\u0440\u0443\u0434\u043d\u043e\u0434\u043e\u0441\u0442\u0443\u043f\u043d\u044b\u0435 \u043c\u0435\u0441\u0442\u0430 \u2014 \u0443\u0447\u0435\u043d\u044b\u0435 \u043f\u043e\u043b\u0430\u0433\u0430\u044e\u0442, \u0442\u0430\u043a\u0430\u044f \u043c\u043d\u043e\u0433\u043e\u043d\u043e\u0436\u043a\u0430 \u043c\u043e\u0436\u0435\u0442 \u0441\u043f\u043e\u043a\u043e\u0439\u043d\u043e \u043f\u0435\u0440\u0435\u043c\u0435\u0449\u0430\u0442\u044c\u0441\u044f \u043f\u043e \u0442\u0440\u0435\u0449\u0438\u043d\u0430\u043c \u0432 \u043a\u0430\u043c\u043d\u044f\u0445. 
\u0410\u0432\u0441\u0442\u0440\u0430\u043b\u0438\u044f \u0438\u0437\u0432\u0435\u0441\u0442\u043d\u0430 \u0441\u0432\u043e\u0438\u043c\u0438 \u043e\u0433\u0440\u043e\u043c\u043d\u044b\u043c\u0438 \u0438 \u0436\u0443\u0442\u043a\u043e\u0432\u0430\u0442\u044b\u043c\u0438 \u0436\u0438\u0432\u043e\u0442\u043d\u044b\u043c\u0438 \u0432\u0440\u043e\u0434\u0435 25-\u0441\u0430\u043d\u0442\u0438\u043c\u0435\u0442\u0440\u043e\u0432\u044b\u0445 \u043f\u0430\u0443\u043a\u043e\u0432. \u0422\u0435\u043f\u0435\u0440\u044c \u0441\u043f\u0438\u0441\u043e\u043a \u043f\u0443\u0433\u0430\u044e\u0449\u0438\u0445 \u0447\u043b\u0435\u043d\u0438\u0441\u0442\u043e\u043d\u043e\u0433\u0438\u0445 \u043f\u043e\u043f\u043e\u043b\u043d\u0438\u043b\u0441\u044f \u0441\u0430\u043c\u043e\u0439 \u00ab\u043c\u043d\u043e\u0433\u043e\u043d\u043e\u0433\u043e\u0439\u00bb \u0432 \u043c\u0438\u0440\u0435 \u043c\u043d\u043e\u0433\u043e\u043d\u043e\u0436\u043a\u043e\u0439, \u0443 \u043a\u043e\u0442\u043e\u0440\u043e\u0439 \u0431\u043e\u043b\u0435\u0435 \u0442\u044b\u0441\u044f\u0447\u0438 \u043d\u043e\u0433. \u041d\u0435\u043e\u0431\u044b\u0447\u043d\u043e\u0435 \u0436\u0438\u0432\u043e\u0442\u043d\u043e\u0435 \u043e\u0431\u043d\u0430\u0440\u0443\u0436\u0438\u043b\u0430 \u0433\u0440\u0443\u043f\u043f\u0430 \u0438\u0441\u0441\u043b\u0435\u0434\u043e\u0432\u0430\u0442\u0435\u043b\u0435\u0439 \u0438\u0437 \u0410\u0432\u0441\u0442\u0440\u0430\u043b\u0438\u0438 \u0438 \u0421\u0428\u0410 \u0432 \u043f\u0435\u0449\u0435\u0440\u0430\u0445 \u043d\u0430 \u0437\u0430\u043f\u0430\u0434\u0435 \u0441\u0442\u0440\u0430\u043d\u044b. \u041f\u043e\u0434\u0440\u043e\u0431\u043d\u0435\u0435 \u043c\u043d\u043e\u0433\u043e\u043d\u043e\u0436\u043a\u0443 \u0443\u0447\u0435\u043d\u044b\u0435 \u043e\u043f\u0438\u0441\u0430\u043b\u0438 \u0432 \u0441\u0442\u0430\u0442\u044c\u0435 \u0432 \u0436\u0443\u0440\u043d\u0430\u043b\u0435 Scientific Reports. \u0418\u0441\u0441\u043b\u0435\u0434\u043e\u0432\u0430\u0442\u0435\u043b\u0438 \u0437\u0430\u043d\u0438\u043c\u0430\u043b\u0438\u0441\u044c \u043e\u0446\u0435\u043d\u043a\u043e\u0439 \u0432\u043e\u0437\u0434\u0435\u0439\u0441\u0442\u0432\u0438\u044f \u043f\u043e\u0434\u0437\u0435\u043c\u043d\u044b\u0445 \u0432\u043e\u0434 \u043d\u0430 \u043e\u043a\u0440\u0443\u0436\u0430\u044e\u0449\u0443\u044e \u0441\u0440\u0435\u0434\u0443 \u0432 \u0437\u043e\u043d\u0435 \u0434\u043e\u0431\u044b\u0447\u0438 \u043f\u043e\u043b\u0435\u0437\u043d\u044b\u0445 \u0438\u0441\u043a\u043e\u043f\u0430\u0435\u043c\u044b\u0445 \u043d\u0430 \u0437\u0430\u043f\u0430\u0434\u0435 \u0441\u0442\u0440\u0430\u043d\u044b, \u043a\u043e\u0433\u0434\u0430 \u043d\u0430\u0442\u043a\u043d\u0443\u043b\u0438\u0441\u044c \u043d\u0430 \u043d\u043e\u0432\u044b\u0439 \u0432\u0438\u0434 \u043c\u043d\u043e\u0433\u043e\u043d\u043e\u0436\u0435\u043a. \u0412 \u043e\u0442\u043b\u0438\u0447\u0438\u0435 \u043e\u0442 \u0431\u043e\u043b\u044c\u0448\u0438\u043d\u0441\u0442\u0432\u0430 \u0441\u043e\u0440\u043e\u0434\u0438\u0447\u0435\u0439, \u0436\u0438\u0432\u0443\u0449\u0438\u0445 \u043d\u0430 \u043f\u043e\u0432\u0435\u0440\u0445\u043d\u043e\u0441\u0442\u0438, \u044d\u0442\u0438 \u043c\u043d\u043e\u0433\u043e\u043d\u043e\u0436\u043a\u0438 \u043e\u0431\u0438\u0442\u0430\u043b\u0438 \u0432 \u043f\u0435\u0449\u0435\u0440\u0430\u0445 \u043d\u0430 \u0433\u043b\u0443\u0431\u0438\u043d\u0435 \u0434\u043e 60 \u043c\u0435\u0442\u0440\u043e\u0432. 
\u041d\u043e\u0432\u044b\u0439 \u0432\u0438\u0434 \u0438\u0441\u0441\u043b\u0435\u0434\u043e\u0432\u0430\u0442\u0435\u043b\u0438 \u043d\u0430\u0437\u0432\u0430\u043b\u0438 Eumillipes persephone, \u0432 \u0447\u0435\u0441\u0442\u044c \u041f\u0435\u0440\u0441\u0435\u0444\u043e\u043d\u044b \u2014 \u0434\u0440\u0435\u0432\u043d\u0435\u0433\u0440\u0435\u0447\u0435\u0441\u043a\u043e\u0439 \u0431\u043e\u0433\u0438\u043d\u0438 \u043f\u043e\u0434\u0437\u0435\u043c\u043d\u043e\u0433\u043e \u043c\u0438\u0440\u0430. \u0423 \u043c\u043d\u043e\u0433\u043e\u043d\u043e\u0436\u043a\u0438 \u043e\u043a\u0430\u0437\u0430\u043b\u043e\u0441\u044c 1306 \u043d\u043e\u0433 \u2014 \u0431\u043e\u043b\u044c\u0448\u0435, \u0447\u0435\u043c \u0443 \u043b\u044e\u0431\u043e\u0433\u043e \u0434\u0440\u0443\u0433\u043e\u0433\u043e \u0438\u0437\u0432\u0435\u0441\u0442\u043d\u043e\u0433\u043e \u0432\u0438\u0434\u0430. \u041f\u0440\u0435\u0434\u044b\u0434\u0443\u0449\u0435\u0439 \u0440\u0435\u043a\u043e\u0440\u0434\u0441\u043c\u0435\u043d\u043a\u043e\u0439 \u0431\u044b\u043b\u0430 \u043a\u0430\u043b\u0438\u0444\u043e\u0440\u043d\u0438\u0439\u0441\u043a\u0430\u044f Illacme plenipes, \u0443 \u043a\u043e\u0442\u043e\u0440\u043e\u0439 \u043d\u0430\u0441\u0447\u0438\u0442\u044b\u0432\u0430\u043b\u043e\u0441\u044c \u0434\u043e 750 \u043d\u043e\u0433. \u00ab\u042d\u0442\u0438 \u0436\u0438\u0432\u043e\u0442\u043d\u044b\u0435 \u0431\u044b\u043b\u0438 \u043d\u0430\u0441\u0442\u043e\u043b\u044c\u043a\u043e \u0443\u043d\u0438\u043a\u0430\u043b\u044c\u043d\u044b, \u2014 \u0433\u043e\u0432\u043e\u0440\u0438\u0442 \u0431\u0438\u043e\u043b\u043e\u0433 \u0411\u0440\u0443\u043d\u043e \u0411\u0443\u0437\u0430\u0442\u0442\u043e. \u2014 \u041a\u0430\u043a \u0442\u043e\u043b\u044c\u043a\u043e \u044f \u043f\u043e\u043d\u044f\u043b, \u043a\u0430\u043a\u043e\u0439 \u0434\u043b\u0438\u043d\u044b \u043e\u043d\u0438 \u0431\u044b\u043b\u0438... \u0421\u0442\u0430\u043b\u043e \u044f\u0441\u043d\u043e, \u0447\u0442\u043e \u044d\u0442\u043e \u0447\u0442\u043e-\u0442\u043e \u0441\u043e\u0432\u0435\u0440\u0448\u0435\u043d\u043d\u043e \u043d\u043e\u0432\u043e\u0435\u00bb. \u0423 \u0415. persephone \u043d\u0438\u0442\u0435\u0432\u0438\u0434\u043d\u043e\u0435 \u0442\u0435\u043b\u043e \u0434\u043b\u0438\u043d\u043e\u0439 \u043e\u043a\u043e\u043b\u043e 9,5 \u0441\u043c \u0438 \u0448\u0438\u0440\u0438\u043d\u043e\u0439 \u0432\u0441\u0435\u0433\u043e \u043c\u0438\u043b\u043b\u0438\u043c\u0435\u0442\u0440, \u0441\u043e\u0441\u0442\u043e\u044f\u0449\u0435\u0435 \u0438\u0437 330 \u0441\u0435\u0433\u043c\u0435\u043d\u0442\u043e\u0432, \u043a\u043e\u0440\u043e\u0442\u043a\u0438\u0435 \u043d\u043e\u0433\u0438 \u0438 \u043a\u043e\u043d\u0443\u0441\u043e\u043e\u0431\u0440\u0430\u0437\u043d\u0430\u044f \u0433\u043e\u043b\u043e\u0432\u0430. \u041a\u0430\u043a \u0438 \u0434\u0440\u0443\u0433\u0438\u0435 \u0436\u0438\u0432\u043e\u0442\u043d\u044b\u0435, \u0436\u0438\u0432\u0443\u0449\u0438\u0435 \u0432 \u043f\u043e\u0441\u0442\u043e\u044f\u043d\u043d\u043e\u0439 \u0442\u0435\u043c\u043d\u043e\u0442\u0435, \u044d\u0442\u0438 \u043c\u043d\u043e\u0433\u043e\u043d\u043e\u0436\u043a\u0438 \u0431\u043b\u0435\u0434\u043d\u044b \u0438 \u0441\u043b\u0435\u043f\u044b. 
\u042d\u043d\u0442\u043e\u043c\u043e\u043b\u043e\u0433 \u041f\u043e\u043b \u041c\u0430\u0440\u0435\u043a \u0441\u0440\u0430\u0432\u043d\u0438\u0432\u0430\u0435\u0442 \u0435\u0435 \u0441 \u0431\u0435\u043b\u043e\u0439 \u043d\u0438\u0442\u044c\u044e, \u0432\u044b\u0434\u0435\u0440\u043d\u0443\u0442\u043e\u0439 \u0438\u0437 \u0440\u0443\u0431\u0430\u0448\u043a\u0438. \u0427\u0442\u043e\u0431\u044b \u043f\u043e\u0441\u0447\u0438\u0442\u0430\u0442\u044c \u043a\u043e\u043b\u0438\u0447\u0435\u0441\u0442\u0432\u043e \u043d\u043e\u0433, \u0443\u0447\u0435\u043d\u044b\u043c \u043f\u0440\u0438\u0448\u043b\u043e\u0441\u044c \u0441\u043d\u0430\u0447\u0430\u043b\u0430 \u0441\u043d\u044f\u0442\u044c \u043c\u043d\u043e\u0433\u043e\u043d\u043e\u0436\u043a\u0443 \u0432 \u0432\u044b\u0441\u043e\u043a\u043e\u043c \u0440\u0430\u0437\u0440\u0435\u0448\u0435\u043d\u0438\u0438, \u0430 \u0437\u0430\u0442\u0435\u043c \u0437\u0430\u043a\u0440\u0430\u0448\u0438\u0432\u0430\u0442\u044c \u043d\u0430 \u0444\u043e\u0442\u043e \u043a\u0430\u0436\u0434\u044b\u0439 \u0434\u0435\u0441\u044f\u0442\u043e\u043a \u043d\u043e\u0433 \u0434\u0440\u0443\u0433\u0438\u043c \u0446\u0432\u0435\u0442\u043e\u043c. (https://www.gazeta.ru/science/2021/12/17_a_14325355.shtml)", "example_title": "\u041d\u043e\u0432\u043e\u0441\u0442\u044c \u043f\u0440\u043e \u043c\u043d\u043e\u0433\u043e\u043d\u043e\u0436\u043a\u0443"}, {"text": "\u0412\u044b\u0441\u043e\u0442\u0430 \u0431\u0430\u0448\u043d\u0438 \u0441\u043e\u0441\u0442\u0430\u0432\u043b\u044f\u0435\u0442 324 \u043c\u0435\u0442\u0440\u0430 (1063 \u0444\u0443\u0442\u0430), \u043f\u0440\u0438\u043c\u0435\u0440\u043d\u043e \u0442\u0430\u043a\u0430\u044f \u0436\u0435 \u0432\u044b\u0441\u043e\u0442\u0430, \u043a\u0430\u043a \u0443 81-\u044d\u0442\u0430\u0436\u043d\u043e\u0433\u043e \u0437\u0434\u0430\u043d\u0438\u044f, \u0438 \u0441\u0430\u043c\u043e\u0435 \u0432\u044b\u0441\u043e\u043a\u043e\u0435 \u0441\u043e\u043e\u0440\u0443\u0436\u0435\u043d\u0438\u0435 \u0432 \u041f\u0430\u0440\u0438\u0436\u0435. \u0415\u0433\u043e \u043e\u0441\u043d\u043e\u0432\u0430\u043d\u0438\u0435 \u043a\u0432\u0430\u0434\u0440\u0430\u0442\u043d\u043e, \u0440\u0430\u0437\u043c\u0435\u0440\u043e\u043c 125 \u043c\u0435\u0442\u0440\u043e\u0432 (410 \u0444\u0443\u0442\u043e\u0432) \u0441 \u043b\u044e\u0431\u043e\u0439 \u0441\u0442\u043e\u0440\u043e\u043d\u044b. 
\u0412\u043e \u0432\u0440\u0435\u043c\u044f \u0441\u0442\u0440\u043e\u0438\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u0430 \u042d\u0439\u0444\u0435\u043b\u0435\u0432\u0430 \u0431\u0430\u0448\u043d\u044f \u043f\u0440\u0435\u0432\u0437\u043e\u0448\u043b\u0430 \u043c\u043e\u043d\u0443\u043c\u0435\u043d\u0442 \u0412\u0430\u0448\u0438\u043d\u0433\u0442\u043e\u043d\u0430, \u0441\u0442\u0430\u0432 \u0441\u0430\u043c\u044b\u043c \u0432\u044b\u0441\u043e\u043a\u0438\u043c \u0438\u0441\u043a\u0443\u0441\u0441\u0442\u0432\u0435\u043d\u043d\u044b\u043c \u0441\u043e\u043e\u0440\u0443\u0436\u0435\u043d\u0438\u0435\u043c \u0432 \u043c\u0438\u0440\u0435, \u0438 \u044d\u0442\u043e\u0442 \u0442\u0438\u0442\u0443\u043b \u043e\u043d\u0430 \u0443\u0434\u0435\u0440\u0436\u0438\u0432\u0430\u043b\u0430 \u0432 \u0442\u0435\u0447\u0435\u043d\u0438\u0435 41 \u0433\u043e\u0434\u0430 \u0434\u043e \u0437\u0430\u0432\u0435\u0440\u0448\u0435\u043d\u0438\u044f \u0441\u0442\u0440\u043e\u0438\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u043e \u0437\u0434\u0430\u043d\u0438\u044f \u041a\u0440\u0430\u0439\u0441\u043b\u0435\u0440 \u0432 \u041d\u044c\u044e-\u0419\u043e\u0440\u043a\u0435 \u0432 1930 \u0433\u043e\u0434\u0443. \u042d\u0442\u043e \u043f\u0435\u0440\u0432\u043e\u0435 \u0441\u043e\u043e\u0440\u0443\u0436\u0435\u043d\u0438\u0435 \u043a\u043e\u0442\u043e\u0440\u043e\u0435 \u0434\u043e\u0441\u0442\u0438\u0433\u043b\u043e \u0432\u044b\u0441\u043e\u0442\u044b 300 \u043c\u0435\u0442\u0440\u043e\u0432. \u0418\u0437-\u0437\u0430 \u0434\u043e\u0431\u0430\u0432\u043b\u0435\u043d\u0438\u044f \u0432\u0435\u0449\u0430\u0442\u0435\u043b\u044c\u043d\u043e\u0439 \u0430\u043d\u0442\u0435\u043d\u043d\u044b \u043d\u0430 \u0432\u0435\u0440\u0448\u0438\u043d\u0435 \u0431\u0430\u0448\u043d\u0438 \u0432 1957 \u0433\u043e\u0434\u0443 \u043e\u043d\u0430 \u0441\u0435\u0439\u0447\u0430\u0441 \u0432\u044b\u0448\u0435 \u0437\u0434\u0430\u043d\u0438\u044f \u041a\u0440\u0430\u0439\u0441\u043b\u0435\u0440 \u043d\u0430 5,2 \u043c\u0435\u0442\u0440\u0430 (17 \u0444\u0443\u0442\u043e\u0432). \u0417\u0430 \u0438\u0441\u043a\u043b\u044e\u0447\u0435\u043d\u0438\u0435\u043c \u043f\u0435\u0440\u0435\u0434\u0430\u0442\u0447\u0438\u043a\u043e\u0432, \u042d\u0439\u0444\u0435\u043b\u0435\u0432\u0430 \u0431\u0430\u0448\u043d\u044f \u044f\u0432\u043b\u044f\u0435\u0442\u0441\u044f \u0432\u0442\u043e\u0440\u043e\u0439 \u0441\u0430\u043c\u043e\u0439 \u0432\u044b\u0441\u043e\u043a\u043e\u0439 \u043e\u0442\u0434\u0435\u043b\u044c\u043d\u043e \u0441\u0442\u043e\u044f\u0449\u0435\u0439 \u0441\u0442\u0440\u0443\u043a\u0442\u0443\u0440\u043e\u0439 \u0432\u043e \u0424\u0440\u0430\u043d\u0446\u0438\u0438 \u043f\u043e\u0441\u043b\u0435 \u0432\u0438\u0430\u0434\u0443\u043a\u0430 \u041c\u0438\u0439\u043e.", "example_title": "\u0412\u0438\u043a\u0438\u043f\u0435\u0434\u0438\u044f"}]} | IlyaGusev/rut5_base_headline_gen_telegram | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"summarization",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru"
] | TAGS
#transformers #pytorch #t5 #text2text-generation #summarization #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# RuT5TelegramHeadlines
## Model description
Based on rut5-base model
## Intended uses & limitations
#### How to use
## Training data
- Dataset: ru_all_split.URL
## Training procedure
- Training script: URL | [
"# RuT5TelegramHeadlines",
"## Model description\n\nBased on rut5-base model",
"## Intended uses & limitations",
"#### How to use",
"## Training data\n\n- Dataset: ru_all_split.URL",
"## Training procedure\n\n- Training script: URL"
] | [
"TAGS\n#transformers #pytorch #t5 #text2text-generation #summarization #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# RuT5TelegramHeadlines",
"## Model description\n\nBased on rut5-base model",
"## Intended uses & limitations",
"#### How to use",
"## Training data\n\n- Dataset: ru_all_split.URL",
"## Training procedure\n\n- Training script: URL"
] | [
51,
9,
12,
6,
7,
16,
10
] | [
"TAGS\n#transformers #pytorch #t5 #text2text-generation #summarization #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# RuT5TelegramHeadlines## Model description\n\nBased on rut5-base model## Intended uses & limitations#### How to use## Training data\n\n- Dataset: ru_all_split.URL## Training procedure\n\n- Training script: URL"
] |
summarization | transformers |
# RuT5SumGazeta
## Model description
This is a model for abstractive summarization in Russian, based on [rut5-base](https://huggingface.co/cointegrated/rut5-base).
## Intended uses & limitations
#### How to use
Colab: [link](https://colab.research.google.com/drive/1re5E26ZIDUpAx1gOCZkbF3hcwjozmgG0)
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
model_name = "IlyaGusev/rut5_base_sum_gazeta"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
article_text = "..."
input_ids = tokenizer(
[article_text],
max_length=600,
add_special_tokens=True,
padding="max_length",
truncation=True,
return_tensors="pt"
)["input_ids"]
output_ids = model.generate(
input_ids=input_ids,
no_repeat_ngram_size=4
)[0]
summary = tokenizer.decode(output_ids, skip_special_tokens=True)
print(summary)
```
## Training data
- Dataset: [Gazeta](https://huggingface.co/datasets/IlyaGusev/gazeta)
## Training procedure
- Training script: [train.py](https://github.com/IlyaGusev/summarus/blob/master/external/hf_scripts/train.py)
- Config: [t5_training_config.json](https://github.com/IlyaGusev/summarus/blob/master/external/hf_scripts/configs/t5_training_config.json)
## Eval results
* Train dataset: **Gazeta v1 train**
* Test dataset: **Gazeta v1 test**
* Source max_length: **600**
* Target max_length: **200**
* no_repeat_ngram_size: **4**
* num_beams: **5**
| Model | R-1-f | R-2-f | R-L-f | chrF | METEOR | BLEU | Avg char length |
|:--------------------------|:------|:------|:------|:-------|:-------|:-----|:-----|
| [mbart_ru_sum_gazeta](https://huggingface.co/IlyaGusev/mbart_ru_sum_gazeta) | **32.4** | 14.3 | 28.0 | 39.7 | **26.4** | 12.1 | 371 |
| [rut5_base_sum_gazeta](https://huggingface.co/IlyaGusev/rut5_base_sum_gazeta) | 32.2 | **14.4** | **28.1** | **39.8** | 25.7 | **12.3** | 330 |
| [rugpt3medium_sum_gazeta](https://huggingface.co/IlyaGusev/rugpt3medium_sum_gazeta) | 26.2 | 7.7 | 21.7 | 33.8 | 18.2 | 4.3 | 244 |
* Train dataset: **Gazeta v1 train**
* Test dataset: **Gazeta v2 test**
* Source max_length: **600**
* Target max_length: **200**
* no_repeat_ngram_size: **4**
* num_beams: **5**
| Model | R-1-f | R-2-f | R-L-f | chrF | METEOR | BLEU | Avg char length |
|:--------------------------|:------|:------|:------|:-------|:-------|:-----|:-----|
| [mbart_ru_sum_gazeta](https://huggingface.co/IlyaGusev/mbart_ru_sum_gazeta) | **28.7** | **11.1** | 24.4 | **37.3** | **22.7** | **9.4** | 373 |
| [rut5_base_sum_gazeta](https://huggingface.co/IlyaGusev/rut5_base_sum_gazeta) | 28.6 | **11.1** | **24.5** | 37.2 | 22.0 | **9.4** | 331 |
| [rugpt3medium_sum_gazeta](https://huggingface.co/IlyaGusev/rugpt3medium_sum_gazeta) | 24.1 | 6.5 | 19.8 | 32.1 | 16.3 | 3.6 | 242 |
Predicting all summaries:
```python
import json
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration
from datasets import load_dataset
def gen_batch(inputs, batch_size):
batch_start = 0
while batch_start < len(inputs):
yield inputs[batch_start: batch_start + batch_size]
batch_start += batch_size
def predict(
model_name,
input_records,
output_file,
max_source_tokens_count=600,
batch_size=8
):
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name).to(device)
predictions = []
for batch in gen_batch(input_records, batch_size):
texts = [r["text"] for r in batch]
input_ids = tokenizer(
texts,
add_special_tokens=True,
max_length=max_source_tokens_count,
padding="max_length",
truncation=True,
return_tensors="pt"
)["input_ids"].to(device)
output_ids = model.generate(
input_ids=input_ids,
no_repeat_ngram_size=4
)
summaries = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
for s in summaries:
print(s)
predictions.extend(summaries)
with open(output_file, "w") as w:
for p in predictions:
w.write(p.strip().replace("\n", " ") + "\n")
gazeta_test = load_dataset('IlyaGusev/gazeta', script_version="v1.0")["test"]
predict("IlyaGusev/rut5_base_sum_gazeta", list(gazeta_test), "t5_predictions.txt")
```
Evaluation script: [evaluate.py](https://github.com/IlyaGusev/summarus/blob/master/evaluate.py)
Flags: --language ru --tokenize-after --lower
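
For a quick sanity check without the full evaluation script, approximate ROUGE F-measures can be computed with the `rouge` metric from `datasets`. This is a rough sketch, assuming the Gazeta records expose a `summary` field for the reference; the official script applies its own tokenization and lowercasing, so the numbers will not match the tables above exactly:

```python
from datasets import load_metric

rouge = load_metric("rouge")
with open("t5_predictions.txt") as f:
    predictions = [line.strip().lower() for line in f]
references = [record["summary"].lower() for record in gazeta_test]
scores = rouge.compute(predictions=predictions, references=references)
print(scores["rouge1"].mid.fmeasure, scores["rouge2"].mid.fmeasure, scores["rougeL"].mid.fmeasure)
```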
| {"language": ["ru"], "license": ["apache-2.0"], "tags": ["summarization", "t5"], "datasets": ["IlyaGusev/gazeta"], "inference": {"parameters": {"no_repeat_ngram_size": 4}}, "widget": [{"text": "\u0412\u044b\u0441\u043e\u0442\u0430 \u0431\u0430\u0448\u043d\u0438 \u0441\u043e\u0441\u0442\u0430\u0432\u043b\u044f\u0435\u0442 324 \u043c\u0435\u0442\u0440\u0430 (1063 \u0444\u0443\u0442\u0430), \u043f\u0440\u0438\u043c\u0435\u0440\u043d\u043e \u0442\u0430\u043a\u0430\u044f \u0436\u0435 \u0432\u044b\u0441\u043e\u0442\u0430, \u043a\u0430\u043a \u0443 81-\u044d\u0442\u0430\u0436\u043d\u043e\u0433\u043e \u0437\u0434\u0430\u043d\u0438\u044f, \u0438 \u0441\u0430\u043c\u043e\u0435 \u0432\u044b\u0441\u043e\u043a\u043e\u0435 \u0441\u043e\u043e\u0440\u0443\u0436\u0435\u043d\u0438\u0435 \u0432 \u041f\u0430\u0440\u0438\u0436\u0435. \u0415\u0433\u043e \u043e\u0441\u043d\u043e\u0432\u0430\u043d\u0438\u0435 \u043a\u0432\u0430\u0434\u0440\u0430\u0442\u043d\u043e, \u0440\u0430\u0437\u043c\u0435\u0440\u043e\u043c 125 \u043c\u0435\u0442\u0440\u043e\u0432 (410 \u0444\u0443\u0442\u043e\u0432) \u0441 \u043b\u044e\u0431\u043e\u0439 \u0441\u0442\u043e\u0440\u043e\u043d\u044b. \u0412\u043e \u0432\u0440\u0435\u043c\u044f \u0441\u0442\u0440\u043e\u0438\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u0430 \u042d\u0439\u0444\u0435\u043b\u0435\u0432\u0430 \u0431\u0430\u0448\u043d\u044f \u043f\u0440\u0435\u0432\u0437\u043e\u0448\u043b\u0430 \u043c\u043e\u043d\u0443\u043c\u0435\u043d\u0442 \u0412\u0430\u0448\u0438\u043d\u0433\u0442\u043e\u043d\u0430, \u0441\u0442\u0430\u0432 \u0441\u0430\u043c\u044b\u043c \u0432\u044b\u0441\u043e\u043a\u0438\u043c \u0438\u0441\u043a\u0443\u0441\u0441\u0442\u0432\u0435\u043d\u043d\u044b\u043c \u0441\u043e\u043e\u0440\u0443\u0436\u0435\u043d\u0438\u0435\u043c \u0432 \u043c\u0438\u0440\u0435, \u0438 \u044d\u0442\u043e\u0442 \u0442\u0438\u0442\u0443\u043b \u043e\u043d\u0430 \u0443\u0434\u0435\u0440\u0436\u0438\u0432\u0430\u043b\u0430 \u0432 \u0442\u0435\u0447\u0435\u043d\u0438\u0435 41 \u0433\u043e\u0434\u0430 \u0434\u043e \u0437\u0430\u0432\u0435\u0440\u0448\u0435\u043d\u0438\u044f \u0441\u0442\u0440\u043e\u0438\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u043e \u0437\u0434\u0430\u043d\u0438\u044f \u041a\u0440\u0430\u0439\u0441\u043b\u0435\u0440 \u0432 \u041d\u044c\u044e-\u0419\u043e\u0440\u043a\u0435 \u0432 1930 \u0433\u043e\u0434\u0443. \u042d\u0442\u043e \u043f\u0435\u0440\u0432\u043e\u0435 \u0441\u043e\u043e\u0440\u0443\u0436\u0435\u043d\u0438\u0435 \u043a\u043e\u0442\u043e\u0440\u043e\u0435 \u0434\u043e\u0441\u0442\u0438\u0433\u043b\u043e \u0432\u044b\u0441\u043e\u0442\u044b 300 \u043c\u0435\u0442\u0440\u043e\u0432. \u0418\u0437-\u0437\u0430 \u0434\u043e\u0431\u0430\u0432\u043b\u0435\u043d\u0438\u044f \u0432\u0435\u0449\u0430\u0442\u0435\u043b\u044c\u043d\u043e\u0439 \u0430\u043d\u0442\u0435\u043d\u043d\u044b \u043d\u0430 \u0432\u0435\u0440\u0448\u0438\u043d\u0435 \u0431\u0430\u0448\u043d\u0438 \u0432 1957 \u0433\u043e\u0434\u0443 \u043e\u043d\u0430 \u0441\u0435\u0439\u0447\u0430\u0441 \u0432\u044b\u0448\u0435 \u0437\u0434\u0430\u043d\u0438\u044f \u041a\u0440\u0430\u0439\u0441\u043b\u0435\u0440 \u043d\u0430 5,2 \u043c\u0435\u0442\u0440\u0430 (17 \u0444\u0443\u0442\u043e\u0432). 
\u0417\u0430 \u0438\u0441\u043a\u043b\u044e\u0447\u0435\u043d\u0438\u0435\u043c \u043f\u0435\u0440\u0435\u0434\u0430\u0442\u0447\u0438\u043a\u043e\u0432, \u042d\u0439\u0444\u0435\u043b\u0435\u0432\u0430 \u0431\u0430\u0448\u043d\u044f \u044f\u0432\u043b\u044f\u0435\u0442\u0441\u044f \u0432\u0442\u043e\u0440\u043e\u0439 \u0441\u0430\u043c\u043e\u0439 \u0432\u044b\u0441\u043e\u043a\u043e\u0439 \u043e\u0442\u0434\u0435\u043b\u044c\u043d\u043e \u0441\u0442\u043e\u044f\u0449\u0435\u0439 \u0441\u0442\u0440\u0443\u043a\u0442\u0443\u0440\u043e\u0439 \u0432\u043e \u0424\u0440\u0430\u043d\u0446\u0438\u0438 \u043f\u043e\u0441\u043b\u0435 \u0432\u0438\u0430\u0434\u0443\u043a\u0430 \u041c\u0438\u0439\u043e.", "example_title": "\u0412\u0438\u043a\u0438\u043f\u0435\u0434\u0438\u044f"}, {"text": "\u0421 1 \u0441\u0435\u043d\u0442\u044f\u0431\u0440\u044f \u0432 \u0420\u043e\u0441\u0441\u0438\u0438 \u0432\u0441\u0442\u0443\u043f\u0430\u044e\u0442 \u0432 \u0441\u0438\u043b\u0443 \u043f\u043e\u043f\u0440\u0430\u0432\u043a\u0438 \u0432 \u0437\u0430\u043a\u043e\u043d \u00ab\u041e \u0431\u0430\u043d\u043a\u0440\u043e\u0442\u0441\u0442\u0432\u0435\u00bb \u2014 \u0442\u0435\u043f\u0435\u0440\u044c \u0434\u043e\u043b\u0436\u043d\u0438\u043a\u0438 \u0441\u043c\u043e\u0433\u0443\u0442 \u043e\u0441\u0432\u043e\u0431\u043e\u0436\u0434\u0430\u0442\u044c\u0441\u044f \u043e\u0442 \u043d\u0435\u043f\u043e\u0441\u0438\u043b\u044c\u043d\u044b\u0445 \u043e\u0431\u044f\u0437\u0430\u0442\u0435\u043b\u044c\u0441\u0442\u0432 \u0432\u043e \u0432\u043d\u0435\u0441\u0443\u0434\u0435\u0431\u043d\u043e\u043c \u043f\u043e\u0440\u044f\u0434\u043a\u0435, \u0435\u0441\u043b\u0438 \u0441\u0443\u043c\u043c\u0430 \u0437\u0430\u0434\u043e\u043b\u0436\u0435\u043d\u043d\u043e\u0441\u0442\u0438 \u0441\u043e\u0441\u0442\u0430\u0432\u043b\u044f\u0435\u0442 \u043d\u0435 \u043c\u0435\u043d\u0435\u0435 50 \u0442\u044b\u0441. \u0440\u0443\u0431\u043b\u0435\u0439 \u0438 \u043d\u0435 \u043f\u0440\u0435\u0432\u044b\u0448\u0430\u0435\u0442 500 \u0442\u044b\u0441. \u0440\u0443\u0431\u043b\u0435\u0439 \u0431\u0435\u0437 \u0443\u0447\u0435\u0442\u0430 \u0448\u0442\u0440\u0430\u0444\u043e\u0432, \u043f\u0435\u043d\u0438, \u043f\u0440\u043e\u0446\u0435\u043d\u0442\u043e\u0432 \u0437\u0430 \u043f\u0440\u043e\u0441\u0440\u043e\u0447\u043a\u0443 \u043f\u043b\u0430\u0442\u0435\u0436\u0430 \u0438 \u043f\u0440\u043e\u0447\u0438\u0445 \u0438\u043c\u0443\u0449\u0435\u0441\u0442\u0432\u0435\u043d\u043d\u044b\u0445 \u0438\u043b\u0438 \u0444\u0438\u043d\u0430\u043d\u0441\u043e\u0432\u044b\u0445 \u0441\u0430\u043d\u043a\u0446\u0438\u0439. 
\u0423 \u0444\u0438\u0437\u043b\u0438\u0446 \u0438 \u0438\u043d\u0434\u0438\u0432\u0438\u0434\u0443\u0430\u043b\u044c\u043d\u044b\u0445 \u043f\u0440\u0435\u0434\u043f\u0440\u0438\u043d\u0438\u043c\u0430\u0442\u0435\u043b\u0435\u0439 \u043f\u043e\u044f\u0432\u0438\u043b\u0430\u0441\u044c \u0432\u043e\u0437\u043c\u043e\u0436\u043d\u043e\u0441\u0442\u044c \u043f\u0440\u043e\u0439\u0442\u0438 \u043f\u0440\u043e\u0446\u0435\u0434\u0443\u0440\u0443 \u0431\u0430\u043d\u043a\u0440\u043e\u0442\u0441\u0442\u0432\u0430 \u0431\u0435\u0437 \u0443\u0447\u0430\u0441\u0442\u0438\u044f \u0441\u0443\u0434\u0430 \u0438 \u0444\u0438\u043d\u0430\u043d\u0441\u043e\u0432\u043e\u0433\u043e \u0443\u043f\u0440\u0430\u0432\u043b\u044f\u044e\u0449\u0435\u0433\u043e \u2014 \u0434\u043e\u0441\u0442\u0430\u0442\u043e\u0447\u043d\u043e \u043f\u043e\u0434\u0430\u0442\u044c \u0441\u043e\u043e\u0442\u0432\u0435\u0442\u0441\u0442\u0432\u0443\u044e\u0449\u0435\u0435 \u0437\u0430\u044f\u0432\u043b\u0435\u043d\u0438\u0435 \u0447\u0435\u0440\u0435\u0437 \u041c\u0424\u0426. \u0421\u0443\u043c\u043c\u0443 \u0437\u0430\u0434\u043e\u043b\u0436\u0435\u043d\u043d\u043e\u0441\u0442\u0438 \u0438 \u0441\u043f\u0438\u0441\u043e\u043a \u0432\u0441\u0435\u0445 \u0438\u0437\u0432\u0435\u0441\u0442\u043d\u044b\u0445 \u0437\u0430\u044f\u0432\u0438\u0442\u0435\u043b\u044e \u043a\u0440\u0435\u0434\u0438\u0442\u043e\u0440\u043e\u0432 \u043d\u0443\u0436\u043d\u043e \u043f\u0440\u0435\u0434\u043e\u0441\u0442\u0430\u0432\u0438\u0442\u044c \u0441\u0430\u043c\u043e\u0441\u0442\u043e\u044f\u0442\u0435\u043b\u044c\u043d\u043e. \u0415\u0441\u043b\u0438 \u0432\u0441\u0435 \u0443\u0441\u043b\u043e\u0432\u0438\u044f \u0441\u043e\u0431\u043b\u044e\u0434\u0435\u043d\u044b, \u0441\u0432\u0435\u0434\u0435\u043d\u0438\u044f \u0432\u043d\u0435\u0441\u0443\u0442 \u0432 \u0415\u0434\u0438\u043d\u044b\u0439 \u0444\u0435\u0434\u0435\u0440\u0430\u043b\u044c\u043d\u044b\u0439 \u0440\u0435\u0435\u0441\u0442\u0440 \u0432 \u0442\u0435\u0447\u0435\u043d\u0438\u0435 \u0442\u0440\u0435\u0445 \u0440\u0430\u0431\u043e\u0447\u0438\u0445 \u0434\u043d\u0435\u0439. \u041f\u0440\u0438 \u044d\u0442\u043e\u043c \u043d\u0430 \u043c\u043e\u043c\u0435\u043d\u0442 \u043f\u043e\u0434\u0430\u0447\u0438 \u0437\u0430\u044f\u0432\u043b\u0435\u043d\u0438\u044f \u0432 \u043e\u0442\u043d\u043e\u0448\u0435\u043d\u0438\u0438 \u0437\u0430\u044f\u0432\u0438\u0442\u0435\u043b\u044f \u0434\u043e\u043b\u0436\u043d\u043e \u0431\u044b\u0442\u044c \u043e\u043a\u043e\u043d\u0447\u0435\u043d\u043e \u0438\u0441\u043f\u043e\u043b\u043d\u0438\u0442\u0435\u043b\u044c\u043d\u043e\u0435 \u043f\u0440\u043e\u0438\u0437\u0432\u043e\u0434\u0441\u0442\u0432\u043e \u0441 \u0432\u043e\u0437\u0432\u0440\u0430\u0449\u0435\u043d\u0438\u0435\u043c \u0438\u0441\u043f\u043e\u043b\u043d\u0438\u0442\u0435\u043b\u044c\u043d\u043e\u0433\u043e \u0434\u043e\u043a\u0443\u043c\u0435\u043d\u0442\u0430 \u0432\u0437\u044b\u0441\u043a\u0430\u0442\u0435\u043b\u044e. \u042d\u0442\u043e \u0437\u043d\u0430\u0447\u0438\u0442, \u0447\u0442\u043e \u0443 \u043f\u043e\u0442\u0435\u043d\u0446\u0438\u0430\u043b\u044c\u043d\u043e\u0433\u043e \u0431\u0430\u043d\u043a\u0440\u043e\u0442\u0430 \u043d\u0435 \u0434\u043e\u043b\u0436\u043d\u043e \u0431\u044b\u0442\u044c \u0438\u043c\u0443\u0449\u0435\u0441\u0442\u0432\u0430, \u043a\u043e\u0442\u043e\u0440\u043e\u0435 \u043c\u043e\u0436\u043d\u043e \u0432\u0437\u044b\u0441\u043a\u0430\u0442\u044c. 
\u041a\u0440\u043e\u043c\u0435 \u0442\u043e\u0433\u043e, \u0432 \u043e\u0442\u043d\u043e\u0448\u0435\u043d\u0438\u0438 \u0433\u0440\u0430\u0436\u0434\u0430\u043d\u0438\u043d\u0430 \u043d\u0435 \u0434\u043e\u043b\u0436\u043d\u043e \u0431\u044b\u0442\u044c \u0432\u043e\u0437\u0431\u0443\u0436\u0434\u0435\u043d\u043e \u0434\u0440\u0443\u0433\u043e\u0435 \u0438\u0441\u043f\u043e\u043b\u043d\u0438\u0442\u0435\u043b\u044c\u043d\u043e\u0435 \u043f\u0440\u043e\u0438\u0437\u0432\u043e\u0434\u0441\u0442\u0432\u043e. \u0412 \u043f\u0435\u0440\u0438\u043e\u0434 \u0432\u0441\u0435\u0439 \u043f\u0440\u043e\u0446\u0435\u0434\u0443\u0440\u044b \u0437\u0430\u044f\u0432\u0438\u0442\u0435\u043b\u044c \u043d\u0435 \u0441\u043c\u043e\u0436\u0435\u0442 \u0431\u0440\u0430\u0442\u044c \u0437\u0430\u0439\u043c\u044b, \u043a\u0440\u0435\u0434\u0438\u0442\u044b, \u0432\u044b\u0434\u0430\u0432\u0430\u0442\u044c \u043f\u043e\u0440\u0443\u0447\u0438\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u0430, \u0441\u043e\u0432\u0435\u0440\u0448\u0430\u0442\u044c \u0438\u043d\u044b\u0435 \u043e\u0431\u0435\u0441\u043f\u0435\u0447\u0438\u0442\u0435\u043b\u044c\u043d\u044b\u0435 \u0441\u0434\u0435\u043b\u043a\u0438. \u0412\u043d\u0435\u0441\u0443\u0434\u0435\u0431\u043d\u043e\u0435 \u0431\u0430\u043d\u043a\u0440\u043e\u0442\u0441\u0442\u0432\u043e \u0431\u0443\u0434\u0435\u0442 \u0434\u043b\u0438\u0442\u044c\u0441\u044f \u0448\u0435\u0441\u0442\u044c \u043c\u0435\u0441\u044f\u0446\u0435\u0432, \u0432 \u0442\u0435\u0447\u0435\u043d\u0438\u0435 \u043a\u043e\u0442\u043e\u0440\u044b\u0445 \u0442\u0430\u043a\u0436\u0435 \u0431\u0443\u0434\u0435\u0442 \u0434\u0435\u0439\u0441\u0442\u0432\u043e\u0432\u0430\u0442\u044c \u043c\u043e\u0440\u0430\u0442\u043e\u0440\u0438\u0439 \u043d\u0430 \u0443\u0434\u043e\u0432\u043b\u0435\u0442\u0432\u043e\u0440\u0435\u043d\u0438\u0435 \u0442\u0440\u0435\u0431\u043e\u0432\u0430\u043d\u0438\u0439 \u043a\u0440\u0435\u0434\u0438\u0442\u043e\u0440\u043e\u0432, \u043e\u0442\u043c\u0435\u0447\u0435\u043d\u043d\u044b\u0445 \u0432 \u0437\u0430\u044f\u0432\u043b\u0435\u043d\u0438\u0438 \u0434\u043e\u043b\u0436\u043d\u0438\u043a\u0430, \u0438 \u043c\u043e\u0440\u0430\u0442\u043e\u0440\u0438\u0439 \u043e\u0431 \u0443\u043f\u043b\u0430\u0442\u0435 \u043e\u0431\u044f\u0437\u0430\u0442\u0435\u043b\u044c\u043d\u044b\u0445 \u043f\u043b\u0430\u0442\u0435\u0436\u0435\u0439. \u041a\u0440\u043e\u043c\u0435 \u0442\u043e\u0433\u043e, \u043f\u0440\u0435\u043a\u0440\u0430\u0449\u0430\u0435\u0442\u0441\u044f \u043d\u0430\u0447\u0438\u0441\u043b\u0435\u043d\u0438\u0435 \u043d\u0435\u0443\u0441\u0442\u043e\u0435\u043a \u0438 \u0438\u043d\u044b\u0445 \u0444\u0438\u043d\u0430\u043d\u0441\u043e\u0432\u044b\u0445 \u0441\u0430\u043d\u043a\u0446\u0438\u0439; \u0438\u043c\u0443\u0449\u0435\u0441\u0442\u0432\u0435\u043d\u043d\u044b\u0435 \u0432\u0437\u044b\u0441\u043a\u0430\u043d\u0438\u044f (\u043a\u0440\u043e\u043c\u0435 \u0430\u043b\u0438\u043c\u0435\u043d\u0442\u043e\u0432) \u0442\u0430\u043a\u0436\u0435 \u0431\u0443\u0434\u0443\u0442 \u043f\u0440\u0438\u043e\u0441\u0442\u0430\u043d\u043e\u0432\u043b\u0435\u043d\u044b. 
\u041f\u043e \u0437\u0430\u0432\u0435\u0440\u0448\u0435\u043d\u0438\u044e \u043f\u0440\u043e\u0446\u0435\u0434\u0443\u0440\u044b \u0437\u0430\u044f\u0432\u0438\u0442\u0435\u043b\u044f \u043e\u0441\u0432\u043e\u0431\u043e\u0434\u044f\u0442 \u043e\u0442 \u0434\u0430\u043b\u044c\u043d\u0435\u0439\u0448\u0435\u0433\u043e \u0432\u044b\u043f\u043e\u043b\u043d\u0435\u043d\u0438\u044f \u0442\u0440\u0435\u0431\u043e\u0432\u0430\u043d\u0438\u0439 \u043a\u0440\u0435\u0434\u0438\u0442\u043e\u0440\u043e\u0432, \u0443\u043a\u0430\u0437\u0430\u043d\u043d\u044b\u0445 \u0432 \u0437\u0430\u044f\u0432\u043b\u0435\u043d\u0438\u0438 \u043e \u043f\u0440\u0438\u0437\u043d\u0430\u043d\u0438\u0438 \u0435\u0433\u043e \u0431\u0430\u043d\u043a\u0440\u043e\u0442\u043e\u043c, \u0430 \u044d\u0442\u0430 \u0437\u0430\u0434\u043e\u043b\u0436\u0435\u043d\u043d\u043e\u0441\u0442\u044c \u043f\u0440\u0438\u0437\u043d\u0430\u0435\u0442\u0441\u044f \u0431\u0435\u0437\u043d\u0430\u0434\u0435\u0436\u043d\u043e\u0439. \u0412 \u043f\u0440\u043e\u0448\u043b\u043e\u043c \u043c\u0435\u0441\u044f\u0446\u0435 \u0441\u0442\u0430\u043b\u043e \u0438\u0437\u0432\u0435\u0441\u0442\u043d\u043e, \u0447\u0442\u043e \u0437\u0430 \u043f\u0435\u0440\u0432\u043e\u0435 \u043f\u043e\u043b\u0443\u0433\u043e\u0434\u0438\u0435 2020 \u0433\u043e\u0434\u0430 \u0440\u043e\u0441\u0441\u0438\u0439\u0441\u043a\u0438\u0435 \u0441\u0443\u0434\u044b \u043f\u0440\u0438\u0437\u043d\u0430\u043b\u0438 \u0431\u0430\u043d\u043a\u0440\u043e\u0442\u0430\u043c\u0438 42,7 \u0442\u044b\u0441. \u0433\u0440\u0430\u0436\u0434\u0430\u043d (\u0432 \u0442\u043e\u043c \u0447\u0438\u0441\u043b\u0435 \u0438\u043d\u0434\u0438\u0432\u0438\u0434\u0443\u0430\u043b\u044c\u043d\u044b\u0445 \u043f\u0440\u0435\u0434\u043f\u0440\u0438\u043d\u0438\u043c\u0430\u0442\u0435\u043b\u0435\u0439) \u2014 \u043f\u043e \u0434\u0430\u043d\u043d\u044b\u043c \u0435\u0434\u0438\u043d\u043e\u0433\u043e \u0440\u0435\u0435\u0441\u0442\u0440\u0430 \u00ab\u0424\u0435\u0434\u0440\u0435\u0441\u0443\u0440\u0441\u00bb, \u044d\u0442\u043e \u043d\u0430 47,2% \u0431\u043e\u043b\u044c\u0448\u0435 \u043f\u043e\u043a\u0430\u0437\u0430\u0442\u0435\u043b\u044f \u0430\u043d\u0430\u043b\u043e\u0433\u0438\u0447\u043d\u043e\u0433\u043e \u043f\u0435\u0440\u0438\u043e\u0434\u0430 2019 \u0433\u043e\u0434\u0430. 
\u0420\u043e\u0441\u0442 \u0447\u0438\u0441\u043b\u0430 \u043e\u0431\u0430\u043d\u043a\u0440\u043e\u0442\u0438\u0432\u0448\u0438\u0445\u0441\u044f \u0433\u0440\u0430\u0436\u0434\u0430\u043d \u0432\u043e \u0432\u0442\u043e\u0440\u043e\u043c \u043a\u0432\u0430\u0440\u0442\u0430\u043b\u0435 \u043f\u043e \u0441\u0440\u0430\u0432\u043d\u0435\u043d\u0438\u044e \u0441 \u043f\u0435\u0440\u0432\u044b\u043c \u0437\u0430\u043c\u0435\u0434\u043b\u0438\u043b\u0441\u044f \u2014 \u0442\u0430\u043a\u0430\u044f \u0434\u0438\u043d\u0430\u043c\u0438\u043a\u0430 \u043e\u0431\u0443\u0441\u043b\u043e\u0432\u043b\u0435\u043d\u0430 \u0442\u0435\u043c, \u0447\u0442\u043e \u0432 \u043f\u0435\u0440\u0438\u043e\u0434 \u043e\u0433\u0440\u0430\u043d\u0438\u0447\u0435\u043d\u0438\u0439 \u0441 19 \u043c\u0430\u0440\u0442\u0430 \u043f\u043e 11 \u043c\u0430\u044f \u0441\u0443\u0434\u044b \u0440\u0435\u0434\u043a\u043e \u0440\u0430\u0441\u0441\u043c\u0430\u0442\u0440\u0438\u0432\u0430\u043b\u0438 \u0431\u0430\u043d\u043a\u0440\u043e\u0442\u043d\u044b\u0435 \u0434\u0435\u043b\u0430 \u043a\u043e\u043c\u043f\u0430\u043d\u0438\u0439 \u0438 \u043c\u0435\u043d\u044c\u0448\u0435, \u0447\u0435\u043c \u043e\u0431\u044b\u0447\u043d\u043e, \u0432 \u043e\u0442\u043d\u043e\u0448\u0435\u043d\u0438\u0438 \u0433\u0440\u0430\u0436\u0434\u0430\u043d, \u043e\u0431\u044a\u044f\u0441\u043d\u044f\u043b \u0440\u0443\u043a\u043e\u0432\u043e\u0434\u0438\u0442\u0435\u043b\u044c \u043f\u0440\u043e\u0435\u043a\u0442\u0430 \u00ab\u0424\u0435\u0434\u0440\u0435\u0441\u0443\u0440\u0441\u00bb \u0410\u043b\u0435\u043a\u0441\u0435\u0439 \u042e\u0445\u043d\u0438\u043d. \u041e\u043d \u043f\u0440\u043e\u0433\u043d\u043e\u0437\u0438\u0440\u0443\u0435\u0442, \u0447\u0442\u043e \u0432\u043e \u0432\u0442\u043e\u0440\u043e\u043c \u043f\u043e\u043b\u0443\u0433\u043e\u0434\u0438\u0438 \u043c\u044b \u0443\u0432\u0438\u0434\u0438\u043c \u0440\u043e\u0441\u0442 \u043f\u043e\u043a\u0430\u0437\u0430\u0442\u0435\u043b\u044f, \u043a\u043e\u0433\u0434\u0430 \u0441\u0443\u0434\u044b \u0440\u0430\u0441\u0441\u043c\u043e\u0442\u0440\u044f\u0442 \u0432\u0441\u0435 \u0434\u0435\u043b\u0430, \u0447\u0442\u043e \u043d\u0435 \u0441\u043c\u043e\u0433\u043b\u0438 \u0440\u0430\u043d\u0435\u0435 \u0432 \u0440\u0435\u0436\u0438\u043c\u0435 \u043e\u0433\u0440\u0430\u043d\u0438\u0447\u0435\u043d\u0438\u0439. \u041f\u043e \u0435\u0433\u043e \u0434\u0430\u043d\u043d\u044b\u043c, \u0443\u0436\u0435 \u0432 \u0438\u044e\u043d\u0435 \u0447\u0438\u0441\u043b\u043e \u043b\u0438\u0447\u043d\u044b\u0445 \u0431\u0430\u043d\u043a\u0440\u043e\u0442\u0441\u0442\u0432 \u0432\u044b\u0440\u043e\u0441\u043b\u043e \u0434\u043e 11,5 \u0442\u044b\u0441., \u0447\u0442\u043e \u0432 \u0434\u0432\u0430 \u0440\u0430\u0437\u0430 \u043f\u0440\u0435\u0432\u044b\u0448\u0430\u0435\u0442 \u043f\u043e\u043a\u0430\u0437\u0430\u0442\u0435\u043b\u044c \u0430\u043d\u0430\u043b\u043e\u0433\u0438\u0447\u043d\u043e\u0433\u043e \u043f\u0435\u0440\u0438\u043e\u0434\u0430 2019 \u0433\u043e\u0434\u0430.", "example_title": "\u041d\u043e\u0432\u043e\u0441\u0442\u0438"}, {"text": "\u0410\u043a\u0442\u0443\u0430\u043b\u044c\u043d\u043e\u0441\u0442\u044c \u043f\u0440\u043e\u0431\u043b\u0435\u043c\u044b. 
\u042d\u043b\u0435\u043a\u0442\u0440\u043e\u043d\u043d\u0430\u044f \u0438\u043d\u0444\u043e\u0440\u043c\u0430\u0446\u0438\u044f \u0438\u0433\u0440\u0430\u0435\u0442 \u0432\u0441\u0435 \u0431\u043e\u043b\u044c\u0448\u0443\u044e \u0440\u043e\u043b\u044c \u0432\u043e \u0432\u0441\u0435\u0445 \u0441\u0444\u0435\u0440\u0430\u0445 \u0436\u0438\u0437\u043d\u0438 \u0441\u043e\u0432\u0440\u0435\u043c\u0435\u043d\u043d\u043e\u0433\u043e \u043e\u0431\u0449\u0435\u0441\u0442\u0432\u0430. \u0412 \u043f\u043e\u0441\u043b\u0435\u0434\u043d\u0438\u0435 \u0433\u043e\u0434\u044b \u043e\u0431\u044a\u0435\u043c \u043d\u0430\u0443\u0447\u043d\u043e-\u0442\u0435\u0445\u043d\u0438\u0447\u0435\u0441\u043a\u043e\u0439 \u0442\u0435\u043a\u0441\u0442\u043e\u0432\u043e\u0439 \u0438\u043d\u0444\u043e\u0440\u043c\u0430\u0446\u0438\u0438 \u0432 \u044d\u043b\u0435\u043a\u0442\u0440\u043e\u043d\u043d\u043e\u043c \u0432\u0438\u0434\u0435 \u0432\u043e\u0437\u0440\u043e\u0441 \u043d\u0430\u0441\u0442\u043e\u043b\u044c\u043a\u043e, \u0447\u0442\u043e \u0432\u043e\u0437\u043d\u0438\u043a\u0430\u0435\u0442 \u0443\u0433\u0440\u043e\u0437\u0430 \u043e\u0431\u0435\u0441\u0446\u0435\u043d\u0438\u0432\u0430\u043d\u0438\u044f \u044d\u0442\u043e\u0439 \u0438\u043d\u0444\u043e\u0440\u043c\u0430\u0446\u0438\u0438 \u0432 \u0441\u0432\u044f\u0437\u0438 \u0441 \u0442\u0440\u0443\u0434\u043d\u043e\u0441\u0442\u044f\u043c\u0438 \u043f\u043e\u0438\u0441\u043a\u0430 \u043d\u0435\u043e\u0431\u0445\u043e\u0434\u0438\u043c\u044b\u0445 \u0441\u0432\u0435\u0434\u0435\u043d\u0438\u0439 \u0441\u0440\u0435\u0434\u0438 \u043c\u043d\u043e\u0436\u0435\u0441\u0442\u0432\u0430 \u0434\u043e\u0441\u0442\u0443\u043f\u043d\u044b\u0445 \u0442\u0435\u043a\u0441\u0442\u043e\u0432. \u0420\u0430\u0437\u0432\u0438\u0442\u0438\u0435 \u0438\u043d\u0444\u043e\u0440\u043c\u0430\u0446\u0438\u043e\u043d\u043d\u044b\u0445 \u0440\u0435\u0441\u0443\u0440\u0441\u043e\u0432 \u0418\u043d\u0442\u0435\u0440\u043d\u0435\u0442 \u043c\u043d\u043e\u0433\u043e\u043a\u0440\u0430\u0442\u043d\u043e \u0443\u0441\u0443\u0433\u0443\u0431\u0438\u043b\u043e \u043f\u0440\u043e\u0431\u043b\u0435\u043c\u0443 \u0438\u043d\u0444\u043e\u0440\u043c\u0430\u0446\u0438\u043e\u043d\u043d\u043e\u0439 \u043f\u0435\u0440\u0435\u0433\u0440\u0443\u0437\u043a\u0438. \u0412 \u044d\u0442\u043e\u0439 \u0441\u0438\u0442\u0443\u0430\u0446\u0438\u0438 \u043e\u0441\u043e\u0431\u0435\u043d\u043d\u043e \u0430\u043a\u0442\u0443\u0430\u043b\u044c\u043d\u044b\u043c\u0438 \u0441\u0442\u0430\u043d\u043e\u0432\u044f\u0442\u0441\u044f \u043c\u0435\u0442\u043e\u0434\u044b \u0430\u0432\u0442\u043e\u043c\u0430\u0442\u0438\u0437\u0430\u0446\u0438\u0438 \u0440\u0435\u0444\u0435\u0440\u0438\u0440\u043e\u0432\u0430\u043d\u0438\u044f \u0442\u0435\u043a\u0441\u0442\u043e\u0432\u043e\u0439 \u0438\u043d\u0444\u043e\u0440\u043c\u0430\u0446\u0438\u0438, \u0442\u043e \u0435\u0441\u0442\u044c \u043c\u0435\u0442\u043e\u0434\u044b \u043f\u043e\u043b\u0443\u0447\u0435\u043d\u0438\u044f \u0441\u0436\u0430\u0442\u043e\u0433\u043e \u043f\u0440\u0435\u0434\u0441\u0442\u0430\u0432\u043b\u0435\u043d\u0438\u044f \u0442\u0435\u043a\u0441\u0442\u043e\u0432\u044b\u0445 \u0434\u043e\u043a\u0443\u043c\u0435\u043d\u0442\u043e\u0432\u2013\u0440\u0435\u0444\u0435\u0440\u0430\u0442\u043e\u0432 (\u0430\u043d\u043d\u043e\u0442\u0430\u0446\u0438\u0439). 
\u041f\u043e\u0441\u0442\u0430\u043d\u043e\u0432\u043a\u0430 \u043f\u0440\u043e\u0431\u043b\u0435\u043c\u044b \u0430\u0432\u0442\u043e\u043c\u0430\u0442\u0438\u0447\u0435\u0441\u043a\u043e\u0433\u043e \u0440\u0435\u0444\u0435\u0440\u0438\u0440\u043e\u0432\u0430\u043d\u0438\u044f \u0442\u0435\u043a\u0441\u0442\u0430 \u0438 \u0441\u043e\u043e\u0442\u0432\u0435\u0442\u0441\u0442\u0432\u0435\u043d\u043d\u043e \u043f\u043e\u043f\u044b\u0442\u043a\u0438 \u0435\u0435 \u0440\u0435\u0448\u0435\u043d\u0438\u044f \u0441 \u0438\u0441\u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u043d\u0438\u0435\u043c \u0440\u0430\u0437\u043b\u0438\u0447\u043d\u044b\u0445 \u043f\u043e\u0434\u0445\u043e\u0434\u043e\u0432 \u043f\u0440\u0435\u0434\u043f\u0440\u0438\u043d\u0438\u043c\u0430\u043b\u0438\u0441\u044c \u043c\u043d\u043e\u0433\u0438\u043c\u0438 \u0438\u0441\u0441\u043b\u0435\u0434\u043e\u0432\u0430\u0442\u0435\u043b\u044f\u043c\u0438. \u0418\u0441\u0442\u043e\u0440\u0438\u044f \u043f\u0440\u0438\u043c\u0435\u043d\u0435\u043d\u0438\u044f \u0432\u044b\u0447\u0438\u0441\u043b\u0438\u0442\u0435\u043b\u044c\u043d\u043e\u0439 \u0442\u0435\u0445\u043d\u0438\u043a\u0438 \u0434\u043b\u044f \u0440\u0435\u0444\u0435\u0440\u0438\u0440\u043e\u0432\u0430\u043d\u0438\u044f \u043d\u0430\u0441\u0447\u0438\u0442\u044b\u0432\u0430\u0435\u0442 \u0443\u0436\u0435 \u0431\u043e\u043b\u0435\u0435 50 \u043b\u0435\u0442 \u0438 \u0441\u0432\u044f\u0437\u0430\u043d\u0430 \u0441 \u0438\u043c\u0435\u043d\u0430\u043c\u0438 \u0442\u0430\u043a\u0438\u0445 \u0438\u0441\u0441\u043b\u0435\u0434\u043e\u0432\u0430\u0442\u0435\u043b\u0435\u0439, \u043a\u0430\u043a \u0413.\u041f. \u041b\u0443\u043d, \u0412.\u0415. \u0411\u0435\u0440\u0437\u043e\u043d, \u0418.\u041f. C\u0435\u0432\u0431\u043e, \u042d.\u0424. \u0421\u043a\u043e\u0440\u043e\u0445\u043e\u0434\u044c\u043a\u043e, \u0414.\u0413. \u041b\u0430\u0445\u0443\u0442\u0438, \u0420.\u0413. \u041f\u0438\u043e\u0442\u0440\u043e\u0432\u0441\u043a\u0438\u0439 \u0438 \u0434\u0440. 
\u0417\u0430 \u044d\u0442\u0438 \u0433\u043e\u0434\u044b \u0432\u044b\u0440\u0430\u0431\u043e\u0442\u0430\u043d\u044b \u043c\u043d\u043e\u0433\u043e\u0447\u0438\u0441\u043b\u0435\u043d\u043d\u044b\u0435 \u043f\u043e\u0434\u0445\u043e\u0434\u044b \u043a \u0440\u0435\u0448\u0435\u043d\u0438\u044e \u0434\u0430\u043d\u043d\u043e\u0439 \u043f\u0440\u043e\u0431\u043b\u0435\u043c\u044b, \u043a\u043e\u0442\u043e\u0440\u044b\u0435 \u0434\u043e\u0441\u0442\u0430\u0442\u043e\u0447\u043d\u043e \u0447\u0435\u0442\u043a\u043e \u043f\u043e\u0434\u0440\u0430\u0437\u0434\u0435\u043b\u044f\u044e\u0442\u0441\u044f \u043d\u0430 \u0434\u0432\u0430 \u043d\u0430\u043f\u0440\u0430\u0432\u043b\u0435\u043d\u0438\u044f: \u0430\u0432\u0442\u043e\u043c\u0430\u0442\u0438\u0447\u0435\u0441\u043a\u043e\u0435 \u0440\u0435\u0444\u0435\u0440\u0438\u0440\u043e\u0432\u0430\u043d\u0438\u0435, \u043e\u0441\u043d\u043e\u0432\u0430\u043d\u043d\u043e\u0435 \u043d\u0430 \u044d\u043a\u0441\u0442\u0440\u0430\u0433\u0438\u0440\u043e\u0432\u0430\u043d\u0438\u0438 \u0438\u0437 \u043f\u0435\u0440\u0432\u0438\u0447\u043d\u044b\u0445 \u0434\u043e\u043a\u0443\u043c\u0435\u043d\u0442\u043e\u0432 \u0441 \u043f\u043e\u043c\u043e\u0449\u044c\u044e \u043e\u043f\u0440\u0435\u0434\u0435\u043b\u0435\u043d\u043d\u044b\u0445 \u0444\u043e\u0440\u043c\u0430\u043b\u044c\u043d\u044b\u0445 \u043f\u0440\u0438\u0437\u043d\u0430\u043a\u043e\u0432 \u00ab\u043d\u0430\u0438\u0431\u043e\u043b\u0435\u0435 \u0438\u043d\u0444\u043e\u0440\u043c\u0430\u0442\u0438\u0432\u043d\u044b\u0445\u00bb \u0444\u0440\u0430\u0437 (\u0444\u0440\u0430\u0433\u043c\u0435\u043d\u0442\u043e\u0432), \u0441\u043e\u0432\u043e\u043a\u0443\u043f\u043d\u043e\u0441\u0442\u044c \u043a\u043e\u0442\u043e\u0440\u044b\u0445 \u043e\u0431\u0440\u0430\u0437\u0443\u0435\u0442 \u043d\u0435\u043a\u043e\u0442\u043e\u0440\u044b\u0439 \u044d\u043a\u0441\u0442\u0440\u0430\u043a\u0442; \u0430\u0432\u0442\u043e\u043c\u0430\u0442\u0438\u0447\u0435\u0441\u043a\u043e\u0435 \u0440\u0435\u0444\u0435\u0440\u0438\u0440\u043e\u0432\u0430\u043d\u0438\u0435, \u043e\u0441\u043d\u043e\u0432\u0430\u043d\u043d\u043e\u0435 \u043d\u0430 \u0432\u044b\u0434\u0435\u043b\u0435\u043d\u0438\u0438 \u0438\u0437 \u0442\u0435\u043a\u0441\u0442\u043e\u0432 \u0441 \u043f\u043e\u043c\u043e\u0449\u044c\u044e \u0441\u043f\u0435\u0446\u0438\u0430\u043b\u044c\u043d\u044b\u0445 \u0438\u043d\u0444\u043e\u0440\u043c\u0430\u0446\u0438\u043e\u043d\u043d\u044b\u0445 \u044f\u0437\u044b\u043a\u043e\u0432 \u043d\u0430\u0438\u0431\u043e\u043b\u0435\u0435 \u0441\u0443\u0449\u0435\u0441\u0442\u0432\u0435\u043d\u043d\u043e\u0439 \u0438\u043d\u0444\u043e\u0440\u043c\u0430\u0446\u0438\u0438 \u0438 \u043f\u043e\u0440\u043e\u0436\u0434\u0435\u043d\u0438\u0438 \u043d\u043e\u0432\u044b\u0445 \u0442\u0435\u043a\u0441\u0442\u043e\u0432 (\u0440\u0435\u0444\u0435\u0440\u0430\u0442\u043e\u0432), \u0441\u043e\u0434\u0435\u0440\u0436\u0430\u0442\u0435\u043b\u044c\u043d\u043e \u043e\u0431\u043e\u0431\u0449\u0430\u044e\u0449\u0438\u0445 \u043f\u0435\u0440\u0432\u0438\u0447\u043d\u044b\u0435 \u0434\u043e\u043a\u0443\u043c\u0435\u043d\u0442\u044b.", "example_title": "\u041d\u0430\u0443\u0447\u043d\u0430\u044f \u0441\u0442\u0430\u0442\u044c\u044f"}]} | IlyaGusev/rut5_base_sum_gazeta | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"summarization",
"ru",
"dataset:IlyaGusev/gazeta",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru"
] | TAGS
#transformers #pytorch #t5 #text2text-generation #summarization #ru #dataset-IlyaGusev/gazeta #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
| RuT5SumGazeta
=============
Model description
-----------------
This is the model for abstractive summarization for Russian based on rut5-base.
Intended uses & limitations
---------------------------
#### How to use
Colab: link
Training data
-------------
* Dataset: Gazeta
Training procedure
------------------
* Training script: URL
* Config: t5\_training\_config.json
Eval results
------------
* Train dataset: Gazeta v1 train
* Test dataset: Gazeta v1 test
* Source max\_length: 600
* Target max\_length: 200
* no\_repeat\_ngram\_size: 4
* num\_beams: 5
* Train dataset: Gazeta v1 train
* Test dataset: Gazeta v2 test
* Source max\_length: 600
* Target max\_length: 200
* no\_repeat\_ngram\_size: 4
* num\_beams: 5
Predicting all summaries:
Evaluation script: URL
Flags: --language ru --tokenize-after --lower
| [
"#### How to use\n\n\nColab: link\n\n\nTraining data\n-------------\n\n\n* Dataset: Gazeta\n\n\nTraining procedure\n------------------\n\n\n* Training script: URL\n* Config: t5\\_training\\_config.json\n\n\nEval results\n------------\n\n\n* Train dataset: Gazeta v1 train\n* Test dataset: Gazeta v1 test\n* Source max\\_length: 600\n* Target max\\_length: 200\n* no\\_repeat\\_ngram\\_size: 4\n* num\\_beams: 5\n\n\n\n* Train dataset: Gazeta v1 train\n* Test dataset: Gazeta v2 test\n* Source max\\_length: 600\n* Target max\\_length: 200\n* no\\_repeat\\_ngram\\_size: 4\n* num\\_beams: 5\n\n\n\nPredicting all summaries:\n\n\nEvaluation script: URL\n\n\nFlags: --language ru --tokenize-after --lower"
] | [
"TAGS\n#transformers #pytorch #t5 #text2text-generation #summarization #ru #dataset-IlyaGusev/gazeta #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"#### How to use\n\n\nColab: link\n\n\nTraining data\n-------------\n\n\n* Dataset: Gazeta\n\n\nTraining procedure\n------------------\n\n\n* Training script: URL\n* Config: t5\\_training\\_config.json\n\n\nEval results\n------------\n\n\n* Train dataset: Gazeta v1 train\n* Test dataset: Gazeta v1 test\n* Source max\\_length: 600\n* Target max\\_length: 200\n* no\\_repeat\\_ngram\\_size: 4\n* num\\_beams: 5\n\n\n\n* Train dataset: Gazeta v1 train\n* Test dataset: Gazeta v2 test\n* Source max\\_length: 600\n* Target max\\_length: 200\n* no\\_repeat\\_ngram\\_size: 4\n* num\\_beams: 5\n\n\n\nPredicting all summaries:\n\n\nEvaluation script: URL\n\n\nFlags: --language ru --tokenize-after --lower"
] | [
66,
233
] | [
"TAGS\n#transformers #pytorch #t5 #text2text-generation #summarization #ru #dataset-IlyaGusev/gazeta #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n#### How to use\n\n\nColab: link\n\n\nTraining data\n-------------\n\n\n* Dataset: Gazeta\n\n\nTraining procedure\n------------------\n\n\n* Training script: URL\n* Config: t5\\_training\\_config.json\n\n\nEval results\n------------\n\n\n* Train dataset: Gazeta v1 train\n* Test dataset: Gazeta v1 test\n* Source max\\_length: 600\n* Target max\\_length: 200\n* no\\_repeat\\_ngram\\_size: 4\n* num\\_beams: 5\n\n\n\n* Train dataset: Gazeta v1 train\n* Test dataset: Gazeta v2 test\n* Source max\\_length: 600\n* Target max\\_length: 200\n* no\\_repeat\\_ngram\\_size: 4\n* num\\_beams: 5\n\n\n\nPredicting all summaries:\n\n\nEvaluation script: URL\n\n\nFlags: --language ru --tokenize-after --lower"
] |
text-classification | transformers |
# XLM-RoBERTa HeadlineCause Full
## Model description
This model was trained to predict the presence of causal relations between two headlines. This model is for the Full task with 7 possible labels: titles are almost the same, A causes B, B causes A, A refutes B, B refutes A, A linked with B in another way, A is not linked to B. English and Russian languages are supported.
You can use the hosted inference API to infer a label for a headline pair. To do this, you should separate the headlines with the ```</s>``` token.
For example:
```
Песков опроверг свой перевод на удаленку</s>Дмитрий Песков перешел на удаленку
```
## Intended uses & limitations
#### How to use
```python
from tqdm.notebook import tqdm
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
def get_batch(data, batch_size):
start_index = 0
while start_index < len(data):
end_index = start_index + batch_size
batch = data[start_index:end_index]
yield batch
start_index = end_index
def pipe_predict(data, pipe, batch_size=64):
raw_preds = []
for batch in tqdm(get_batch(data, batch_size)):
raw_preds += pipe(batch)
return raw_preds
MODEL_NAME = TOKENIZER_NAME = "IlyaGusev/xlm_roberta_large_headline_cause_full"
tokenizer = AutoTokenizer.from_pretrained(TOKENIZER_NAME, do_lower_case=False)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, framework="pt", return_all_scores=True)
texts = [
(
"Judge issues order to allow indoor worship in NC churches",
"Some local churches resume indoor services after judge lifted NC governorโs restriction"
),
(
"Gov. Kevin Stitt defends $2 million purchase of malaria drug touted by Trump",
"Oklahoma spent $2 million on malaria drug touted by Trump"
),
(
"ะะตัะบะพะฒ ะพะฟัะพะฒะตัะณ ัะฒะพะน ะฟะตัะตะฒะพะด ะฝะฐ ัะดะฐะปะตะฝะบั",
"ะะผะธััะธะน ะะตัะบะพะฒ ะฟะตัะตัะตะป ะฝะฐ ัะดะฐะปะตะฝะบั"
)
]
pipe_predict(texts, pipe)
```
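Because the pipeline is created with `return_all_scores=True`, each prediction is a list of per-class scores. A short post-processing sketch for picking the top label (label names come from the model's `id2label` config):

```python
predictions = pipe_predict(texts, pipe)
for pair, scores in zip(texts, predictions):
    best = max(scores, key=lambda item: item["score"])
    print(pair, "->", best["label"], round(best["score"], 3))
```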
#### Limitations and bias
The models are intended to be used on news headlines. No other limitations are known.
## Training data
* HuggingFace dataset: [IlyaGusev/headline_cause](https://huggingface.co/datasets/IlyaGusev/headline_cause)
* GitHub: [IlyaGusev/HeadlineCause](https://github.com/IlyaGusev/HeadlineCause)
## Training procedure
* Notebook: [HeadlineCause](https://colab.research.google.com/drive/1NAnD0OJ0TnYCJRsHpYUyYkjr_yi8ObcA)
* Stand-alone script: [train.py](https://github.com/IlyaGusev/HeadlineCause/blob/main/headline_cause/train.py)
## Eval results
Evaluation results can be found in the [arxiv paper](https://arxiv.org/pdf/2108.12626.pdf).
### BibTeX entry and citation info
```bibtex
@misc{gusev2021headlinecause,
title={HeadlineCause: A Dataset of News Headlines for Detecting Causalities},
author={Ilya Gusev and Alexey Tikhonov},
year={2021},
eprint={2108.12626},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {"language": ["ru", "en"], "license": "apache-2.0", "tags": ["xlm-roberta-large"], "datasets": ["IlyaGusev/headline_cause"], "widget": [{"text": "\u041f\u0435\u0441\u043a\u043e\u0432 \u043e\u043f\u0440\u043e\u0432\u0435\u0440\u0433 \u0441\u0432\u043e\u0439 \u043f\u0435\u0440\u0435\u0432\u043e\u0434 \u043d\u0430 \u0443\u0434\u0430\u043b\u0435\u043d\u043a\u0443</s>\u0414\u043c\u0438\u0442\u0440\u0438\u0439 \u041f\u0435\u0441\u043a\u043e\u0432 \u043f\u0435\u0440\u0435\u0448\u0435\u043b \u043d\u0430 \u0443\u0434\u0430\u043b\u0435\u043d\u043a\u0443"}]} | IlyaGusev/xlm_roberta_large_headline_cause_full | null | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"xlm-roberta-large",
"ru",
"en",
"dataset:IlyaGusev/headline_cause",
"arxiv:2108.12626",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2108.12626"
] | [
"ru",
"en"
] | TAGS
#transformers #pytorch #xlm-roberta #text-classification #xlm-roberta-large #ru #en #dataset-IlyaGusev/headline_cause #arxiv-2108.12626 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# XLM-RoBERTa HeadlineCause Full
## Model description
This model was trained to predict the presence of causal relations between two headlines. This model is for the Full task with 7 possible labels: titles are almost the same, A causes B, B causes A, A refutes B, B refutes A, A linked with B in another way, A is not linked to B. English and Russian languages are supported.
You can use the hosted inference API to infer a label for a headline pair. To do this, you should separate the headlines with the token.
For example:
## Intended uses & limitations
#### How to use
#### Limitations and bias
The models are intended to be used on news headlines. No other limitations are known.
## Training data
* HuggingFace dataset: IlyaGusev/headline_cause
* GitHub: IlyaGusev/HeadlineCause
## Training procedure
* Notebook: HeadlineCause
* Stand-alone script: URL
## Eval results
Evaluation results can be found in the arxiv paper.
### BibTeX entry and citation info
| [
"# XLM-RoBERTa HeadlineCause Full",
"## Model description\n\nThis model was trained to predict the presence of causal relations between two headlines. This model is for the Full task with 7 possible labels: titles are almost the same, A causes B, B causes A, A refutes B, B refutes A, A linked with B in another way, A is not linked to B. English and Russian languages are supported.\n\nYou can use hosted inference API to infer a label for a headline pair. To do this, you shoud seperate headlines with token.\nFor example:",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\nThe models are intended to be used on news headlines. No other limitations are known.",
"## Training data\n\n* HuggingFace dataset: IlyaGusev/headline_cause\n* GitHub: IlyaGusev/HeadlineCause",
"## Training procedure\n\n* Notebook: HeadlineCause\n* Stand-alone script: URL",
"## Eval results\n\nEvaluation results can be found in the arxiv paper.",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #xlm-roberta #text-classification #xlm-roberta-large #ru #en #dataset-IlyaGusev/headline_cause #arxiv-2108.12626 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# XLM-RoBERTa HeadlineCause Full",
"## Model description\n\nThis model was trained to predict the presence of causal relations between two headlines. This model is for the Full task with 7 possible labels: titles are almost the same, A causes B, B causes A, A refutes B, B refutes A, A linked with B in another way, A is not linked to B. English and Russian languages are supported.\n\nYou can use hosted inference API to infer a label for a headline pair. To do this, you shoud seperate headlines with token.\nFor example:",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\nThe models are intended to be used on news headlines. No other limitations are known.",
"## Training data\n\n* HuggingFace dataset: IlyaGusev/headline_cause\n* GitHub: IlyaGusev/HeadlineCause",
"## Training procedure\n\n* Notebook: HeadlineCause\n* Stand-alone script: URL",
"## Eval results\n\nEvaluation results can be found in the arxiv paper.",
"### BibTeX entry and citation info"
] | [
72,
9,
111,
6,
7,
24,
31,
18,
17,
10
] | [
"TAGS\n#transformers #pytorch #xlm-roberta #text-classification #xlm-roberta-large #ru #en #dataset-IlyaGusev/headline_cause #arxiv-2108.12626 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# XLM-RoBERTa HeadlineCause Full## Model description\n\nThis model was trained to predict the presence of causal relations between two headlines. This model is for the Full task with 7 possible labels: titles are almost the same, A causes B, B causes A, A refutes B, B refutes A, A linked with B in another way, A is not linked to B. English and Russian languages are supported.\n\nYou can use hosted inference API to infer a label for a headline pair. To do this, you shoud seperate headlines with token.\nFor example:## Intended uses & limitations#### How to use#### Limitations and bias\n\nThe models are intended to be used on news headlines. No other limitations are known.## Training data\n\n* HuggingFace dataset: IlyaGusev/headline_cause\n* GitHub: IlyaGusev/HeadlineCause## Training procedure\n\n* Notebook: HeadlineCause\n* Stand-alone script: URL## Eval results\n\nEvaluation results can be found in the arxiv paper.### BibTeX entry and citation info"
] |
text-classification | transformers |
# XLM-RoBERTa HeadlineCause Simple
## Model description
This model was trained to predict the presence of causal relations between two headlines. This model is for the Simple task with 3 possible labels: A causes B, B causes A, no causal relation. English and Russian languages are supported.
You can use the hosted inference API to infer a label for a headline pair. To do this, you should separate the headlines with the ```</s>``` token.
For example:
```
Песков опроверг свой перевод на удаленку</s>Дмитрий Песков перешел на удаленку
```
## Intended uses & limitations
#### How to use
```python
from tqdm.notebook import tqdm
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
def get_batch(data, batch_size):
start_index = 0
while start_index < len(data):
end_index = start_index + batch_size
batch = data[start_index:end_index]
yield batch
start_index = end_index
def pipe_predict(data, pipe, batch_size=64):
raw_preds = []
for batch in tqdm(get_batch(data, batch_size)):
raw_preds += pipe(batch)
return raw_preds
MODEL_NAME = TOKENIZER_NAME = "IlyaGusev/xlm_roberta_large_headline_cause_simple"
tokenizer = AutoTokenizer.from_pretrained(TOKENIZER_NAME, do_lower_case=False)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, framework="pt", return_all_scores=True)
texts = [
(
"Judge issues order to allow indoor worship in NC churches",
"Some local churches resume indoor services after judge lifted NC governorโs restriction"
),
(
"Gov. Kevin Stitt defends $2 million purchase of malaria drug touted by Trump",
"Oklahoma spent $2 million on malaria drug touted by Trump"
),
(
"ะะตัะบะพะฒ ะพะฟัะพะฒะตัะณ ัะฒะพะน ะฟะตัะตะฒะพะด ะฝะฐ ัะดะฐะปะตะฝะบั",
"ะะผะธััะธะน ะะตัะบะพะฒ ะฟะตัะตัะตะป ะฝะฐ ัะดะฐะปะตะฝะบั"
)
]
pipe_predict(texts, pipe)
```
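To check which class index corresponds to which of the 3 labels, the label mapping stored in the model configuration can be inspected (a standard `transformers` attribute, not specific to this card):

```python
# Mapping from class index to label name, as saved in the config
print(model.config.id2label)
```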
#### Limitations and bias
The models are intended to be used on news headlines. No other limitations are known.
## Training data
* HuggingFace dataset: [IlyaGusev/headline_cause](https://huggingface.co/datasets/IlyaGusev/headline_cause)
* GitHub: [IlyaGusev/HeadlineCause](https://github.com/IlyaGusev/HeadlineCause)
## Training procedure
* Notebook: [HeadlineCause](https://colab.research.google.com/drive/1NAnD0OJ0TnYCJRsHpYUyYkjr_yi8ObcA)
* Stand-alone script: [train.py](https://github.com/IlyaGusev/HeadlineCause/blob/main/headline_cause/train.py)
## Eval results
Evaluation results can be found in the [arxiv paper](https://arxiv.org/pdf/2108.12626.pdf).
### BibTeX entry and citation info
```bibtex
@misc{gusev2021headlinecause,
title={HeadlineCause: A Dataset of News Headlines for Detecting Causalities},
author={Ilya Gusev and Alexey Tikhonov},
year={2021},
eprint={2108.12626},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["ru", "en"], "license": "apache-2.0", "tags": ["xlm-roberta-large"], "datasets": ["IlyaGusev/headline_cause"], "widget": [{"text": "\u041f\u0435\u0441\u043a\u043e\u0432 \u043e\u043f\u0440\u043e\u0432\u0435\u0440\u0433 \u0441\u0432\u043e\u0439 \u043f\u0435\u0440\u0435\u0432\u043e\u0434 \u043d\u0430 \u0443\u0434\u0430\u043b\u0435\u043d\u043a\u0443</s>\u0414\u043c\u0438\u0442\u0440\u0438\u0439 \u041f\u0435\u0441\u043a\u043e\u0432 \u043f\u0435\u0440\u0435\u0448\u0435\u043b \u043d\u0430 \u0443\u0434\u0430\u043b\u0435\u043d\u043a\u0443"}]} | IlyaGusev/xlm_roberta_large_headline_cause_simple | null | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"xlm-roberta-large",
"ru",
"en",
"dataset:IlyaGusev/headline_cause",
"arxiv:2108.12626",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2108.12626"
] | [
"ru",
"en"
] | TAGS
#transformers #pytorch #xlm-roberta #text-classification #xlm-roberta-large #ru #en #dataset-IlyaGusev/headline_cause #arxiv-2108.12626 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# XLM-RoBERTa HeadlineCause Simple
## Model description
This model was trained to predict the presence of causal relations between two headlines. This model is for the Simple task with 3 possible labels: A causes B, B causes A, no causal relation. English and Russian languages are supported.
You can use the hosted inference API to infer a label for a headline pair. To do this, you should separate the headlines with the token.
For example:
## Intended uses & limitations
#### How to use
#### Limitations and bias
The models are intended to be used on news headlines. No other limitations are known.
## Training data
* HuggingFace dataset: IlyaGusev/headline_cause
* GitHub: IlyaGusev/HeadlineCause
## Training procedure
* Notebook: HeadlineCause
* Stand-alone script: URL
## Eval results
Evaluation results can be found in the arxiv paper.
### BibTeX entry and citation info
| [
"# XLM-RoBERTa HeadlineCause Simple",
"## Model description\n\nThis model was trained to predict the presence of causal relations between two headlines. This model is for the Simple task with 3 possible labels: A causes B, B causes A, no causal relation. English and Russian languages are supported.\n\nYou can use hosted inference API to infer a label for a headline pair. To do this, you shoud seperate headlines with token.\nFor example:",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\nThe models are intended to be used on news headlines. No other limitations are known.",
"## Training data\n\n* HuggingFace dataset: IlyaGusev/headline_cause\n* GitHub: IlyaGusev/HeadlineCause",
"## Training procedure\n\n* Notebook: HeadlineCause\n* Stand-alone script: URL",
"## Eval results\n\nEvaluation results can be found in the arxiv paper.",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #xlm-roberta #text-classification #xlm-roberta-large #ru #en #dataset-IlyaGusev/headline_cause #arxiv-2108.12626 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# XLM-RoBERTa HeadlineCause Simple",
"## Model description\n\nThis model was trained to predict the presence of causal relations between two headlines. This model is for the Simple task with 3 possible labels: A causes B, B causes A, no causal relation. English and Russian languages are supported.\n\nYou can use hosted inference API to infer a label for a headline pair. To do this, you shoud seperate headlines with token.\nFor example:",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\nThe models are intended to be used on news headlines. No other limitations are known.",
"## Training data\n\n* HuggingFace dataset: IlyaGusev/headline_cause\n* GitHub: IlyaGusev/HeadlineCause",
"## Training procedure\n\n* Notebook: HeadlineCause\n* Stand-alone script: URL",
"## Eval results\n\nEvaluation results can be found in the arxiv paper.",
"### BibTeX entry and citation info"
] | [
72,
9,
82,
6,
7,
24,
31,
18,
17,
10
] | [
"TAGS\n#transformers #pytorch #xlm-roberta #text-classification #xlm-roberta-large #ru #en #dataset-IlyaGusev/headline_cause #arxiv-2108.12626 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# XLM-RoBERTa HeadlineCause Simple## Model description\n\nThis model was trained to predict the presence of causal relations between two headlines. This model is for the Simple task with 3 possible labels: A causes B, B causes A, no causal relation. English and Russian languages are supported.\n\nYou can use hosted inference API to infer a label for a headline pair. To do this, you shoud seperate headlines with token.\nFor example:## Intended uses & limitations#### How to use#### Limitations and bias\n\nThe models are intended to be used on news headlines. No other limitations are known.## Training data\n\n* HuggingFace dataset: IlyaGusev/headline_cause\n* GitHub: IlyaGusev/HeadlineCause## Training procedure\n\n* Notebook: HeadlineCause\n* Stand-alone script: URL## Eval results\n\nEvaluation results can be found in the arxiv paper.### BibTeX entry and citation info"
] |
text-generation | transformers |
# Harry Botter Model | {"tags": ["conversational"]} | Ilyabarigou/Genesis-harrybotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Botter Model | [
"# Harry Botter Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Botter Model"
] | [
39,
5
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Harry Botter Model"
] |
automatic-speech-recognition | transformers | ## Evaluation on Common Voice FR Test
The script used for training and evaluation can be found here: https://github.com/irebai/wav2vec2
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import re
model_name = "Ilyes/wav2vec2-large-xlsr-53-french"
device = "cpu" # "cuda"
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)
ds = load_dataset("common_voice", "fr", split="test", cache_dir="./data/fr")
chars_to_ignore_regex = '[\,\?\.\!\;\:\"\“\%\‘\”\�\’\‘\”\“\„\…\·\!\ǃ\?\«\‹\»\›\“\”\\ʿ\ʾ\„\᾿\\|\.\,\;\:\*\—\–\─\―\_\/\:\ˈ\;\,\=\«\»\→]'
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("โ", "'")
return batch
resampler = torchaudio.transforms.Resample(48_000, 16_000)
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys()))
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
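The results section also reports CER, which the script above does not compute. A minimal addition, assuming a `datasets` version that ships the `cer` metric:

```python
cer = load_metric("cer")
print(cer.compute(predictions=result["predicted"], references=result["target"]))
```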
## Results
WER=12.82%
CER=4.40%
| {"language": "fr", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xlsr-53-French by Ilyes Rebai", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice fr", "type": "common_voice", "args": "fr"}, "metrics": [{"type": "wer", "value": 12.82, "name": "Test WER"}]}]}]} | Ilyes/wav2vec2-large-xlsr-53-french | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"fr",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fr"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #fr #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
| ## Evaluation on Common Voice FR Test
The script used for training and evaluation can be found here: URL
## Results
WER=12.82%
CER=4.40%
| [
"## Evaluation on Common Voice FR Test\nThe script used for training and evaluation can be found here: URL",
"## Results\n\nWER=12.82%\n\nCER=4.40%"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #fr #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"## Evaluation on Common Voice FR Test\nThe script used for training and evaluation can be found here: URL",
"## Results\n\nWER=12.82%\n\nCER=4.40%"
] | [
68,
22,
17
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #fr #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n## Evaluation on Common Voice FR Test\nThe script used for training and evaluation can be found here: URL## Results\n\nWER=12.82%\n\nCER=4.40%"
] |
automatic-speech-recognition | transformers | ## Evaluation on Common Voice FR Test
```python
import re
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
model_name = "Ilyes/wav2vec2-large-xlsr-53-french_punctuation"
device = "cuda"  # or "cpu"
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)
ds = load_dataset("common_voice", "fr", split="test")
chars_to_ignore_regex = '[\;\:\"\“\%\‘\”\�\’\‘\”\“\„\…\·\ǃ\«\‹\»\›\“\”\\ʿ\ʾ\„\᾿\\|\;\:\*\—\–\─\―\_\/\:\ˈ\;\=\«\»\→]'
def normalize_text(text):
text = text.lower().strip()
    text = re.sub('œ', 'oe', text)
    text = re.sub('æ', 'ae', text)
    text = re.sub("’|´|′|ʼ|‘|ʻ|`", "'", text)
text = re.sub("'+ ", " ", text)
text = re.sub(" '+", " ", text)
text = re.sub("'$", " ", text)
text = re.sub("' ", " ", text)
text = re.sub("โ|โ", "-", text)
text = re.sub(" -", "", text)
text = re.sub("- ", "", text)
text = re.sub(chars_to_ignore_regex, '', text)
return text
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = normalize_text(batch["sentence"])
return batch
resampler = torchaudio.transforms.Resample(48_000, 16_000)
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
batch["target"] = batch["sentence"]
# remove duplicates
batch["target"] = re.sub('\.+', '.', batch["target"])
batch["target"] = re.sub('\?+', '?', batch["target"])
batch["target"] = re.sub('!+', '!', batch["target"])
batch["target"] = re.sub(',+', ',', batch["target"])
return batch
result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys()))
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
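The final scores are reported both with and without punctuation, while the script above evaluates the punctuated text. A hedged sketch of the punctuation-stripped variant (the exact normalization behind the reported numbers may differ):

```python
def strip_punct(text):
    # drop the sentence punctuation this model predicts before re-scoring
    return re.sub(r"[.,?!]", "", text).strip()

predictions_np = [strip_punct(p) for p in result["predicted"]]
references_np = [strip_punct(r) for r in result["target"]]
print(wer.compute(predictions=predictions_np, references=references_np))
```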
## Some results
| Reference | Prediction |
| ------------- | ------------- |
| il vécut à new york et y enseigna une grande partie de sa vie. | il a vécu à new york et y enseigna une grande partie de sa vie. |
| au classement par nations, l'allemagne est la tenante du titre. | au classement der nation l'allemagne est la tenante du titre. |
| voici un petit calcul pour fixer les idées. | voici un petit calcul pour fixer les idées. |
| oh! tu dois être beau avec | oh! tu dois être beau avec. |
| babochet vous le voulez? | baboche, vous le voulez? |
| la commission est, par conséquent, défavorable à cet amendement. | la commission est, par conséquent, défavorable à cet amendement. |
All the references and predictions of the test corpus are already available in this repository.
## Results
| Text | WER | CER |
| ------------- | ------------- | ------------- |
| with punctuation | 21.47% | 7.21% |
| without punctuation | 19.71% | 6.91% |
| {"language": "fr", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning"], "datasets": ["common_voice"]} | Ilyes/wav2vec2-large-xlsr-53-french_punctuation | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning",
"fr",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fr"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning #fr #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
| Evaluation on Common Voice FR Test
----------------------------------
Some results
------------
All the references and predictions of the test corpus are already available in this repository.
Results
-------
text + punctuation
WER=21.47% CER=7.21%
text (without punctuation)
WER=19.71% CER=6.91%
| [] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning #fr #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n"
] | [
60
] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning #fr #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
# Albert DialoGPT Model | {"tags": ["conversational"]} | ImAPizza/DialoGPT-medium-albert | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Albert DialoGPT Model | [
"# Albert DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Albert DialoGPT Model"
] | [
39,
6
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Albert DialoGPT Model"
] |
text-generation | transformers |
# Alberttwo DialoGPT Model | {"tags": ["conversational"]} | ImAPizza/DialoGPT-medium-alberttwo | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Alberttwo DialoGPT Model | [
"# Alberttwo DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Alberttwo DialoGPT Model"
] | [
39,
8
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Alberttwo DialoGPT Model"
] |
fill-mask | transformers | ## Usage:
```
from sentence_transformers import models
from sentence_transformers import SentenceTransformer
word_embedding_model = models.Transformer('Cro-CoV-cseBERT')  # local path, or 'InfoCoV/Cro-CoV-cseBERT' from the Hub
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(),
                               pooling_mode_mean_tokens=True,
                               pooling_mode_cls_token=False,
                               pooling_mode_max_tokens=False)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model], device='cpu')  # use device='cuda' for GPU
texts = ["Prva rečenica o pandemiji.", "Druga rečenica o pandemiji."]  # example inputs
texts_emb = model.encode(texts)
```
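As a quick sanity check, the resulting embeddings can be compared with cosine similarity. This is only a usage sketch; `texts_emb` is assumed to come from `model.encode(texts)` above:

```python
from sentence_transformers import util

# similarity between the first two encoded sentences
cos = util.pytorch_cos_sim(texts_emb[0], texts_emb[1])
print(f"cosine similarity: {cos.item():.3f}")
```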
## Datasets:
https://github.com/InfoCoV/InfoCoV
## Paper:
Please cite https://www.mdpi.com/2076-3417/11/21/10442 | {} | InfoCoV/Cro-CoV-cseBERT | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| ## Usage:
## Datasets:
URL
## Paper:
Please cite URL | [
"## Usage:",
"## Datasets:\nURL",
"## Paper:\nPlease cite URL"
] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n",
"## Usage:",
"## Datasets:\nURL",
"## Paper:\nPlease cite URL"
] | [
28,
4,
8,
8
] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n## Usage:## Datasets:\nURL## Paper:\nPlease cite URL"
] |
text-generation | transformers |
# Inkdrop/gpt2-property-classifier
| {"language": ["de"], "license": "mit", "tags": ["text-generation"], "widget": [{"text": "\"Ideal als kleine Aufmerksamkeit mit emotionalem Wert Neue Tuchmasken-Referenz \"Verw\u00f6hnmoment\u00bb exklusiv im Set Langanhaltende Feuchtigkeit und Erholung Mit strahlendem Teint Sofort-Effekt Naturnahe Kosmetik Inhalt: Badekristalle Kleiner Gruss von Herzen 60 g, Tuchmaske Verw\u00f6hnmoment 1x\" is a", "example_title": "Bullet point classification"}]} | Inkdrop/gpt2-property-classifier | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"de"
] | TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #de #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Inkdrop/gpt2-property-classifier
| [
"# Inkdrop/gpt2-property-classifier"
] | [
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #de #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Inkdrop/gpt2-property-classifier"
] | [
45,
12
] | [
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #de #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Inkdrop/gpt2-property-classifier"
] |
null | null | # Welcome to my model | {"tags": ["chemistry", "climate"]} | Intae/mymodel | null | [
"chemistry",
"climate",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#chemistry #climate #region-us
| # Welcome to my model | [
"# Welcome to my model"
] | [
"TAGS\n#chemistry #climate #region-us \n",
"# Welcome to my model"
] | [
9,
5
] | [
"TAGS\n#chemistry #climate #region-us \n# Welcome to my model"
] |
fill-mask | transformers |
# Sparse BERT base model fine tuned to MNLI without classifier layer (uncased)
Fine tuned sparse BERT base to MNLI (GLUE Benchmark) task from [bert-base-uncased-sparse-70-unstructured](https://huggingface.co/Intel/bert-base-uncased-sparse-70-unstructured).
<br>
This model does not include a classifier layer, which makes it easier to load for training on other downstream tasks.
In all the other layers this model is similar to [bert-base-uncased-mnli-sparse-70-unstructured](https://huggingface.co/Intel/bert-base-uncased-mnli-sparse-70-unstructured).
<br><br>
Note: This model requires `transformers==2.10.0`
## Evaluation Results
Matched: 82.5%
Mismatched: 83.3%
This model can be further fine-tuned to other tasks and achieve the following evaluation results:
| Task | QQP (Acc/F1) | QNLI (Acc) | SST-2 (Acc) | STS-B (Pears/Spear) | SQuADv1.1 (Acc/F1) |
|------|--------------|------------|-------------|---------------------|--------------------|
| | 90.2/86.7 | 90.3 | 91.5 | 88.9/88.6 | 80.5/88.2 |
| {"language": "en"} | Intel/bert-base-uncased-mnli-sparse-70-unstructured-no-classifier | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #bert #fill-mask #en #autotrain_compatible #endpoints_compatible #region-us
| Sparse BERT base model fine tuned to MNLI without classifier layer (uncased)
============================================================================
Fine tuned sparse BERT base to MNLI (GLUE Benchmark) task from bert-base-uncased-sparse-70-unstructured.
This model does not include a classifier layer, which makes it easier to load for training on other downstream tasks.
In all the other layers this model is similar to bert-base-uncased-mnli-sparse-70-unstructured.
Note: This model requires 'transformers==2.10.0'
Evaluation Results
------------------
```
Matched: 82.5%
Mismatched: 83.3%
```
This model can be further fine-tuned to other tasks and achieve the following evaluation results:
| [] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #en #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
30
] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #en #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification | transformers |
# Sparse BERT base model fine tuned to MNLI (uncased)
Fine tuned sparse BERT base to MNLI (GLUE Benchmark) task from [bert-base-uncased-sparse-70-unstructured](https://huggingface.co/Intel/bert-base-uncased-sparse-70-unstructured).
<br><br>
Note: This model requires `transformers==2.10.0`
## Evaluation Results
Matched: 82.5%
Mismatched: 83.3%
This model can be further fine-tuned to other tasks and achieve the following evaluation results:
| Task | QQP (Acc/F1) | QNLI (Acc) | SST-2 (Acc) | STS-B (Pears/Spear) | SQuADv1.1 (Acc/F1) |
|------|--------------|------------|-------------|---------------------|--------------------|
| | 90.2/86.7 | 90.3 | 91.5 | 88.9/88.6 | 80.5/88.2 |
| {"language": "en"} | Intel/bert-base-uncased-mnli-sparse-70-unstructured | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #bert #text-classification #en #autotrain_compatible #endpoints_compatible #region-us
| Sparse BERT base model fine tuned to MNLI (uncased)
===================================================
Fine tuned sparse BERT base to MNLI (GLUE Benchmark) task from bert-base-uncased-sparse-70-unstructured.
Note: This model requires 'transformers==2.10.0'
Evaluation Results
------------------
```
Matched: 82.5%
Mismatched: 83.3%
```
This model can be further fine-tuned to other tasks and achieve the following evaluation results:
| [] | [
"TAGS\n#transformers #pytorch #bert #text-classification #en #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
30
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #en #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null | transformers |
# Sparse BERT base model (uncased)
Pretrained model pruned to 1:2 structured sparsity.
The model is a pruned version of the [BERT base model](https://huggingface.co/bert-base-uncased).
## Intended Use
The model can be used for fine-tuning to downstream tasks with sparsity already embedded in the model.
To keep the sparsity a mask should be added to each sparse weight blocking the optimizer from updating the zeros.
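A minimal PyTorch sketch of such masking, assuming a standard fine-tuning loop (the hook-based approach below is illustrative, not the exact mechanism used to train this model):

```python
import torch

def freeze_pruned_weights(model):
    """Zero the gradient wherever a weight is already zero, so the
    optimizer never updates pruned positions and the 1:2 pattern survives."""
    for name, param in model.named_parameters():
        if param.dim() >= 2:  # weight matrices only; skip biases and norms
            mask = (param.detach() != 0).float()
            param.register_hook(lambda grad, m=mask: grad * m)
```

Note that optimizers with weight decay can still move masked weights away from zero, so weight decay should also be disabled for the pruned entries.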
## Evaluation Results
We get the following results on the tasks' development sets; all results are the mean of 5 models trained with different seeds:
| Task | MNLI-m (Acc) | MNLI-mm (Acc) | QQP (Acc/F1) | QNLI (Acc) | SST-2 (Acc) | STS-B (Pears/Spear) | SQuADv1.1 (Acc/F1) |
|------|--------------|---------------|--------------|------------|-------------|---------------------|--------------------|
| | 83.3 | 83.9 | 90.8/87.6 | 90.4 | 91.3 | 88.8/88.3 | 80.5/88.2 | | {"language": "en"} | Intel/bert-base-uncased-sparse-1_2 | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"en",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #en #endpoints_compatible #region-us
| Sparse BERT base model (uncased)
================================
Pretrained model pruned to 1:2 structured sparsity.
The model is a pruned version of the BERT base model.
Intended Use
------------
The model can be used for fine-tuning to downstream tasks with sparsity already embedded in the model.
To keep the sparsity a mask should be added to each sparse weight blocking the optimizer from updating the zeros.
Evaluation Results
------------------
We get the following results on the tasks' development sets; all results are the mean of 5 models trained with different seeds:
| [] | [
"TAGS\n#transformers #pytorch #bert #pretraining #en #endpoints_compatible #region-us \n"
] | [
25
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #en #endpoints_compatible #region-us \n"
] |
fill-mask | transformers |
# Sparse BERT base model (uncased)
Pretrained model pruned to 70% sparsity.
The model is a pruned version of the [BERT base model](https://huggingface.co/bert-base-uncased).
## Intended Use
The model can be used for fine-tuning to downstream tasks with sparsity already embedded in the model.
To keep the sparsity a mask should be added to each sparse weight blocking the optimizer from updating the zeros. | {"language": "en"} | Intel/bert-base-uncased-sparse-70-unstructured | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #bert #fill-mask #en #autotrain_compatible #endpoints_compatible #region-us
|
# Sparse BERT base model (uncased)
Pretrained model pruned to 70% sparsity.
The model is a pruned version of the BERT base model.
## Intended Use
The model can be used for fine-tuning to downstream tasks with sparsity already embedded in the model.
To keep the sparsity a mask should be added to each sparse weight blocking the optimizer from updating the zeros. | [
"# Sparse BERT base model (uncased)\n\nPretrained model pruned to 70% sparsity.\nThe model is a pruned version of the BERT base model.",
"## Intended Use\n\nThe model can be used for fine-tuning to downstream tasks with sparsity already embeded to the model.\nTo keep the sparsity a mask should be added to each sparse weight blocking the optimizer from updating the zeros."
] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #en #autotrain_compatible #endpoints_compatible #region-us \n",
"# Sparse BERT base model (uncased)\n\nPretrained model pruned to 70% sparsity.\nThe model is a pruned version of the BERT base model.",
"## Intended Use\n\nThe model can be used for fine-tuning to downstream tasks with sparsity already embeded to the model.\nTo keep the sparsity a mask should be added to each sparse weight blocking the optimizer from updating the zeros."
] | [
30,
37,
55
] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #en #autotrain_compatible #endpoints_compatible #region-us \n# Sparse BERT base model (uncased)\n\nPretrained model pruned to 70% sparsity.\nThe model is a pruned version of the BERT base model.## Intended Use\n\nThe model can be used for fine-tuning to downstream tasks with sparsity already embeded to the model.\nTo keep the sparsity a mask should be added to each sparse weight blocking the optimizer from updating the zeros."
] |
fill-mask | transformers | ## Model Details: 85% Sparse BERT-Base (uncased) Prune Once for All
This model is a sparse pre-trained model that can be fine-tuned for a wide range of language tasks. The process of weight pruning is forcing some of the weights of the neural network to zero. Setting some of the weights to zero results in sparser matrices. Updating neural network weights does involve matrix multiplication, and if we can keep the matrices sparse while retaining enough important information, we can reduce the overall computational overhead. The term "sparse" in the title of the model indicates a ratio of sparsity in the weights; for more details, you can read [Zafrir et al. (2021)](https://arxiv.org/abs/2111.05754).
Visualization of Prune Once for All method from [Zafrir et al. (2021)](https://arxiv.org/abs/2111.05754):

| Model Detail | Description |
| ----------- | ----------- |
| Model Authors - Company | Intel |
| Date | September 30, 2021 |
| Version | 1 |
| Type | NLP - General sparse language model |
| Architecture | "The method consists of two steps, teacher preparation and student pruning. The sparse pre-trained model we trained is the model we use for transfer learning while maintaining its sparsity pattern. We call the method Prune Once for All since we show how to fine-tune the sparse pre-trained models for several language tasks while we prune the pre-trained model only once." [(Zafrir et al., 2021)](https://arxiv.org/abs/2111.05754) |
| Paper or Other Resources | [Zafrir et al. (2021)](https://arxiv.org/abs/2111.05754); [GitHub Repo](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all) |
| License | Apache 2.0 |
| Questions or Comments | [Community Tab](https://huggingface.co/Intel/bert-base-uncased-sparse-85-unstructured-pruneofa/discussions) and [Intel Developers Discord](https://discord.gg/rv2Gp55UJQ)|
| Intended Use | Description |
| ----------- | ----------- |
| Primary intended uses | This is a general sparse language model; in its current form, it is not ready for downstream prediction tasks, but it can be fine-tuned for several language tasks including (but not limited to) question-answering, genre natural language inference, and sentiment classification. |
| Primary intended users | Anyone who needs an efficient general language model for other downstream tasks. |
| Out-of-scope uses | The model should not be used to intentionally create hostile or alienating environments for people.|
### How to use
Here is an example of how to import this model in Python:
```python
import transformers
model = transformers.AutoModelForQuestionAnswering.from_pretrained('Intel/bert-base-uncased-sparse-85-unstructured-pruneofa')
```
For more code examples, refer to the [GitHub Repo](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all).
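As a sanity check, the claimed sparsity ratio can be measured directly on the downloaded weights. This is a sketch that assumes only the 2-D weight matrices were pruned, so the measured number may deviate slightly from 85%:

```python
import torch
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained(
    "Intel/bert-base-uncased-sparse-85-unstructured-pruneofa")
total, zeros = 0, 0
for name, p in model.named_parameters():
    if p.dim() == 2:  # linear and embedding weight matrices
        total += p.numel()
        zeros += (p == 0).sum().item()
print(f"weight sparsity: {zeros / total:.2%}")
```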
### Metrics (Model Performance):
| Model | Model Size | SQuADv1.1 (EM/F1) | MNLI-m (Acc) | MNLI-mm (Acc) | QQP (Acc/F1) | QNLI (Acc) | SST-2 (Acc) |
|-------------------------------|:----------:|:-----------------:|:------------:|:-------------:|:------------:|:----------:|:-----------:|
| [80% Sparse BERT-Base uncased fine-tuned on SQuAD1.1](https://huggingface.co/Intel/bert-base-uncased-squadv1.1-sparse-80-1x4-block-pruneofa) | - | 81.29/88.47 | - | - | - | - | - |
| [**85% Sparse BERT-Base uncased**](https://huggingface.co/Intel/bert-base-uncased-sparse-85-unstructured-pruneofa) | Medium | 81.10/88.42 | 82.71 | 83.67 | 91.15/88.00 | 90.34 | 91.46 |
| [90% Sparse BERT-Base uncased](https://huggingface.co/Intel/bert-base-uncased-sparse-90-unstructured-pruneofa) | Medium | 79.83/87.25 | 81.45 | 82.43 | 90.93/87.72 | 89.07 | 90.88 |
| [90% Sparse BERT-Large uncased](https://huggingface.co/Intel/bert-large-uncased-sparse-90-unstructured-pruneofa) | Large | 83.35/90.20 | 83.74 | 84.20 | 91.48/88.43 | 91.39 | 92.95 |
| [85% Sparse DistilBERT uncased](https://huggingface.co/Intel/distilbert-base-uncased-sparse-85-unstructured-pruneofa) | Small | 78.10/85.82 | 81.35 | 82.03 | 90.29/86.97 | 88.31 | 90.60 |
| [90% Sparse DistilBERT uncased](https://huggingface.co/Intel/distilbert-base-uncased-sparse-90-unstructured-pruneofa) | Small | 76.91/84.82 | 80.68 | 81.47 | 90.05/86.67 | 87.66 | 90.02 |
All the results are the mean of two separate experiments with the same hyper-parameters and different seeds.
| Training and Evaluation Data | Description |
| ----------- | ----------- |
| Datasets | [English Wikipedia Dataset](https://huggingface.co/datasets/wikipedia) (2500M words). |
| Motivation | To build an efficient and accurate base model for several downstream language tasks. |
| Preprocessing | "We use the English Wikipedia dataset (2500M words) for training the models on the pre-training task. We split the data into train (95%) and validation (5%) sets. Both sets are preprocessed as described in the modelsโ original papers ([Devlin et al., 2019](https://arxiv.org/abs/1810.04805), [Sanh et al., 2019](https://arxiv.org/abs/1910.01108)). We process the data to use the maximum sequence length allowed by the models, however, we allow shorter sequences at a probability of 0:1." |
| Ethical Considerations | Description |
| ----------- | ----------- |
| Data | The training data come from Wikipedia articles |
| Human life | The model is not intended to inform decisions central to human life or flourishing. It is an aggregated set of labelled Wikipedia articles. |
| Mitigations | No additional risk mitigation strategies were considered during model development. |
| Risks and harms | Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al., 2021](https://aclanthology.org/2021.acl-long.330.pdf), and [Bender et al., 2021](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. Beyond this, the extent of the risks involved by using the model remain unknown.|
| Use cases | - |
| Caveats and Recommendations |
| ----------- |
| Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. There are no additional caveats or recommendations for this model. |
### BibTeX entry and citation info
```bibtex
@article{zafrir2021prune,
title={Prune Once for All: Sparse Pre-Trained Language Models},
author={Zafrir, Ofir and Larey, Ariel and Boudoukh, Guy and Shen, Haihao and Wasserblat, Moshe},
journal={arXiv preprint arXiv:2111.05754},
year={2021}
}
``` | {"language": "en", "license": "apache-2.0", "tags": ["fill-mask"], "datasets": ["wikipedia", "bookcorpus"]} | Intel/bert-base-uncased-sparse-85-unstructured-pruneofa | null | [
"transformers",
"pytorch",
"tf",
"bert",
"pretraining",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:2111.05754",
"arxiv:1810.04805",
"arxiv:1910.01108",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2111.05754",
"1810.04805",
"1910.01108"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #bert #pretraining #fill-mask #en #dataset-wikipedia #dataset-bookcorpus #arxiv-2111.05754 #arxiv-1810.04805 #arxiv-1910.01108 #license-apache-2.0 #endpoints_compatible #region-us
| Model Details: 85% Sparse BERT-Base (uncased) Prune Once for All
----------------------------------------------------------------
This model is a sparse pre-trained model that can be fine-tuned for a wide range of language tasks. The process of weight pruning is forcing some of the weights of the neural network to zero. Setting some of the weights to zero results in sparser matrices. Updating neural network weights does involve matrix multiplication, and if we can keep the matrices sparse while retaining enough important information, we can reduce the overall computational overhead. The term "sparse" in the title of the model indicates a ratio of sparsity in the weights; for more details, you can read Zafrir et al. (2021).
Visualization of Prune Once for All method from Zafrir et al. (2021):
!Zafrir2021\_Fig1.png
### How to use
Here is an example of how to import this model in Python:
For more code examples, refer to the GitHub Repo.
### Metrics (Model Performance):
All the results are the mean of two separate experiments with the same hyper-parameters and different seeds.
### BibTeX entry and citation info
| [
"### How to use\n\n\nHere is an example of how to import this model in Python:\n\n\nFor more code examples, refer to the GitHub Repo.",
"### Metrics (Model Performance):\n\n\n\nAll the results are the mean of two seperate experiments with the same hyper-parameters and different seeds.",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #bert #pretraining #fill-mask #en #dataset-wikipedia #dataset-bookcorpus #arxiv-2111.05754 #arxiv-1810.04805 #arxiv-1910.01108 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### How to use\n\n\nHere is an example of how to import this model in Python:\n\n\nFor more code examples, refer to the GitHub Repo.",
"### Metrics (Model Performance):\n\n\n\nAll the results are the mean of two seperate experiments with the same hyper-parameters and different seeds.",
"### BibTeX entry and citation info"
] | [
83,
33,
31,
10
] | [
"TAGS\n#transformers #pytorch #tf #bert #pretraining #fill-mask #en #dataset-wikipedia #dataset-bookcorpus #arxiv-2111.05754 #arxiv-1810.04805 #arxiv-1910.01108 #license-apache-2.0 #endpoints_compatible #region-us \n### How to use\n\n\nHere is an example of how to import this model in Python:\n\n\nFor more code examples, refer to the GitHub Repo.### Metrics (Model Performance):\n\n\n\nAll the results are the mean of two seperate experiments with the same hyper-parameters and different seeds.### BibTeX entry and citation info"
] |
fill-mask | transformers | ## Model Details: 90% Sparse BERT-Base (uncased) Prune Once for All
This model is a sparse pre-trained model that can be fine-tuned for a wide range of language tasks. The process of weight pruning is forcing some of the weights of the neural network to zero. Setting some of the weights to zero results in sparser matrices. Updating neural network weights does involve matrix multiplication, and if we can keep the matrices sparse while retaining enough important information, we can reduce the overall computational overhead. The term "sparse" in the title of the model indicates a ratio of sparsity in the weights; for more details, you can read [Zafrir et al. (2021)](https://arxiv.org/abs/2111.05754).
Visualization of Prune Once for All method from [Zafrir et al. (2021)](https://arxiv.org/abs/2111.05754):

| Model Detail | Description |
| ----------- | ----------- |
| Model Authors - Company | Intel |
| Date | September 30, 2021 |
| Version | 1 |
| Type | NLP - General sparse language model |
| Architecture | "The method consists of two steps, teacher preparation and student pruning. The sparse pre-trained model we trained is the model we use for transfer learning while maintaining its sparsity pattern. We call the method Prune Once for All since we show how to fine-tune the sparse pre-trained models for several language tasks while we prune the pre-trained model only once." [(Zafrir et al., 2021)](https://arxiv.org/abs/2111.05754) |
| Paper or Other Resources | [Zafrir et al. (2021)](https://arxiv.org/abs/2111.05754); [GitHub Repo](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all) |
| License | Apache 2.0 |
| Questions or Comments | [Community Tab](https://huggingface.co/Intel/bert-base-uncased-sparse-90-unstructured-pruneofa/discussions) and [Intel Developers Discord](https://discord.gg/rv2Gp55UJQ)|
| Intended Use | Description |
| ----------- | ----------- |
| Primary intended uses | This is a general sparse language model; in its current form, it is not ready for downstream prediction tasks, but it can be fine-tuned for several language tasks including (but not limited to) question-answering, genre natural language inference, and sentiment classification. |
| Primary intended users | Anyone who needs an efficient general language model for other downstream tasks. |
| Out-of-scope uses | The model should not be used to intentionally create hostile or alienating environments for people.|
### How to use
Here is an example of how to import this model in Python:
```python
import transformers
model = transformers.AutoModelForQuestionAnswering.from_pretrained('Intel/bert-base-uncased-sparse-90-unstructured-pruneofa')
```
For more code examples, refer to the [GitHub Repo](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all).
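Since this checkpoint carries a masked-language-model head, it can also be exercised directly with the fill-mask pipeline. This is a usage sketch with an arbitrary example sentence:

```python
from transformers import pipeline

unmasker = pipeline(
    "fill-mask",
    model="Intel/bert-base-uncased-sparse-90-unstructured-pruneofa")
for pred in unmasker("Paris is the [MASK] of France."):
    print(pred["token_str"], round(pred["score"], 3))
```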
### Metrics (Model Performance):
| Model | Model Size | SQuADv1.1 (EM/F1) | MNLI-m (Acc) | MNLI-mm (Acc) | QQP (Acc/F1) | QNLI (Acc) | SST-2 (Acc) |
|-------------------------------|:----------:|:-----------------:|:------------:|:-------------:|:------------:|:----------:|:-----------:|
| [80% Sparse BERT-Base uncased fine-tuned on SQuAD1.1](https://huggingface.co/Intel/bert-base-uncased-squadv1.1-sparse-80-1x4-block-pruneofa) | - | 81.29/88.47 | - | - | - | - | - |
| [85% Sparse BERT-Base uncased](https://huggingface.co/Intel/bert-base-uncased-sparse-85-unstructured-pruneofa) | Medium | 81.10/88.42 | 82.71 | 83.67 | 91.15/88.00 | 90.34 | 91.46 |
| [**90% Sparse BERT-Base uncased**](https://huggingface.co/Intel/bert-base-uncased-sparse-90-unstructured-pruneofa) | Medium | 79.83/87.25 | 81.45 | 82.43 | 90.93/87.72 | 89.07 | 90.88 |
| [90% Sparse BERT-Large uncased](https://huggingface.co/Intel/bert-large-uncased-sparse-90-unstructured-pruneofa) | Large | 83.35/90.20 | 83.74 | 84.20 | 91.48/88.43 | 91.39 | 92.95 |
| [85% Sparse DistilBERT uncased](https://huggingface.co/Intel/distilbert-base-uncased-sparse-85-unstructured-pruneofa) | Small | 78.10/85.82 | 81.35 | 82.03 | 90.29/86.97 | 88.31 | 90.60 |
| [90% Sparse DistilBERT uncased](https://huggingface.co/Intel/distilbert-base-uncased-sparse-90-unstructured-pruneofa) | Small | 76.91/84.82 | 80.68 | 81.47 | 90.05/86.67 | 87.66 | 90.02 |
All the results are the mean of two separate experiments with the same hyper-parameters and different seeds.
| Training and Evaluation Data | Description |
| ----------- | ----------- |
| Datasets | [English Wikipedia Dataset](https://huggingface.co/datasets/wikipedia) (2500M words). |
| Motivation | To build an efficient and accurate base model for several downstream language tasks. |
| Preprocessing | "We use the English Wikipedia dataset (2500M words) for training the models on the pre-training task. We split the data into train (95%) and validation (5%) sets. Both sets are preprocessed as described in the modelsโ original papers ([Devlin et al., 2019](https://arxiv.org/abs/1810.04805), [Sanh et al., 2019](https://arxiv.org/abs/1910.01108)). We process the data to use the maximum sequence length allowed by the models, however, we allow shorter sequences at a probability of 0:1." |
| Ethical Considerations | Description |
| ----------- | ----------- |
| Data | The training data come from Wikipedia articles |
| Human life | The model is not intended to inform decisions central to human life or flourishing. It is an aggregated set of labelled Wikipedia articles. |
| Mitigations | No additional risk mitigation strategies were considered during model development. |
| Risks and harms | Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al., 2021](https://aclanthology.org/2021.acl-long.330.pdf), and [Bender et al., 2021](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. Beyond this, the extent of the risks involved by using the model remain unknown.|
| Use cases | - |
| Caveats and Recommendations |
| ----------- |
| Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. There are no additional caveats or recommendations for this model. |
### BibTeX entry and citation info
```bibtex
@article{zafrir2021prune,
title={Prune Once for All: Sparse Pre-Trained Language Models},
author={Zafrir, Ofir and Larey, Ariel and Boudoukh, Guy and Shen, Haihao and Wasserblat, Moshe},
journal={arXiv preprint arXiv:2111.05754},
year={2021}
}
```
| {"language": "en", "license": "apache-2.0", "tags": ["fill-mask", "bert"], "datasets": ["wikipedia", "bookcorpus"]} | Intel/bert-base-uncased-sparse-90-unstructured-pruneofa | null | [
"transformers",
"pytorch",
"tf",
"bert",
"pretraining",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:2111.05754",
"arxiv:1810.04805",
"arxiv:1910.01108",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2111.05754",
"1810.04805",
"1910.01108"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #bert #pretraining #fill-mask #en #dataset-wikipedia #dataset-bookcorpus #arxiv-2111.05754 #arxiv-1810.04805 #arxiv-1910.01108 #license-apache-2.0 #endpoints_compatible #region-us
| Model Details: 90% Sparse BERT-Base (uncased) Prune Once for All
----------------------------------------------------------------
This model is a sparse pre-trained model that can be fine-tuned for a wide range of language tasks. The process of weight pruning is forcing some of the weights of the neural network to zero. Setting some of the weights to zero results in sparser matrices. Updating neural network weights does involve matrix multiplication, and if we can keep the matrices sparse while retaining enough important information, we can reduce the overall computational overhead. The term "sparse" in the title of the model indicates a ratio of sparsity in the weights; for more details, you can read Zafrir et al. (2021).
Visualization of Prune Once for All method from Zafrir et al. (2021):
!Zafrir2021\_Fig1.png
### How to use
Here is an example of how to import this model in Python:
For more code examples, refer to the GitHub Repo.
### Metrics (Model Performance):
All the results are the mean of two separate experiments with the same hyper-parameters and different seeds.
### BibTeX entry and citation info
| [
"### How to use\n\n\nHere is an example of how to import this model in Python:\n\n\nFor more code examples, refer to the GitHub Repo.",
"### Metrics (Model Performance):\n\n\n\nAll the results are the mean of two seperate experiments with the same hyper-parameters and different seeds.",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #bert #pretraining #fill-mask #en #dataset-wikipedia #dataset-bookcorpus #arxiv-2111.05754 #arxiv-1810.04805 #arxiv-1910.01108 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### How to use\n\n\nHere is an example of how to import this model in Python:\n\n\nFor more code examples, refer to the GitHub Repo.",
"### Metrics (Model Performance):\n\n\n\nAll the results are the mean of two seperate experiments with the same hyper-parameters and different seeds.",
"### BibTeX entry and citation info"
] | [
83,
33,
31,
10
] | [
"TAGS\n#transformers #pytorch #tf #bert #pretraining #fill-mask #en #dataset-wikipedia #dataset-bookcorpus #arxiv-2111.05754 #arxiv-1810.04805 #arxiv-1910.01108 #license-apache-2.0 #endpoints_compatible #region-us \n### How to use\n\n\nHere is an example of how to import this model in Python:\n\n\nFor more code examples, refer to the GitHub Repo.### Metrics (Model Performance):\n\n\n\nAll the results are the mean of two seperate experiments with the same hyper-parameters and different seeds.### BibTeX entry and citation info"
] |
question-answering | transformers | ## Model Details: 80% 1x4 Block Sparse BERT-Base (uncased) Fine Tuned on SQuADv1.1
This model has been fine-tuned for the NLP task of question answering, trained on the SQuAD 1.1 dataset. It is a result of fine-tuning a Prune Once For All 80% 1x4 block sparse pre-trained BERT-Base model, combined with knowledge distillation.
> We present a new method for training sparse pre-trained Transformer language models by integrating weight pruning and model distillation. These sparse pre-trained models can be used to transfer learning for a wide range of tasks while maintaining their sparsity pattern. We show how the compressed sparse pre-trained models we trained transfer their knowledge to five different downstream natural language tasks with minimal accuracy loss.
| Model Detail | Description |
| ----------- | ----------- |
| Model Authors - Company | Intel |
| Model Card Authors | Intel |
| Date | February 27, 2022 |
| Version | 1 |
| Type | NLP - Question Answering |
| Architecture | "The method consists of two steps, teacher preparation and student pruning. The sparse pre-trained model we trained is the model we use for transfer learning while maintaining its sparsity pattern. We call the method Prune Once for All since we show how to fine-tune the sparse pre-trained models for several language tasks while we prune the pre-trained model only once." [(Zafrir et al., 2021)](https://arxiv.org/abs/2111.05754) |
| Paper or Other Resources | [Paper: Zafrir et al. (2021)](https://arxiv.org/abs/2111.05754); [GitHub Repo](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all) |
| License | Apache 2.0 |
| Questions or Comments | [Community Tab](https://huggingface.co/Intel/bert-base-uncased-squadv1.1-sparse-80-1x4-block-pruneofa/discussions) and [Intel Developers Discord](https://discord.gg/rv2Gp55UJQ)|
Visualization of Prune Once for All method from [Zafrir et al. (2021)](https://arxiv.org/abs/2111.05754). More details can be found in their paper.

| Intended Use | Description |
| ----------- | ----------- |
| Primary intended uses | You can use the model for the NLP task of question answering: given a corpus of text, you can ask it a question about that text, and it will find the answer in the text. |
| Primary intended users | Anyone doing question answering |
| Out-of-scope uses | The model should not be used to intentionally create hostile or alienating environments for people.|
### How to use
Here is how to import this model in Python:
```python
import transformers
import model_compression_research as model_comp
model = transformers.AutoModelForQuestionAnswering.from_pretrained('Intel/bert-base-uncased-squadv1.1-sparse-80-1x4-block-pruneofa')
scheduler = model_comp.pruning_scheduler_factory(model, '../../examples/transformers/question-answering/config/lock_config.json')  # use the alias from the import above
# Train your model...
scheduler.remove_pruning()
```
For more code examples, refer to the [GitHub Repo](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all).
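For inference with the fine-tuned checkpoint, the standard question-answering pipeline applies. A brief sketch, where the question and context are made-up examples:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Intel/bert-base-uncased-squadv1.1-sparse-80-1x4-block-pruneofa")
out = qa(question="What does weight pruning do?",
         context="Weight pruning forces some of the weights of the neural network to zero.")
print(out["answer"], round(out["score"], 3))
```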
### Metrics (Model Performance):
| Model | Model Size | SQuADv1.1 (EM/F1) | MNLI-m (Acc) | MNLI-mm (Acc) | QQP (Acc/F1) | QNLI (Acc) | SST-2 (Acc) |
|-------------------------------|:----------:|:-----------------:|:------------:|:-------------:|:------------:|:----------:|:-----------:|
| [**80% 1x4 Block Sparse BERT-Base uncased**](https://huggingface.co/Intel/bert-base-uncased-squadv1.1-sparse-80-1x4-block-pruneofa) | - | 81.29/88.47 | - | - | - | - | - |
| [85% Sparse BERT-Base uncased](https://huggingface.co/Intel/bert-base-uncased-sparse-85-unstructured-pruneofa) | Medium | 81.10/88.42 | 82.71 | 83.67 | 91.15/88.00 | 90.34 | 91.46 |
| [90% Sparse BERT-Base uncased](https://huggingface.co/Intel/bert-base-uncased-sparse-90-unstructured-pruneofa) | Medium | 79.83/87.25 | 81.45 | 82.43 | 90.93/87.72 | 89.07 | 90.88 |
| [90% Sparse BERT-Large uncased](https://huggingface.co/Intel/bert-large-uncased-sparse-90-unstructured-pruneofa) | Large | 83.35/90.20 | 83.74 | 84.20 | 91.48/88.43 | 91.39 | 92.95 |
| [85% Sparse DistilBERT uncased](https://huggingface.co/Intel/distilbert-base-uncased-sparse-85-unstructured-pruneofa) | Small | 78.10/85.82 | 81.35 | 82.03 | 90.29/86.97 | 88.31 | 90.60 |
| [90% Sparse DistilBERT uncased](https://huggingface.co/Intel/distilbert-base-uncased-sparse-90-unstructured-pruneofa) | Small | 76.91/84.82 | 80.68 | 81.47 | 90.05/86.67 | 87.66 | 90.02 |
All the results are the mean of two separate experiments with the same hyper-parameters and different seeds.
| Training and Evaluation Data | Description |
| ----------- | ----------- |
| Datasets | SQuAD1.1: "Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable." (https://huggingface.co/datasets/squad)|
| Motivation | To build an efficient and accurate model for the question answering task. |
| Preprocessing | "We use the English Wikipedia dataset (2500M words) for training the models on the pre-training task. We split the data into train (95%) and validation (5%) sets. Both sets are preprocessed as described in the modelsโ original papers ([Devlin et al., 2019](https://arxiv.org/abs/1810.04805), [Sanh et al., 2019](https://arxiv.org/abs/1910.01108)). We process the data to use the maximum sequence length allowed by the models, however, we allow shorter sequences at a probability of 0:1." Following the pre-training on Wikipedia, fine-tuning is completed on the SQuAD1.1 dataset. |
| Ethical Considerations | Description |
| ----------- | ----------- |
| Data | The training data come from Wikipedia articles |
| Human life | The model is not intended to inform decisions central to human life or flourishing. It is an aggregated set of labelled Wikipedia articles. |
| Mitigations | No additional risk mitigation strategies were considered during model development. |
| Risks and harms | Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al., 2021](https://aclanthology.org/2021.acl-long.330.pdf), and [Bender et al., 2021](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. Beyond this, the extent of the risks involved by using the model remain unknown.|
| Use cases | - |
| Caveats and Recommendations |
| ----------- |
| Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. There are no additional caveats or recommendations for this model. |
### BibTeX entry and citation info
```bibtex
@article{zafrir2021prune,
title={Prune Once for All: Sparse Pre-Trained Language Models},
author={Zafrir, Ofir and Larey, Ariel and Boudoukh, Guy and Shen, Haihao and Wasserblat, Moshe},
journal={arXiv preprint arXiv:2111.05754},
year={2021}
}
``` | {"language": "en", "license": "apache-2.0", "tags": ["question-answering", "bert"], "datasets": ["squad"]} | Intel/bert-base-uncased-squadv1.1-sparse-80-1x4-block-pruneofa | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"en",
"dataset:squad",
"arxiv:2111.05754",
"arxiv:1810.04805",
"arxiv:1910.01108",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2111.05754",
"1810.04805",
"1910.01108"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #question-answering #en #dataset-squad #arxiv-2111.05754 #arxiv-1810.04805 #arxiv-1910.01108 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
| Model Details: 80% 1x4 Block Sparse BERT-Base (uncased) Fine Tuned on SQuADv1.1
-------------------------------------------------------------------------------
This model has been fine-tuned for the NLP task of question answering, trained on the SQuAD 1.1 dataset. It is a result of fine-tuning a Prune Once For All 80% 1x4 block sparse pre-trained BERT-Base model, combined with knowledge distillation.
>
> We present a new method for training sparse pre-trained Transformer language models by integrating weight pruning and model distillation. These sparse pre-trained models can be used to transfer learning for a wide range of tasks while maintaining their sparsity pattern. We show how the compressed sparse pre-trained models we trained transfer their knowledge to five different downstream natural language tasks with minimal accuracy loss.
>
>
>
Visualization of Prune Once for All method from Zafrir et al. (2021). More details can be found in their paper.
!Zafrir2021\_Fig1.png
### How to use
Here is how to import this model in Python:
For more code examples, refer to the GitHub Repo.
### Metrics (Model Performance):
All the results are the mean of two separate experiments with the same hyper-parameters and different seeds.
### BibTeX entry and citation info
| [
"### How to use\n\n\nHere is how to import this model in Python:\n\n\nFor more code examples, refer to the GitHub Repo.",
"### Metrics (Model Performance):\n\n\n\nAll the results are the mean of two seperate experiments with the same hyper-parameters and different seeds.",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #bert #question-answering #en #dataset-squad #arxiv-2111.05754 #arxiv-1810.04805 #arxiv-1910.01108 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nHere is how to import this model in Python:\n\n\nFor more code examples, refer to the GitHub Repo.",
"### Metrics (Model Performance):\n\n\n\nAll the results are the mean of two seperate experiments with the same hyper-parameters and different seeds.",
"### BibTeX entry and citation info"
] | [
77,
30,
31,
10
] | [
"TAGS\n#transformers #pytorch #bert #question-answering #en #dataset-squad #arxiv-2111.05754 #arxiv-1810.04805 #arxiv-1910.01108 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n### How to use\n\n\nHere is how to import this model in Python:\n\n\nFor more code examples, refer to the GitHub Repo.### Metrics (Model Performance):\n\n\n\nAll the results are the mean of two seperate experiments with the same hyper-parameters and different seeds.### BibTeX entry and citation info"
] |
fill-mask | transformers | ## Model Details: 90% Sparse BERT-Large (uncased) Prune Once for All
This model is a sparse pre-trained model that can be fine-tuned for a wide range of language tasks. The process of weight pruning is forcing some of the weights of the neural network to zero. Setting some of the weights to zero results in sparser matrices. Updating neural network weights does involve matrix multiplication, and if we can keep the matrices sparse while retaining enough important information, we can reduce the overall computational overhead. The term "sparse" in the title of the model indicates a ratio of sparsity in the weights; for more details, you can read [Zafrir et al. (2021)](https://arxiv.org/abs/2111.05754).
Visualization of Prune Once for All method from [Zafrir et al. (2021)](https://arxiv.org/abs/2111.05754):

| Model Detail | Description |
| ----------- | ----------- |
| Model Authors - Company | Intel |
| Date | September 30, 2021 |
| Version | 1 |
| Type | NLP - General sparse language model |
| Architecture | "The method consists of two steps, teacher preparation and student pruning. The sparse pre-trained model we trained is the model we use for transfer learning while maintaining its sparsity pattern. We call the method Prune Once for All since we show how to fine-tune the sparse pre-trained models for several language tasks while we prune the pre-trained model only once." [(Zafrir et al., 2021)](https://arxiv.org/abs/2111.05754) |
| Paper or Other Resources | [Zafrir et al. (2021)](https://arxiv.org/abs/2111.05754); [GitHub Repo](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all) |
| License | Apache 2.0 |
| Questions or Comments | [Community Tab](https://huggingface.co/Intel/bert-large-uncased-sparse-90-unstructured-pruneofa/discussions) and [Intel Developers Discord](https://discord.gg/rv2Gp55UJQ)|
| Intended Use | Description |
| ----------- | ----------- |
| Primary intended uses | This is a general sparse language model; in its current form, it is not ready for downstream prediction tasks, but it can be fine-tuned for several language tasks including (but not limited to) question-answering, genre natural language inference, and sentiment classification. |
| Primary intended users | Anyone who needs an efficient general language model for other downstream tasks. |
| Out-of-scope uses | The model should not be used to intentionally create hostile or alienating environments for people.|
### How to use
Here is an example of how to import this model in Python:
```python
import transformers
model = transformers.AutoModelForQuestionAnswering.from_pretrained('Intel/bert-large-uncased-sparse-90-unstructured-pruneofa')
```
For more code examples, refer to the [GitHub Repo](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all).
### Metrics (Model Performance):
| Model | Model Size | SQuADv1.1 (EM/F1) | MNLI-m (Acc) | MNLI-mm (Acc) | QQP (Acc/F1) | QNLI (Acc) | SST-2 (Acc) |
|-------------------------------|:----------:|:-----------------:|:------------:|:-------------:|:------------:|:----------:|:-----------:|
| [80% Sparse BERT-Base uncased fine-tuned on SQuAD1.1](https://huggingface.co/Intel/bert-base-uncased-squadv1.1-sparse-80-1x4-block-pruneofa) | - | 81.29/88.47 | - | - | - | - | - |
| [85% Sparse BERT-Base uncased](https://huggingface.co/Intel/bert-base-uncased-sparse-85-unstructured-pruneofa) | Medium | 81.10/88.42 | 82.71 | 83.67 | 91.15/88.00 | 90.34 | 91.46 |
| [90% Sparse BERT-Base uncased](https://huggingface.co/Intel/bert-base-uncased-sparse-90-unstructured-pruneofa) | Medium | 79.83/87.25 | 81.45 | 82.43 | 90.93/87.72 | 89.07 | 90.88 |
| [**90% Sparse BERT-Large uncased**](https://huggingface.co/Intel/bert-large-uncased-sparse-90-unstructured-pruneofa) | Large | 83.35/90.20 | 83.74 | 84.20 | 91.48/88.43 | 91.39 | 92.95 |
| [85% Sparse DistilBERT uncased](https://huggingface.co/Intel/distilbert-base-uncased-sparse-85-unstructured-pruneofa) | Small | 78.10/85.82 | 81.35 | 82.03 | 90.29/86.97 | 88.31 | 90.60 |
| [90% Sparse DistilBERT uncased](https://huggingface.co/Intel/distilbert-base-uncased-sparse-90-unstructured-pruneofa) | Small | 76.91/84.82 | 80.68 | 81.47 | 90.05/86.67 | 87.66 | 90.02 |
All the results are the mean of two separate experiments with the same hyper-parameters and different seeds.
| Training and Evaluation Data | Description |
| ----------- | ----------- |
| Datasets | [English Wikipedia Dataset](https://huggingface.co/datasets/wikipedia) (2500M words). |
| Motivation | To build an efficient and accurate base model for several downstream language tasks. |
| Preprocessing | "We use the English Wikipedia dataset (2500M words) for training the models on the pre-training task. We split the data into train (95%) and validation (5%) sets. Both sets are preprocessed as described in the modelsโ original papers ([Devlin et al., 2019](https://arxiv.org/abs/1810.04805), [Sanh et al., 2019](https://arxiv.org/abs/1910.01108)). We process the data to use the maximum sequence length allowed by the models, however, we allow shorter sequences at a probability of 0:1." |
| Ethical Considerations | Description |
| ----------- | ----------- |
| Data | The training data come from Wikipedia articles |
| Human life | The model is not intended to inform decisions central to human life or flourishing. It is an aggregated set of labelled Wikipedia articles. |
| Mitigations | No additional risk mitigation strategies were considered during model development. |
| Risks and harms | Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al., 2021](https://aclanthology.org/2021.acl-long.330.pdf), and [Bender et al., 2021](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. Beyond this, the extent of the risks involved by using the model remain unknown.|
| Use cases | - |
| Caveats and Recommendations |
| ----------- |
| Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. There are no additional caveats or recommendations for this model. |
### BibTeX entry and citation info
```bibtex
@article{zafrir2021prune,
title={Prune Once for All: Sparse Pre-Trained Language Models},
author={Zafrir, Ofir and Larey, Ariel and Boudoukh, Guy and Shen, Haihao and Wasserblat, Moshe},
journal={arXiv preprint arXiv:2111.05754},
year={2021}
}
```
| {"language": "en", "license": "apache-2.0", "tags": ["fill-mask"], "datasets": ["wikipedia", "bookcorpus"]} | Intel/bert-large-uncased-sparse-90-unstructured-pruneofa | null | [
"transformers",
"pytorch",
"tf",
"bert",
"pretraining",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:2111.05754",
"arxiv:1810.04805",
"arxiv:1910.01108",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2111.05754",
"1810.04805",
"1910.01108"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #bert #pretraining #fill-mask #en #dataset-wikipedia #dataset-bookcorpus #arxiv-2111.05754 #arxiv-1810.04805 #arxiv-1910.01108 #license-apache-2.0 #endpoints_compatible #region-us
| Model Details: 90% Sparse BERT-Large (uncased) Prune Once for All
-----------------------------------------------------------------
This model is a sparse pre-trained model that can be fine-tuned for a wide range of language tasks. The process of weight pruning is forcing some of the weights of the neural network to zero. Setting some of the weights to zero results in sparser matrices. Updating neural network weights does involve matrix multiplication, and if we can keep the matrices sparse while retaining enough important information, we can reduce the overall computational overhead. The term "sparse" in the title of the model indicates a ratio of sparsity in the weights; for more details, you can read Zafrir et al. (2021).
Visualization of Prune Once for All method from Zafrir et al. (2021):
!Zafrir2021\_Fig1.png
### How to use
Here is an example of how to import this model in Python:
For more code examples, refer to the GitHub Repo.
### Metrics (Model Performance):
All the results are the mean of two separate experiments with the same hyper-parameters and different seeds.
### BibTeX entry and citation info
| [
"### How to use\n\n\nHere is an example of how to import this model in Python:\n\n\nFor more code examples, refer to the GitHub Repo.",
"### Metrics (Model Performance):\n\n\n\nAll the results are the mean of two seperate experiments with the same hyper-parameters and different seeds.",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #bert #pretraining #fill-mask #en #dataset-wikipedia #dataset-bookcorpus #arxiv-2111.05754 #arxiv-1810.04805 #arxiv-1910.01108 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### How to use\n\n\nHere is an example of how to import this model in Python:\n\n\nFor more code examples, refer to the GitHub Repo.",
"### Metrics (Model Performance):\n\n\n\nAll the results are the mean of two seperate experiments with the same hyper-parameters and different seeds.",
"### BibTeX entry and citation info"
] | [
83,
33,
31,
10
] | [
"TAGS\n#transformers #pytorch #tf #bert #pretraining #fill-mask #en #dataset-wikipedia #dataset-bookcorpus #arxiv-2111.05754 #arxiv-1810.04805 #arxiv-1910.01108 #license-apache-2.0 #endpoints_compatible #region-us \n### How to use\n\n\nHere is an example of how to import this model in Python:\n\n\nFor more code examples, refer to the GitHub Repo.### Metrics (Model Performance):\n\n\n\nAll the results are the mean of two seperate experiments with the same hyper-parameters and different seeds.### BibTeX entry and citation info"
] |
question-answering | transformers | # 90% Sparse BERT-Large (uncased) Fine Tuned on SQuADv1.1
This model is a result of fine-tuning a Prune OFA 90% sparse pre-trained BERT-Large combined with knowledge distillation.
This model yields the following results on SQuADv1.1 development set:<br>
`{"exact_match": 83.56669820245979, "f1": 90.20829352733487}`
For further details see our paper, [Prune Once for All: Sparse Pre-Trained Language Models](https://arxiv.org/abs/2111.05754), and our open source implementation available [here](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all).
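A sketch of how the development-set numbers above could be approximated with the `squad` metric. Scoring through the pipeline is an approximation of the official evaluation script, and the small slice is only for illustration:

```python
from datasets import load_dataset, load_metric
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Intel/bert-large-uncased-squadv1.1-sparse-90-unstructured")
dev = load_dataset("squad", split="validation[:100]")  # use the full split for the real number
metric = load_metric("squad")
for ex in dev:
    pred = qa(question=ex["question"], context=ex["context"])
    metric.add(
        prediction={"id": ex["id"], "prediction_text": pred["answer"]},
        reference={"id": ex["id"], "answers": ex["answers"]})
print(metric.compute())  # {'exact_match': ..., 'f1': ...}
```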
| {"language": "en"} | Intel/bert-large-uncased-squadv1.1-sparse-90-unstructured | null | [
"transformers",
"pytorch",
"tf",
"bert",
"question-answering",
"en",
"arxiv:2111.05754",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2111.05754"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #bert #question-answering #en #arxiv-2111.05754 #endpoints_compatible #region-us
| # 90% Sparse BERT-Large (uncased) Fine Tuned on SQuADv1.1
This model is a result of fine-tuning a Prune OFA 90% sparse pre-trained BERT-Large combined with knowledge distillation.
This model yields the following results on SQuADv1.1 development set:<br>
'{"exact_match": 83.56669820245979, "f1": 90.20829352733487}'
For further details see our paper, Prune Once for All: Sparse Pre-Trained Language Models, and our open source implementation available here.
| [
"# 90% Sparse BERT-Large (uncased) Fine Tuned on SQuADv1.1\nThis model is a result of fine-tuning a Prune OFA 90% sparse pre-trained BERT-Large combined with knowledge distillation.\nThis model yields the following results on SQuADv1.1 development set:<br>\n'{\"exact_match\": 83.56669820245979, \"f1\": 90.20829352733487}'\n\nFor further details see our paper, Prune Once for All: Sparse Pre-Trained Language Models, and our open source implementation available here."
] | [
"TAGS\n#transformers #pytorch #tf #bert #question-answering #en #arxiv-2111.05754 #endpoints_compatible #region-us \n",
"# 90% Sparse BERT-Large (uncased) Fine Tuned on SQuADv1.1\nThis model is a result of fine-tuning a Prune OFA 90% sparse pre-trained BERT-Large combined with knowledge distillation.\nThis model yields the following results on SQuADv1.1 development set:<br>\n'{\"exact_match\": 83.56669820245979, \"f1\": 90.20829352733487}'\n\nFor further details see our paper, Prune Once for All: Sparse Pre-Trained Language Models, and our open source implementation available here."
] | [
39,
130
] | [
"TAGS\n#transformers #pytorch #tf #bert #question-answering #en #arxiv-2111.05754 #endpoints_compatible #region-us \n# 90% Sparse BERT-Large (uncased) Fine Tuned on SQuADv1.1\nThis model is a result of fine-tuning a Prune OFA 90% sparse pre-trained BERT-Large combined with knowledge distillation.\nThis model yields the following results on SQuADv1.1 development set:<br>\n'{\"exact_match\": 83.56669820245979, \"f1\": 90.20829352733487}'\n\nFor further details see our paper, Prune Once for All: Sparse Pre-Trained Language Models, and our open source implementation available here."
] |
fill-mask | transformers | ## Model Details: 85% Sparse DistilBERT-Base (uncased) Prune Once for All
This model is a sparse pre-trained model that can be fine-tuned for a wide range of language tasks. The process of weight pruning is forcing some of the weights of the neural network to zero. Setting some of the weights to zero results in sparser matrices. Updating neural network weights does involve matrix multiplication, and if we can keep the matrices sparse while retaining enough important information, we can reduce the overall computational overhead. The term "sparse" in the title of the model indicates a ratio of sparsity in the weights; for more details, you can read [Zafrir et al. (2021)](https://arxiv.org/abs/2111.05754).
Visualization of the Prune Once for All method from [Zafrir et al. (2021)](https://arxiv.org/abs/2111.05754):

| Model Detail | Description |
| ----------- | ----------- |
| Model Authors - Company | Intel |
| Date | September 30, 2021 |
| Version | 1 |
| Type | NLP - General sparse language model |
| Architecture | "The method consists of two steps, teacher preparation and student pruning. The sparse pre-trained model we trained is the model we use for transfer learning while maintaining its sparsity pattern. We call the method Prune Once for All since we show how to fine-tune the sparse pre-trained models for several language tasks while we prune the pre-trained model only once." [(Zafrir et al., 2021)](https://arxiv.org/abs/2111.05754) |
| Paper or Other Resources | [Zafrir et al. (2021)](https://arxiv.org/abs/2111.05754); [GitHub Repo](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all) |
| License | Apache 2.0 |
| Questions or Comments | [Community Tab](https://huggingface.co/Intel/distilbert-base-uncased-sparse-85-unstructured-pruneofa/discussions) and [Intel Developers Discord](https://discord.gg/rv2Gp55UJQ)|
| Intended Use | Description |
| ----------- | ----------- |
| Primary intended uses | This is a general sparse language model; in its current form, it is not ready for downstream prediction tasks, but it can be fine-tuned for several language tasks including (but not limited to) question-answering, multi-genre natural language inference, and sentiment classification. |
| Primary intended users | Anyone who needs an efficient general language model for other downstream tasks. |
| Out-of-scope uses | The model should not be used to intentionally create hostile or alienating environments for people.|
### How to use
Here is an example of how to import this model in Python:
```python
import transformers
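# the checkpoint holds sparse pre-trained weights; the QA head created below is freshly initialized and still needs fine-tuning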
model = transformers.AutoModelForQuestionAnswering.from_pretrained('Intel/distilbert-base-uncased-sparse-85-unstructured-pruneofa')
```
For more code examples, refer to the [GitHub Repo](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all).
### Metrics (Model Performance):
| Model | Model Size | SQuADv1.1 (EM/F1) | MNLI-m (Acc) | MNLI-mm (Acc) | QQP (Acc/F1) | QNLI (Acc) | SST-2 (Acc) |
|-------------------------------|:----------:|:-----------------:|:------------:|:-------------:|:------------:|:----------:|:-----------:|
| [80% Sparse BERT-Base uncased fine-tuned on SQuAD1.1](https://huggingface.co/Intel/bert-base-uncased-squadv1.1-sparse-80-1x4-block-pruneofa) | - | 81.29/88.47 | - | - | - | - | - |
| [85% Sparse BERT-Base uncased](https://huggingface.co/Intel/bert-base-uncased-sparse-85-unstructured-pruneofa) | Medium | 81.10/88.42 | 82.71 | 83.67 | 91.15/88.00 | 90.34 | 91.46 |
| [90% Sparse BERT-Base uncased](https://huggingface.co/Intel/bert-base-uncased-sparse-90-unstructured-pruneofa) | Medium | 79.83/87.25 | 81.45 | 82.43 | 90.93/87.72 | 89.07 | 90.88 |
| [90% Sparse BERT-Large uncased](https://huggingface.co/Intel/bert-large-uncased-sparse-90-unstructured-pruneofa) | Large | 83.35/90.20 | 83.74 | 84.20 | 91.48/88.43 | 91.39 | 92.95 |
| [**85% Sparse DistilBERT uncased**](https://huggingface.co/Intel/distilbert-base-uncased-sparse-85-unstructured-pruneofa) | Small | 78.10/85.82 | 81.35 | 82.03 | 90.29/86.97 | 88.31 | 90.60 |
| [90% Sparse DistilBERT uncased](https://huggingface.co/Intel/distilbert-base-uncased-sparse-90-unstructured-pruneofa) | Small | 76.91/84.82 | 80.68 | 81.47 | 90.05/86.67 | 87.66 | 90.02 |
All the results are the mean of two separate experiments with the same hyper-parameters and different seeds.
| Training and Evaluation Data | Description |
| ----------- | ----------- |
| Datasets | [English Wikipedia Dataset](https://huggingface.co/datasets/wikipedia) (2500M words). |
| Motivation | To build an efficient and accurate base model for several downstream language tasks. |
| Preprocessing | "We use the English Wikipedia dataset (2500M words) for training the models on the pre-training task. We split the data into train (95%) and validation (5%) sets. Both sets are preprocessed as described in the modelsโ original papers ([Devlin et al., 2019](https://arxiv.org/abs/1810.04805), [Sanh et al., 2019](https://arxiv.org/abs/1910.01108)). We process the data to use the maximum sequence length allowed by the models, however, we allow shorter sequences at a probability of 0:1." |
| Ethical Considerations | Description |
| ----------- | ----------- |
| Data | The training data come from Wikipedia articles |
| Human life | The model is not intended to inform decisions central to human life or flourishing. It is an aggregated set of labelled Wikipedia articles. |
| Mitigations | No additional risk mitigation strategies were considered during model development. |
| Risks and harms | Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al., 2021](https://aclanthology.org/2021.acl-long.330.pdf), and [Bender et al., 2021](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. Beyond this, the extent of the risks involved by using the model remain unknown.|
| Use cases | - |
| Caveats and Recommendations |
| ----------- |
| Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. There are no additional caveats or recommendations for this model. |
### BibTeX entry and citation info
```bibtex
@article{zafrir2021prune,
title={Prune Once for All: Sparse Pre-Trained Language Models},
author={Zafrir, Ofir and Larey, Ariel and Boudoukh, Guy and Shen, Haihao and Wasserblat, Moshe},
journal={arXiv preprint arXiv:2111.05754},
year={2021}
}
```
| {"language": "en", "license": "apache-2.0", "datasets": ["wikipedia"]} | Intel/distilbert-base-uncased-sparse-85-unstructured-pruneofa | null | [
"transformers",
"pytorch",
"tf",
"distilbert",
"fill-mask",
"en",
"dataset:wikipedia",
"arxiv:2111.05754",
"arxiv:1810.04805",
"arxiv:1910.01108",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2111.05754",
"1810.04805",
"1910.01108"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #distilbert #fill-mask #en #dataset-wikipedia #arxiv-2111.05754 #arxiv-1810.04805 #arxiv-1910.01108 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| Model Details: 85% Sparse DistilBERT-Base (uncased) Prune Once for All
----------------------------------------------------------------------
This model is a sparse pre-trained model that can be fine-tuned for a wide range of language tasks. The process of weight pruning is forcing some of the weights of the neural network to zero. Setting some of the weights to zero results in sparser matrices. Updating neural network weights does involve matrix multiplication, and if we can keep the matrices sparse while retaining enough important information, we can reduce the overall computational overhead. The term "sparse" in the title of the model indicates a ratio of sparsity in the weights; for more details, you can read Zafrir et al. (2021).
Visualization of the Prune Once for All method from Zafrir et al. (2021):
!Zafrir2021\_Fig1.png
### How to use
Here is an example of how to import this model in Python:
For more code examples, refer to the GitHub Repo.
### Metrics (Model Performance):
All the results are the mean of two separate experiments with the same hyper-parameters and different seeds.
### BibTeX entry and citation info
| [
"### How to use\n\n\nHere is an example of how to import this model in Python:\n\n\nFor more code examples, refer to the GitHub Repo.",
"### Metrics (Model Performance):\n\n\n\nAll the results are the mean of two seperate experiments with the same hyper-parameters and different seeds.",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #distilbert #fill-mask #en #dataset-wikipedia #arxiv-2111.05754 #arxiv-1810.04805 #arxiv-1910.01108 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### How to use\n\n\nHere is an example of how to import this model in Python:\n\n\nFor more code examples, refer to the GitHub Repo.",
"### Metrics (Model Performance):\n\n\n\nAll the results are the mean of two seperate experiments with the same hyper-parameters and different seeds.",
"### BibTeX entry and citation info"
] | [
79,
33,
31,
10
] | [
"TAGS\n#transformers #pytorch #tf #distilbert #fill-mask #en #dataset-wikipedia #arxiv-2111.05754 #arxiv-1810.04805 #arxiv-1910.01108 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### How to use\n\n\nHere is an example of how to import this model in Python:\n\n\nFor more code examples, refer to the GitHub Repo.### Metrics (Model Performance):\n\n\n\nAll the results are the mean of two seperate experiments with the same hyper-parameters and different seeds.### BibTeX entry and citation info"
] |
fill-mask | transformers | ### Model Details: 90% Sparse DistilBERT-Base (uncased) Prune Once for All
This model is a sparse pre-trained model that can be fine-tuned for a wide range of language tasks. The process of weight pruning is forcing some of the weights of the neural network to zero. Setting some of the weights to zero results in sparser matrices. Updating neural network weights does involve matrix multiplication, and if we can keep the matrices sparse while retaining enough important information, we can reduce the overall computational overhead. The term "sparse" in the title of the model indicates a ratio of sparsity in the weights; for more details, you can read [Zafrir et al. (2021)](https://arxiv.org/abs/2111.05754).
Visualization of the Prune Once for All method from [Zafrir et al. (2021)](https://arxiv.org/abs/2111.05754):

| Model Detail | Description |
| ----------- | ----------- |
| Model Authors - Company | Intel |
| Date | September 30, 2021 |
| Version | 1 |
| Type | NLP - General sparse language model |
| Architecture | "The method consists of two steps, teacher preparation and student pruning. The sparse pre-trained model we trained is the model we use for transfer learning while maintaining its sparsity pattern. We call the method Prune Once for All since we show how to fine-tune the sparse pre-trained models for several language tasks while we prune the pre-trained model only once." [(Zafrir et al., 2021)](https://arxiv.org/abs/2111.05754) |
| Paper or Other Resources | [Zafrir et al. (2021)](https://arxiv.org/abs/2111.05754); [GitHub Repo](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all) |
| License | Apache 2.0 |
| Questions or Comments | [Community Tab](https://huggingface.co/Intel/distilbert-base-uncased-sparse-90-unstructured-pruneofa/discussions) and [Intel Developers Discord](https://discord.gg/rv2Gp55UJQ)|
| Intended Use | Description |
| ----------- | ----------- |
| Primary intended uses | This is a general sparse language model; in its current form, it is not ready for downstream prediction tasks, but it can be fine-tuned for several language tasks including (but not limited to) question-answering, multi-genre natural language inference, and sentiment classification. |
| Primary intended users | Anyone who needs an efficient general language model for other downstream tasks. |
| Out-of-scope uses | The model should not be used to intentionally create hostile or alienating environments for people.|
### How to use
Here is an example of how to import this model in Python:
```python
import transformers
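# as with the 85% variant, this loads sparse pre-trained weights; the task head is randomly initialized until fine-tuned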
model = transformers.AutoModelForQuestionAnswering.from_pretrained('Intel/distilbert-base-uncased-sparse-90-unstructured-pruneofa')
```
For more code examples, refer to the [GitHub Repo](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all).
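Because the checkpoint is published under the fill-mask task, the pretraining objective can also be exercised directly. A minimal sketch, assuming the masked-language-modeling head was exported with the weights; the sentence is an arbitrary example:

```python
from transformers import pipeline

unmask = pipeline(
    "fill-mask",
    model="Intel/distilbert-base-uncased-sparse-90-unstructured-pruneofa",
)
print(unmask("The capital of France is [MASK]."))
```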
### Metrics (Model Performance):
| Model | Model Size | SQuADv1.1 (EM/F1) | MNLI-m (Acc) | MNLI-mm (Acc) | QQP (Acc/F1) | QNLI (Acc) | SST-2 (Acc) |
|-------------------------------|:----------:|:-----------------:|:------------:|:-------------:|:------------:|:----------:|:-----------:|
| [80% Sparse BERT-Base uncased fine-tuned on SQuAD1.1](https://huggingface.co/Intel/bert-base-uncased-squadv1.1-sparse-80-1x4-block-pruneofa) | - | 81.29/88.47 | - | - | - | - | - |
| [85% Sparse BERT-Base uncased](https://huggingface.co/Intel/bert-base-uncased-sparse-85-unstructured-pruneofa) | Medium | 81.10/88.42 | 82.71 | 83.67 | 91.15/88.00 | 90.34 | 91.46 |
| [90% Sparse BERT-Base uncased](https://huggingface.co/Intel/bert-base-uncased-sparse-90-unstructured-pruneofa) | Medium | 79.83/87.25 | 81.45 | 82.43 | 90.93/87.72 | 89.07 | 90.88 |
| [90% Sparse BERT-Large uncased](https://huggingface.co/Intel/bert-large-uncased-sparse-90-unstructured-pruneofa) | Large | 83.35/90.20 | 83.74 | 84.20 | 91.48/88.43 | 91.39 | 92.95 |
| [85% Sparse DistilBERT uncased](https://huggingface.co/Intel/distilbert-base-uncased-sparse-85-unstructured-pruneofa) | Small | 78.10/85.82 | 81.35 | 82.03 | 90.29/86.97 | 88.31 | 90.60 |
| [**90% Sparse DistilBERT uncased**](https://huggingface.co/Intel/distilbert-base-uncased-sparse-90-unstructured-pruneofa) | Small | 76.91/84.82 | 80.68 | 81.47 | 90.05/86.67 | 87.66 | 90.02 |
All the results are the mean of two separate experiments with the same hyper-parameters and different seeds.
| Training and Evaluation Data | Description |
| ----------- | ----------- |
| Datasets | [English Wikipedia Dataset](https://huggingface.co/datasets/wikipedia) (2500M words). |
| Motivation | To build an efficient and accurate base model for several downstream language tasks. |
| Preprocessing | "We use the English Wikipedia dataset (2500M words) for training the models on the pre-training task. We split the data into train (95%) and validation (5%) sets. Both sets are preprocessed as described in the modelsโ original papers ([Devlin et al., 2019](https://arxiv.org/abs/1810.04805), [Sanh et al., 2019](https://arxiv.org/abs/1910.01108)). We process the data to use the maximum sequence length allowed by the models, however, we allow shorter sequences at a probability of 0:1." |
| Ethical Considerations | Description |
| ----------- | ----------- |
| Data | The training data come from Wikipedia articles |
| Human life | The model is not intended to inform decisions central to human life or flourishing. It is an aggregated set of labelled Wikipedia articles. |
| Mitigations | No additional risk mitigation strategies were considered during model development. |
| Risks and harms | Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al., 2021](https://aclanthology.org/2021.acl-long.330.pdf), and [Bender et al., 2021](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. Beyond this, the extent of the risks involved by using the model remain unknown.|
| Use cases | - |
| Caveats and Recommendations |
| ----------- |
| Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. There are no additional caveats or recommendations for this model. |
### BibTeX entry and citation info
```bibtex
@article{zafrir2021prune,
title={Prune Once for All: Sparse Pre-Trained Language Models},
author={Zafrir, Ofir and Larey, Ariel and Boudoukh, Guy and Shen, Haihao and Wasserblat, Moshe},
journal={arXiv preprint arXiv:2111.05754},
year={2021}
}
``` | {"language": "en", "license": "apache-2.0", "datasets": ["wikipedia"]} | Intel/distilbert-base-uncased-sparse-90-unstructured-pruneofa | null | [
"transformers",
"pytorch",
"tf",
"distilbert",
"fill-mask",
"en",
"dataset:wikipedia",
"arxiv:2111.05754",
"arxiv:1810.04805",
"arxiv:1910.01108",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2111.05754",
"1810.04805",
"1910.01108"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #distilbert #fill-mask #en #dataset-wikipedia #arxiv-2111.05754 #arxiv-1810.04805 #arxiv-1910.01108 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ### Model Details: 90% Sparse DistilBERT-Base (uncased) Prune Once for All
This model is a sparse pre-trained model that can be fine-tuned for a wide range of language tasks. The process of weight pruning is forcing some of the weights of the neural network to zero. Setting some of the weights to zero results in sparser matrices. Updating neural network weights does involve matrix multiplication, and if we can keep the matrices sparse while retaining enough important information, we can reduce the overall computational overhead. The term "sparse" in the title of the model indicates a ratio of sparsity in the weights; for more details, you can read Zafrir et al. (2021).
Visualization of the Prune Once for All method from Zafrir et al. (2021):
!Zafrir2021\_Fig1.png
### How to use
Here is an example of how to import this model in Python:
For more code examples, refer to the GitHub Repo.
### Metrics (Model Performance):
All the results are the mean of two separate experiments with the same hyper-parameters and different seeds.
### BibTeX entry and citation info
| [
"### Model Details: 90% Sparse DistilBERT-Base (uncased) Prune Once for All\n\n\nThis model is a sparse pre-trained model that can be fine-tuned for a wide range of language tasks. The process of weight pruning is forcing some of the weights of the neural network to zero. Setting some of the weights to zero results in sparser matrices. Updating neural network weights does involve matrix multiplication, and if we can keep the matrices sparse while retaining enough important information, we can reduce the overall computational overhead. The term \"sparse\" in the title of the model indicates a ratio of sparsity in the weights; for more details, you can read Zafrir et al. (2021).\n\n\nVisualization of Prunce Once for All method from Zafrir et al. (2021):\n!Zafrir2021\\_Fig1.png",
"### How to use\n\n\nHere is an example of how to import this model in Python:\n\n\nFor more code examples, refer to the GitHub Repo.",
"### Metrics (Model Performance):\n\n\n\nAll the results are the mean of two seperate experiments with the same hyper-parameters and different seeds.",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #distilbert #fill-mask #en #dataset-wikipedia #arxiv-2111.05754 #arxiv-1810.04805 #arxiv-1910.01108 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Model Details: 90% Sparse DistilBERT-Base (uncased) Prune Once for All\n\n\nThis model is a sparse pre-trained model that can be fine-tuned for a wide range of language tasks. The process of weight pruning is forcing some of the weights of the neural network to zero. Setting some of the weights to zero results in sparser matrices. Updating neural network weights does involve matrix multiplication, and if we can keep the matrices sparse while retaining enough important information, we can reduce the overall computational overhead. The term \"sparse\" in the title of the model indicates a ratio of sparsity in the weights; for more details, you can read Zafrir et al. (2021).\n\n\nVisualization of Prunce Once for All method from Zafrir et al. (2021):\n!Zafrir2021\\_Fig1.png",
"### How to use\n\n\nHere is an example of how to import this model in Python:\n\n\nFor more code examples, refer to the GitHub Repo.",
"### Metrics (Model Performance):\n\n\n\nAll the results are the mean of two seperate experiments with the same hyper-parameters and different seeds.",
"### BibTeX entry and citation info"
] | [
79,
183,
33,
31,
10
] | [
"TAGS\n#transformers #pytorch #tf #distilbert #fill-mask #en #dataset-wikipedia #arxiv-2111.05754 #arxiv-1810.04805 #arxiv-1910.01108 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Model Details: 90% Sparse DistilBERT-Base (uncased) Prune Once for All\n\n\nThis model is a sparse pre-trained model that can be fine-tuned for a wide range of language tasks. The process of weight pruning is forcing some of the weights of the neural network to zero. Setting some of the weights to zero results in sparser matrices. Updating neural network weights does involve matrix multiplication, and if we can keep the matrices sparse while retaining enough important information, we can reduce the overall computational overhead. The term \"sparse\" in the title of the model indicates a ratio of sparsity in the weights; for more details, you can read Zafrir et al. (2021).\n\n\nVisualization of Prunce Once for All method from Zafrir et al. (2021):\n!Zafrir2021\\_Fig1.png### How to use\n\n\nHere is an example of how to import this model in Python:\n\n\nFor more code examples, refer to the GitHub Repo.### Metrics (Model Performance):\n\n\n\nAll the results are the mean of two seperate experiments with the same hyper-parameters and different seeds.### BibTeX entry and citation info"
] |
question-answering | transformers |
## Model Details: Dynamic-TinyBERT: Boost TinyBERT's Inference Efficiency by Dynamic Sequence Length
Dynamic-TinyBERT has been fine-tuned for the NLP task of question answering, trained on the SQuAD 1.1 dataset. [Guskin et al. (2021)](https://neurips2021-nlp.github.io/papers/16/CameraReady/Dynamic_TinyBERT_NLSP2021_camera_ready.pdf) note:
> Dynamic-TinyBERT is a TinyBERT model that utilizes sequence-length reduction and Hyperparameter Optimization for enhanced inference efficiency per any computational budget. Dynamic-TinyBERT is trained only once, performing on-par with BERT and achieving an accuracy-speedup trade-off superior to any other efficient approaches (up to 3.3x with <1% loss-drop).
| Model Detail | Description |
| ----------- | ----------- |
| Model Authors - Company | Intel |
| Model Card Authors | Intel in collaboration with Hugging Face |
| Date | November 22, 2021 |
| Version | 1 |
| Type | NLP - Question Answering |
| Architecture | "For our Dynamic-TinyBERT model we use the architecture of TinyBERT6L: a small BERT model with 6 layers, a hidden size of 768, a feed forward size of 3072 and 12 heads." [Guskin et al. (2021)](https://gyuwankim.github.io/publication/dynamic-tinybert/poster.pdf) |
| Paper or Other Resources | [Paper](https://neurips2021-nlp.github.io/papers/16/CameraReady/Dynamic_TinyBERT_NLSP2021_camera_ready.pdf); [Poster](https://gyuwankim.github.io/publication/dynamic-tinybert/poster.pdf); [GitHub Repo](https://github.com/IntelLabs/Model-Compression-Research-Package) |
| License | Apache 2.0 |
| Questions or Comments | [Community Tab](https://huggingface.co/Intel/dynamic_tinybert/discussions) and [Intel Developers Discord](https://discord.gg/rv2Gp55UJQ)|
| Intended Use | Description |
| ----------- | ----------- |
| Primary intended uses | You can use the model for the NLP task of question answering: given a corpus of text, you can ask it a question about that text, and it will find the answer in the text. |
| Primary intended users | Anyone doing question answering |
| Out-of-scope uses | The model should not be used to intentionally create hostile or alienating environments for people.|
### How to use
Here is how to import this model in Python:
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("Intel/dynamic_tinybert")
model = AutoModelForQuestionAnswering.from_pretrained("Intel/dynamic_tinybert")
```
</details>
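A minimal inference sketch using the `tokenizer` and `model` loaded above, with the standard extractive-QA decoding (argmax over the start/end logits); the question and context are invented placeholders:

```python
import torch

question = "Who discovered penicillin?"
context = "Alexander Fleming discovered penicillin in 1928."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# most likely start and end positions of the answer span
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))  # e.g. "alexander fleming"
```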
| Factors | Description |
| ----------- | ----------- |
| Groups | Many Wikipedia articles with question and answer labels are contained in the training data |
| Instrumentation | - |
| Environment | Training was completed on a Titan GPU. |
| Card Prompts | Model deployment on alternate hardware and software will change model performance |
| Metrics | Description |
| ----------- | ----------- |
| Model performance measures | F1 |
| Decision thresholds | - |
| Approaches to uncertainty and variability | - |
| Training and Evaluation Data | Description |
| ----------- | ----------- |
| Datasets | SQuAD1.1: "Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable." (https://huggingface.co/datasets/squad)|
| Motivation | To build an efficient and accurate model for the question answering task. |
| Preprocessing | "We start with a pre-trained general-TinyBERT student, which was trained to learn the general knowledge of BERT using the general-distillation method presented by TinyBERT. We perform transformer distillation from a fine- tuned BERT teacher to the student, following the same training steps used in the original TinyBERT: (1) intermediate-layer distillation (ID) โ learning the knowledge residing in the hidden states and attentions matrices, and (2) prediction-layer distillation (PD) โ fitting the predictions of the teacher." ([Guskin et al., 2021](https://neurips2021-nlp.github.io/papers/16/CameraReady/Dynamic_TinyBERT_NLSP2021_camera_ready.pdf))|
Model Performance Analysis:
| Model | Max F1 (full model) | Best Speedup within BERT-1% |
|------------------|---------------------|-----------------------------|
| Dynamic-TinyBERT | 88.71 | 3.3x |
| Ethical Considerations | Description |
| ----------- | ----------- |
| Data | The training data come from Wikipedia articles |
| Human life | The model is not intended to inform decisions central to human life or flourishing. It is an aggregated set of labelled Wikipedia articles. |
| Mitigations | No additional risk mitigation strategies were considered during model development. |
| Risks and harms | Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al., 2021](https://aclanthology.org/2021.acl-long.330.pdf), and [Bender et al., 2021](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. Beyond this, the extent of the risks involved by using the model remain unknown.|
| Use cases | - |
| Caveats and Recommendations |
| ----------- |
| Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. There are no additional caveats or recommendations for this model. |
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2111.09645,
doi = {10.48550/ARXIV.2111.09645},
url = {https://arxiv.org/abs/2111.09645},
author = {Guskin, Shira and Wasserblat, Moshe and Ding, Ke and Kim, Gyuwan},
keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Dynamic-TinyBERT: Boost TinyBERT's Inference Efficiency by Dynamic Sequence Length},
publisher = {arXiv},
year = {2021},
}
``` | {"language": ["en"], "license": "apache-2.0", "tags": ["question-answering", "bert"], "datasets": ["squad"]} | Intel/dynamic_tinybert | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"en",
"dataset:squad",
"arxiv:2111.09645",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2111.09645"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #question-answering #en #dataset-squad #arxiv-2111.09645 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
| Model Details: Dynamic-TinyBERT: Boost TinyBERT's Inference Efficiency by Dynamic Sequence Length
-------------------------------------------------------------------------------------------------
Dynamic-TinyBERT has been fine-tuned for the NLP task of question answering, trained on the SQuAD 1.1 dataset. Guskin et al. (2021) note:
>
> Dynamic-TinyBERT is a TinyBERT model that utilizes sequence-length reduction and Hyperparameter Optimization for enhanced inference efficiency per any computational budget. Dynamic-TinyBERT is trained only once, performing on-par with BERT and achieving an accuracy-speedup trade-off superior to any other efficient approaches (up to 3.3x with <1% loss-drop).
>
>
>
### How to use
Here is how to import this model in Python:
Click to expand
Model Performance Analysis:
Model: Dynamic-TinyBERT, Max F1 (full model): 88.71, Best Speedup within BERT-1%: 3.3x
### BibTeX entry and citation info
| [
"### How to use\n\n\nHere is how to import this model in Python:\n\n\n\n Click to expand \n\n\n\n\nModel Performance Analysis:\n\n\nModel: Dynamic-TinyBERT, Max F1 (full model): 88.71, Best Speedup within BERT-1%: 3.3x",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #bert #question-answering #en #dataset-squad #arxiv-2111.09645 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nHere is how to import this model in Python:\n\n\n\n Click to expand \n\n\n\n\nModel Performance Analysis:\n\n\nModel: Dynamic-TinyBERT, Max F1 (full model): 88.71, Best Speedup within BERT-1%: 3.3x",
"### BibTeX entry and citation info"
] | [
57,
54,
10
] | [
"TAGS\n#transformers #pytorch #bert #question-answering #en #dataset-squad #arxiv-2111.09645 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n### How to use\n\n\nHere is how to import this model in Python:\n\n\n\n Click to expand \n\n\n\n\nModel Performance Analysis:\n\n\nModel: Dynamic-TinyBERT, Max F1 (full model): 88.71, Best Speedup within BERT-1%: 3.3x### BibTeX entry and citation info"
] |
text-generation | transformers | #harry potter | {"tags": ["conversational"]} | Invincible/Chat_bot-Harrypotter-medium | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| #harry potter | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
39
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
#harry potter Model | {"tags": ["conversational"]} | Invincible/Chat_bot-Harrypotter-small | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
#harry potter Model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] | [
43
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
text-generation | null | #Harry Potter DialoGPT Model | {"tags": ["conversational"]} | Invincible/DialoGPT-medium-harryPotter | null | [
"conversational",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#conversational #region-us
| #Harry Potter DialoGPT Model | [] | [
"TAGS\n#conversational #region-us \n"
] | [
8
] | [
"TAGS\n#conversational #region-us \n"
] |
text-classification | transformers |
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
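No usage example is provided in the card; in its absence, the checkpoint should load like any text-classification model. The label names returned depend on how the classification head was configured during fine-tuning, and the review below is an invented example:

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="IsabellaKarabasz/roberta-base-bne-finetuned-amazon_reviews_multi",
)
print(clf("El producto llegó roto y el vendedor nunca respondió."))
```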
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
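For orientation, these settings map onto the `transformers` `TrainingArguments` API roughly as below; `output_dir` is a placeholder, and the Adam betas/epsilon listed above are the library defaults:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="roberta-base-bne-finetuned-amazon_reviews_multi",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```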
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
| {"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "model_index": [{"name": "roberta-base-bne-finetuned-amazon_reviews_multi", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "amazon_reviews_multi", "type": "amazon_reviews_multi", "args": "es"}}]}]} | IsabellaKarabasz/roberta-base-bne-finetuned-amazon_reviews_multi | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of BSC-TeMU/roberta-base-bne on the amazon_reviews_multi dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
| [
"# roberta-base-bne-finetuned-amazon_reviews_multi\n\nThis model is a fine-tuned version of BSC-TeMU/roberta-base-bne on the amazon_reviews_multi dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2",
"### Framework versions\n\n- Transformers 4.9.2\n- Pytorch 1.9.0+cu102\n- Datasets 1.11.0\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# roberta-base-bne-finetuned-amazon_reviews_multi\n\nThis model is a fine-tuned version of BSC-TeMU/roberta-base-bne on the amazon_reviews_multi dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2",
"### Framework versions\n\n- Transformers 4.9.2\n- Pytorch 1.9.0+cu102\n- Datasets 1.11.0\n- Tokenizers 0.10.3"
] | [
53,
47,
7,
9,
9,
4,
93,
44
] | [
"TAGS\n#transformers #pytorch #roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n# roberta-base-bne-finetuned-amazon_reviews_multi\n\nThis model is a fine-tuned version of BSC-TeMU/roberta-base-bne on the amazon_reviews_multi dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2### Framework versions\n\n- Transformers 4.9.2\n- Pytorch 1.9.0+cu102\n- Datasets 1.11.0\n- Tokenizers 0.10.3"
] |
automatic-speech-recognition | transformers |
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 156.8789
- Wer: 1.3456
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0
| {"language": ["ab"], "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]} | Iskaj/hf-challenge-test | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"ab",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ab"
] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #ab #dataset-common_voice #endpoints_compatible #region-us
|
#
This model is a fine-tuned version of hf-test/xls-r-dummy on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 156.8789
- Wer: 1.3456
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0
| [
"# \n\nThis model is a fine-tuned version of hf-test/xls-r-dummy on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 156.8789\n- Wer: 1.3456",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 10\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.18.1.dev0\n- Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #ab #dataset-common_voice #endpoints_compatible #region-us \n",
"# \n\nThis model is a fine-tuned version of hf-test/xls-r-dummy on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 156.8789\n- Wer: 1.3456",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 10\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.18.1.dev0\n- Tokenizers 0.11.0"
] | [
62,
68,
7,
9,
9,
4,
100,
5,
50
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #ab #dataset-common_voice #endpoints_compatible #region-us \n# \n\nThis model is a fine-tuned version of hf-test/xls-r-dummy on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 156.8789\n- Wer: 1.3456## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 10\n- mixed_precision_training: Native AMP### Training results### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.18.1.dev0\n- Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
# newnew
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset.
It achieves the following results on the evaluation set:
- Loss: 11.4375
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 4000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
| {"language": ["nl"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "newnew", "results": []}]} | Iskaj/newnew | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"nl",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"nl"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #nl #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
# newnew
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset.
It achieves the following results on the evaluation set:
- Loss: 11.4375
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 4000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
| [
"# newnew\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 11.4375\n- Wer: 1.0",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 4000\n- num_epochs: 50.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.18.3.dev0\n- Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #nl #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"# newnew\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 11.4375\n- Wer: 1.0",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 4000\n- num_epochs: 50.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.18.3.dev0\n- Tokenizers 0.11.0"
] | [
67,
74,
7,
9,
9,
4,
135,
5,
50
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #nl #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n# newnew\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 11.4375\n- Wer: 1.0## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 4000\n- num_epochs: 50.0\n- mixed_precision_training: Native AMP### Training results### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.18.3.dev0\n- Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers | Copy of "facebook/wav2vec2-large-xlsr-53-dutch"
| {} | Iskaj/w2v-xlsr-dutch-lm-added | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us
| Copy of "facebook/wav2vec2-large-xlsr-53-dutch"
| [] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us \n"
] | [
32
] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us \n"
] |
automatic-speech-recognition | transformers | Model cloned from https://huggingface.co/facebook/wav2vec2-large-xlsr-53-dutch
Currently bugged: the model outputs logits of size 48, while the vocabulary has 50 entries. | {} | Iskaj/w2v-xlsr-dutch-lm | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us
| Model cloned from URL
Currently bugged: the model outputs logits of size 48, while the vocabulary has 50 entries. | [] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us \n"
] | [
30
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us \n"
] |
automatic-speech-recognition | transformers | # xlsr300m_cv_7.0_nl_lm | {"language": ["nl"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "nl", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R-300M - Dutch", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8 NL", "type": "mozilla-foundation/common_voice_8_0", "args": "nl"}, "metrics": [{"type": "wer", "value": 32, "name": "Test WER"}, {"type": "cer", "value": 17, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "nl"}, "metrics": [{"type": "wer", "value": 37.44, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "nl"}, "metrics": [{"type": "wer", "value": 38.74, "name": "Test WER"}]}]}]} | Iskaj/xlsr300m_cv_7.0_nl_lm | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"nl",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"nl"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #nl #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| # xlsr300m_cv_7.0_nl_lm | [
"# xlsr300m_cv_7.0_nl_lm"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #nl #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# xlsr300m_cv_7.0_nl_lm"
] | [
96,
17
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #nl #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n# xlsr300m_cv_7.0_nl_lm"
] |
automatic-speech-recognition | transformers |
# xlsr300m_cv_8.0_nl
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Iskaj/xlsr300m_cv_8.0_nl --dataset mozilla-foundation/common_voice_8_0 --config nl --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id Iskaj/xlsr300m_cv_8.0_nl --dataset speech-recognition-community-v2/dev_data --config nl --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
### Inference
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "Iskaj/xlsr300m_cv_8.0_nl"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "nl", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
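# Common Voice ships 48 kHz clips; resample to the 16 kHz rate the model was trained on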
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
inputs = processor(resampled_audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
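# greedy CTC decoding: batch_decode collapses repeated tokens and strips blank tokens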
transcription = processor.batch_decode(predicted_ids)
transcription[0].lower()
#'het kontine schip lag aangemeert in de aven'
```
| {"language": ["nl"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "mozilla-foundation/common_voice_7_0", "nl", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R-300M - Dutch", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8 NL", "type": "mozilla-foundation/common_voice_8_0", "args": "nl"}, "metrics": [{"type": "wer", "value": 46.94, "name": "Test WER"}, {"type": "cer", "value": 21.65, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "nl"}, "metrics": [{"type": "wer", "value": "???", "name": "Test WER"}, {"type": "cer", "value": "???", "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "nl"}, "metrics": [{"type": "wer", "value": 42.56, "name": "Test WER"}]}]}]} | Iskaj/xlsr300m_cv_8.0_nl | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"mozilla-foundation/common_voice_7_0",
"nl",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"nl"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #mozilla-foundation/common_voice_7_0 #nl #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# xlsr300m_cv_8.0_nl
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common_voice_8_0' with split 'test'
2. To evaluate on 'speech-recognition-community-v2/dev_data'
### Inference
| [
"# xlsr300m_cv_8.0_nl",
"#### Evaluation Commands\n1. To evaluate on 'mozilla-foundation/common_voice_8_0' with split 'test'\n\n\n\n2. To evaluate on 'speech-recognition-community-v2/dev_data'",
"### Inference"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #mozilla-foundation/common_voice_7_0 #nl #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# xlsr300m_cv_8.0_nl",
"#### Evaluation Commands\n1. To evaluate on 'mozilla-foundation/common_voice_8_0' with split 'test'\n\n\n\n2. To evaluate on 'speech-recognition-community-v2/dev_data'",
"### Inference"
] | [
110,
14,
50,
4
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #mozilla-foundation/common_voice_7_0 #nl #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n# xlsr300m_cv_8.0_nl#### Evaluation Commands\n1. To evaluate on 'mozilla-foundation/common_voice_8_0' with split 'test'\n\n\n\n2. To evaluate on 'speech-recognition-community-v2/dev_data'### Inference"
] |
automatic-speech-recognition | transformers |
# xlsr_300m_CV_8.0_50_EP_new_params_nl | {"language": ["nl"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "nl", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R-300M - Dutch", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8 NL", "type": "mozilla-foundation/common_voice_8_0", "args": "nl"}, "metrics": [{"type": "wer", "value": 35.44, "name": "Test WER"}, {"type": "cer", "value": 19.57, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "nl"}, "metrics": [{"type": "wer", "value": 37.17, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "nl"}, "metrics": [{"type": "wer", "value": 38.73, "name": "Test WER"}]}]}]} | Iskaj/xlsr_300m_CV_8.0_50_EP_new_params_nl | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"nl",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"nl"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #nl #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# xlsr_300m_CV_8.0_50_EP_new_params_nl | [
"# xlsr_300m_CV_8.0_50_EP_new_params_nl"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #nl #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# xlsr_300m_CV_8.0_50_EP_new_params_nl"
] | [
96,
23
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #nl #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n# xlsr_300m_CV_8.0_50_EP_new_params_nl"
] |
text-generation | null | #sherlock | {"tags": ["conversational"]} | Istiaque190515/Sherlock | null | [
"conversational",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#conversational #region-us
| #sherlock | [] | [
"TAGS\n#conversational #region-us \n"
] | [
8
] | [
"TAGS\n#conversational #region-us \n"
] |
text-generation | transformers | #harry_bot | {"tags": ["conversational"]} | Istiaque190515/harry_bot_discord | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| #harry_bot | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
39
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | #harry_potter | {"tags": ["conversational"]} | Istiaque190515/harry_potter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| #harry_potter | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
39
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Tohru DialoGPT model | {"tags": ["conversational"]} | ItoYagura/DialoGPT-medium-tohru | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Tohru DialoGPT model | [
"# Tohru DialoGPT model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Tohru DialoGPT model"
] | [
39,
8
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Tohru DialoGPT model"
] |
text-generation | transformers |
# Pickle Rick DialoGPT Model | {"tags": ["conversational"]} | ItzJorinoPlays/DialoGPT-small-PickleRick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Pickle Rick DialoGPT Model | [
"# Pickle Rick DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Pickle Rick DialoGPT Model"
] | [
39,
8
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Pickle Rick DialoGPT Model"
] |
text-generation | transformers |
# Thor DialogGPT Model | {"tags": ["conversational"]} | J-Chiang/DialoGPT-small-thor | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Thor DialogGPT Model | [
"# Thor DialogGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Thor DialogGPT Model"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Thor DialogGPT Model"
] |
question-answering | transformers |
## Model description
This model was obtained by fine-tuning deepset/bert-base-cased-squad2 on the CORD-19 dataset.
## How to use
```python
from transformers.pipelines import pipeline
model_name = "JAlexis/PruebaBert"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
# Note: the original example repeated the 'question' and 'context' keys;
# duplicate keys in a Python dict literal silently overwrite the earlier
# values, so only one question/context pair is kept here.
inputs = {
    'question': 'How can I protect myself against covid-19?',
    'context': 'Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. [6] to the current context of the COVID-19 pandemic and the culture of the USA. Applying this model in a different time and context provides an opportunity to make comparisons of reactions to information sources across a decade of evolving attitudes toward media and government, between two cultures (Hong Kong vs. the USA), and between two considerably different global pandemics (H1N1 vs. COVID-19). ',
}
nlp(inputs)
```
## Overview
```
Language model: deepset/bert-base-cased-squad2
Language: English
Downstream-task: Q&A
Datasets: CORD-19 from 31st January 2022
Code: Haystack and FARM
Infrastructure: Tesla T4
```
## Hyperparameters
```
batch_size = 8
n_epochs = 7
max_seq_len = max_length
learning_rate = AdamW: 2e-5
```
| {"language": "en", "tags": ["pytorch", "question-answering"], "datasets": ["squad2", "cord19"], "metrics": ["f1"], "widget": [{"text": "How can I protect myself against covid-19?", "context": "Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. [6] to the current context of the COVID-19 pandemic and the culture of the USA. Applying this model in a different time and context provides an opportunity to make comparisons of reactions to information sources across a decade of evolving attitudes toward media and government, between two cultures (Hong Kong vs. the USA), and between two considerably different global pandemics (H1N1 vs. COVID-19)."}, {"text": "How can I protect myself against covid-19?", "context": " "}]} | JAlexis/Bertv1_fine | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"en",
"dataset:squad2",
"dataset:cord19",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #bert #question-answering #en #dataset-squad2 #dataset-cord19 #endpoints_compatible #has_space #region-us
|
## Model description
This model was obtained by fine-tuning deepset/bert-base-cased-squad2 on the CORD-19 dataset.
## How to use
## Overview
## Hyperparameters
| [
"## Model description \nThis model was obtained by fine-tuning deepset/bert-base-cased-squad2 on Cord19 Dataset.",
"## How to use",
"## Overview",
"## Hyperparameters"
] | [
"TAGS\n#transformers #pytorch #bert #question-answering #en #dataset-squad2 #dataset-cord19 #endpoints_compatible #has_space #region-us \n",
"## Model description \nThis model was obtained by fine-tuning deepset/bert-base-cased-squad2 on Cord19 Dataset.",
"## How to use",
"## Overview",
"## Hyperparameters"
] | [
41,
30,
5,
3,
6
] | [
"TAGS\n#transformers #pytorch #bert #question-answering #en #dataset-squad2 #dataset-cord19 #endpoints_compatible #has_space #region-us \n## Model description \nThis model was obtained by fine-tuning deepset/bert-base-cased-squad2 on Cord19 Dataset.## How to use## Overview## Hyperparameters"
] |
question-answering | transformers |
## Model description
This model was obtained by fine-tuning deepset/bert-base-cased-squad2 on the CORD-19 dataset.
## How to use
```python
from transformers.pipelines import pipeline
model_name = "JAlexis/PruebaBert"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
# Note: the original example repeated the 'question' and 'context' keys;
# duplicate keys in a Python dict literal silently overwrite the earlier
# values, so only one question/context pair is kept here.
inputs = {
    'question': 'How can I protect myself against covid-19?',
    'context': 'Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. [6] to the current context of the COVID-19 pandemic and the culture of the USA. Applying this model in a different time and context provides an opportunity to make comparisons of reactions to information sources across a decade of evolving attitudes toward media and government, between two cultures (Hong Kong vs. the USA), and between two considerably different global pandemics (H1N1 vs. COVID-19). ',
}
nlp(inputs)
```
## Overview
```
Language model: deepset/bert-base-cased-squad2
Language: English
Downstream-task: Q&A
Datasets: CORD-19 from 31st January 2022
Code: Haystack and FARM
Infrastructure: Tesla T4
```
## Hyperparameters
```
batch_size = 8
n_epochs = 9
max_seq_len = max_length
learning_rate = AdamW: 1e-5
```
| {"language": "en", "tags": ["pytorch", "question-answering"], "datasets": ["squad2", "cord19"], "metrics": ["EM (exact match)"], "widget": [{"text": "How can I protect myself against covid-19?", "context": "Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. [6] to the current context of the COVID-19 pandemic and the culture of the USA. Applying this model in a different time and context provides an opportunity to make comparisons of reactions to information sources across a decade of evolving attitudes toward media and government, between two cultures (Hong Kong vs. the USA), and between two considerably different global pandemics (H1N1 vs. COVID-19)."}, {"text": "How can I protect myself against covid-19?", "context": " "}]} | JAlexis/PruebaBert | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"en",
"dataset:squad2",
"dataset:cord19",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #bert #question-answering #en #dataset-squad2 #dataset-cord19 #endpoints_compatible #region-us
|
## Model description
This model was obtained by fine-tuning deepset/bert-base-cased-squad2 on the CORD-19 dataset.
## How to use
## Overview
## Hyperparameters
| [
"## Model description \nThis model was obtained by fine-tuning deepset/bert-base-cased-squad2 on Cord19 Dataset.",
"## How to use",
"## Overview",
"## Hyperparameters"
] | [
"TAGS\n#transformers #pytorch #bert #question-answering #en #dataset-squad2 #dataset-cord19 #endpoints_compatible #region-us \n",
"## Model description \nThis model was obtained by fine-tuning deepset/bert-base-cased-squad2 on Cord19 Dataset.",
"## How to use",
"## Overview",
"## Hyperparameters"
] | [
37,
30,
5,
3,
6
] | [
"TAGS\n#transformers #pytorch #bert #question-answering #en #dataset-squad2 #dataset-cord19 #endpoints_compatible #region-us \n## Model description \nThis model was obtained by fine-tuning deepset/bert-base-cased-squad2 on Cord19 Dataset.## How to use## Overview## Hyperparameters"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8366
- Matthews Correlation: 0.5472
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
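As a rough illustration, these settings map onto `transformers.TrainingArguments` along the following lines; this is a sketch, not the original training script, and `output_dir` is an assumption (the Adam betas/epsilon listed above are the optimizer defaults).

```python
# Hypothetical mapping of the listed hyperparameters onto TrainingArguments;
# output_dir is an assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```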
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5224 | 1.0 | 535 | 0.5432 | 0.4243 |
| 0.3447 | 2.0 | 1070 | 0.4968 | 0.5187 |
| 0.2347 | 3.0 | 1605 | 0.6540 | 0.5280 |
| 0.1747 | 4.0 | 2140 | 0.7547 | 0.5367 |
| 0.1255 | 5.0 | 2675 | 0.8366 | 0.5472 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5471613867597194, "name": "Matthews Correlation"}]}]}]} | JBNLRY/distilbert-base-uncased-finetuned-cola | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-cola
======================================
This model is a fine-tuned version of distilbert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8366
* Matthews Correlation: 0.5472
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
56,
101,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5### Training results### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text2text-generation | transformers |
# T5 Question Generation and Question Answering
## Model description
This model is a T5 Transformers model (airklizz/t5-base-multi-fr-wiki-news) that was fine-tuned in French on three different tasks:
* question generation
* question answering
* answer extraction
It obtains quite good results on the FQuAD validation dataset.
## Intended uses & limitations
This model works for the three tasks mentioned earlier and was not tested on other tasks.
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
model = T5ForConditionalGeneration.from_pretrained("JDBN/t5-base-fr-qg-fquad")
tokenizer = T5Tokenizer.from_pretrained("JDBN/t5-base-fr-qg-fquad")
```
## Training data
The initial model used was https://huggingface.co/airKlizz/t5-base-multi-fr-wiki-news. This model was fine-tuned on a dataset composed of FQuAD and PIAF for the three tasks mentioned previously.
The data were preprocessed like this
* question generation: "generate question: Barack Hussein Obama, né le 4 aout 1961, est un homme politique américain et avocat. Il a été élu <hl> en 2009 <hl> pour devenir le 44ème président des Etats-Unis d'Amérique."
* question answering: "question: Quand Barack Hussein Obama a-t-il été élu président des Etats-Unis d'Amérique? context: Barack Hussein Obama, né le 4 aout 1961, est un homme politique américain et avocat. Il a été élu en 2009 pour devenir le 44ème président des Etats-Unis d'Amérique."
* answer extraction: "extract_answers: Barack Hussein Obama, né le 4 aout 1961, est un homme politique américain et avocat. <hl> Il a été élu en 2009 pour devenir le 44ème président des Etats-Unis d'Amérique <hl>."
The preprocessing we used was implemented in https://github.com/patil-suraj/question_generation
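As a minimal sketch of the question-generation task, reusing the `model` and `tokenizer` loaded above (the input follows the card's own `<hl>` highlight format; the generation settings are assumptions):

```python
# Minimal question-generation sketch; max_length is an assumed setting.
text = ("generate question: Barack Hussein Obama, né le 4 aout 1961, est un homme "
        "politique américain et avocat. Il a été élu <hl> en 2009 <hl> pour devenir "
        "le 44ème président des Etats-Unis d'Amérique.")
input_ids = tokenizer(text, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```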
## Eval results
#### On FQuAD validation set
| BLEU_1 | BLEU_2 | BLEU_3 | BLEU_4 | METEOR | ROUGE_L | CIDEr |
|--------|--------|--------|--------|--------|---------|-------|
| 0.290 | 0.203 | 0.149 | 0.111 | 0.197 | 0.284 | 1.038 |
#### Question Answering metrics
For these metrics, the performance of this question answering model (https://huggingface.co/illuin/camembert-base-fquad) on the original FQuAD questions and on T5-generated questions is compared.
| Questions | Exact Match | F1 Score |
|------------------|--------|--------|
|Original FQuAD | 54.015 | 77.466 |
|Generated | 45.765 | 67.306 |
### BibTeX entry and citation info
```bibtex
@misc{githubPatil,
author = {Patil Suraj},
title = {question generation GitHub repository},
year = {2020},
howpublished={\url{https://github.com/patil-suraj/question_generation}}
}
@article{T5,
title={Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
author={Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
year={2019},
eprint={1910.10683},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
@misc{dhoffschmidt2020fquad,
title={FQuAD: French Question Answering Dataset},
author={Martin d'Hoffschmidt and Wacim Belblidia and Tom Brendlรฉ and Quentin Heinrich and Maxime Vidal},
year={2020},
eprint={2002.06071},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {"language": "fr", "tags": ["pytorch", "t5", "question-generation", "seq2seq"], "datasets": ["fquad", "piaf"], "widget": [{"text": "generate question: Barack Hussein Obama, n\u00e9 le 4 aout 1961, est un homme politique am\u00e9ricain et avocat. Il a \u00e9t\u00e9 \u00e9lu <hl> en 2009 <hl> pour devenir le 44\u00e8me pr\u00e9sident des Etats-Unis d'Am\u00e9rique. </s>"}, {"text": "question: Quand Barack Obama a t'il \u00e9t\u00e9 \u00e9lu pr\u00e9sident? context: Barack Hussein Obama, n\u00e9 le 4 aout 1961, est un homme politique am\u00e9ricain et avocat. Il a \u00e9t\u00e9 \u00e9lu en 2009 pour devenir le 44\u00e8me pr\u00e9sident des Etats-Unis d'Am\u00e9rique. </s>"}]} | JDBN/t5-base-fr-qg-fquad | null | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"question-generation",
"seq2seq",
"fr",
"dataset:fquad",
"dataset:piaf",
"arxiv:1910.10683",
"arxiv:2002.06071",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1910.10683",
"2002.06071"
] | [
"fr"
] | TAGS
#transformers #pytorch #jax #t5 #text2text-generation #question-generation #seq2seq #fr #dataset-fquad #dataset-piaf #arxiv-1910.10683 #arxiv-2002.06071 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| T5 Question Generation and Question Answering
=============================================
Model description
-----------------
This model is a T5 Transformers model (airklizz/t5-base-multi-fr-wiki-news) that was fine-tuned in French on three different tasks:
* question generation
* question answering
* answer extraction
It obtains quite good results on the FQuAD validation dataset.
Intended uses & limitations
---------------------------
This model works for the three tasks mentioned earlier and was not tested on other tasks.
Training data
-------------
The initial model used was URL. This model was fine-tuned on a dataset composed of FQuAD and PIAF for the three tasks mentioned previously.
The data were preprocessed like this
* question generation: "generate question: Barack Hussein Obama, né le 4 aout 1961, est un homme politique américain et avocat. Il a été élu en 2009 pour devenir le 44ème président des Etats-Unis d'Amérique."
* question answering: "question: Quand Barack Hussein Obama a-t-il été élu président des Etats-Unis d'Amérique? context: Barack Hussein Obama, né le 4 aout 1961, est un homme politique américain et avocat. Il a été élu en 2009 pour devenir le 44ème président des Etats-Unis d'Amérique."
* answer extraction: "extract\_answers: Barack Hussein Obama, né le 4 aout 1961, est un homme politique américain et avocat. Il a été élu en 2009 pour devenir le 44ème président des Etats-Unis d'Amérique."
The preprocessing we used was implemented in URL
Eval results
------------
#### On FQuAD validation set
#### Question Answering metrics
For these metrics, the performance of this question answering model (URL) on the original FQuAD questions and on T5-generated questions is compared.
Questions: Original FQuAD, Exact Match: 54.015, F1 Score: 77.466
Questions: Generated, Exact Match: 45.765, F1 Score: 67.306
### BibTeX entry and citation info
| [
"#### On FQuAD validation set",
"#### Question Answering metrics\n\n\nFor these metrics, the performance of this question answering model (URL on FQuAD original question and on T5 generated questions are compared.\n\n\nQuestions: Original FQuAD, Exact Match: 54.015, F1 Score: 77.466\nQuestions: Generated, Exact Match: 45.765, F1 Score: 67.306",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #question-generation #seq2seq #fr #dataset-fquad #dataset-piaf #arxiv-1910.10683 #arxiv-2002.06071 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"#### On FQuAD validation set",
"#### Question Answering metrics\n\n\nFor these metrics, the performance of this question answering model (URL on FQuAD original question and on T5 generated questions are compared.\n\n\nQuestions: Original FQuAD, Exact Match: 54.015, F1 Score: 77.466\nQuestions: Generated, Exact Match: 45.765, F1 Score: 67.306",
"### BibTeX entry and citation info"
] | [
85,
10,
78,
10
] | [
"TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #question-generation #seq2seq #fr #dataset-fquad #dataset-piaf #arxiv-1910.10683 #arxiv-2002.06071 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n#### On FQuAD validation set#### Question Answering metrics\n\n\nFor these metrics, the performance of this question answering model (URL on FQuAD original question and on T5 generated questions are compared.\n\n\nQuestions: Original FQuAD, Exact Match: 54.015, F1 Score: 77.466\nQuestions: Generated, Exact Match: 45.765, F1 Score: 67.306### BibTeX entry and citation info"
] |
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | JDS22/DialoGPT-medium-HarryPotterBot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
39
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-nli
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6210
- Accuracy: 0.085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 196 | 0.6210 | 0.085 |
| No log | 2.0 | 392 | 0.5421 | 0.0643 |
| 0.5048 | 3.0 | 588 | 0.5523 | 0.062 |
| 0.5048 | 4.0 | 784 | 0.5769 | 0.0533 |
| 0.5048 | 5.0 | 980 | 0.5959 | 0.052 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"tags": ["generated_from_trainer"], "datasets": ["klue"], "metrics": ["accuracy"], "model-index": [{"name": "bert-base-finetuned-nli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "klue", "type": "klue", "args": "nli"}, "metrics": [{"type": "accuracy", "value": 0.085, "name": "Accuracy"}]}]}]} | JIWON/bert-base-finetuned-nli | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #text-classification #generated_from_trainer #dataset-klue #model-index #autotrain_compatible #endpoints_compatible #region-us
| bert-base-finetuned-nli
=======================
This model is a fine-tuned version of klue/bert-base on the klue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6210
* Accuracy: 0.085
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #generated_from_trainer #dataset-klue #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
45,
101,
5,
44
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #generated_from_trainer #dataset-klue #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5### Training results### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
fill-mask | transformers |
# aristoBERTo
aristoBERTo is a transformer model for ancient Greek, a low-resource language. We initialized the pre-training with weights from [GreekBERT](https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1), a Greek version of BERT which was trained on a large corpus of modern Greek (~ 30 GB of texts). We continued the pre-training with an ancient Greek corpus of about 900 MB, which was scraped from the web and post-processed. Duplicate texts and editorial punctuation were removed.
Applied to the processing of ancient Greek, aristoBERTo outperforms xlm-roberta-base and mdeberta in most downstream tasks like the labeling of POS, MORPH, DEP and LEMMA.
aristoBERTo is provided by the [Diogenet project](https://diogenet.ucsd.edu) of the University of California, San Diego.
## Intended uses
This model was created for fine-tuning with spaCy and the ancient Greek Universal Dependency datasets as well as a NER corpus produced by the [Diogenet project](https://diogenet.ucsd.edu). As a fill-mask model, AristoBERTo can also be used in the restoration of damaged Greek papyri, inscriptions, and manuscripts.
It achieves the following results on the evaluation set:
- Loss: 1.6323
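In the fill-mask setting, a minimal usage sketch looks like this (the sentence is taken from the card's widget examples; the printing is an illustration):

```python
# Minimal fill-mask sketch; the sentence comes from the widget examples.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Jacobo/aristoBERTo")
for prediction in fill_mask("Πλάτων ὁ Περικτιόνης [MASK] γένος ἀνέφερεν εἰς Σόλωνα."):
    print(prediction["token_str"], prediction["score"])
```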
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 1.377 | 20.0 | 3414220 | 1.6314 |
### Framework versions
- Transformers 4.14.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"language": ["grc"], "widget": [{"text": "\u03a0\u03bb\u03ac\u03c4\u03c9\u03bd \u1f41 \u03a0\u03b5\u03c1\u03b9\u03ba\u03c4\u03b9\u03cc\u03bd\u03b7\u03c2 [MASK] \u03b3\u03ad\u03bd\u03bf\u03c2 \u1f00\u03bd\u03ad\u03c6\u03b5\u03c1\u03b5\u03bd \u03b5\u1f30\u03c2 \u03a3\u03cc\u03bb\u03c9\u03bd\u03b1."}, {"text": "\u1f41 \u039a\u03c1\u03b9\u03c4\u03af\u03b1\u03c2 \u1f00\u03c0\u03ad\u03b2\u03bb\u03b5\u03c8\u03b5 [MASK] \u03c4\u1f74\u03bd \u03b8\u03cd\u03c1\u03b1\u03bd."}, {"text": "\u03c0\u03c1\u1ff6\u03c4\u03bf\u03b9 \u03b4\u1f72 \u03ba\u03b1\u1f76 \u03bf\u1f50\u03bd\u03cc\u03bc\u03b1\u03c4\u03b1 \u1f31\u03c1\u1f70 \u1f14\u03b3\u03bd\u03c9\u03c3\u03b1\u03bd \u03ba\u03b1\u1f76 [MASK] \u1f31\u03c1\u03bf\u1f7a\u03c2 \u1f14\u03bb\u03b5\u03be\u03b1\u03bd."}], "model-index": [{"name": "aristoBERTo", "results": []}]} | Jacobo/aristoBERTo | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"grc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"grc"
] | TAGS
#transformers #pytorch #bert #fill-mask #grc #autotrain_compatible #endpoints_compatible #region-us
| aristoBERTo
===========
aristoBERTo is a transformer model for ancient Greek, a low-resource language. We initialized the pre-training with weights from GreekBERT, a Greek version of BERT which was trained on a large corpus of modern Greek (~ 30 GB of texts). We continued the pre-training with an ancient Greek corpus of about 900 MB, which was scraped from the web and post-processed. Duplicate texts and editorial punctuation were removed.
Applied to the processing of ancient Greek, aristoBERTo outperforms xlm-roberta-base and mdeberta in most downstream tasks like the labeling of POS, MORPH, DEP and LEMMA.
aristoBERTo is provided by the Diogenet project of the University of California, San Diego.
Intended uses
-------------
This model was created for fine-tuning with spaCy and the ancient Greek Universal Dependency datasets as well as a NER corpus produced by the Diogenet project. As a fill-mask model, AristoBERTo can also be used in the restoration of damaged Greek papyri, inscriptions, and manuscripts.
It achieves the following results on the evaluation set:
* Loss: 1.6323
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 20.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.14.0.dev0
* Pytorch 1.10.0+cu102
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.14.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #grc #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.14.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
31,
114,
5,
47
] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #grc #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20.0\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.14.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# axiothea
This is an experimental RoBERTa model trained on an ancient Greek corpus of about 900 MB, which was scraped from the web and post-processed. Duplicate texts and editorial punctuation were removed. The training dataset will soon be available in the Hugging Face datasets hub. Training a model for ancient Greek is challenging given that it is a low-resource language of which 50% of the textual record survives only in fragmentary texts. The model is provided by the Diogenet project at the University of California, San Diego.
It achieves the following results on the evaluation set:
- Loss: 3.3351
## Model description
More information needed
## Intended uses & limitations
More information needed
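In the absence of documented usage, a minimal fill-mask sketch (note the RoBERTa-style `<mask>` token; the sentence comes from this card's widget examples):

```python
# Minimal fill-mask sketch; axiothea uses the RoBERTa-style <mask> token.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Jacobo/axiothea")
print(fill_mask("Πλάτων ὁ Περικτιόνης <mask> γένος ἀνέφερεν εἰς Σόλωνα.")[0])
```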
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.7013 | 1.0 | 341422 | 4.8813 |
| 4.2866 | 2.0 | 682844 | 4.4422 |
| 4.0496 | 3.0 | 1024266 | 4.2132 |
| 3.8503 | 4.0 | 1365688 | 4.0246 |
| 3.6917 | 5.0 | 1707110 | 3.8756 |
| 3.4917 | 6.0 | 2048532 | 3.7381 |
| 3.3907 | 7.0 | 2389954 | 3.6107 |
| 3.2876 | 8.0 | 2731376 | 3.5044 |
| 3.1994 | 9.0 | 3072798 | 3.3980 |
| 3.0806 | 10.0 | 3414220 | 3.3095 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.14.0
- Tokenizers 0.10.3
| {"language": ["grc"], "tags": ["generated_from_trainer"], "widget": [{"text": "\u03a0\u03bb\u03ac\u03c4\u03c9\u03bd \u1f41 \u03a0\u03b5\u03c1\u03b9\u03ba\u03c4\u03b9\u03cc\u03bd\u03b7\u03c2 <mask> \u03b3\u03ad\u03bd\u03bf\u03c2 \u1f00\u03bd\u03ad\u03c6\u03b5\u03c1\u03b5\u03bd \u03b5\u1f30\u03c2 \u03a3\u03cc\u03bb\u03c9\u03bd\u03b1."}, {"text": "\u1f41 \u039a\u03c1\u03b9\u03c4\u03af\u03b1\u03c2 \u1f00\u03c0\u03ad\u03b2\u03bb\u03b5\u03c8\u03b5 <mask> \u03c4\u1f74\u03bd \u03b8\u03cd\u03c1\u03b1\u03bd."}, {"text": "\u1f6e \u03c6\u03af\u03bb\u03b5 \u039a\u03bb\u03b5\u03b9\u03bd\u03af\u03b1, \u03ba\u03b1\u03bb\u1ff6\u03c2 \u03bc\u1f72\u03bd <mask>."}], "model-index": [{"name": "dioBERTo", "results": []}]} | Jacobo/axiothea | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"grc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"grc"
] | TAGS
#transformers #pytorch #roberta #fill-mask #generated_from_trainer #grc #autotrain_compatible #endpoints_compatible #region-us
| axiothea
========
This is an experimental RoBERTa model trained on an ancient Greek corpus of about 900 MB, which was scraped from the web and post-processed. Duplicate texts and editorial punctuation were removed. The training dataset will soon be available in the Hugging Face datasets hub. Training a model for ancient Greek is challenging given that it is a low-resource language of which 50% of the textual record survives only in fragmentary texts. The model is provided by the Diogenet project at the University of California, San Diego.
It achieves the following results on the evaluation set:
* Loss: 3.3351
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10.0
### Training results
### Framework versions
* Transformers 4.13.0.dev0
* Pytorch 1.10.0+cu102
* Datasets 1.14.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #generated_from_trainer #grc #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] | [
37,
103,
5,
47
] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #generated_from_trainer #grc #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10.0### Training results### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0+cu102\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-csa-10-rev3
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5869
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 18.7934 | 25.0 | 200 | 3.5869 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-csa-10-rev3", "results": []}]} | Jainil30/wav2vec2-base-csa-10-rev3 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
| wav2vec2-base-csa-10-rev3
=========================
This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 3.5869
* Wer: 1.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] | [
47,
128,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2469
- Accuracy: 0.9165
## Model description
More information needed
## Intended uses & limitations
More information needed
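In lieu of further documentation, a minimal inference sketch (the example sentence is an assumption):

```python
# Hypothetical inference sketch for this fine-tuned emotion classifier.
from transformers import pipeline

classifier = pipeline("text-classification", model="JaviBJ/sagemaker-distilbert-emotion")
print(classifier("I am so happy today!"))
```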
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9351 | 1.0 | 500 | 0.2469 | 0.9165 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy"], "model-index": [{"name": "sagemaker-distilbert-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9165, "name": "Accuracy"}]}]}]} | JaviBJ/sagemaker-distilbert-emotion | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| sagemaker-distilbert-emotion
============================
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2469
* Accuracy: 0.9165
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 32
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.12.3
* Pytorch 1.9.1
* Datasets 1.15.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.1\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.1\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] | [
53,
128,
5,
40
] | [
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.1\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
multiple-choice | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-semeval2020-task4a-append-e2-b32-l5e5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5466
- Accuracy: 0.8890
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 344 | 0.3057 | 0.8630 |
| 0.4091 | 2.0 | 688 | 0.2964 | 0.8880 |
| 0.1322 | 3.0 | 1032 | 0.4465 | 0.8820 |
| 0.1322 | 4.0 | 1376 | 0.5466 | 0.8890 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"]} | JazibEijaz/bert-base-uncased-finetuned-semeval2020-task4a-append-e2-b32-l5e5 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #multiple-choice #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
| bert-base-uncased-finetuned-semeval2020-task4a-append-e2-b32-l5e5
=================================================================
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5466
* Accuracy: 0.8890
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 4e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 4
### Training results
### Framework versions
* Transformers 4.12.3
* Pytorch 1.9.1
* Datasets 1.12.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.1\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #multiple-choice #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.1\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] | [
40,
101,
5,
40
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #multiple-choice #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4### Training results### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.1\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
multiple-choice | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-semeval2020-task4a
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the ComVE dataset which was part of SemEval 2020 Task 4.
It achieves the following results on the test set:
- Loss: 0.2782
- Accuracy: 0.9040
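Since the card does not include a usage snippet, here is a minimal inference sketch (the checkpoint name is taken from this repo's id; the two sentences are illustrative, and whether the selected index corresponds to the sensible or the nonsensical statement depends on how the labels were encoded during fine-tuning):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

name = "JazibEijaz/bert-base-uncased-finetuned-semeval2020-task4a"  # from this repo's id
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMultipleChoice.from_pretrained(name)

choices = ["He put a turkey into the fridge.",
           "He put an elephant into the fridge."]
enc = tokenizer(choices, padding=True, return_tensors="pt")
# The multiple-choice head expects inputs of shape (batch, num_choices, seq_len).
batch = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    pred = model(**batch).logits.argmax(-1).item()
print(pred, choices[pred])
```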
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 344 | 0.2700 | 0.8940 |
| 0.349 | 2.0 | 688 | 0.2782 | 0.9040 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"]} | JazibEijaz/bert-base-uncased-finetuned-semeval2020-task4a | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #multiple-choice #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
| bert-base-uncased-finetuned-semeval2020-task4a
==============================================
This model is a fine-tuned version of bert-base-uncased on the ComVE dataset which was part of SemEval 2020 Task 4.
It achieves the following results on the test set:
* Loss: 0.2782
* Accuracy: 0.9040
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.12.3
* Pytorch 1.9.1
* Datasets 1.12.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.1\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #multiple-choice #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.1\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] | [
40,
101,
5,
40
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #multiple-choice #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2### Training results### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.1\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
multiple-choice | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-semeval2020-task4b-append-e3-b32-l4e5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5121
- Accuracy: 0.8700
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 344 | 0.3603 | 0.8550 |
| 0.3894 | 2.0 | 688 | 0.4011 | 0.8630 |
| 0.1088 | 3.0 | 1032 | 0.5121 | 0.8700 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"]} | JazibEijaz/bert-base-uncased-finetuned-semeval2020-task4b-append-e3-b32-l4e5 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #multiple-choice #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
| bert-base-uncased-finetuned-semeval2020-task4b-append-e3-b32-l4e5
=================================================================
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5121
* Accuracy: 0.8700
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 4e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.12.3
* Pytorch 1.9.1
* Datasets 1.12.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.1\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #multiple-choice #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.1\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] | [
40,
101,
5,
40
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #multiple-choice #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3### Training results### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.1\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
multiple-choice | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-semeval2020-task4b-base-e2-b32-l3e5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4114
- Accuracy: 0.8700
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 344 | 0.3773 | 0.8490 |
| 0.3812 | 2.0 | 688 | 0.4114 | 0.8700 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"]} | JazibEijaz/bert-base-uncased-finetuned-semeval2020-task4b-base-e2-b32-l3e5 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #multiple-choice #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
| bert-base-uncased-finetuned-semeval2020-task4b-base-e2-b32-l3e5
===============================================================
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4114
* Accuracy: 0.8700
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.12.3
* Pytorch 1.9.1
* Datasets 1.12.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.1\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #multiple-choice #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.1\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] | [
40,
101,
5,
40
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #multiple-choice #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2### Training results### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.1\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
multiple-choice | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-semeval2020-task4b
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the ComVE dataset which was part of SemEval 2020 Task 4.
It achieves the following results on the test set:
- Loss: 0.6760
- Accuracy: 0.8760
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5016 | 1.0 | 688 | 0.3502 | 0.8600 |
| 0.2528 | 2.0 | 1376 | 0.5769 | 0.8620 |
| 0.0598 | 3.0 | 2064 | 0.6720 | 0.8700 |
| 0.0197 | 4.0 | 2752 | 0.6760 | 0.8760 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"]} | JazibEijaz/bert-base-uncased-finetuned-semeval2020-task4b | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #multiple-choice #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
| bert-base-uncased-finetuned-semeval2020-task4b
==============================================
This model is a fine-tuned version of bert-base-uncased on the ComVE dataset which was part of SemEval 2020 Task 4.
It achieves the following results on the test set:
* Loss: 0.6760
* Accuracy: 0.8760
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 4
### Training results
### Framework versions
* Transformers 4.12.3
* Pytorch 1.9.1
* Datasets 1.12.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.1\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #multiple-choice #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.1\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] | [
40,
101,
5,
40
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #multiple-choice #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4### Training results### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.1\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
multiple-choice | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag-e1-b16-l5e5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5202
- Accuracy: 0.7997
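For context, each SWAG example pairs one context sentence with four candidate endings; a minimal encoding sketch (with invented example text, not actual SWAG data) looks like this:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

context = "A chef places a pan on the stove."  # invented example
endings = [
    "He pours oil into the pan.",
    "He rides a bicycle across the kitchen.",
    "The pan sings a song.",
    "He mails the pan to himself.",
]
# Repeat the context once per candidate ending; reshape to
# (1, num_choices, seq_len) before passing it to the multiple-choice model.
enc = tokenizer([context] * 4, endings, padding=True, return_tensors="pt")
print(enc["input_ids"].shape)
```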
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.701 | 1.0 | 4597 | 0.5202 | 0.7997 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["swag"], "metrics": ["accuracy"], "model-index": [{"name": "bert-base-uncased-finetuned-swag-e1-b16-l5e5", "results": []}]} | JazibEijaz/bert-base-uncased-finetuned-swag-e1-b16-l5e5 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"dataset:swag",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #multiple-choice #generated_from_trainer #dataset-swag #license-apache-2.0 #endpoints_compatible #region-us
| bert-base-uncased-finetuned-swag-e1-b16-l5e5
============================================
This model is a fine-tuned version of bert-base-uncased on the swag dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5202
* Accuracy: 0.7997
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.12.2
* Pytorch 1.9.1
* Datasets 1.12.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.2\n* Pytorch 1.9.1\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #multiple-choice #generated_from_trainer #dataset-swag #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.2\n* Pytorch 1.9.1\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] | [
46,
101,
5,
40
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #multiple-choice #generated_from_trainer #dataset-swag #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1### Training results### Framework versions\n\n\n* Transformers 4.12.2\n* Pytorch 1.9.1\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
token-classification | transformers |
# camembert-ner: model fine-tuned from camemBERT for NER task (including DATE tag).
## Introduction
[camembert-ner-with-dates] is an extension of the French camembert-ner model with an additional tag for dates.
The model was trained on an enriched version of the wikiner-fr dataset (~170,634 sentences).
On my test data (a mix of chat and email), this model got an f1 score of ~83% (by comparison, dateparser scored ~70%).
The dateparser library can still be used on the output of this model to convert the extracted text to a Python datetime object
(https://dateparser.readthedocs.io/en/latest/).
## How to use camembert-ner-with-dates with HuggingFace
##### Load camembert-ner-with-dates and its sub-word tokenizer :
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Jean-Baptiste/camembert-ner-with-dates")
model = AutoModelForTokenClassification.from_pretrained("Jean-Baptiste/camembert-ner-with-dates")
##### Process text sample (from wikipedia)
from transformers import pipeline
nlp = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
nlp("Apple est créée le 1er avril 1976 dans le garage de la maison d'enfance de Steve Jobs à Los Altos en Californie par Steve Jobs, Steve Wozniak et Ronald Wayne14, puis constituée sous forme de société le 3 janvier 1977 à l'origine sous le nom d'Apple Computer, mais pour ses 30 ans et pour refléter la diversification de ses produits, le mot « computer » est retiré le 9 janvier 2015.")
[{'entity_group': 'ORG',
'score': 0.9776379466056824,
'word': 'Apple',
'start': 0,
'end': 5},
{'entity_group': 'DATE',
'score': 0.9793774570737567,
'word': 'le 1er avril 1976 dans le',
'start': 15,
'end': 41},
{'entity_group': 'PER',
'score': 0.9958226680755615,
'word': 'Steve Jobs',
'start': 74,
'end': 85},
{'entity_group': 'LOC',
'score': 0.995087186495463,
'word': 'Los Altos',
'start': 87,
'end': 97},
{'entity_group': 'LOC',
'score': 0.9953305125236511,
'word': 'Californie',
'start': 100,
'end': 111},
{'entity_group': 'PER',
'score': 0.9961076378822327,
'word': 'Steve Jobs',
'start': 115,
'end': 126},
{'entity_group': 'PER',
'score': 0.9960325956344604,
'word': 'Steve Wozniak',
'start': 127,
'end': 141},
{'entity_group': 'PER',
'score': 0.9957776467005411,
'word': 'Ronald Wayne',
'start': 144,
'end': 157},
{'entity_group': 'DATE',
'score': 0.994030773639679,
 'word': 'le 3 janvier 1977 à ',
'start': 198,
'end': 218},
{'entity_group': 'ORG',
'score': 0.9720810294151306,
'word': "d'Apple Computer",
'start': 240,
'end': 257},
{'entity_group': 'DATE',
'score': 0.9924157659212748,
'word': '30 ans et',
'start': 272,
'end': 282},
{'entity_group': 'DATE',
'score': 0.9934852868318558,
'word': 'le 9 janvier 2015.',
'start': 363,
'end': 382}]
```
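As mentioned in the introduction, the extracted DATE spans can then be handed to dateparser. A minimal sketch (reusing the `nlp` pipeline defined above; note that spans may include surrounding words such as "dans le", so some light cleanup can be needed before parsing succeeds):

```python
import dateparser

text = "Apple est créée le 1er avril 1976 à Los Altos."
for ent in nlp(text):
    if ent["entity_group"] == "DATE":
        print(ent["word"], "->", dateparser.parse(ent["word"], languages=["fr"]))
```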
## Model performances (metric: seqeval)
Global
```
'precision': 0.928
'recall': 0.928
'f1': 0.928
```
By entity
```
Label LOC: (precision:0.929, recall:0.932, f1:0.931, support:9510)
Label PER: (precision:0.952, recall:0.965, f1:0.959, support:9399)
Label MISC: (precision:0.878, recall:0.844, f1:0.860, support:5364)
Label ORG: (precision:0.848, recall:0.883, f1:0.865, support:2299)
Label DATE: not relevant because of the method used to add the DATE tag to the wikiner dataset (estimated f1 ~90%)
```
| {"language": "fr", "license": "mit", "datasets": ["Jean-Baptiste/wikiner_fr"], "widget": [{"text": "Je m'appelle jean-baptiste et j'habite \u00e0 montr\u00e9al depuis fevr 2012"}]} | Jean-Baptiste/camembert-ner-with-dates | null | [
"transformers",
"pytorch",
"onnx",
"safetensors",
"camembert",
"token-classification",
"fr",
"dataset:Jean-Baptiste/wikiner_fr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fr"
] | TAGS
#transformers #pytorch #onnx #safetensors #camembert #token-classification #fr #dataset-Jean-Baptiste/wikiner_fr #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# camembert-ner: model fine-tuned from camemBERT for NER task (including DATE tag).
## Introduction
[camembert-ner-with-dates] is an extension of the French camembert-ner model with an additional tag for dates.
The model was trained on an enriched version of the wikiner-fr dataset (~170,634 sentences).
On my test data (a mix of chat and email), this model got an f1 score of ~83% (by comparison, dateparser scored ~70%).
The dateparser library can still be used on the output of this model to convert the extracted text to a Python datetime object
(URL
## How to use camembert-ner-with-dates with HuggingFace
##### Load camembert-ner-with-dates and its sub-word tokenizer :
## Model performances (metric: seqeval)
Global
By entity
| [
"# camembert-ner: model fine-tuned from camemBERT for NER task (including DATE tag).",
"## Introduction\n\n[camembert-ner-with-dates] is an extension of the French camembert-ner model with an additional tag for dates.\nThe model was trained on an enriched version of the wikiner-fr dataset (~170,634 sentences).\n\nOn my test data (a mix of chat and email), this model got an f1 score of ~83% (by comparison, dateparser scored ~70%).\nThe dateparser library can still be used on the output of this model to convert the extracted text to a Python datetime object \n(URL",
"## How to use camembert-ner-with-dates with HuggingFace",
"##### Load camembert-ner-with-dates and its sub-word tokenizer :",
"## Model performances (metric: seqeval)\n\nGlobal\n\n\nBy entity"
] | [
"TAGS\n#transformers #pytorch #onnx #safetensors #camembert #token-classification #fr #dataset-Jean-Baptiste/wikiner_fr #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# camembert-ner: model fine-tuned from camemBERT for NER task (including DATE tag).",
"## Introduction\n\n[camembert-ner-with-dates] is an extension of the French camembert-ner model with an additional tag for dates.\nThe model was trained on an enriched version of the wikiner-fr dataset (~170,634 sentences).\n\nOn my test data (a mix of chat and email), this model got an f1 score of ~83% (by comparison, dateparser scored ~70%).\nThe dateparser library can still be used on the output of this model to convert the extracted text to a Python datetime object \n(URL",
"## How to use camembert-ner-with-dates with HuggingFace",
"##### Load camembert-ner-with-dates and its sub-word tokenizer :",
"## Model performances (metric: seqeval)\n\nGlobal\n\n\nBy entity"
] | [
60,
26,
119,
18,
24,
15
] | [
"TAGS\n#transformers #pytorch #onnx #safetensors #camembert #token-classification #fr #dataset-Jean-Baptiste/wikiner_fr #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n# camembert-ner: model fine-tuned from camemBERT for NER task (including DATE tag).## Introduction\n\n[camembert-ner-with-dates] is an extension of the French camembert-ner model with an additional tag for dates.\nThe model was trained on an enriched version of the wikiner-fr dataset (~170,634 sentences).\n\nOn my test data (a mix of chat and email), this model got an f1 score of ~83% (by comparison, dateparser scored ~70%).\nThe dateparser library can still be used on the output of this model to convert the extracted text to a Python datetime object \n(URL## How to use camembert-ner-with-dates with HuggingFace##### Load camembert-ner-with-dates and its sub-word tokenizer :## Model performances (metric: seqeval)\n\nGlobal\n\n\nBy entity"
] |
token-classification | transformers |
# camembert-ner: model fine-tuned from camemBERT for NER task.
## Introduction
[camembert-ner] is a NER model that was fine-tuned from camemBERT on the wikiner-fr dataset.
The model was trained on the wikiner-fr dataset (~170,634 sentences).
The model was validated on email/chat data and outperformed other models on this type of data specifically.
In particular, the model seems to work better on entities that don't start with an upper-case letter.
## Training data
Training data was classified as follows:
Abbreviation|Description
-|-
O |Outside of a named entity
MISC |Miscellaneous entity
PER |Person's name
ORG |Organization
LOC |Location
## How to use camembert-ner with HuggingFace
##### Load camembert-ner and its sub-word tokenizer :
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Jean-Baptiste/camembert-ner")
model = AutoModelForTokenClassification.from_pretrained("Jean-Baptiste/camembert-ner")
##### Process text sample (from wikipedia)
from transformers import pipeline
nlp = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
nlp("Apple est créée le 1er avril 1976 dans le garage de la maison d'enfance de Steve Jobs à Los Altos en Californie par Steve Jobs, Steve Wozniak et Ronald Wayne14, puis constituée sous forme de société le 3 janvier 1977 à l'origine sous le nom d'Apple Computer, mais pour ses 30 ans et pour refléter la diversification de ses produits, le mot « computer » est retiré le 9 janvier 2015.")
[{'entity_group': 'ORG',
'score': 0.9472818374633789,
'word': 'Apple',
'start': 0,
'end': 5},
{'entity_group': 'PER',
'score': 0.9838564991950989,
'word': 'Steve Jobs',
'start': 74,
'end': 85},
{'entity_group': 'LOC',
'score': 0.9831605950991312,
'word': 'Los Altos',
'start': 87,
'end': 97},
{'entity_group': 'LOC',
'score': 0.9834540486335754,
'word': 'Californie',
'start': 100,
'end': 111},
{'entity_group': 'PER',
'score': 0.9841555754343668,
'word': 'Steve Jobs',
'start': 115,
'end': 126},
{'entity_group': 'PER',
'score': 0.9843501806259155,
'word': 'Steve Wozniak',
'start': 127,
'end': 141},
{'entity_group': 'PER',
'score': 0.9841533899307251,
'word': 'Ronald Wayne',
'start': 144,
'end': 157},
{'entity_group': 'ORG',
'score': 0.9468960364659628,
'word': 'Apple Computer',
'start': 243,
'end': 257}]
```
## Model performances (metric: seqeval)
Overall
precision|recall|f1
-|-|-
0.8859|0.8971|0.8914
By entity
entity|precision|recall|f1
-|-|-|-
PER|0.9372|0.9598|0.9483
ORG|0.8099|0.8265|0.8181
LOC|0.8905|0.9005|0.8955
MISC|0.8175|0.8117|0.8146
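For reference, the numbers above come from seqeval; a minimal sketch of the computation (with illustrative IOB tags, not the actual evaluation data) is:

```python
from seqeval.metrics import classification_report

y_true = [["B-PER", "I-PER", "O", "B-LOC", "O"]]
y_pred = [["B-PER", "I-PER", "O", "B-ORG", "O"]]
print(classification_report(y_true, y_pred))
```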
For those who could be interested, here is a short article on how I used the results of this model to train an LSTM model for signature detection in emails:
https://medium.com/@jean-baptiste.polle/lstm-model-for-email-signature-detection-8e990384fefa
| {"language": "fr", "license": "mit", "datasets": ["Jean-Baptiste/wikiner_fr"], "widget": [{"text": "Je m'appelle jean-baptiste et je vis \u00e0 montr\u00e9al"}, {"text": "george washington est all\u00e9 \u00e0 washington"}]} | Jean-Baptiste/camembert-ner | null | [
"transformers",
"pytorch",
"onnx",
"safetensors",
"camembert",
"token-classification",
"fr",
"dataset:Jean-Baptiste/wikiner_fr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fr"
] | TAGS
#transformers #pytorch #onnx #safetensors #camembert #token-classification #fr #dataset-Jean-Baptiste/wikiner_fr #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
| camembert-ner: model fine-tuned from camemBERT for NER task.
============================================================
Introduction
------------
[camembert-ner] is a NER model that was fine-tuned from camemBERT on the wikiner-fr dataset.
The model was trained on the wikiner-fr dataset (~170,634 sentences).
The model was validated on email/chat data and outperformed other models on this type of data specifically.
In particular, the model seems to work better on entities that don't start with an upper-case letter.
Training data
-------------
Training data was classified as follows:
How to use camembert-ner with HuggingFace
-----------------------------------------
##### Load camembert-ner and its sub-word tokenizer :
Model performances (metric: seqeval)
------------------------------------
Overall
precision: 0.8859, recall: 0.8971, f1: 0.8914
By entity
For those who could be interested, here is a short article on how I used the results of this model to train an LSTM model for signature detection in emails:
URL
| [
"##### Load camembert-ner and its sub-word tokenizer :\n\n\nModel performances (metric: seqeval)\n------------------------------------\n\n\nOverall\n\n\nprecision: 0.8859, recall: 0.8971, f1: 0.8914\n\n\nBy entity\n\n\n\nFor those who could be interested, here is a short article on how I used the results of this model to train an LSTM model for signature detection in emails:\nURL"
] | [
"TAGS\n#transformers #pytorch #onnx #safetensors #camembert #token-classification #fr #dataset-Jean-Baptiste/wikiner_fr #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"##### Load camembert-ner and its sub-word tokenizer :\n\n\nModel performances (metric: seqeval)\n------------------------------------\n\n\nOverall\n\n\nprecision: 0.8859, recall: 0.8971, f1: 0.8914\n\n\nBy entity\n\n\n\nFor those who could be interested, here is a short article on how I used the results of this model to train an LSTM model for signature detection in emails:\nURL"
] | [
60,
126
] | [
"TAGS\n#transformers #pytorch #onnx #safetensors #camembert #token-classification #fr #dataset-Jean-Baptiste/wikiner_fr #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n##### Load camembert-ner and its sub-word tokenizer :\n\n\nModel performances (metric: seqeval)\n------------------------------------\n\n\nOverall\n\n\nprecision: 0.8859, recall: 0.8971, f1: 0.8914\n\n\nBy entity\n\n\n\nFor those who could be interested, here is a short article on how I used the results of this model to train an LSTM model for signature detection in emails:\nURL"
] |
token-classification | transformers |
# roberta-large-ner-english: model fine-tuned from roberta-large for NER task
## Introduction
[roberta-large-ner-english] is an English NER model that was fine-tuned from roberta-large on the conll2003 dataset.
The model was validated on email/chat data and outperformed other models on this type of data specifically.
In particular, the model seems to work better on entities that don't start with an upper-case letter.
## Training data
Training data was classified as follows:
Abbreviation|Description
-|-
O |Outside of a named entity
MISC |Miscellaneous entity
PER |Person's name
ORG |Organization
LOC |Location
In order to simplify, the B- or I- prefix from the original conll2003 was removed.
I used the train and test datasets from the original conll2003 for training and the "validation" dataset for validation. This resulted in a dataset of size:
Train | Validation
-|-
17494 | 3250
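A minimal sketch of the label simplification described above (an assumption about the exact preprocessing, which the card does not spell out):

```python
# Collapse conll2003's IOB scheme into plain entity tags.
def strip_iob(tag: str) -> str:
    return tag.split("-", 1)[1] if "-" in tag else tag

print([strip_iob(t) for t in ["B-PER", "I-PER", "O", "B-LOC"]])
# -> ['PER', 'PER', 'O', 'LOC']
```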
## How to use roberta-large-ner-english with HuggingFace
##### Load roberta-large-ner-english and its sub-word tokenizer :
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Jean-Baptiste/roberta-large-ner-english")
model = AutoModelForTokenClassification.from_pretrained("Jean-Baptiste/roberta-large-ner-english")
##### Process text sample (from wikipedia)
from transformers import pipeline
nlp = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
nlp("Apple was founded in 1976 by Steve Jobs, Steve Wozniak and Ronald Wayne to develop and sell Wozniak's Apple I personal computer")
[{'entity_group': 'ORG',
'score': 0.99381506,
'word': ' Apple',
'start': 0,
'end': 5},
{'entity_group': 'PER',
'score': 0.99970853,
'word': ' Steve Jobs',
'start': 29,
'end': 39},
{'entity_group': 'PER',
'score': 0.99981767,
'word': ' Steve Wozniak',
'start': 41,
'end': 54},
{'entity_group': 'PER',
'score': 0.99956465,
'word': ' Ronald Wayne',
'start': 59,
'end': 71},
{'entity_group': 'PER',
'score': 0.9997918,
'word': ' Wozniak',
'start': 92,
'end': 99},
{'entity_group': 'MISC',
'score': 0.99956393,
'word': ' Apple I',
'start': 102,
'end': 109}]
```
## Model performances
Model performances computed on the conll2003 validation dataset (computed on the token predictions)
entity|precision|recall|f1
-|-|-|-
PER|0.9914|0.9927|0.9920
ORG|0.9627|0.9661|0.9644
LOC|0.9795|0.9862|0.9828
MISC|0.9292|0.9262|0.9277
Overall|0.9740|0.9766|0.9753
On a private dataset (email, chat, informal discussion), computed on word predictions:
entity|precision|recall|f1
-|-|-|-
PER|0.8823|0.9116|0.8967
ORG|0.7694|0.7292|0.7487
LOC|0.8619|0.7768|0.8171
By comparison, on the same private dataset, Spacy (en_core_web_trf-3.2.0) gave:
entity|precision|recall|f1
-|-|-|-
PER|0.9146|0.8287|0.8695
ORG|0.7655|0.6437|0.6993
LOC|0.8727|0.6180|0.7236
For those who could be interested, here is a short article on how I used the results of this model to train an LSTM model for signature detection in emails:
https://medium.com/@jean-baptiste.polle/lstm-model-for-email-signature-detection-8e990384fefa
| {"language": "en", "license": "mit", "datasets": ["conll2003"], "widget": [{"text": "My name is jean-baptiste and I live in montreal"}, {"text": "My name is clara and I live in berkeley, california."}, {"text": "My name is wolfgang and I live in berlin"}], "train-eval-index": [{"config": "conll2003", "task": "token-classification", "task_id": "entity_extraction", "splits": {"eval_split": "validation"}, "col_mapping": {"tokens": "tokens", "ner_tags": "tags"}}]} | Jean-Baptiste/roberta-large-ner-english | null | [
"transformers",
"pytorch",
"tf",
"onnx",
"safetensors",
"roberta",
"token-classification",
"en",
"dataset:conll2003",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #tf #onnx #safetensors #roberta #token-classification #en #dataset-conll2003 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
| roberta-large-ner-english: model fine-tuned from roberta-large for NER task
===========================================================================
Introduction
------------
[roberta-large-ner-english] is an English NER model that was fine-tuned from roberta-large on the conll2003 dataset.
The model was validated on email/chat data and outperformed other models on this type of data specifically.
In particular, the model seems to work better on entities that don't start with an upper-case letter.
Training data
-------------
Training data was classified as follows:
In order to simplify, the B- or I- prefix from the original conll2003 was removed.
I used the train and test datasets from the original conll2003 for training and the "validation" dataset for validation. This resulted in a dataset of size:
How to use roberta-large-ner-english with HuggingFace
-----------------------------------------------------
##### Load roberta-large-ner-english and its sub-word tokenizer :
Model performances
------------------
Model performances computed on the conll2003 validation dataset (computed on the token predictions)
On a private dataset (email, chat, informal discussion), computed on word predictions:
By comparison, on the same private dataset, Spacy (en\_core\_web\_trf-3.2.0) gave:
For those who could be interested, here is a short article on how I used the results of this model to train an LSTM model for signature detection in emails:
URL
| [
"##### Load roberta-large-ner-english and its sub-word tokenizer :\n\n\nModel performances\n------------------\n\n\nModel performances computed on the conll2003 validation dataset (computed on the token predictions)\n\n\n\nOn a private dataset (email, chat, informal discussion), computed on word predictions:\n\n\n\nBy comparison, on the same private dataset, Spacy (en\\_core\\_web\\_trf-3.2.0) gave:\n\n\n\nFor those who could be interested, here is a short article on how I used the results of this model to train an LSTM model for signature detection in emails:\nURL"
] | [
"TAGS\n#transformers #pytorch #tf #onnx #safetensors #roberta #token-classification #en #dataset-conll2003 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"##### Load roberta-large-ner-english and its sub-word tokenizer :\n\n\nModel performances\n------------------\n\n\nModel performances computed on the conll2003 validation dataset (computed on the token predictions)\n\n\n\nOn a private dataset (email, chat, informal discussion), computed on word predictions:\n\n\n\nBy comparison, on the same private dataset, Spacy (en\\_core\\_web\\_trf-3.2.0) gave:\n\n\n\nFor those who could be interested, here is a short article on how I used the results of this model to train an LSTM model for signature detection in emails:\nURL"
] | [
56,
148
] | [
"TAGS\n#transformers #pytorch #tf #onnx #safetensors #roberta #token-classification #en #dataset-conll2003 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n##### Load roberta-large-ner-english and its sub-word tokenizer :\n\n\nModel performances\n------------------\n\n\nModel performances computed on the conll2003 validation dataset (computed on the token predictions)\n\n\n\nOn a private dataset (email, chat, informal discussion), computed on word predictions:\n\n\n\nBy comparison, on the same private dataset, Spacy (en\\_core\\_web\\_trf-3.2.0) gave:\n\n\n\nFor those who could be interested, here is a short article on how I used the results of this model to train an LSTM model for signature detection in emails:\nURL"
] |
token-classification | transformers |
# roberta-ticker: model was fine-tuned from Roberta to detect financial tickers
## Introduction
This is a model specifically designed to identify tickers in text.
The model was trained on a transformed version of the following Kaggle dataset:
https://www.kaggle.com/omermetinn/tweets-about-the-top-companies-from-2015-to-2020
## How to use roberta-ticker with HuggingFace
##### Load roberta-ticker and its sub-word tokenizer :
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Jean-Baptiste/roberta-ticker")
model = AutoModelForTokenClassification.from_pretrained("Jean-Baptiste/roberta-ticker")
##### Process text sample
from transformers import pipeline
nlp = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
nlp("I am going to buy 100 shares of cake tomorrow")
[{'entity_group': 'TICKER',
'score': 0.9612462520599365,
'word': ' cake',
'start': 32,
'end': 36}]
nlp("I am going to eat a cake tomorrow")
[]
```
## Model performances
```
precision: 0.914157
recall: 0.788824
f1: 0.846878
```
| {"language": "en", "widget": [{"text": "I am going to buy 100 shares of cake tomorrow"}]} | Jean-Baptiste/roberta-ticker | null | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"token-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #safetensors #roberta #token-classification #en #autotrain_compatible #endpoints_compatible #region-us
|
# roberta-ticker: model was fine-tuned from Roberta to detect financial tickers
## Introduction
This is a model specifically designed to identify tickers in text.
The model was trained on a transformed version of the following Kaggle dataset:
URL
## How to use roberta-ticker with HuggingFace
##### Load roberta-ticker and its sub-word tokenizer :
## Model performances
| [
"# roberta-ticker: model was fine-tuned from Roberta to detect financial tickers",
"## Introduction\n\nThis is a model specifically designed to identify tickers in text.\nThe model was trained on a transformed version of the following Kaggle dataset:\nURL",
"## How to use roberta-ticker with HuggingFace",
"##### Load roberta-ticker and its sub-word tokenizer :",
"## Model performances"
] | [
"TAGS\n#transformers #pytorch #safetensors #roberta #token-classification #en #autotrain_compatible #endpoints_compatible #region-us \n",
"# roberta-ticker: model was fine-tuned from Roberta to detect financial tickers",
"## Introduction\n\nThis is a model specifically designed to identify tickers in text.\nThe model was trained on a transformed version of the following Kaggle dataset:\nURL",
"## How to use roberta-ticker with HuggingFace",
"##### Load roberta-ticker and its sub-word tokenizer :",
"## Model performances"
] | [
34,
18,
32,
12,
18,
4
] | [
"TAGS\n#transformers #pytorch #safetensors #roberta #token-classification #en #autotrain_compatible #endpoints_compatible #region-us \n# roberta-ticker: model was fine-tuned from Roberta to detect financial tickers## Introduction\n\nThis is a model specifically designed to identify tickers in text.\nThe model was trained on a transformed version of the following Kaggle dataset:\nURL## How to use roberta-ticker with HuggingFace##### Load roberta-ticker and its sub-word tokenizer :## Model performances"
] |
text-generation | transformers | # Tony Stark | {"tags": ["conversational"]} | Jedi33/tonystarkAI | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Tony Stark | [
"# Tony Stark"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Tony Stark"
] | [
39,
3
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Tony Stark"
] |
null | null | First 50 [Feather BERT-s](https://arxiv.org/abs/1911.02969) compressed in groups of 10.
Clone this repository, decompress the compressed folders, and provide the paths to the Feather BERT you want to use in ``.from_pretrained()``.
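For example (a sketch; the local path is a placeholder for wherever you decompressed a checkpoint, and the tokenizer is assumed to be the standard bert-base-uncased one):

```python
from transformers import AutoModel, AutoTokenizer

path = "./feather_berts/feather_bert_00"  # hypothetical extracted folder
model = AutoModel.from_pretrained(path)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
```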
For downloading next 50 Feather BERT-s, see [here](https://huggingface.co/Jeevesh8/feather_berts1/). | {} | Jeevesh8/feather_berts | null | [
"arxiv:1911.02969",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1911.02969"
] | [] | TAGS
#arxiv-1911.02969 #region-us
| First 50 Feather BERT-s compressed in groups of 10.
Clone this repository, decompress the compressed folders, and provide the paths to the Feather BERT you want to use in ''.from_pretrained()''.
For downloading next 50 Feather BERT-s, see here. | [] | [
"TAGS\n#arxiv-1911.02969 #region-us \n"
] | [
16
] | [
"TAGS\n#arxiv-1911.02969 #region-us \n"
] |
null | null | Second 50 [Feather BERT-s](https://arxiv.org/abs/1911.02969) compressed in groups of 10.
Clone this repository, decompress the compressed folders, and provide the paths to the Feather BERT you want to use in ``.from_pretrained()``.
For downloading first 50 Feather BERT-s, see [here](https://huggingface.co/Jeevesh8/feather_berts/). | {} | Jeevesh8/feather_berts1 | null | [
"arxiv:1911.02969",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1911.02969"
] | [] | TAGS
#arxiv-1911.02969 #region-us
| Second 50 Feather BERT-s compressed in groups of 10.
Clone this repository, decompress the compressed folders, and provide the paths to the Feather BERT you want to use in ''.from_pretrained()''.
For downloading first 50 Feather BERT-s, see here. | [] | [
"TAGS\n#arxiv-1911.02969 #region-us \n"
] | [
16
] | [
"TAGS\n#arxiv-1911.02969 #region-us \n"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertjeWDialData
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2608
## Model description
More information needed
## Intended uses & limitations
More information needed
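In the absence of documented usage, a minimal fill-mask sketch (assuming the standard `transformers` pipeline API and this repo's checkpoint id; the Dutch sentence is illustrative):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Jeska/BertjeWDialData")
print(fill("Ik heb een vraag over mijn [MASK]."))
```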
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 297 | 2.2419 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "BertjeWDialData", "results": []}]} | Jeska/BertjeWDialData | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| BertjeWDialData
===============
This model is a fine-tuned version of GroNLP/bert-base-dutch-cased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 2.2608
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1.0
### Training results
### Framework versions
* Transformers 4.13.0.dev0
* Pytorch 1.10.0+cu111
* Datasets 1.15.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] | [
37,
126,
5,
47
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1.0### Training results### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertjeWDialDataALL
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9469
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.1739 | 1.0 | 1542 | 2.0150 |
| 2.0759 | 2.0 | 3084 | 1.9918 |
| 2.0453 | 3.0 | 4626 | 2.0132 |
| 1.9936 | 4.0 | 6168 | 1.9341 |
| 1.9659 | 5.0 | 7710 | 1.9140 |
| 1.9545 | 6.0 | 9252 | 1.9418 |
| 1.9104 | 7.0 | 10794 | 1.9179 |
| 1.8991 | 8.0 | 12336 | 1.9157 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
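
The card records hyperparameters and losses but not the training wiring. For masked-language-model fine-tuning like this, the setup typically looks roughly like the sketch below; the data files, column name, masking rate, and output directory are assumptions, not facts from the card — only the base model, the listed hyperparameters, and the 8-epoch MLM objective come from it.

```python
# Rough sketch of an MLM fine-tuning setup like the one behind this card.
# Data files, the 15% masking rate, and the output directory are assumptions;
# the base model and the listed hyperparameters come from the card.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased")
model = AutoModelForMaskedLM.from_pretrained("GroNLP/bert-base-dutch-cased")

# Placeholder corpus files: one utterance per line.
raw = load_dataset(
    "text",
    data_files={"train": "dialogue_train.txt", "validation": "dialogue_val.txt"},
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

# Dynamic masking; 15% is the conventional BERT rate (assumed here).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="./bertje-wdialdata-all",   # placeholder
        learning_rate=2e-5,
        per_device_train_batch_size=16,
        gradient_accumulation_steps=4,
        num_train_epochs=8.0,
    ),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=collator,
)
trainer.train()
```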
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "BertjeWDialDataALL", "results": []}]} | Jeska/BertjeWDialDataALL | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| BertjeWDialDataALL
==================
This model is a fine-tuned version of GroNLP/bert-base-dutch-cased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.9469
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 8.0
### Training results
### Framework versions
* Transformers 4.13.0.dev0
* Pytorch 1.10.0
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
37,
126,
5,
43
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8.0### Training results### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |