pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1–900k) | metadata (stringlengths 2–438k) | id (stringlengths 5–122) | last_modified (null) | tags (sequencelengths 1–1.84k) | sha (null) | created_at (stringlengths 25–25) | arxiv (sequencelengths 0–201) | languages (sequencelengths 0–1.83k) | tags_str (stringlengths 17–9.34k) | text_str (stringlengths 0–389k) | text_lists (sequencelengths 0–722) | processed_texts (sequencelengths 1–723) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text-classification | transformers | # DeBERTa-v3-small-mnli-fever-docnli-ling-2c
## Model description
This model was trained on 1,279,665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation).
It is the only model in the model hub trained on 8 NLI datasets, including DocNLI with very long texts for learning long-range reasoning. Note that the model was trained on binary NLI to predict either "entailment" or "not-entailment". DocNLI merges the classes "neutral" and "contradiction" into "not-entailment" to create more training data.
The base model is [DeBERTa-v3-small from Microsoft](https://huggingface.co/microsoft/deberta-v3-small). The v3 variant of DeBERTa substantially outperforms previous versions of the model by including a different pre-training objective, see annex 11 of the original [DeBERTa paper](https://arxiv.org/pdf/2006.03654.pdf) as well as the [DeBERTa-V3 paper](https://arxiv.org/abs/2111.09543).
## Intended uses & limitations
#### How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name = "MoritzLaurer/DeBERTa-v3-small-mnli-fever-docnli-ling-2c"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)

premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."

inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(inputs["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "not_entailment"]  # binary model: "neutral" and "contradiction" are merged into "not-entailment"
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
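Because the model was trained on DocNLI, the premise can be a much longer document than in the example above. The snippet below is a minimal sketch of handling a long premise; the `truncation="only_first"` strategy and the `max_length=512` cap are assumptions for illustration, not settings documented in this card.
```python
# Sketch with assumed settings: truncate only the (long) premise, keep the hypothesis intact.
long_premise = " ".join([premise] * 50)   # stand-in for a multi-paragraph document
inputs = tokenizer(
    long_premise,
    hypothesis,
    truncation="only_first",   # cut the first sequence (the premise) if it is too long
    max_length=512,            # assumed cap; adjust to your memory budget
    return_tensors="pt",
)
output = model(inputs["input_ids"].to(device))
print(torch.softmax(output["logits"][0], -1).tolist())
```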
### Training data
This model was trained on 1,279,665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation).
### Training procedure
DeBERTa-v3-small-mnli-fever-docnli-ling-2c was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
num_train_epochs=3, # total number of training epochs
learning_rate=2e-05,
per_device_train_batch_size=32, # batch size per device during training
per_device_eval_batch_size=32, # batch size for evaluation
warmup_ratio=0.1, # fraction of training steps used for learning rate warmup
weight_decay=0.06, # strength of weight decay
fp16=True # mixed precision training
)
```
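The block above only lists hyperparameters. Below is a hypothetical end-to-end sketch of how they could be passed to the Hugging Face `Trainer`; it uses MultiNLI alone (collapsed to binary labels) as a stand-in for the full 8-dataset mix, and the output directory, `max_length`, and preprocessing details are assumptions rather than settings from this card.
```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments

base_model = "microsoft/deberta-v3-small"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)

# MultiNLI stands in for the full training mix; labels are collapsed to entailment vs. not-entailment.
dataset = load_dataset("multi_nli")

def preprocess(batch):
    enc = tokenizer(batch["premise"], batch["hypothesis"], truncation=True, max_length=512)
    enc["label"] = [0 if label == 0 else 1 for label in batch["label"]]
    return enc

encoded = dataset.map(preprocess, batched=True)

training_args = TrainingArguments(
    output_dir="./results",            # assumed; not specified in the card
    num_train_epochs=3,
    learning_rate=2e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    warmup_ratio=0.1,
    weight_decay=0.06,
    fp16=True,                         # requires a GPU with mixed-precision support
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation_matched"],
    tokenizer=tokenizer,
)
trainer.train()
```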
### Eval results
The model was evaluated using the binary test sets for MultiNLI and ANLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.
mnli-m-2c | mnli-mm-2c | fever-nli-2c | anli-all-2c | anli-r3-2c
---------|----------|---------|----------|----------
0.927 | 0.921 | 0.892 | 0.684 | 0.673
## Limitations and bias
Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases.
### BibTeX entry and citation info
If you want to cite this model, please cite the original DeBERTa paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.
### Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)
### Debugging and issues
Note that DeBERTa-v3 was released recently and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 might solve some issues. | {"language": ["en"], "tags": ["text-classification", "zero-shot-classification"], "metrics": ["accuracy"], "widget": [{"text": "I first thought that I liked the movie, but upon second thought the movie was actually disappointing. [SEP] The movie was good."}]} | MoritzLaurer/DeBERTa-v3-small-mnli-fever-docnli-ling-2c | null | [
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"text-classification",
"zero-shot-classification",
"en",
"arxiv:2104.07179",
"arxiv:2106.09449",
"arxiv:2006.03654",
"arxiv:2111.09543",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.07179",
"2106.09449",
"2006.03654",
"2111.09543"
] | [
"en"
] | TAGS
#transformers #pytorch #safetensors #deberta-v2 #text-classification #zero-shot-classification #en #arxiv-2104.07179 #arxiv-2106.09449 #arxiv-2006.03654 #arxiv-2111.09543 #autotrain_compatible #endpoints_compatible #region-us
| DeBERTa-v3-small-mnli-fever-docnli-ling-2c
==========================================
Model description
-----------------
This model was trained on 1,279,665 hypothesis-premise pairs from 8 NLI datasets: MultiNLI, Fever-NLI, LingNLI and DocNLI (which includes ANLI, QNLI, DUC, CNN/DailyMail, Curation).
It is the only model in the model hub trained on 8 NLI datasets, including DocNLI with very long texts for learning long-range reasoning. Note that the model was trained on binary NLI to predict either "entailment" or "not-entailment". DocNLI merges the classes "neutral" and "contradiction" into "not-entailment" to create more training data.
The base model is DeBERTa-v3-small from Microsoft. The v3 variant of DeBERTa substantially outperforms previous versions of the model by including a different pre-training objective, see annex 11 of the original DeBERTa paper as well as the DeBERTa-V3 paper.
Intended uses & limitations
---------------------------
#### How to use the model
### Training data
This model was trained on 1,279,665 hypothesis-premise pairs from 8 NLI datasets: MultiNLI, Fever-NLI, LingNLI and DocNLI (which includes ANLI, QNLI, DUC, CNN/DailyMail, Curation).
### Training procedure
DeBERTa-v3-small-mnli-fever-docnli-ling-2c was trained using the Hugging Face trainer with the following hyperparameters.
### Eval results
The model was evaluated using the binary test sets for MultiNLI and ANLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.
Limitations and bias
--------------------
Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases.
### BibTeX entry and citation info
If you want to cite this model, please cite the original DeBERTa paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.
### Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or LinkedIn
### Debugging and issues
Note that DeBERTa-v3 was released recently and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 might solve some issues.
| [
"#### How to use the model",
"### Training data\n\n\nThis model was trained on 1.279.665 hypothesis-premise pairs from 8 NLI datasets: MultiNLI, Fever-NLI, LingNLI and DocNLI (which includes ANLI, QNLI, DUC, CNN/DailyMail, Curation).",
"### Training procedure\n\n\nDeBERTa-v3-small-mnli-fever-docnli-ling-2c was trained using the Hugging Face trainer with the following hyperparameters.",
"### Eval results\n\n\nThe model was evaluated using the binary test sets for MultiNLI and ANLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.\n\n\n\nLimitations and bias\n--------------------\n\n\nPlease consult the original DeBERTa paper and literature on different NLI datasets for potential biases.",
"### BibTeX entry and citation info\n\n\nIf you want to cite this model, please cite the original DeBERTa paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.",
"### Ideas for cooperation or questions?\n\n\nIf you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or LinkedIn",
"### Debugging and issues\n\n\nNote that DeBERTa-v3 was released recently and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 might solve some issues."
] | [
"TAGS\n#transformers #pytorch #safetensors #deberta-v2 #text-classification #zero-shot-classification #en #arxiv-2104.07179 #arxiv-2106.09449 #arxiv-2006.03654 #arxiv-2111.09543 #autotrain_compatible #endpoints_compatible #region-us \n",
"#### How to use the model",
"### Training data\n\n\nThis model was trained on 1.279.665 hypothesis-premise pairs from 8 NLI datasets: MultiNLI, Fever-NLI, LingNLI and DocNLI (which includes ANLI, QNLI, DUC, CNN/DailyMail, Curation).",
"### Training procedure\n\n\nDeBERTa-v3-small-mnli-fever-docnli-ling-2c was trained using the Hugging Face trainer with the following hyperparameters.",
"### Eval results\n\n\nThe model was evaluated using the binary test sets for MultiNLI and ANLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.\n\n\n\nLimitations and bias\n--------------------\n\n\nPlease consult the original DeBERTa paper and literature on different NLI datasets for potential biases.",
"### BibTeX entry and citation info\n\n\nIf you want to cite this model, please cite the original DeBERTa paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.",
"### Ideas for cooperation or questions?\n\n\nIf you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or LinkedIn",
"### Debugging and issues\n\n\nNote that DeBERTa-v3 was released recently and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 might solve some issues."
] |
zero-shot-classification | transformers | # DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary
## Model description
This model was trained on 782 357 hypothesis-premise pairs from 4 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [ANLI](https://github.com/facebookresearch/anli).
Note that the model was trained on binary NLI to predict either "entailment" or "not-entailment". This is specifically designed for zero-shot classification, where the difference between "neutral" and "contradiction" is irrelevant.
The base model is [DeBERTa-v3-xsmall from Microsoft](https://huggingface.co/microsoft/deberta-v3-xsmall). The v3 variant of DeBERTa substantially outperforms previous versions of the model by including a different pre-training objective, see the [DeBERTa-V3 paper](https://arxiv.org/abs/2111.09543).
For highest performance (but less speed), I recommend using https://huggingface.co/MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli.
## Intended uses & limitations
#### How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "MoritzLaurer/DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)
premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."
inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(inputs["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "not_entailment"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
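Since the model is designed for zero-shot classification, it can also be used through the standard zero-shot pipeline. The sketch below shows the pipeline API; the example text and candidate labels are purely illustrative.
```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="MoritzLaurer/DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary",
)
text = "The company reported a sharp drop in quarterly revenue."
candidate_labels = ["economy", "politics", "entertainment", "environment"]
print(classifier(text, candidate_labels, multi_label=False))
```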
### Training data
This model was trained on 782 357 hypothesis-premise pairs from 4 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [ANLI](https://github.com/facebookresearch/anli).
### Training procedure
DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
num_train_epochs=5, # total number of training epochs
learning_rate=2e-05,
per_device_train_batch_size=32, # batch size per device during training
per_device_eval_batch_size=32, # batch size for evaluation
warmup_ratio=0.1, # fraction of training steps used for learning rate warmup
weight_decay=0.06, # strength of weight decay
fp16=True # mixed precision training
)
```
### Eval results
The model was evaluated using the binary test sets for MultiNLI, ANLI, LingNLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.
dataset | mnli-m-2c | mnli-mm-2c | fever-nli-2c | anli-all-2c | anli-r3-2c | lingnli-2c
--------|---------|----------|---------|----------|----------|------
accuracy | 0.925 | 0.922 | 0.892 | 0.676 | 0.665 | 0.888
speed (text/sec, CPU, 128 batch) | 6.0 | 6.3 | 3.0 | 5.8 | 5.0 | 7.6
speed (text/sec, GPU Tesla P100, 128 batch) | 473 | 487 | 230 | 390 | 340 | 586
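The speed rows above depend heavily on hardware, so they can only be reproduced approximately. The following is a rough sketch of how such a texts-per-second figure can be measured, reusing the example pair and the batch size of 128 from the table; the number of pairs is an arbitrary choice for illustration.
```python
import time

texts = [premise] * 1280          # reuse the example pair from the usage snippet above
hypotheses = [hypothesis] * 1280
batch_size = 128

start = time.time()
for i in range(0, len(texts), batch_size):
    batch = tokenizer(
        texts[i:i + batch_size],
        hypotheses[i:i + batch_size],
        truncation=True,
        padding=True,
        return_tensors="pt",
    ).to(device)
    with torch.no_grad():
        model(**batch)
print(f"{len(texts) / (time.time() - start):.1f} texts/sec")
```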
## Limitations and bias
Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases.
## Citation
If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. https://osf.io/74b8k.
### Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)
### Debugging and issues
Note that DeBERTa-v3 was released on 06.12.21 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues. | {"language": ["en"], "license": "mit", "tags": ["text-classification", "zero-shot-classification"], "datasets": ["multi_nli", "anli", "fever", "lingnli"], "metrics": ["accuracy"], "pipeline_tag": "zero-shot-classification"} | MoritzLaurer/DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary | null | [
"transformers",
"pytorch",
"onnx",
"safetensors",
"deberta-v2",
"text-classification",
"zero-shot-classification",
"en",
"dataset:multi_nli",
"dataset:anli",
"dataset:fever",
"dataset:lingnli",
"arxiv:2104.07179",
"arxiv:2111.09543",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.07179",
"2111.09543"
] | [
"en"
] | TAGS
#transformers #pytorch #onnx #safetensors #deberta-v2 #text-classification #zero-shot-classification #en #dataset-multi_nli #dataset-anli #dataset-fever #dataset-lingnli #arxiv-2104.07179 #arxiv-2111.09543 #license-mit #autotrain_compatible #endpoints_compatible #region-us
| DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary
=============================================
Model description
-----------------
This model was trained on 782 357 hypothesis-premise pairs from 4 NLI datasets: MultiNLI, Fever-NLI, LingNLI and ANLI.
Note that the model was trained on binary NLI to predict either "entailment" or "not-entailment". This is specifically designed for zero-shot classification, where the difference between "neutral" and "contradiction" is irrelevant.
The base model is DeBERTa-v3-xsmall from Microsoft. The v3 variant of DeBERTa substantially outperforms previous versions of the model by including a different pre-training objective, see the DeBERTa-V3 paper.
For highest performance (but less speed), I recommend using URL
Intended uses & limitations
---------------------------
#### How to use the model
### Training data
This model was trained on 782 357 hypothesis-premise pairs from 4 NLI datasets: MultiNLI, Fever-NLI, LingNLI and ANLI.
### Training procedure
DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary was trained using the Hugging Face trainer with the following hyperparameters.
### Eval results
The model was evaluated using the binary test sets for MultiNLI, ANLI, LingNLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.
Limitations and bias
--------------------
Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases.
If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. URL
### Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or LinkedIn
### Debugging and issues
Note that DeBERTa-v3 was released on 06.12.21 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues.
| [
"#### How to use the model",
"### Training data\n\n\nThis model was trained on 782 357 hypothesis-premise pairs from 4 NLI datasets: MultiNLI, Fever-NLI, LingNLI and ANLI.",
"### Training procedure\n\n\nDeBERTa-v3-xsmall-mnli-fever-anli-ling-binary was trained using the Hugging Face trainer with the following hyperparameters.",
"### Eval results\n\n\nThe model was evaluated using the binary test sets for MultiNLI, ANLI, LingNLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.\n\n\n\nLimitations and bias\n--------------------\n\n\nPlease consult the original DeBERTa paper and literature on different NLI datasets for potential biases.\n\n\nIf you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. URL",
"### Ideas for cooperation or questions?\n\n\nIf you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or LinkedIn",
"### Debugging and issues\n\n\nNote that DeBERTa-v3 was released on 06.12.21 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues."
] | [
"TAGS\n#transformers #pytorch #onnx #safetensors #deberta-v2 #text-classification #zero-shot-classification #en #dataset-multi_nli #dataset-anli #dataset-fever #dataset-lingnli #arxiv-2104.07179 #arxiv-2111.09543 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"#### How to use the model",
"### Training data\n\n\nThis model was trained on 782 357 hypothesis-premise pairs from 4 NLI datasets: MultiNLI, Fever-NLI, LingNLI and ANLI.",
"### Training procedure\n\n\nDeBERTa-v3-xsmall-mnli-fever-anli-ling-binary was trained using the Hugging Face trainer with the following hyperparameters.",
"### Eval results\n\n\nThe model was evaluated using the binary test sets for MultiNLI, ANLI, LingNLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.\n\n\n\nLimitations and bias\n--------------------\n\n\nPlease consult the original DeBERTa paper and literature on different NLI datasets for potential biases.\n\n\nIf you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. URL",
"### Ideas for cooperation or questions?\n\n\nIf you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or LinkedIn",
"### Debugging and issues\n\n\nNote that DeBERTa-v3 was released on 06.12.21 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues."
] |
text-classification | transformers | # MiniLM-L6-mnli-binary
## Model description
This model was trained on the [MultiNLI](https://huggingface.co/datasets/multi_nli) dataset. The model was trained for binary NLI, which means that the "neutral" and "contradiction" classes were merged into one class. The model therefore predicts "entailment" or "not_entailment".
The base model is MiniLM-L6 from Microsoft, which is very fast, but a bit less accurate than other models.
## Intended uses & limitations
#### How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model_name = "MoritzLaurer/MiniLM-L6-mnli-binary"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
premise = "I liked the movie"
hypothesis = "The movie was good."
input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device)) # device = "cuda:0" or "cpu"
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "not_entailment"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
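If you only need a hard label rather than probabilities, the output from the snippet above can be reduced with an argmax; a minimal sketch:
```python
# `output` and `label_names` come from the usage example above.
pred_id = int(torch.argmax(output["logits"], dim=-1)[0])
print(label_names[pred_id])   # "entailment" or "not_entailment"
```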
### Training data
[MultiNLI](https://huggingface.co/datasets/multi_nli).
### Training procedure
MiniLM-L6-mnli-binary was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
num_train_epochs=5, # total number of training epochs
learning_rate=2e-05,
per_device_train_batch_size=32, # batch size per device during training
per_device_eval_batch_size=32, # batch size for evaluation
warmup_ratio=0.1, # fraction of training steps used for learning rate warmup
weight_decay=0.06, # strength of weight decay
fp16=True # mixed precision training
)
```
### Eval results
The model was evaluated using the binary (matched) test set from MultiNLI. Accuracy: 0.886
## Limitations and bias
Please consult the original MiniLM paper and literature on different NLI datasets for potential biases.
### BibTeX entry and citation info
If you want to cite this model, please cite the original MiniLM paper, the respective NLI datasets and include a link to this model on the Hugging Face hub. | {"language": ["en"], "tags": ["text-classification", "zero-shot-classification"], "metrics": ["accuracy"], "widget": [{"text": "I liked the movie. [SEP] The movie was good."}]} | MoritzLaurer/MiniLM-L6-mnli-binary | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"zero-shot-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #bert #text-classification #zero-shot-classification #en #autotrain_compatible #endpoints_compatible #region-us
| # MiniLM-L6-mnli-binary
## Model description
This model was trained on the MultiNLI dataset. The model was trained for binary NLI, which means that the "neutral" and "contradiction" classes were merged into one class. The model therefore predicts "entailment" or "not_entailment".
The base model is MiniLM-L6 from Microsoft, which is very fast, but a bit less accurate than other models.
## Intended uses & limitations
#### How to use the model
### Training data
MultiNLI.
### Training procedure
MiniLM-L6-mnli-binary was trained using the Hugging Face trainer with the following hyperparameters.
### Eval results
The model was evaluated using the binary (matched) test set from MultiNLI. Accuracy: 0.886
## Limitations and bias
Please consult the original MiniLM paper and literature on different NLI datasets for potential biases.
### BibTeX entry and citation info
If you want to cite this model, please cite the original MiniLM paper, the respective NLI datasets and include a link to this model on the Hugging Face hub. | [
"# MiniLM-L6-mnli-binary",
"## Model description\nThis model was trained on the MultiNLI dataset. The model was trained for binary NLI, which means that the \"neutral\" and \"contradiction\" classes were merged into one class. The model therefore predicts \"entailment\" or \"not_entailment\". \nThe base model is MiniLM-L6 from Microsoft, which is very fast, but a bit less accurate than other models.",
"## Intended uses & limitations",
"#### How to use the model",
"### Training data\nMultiNLI.",
"### Training procedure\nMiniLM-L6-mnli-binary was trained using the Hugging Face trainer with the following hyperparameters.",
"### Eval results\nThe model was evaluated using the binary (matched) test set from MultiNLI. Accuracy: 0.886",
"## Limitations and bias\nPlease consult the original MiniLM paper and literature on different NLI datasets for potential biases.",
"### BibTeX entry and citation info\nIf you want to cite this model, please cite the original MiniLM paper, the respective NLI datasets and include a link to this model on the Hugging Face hub."
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #zero-shot-classification #en #autotrain_compatible #endpoints_compatible #region-us \n",
"# MiniLM-L6-mnli-binary",
"## Model description\nThis model was trained on the MultiNLI dataset. The model was trained for binary NLI, which means that the \"neutral\" and \"contradiction\" classes were merged into one class. The model therefore predicts \"entailment\" or \"not_entailment\". \nThe base model is MiniLM-L6 from Microsoft, which is very fast, but a bit less accurate than other models.",
"## Intended uses & limitations",
"#### How to use the model",
"### Training data\nMultiNLI.",
"### Training procedure\nMiniLM-L6-mnli-binary was trained using the Hugging Face trainer with the following hyperparameters.",
"### Eval results\nThe model was evaluated using the binary (matched) test set from MultiNLI. Accuracy: 0.886",
"## Limitations and bias\nPlease consult the original MiniLM paper and literature on different NLI datasets for potential biases.",
"### BibTeX entry and citation info\nIf you want to cite this model, please cite the original MiniLM paper, the respective NLI datasets and include a link to this model on the Hugging Face hub."
] |
text-classification | transformers | # MiniLM-L6-mnli-fever-docnli-ling-2c
## Model description
This model was trained on 1,279,665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation).
It is the only model in the model hub trained on 8 NLI datasets, including DocNLI with very long texts for learning long-range reasoning. Note that the model was trained on binary NLI to predict either "entailment" or "not-entailment". DocNLI merges the classes "neutral" and "contradiction" into "not-entailment" to create more training data.
The base model is MiniLM-L6 from Microsoft, which is very fast but a bit less accurate than other models.
## Intended uses & limitations
#### How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model_name = "MoritzLaurer/MiniLM-L6-mnli-fever-docnli-ling-2c"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."
input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device)) # device = "cuda:0" or "cpu"
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "not_entailment"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
### Training data
This model was trained on 1,279,665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation).
### Training procedure
MiniLM-L6-mnli-fever-docnli-ling-2c was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
num_train_epochs=3, # total number of training epochs
learning_rate=2e-05,
per_device_train_batch_size=32, # batch size per device during training
per_device_eval_batch_size=32, # batch size for evaluation
warmup_ratio=0.1, # fraction of training steps used for learning rate warmup
weight_decay=0.06, # strength of weight decay
fp16=True # mixed precision training
)
```
### Eval results
The model was evaluated using the binary test sets for MultiNLI and ANLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.
mnli-m-2c | mnli-mm-2c | fever-nli-2c | anli-all-2c | anli-r3-2c
---------|----------|---------|----------|----------
(to upload)
## Limitations and bias
Please consult the original MiniLM paper and literature on different NLI datasets for potential biases.
### BibTeX entry and citation info
If you want to cite this model, please cite the original MiniLM paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.
### Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m.laurer{at}vu.nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/) | {"language": ["en"], "tags": ["text-classification", "zero-shot-classification"], "metrics": ["accuracy"], "widget": [{"text": "I first thought that I liked the movie, but upon second thought the movie was actually disappointing. [SEP] The movie was good."}]} | MoritzLaurer/MiniLM-L6-mnli-fever-docnli-ling-2c | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"zero-shot-classification",
"en",
"arxiv:2104.07179",
"arxiv:2106.09449",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.07179",
"2106.09449"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #text-classification #zero-shot-classification #en #arxiv-2104.07179 #arxiv-2106.09449 #autotrain_compatible #endpoints_compatible #region-us
| MiniLM-L6-mnli-fever-docnli-ling-2c
===================================
Model description
-----------------
This model was trained on 1,279,665 hypothesis-premise pairs from 8 NLI datasets: MultiNLI, Fever-NLI, LingNLI and DocNLI (which includes ANLI, QNLI, DUC, CNN/DailyMail, Curation).
It is the only model in the model hub trained on 8 NLI datasets, including DocNLI with very long texts for learning long-range reasoning. Note that the model was trained on binary NLI to predict either "entailment" or "not-entailment". DocNLI merges the classes "neutral" and "contradiction" into "not-entailment" to create more training data.
The base model is MiniLM-L6 from Microsoft, which is very fast but a bit less accurate than other models.
Intended uses & limitations
---------------------------
#### How to use the model
### Training data
This model was trained on 1,279,665 hypothesis-premise pairs from 8 NLI datasets: MultiNLI, Fever-NLI, LingNLI and DocNLI (which includes ANLI, QNLI, DUC, CNN/DailyMail, Curation).
### Training procedure
MiniLM-L6-mnli-fever-docnli-ling-2c was trained using the Hugging Face trainer with the following hyperparameters.
### Eval results
The model was evaluated using the binary test sets for MultiNLI and ANLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.
Limitations and bias
--------------------
Please consult the original MiniLM paper and literature on different NLI datasets for potential biases.
### BibTeX entry and citation info
If you want to cite this model, please cite the original MiniLM paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.
### Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m.laurer{at}URL or LinkedIn
| [
"#### How to use the model",
"### Training data\n\n\nThis model was trained on 1.279.665 hypothesis-premise pairs from 8 NLI datasets: MultiNLI, Fever-NLI, LingNLI and DocNLI (which includes ANLI, QNLI, DUC, CNN/DailyMail, Curation).",
"### Training procedure\n\n\nMiniLM-L6-mnli-fever-docnli-ling-2c was trained using the Hugging Face trainer with the following hyperparameters.",
"### Eval results\n\n\nThe model was evaluated using the binary test sets for MultiNLI and ANLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.\n\n\n\nLimitations and bias\n--------------------\n\n\nPlease consult the original MiniLM paper and literature on different NLI datasets for potential biases.",
"### BibTeX entry and citation info\n\n\nIf you want to cite this model, please cite the original MiniLM paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.",
"### Ideas for cooperation or questions?\n\n\nIf you have questions or ideas for cooperation, contact me at m.laurer{at}URL or LinkedIn"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #zero-shot-classification #en #arxiv-2104.07179 #arxiv-2106.09449 #autotrain_compatible #endpoints_compatible #region-us \n",
"#### How to use the model",
"### Training data\n\n\nThis model was trained on 1.279.665 hypothesis-premise pairs from 8 NLI datasets: MultiNLI, Fever-NLI, LingNLI and DocNLI (which includes ANLI, QNLI, DUC, CNN/DailyMail, Curation).",
"### Training procedure\n\n\nMiniLM-L6-mnli-fever-docnli-ling-2c was trained using the Hugging Face trainer with the following hyperparameters.",
"### Eval results\n\n\nThe model was evaluated using the binary test sets for MultiNLI and ANLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.\n\n\n\nLimitations and bias\n--------------------\n\n\nPlease consult the original MiniLM paper and literature on different NLI datasets for potential biases.",
"### BibTeX entry and citation info\n\n\nIf you want to cite this model, please cite the original MiniLM paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.",
"### Ideas for cooperation or questions?\n\n\nIf you have questions or ideas for cooperation, contact me at m.laurer{at}URL or LinkedIn"
] |
text-classification | transformers | # MiniLM-L6-mnli
## Model description
This model was trained on the [MultiNLI](https://huggingface.co/datasets/multi_nli) dataset.
The base model is MiniLM-L6 from Microsoft, which is very fast, but a bit less accurate than other models.
## Intended uses & limitations
#### How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model_name = "MoritzLaurer/MiniLM-L6-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
premise = "I liked the movie"
hypothesis = "The movie was good."
input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device)) # device = "cuda:0" or "cpu"
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
### Training data
[MultiNLI](https://huggingface.co/datasets/multi_nli).
### Training procedure
MiniLM-L6-mnli was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
num_train_epochs=5, # total number of training epochs
learning_rate=2e-05,
per_device_train_batch_size=32, # batch size per device during training
per_device_eval_batch_size=32, # batch size for evaluation
warmup_ratio=0.1, # fraction of training steps used for learning rate warmup
weight_decay=0.06, # strength of weight decay
fp16=True # mixed precision training
)
```
### Eval results
The model was evaluated using the (matched) test set from MultiNLI. Accuracy: 0.814
## Limitations and bias
Please consult the original MiniLM paper and literature on different NLI datasets for potential biases.
### BibTeX entry and citation info
If you want to cite this model, please cite the original MiniLM paper, the respective NLI datasets and include a link to this model on the Hugging Face hub. | {"language": ["en"], "tags": ["text-classification", "zero-shot-classification"], "metrics": ["accuracy"], "widget": [{"text": "I liked the movie. [SEP] The movie was good."}]} | MoritzLaurer/MiniLM-L6-mnli | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"zero-shot-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #bert #text-classification #zero-shot-classification #en #autotrain_compatible #endpoints_compatible #region-us
| # MiniLM-L6-mnli
## Model description
This model was trained on the MultiNLI dataset.
The base model is MiniLM-L6 from Microsoft, which is very fast, but a bit less accurate than other models.
## Intended uses & limitations
#### How to use the model
### Training data
MultiNLI.
### Training procedure
MiniLM-L6-mnli was trained using the Hugging Face trainer with the following hyperparameters.
### Eval results
The model was evaluated using the (matched) test set from MultiNLI. Accuracy: 0.814
## Limitations and bias
Please consult the original MiniLM paper and literature on different NLI datasets for potential biases.
### BibTeX entry and citation info
If you want to cite this model, please cite the original MiniLM paper, the respective NLI datasets and include a link to this model on the Hugging Face hub. | [
"# MiniLM-L6-mnli",
"## Model description\nThis model was trained on the MultiNLI dataset. \nThe base model is MiniLM-L6 from Microsoft, which is very fast, but a bit less accurate than other models.",
"## Intended uses & limitations",
"#### How to use the model",
"### Training data\nMultiNLI.",
"### Training procedure\nMiniLM-L6-mnli-binary was trained using the Hugging Face trainer with the following hyperparameters.",
"### Eval results\nThe model was evaluated using the (matched) test set from MultiNLI. Accuracy: 0.814",
"## Limitations and bias\nPlease consult the original MiniLM paper and literature on different NLI datasets for potential biases.",
"### BibTeX entry and citation info\nIf you want to cite this model, please cite the original MiniLM paper, the respective NLI datasets and include a link to this model on the Hugging Face hub."
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #zero-shot-classification #en #autotrain_compatible #endpoints_compatible #region-us \n",
"# MiniLM-L6-mnli",
"## Model description\nThis model was trained on the MultiNLI dataset. \nThe base model is MiniLM-L6 from Microsoft, which is very fast, but a bit less accurate than other models.",
"## Intended uses & limitations",
"#### How to use the model",
"### Training data\nMultiNLI.",
"### Training procedure\nMiniLM-L6-mnli-binary was trained using the Hugging Face trainer with the following hyperparameters.",
"### Eval results\nThe model was evaluated using the (matched) test set from MultiNLI. Accuracy: 0.814",
"## Limitations and bias\nPlease consult the original MiniLM paper and literature on different NLI datasets for potential biases.",
"### BibTeX entry and citation info\nIf you want to cite this model, please cite the original MiniLM paper, the respective NLI datasets and include a link to this model on the Hugging Face hub."
] |
text-classification | transformers |
# Covid-Policy-RoBERTa-21
This model is currently in development at the Centre for European Policy Studies (CEPS).
The model is not yet recommended for use. A more detailed description will follow.
If you are interested in using deep learning to identify 20 different types of policy measures against COVID-19 in text (NPIs, "non-pharmaceutical interventions"), don't hesitate to [contact me](https://www.ceps.eu/ceps-staff/moritz-laurer/). | {"language": ["en"], "tags": ["text-classification"], "metrics": ["accuracy (balanced)", "F1 (weighted)"], "widget": [{"text": "All non-essential work activity will stop in Spain from tomorrow until 9 April but there is some confusion as to which jobs can continue under the new lockdown restrictions"}]} | MoritzLaurer/covid-policy-roberta-21 | null | [
"transformers",
"pytorch",
"jax",
"roberta",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #jax #roberta #text-classification #en #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Covid-Policy-RoBERTa-21
This model is currently in development at the Centre for European Policy Studies (CEPS).
The model is not yet recommended for use. A more detailed description will follow.
If you are interested in using deep learning to identify 20 different types of policy measures against COVID-19 in text (NPIs, "non-pharmaceutical interventions"), don't hesitate to contact me. | [
"# Covid-Policy-RoBERTa-21\nThis model is currently in development at the Centre for European Policy Studies (CEPS).\n\nThe model is not yet recommended for use. A more detailed description will follow.\n\nIf you are interested in using deep learning to identify 20 different types policy measures against COVID-19 in text (NPIs, \"non-pharmaceutical interventions\") don't hesitate to contact me."
] | [
"TAGS\n#transformers #pytorch #jax #roberta #text-classification #en #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Covid-Policy-RoBERTa-21\nThis model is currently in development at the Centre for European Policy Studies (CEPS).\n\nThe model is not yet recommended for use. A more detailed description will follow.\n\nIf you are interested in using deep learning to identify 20 different types policy measures against COVID-19 in text (NPIs, \"non-pharmaceutical interventions\") don't hesitate to contact me."
] |
zero-shot-classification | transformers | # Multilingual mDeBERTa-v3-base-mnli-xnli
## Model description
This multilingual model can perform natural language inference (NLI) on 100 languages and is therefore also suitable for multilingual
zero-shot classification. The underlying model was pre-trained by Microsoft on the
[CC100 multilingual dataset](https://huggingface.co/datasets/cc100). It was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which contains hypothesis-premise pairs from 15 languages, as well as the English [MNLI dataset](https://huggingface.co/datasets/multi_nli).
As of December 2021, mDeBERTa-base is the best performing multilingual base-sized transformer model,
introduced by Microsoft in [this paper](https://arxiv.org/pdf/2111.09543.pdf).
If you are looking for a smaller, faster (but less performant) model, you can
try [multilingual-MiniLMv2-L6-mnli-xnli](https://huggingface.co/MoritzLaurer/multilingual-MiniLMv2-L6-mnli-xnli).
### How to use the model
#### Simple zero-shot classification pipeline
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="MoritzLaurer/mDeBERTa-v3-base-mnli-xnli")
sequence_to_classify = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]
output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)
```
#### NLI use-case
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)
premise = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
hypothesis = "Emmanuel Macron is the President of France"
inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(inputs["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
### Training data
This model was trained on the XNLI development dataset and the MNLI train dataset. The XNLI development set consists of 2490 professionally translated texts from English into 14 other languages (37,350 texts in total) (see [this paper](https://arxiv.org/pdf/1809.05053.pdf)). Note that XNLI also contains a training set of 15 machine-translated versions of the MNLI dataset, but due to quality issues with these machine translations, the model was only trained on the professional translations from the XNLI development set and the original English MNLI training set (392,702 texts). Avoiding machine-translated texts prevents overfitting the model to the 15 languages, avoids catastrophic forgetting of the other 85 languages mDeBERTa was pre-trained on, and significantly reduces training costs.
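A hypothetical sketch of assembling this training mix with the `datasets` library is shown below; the exact preprocessing, label harmonisation and shuffling used for the released model are not specified in the card, so treat it purely as an illustration.
```python
from datasets import load_dataset, concatenate_datasets

# English MNLI training set.
mnli_train = load_dataset("multi_nli", split="train")

# Professionally translated XNLI development sets, one per language (2490 pairs each).
xnli_langs = ["ar", "bg", "de", "el", "en", "es", "fr", "hi", "ru", "sw", "th", "tr", "ur", "vi", "zh"]
xnli_dev = concatenate_datasets(
    [load_dataset("xnli", lang, split="validation") for lang in xnli_langs]
)

# Keep only the columns shared by both sources before combining them.
extra_cols = [c for c in mnli_train.column_names if c not in ("premise", "hypothesis", "label")]
train_mix = concatenate_datasets([mnli_train.remove_columns(extra_cols), xnli_dev]).shuffle(seed=42)
print(train_mix)
```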
### Training procedure
mDeBERTa-v3-base-mnli-xnli was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
num_train_epochs=2, # total number of training epochs
learning_rate=2e-05,
per_device_train_batch_size=16, # batch size per device during training
per_device_eval_batch_size=16, # batch size for evaluation
warmup_ratio=0.1, # fraction of training steps used for learning rate warmup
weight_decay=0.06, # strength of weight decay
)
```
### Eval results
The model was evaluated on the XNLI test set in 15 languages (5010 texts per language, 75,150 in total). Note that multilingual NLI models can classify NLI texts without receiving NLI training data in the specific language (cross-lingual transfer). This means that the model can also do NLI on the other 85 languages mDeBERTa was pre-trained on, but performance is most likely lower than for the languages available in XNLI.
Also note that if other multilingual models on the model hub claim performance of around 90% on languages other than English, the authors have most likely made a mistake during testing, since none of the latest papers reports a multilingual average performance of more than a few points above 80% on XNLI (see [here](https://arxiv.org/pdf/2111.09543.pdf) or [here](https://arxiv.org/pdf/1911.02116.pdf)).
average | ar | bg | de | el | en | es | fr | hi | ru | sw | th | tr | ur | vi | zh
---------|----------|---------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------
0.808 | 0.802 | 0.829 | 0.825 | 0.826 | 0.883 | 0.845 | 0.834 | 0.771 | 0.813 | 0.748 | 0.793 | 0.807 | 0.740 | 0.795 | 0.8116
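The per-language accuracies above can be approximated with a simple loop over one XNLI test split. The sketch below assumes the model's label order matches the usage example above (0 = entailment, 1 = neutral, 2 = contradiction) and uses German as an example language.
```python
from datasets import load_dataset

xnli_test = load_dataset("xnli", "de", split="test")   # 5010 premise-hypothesis pairs
correct = 0
for i in range(0, len(xnli_test), 32):
    batch = xnli_test[i:i + 32]
    enc = tokenizer(
        batch["premise"], batch["hypothesis"],
        truncation=True, padding=True, return_tensors="pt",
    ).to(device)
    with torch.no_grad():
        preds = model(**enc).logits.argmax(dim=-1).cpu()
    correct += (preds == torch.tensor(batch["label"])).sum().item()
print(f"accuracy (de): {correct / len(xnli_test):.3f}")
```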
## Limitations and bias
Please consult the original DeBERTa-V3 paper and literature on different NLI datasets for potential biases.
## Citation
If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. https://osf.io/74b8k.
## Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)
## Debugging and issues
Note that DeBERTa-v3 was released in late 2021 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues. Note that mDeBERTa currently does not support FP16, see here: https://github.com/microsoft/DeBERTa/issues/77
| {"language": ["multilingual", "en", "ar", "bg", "de", "el", "es", "fr", "hi", "ru", "sw", "th", "tr", "ur", "vi", "zh"], "license": "mit", "tags": ["zero-shot-classification", "text-classification", "nli", "pytorch"], "datasets": ["multi_nli", "xnli"], "metrics": ["accuracy"], "pipeline_tag": "zero-shot-classification", "widget": [{"text": "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU", "candidate_labels": "politics, economy, entertainment, environment"}]} | MoritzLaurer/mDeBERTa-v3-base-mnli-xnli | null | [
"transformers",
"pytorch",
"onnx",
"safetensors",
"deberta-v2",
"text-classification",
"zero-shot-classification",
"nli",
"multilingual",
"en",
"ar",
"bg",
"de",
"el",
"es",
"fr",
"hi",
"ru",
"sw",
"th",
"tr",
"ur",
"vi",
"zh",
"dataset:multi_nli",
"dataset:xnli",
"arxiv:2111.09543",
"arxiv:1809.05053",
"arxiv:1911.02116",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2111.09543",
"1809.05053",
"1911.02116"
] | [
"multilingual",
"en",
"ar",
"bg",
"de",
"el",
"es",
"fr",
"hi",
"ru",
"sw",
"th",
"tr",
"ur",
"vi",
"zh"
] | TAGS
#transformers #pytorch #onnx #safetensors #deberta-v2 #text-classification #zero-shot-classification #nli #multilingual #en #ar #bg #de #el #es #fr #hi #ru #sw #th #tr #ur #vi #zh #dataset-multi_nli #dataset-xnli #arxiv-2111.09543 #arxiv-1809.05053 #arxiv-1911.02116 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
| Multilingual mDeBERTa-v3-base-mnli-xnli
=======================================
Model description
-----------------
This multilingual model can perform natural language inference (NLI) on 100 languages and is therefore also suitable for multilingual
zero-shot classification. The underlying model was pre-trained by Microsoft on the
CC100 multilingual dataset. It was then fine-tuned on the XNLI dataset, which contains hypothesis-premise pairs from 15 languages, as well as the English MNLI dataset.
As of December 2021, mDeBERTa-base is the best performing multilingual base-sized transformer model,
introduced by Microsoft in this paper.
If you are looking for a smaller, faster (but less performant) model, you can
try multilingual-MiniLMv2-L6-mnli-xnli.
### How to use the model
#### Simple zero-shot classification pipeline
#### NLI use-case
### Training data
This model was trained on the XNLI development dataset and the MNLI train dataset. The XNLI development set consists of 2490 professionally translated texts from English into 14 other languages (37,350 texts in total) (see this paper). Note that XNLI also contains a training set of 15 machine-translated versions of the MNLI dataset, but due to quality issues with these machine translations, the model was only trained on the professional translations from the XNLI development set and the original English MNLI training set (392,702 texts). Avoiding machine-translated texts prevents overfitting the model to the 15 languages, avoids catastrophic forgetting of the other 85 languages mDeBERTa was pre-trained on, and significantly reduces training costs.
### Training procedure
mDeBERTa-v3-base-mnli-xnli was trained using the Hugging Face trainer with the following hyperparameters.
### Eval results
The model was evaluated on the XNLI test set in 15 languages (5010 texts per language, 75,150 in total). Note that multilingual NLI models can classify NLI texts without receiving NLI training data in the specific language (cross-lingual transfer). This means that the model can also do NLI on the other 85 languages mDeBERTa was pre-trained on, but performance is most likely lower than for the languages available in XNLI.
Also note that if other multilingual models on the model hub claim performance of around 90% on languages other than English, the authors have most likely made a mistake during testing, since none of the latest papers reports a multilingual average performance of more than a few points above 80% on XNLI (see here or here).
Limitations and bias
--------------------
Please consult the original DeBERTa-V3 paper and literature on different NLI datasets for potential biases.
If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. URL
Ideas for cooperation or questions?
-----------------------------------
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or LinkedIn
Debugging and issues
--------------------
Note that DeBERTa-v3 was released in late 2021 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues. Note that mDeBERTa currently does not support FP16, see here: URL
| [
"### How to use the model",
"#### Simple zero-shot classification pipeline",
"#### NLI use-case",
"### Training data\n\n\nThis model was trained on the XNLI development dataset and the MNLI train dataset. The XNLI development set consists of 2490 professionally translated texts from English to 14 other languages (37350 texts in total) (see this paper). Note that the XNLI contains a training set of 15 machine translated versions of the MNLI dataset for 15 languages, but due to quality issues with these machine translations, this model was only trained on the professional translations from the XNLI development set and the original English MNLI training set (392 702 texts). Not using machine translated texts can avoid overfitting the model to the 15 languages; avoids catastrophic forgetting of the other 85 languages mDeBERTa was pre-trained on; and significantly reduces training costs.",
"### Training procedure\n\n\nmDeBERTa-v3-base-mnli-xnli was trained using the Hugging Face trainer with the following hyperparameters.",
"### Eval results\n\n\nThe model was evaluated on the XNLI test set on 15 languages (5010 texts per language, 75150 in total). Note that multilingual NLI models are capable of classifying NLI texts without receiving NLI training data in the specific language (cross-lingual transfer). This means that the model is also able of doing NLI on the other 85 languages mDeBERTa was training on, but performance is most likely lower than for those languages available in XNLI.\n\n\nAlso note that if other multilingual models on the model hub claim performance of around 90% on languages other than English, the authors have most likely made a mistake during testing since non of the latest papers shows a multilingual average performance of more than a few points above 80% on XNLI (see here or here).\n\n\n\nLimitations and bias\n--------------------\n\n\nPlease consult the original DeBERTa-V3 paper and literature on different NLI datasets for potential biases.\n\n\nIf you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. URL\n\n\nIdeas for cooperation or questions?\n-----------------------------------\n\n\nIf you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or LinkedIn\n\n\nDebugging and issues\n--------------------\n\n\nNote that DeBERTa-v3 was released in late 2021 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 or higher might solve some issues. Note that mDeBERTa currently does not support FP16, see here: URL"
] | [
"TAGS\n#transformers #pytorch #onnx #safetensors #deberta-v2 #text-classification #zero-shot-classification #nli #multilingual #en #ar #bg #de #el #es #fr #hi #ru #sw #th #tr #ur #vi #zh #dataset-multi_nli #dataset-xnli #arxiv-2111.09543 #arxiv-1809.05053 #arxiv-1911.02116 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### How to use the model",
"#### Simple zero-shot classification pipeline",
"#### NLI use-case",
"### Training data\n\n\nThis model was trained on the XNLI development dataset and the MNLI train dataset. The XNLI development set consists of 2490 professionally translated texts from English to 14 other languages (37350 texts in total) (see this paper). Note that the XNLI contains a training set of 15 machine translated versions of the MNLI dataset for 15 languages, but due to quality issues with these machine translations, this model was only trained on the professional translations from the XNLI development set and the original English MNLI training set (392 702 texts). Not using machine translated texts can avoid overfitting the model to the 15 languages; avoids catastrophic forgetting of the other 85 languages mDeBERTa was pre-trained on; and significantly reduces training costs.",
"### Training procedure\n\n\nmDeBERTa-v3-base-mnli-xnli was trained using the Hugging Face trainer with the following hyperparameters.",
"### Eval results\n\n\nThe model was evaluated on the XNLI test set on 15 languages (5010 texts per language, 75150 in total). Note that multilingual NLI models are capable of classifying NLI texts without receiving NLI training data in the specific language (cross-lingual transfer). This means that the model is also able of doing NLI on the other 85 languages mDeBERTa was training on, but performance is most likely lower than for those languages available in XNLI.\n\n\nAlso note that if other multilingual models on the model hub claim performance of around 90% on languages other than English, the authors have most likely made a mistake during testing since non of the latest papers shows a multilingual average performance of more than a few points above 80% on XNLI (see here or here).\n\n\n\nLimitations and bias\n--------------------\n\n\nPlease consult the original DeBERTa-V3 paper and literature on different NLI datasets for potential biases.\n\n\nIf you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. URL\n\n\nIdeas for cooperation or questions?\n-----------------------------------\n\n\nIf you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or LinkedIn\n\n\nDebugging and issues\n--------------------\n\n\nNote that DeBERTa-v3 was released in late 2021 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 or higher might solve some issues. Note that mDeBERTa currently does not support FP16, see here: URL"
] |
text-classification | transformers |
# Policy-DistilBERT-7d
## Model description
This model was trained on 129.669 manually annotated sentences to classify text into one of seven political categories: 'Economy', 'External Relations', 'Fabric of Society', 'Freedom and Democracy', 'Political System', 'Welfare and Quality of Life' or 'Social Groups'.
## Intended uses & limitations
#### How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model_name = "MoritzLaurer/policy-distilbert-7d"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
text = "The new variant first detected in southern England in September is blamed for sharp rises in levels of positive tests in recent weeks in London, south-east England and the east of England"
input = tokenizer(text, truncation=True, return_tensors="pt")
output = model(input["input_ids"])
# the output corresponds to the following labels:
# 0: external relations, 1: freedom and democracy, 2: political system, 3: economy, 4: welfare and quality of life, 5: fabric of society, 6: social groups
# output to dictionary
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["external relations", "freedom and democracy", "political system", "economy", "welfare and quality of life", "fabric of society", "social groups"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
#{'external relations': 0.0, 'freedom and democracy': 0.0, 'political system': 0.9, 'economy': 0.4,
# 'welfare and quality of life': 98.3, 'fabric of society': 0.3, 'social groups': 0.0}
```
### Training data
Policy-DistilBERT-7d was trained on the English-speaking subset of the [Manifesto Project Dataset (MPDS2020a)](https://manifesto-project.wzb.eu/datasets). The model was trained on 129.669 sentences from 164 political manifestos from 55 political parties in 8 English-speaking countries (Australia, Canada, Ireland, Israel, New Zealand, South Africa, United Kingdom, United States). The manifestos were published between 1992 - 2019.
The Manifesto Project manually annotates individual sentences from political party manifestos in 7 main political domains: 'Economy', 'External Relations', 'Fabric of Society', 'Freedom and Democracy', 'Political System', 'Welfare and Quality of Life' or 'Social Groups' - see the [codebook](https://manifesto-project.wzb.eu/down/data/2020b/codebooks/codebook_MPDataset_MPDS2020b.pdf) for the exact definitions of each domain.
### Training procedure
`distilbert-base-uncased` was trained using the Hugging Face trainer with the following hyperparameters. The hyperparameters were determined using a hyperparameter search on a 15% validation set.
```
training_args = TrainingArguments(
num_train_epochs=5, # total number of training epochs
learning_rate=4e-05,
per_device_train_batch_size=4, # batch size per device during training
per_device_eval_batch_size=4, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.02, # strength of weight decay
fp16=True # mixed precision training
)
```
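The search itself is not included in the card; below is a rough sketch of how such a hyperparameter search could be run with the Hugging Face `Trainer` API. The optuna backend, the search space, the number of trials, and the `train_dataset`/`val_dataset` variables are assumptions for illustration, not details taken from the original training run.
```python
from transformers import AutoModelForSequenceClassification, Trainer

def model_init():
    # a fresh model is created for every trial
    return AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=7
    )

def hp_space(trial):
    # hypothetical search space; the original ranges are not documented
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-5, 5e-5, log=True),
        "num_train_epochs": trial.suggest_int("num_train_epochs", 3, 6),
        "per_device_train_batch_size": trial.suggest_categorical(
            "per_device_train_batch_size", [4, 8, 16]
        ),
    }

trainer = Trainer(
    model_init=model_init,
    args=training_args,           # the TrainingArguments shown above
    train_dataset=train_dataset,  # 85% train split (assumed variable)
    eval_dataset=val_dataset,     # 15% validation split (assumed variable)
)

# minimizes the validation loss by default
best_run = trainer.hyperparameter_search(backend="optuna", hp_space=hp_space, n_trials=10)
print(best_run.hyperparameters)
```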
### Eval results
The model was evaluated using 15% of the sentences (85-15 train-test split).
accuracy (balanced) | F1 (weighted) | precision | recall | accuracy (not balanced)
-------|---------|----------|---------|----------
0.745 | 0.773 | 0.772 | 0.771 | 0.771
Please note that the label distribution in the dataset is imbalanced:
```
Welfare and Quality of Life 0.327225
Economy 0.259191
Fabric of Society 0.111800
Political System 0.095081
Social Groups 0.094371
External Relations 0.063724
Freedom and Democracy 0.048608
```
[Balanced accuracy](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.balanced_accuracy_score.html) and [weighted F1](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_fscore_support.html) were therefore used to evaluate model performance.
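For reference, a minimal sketch of how these metrics can be computed with scikit-learn; the label arrays below are toy placeholders, not the actual test-split predictions:
```python
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score, precision_recall_fscore_support

y_true = np.array([4, 4, 3, 5, 4, 0, 2, 1])  # toy gold label ids
y_pred = np.array([4, 3, 3, 5, 4, 0, 2, 2])  # toy model predictions

balanced_acc = balanced_accuracy_score(y_true, y_pred)
precision, recall, f1_weighted, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0
)
plain_acc = accuracy_score(y_true, y_pred)
print(f"balanced acc: {balanced_acc:.3f} | weighted F1: {f1_weighted:.3f} | acc: {plain_acc:.3f}")
```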
## Limitations and bias
The model was trained on sentences in political manifestos from parties in the 8 countries mentioned above between 1992-2019, manually annotated by the [Manifesto Project](https://manifesto-project.wzb.eu/information/documents/information). The model output therefore reproduces the limitations of the dataset in terms of country coverage, time span, domain definitions and potential biases of the annotators - as any supervised machine learning model would. Applying the model to other types of data (other types of texts, countries etc.) will reduce performance.
### BibTeX entry and citation info
```bibtex
@unpublished{
title={Policy-DistilBERT},
author={Moritz Laurer},
year={2020},
note={Unpublished paper}
}
``` | {"language": ["en"], "tags": ["text-classification"], "metrics": ["accuracy (balanced)", "F1 (weighted)"], "widget": [{"text": "70-85% of the population needs to get vaccinated against the novel coronavirus to achieve herd immunity."}]} | MoritzLaurer/policy-distilbert-7d | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #distilbert #text-classification #en #autotrain_compatible #endpoints_compatible #has_space #region-us
| Policy-DistilBERT-7d
====================
Model description
-----------------
This model was trained on 129.669 manually annotated sentences to classify text into one of seven political categories: 'Economy', 'External Relations', 'Fabric of Society', 'Freedom and Democracy', 'Political System', 'Welfare and Quality of Life' or 'Social Groups'.
Intended uses & limitations
---------------------------
#### How to use the model
### Training data
Policy-DistilBERT-7d was trained on the English-speaking subset of the Manifesto Project Dataset (MPDS2020a). The model was trained on 129.669 sentences from 164 political manifestos from 55 political parties in 8 English-speaking countries (Australia, Canada, Ireland, Israel, New Zealand, South Africa, United Kingdom, United States). The manifestos were published between 1992 - 2019.
The Manifesto Project manually annotates individual sentences from political party manifestos in 7 main political domains: 'Economy', 'External Relations', 'Fabric of Society', 'Freedom and Democracy', 'Political System', 'Welfare and Quality of Life' or 'Social Groups' - see the codebook for the exact definitions of each domain.
### Training procedure
'distilbert-base-uncased' was trained using the Hugging Face trainer with the following hyperparameters. The hyperparameters were determined using a hyperparameter search on a 15% validation set.
### Eval results
The model was evaluated using 15% of the sentences (85-15 train-test split).
Please note that the label distribution in the dataset is imbalanced:
Balanced accuracy and weighted F1 were therefore used to evaluate model performance.
Limitations and bias
--------------------
The model was trained on sentences in political manifestos from parties in the 8 countries mentioned above between 1992-2019, manually annotated by the Manifesto Project. The model output therefore reproduces the limitations of the dataset in terms of country coverage, time span, domain definitions and potential biases of the annotators - as any supervised machine learning model would. Applying the model to other types of data (other types of texts, countries etc.) will reduce performance.
### BibTeX entry and citation info
| [
"#### How to use the model",
"### Training data\n\n\nPolicy-DistilBERT-7d was trained on the English-speaking subset of the Manifesto Project Dataset (MPDS2020a). The model was trained on 129.669 sentences from 164 political manifestos from 55 political parties in 8 English-speaking countries (Australia, Canada, Ireland, Israel, New Zealand, South Africa, United Kingdom, United States). The manifestos were published between 1992 - 2019.\n\n\nThe Manifesto Project mannually annotates individual sentences from political party manifestos in 7 main political domains: 'Economy', 'External Relations', 'Fabric of Society', 'Freedom and Democracy', 'Political System', 'Welfare and Quality of Life' or 'Social Groups' - see the codebook for the exact definitions of each domain.",
"### Training procedure\n\n\n'distilbert-base-uncased' was trained using the Hugging Face trainer with the following hyperparameters. The hyperparameters were determined using a hyperparameter search on a 15% validation set.",
"### Eval results\n\n\nThe model was evaluated using 15% of the sentences (85-15 train-test split).\n\n\n\nPlease note that the label distribution in the dataset is imbalanced:\n\n\nBalanced accuracy and weighted F1 were therefore used to evaluate model performance.\n\n\nLimitations and bias\n--------------------\n\n\nThe model was trained on sentences in political manifestos from parties in the 8 countries mentioned above between 1992-2019, manually annotated by the Manifesto Project. The model output therefore reproduces the limitations of the dataset in terms of country coverage, time span, domain definitions and potential biases of the annotators - as any supervised machine learning model would. Applying the model to other types of data (other types of texts, countries etc.) will reduce performance.",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #distilbert #text-classification #en #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"#### How to use the model",
"### Training data\n\n\nPolicy-DistilBERT-7d was trained on the English-speaking subset of the Manifesto Project Dataset (MPDS2020a). The model was trained on 129.669 sentences from 164 political manifestos from 55 political parties in 8 English-speaking countries (Australia, Canada, Ireland, Israel, New Zealand, South Africa, United Kingdom, United States). The manifestos were published between 1992 - 2019.\n\n\nThe Manifesto Project mannually annotates individual sentences from political party manifestos in 7 main political domains: 'Economy', 'External Relations', 'Fabric of Society', 'Freedom and Democracy', 'Political System', 'Welfare and Quality of Life' or 'Social Groups' - see the codebook for the exact definitions of each domain.",
"### Training procedure\n\n\n'distilbert-base-uncased' was trained using the Hugging Face trainer with the following hyperparameters. The hyperparameters were determined using a hyperparameter search on a 15% validation set.",
"### Eval results\n\n\nThe model was evaluated using 15% of the sentences (85-15 train-test split).\n\n\n\nPlease note that the label distribution in the dataset is imbalanced:\n\n\nBalanced accuracy and weighted F1 were therefore used to evaluate model performance.\n\n\nLimitations and bias\n--------------------\n\n\nThe model was trained on sentences in political manifestos from parties in the 8 countries mentioned above between 1992-2019, manually annotated by the Manifesto Project. The model output therefore reproduces the limitations of the dataset in terms of country coverage, time span, domain definitions and potential biases of the annotators - as any supervised machine learning model would. Applying the model to other types of data (other types of texts, countries etc.) will reduce performance.",
"### BibTeX entry and citation info"
] |
zero-shot-classification | transformers | # xtremedistil-l6-h256-mnli-fever-anli-ling-binary
## Model description
This model was trained on 782 357 hypothesis-premise pairs from 4 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [ANLI](https://github.com/facebookresearch/anli).
Note that the model was trained on binary NLI to predict either "entailment" or "not-entailment". This is specifically designed for zero-shot classification, where the difference between "neutral" and "contradiction" is irrelevant.
The base model is [xtremedistil-l6-h256-uncased from Microsoft](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased).
## Intended uses & limitations
#### How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model_name = "MoritzLaurer/xtremedistil-l6-h256-mnli-fever-anli-ling-binary"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."
input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device)) # device = "cuda:0" or "cpu"
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "not_entailment"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
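Because the model only distinguishes "entailment" from "not_entailment", it can be used for zero-shot classification by scoring one hypothesis per candidate label and ranking the entailment probabilities. A minimal sketch (the hypothesis template and candidate labels below are illustrative choices, not prescribed by the model card):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "MoritzLaurer/xtremedistil-l6-h256-mnli-fever-anli-ling-binary"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "The new gaming console sold out within hours of its release."
candidate_labels = ["technology", "politics", "sports"]

scores = {}
for label in candidate_labels:
    hypothesis = f"This example is about {label}."  # illustrative template
    inputs = tokenizer(text, hypothesis, truncation=True, return_tensors="pt")
    logits = model(**inputs).logits
    # index 0 corresponds to "entailment" (see label_names above)
    scores[label] = torch.softmax(logits[0], -1)[0].item()

print(max(scores, key=scores.get), scores)
```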
### Training data
This model was trained on 782 357 hypothesis-premise pairs from 4 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [ANLI](https://github.com/facebookresearch/anli).
### Training procedure
xtremedistil-l6-h256-mnli-fever-anli-ling-binary was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
num_train_epochs=5, # total number of training epochs
learning_rate=2e-05,
per_device_train_batch_size=32, # batch size per device during training
per_device_eval_batch_size=32, # batch size for evaluation
warmup_ratio=0.1, # number of warmup steps for learning rate scheduler
weight_decay=0.06, # strength of weight decay
fp16=True # mixed precision training
)
```
### Eval results
The model was evaluated using the binary test sets for MultiNLI, ANLI, LingNLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.
dataset | mnli-m-2c | mnli-mm-2c | fever-nli-2c | anli-all-2c | anli-r3-2c | lingnli-2c
--------|---------|----------|---------|----------|----------|------
accuracy | 0.897 | 0.898 | 0.861 | 0.607 | 0.62 | 0.827
speed (text/sec, GPU Tesla P100, 128 batch) | 1490 | 1485 | 760 | 1186 | 1062 | 1791
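The speed row was measured on a Tesla P100 with batch size 128. A rough sketch of how such a throughput figure could be reproduced, reusing the `model` and `tokenizer` loaded in the usage example above (the toy inputs and timing details are assumptions):
```python
import time
import torch

premises = ["I first thought that I liked the movie, but it was disappointing."] * 1024
hypotheses = ["The movie was good."] * 1024

model.eval().to("cuda")
start = time.time()
with torch.no_grad():
    for i in range(0, len(premises), 128):
        batch = tokenizer(
            premises[i:i + 128], hypotheses[i:i + 128],
            truncation=True, padding=True, return_tensors="pt"
        ).to("cuda")
        model(**batch)
print(f"{len(premises) / (time.time() - start):.0f} texts/sec")
```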
## Limitations and bias
Please consult the original paper and literature on different NLI datasets for potential biases.
### BibTeX entry and citation info
If you want to cite this model, please cite the original paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.
### Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)
### Debugging and issues
Note that the model was released recently and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 might solve some issues. | {"language": ["en"], "tags": ["text-classification", "zero-shot-classification"], "datasets": ["multi_nli", "anli", "fever", "lingnli"], "metrics": ["accuracy"], "pipeline_tag": "zero-shot-classification"} | MoritzLaurer/xtremedistil-l6-h256-mnli-fever-anli-ling-binary | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"zero-shot-classification",
"en",
"dataset:multi_nli",
"dataset:anli",
"dataset:fever",
"dataset:lingnli",
"arxiv:2104.07179",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.07179"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #text-classification #zero-shot-classification #en #dataset-multi_nli #dataset-anli #dataset-fever #dataset-lingnli #arxiv-2104.07179 #autotrain_compatible #endpoints_compatible #region-us
| xtremedistil-l6-h256-mnli-fever-anli-ling-binary
================================================
Model description
-----------------
This model was trained on 782 357 hypothesis-premise pairs from 4 NLI datasets: MultiNLI, Fever-NLI, LingNLI and ANLI.
Note that the model was trained on binary NLI to predict either "entailment" or "not-entailment". This is specifically designed for zero-shot classification, where the difference between "neutral" and "contradiction" is irrelevant.
The base model is xtremedistil-l6-h256-uncased from Microsoft.
Intended uses & limitations
---------------------------
#### How to use the model
### Training data
This model was trained on 782 357 hypothesis-premise pairs from 4 NLI datasets: MultiNLI, Fever-NLI, LingNLI and ANLI.
### Training procedure
xtremedistil-l6-h256-mnli-fever-anli-ling-binary was trained using the Hugging Face trainer with the following hyperparameters.
### Eval results
The model was evaluated using the binary test sets for MultiNLI, ANLI, LingNLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.
Limitations and bias
--------------------
Please consult the original paper and literature on different NLI datasets for potential biases.
### BibTeX entry and citation info
If you want to cite this model, please cite the original paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.
### Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or LinkedIn
### Debugging and issues
Note that the model was released recently and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 might solve some issues.
| [
"#### How to use the model",
"### Training data\n\n\nThis model was trained on 782 357 hypothesis-premise pairs from 4 NLI datasets: MultiNLI, Fever-NLI, LingNLI and ANLI.",
"### Training procedure\n\n\nxtremedistil-l6-h256-mnli-fever-anli-ling-binary was trained using the Hugging Face trainer with the following hyperparameters.",
"### Eval results\n\n\nThe model was evaluated using the binary test sets for MultiNLI, ANLI, LingNLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.\n\n\n\nLimitations and bias\n--------------------\n\n\nPlease consult the original paper and literature on different NLI datasets for potential biases.",
"### BibTeX entry and citation info\n\n\nIf you want to cite this model, please cite the original paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.",
"### Ideas for cooperation or questions?\n\n\nIf you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or LinkedIn",
"### Debugging and issues\n\n\nNote that the model was released recently and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 might solve some issues."
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #zero-shot-classification #en #dataset-multi_nli #dataset-anli #dataset-fever #dataset-lingnli #arxiv-2104.07179 #autotrain_compatible #endpoints_compatible #region-us \n",
"#### How to use the model",
"### Training data\n\n\nThis model was trained on 782 357 hypothesis-premise pairs from 4 NLI datasets: MultiNLI, Fever-NLI, LingNLI and ANLI.",
"### Training procedure\n\n\nxtremedistil-l6-h256-mnli-fever-anli-ling-binary was trained using the Hugging Face trainer with the following hyperparameters.",
"### Eval results\n\n\nThe model was evaluated using the binary test sets for MultiNLI, ANLI, LingNLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.\n\n\n\nLimitations and bias\n--------------------\n\n\nPlease consult the original paper and literature on different NLI datasets for potential biases.",
"### BibTeX entry and citation info\n\n\nIf you want to cite this model, please cite the original paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.",
"### Ideas for cooperation or questions?\n\n\nIf you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or LinkedIn",
"### Debugging and issues\n\n\nNote that the model was released recently and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 might solve some issues."
] |
fill-mask | transformers |
# TswanaBert
Pretrained model on the Tswana language using a masked language modeling (MLM) objective.
## Model Description.
TswanaBERT is a transformer model pre-trained on a corpus of Setswana in a self-supervised fashion by masking part of the input words and training to predict the masks by using byte-level tokens.
## Intended uses & limitations
The model can be used for either masked language modeling or next-word prediction. It can also be fine-tuned on a specific downstream NLP application.
#### How to use
```python
>>> from transformers import pipeline
>>> from transformers import AutoTokenizer, AutoModelWithLMHead
>>> tokenizer = AutoTokenizer.from_pretrained("MoseliMotsoehli/TswanaBert")
>>> model = AutoModelWithLMHead.from_pretrained("MoseliMotsoehli/TswanaBert")
>>> unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer)
>>> unmasker("Ntshopotse <mask> e godile.")
[{'score': 0.32749542593955994,
'sequence': '<s>Ntshopotse setse e godile.</s>',
'token': 538,
'token_str': 'Ġsetse'},
{'score': 0.060260992497205734,
'sequence': '<s>Ntshopotse le e godile.</s>',
'token': 270,
'token_str': 'Ġle'},
{'score': 0.058460816740989685,
'sequence': '<s>Ntshopotse bone e godile.</s>',
'token': 364,
'token_str': 'Ġbone'},
{'score': 0.05694682151079178,
'sequence': '<s>Ntshopotse ga e godile.</s>',
'token': 298,
'token_str': 'Ġga'},
{'score': 0.0565204992890358,
'sequence': '<s>Ntshopotse, e godile.</s>',
'token': 16,
'token_str': ','}]
```
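As noted under intended uses, the checkpoint can also be fine-tuned for a downstream task. A minimal sketch of attaching a classification head (the number of labels is a placeholder; the head is randomly initialized and still needs training, e.g. with the Hugging Face `Trainer` on a labeled Setswana dataset):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("MoseliMotsoehli/TswanaBert")
model = AutoModelForSequenceClassification.from_pretrained(
    "MoseliMotsoehli/TswanaBert", num_labels=2  # placeholder label count
)

inputs = tokenizer("Ntshopotse setse e godile.", return_tensors="pt")
print(model(**inputs).logits.shape)  # (1, num_labels), untrained head
```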
#### Limitations and bias
The model is trained on a relatively small collection of Setswana text, mostly from news articles and creative writing, and so is not yet representative enough of the language.
## Training data
1. The largest portion of this dataset, 10k sentences of text, comes from the [Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download).
2. We then added SABC news headlines collected by Marivate Vukosi & Sefara Tshephisho (2020), which are generously made available on [Zenodo](http://doi.org/10.5281/zenodo.3668495). This added 185 Tswana sentences to the corpus.
3. We went on to add 300 more sentences by scraping the following news sites and blogs, which mostly originate in Botswana. We actively continue to expand the dataset.
* http://setswana.blogspot.com/
* https://omniglot.com/writing/tswana.php
* http://www.dailynews.gov.bw/
* http://www.mmegi.bw/index.php
* https://tsena.co.bw
* http://www.botswana.co.za/Cultural_Issues-travel/botswana-country-guide-en-route.html
* https://www.poemhunter.com/poem/2013-setswana/
https://www.poemhunter.com/poem/ngwana-wa-mosetsana/
### BibTeX entry and citation info
```bibtex
@inproceedings{author = {Moseli Motsoehli},
year={2020}
}
```
| {"language": "tn"} | MoseliMotsoehli/TswanaBert | null | [
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"tn",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"tn"
] | TAGS
#transformers #pytorch #tf #jax #roberta #fill-mask #tn #autotrain_compatible #endpoints_compatible #region-us
|
# TswanaBert
Pretrained model on the Tswana language using a masked language modeling (MLM) objective.
## Model Description.
TswanaBERT is a transformer model pre-trained on a corpus of Setswana in a self-supervised fashion by masking part of the input words and training to predict the masks by using byte-level tokens.
## Intended uses & limitations
The model can be used for either masked language modeling or next-word prediction. It can also be fine-tuned on a specific downstream NLP application.
#### How to use
#### Limitations and bias
The model is trained on a relatively small collection of Setswana text, mostly from news articles and creative writing, and so is not yet representative enough of the language.
## Training data
1. The largest portion of this dataset, 10k sentences of text, comes from the Leipzig Corpora Collection.
2. We then added SABC news headlines collected by Marivate Vukosi & Sefara Tshephisho (2020), which are generously made available on Zenodo. This added 185 Tswana sentences to the corpus.
3. We went on to add 300 more sentences by scraping the following news sites and blogs, which mostly originate in Botswana. We actively continue to expand the dataset.
* URL
* URL
* URL
* URL
* URL
* URL
* URL
URL
### BibTeX entry and citation info
| [
"# TswanaBert\nPretrained model on the Tswana language using a masked language modeling (MLM) objective.",
"## Model Description.\nTswanaBERT is a transformer model pre-trained on a corpus of Setswana in a self-supervised fashion by masking part of the input words and training to predict the masks by using byte-level tokens.",
"## Intended uses & limitations\nThe model can be used for either masked language modeling or next-word prediction. It can also be fine-tuned on a specific downstream NLP application.",
"#### How to use",
"#### Limitations and bias\nThe model is trained on a relatively small collection of sestwana, mostly from news articles and creative writings, and so is not representative enough of the language as yet.",
"## Training data\n\n1. The largest portion of this dataset (10k) sentences of text, comes from the Leipzig Corpora Collection\n\n2. We then added SABC news headlines collected by Marivate Vukosi, & Sefara Tshephisho, (2020) that are generously made available on zenoodo. This added 185 tswana sentences to my corpus. \n\n3. We went on to add 300 more sentences by scrapping following news sites and blogs that mostly originate in Botswana. We actively continue to expand the dataset.\n\n* URL\n* URL\n* URL\n* URL\n* URL\n* URL\n* URL\nURL",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #jax #roberta #fill-mask #tn #autotrain_compatible #endpoints_compatible #region-us \n",
"# TswanaBert\nPretrained model on the Tswana language using a masked language modeling (MLM) objective.",
"## Model Description.\nTswanaBERT is a transformer model pre-trained on a corpus of Setswana in a self-supervised fashion by masking part of the input words and training to predict the masks by using byte-level tokens.",
"## Intended uses & limitations\nThe model can be used for either masked language modeling or next-word prediction. It can also be fine-tuned on a specific downstream NLP application.",
"#### How to use",
"#### Limitations and bias\nThe model is trained on a relatively small collection of sestwana, mostly from news articles and creative writings, and so is not representative enough of the language as yet.",
"## Training data\n\n1. The largest portion of this dataset (10k) sentences of text, comes from the Leipzig Corpora Collection\n\n2. We then added SABC news headlines collected by Marivate Vukosi, & Sefara Tshephisho, (2020) that are generously made available on zenoodo. This added 185 tswana sentences to my corpus. \n\n3. We went on to add 300 more sentences by scrapping following news sites and blogs that mostly originate in Botswana. We actively continue to expand the dataset.\n\n* URL\n* URL\n* URL\n* URL\n* URL\n* URL\n* URL\nURL",
"### BibTeX entry and citation info"
] |
fill-mask | transformers |
# zuBERTa
zuBERTa is a RoBERTa style transformer language model trained on zulu text.
## Intended uses & limitations
The model can be used for getting embeddings to use on a down-stream task such as question answering.
#### How to use
```python
>>> from transformers import pipeline
>>> from transformers import AutoTokenizer, AutoModelWithLMHead
>>> tokenizer = AutoTokenizer.from_pretrained("MoseliMotsoehli/zuBERTa")
>>> model = AutoModelWithLMHead.from_pretrained("MoseliMotsoehli/zuBERTa")
>>> unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer)
>>> unmasker("Abafika eNkandla bafika sebeholwa <mask> uMpongo kaZingelwayo.")
[
{
"sequence": "<s>Abafika eNkandla bafika sebeholwa khona uMpongo kaZingelwayo.</s>",
"score": 0.050459690392017365,
"token": 555,
"token_str": "Ġkhona"
},
{
"sequence": "<s>Abafika eNkandla bafika sebeholwa inkosi uMpongo kaZingelwayo.</s>",
"score": 0.03668094798922539,
"token": 2321,
"token_str": "Ġinkosi"
},
{
"sequence": "<s>Abafika eNkandla bafika sebeholwa ubukhosi uMpongo kaZingelwayo.</s>",
"score": 0.028774697333574295,
"token": 5101,
"token_str": "Ġubukhosi"
}
]
```
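As mentioned above, the model can also provide sentence embeddings for downstream tasks. One common recipe (a sketch, not part of the original card) is mean pooling over the last hidden states:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("MoseliMotsoehli/zuBERTa")
model = AutoModel.from_pretrained("MoseliMotsoehli/zuBERTa")

sentences = ["Abafika eNkandla bafika sebeholwa uMpongo kaZingelwayo."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**inputs).last_hidden_state      # (batch, seq_len, hidden_size)

mask = inputs["attention_mask"].unsqueeze(-1)        # mask out padding tokens
embeddings = (hidden * mask).sum(1) / mask.sum(1)    # mean pooling
print(embeddings.shape)                              # (batch_size, hidden_size)
```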
## Training data
1. 30k sentences of text came from the 2018 Zulu [Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download). These were collected from news articles and creative writing.
2. ~7500 articles of human-generated translations were scraped from the Zulu [Wikipedia](https://zu.wikipedia.org/wiki/Special:AllPages).
### BibTeX entry and citation info
```bibtex
@inproceedings{author = {Moseli Motsoehli},
title = {Towards transformation of Southern African language models through transformers.},
year={2020}
}
```
| {"language": "zu"} | MoseliMotsoehli/zuBERTa | null | [
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"zu",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"zu"
] | TAGS
#transformers #pytorch #tf #jax #roberta #fill-mask #zu #autotrain_compatible #endpoints_compatible #region-us
|
# zuBERTa
zuBERTa is a RoBERTa style transformer language model trained on zulu text.
## Intended uses & limitations
The model can be used for getting embeddings to use on a down-stream task such as question answering.
#### How to use
## Training data
1. 30k sentences of text came from the 2018 Zulu Leipzig Corpora Collection. These were collected from news articles and creative writing.
2. ~7500 articles of human-generated translations were scraped from the Zulu Wikipedia.
### BibTeX entry and citation info
| [
"# zuBERTa\nzuBERTa is a RoBERTa style transformer language model trained on zulu text.",
"## Intended uses & limitations\nThe model can be used for getting embeddings to use on a down-stream task such as question answering.",
"#### How to use",
"## Training data\n\n1. 30k sentences of text, came from the Leipzig Corpora Collection of zulu 2018. These were collected from news articles and creative writtings. \n2. ~7500 articles of human generated translations were scraped from the zulu wikipedia.",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #jax #roberta #fill-mask #zu #autotrain_compatible #endpoints_compatible #region-us \n",
"# zuBERTa\nzuBERTa is a RoBERTa style transformer language model trained on zulu text.",
"## Intended uses & limitations\nThe model can be used for getting embeddings to use on a down-stream task such as question answering.",
"#### How to use",
"## Training data\n\n1. 30k sentences of text, came from the Leipzig Corpora Collection of zulu 2018. These were collected from news articles and creative writtings. \n2. ~7500 articles of human generated translations were scraped from the zulu wikipedia.",
"### BibTeX entry and citation info"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-sst2-mahtab
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the glue dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4982
- eval_accuracy: 0.8830
- eval_runtime: 2.3447
- eval_samples_per_second: 371.91
- eval_steps_per_second: 46.489
- epoch: 1.0
- step: 8419
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
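For readers who want to reproduce the run, a sketch of how the listed values map onto `TrainingArguments`; the output directory is an assumption, and the Adam settings listed above are the `Trainer` defaults:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./distilbert-sst2-mahtab",  # assumed output path
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```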
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "model-index": [{"name": "distilbert-sst2-mahtab", "results": []}]} | Motahar/distilbert-sst2-mahtab | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# distilbert-sst2-mahtab
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the glue dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4982
- eval_accuracy: 0.8830
- eval_runtime: 2.3447
- eval_samples_per_second: 371.91
- eval_steps_per_second: 46.489
- epoch: 1.0
- step: 8419
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| [
"# distilbert-sst2-mahtab\n\nThis model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the glue dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.4982\n- eval_accuracy: 0.8830\n- eval_runtime: 2.3447\n- eval_samples_per_second: 371.91\n- eval_steps_per_second: 46.489\n- epoch: 1.0\n- step: 8419",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# distilbert-sst2-mahtab\n\nThis model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the glue dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.4982\n- eval_accuracy: 0.8830\n- eval_runtime: 2.3447\n- eval_samples_per_second: 371.91\n- eval_steps_per_second: 46.489\n- epoch: 1.0\n- step: 8419",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
text2text-generation | transformers | ### Description:
BART Model has been finetuned on CNN/DailyMail Dataset with Sample Size 10000.
### How To Use:
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
src_text = [" PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow.", "In the end, it played out like a movie. A tense, heartbreaking story, and then a surprise twist at the end. As eight of Mary Jane Veloso's fellow death row inmates -- mostly foreigners, like her -- were put to death by firing squad early Wednesday in a wooded grove on the Indonesian island of Nusa Kambangan, the Filipina maid and mother of two was spared, at least for now. Her family was returning from what they thought was their final visit to the prison on so-called \"execution island\" when a Philippine TV crew flagged their bus down to tell them of the decision to postpone her execution. Her ecstatic mother, Celia Veloso, told CNN: \"We are so happy, so happy. I thought I had lost my daughter already but God is so good. Thank you to everyone who helped us."]
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = AutoTokenizer.from_pretrained("Mousumi/finetuned_bart")
model = AutoModelForSeq2SeqLM.from_pretrained("Mousumi/finetuned_bart").to(torch_device)
no_samples = len(src_text)
result = []
for i in range(no_samples):
with tokenizer.as_target_tokenizer():
tokenized_text = tokenizer([src_text[i]], return_tensors='pt', padding=True, truncation=True)
batch = tokenized_text.to(torch_device)
translated = model.generate(**batch)
tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
result.append(tgt_text[0])
print(result)
``` | {} | Mousumi/finetuned_bart | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bart #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### Description:
BART Model has been finetuned on CNN/DailyMail Dataset with Sample Size 10000.
### How To Use:
| [
"### Description:\nBART Model has been finetuned on CNN/DailyMail Dataset with Sample Size 10000.",
"### How To Use:"
] | [
"TAGS\n#transformers #pytorch #bart #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Description:\nBART Model has been finetuned on CNN/DailyMail Dataset with Sample Size 10000.",
"### How To Use:"
] |
text2text-generation | transformers | ### Description:
Pegasus Model has been finetuned on CNN/DailyMail Dataset with Sample Size 10000.
### How To Use:
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
src_text = [" PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow.", "In the end, it played out like a movie. A tense, heartbreaking story, and then a surprise twist at the end. As eight of Mary Jane Veloso's fellow death row inmates -- mostly foreigners, like her -- were put to death by firing squad early Wednesday in a wooded grove on the Indonesian island of Nusa Kambangan, the Filipina maid and mother of two was spared, at least for now. Her family was returning from what they thought was their final visit to the prison on so-called \"execution island\" when a Philippine TV crew flagged their bus down to tell them of the decision to postpone her execution. Her ecstatic mother, Celia Veloso, told CNN: \"We are so happy, so happy. I thought I had lost my daughter already but God is so good. Thank you to everyone who helped us."]
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = AutoTokenizer.from_pretrained("Mousumi/finetuned_pegasus")
model = AutoModelForSeq2SeqLM.from_pretrained("Mousumi/finetuned_pegasus").to(torch_device)
no_samples = len(src_text)
result = []
for i in range(no_samples):
with tokenizer.as_target_tokenizer():
tokenized_text = tokenizer([src_text[i]], return_tensors='pt', padding=True, truncation=True)
batch = tokenized_text.to(torch_device)
translated = model.generate(**batch)
tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
result.append(tgt_text[0])
print(result)
``` | {} | Mousumi/finetuned_pegasus | null | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #pegasus #text2text-generation #autotrain_compatible #endpoints_compatible #region-us
| ### Description:
Pegasus Model has been finetuned on CNN/DailyMail Dataset with Sample Size 10000.
### How To Use:
| [
"### Description:\nPegasus Model has been finetuned on CNN/DailyMail Dataset with Sample Size 10000.",
"### How To Use:"
] | [
"TAGS\n#transformers #pytorch #pegasus #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n",
"### Description:\nPegasus Model has been finetuned on CNN/DailyMail Dataset with Sample Size 10000.",
"### How To Use:"
] |
text-generation | transformers | This model is the KoGPT 6B model released by Kakao Brain ('kakaobrain/kogpt'), saved in fp16.
### How to load the Kakao Brain model in fp16
```python
import torch
from transformers import GPTJForCausalLM
model = GPTJForCausalLM.from_pretrained('kakaobrain/kogpt', cache_dir='./my_dir', revision='KoGPT6B-ryan1.5b', torch_dtype=torch.float16)
```
### Generating text after loading the fp16 model
[](https://colab.research.google.com/drive/1_rLDzhGohJPbOD5I_eTIOdx4aOTp43uK?usp=sharing)
```python
import torch
from transformers import GPTJForCausalLM, AutoTokenizer
model = GPTJForCausalLM.from_pretrained('MrBananaHuman/kogpt_6b_fp16', low_cpu_mem_usage=True)
model.to('cuda')
tokenizer = AutoTokenizer.from_pretrained('MrBananaHuman/kogpt_6b_fp16')
input_text = '이순신은'
input_ids = tokenizer(input_text, return_tensors='pt').input_ids.to('cuda')
output = model.generate(input_ids, max_length=64)
print(tokenizer.decode(output[0]))
>>> 이순신은 우리에게 무엇인가? 1. 머리말 이글은 임진왜란 당시 이순인이 보여준
```
### Reference link
https://github.com/kakaobrain/kogpt/issues/6?fbclid=IwAR1KpWhuHnevQvEWV18o16k2z9TLgrXkbWTkKqzL-NDXHfDnWcIq7I4SJXM | {} | MrBananaHuman/kogpt_6b_fp16 | null | [
"transformers",
"pytorch",
"gptj",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gptj #text-generation #autotrain_compatible #endpoints_compatible #region-us
| This model is the KoGPT 6B model released by Kakao Brain ('kakaobrain/kogpt'), saved in fp16.
### How to load the Kakao Brain model in fp16
### Generating text after loading the fp16 model

`colorFrom`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
`colorTo`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
`sdk`: _string_
Can be either `gradio` or `streamlit`
`sdk_version` : _string_
Only applicable for `streamlit` SDK.
See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
`app_file`: _string_
Path to your main application file (which contains either `gradio` or `streamlit` Python code).
Path is relative to the root of the repository.
`pinned`: _boolean_
Whether the Space stays on top of your list. | {"title": "DPT Large", "emoji": "\ud83d\udc20", "colorFrom": "red", "colorTo": "blue", "sdk": "gradio", "app_file": "app.py", "pinned": false} | MrBodean/Depthmap | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
|
# Configuration
'title': _string_
Display title for the Space
'emoji': _string_
Space emoji (emoji-only character allowed)
'colorFrom': _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
'colorTo': _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
'sdk': _string_
Can be either 'gradio' or 'streamlit'
'sdk_version' : _string_
Only applicable for 'streamlit' SDK.
See doc for more info on supported versions.
'app_file': _string_
Path to your main application file (which contains either 'gradio' or 'streamlit' Python code).
Path is relative to the root of the repository.
'pinned': _boolean_
Whether the Space stays on top of your list. | [
"# Configuration\n\n'title': _string_ \nDisplay title for the Space\n\n'emoji': _string_ \nSpace emoji (emoji-only character allowed)\n\n'colorFrom': _string_ \nColor for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)\n\n'colorTo': _string_ \nColor for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)\n\n'sdk': _string_ \nCan be either 'gradio' or 'streamlit'\n\n'sdk_version' : _string_ \nOnly applicable for 'streamlit' SDK. \nSee doc for more info on supported versions.\n\n'app_file': _string_ \nPath to your main application file (which contains either 'gradio' or 'streamlit' Python code). \nPath is relative to the root of the repository.\n\n'pinned': _boolean_ \nWhether the Space stays on top of your list."
] | [
"TAGS\n#region-us \n",
"# Configuration\n\n'title': _string_ \nDisplay title for the Space\n\n'emoji': _string_ \nSpace emoji (emoji-only character allowed)\n\n'colorFrom': _string_ \nColor for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)\n\n'colorTo': _string_ \nColor for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)\n\n'sdk': _string_ \nCan be either 'gradio' or 'streamlit'\n\n'sdk_version' : _string_ \nOnly applicable for 'streamlit' SDK. \nSee doc for more info on supported versions.\n\n'app_file': _string_ \nPath to your main application file (which contains either 'gradio' or 'streamlit' Python code). \nPath is relative to the root of the repository.\n\n'pinned': _boolean_ \nWhether the Space stays on top of your list."
] |
text-generation | transformers |
# Rick DialoGPT model | {"tags": ["conversational"]} | MrDuckerino/DialoGPT-medium-Rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick DialoGPT model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Sarge | {"tags": ["conversational"]} | MrE/DialoGPT-medium-SARGE | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Sarge | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | # Sarge | {"tags": ["conversational"]} | MrE/DialoGPT-medium-SARGER1 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Sarge | [
"# Sarge"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Sarge"
] |
text-generation | transformers | # Sarge3 | {"tags": ["conversational"]} | MrE/DialoGPT-medium-SARGER3 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Sarge3 | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Delta Chat Model | {"pipeline_tag": "conversational"} | MrGentle/DeltaModel-genius1 | null | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Delta Chat Model | [] | [
"TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
text-generation | transformers | # Rick Sanchez DialoGPT model | {"tags": ["conversational"]} | MrZ/DialoGPT-small-Rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Rick Sanchez DialoGPT model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
sentence-similarity | sentence-transformers |
# SBERT-base-msmarco-asym
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 15600 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Asym(
(QRY-0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: BertModel
(DOCPOS-0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: BertModel
(DOCNEG-0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: BertModel
)
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
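Given the `Asym` structure above, queries and documents are routed through different branches at encoding time. A minimal sketch of how this is typically done with sentence-transformers' asymmetric models (an illustration, not taken from the SGPT codebase; using the `DOCPOS` route for documents at inference time is an assumption):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Muennighoff/SBERT-base-msmarco-asym")

# inputs are dicts keyed by the Asym route that should encode them
query_emb = model.encode([{"QRY": "what is semantic search?"}])
doc_emb = model.encode([{"DOCPOS": "Semantic search matches queries and documents by meaning."}])

print(util.cos_sim(query_emb, doc_emb))
```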
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | Muennighoff/SBERT-base-msmarco-asym | null | [
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"arxiv:2202.08904",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us
|
# SBERT-base-msmarco-asym
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 15600 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SBERT-base-msmarco-asym",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 15600 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us \n",
"# SBERT-base-msmarco-asym",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 15600 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# SBERT-base-msmarco-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 15600 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 0.0002
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
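The "bitfit" suffix refers to BitFit, i.e. fine-tuning only the bias terms of the transformer while keeping all other weights frozen; the higher learning rate above (2e-4 instead of the 2e-5 used for the fully fine-tuned variants) is typical for this setting. A minimal sketch of how that restriction could be applied before calling fit() (the starting checkpoint and the exact parameter selection in the SGPT codebase are assumptions):
```python
# Hedged sketch: freeze everything except bias terms (BitFit) before training.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("bert-base-uncased")  # assumed starting checkpoint

for name, param in model.named_parameters():
    # Train only parameters whose name marks them as a bias term.
    param.requires_grad = "bias" in name

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {trainable} / {total}")
```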
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | Muennighoff/SBERT-base-msmarco-bitfit | null | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2202.08904",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #arxiv-2202.08904 #endpoints_compatible #region-us
|
# SBERT-base-msmarco-bitfit
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 15600 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SBERT-base-msmarco-bitfit",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 15600 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #arxiv-2202.08904 #endpoints_compatible #region-us \n",
"# SBERT-base-msmarco-bitfit",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 15600 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# SBERT-base-msmarco
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 15600 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
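Put together, a hedged reconstruction of this configuration with the sentence-transformers API could look as follows; the (query, positive passage, hard negative) triplet and the starting checkpoint are illustrative placeholders, not the exact data pipeline from the codebase:
```python
# Hedged reconstruction of the training configuration listed above.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("bert-base-uncased")  # assumed starting checkpoint
model.max_seq_length = 300

# MS MARCO style (query, positive passage, hard negative) triplets
train_examples = [
    InputExample(texts=[
        "what is a sentence embedding",
        "A sentence embedding is a dense vector representation of a sentence.",
        "The Eiffel Tower is located in Paris.",
    ]),
]

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=1000,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    scheduler="WarmupLinear",
)
```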
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | Muennighoff/SBERT-base-msmarco | null | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2202.08904",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #arxiv-2202.08904 #endpoints_compatible #region-us
|
# SBERT-base-msmarco
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 15600 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SBERT-base-msmarco",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 15600 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #arxiv-2202.08904 #endpoints_compatible #region-us \n",
"# SBERT-base-msmarco",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 15600 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers | This model is used in "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning".
| {"license": "apache-2.0", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | Muennighoff/SBERT-base-nli-stsb-v2 | null | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #license-apache-2.0 #endpoints_compatible #region-us
| This model is used in "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning".
| [] | [
"TAGS\n#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
sentence-similarity | sentence-transformers |
# SBERT-base-nli-v2-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8807 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 880,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 0.0002
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 881,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | Muennighoff/SBERT-base-nli-v2-bitfit | null | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2202.08904",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #arxiv-2202.08904 #endpoints_compatible #region-us
|
# SBERT-base-nli-v2-bitfit
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SBERT-base-nli-v2-bitfit",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #arxiv-2202.08904 #endpoints_compatible #region-us \n",
"# SBERT-base-nli-v2-bitfit",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# SBERT-base-nli-v2
This model is used in "SGPT: GPT Sentence Embeddings for Semantic Search" and "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning".
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Muennighoff/SBERT-base-nli-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
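The embeddings are meant to be compared with cosine similarity, which is the similarity function used during training (see the loss settings below). Continuing the snippet above:
```python
# Continuing the snippet above: score the two example sentences against each other.
from sentence_transformers import util

print(util.cos_sim(embeddings[0], embeddings[1]))
```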
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Muennighoff/SBERT-base-nli-v2')
model = AutoModel.from_pretrained('Muennighoff/SBERT-base-nli-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Muennighoff/SBERT-base-nli-v2)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8807 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 880,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 881,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | Muennighoff/SBERT-base-nli-v2 | null | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2202.08904",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #arxiv-2202.08904 #endpoints_compatible #region-us
|
# SBERT-base-nli-v2
This model is used in "SGPT: GPT Sentence Embeddings for Semantic Search" and "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning".
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to our paper: URL
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SBERT-base-nli-v2\n\nThis model is used in \"SGPT: GPT Sentence Embeddings for Semantic Search\" and \"TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning\".",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #arxiv-2202.08904 #endpoints_compatible #region-us \n",
"# SBERT-base-nli-v2\n\nThis model is used in \"SGPT: GPT Sentence Embeddings for Semantic Search\" and \"TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning\".",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# SBERT-large-nli-v2
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 93941 with parameters:
```
{'batch_size': 6}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 9394,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 1e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 9395,
"weight_decay": 0.01
}
```
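The NoDuplicatesDataLoader listed above ensures that no sentence appears twice within a batch, which keeps the in-batch negatives of MultipleNegativesRankingLoss clean. A hedged sketch of constructing it from NLI-style triplets (illustrative data; a real run needs at least batch_size distinct examples to fill a batch):
```python
# Hedged sketch: building a NoDuplicatesDataLoader from NLI-style triplets.
from sentence_transformers import InputExample
from sentence_transformers.datasets import NoDuplicatesDataLoader

train_examples = [
    InputExample(texts=[
        "A soccer game with multiple males playing.",  # premise / anchor
        "Some men are playing a sport.",               # entailment / positive
        "The men are sitting in a cafe.",              # contradiction / hard negative
    ]),
]

# Construction only; pass it to model.fit() together with the loss above.
train_dataloader = NoDuplicatesDataLoader(train_examples, batch_size=6)
```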
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | Muennighoff/SBERT-large-nli-v2 | null | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2202.08904",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #arxiv-2202.08904 #endpoints_compatible #region-us
|
# SBERT-large-nli-v2
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 93941 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SBERT-large-nli-v2",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 93941 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #arxiv-2202.08904 #endpoints_compatible #region-us \n",
"# SBERT-large-nli-v2",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 93941 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# SGPT-1.3B-mean-nli
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
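As a quick, hedged usage sketch (the example sentences are placeholders), the model can also be loaded directly with sentence-transformers:
```python
# Minimal usage sketch; example sentences are placeholders.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Muennighoff/SGPT-1.3B-mean-nli")

embeddings = model.encode([
    "A man is playing a guitar.",
    "Someone is performing music.",
])
print(embeddings.shape)                            # (2, 2048): mean-pooled GPT-Neo states
print(util.cos_sim(embeddings[0], embeddings[1]))  # cosine similarity of the pair
```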
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 93941 with parameters:
```
{'batch_size': 6}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 9394,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 1e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 9395,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 2048, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | Muennighoff/SGPT-1.3B-mean-nli | null | [
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2202.08904",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #transformers #arxiv-2202.08904 #endpoints_compatible #region-us
|
# SGPT-1.3B-mean-nli
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 93941 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SGPT-1.3B-mean-nli",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 93941 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #transformers #arxiv-2202.08904 #endpoints_compatible #region-us \n",
"# SGPT-1.3B-mean-nli",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 93941 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
feature-extraction | sentence-transformers |
# SGPT-1.3B-weightedmean-msmarco-specb-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to the eval folder or our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 62398 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 0.0002
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 2048, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
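The Pooling layer above uses weighted-mean pooling ('pooling_mode_weightedmean_tokens': True) rather than plain mean pooling: token embeddings are averaged with weights that increase linearly with position, which suits the causal attention of GPT-style models where later tokens have seen more context. A hedged, standalone illustration of that pooling rule (not necessarily the exact implementation in the codebase):
```python
# Hedged illustration of position-weighted mean pooling over token embeddings.
import torch

def weighted_mean_pooling(token_embeddings, attention_mask):
    # token_embeddings: (batch, seq_len, hidden); attention_mask: (batch, seq_len)
    weights = torch.arange(1, token_embeddings.size(1) + 1,
                           dtype=token_embeddings.dtype,
                           device=token_embeddings.device)
    weights = weights.view(1, -1, 1).expand(token_embeddings.size())
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).to(token_embeddings.dtype)
    summed = torch.sum(token_embeddings * mask * weights, dim=1)
    denom = torch.clamp(torch.sum(mask * weights, dim=1), min=1e-9)
    return summed / denom

# Smoke test with random tensors standing in for GPT-Neo hidden states
emb = weighted_mean_pooling(torch.randn(2, 5, 2048), torch.ones(2, 5, dtype=torch.long))
print(emb.shape)  # torch.Size([2, 2048])
```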
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb"], "model-index": [{"name": "SGPT-1.3B-weightedmean-msmarco-specb-bitfit", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "mteb/amazon_counterfactual", "config": "en", "split": "test", "revision": "2d8a100785abf0ae21420d2a55b0c56e3e1ea996"}, "metrics": [{"type": "accuracy", "value": 65.20895522388061}, {"type": "ap", "value": 29.59212705444778}, {"type": "f1", "value": 59.97099864321921}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonPolarityClassification", "type": "mteb/amazon_polarity", "config": "default", "split": "test", "revision": "80714f8dcf8cefc218ef4f8c5a966dd83f75a0e1"}, "metrics": [{"type": "accuracy", "value": 73.20565}, {"type": "ap", "value": 67.36680643550963}, {"type": "f1", "value": 72.90420520325125}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (en)", "type": "mteb/amazon_reviews_multi", "config": "en", "split": "test", "revision": "c379a6705fec24a2493fa68e011692605f44e119"}, "metrics": [{"type": "accuracy", "value": 34.955999999999996}, {"type": "f1", "value": 34.719324437696955}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna", "type": "arguana", "config": "default", "split": "test", "revision": "5b3e3697907184a9b77a3c99ee9ea1a9cbb1e4e3"}, "metrics": [{"type": "map_at_1", "value": 26.101999999999997}, {"type": "map_at_10", "value": 40.958}, {"type": "map_at_100", "value": 42.033}, {"type": "map_at_1000", "value": 42.042}, {"type": "map_at_3", "value": 36.332}, {"type": "map_at_5", "value": 38.608}, {"type": "mrr_at_1", "value": 26.387}, {"type": "mrr_at_10", "value": 41.051}, {"type": "mrr_at_100", "value": 42.118}, {"type": "mrr_at_1000", "value": 42.126999999999995}, {"type": "mrr_at_3", "value": 36.415}, {"type": "mrr_at_5", "value": 38.72}, {"type": "ndcg_at_1", "value": 26.101999999999997}, {"type": "ndcg_at_10", "value": 49.68}, {"type": "ndcg_at_100", "value": 54.257999999999996}, {"type": "ndcg_at_1000", "value": 54.486000000000004}, {"type": "ndcg_at_3", "value": 39.864}, {"type": "ndcg_at_5", "value": 43.980000000000004}, {"type": "precision_at_1", "value": 26.101999999999997}, {"type": "precision_at_10", "value": 7.781000000000001}, {"type": "precision_at_100", "value": 0.979}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 16.714000000000002}, {"type": "precision_at_5", "value": 12.034}, {"type": "recall_at_1", "value": 26.101999999999997}, {"type": "recall_at_10", "value": 77.809}, {"type": "recall_at_100", "value": 97.866}, {"type": "recall_at_1000", "value": 99.644}, {"type": "recall_at_3", "value": 50.141999999999996}, {"type": "recall_at_5", "value": 60.171}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringP2P", "type": "mteb/arxiv-clustering-p2p", "config": "default", "split": "test", "revision": "0bbdb47bcbe3a90093699aefeed338a0f28a7ee8"}, "metrics": [{"type": "v_measure", "value": 43.384194916953774}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringS2S", "type": "mteb/arxiv-clustering-s2s", "config": "default", "split": "test", "revision": "b73bd54100e5abfa6e3a23dcafb46fe4d2438dc3"}, "metrics": [{"type": "v_measure", "value": 33.70962633433912}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB AskUbuntuDupQuestions", "type": "mteb/askubuntudupquestions-reranking", "config": "default", "split": 
"test", "revision": "4d853f94cd57d85ec13805aeeac3ae3e5eb4c49c"}, "metrics": [{"type": "map", "value": 58.133058996870076}, {"type": "mrr", "value": 72.10922041946972}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB BIOSSES", "type": "mteb/biosses-sts", "config": "default", "split": "test", "revision": "9ee918f184421b6bd48b78f6c714d86546106103"}, "metrics": [{"type": "cos_sim_pearson", "value": 86.62153841660047}, {"type": "cos_sim_spearman", "value": 83.01514456843276}, {"type": "euclidean_pearson", "value": 86.00431518427241}, {"type": "euclidean_spearman", "value": 83.85552516285783}, {"type": "manhattan_pearson", "value": 85.83025803351181}, {"type": "manhattan_spearman", "value": 83.86636878343106}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Banking77Classification", "type": "mteb/banking77", "config": "default", "split": "test", "revision": "44fa15921b4c889113cc5df03dd4901b49161ab7"}, "metrics": [{"type": "accuracy", "value": 82.05844155844156}, {"type": "f1", "value": 82.0185837884764}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringP2P", "type": "mteb/biorxiv-clustering-p2p", "config": "default", "split": "test", "revision": "11d0121201d1f1f280e8cc8f3d98fb9c4d9f9c55"}, "metrics": [{"type": "v_measure", "value": 35.05918333141837}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringS2S", "type": "mteb/biorxiv-clustering-s2s", "config": "default", "split": "test", "revision": "c0fab014e1bcb8d3a5e31b2088972a1e01547dc1"}, "metrics": [{"type": "v_measure", "value": 30.71055028830579}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackAndroidRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "2b9f5791698b5be7bc5e10535c8690f20043c3db"}, "metrics": [{"type": "map_at_1", "value": 26.519}, {"type": "map_at_10", "value": 35.634}, {"type": "map_at_100", "value": 36.961}, {"type": "map_at_1000", "value": 37.088}, {"type": "map_at_3", "value": 32.254}, {"type": "map_at_5", "value": 34.22}, {"type": "mrr_at_1", "value": 32.332}, {"type": "mrr_at_10", "value": 41.168}, {"type": "mrr_at_100", "value": 41.977}, {"type": "mrr_at_1000", "value": 42.028999999999996}, {"type": "mrr_at_3", "value": 38.196999999999996}, {"type": "mrr_at_5", "value": 40.036}, {"type": "ndcg_at_1", "value": 32.332}, {"type": "ndcg_at_10", "value": 41.471000000000004}, {"type": "ndcg_at_100", "value": 46.955999999999996}, {"type": "ndcg_at_1000", "value": 49.262}, {"type": "ndcg_at_3", "value": 35.937999999999995}, {"type": "ndcg_at_5", "value": 38.702999999999996}, {"type": "precision_at_1", "value": 32.332}, {"type": "precision_at_10", "value": 7.7829999999999995}, {"type": "precision_at_100", "value": 1.29}, {"type": "precision_at_1000", "value": 0.178}, {"type": "precision_at_3", "value": 16.834}, {"type": "precision_at_5", "value": 12.418}, {"type": "recall_at_1", "value": 26.519}, {"type": "recall_at_10", "value": 53.190000000000005}, {"type": "recall_at_100", "value": 76.56500000000001}, {"type": "recall_at_1000", "value": 91.47800000000001}, {"type": "recall_at_3", "value": 38.034}, {"type": "recall_at_5", "value": 45.245999999999995}, {"type": "map_at_1", "value": 25.356}, {"type": "map_at_10", "value": 34.596}, {"type": "map_at_100", "value": 35.714}, {"type": "map_at_1000", "value": 35.839999999999996}, {"type": "map_at_3", "value": 32.073}, {"type": "map_at_5", "value": 33.475}, {"type": "mrr_at_1", "value": 31.274}, {"type": "mrr_at_10", "value": 39.592}, {"type": 
"mrr_at_100", "value": 40.284}, {"type": "mrr_at_1000", "value": 40.339999999999996}, {"type": "mrr_at_3", "value": 37.378}, {"type": "mrr_at_5", "value": 38.658}, {"type": "ndcg_at_1", "value": 31.274}, {"type": "ndcg_at_10", "value": 39.766}, {"type": "ndcg_at_100", "value": 44.028}, {"type": "ndcg_at_1000", "value": 46.445}, {"type": "ndcg_at_3", "value": 35.934}, {"type": "ndcg_at_5", "value": 37.751000000000005}, {"type": "precision_at_1", "value": 31.274}, {"type": "precision_at_10", "value": 7.452}, {"type": "precision_at_100", "value": 1.217}, {"type": "precision_at_1000", "value": 0.16999999999999998}, {"type": "precision_at_3", "value": 17.431}, {"type": "precision_at_5", "value": 12.306000000000001}, {"type": "recall_at_1", "value": 25.356}, {"type": "recall_at_10", "value": 49.344}, {"type": "recall_at_100", "value": 67.497}, {"type": "recall_at_1000", "value": 83.372}, {"type": "recall_at_3", "value": 38.227}, {"type": "recall_at_5", "value": 43.187999999999995}, {"type": "map_at_1", "value": 32.759}, {"type": "map_at_10", "value": 43.937}, {"type": "map_at_100", "value": 45.004}, {"type": "map_at_1000", "value": 45.07}, {"type": "map_at_3", "value": 40.805}, {"type": "map_at_5", "value": 42.497}, {"type": "mrr_at_1", "value": 37.367}, {"type": "mrr_at_10", "value": 47.237}, {"type": "mrr_at_100", "value": 47.973}, {"type": "mrr_at_1000", "value": 48.010999999999996}, {"type": "mrr_at_3", "value": 44.65}, {"type": "mrr_at_5", "value": 46.050999999999995}, {"type": "ndcg_at_1", "value": 37.367}, {"type": "ndcg_at_10", "value": 49.659}, {"type": "ndcg_at_100", "value": 54.069}, {"type": "ndcg_at_1000", "value": 55.552}, {"type": "ndcg_at_3", "value": 44.169000000000004}, {"type": "ndcg_at_5", "value": 46.726}, {"type": "precision_at_1", "value": 37.367}, {"type": "precision_at_10", "value": 8.163}, {"type": "precision_at_100", "value": 1.133}, {"type": "precision_at_1000", "value": 0.131}, {"type": "precision_at_3", "value": 19.707}, {"type": "precision_at_5", "value": 13.718}, {"type": "recall_at_1", "value": 32.759}, {"type": "recall_at_10", "value": 63.341}, {"type": "recall_at_100", "value": 82.502}, {"type": "recall_at_1000", "value": 93.259}, {"type": "recall_at_3", "value": 48.796}, {"type": "recall_at_5", "value": 54.921}, {"type": "map_at_1", "value": 18.962}, {"type": "map_at_10", "value": 25.863000000000003}, {"type": "map_at_100", "value": 26.817999999999998}, {"type": "map_at_1000", "value": 26.918}, {"type": "map_at_3", "value": 23.043}, {"type": "map_at_5", "value": 24.599}, {"type": "mrr_at_1", "value": 20.452}, {"type": "mrr_at_10", "value": 27.301}, {"type": "mrr_at_100", "value": 28.233000000000004}, {"type": "mrr_at_1000", "value": 28.310000000000002}, {"type": "mrr_at_3", "value": 24.539}, {"type": "mrr_at_5", "value": 26.108999999999998}, {"type": "ndcg_at_1", "value": 20.452}, {"type": "ndcg_at_10", "value": 30.354999999999997}, {"type": "ndcg_at_100", "value": 35.336}, {"type": "ndcg_at_1000", "value": 37.927}, {"type": "ndcg_at_3", "value": 24.705}, {"type": "ndcg_at_5", "value": 27.42}, {"type": "precision_at_1", "value": 20.452}, {"type": "precision_at_10", "value": 4.949}, {"type": "precision_at_100", "value": 0.7799999999999999}, {"type": "precision_at_1000", "value": 0.104}, {"type": "precision_at_3", "value": 10.358}, {"type": "precision_at_5", "value": 7.774}, {"type": "recall_at_1", "value": 18.962}, {"type": "recall_at_10", "value": 43.056}, {"type": "recall_at_100", "value": 66.27300000000001}, {"type": "recall_at_1000", "value": 
85.96000000000001}, {"type": "recall_at_3", "value": 27.776}, {"type": "recall_at_5", "value": 34.287}, {"type": "map_at_1", "value": 11.24}, {"type": "map_at_10", "value": 18.503}, {"type": "map_at_100", "value": 19.553}, {"type": "map_at_1000", "value": 19.689999999999998}, {"type": "map_at_3", "value": 16.150000000000002}, {"type": "map_at_5", "value": 17.254}, {"type": "mrr_at_1", "value": 13.806}, {"type": "mrr_at_10", "value": 21.939}, {"type": "mrr_at_100", "value": 22.827}, {"type": "mrr_at_1000", "value": 22.911}, {"type": "mrr_at_3", "value": 19.32}, {"type": "mrr_at_5", "value": 20.558}, {"type": "ndcg_at_1", "value": 13.806}, {"type": "ndcg_at_10", "value": 23.383000000000003}, {"type": "ndcg_at_100", "value": 28.834}, {"type": "ndcg_at_1000", "value": 32.175}, {"type": "ndcg_at_3", "value": 18.651999999999997}, {"type": "ndcg_at_5", "value": 20.505000000000003}, {"type": "precision_at_1", "value": 13.806}, {"type": "precision_at_10", "value": 4.714}, {"type": "precision_at_100", "value": 0.864}, {"type": "precision_at_1000", "value": 0.13}, {"type": "precision_at_3", "value": 9.328}, {"type": "precision_at_5", "value": 6.841}, {"type": "recall_at_1", "value": 11.24}, {"type": "recall_at_10", "value": 34.854}, {"type": "recall_at_100", "value": 59.50299999999999}, {"type": "recall_at_1000", "value": 83.25}, {"type": "recall_at_3", "value": 22.02}, {"type": "recall_at_5", "value": 26.715}, {"type": "map_at_1", "value": 23.012}, {"type": "map_at_10", "value": 33.048}, {"type": "map_at_100", "value": 34.371}, {"type": "map_at_1000", "value": 34.489}, {"type": "map_at_3", "value": 29.942999999999998}, {"type": "map_at_5", "value": 31.602000000000004}, {"type": "mrr_at_1", "value": 28.104000000000003}, {"type": "mrr_at_10", "value": 37.99}, {"type": "mrr_at_100", "value": 38.836}, {"type": "mrr_at_1000", "value": 38.891}, {"type": "mrr_at_3", "value": 35.226}, {"type": "mrr_at_5", "value": 36.693999999999996}, {"type": "ndcg_at_1", "value": 28.104000000000003}, {"type": "ndcg_at_10", "value": 39.037}, {"type": "ndcg_at_100", "value": 44.643}, {"type": "ndcg_at_1000", "value": 46.939}, {"type": "ndcg_at_3", "value": 33.784}, {"type": "ndcg_at_5", "value": 36.126000000000005}, {"type": "precision_at_1", "value": 28.104000000000003}, {"type": "precision_at_10", "value": 7.2669999999999995}, {"type": "precision_at_100", "value": 1.193}, {"type": "precision_at_1000", "value": 0.159}, {"type": "precision_at_3", "value": 16.298000000000002}, {"type": "precision_at_5", "value": 11.684}, {"type": "recall_at_1", "value": 23.012}, {"type": "recall_at_10", "value": 52.054}, {"type": "recall_at_100", "value": 75.622}, {"type": "recall_at_1000", "value": 90.675}, {"type": "recall_at_3", "value": 37.282}, {"type": "recall_at_5", "value": 43.307}, {"type": "map_at_1", "value": 21.624}, {"type": "map_at_10", "value": 30.209999999999997}, {"type": "map_at_100", "value": 31.52}, {"type": "map_at_1000", "value": 31.625999999999998}, {"type": "map_at_3", "value": 26.951000000000004}, {"type": "map_at_5", "value": 28.938999999999997}, {"type": "mrr_at_1", "value": 26.941}, {"type": "mrr_at_10", "value": 35.13}, {"type": "mrr_at_100", "value": 36.15}, {"type": "mrr_at_1000", "value": 36.204}, {"type": "mrr_at_3", "value": 32.42}, {"type": "mrr_at_5", "value": 34.155}, {"type": "ndcg_at_1", "value": 26.941}, {"type": "ndcg_at_10", "value": 35.726}, {"type": "ndcg_at_100", "value": 41.725}, {"type": "ndcg_at_1000", "value": 44.105}, {"type": "ndcg_at_3", "value": 30.184}, {"type": "ndcg_at_5", "value": 
33.176}, {"type": "precision_at_1", "value": 26.941}, {"type": "precision_at_10", "value": 6.654999999999999}, {"type": "precision_at_100", "value": 1.1520000000000001}, {"type": "precision_at_1000", "value": 0.152}, {"type": "precision_at_3", "value": 14.346}, {"type": "precision_at_5", "value": 10.868}, {"type": "recall_at_1", "value": 21.624}, {"type": "recall_at_10", "value": 47.359}, {"type": "recall_at_100", "value": 73.436}, {"type": "recall_at_1000", "value": 89.988}, {"type": "recall_at_3", "value": 32.34}, {"type": "recall_at_5", "value": 39.856}, {"type": "map_at_1", "value": 20.67566666666667}, {"type": "map_at_10", "value": 28.479333333333333}, {"type": "map_at_100", "value": 29.612249999999996}, {"type": "map_at_1000", "value": 29.731166666666663}, {"type": "map_at_3", "value": 25.884}, {"type": "map_at_5", "value": 27.298916666666667}, {"type": "mrr_at_1", "value": 24.402583333333332}, {"type": "mrr_at_10", "value": 32.07041666666667}, {"type": "mrr_at_100", "value": 32.95841666666667}, {"type": "mrr_at_1000", "value": 33.025416666666665}, {"type": "mrr_at_3", "value": 29.677749999999996}, {"type": "mrr_at_5", "value": 31.02391666666667}, {"type": "ndcg_at_1", "value": 24.402583333333332}, {"type": "ndcg_at_10", "value": 33.326166666666666}, {"type": "ndcg_at_100", "value": 38.51566666666667}, {"type": "ndcg_at_1000", "value": 41.13791666666667}, {"type": "ndcg_at_3", "value": 28.687749999999994}, {"type": "ndcg_at_5", "value": 30.84766666666667}, {"type": "precision_at_1", "value": 24.402583333333332}, {"type": "precision_at_10", "value": 5.943749999999999}, {"type": "precision_at_100", "value": 1.0098333333333334}, {"type": "precision_at_1000", "value": 0.14183333333333334}, {"type": "precision_at_3", "value": 13.211500000000001}, {"type": "precision_at_5", "value": 9.548416666666668}, {"type": "recall_at_1", "value": 20.67566666666667}, {"type": "recall_at_10", "value": 44.245583333333336}, {"type": "recall_at_100", "value": 67.31116666666667}, {"type": "recall_at_1000", "value": 85.87841666666665}, {"type": "recall_at_3", "value": 31.49258333333333}, {"type": "recall_at_5", "value": 36.93241666666667}, {"type": "map_at_1", "value": 18.34}, {"type": "map_at_10", "value": 23.988}, {"type": "map_at_100", "value": 24.895}, {"type": "map_at_1000", "value": 24.992}, {"type": "map_at_3", "value": 21.831}, {"type": "map_at_5", "value": 23.0}, {"type": "mrr_at_1", "value": 20.399}, {"type": "mrr_at_10", "value": 26.186}, {"type": "mrr_at_100", "value": 27.017999999999997}, {"type": "mrr_at_1000", "value": 27.090999999999998}, {"type": "mrr_at_3", "value": 24.08}, {"type": "mrr_at_5", "value": 25.230000000000004}, {"type": "ndcg_at_1", "value": 20.399}, {"type": "ndcg_at_10", "value": 27.799000000000003}, {"type": "ndcg_at_100", "value": 32.579}, {"type": "ndcg_at_1000", "value": 35.209}, {"type": "ndcg_at_3", "value": 23.684}, {"type": "ndcg_at_5", "value": 25.521}, {"type": "precision_at_1", "value": 20.399}, {"type": "precision_at_10", "value": 4.585999999999999}, {"type": "precision_at_100", "value": 0.755}, {"type": "precision_at_1000", "value": 0.105}, {"type": "precision_at_3", "value": 10.276}, {"type": "precision_at_5", "value": 7.362}, {"type": "recall_at_1", "value": 18.34}, {"type": "recall_at_10", "value": 37.456}, {"type": "recall_at_100", "value": 59.86}, {"type": "recall_at_1000", "value": 79.703}, {"type": "recall_at_3", "value": 26.163999999999998}, {"type": "recall_at_5", "value": 30.652}, {"type": "map_at_1", "value": 12.327}, {"type": "map_at_10", "value": 
17.572}, {"type": "map_at_100", "value": 18.534}, {"type": "map_at_1000", "value": 18.653}, {"type": "map_at_3", "value": 15.703}, {"type": "map_at_5", "value": 16.752}, {"type": "mrr_at_1", "value": 15.038000000000002}, {"type": "mrr_at_10", "value": 20.726}, {"type": "mrr_at_100", "value": 21.61}, {"type": "mrr_at_1000", "value": 21.695}, {"type": "mrr_at_3", "value": 18.829}, {"type": "mrr_at_5", "value": 19.885}, {"type": "ndcg_at_1", "value": 15.038000000000002}, {"type": "ndcg_at_10", "value": 21.241}, {"type": "ndcg_at_100", "value": 26.179000000000002}, {"type": "ndcg_at_1000", "value": 29.316}, {"type": "ndcg_at_3", "value": 17.762}, {"type": "ndcg_at_5", "value": 19.413}, {"type": "precision_at_1", "value": 15.038000000000002}, {"type": "precision_at_10", "value": 3.8920000000000003}, {"type": "precision_at_100", "value": 0.75}, {"type": "precision_at_1000", "value": 0.11800000000000001}, {"type": "precision_at_3", "value": 8.351}, {"type": "precision_at_5", "value": 6.187}, {"type": "recall_at_1", "value": 12.327}, {"type": "recall_at_10", "value": 29.342000000000002}, {"type": "recall_at_100", "value": 51.854}, {"type": "recall_at_1000", "value": 74.648}, {"type": "recall_at_3", "value": 19.596}, {"type": "recall_at_5", "value": 23.899}, {"type": "map_at_1", "value": 20.594}, {"type": "map_at_10", "value": 27.878999999999998}, {"type": "map_at_100", "value": 28.926000000000002}, {"type": "map_at_1000", "value": 29.041}, {"type": "map_at_3", "value": 25.668999999999997}, {"type": "map_at_5", "value": 26.773999999999997}, {"type": "mrr_at_1", "value": 23.694000000000003}, {"type": "mrr_at_10", "value": 31.335}, {"type": "mrr_at_100", "value": 32.218}, {"type": "mrr_at_1000", "value": 32.298}, {"type": "mrr_at_3", "value": 29.26}, {"type": "mrr_at_5", "value": 30.328}, {"type": "ndcg_at_1", "value": 23.694000000000003}, {"type": "ndcg_at_10", "value": 32.456}, {"type": "ndcg_at_100", "value": 37.667}, {"type": "ndcg_at_1000", "value": 40.571}, {"type": "ndcg_at_3", "value": 28.283}, {"type": "ndcg_at_5", "value": 29.986}, {"type": "precision_at_1", "value": 23.694000000000003}, {"type": "precision_at_10", "value": 5.448}, {"type": "precision_at_100", "value": 0.9119999999999999}, {"type": "precision_at_1000", "value": 0.127}, {"type": "precision_at_3", "value": 12.717999999999998}, {"type": "precision_at_5", "value": 8.843}, {"type": "recall_at_1", "value": 20.594}, {"type": "recall_at_10", "value": 43.004999999999995}, {"type": "recall_at_100", "value": 66.228}, {"type": "recall_at_1000", "value": 87.17099999999999}, {"type": "recall_at_3", "value": 31.554}, {"type": "recall_at_5", "value": 35.838}, {"type": "map_at_1", "value": 20.855999999999998}, {"type": "map_at_10", "value": 28.372000000000003}, {"type": "map_at_100", "value": 29.87}, {"type": "map_at_1000", "value": 30.075000000000003}, {"type": "map_at_3", "value": 26.054}, {"type": "map_at_5", "value": 27.128999999999998}, {"type": "mrr_at_1", "value": 25.494}, {"type": "mrr_at_10", "value": 32.735}, {"type": "mrr_at_100", "value": 33.794000000000004}, {"type": "mrr_at_1000", "value": 33.85}, {"type": "mrr_at_3", "value": 30.731}, {"type": "mrr_at_5", "value": 31.897}, {"type": "ndcg_at_1", "value": 25.494}, {"type": "ndcg_at_10", "value": 33.385}, {"type": "ndcg_at_100", "value": 39.436}, {"type": "ndcg_at_1000", "value": 42.313}, {"type": "ndcg_at_3", "value": 29.612}, {"type": "ndcg_at_5", "value": 31.186999999999998}, {"type": "precision_at_1", "value": 25.494}, {"type": "precision_at_10", "value": 6.422999999999999}, 
{"type": "precision_at_100", "value": 1.383}, {"type": "precision_at_1000", "value": 0.22399999999999998}, {"type": "precision_at_3", "value": 13.834}, {"type": "precision_at_5", "value": 10.0}, {"type": "recall_at_1", "value": 20.855999999999998}, {"type": "recall_at_10", "value": 42.678}, {"type": "recall_at_100", "value": 70.224}, {"type": "recall_at_1000", "value": 89.369}, {"type": "recall_at_3", "value": 31.957}, {"type": "recall_at_5", "value": 36.026}, {"type": "map_at_1", "value": 16.519000000000002}, {"type": "map_at_10", "value": 22.15}, {"type": "map_at_100", "value": 23.180999999999997}, {"type": "map_at_1000", "value": 23.291999999999998}, {"type": "map_at_3", "value": 20.132}, {"type": "map_at_5", "value": 21.346}, {"type": "mrr_at_1", "value": 17.93}, {"type": "mrr_at_10", "value": 23.506}, {"type": "mrr_at_100", "value": 24.581}, {"type": "mrr_at_1000", "value": 24.675}, {"type": "mrr_at_3", "value": 21.503}, {"type": "mrr_at_5", "value": 22.686}, {"type": "ndcg_at_1", "value": 17.93}, {"type": "ndcg_at_10", "value": 25.636}, {"type": "ndcg_at_100", "value": 30.736}, {"type": "ndcg_at_1000", "value": 33.841}, {"type": "ndcg_at_3", "value": 21.546000000000003}, {"type": "ndcg_at_5", "value": 23.658}, {"type": "precision_at_1", "value": 17.93}, {"type": "precision_at_10", "value": 3.993}, {"type": "precision_at_100", "value": 0.6890000000000001}, {"type": "precision_at_1000", "value": 0.104}, {"type": "precision_at_3", "value": 9.057}, {"type": "precision_at_5", "value": 6.58}, {"type": "recall_at_1", "value": 16.519000000000002}, {"type": "recall_at_10", "value": 35.268}, {"type": "recall_at_100", "value": 58.17}, {"type": "recall_at_1000", "value": 81.66799999999999}, {"type": "recall_at_3", "value": 24.165}, {"type": "recall_at_5", "value": 29.254}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ClimateFEVER", "type": "climate-fever", "config": "default", "split": "test", "revision": "392b78eb68c07badcd7c2cd8f39af108375dfcce"}, "metrics": [{"type": "map_at_1", "value": 10.363}, {"type": "map_at_10", "value": 18.301000000000002}, {"type": "map_at_100", "value": 20.019000000000002}, {"type": "map_at_1000", "value": 20.207}, {"type": "map_at_3", "value": 14.877}, {"type": "map_at_5", "value": 16.544}, {"type": "mrr_at_1", "value": 22.866}, {"type": "mrr_at_10", "value": 34.935}, {"type": "mrr_at_100", "value": 35.802}, {"type": "mrr_at_1000", "value": 35.839999999999996}, {"type": "mrr_at_3", "value": 30.965999999999998}, {"type": "mrr_at_5", "value": 33.204}, {"type": "ndcg_at_1", "value": 22.866}, {"type": "ndcg_at_10", "value": 26.595000000000002}, {"type": "ndcg_at_100", "value": 33.513999999999996}, {"type": "ndcg_at_1000", "value": 36.872}, {"type": "ndcg_at_3", "value": 20.666999999999998}, {"type": "ndcg_at_5", "value": 22.728}, {"type": "precision_at_1", "value": 22.866}, {"type": "precision_at_10", "value": 8.632}, {"type": "precision_at_100", "value": 1.6119999999999999}, {"type": "precision_at_1000", "value": 0.22399999999999998}, {"type": "precision_at_3", "value": 15.504999999999999}, {"type": "precision_at_5", "value": 12.404}, {"type": "recall_at_1", "value": 10.363}, {"type": "recall_at_10", "value": 33.494}, {"type": "recall_at_100", "value": 57.593}, {"type": "recall_at_1000", "value": 76.342}, {"type": "recall_at_3", "value": 19.157}, {"type": "recall_at_5", "value": 24.637999999999998}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia", "type": "dbpedia-entity", "config": "default", "split": "test", "revision": 
"f097057d03ed98220bc7309ddb10b71a54d667d6"}, "metrics": [{"type": "map_at_1", "value": 7.436}, {"type": "map_at_10", "value": 14.760000000000002}, {"type": "map_at_100", "value": 19.206}, {"type": "map_at_1000", "value": 20.267}, {"type": "map_at_3", "value": 10.894}, {"type": "map_at_5", "value": 12.828999999999999}, {"type": "mrr_at_1", "value": 54.25}, {"type": "mrr_at_10", "value": 63.769}, {"type": "mrr_at_100", "value": 64.193}, {"type": "mrr_at_1000", "value": 64.211}, {"type": "mrr_at_3", "value": 61.458}, {"type": "mrr_at_5", "value": 63.096}, {"type": "ndcg_at_1", "value": 42.875}, {"type": "ndcg_at_10", "value": 31.507}, {"type": "ndcg_at_100", "value": 34.559}, {"type": "ndcg_at_1000", "value": 41.246}, {"type": "ndcg_at_3", "value": 35.058}, {"type": "ndcg_at_5", "value": 33.396}, {"type": "precision_at_1", "value": 54.25}, {"type": "precision_at_10", "value": 24.45}, {"type": "precision_at_100", "value": 7.383000000000001}, {"type": "precision_at_1000", "value": 1.582}, {"type": "precision_at_3", "value": 38.083}, {"type": "precision_at_5", "value": 32.6}, {"type": "recall_at_1", "value": 7.436}, {"type": "recall_at_10", "value": 19.862}, {"type": "recall_at_100", "value": 38.981}, {"type": "recall_at_1000", "value": 61.038000000000004}, {"type": "recall_at_3", "value": 11.949}, {"type": "recall_at_5", "value": 15.562000000000001}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB EmotionClassification", "type": "mteb/emotion", "config": "default", "split": "test", "revision": "829147f8f75a25f005913200eb5ed41fae320aa1"}, "metrics": [{"type": "accuracy", "value": 46.39}, {"type": "f1", "value": 42.26424885856703}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FEVER", "type": "fever", "config": "default", "split": "test", "revision": "1429cf27e393599b8b359b9b72c666f96b2525f9"}, "metrics": [{"type": "map_at_1", "value": 50.916}, {"type": "map_at_10", "value": 62.258}, {"type": "map_at_100", "value": 62.741}, {"type": "map_at_1000", "value": 62.763000000000005}, {"type": "map_at_3", "value": 60.01800000000001}, {"type": "map_at_5", "value": 61.419999999999995}, {"type": "mrr_at_1", "value": 54.964999999999996}, {"type": "mrr_at_10", "value": 66.554}, {"type": "mrr_at_100", "value": 66.96600000000001}, {"type": "mrr_at_1000", "value": 66.97800000000001}, {"type": "mrr_at_3", "value": 64.414}, {"type": "mrr_at_5", "value": 65.77}, {"type": "ndcg_at_1", "value": 54.964999999999996}, {"type": "ndcg_at_10", "value": 68.12}, {"type": "ndcg_at_100", "value": 70.282}, {"type": "ndcg_at_1000", "value": 70.788}, {"type": "ndcg_at_3", "value": 63.861999999999995}, {"type": "ndcg_at_5", "value": 66.216}, {"type": "precision_at_1", "value": 54.964999999999996}, {"type": "precision_at_10", "value": 8.998000000000001}, {"type": "precision_at_100", "value": 1.016}, {"type": "precision_at_1000", "value": 0.107}, {"type": "precision_at_3", "value": 25.618000000000002}, {"type": "precision_at_5", "value": 16.676}, {"type": "recall_at_1", "value": 50.916}, {"type": "recall_at_10", "value": 82.04}, {"type": "recall_at_100", "value": 91.689}, {"type": "recall_at_1000", "value": 95.34899999999999}, {"type": "recall_at_3", "value": 70.512}, {"type": "recall_at_5", "value": 76.29899999999999}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA2018", "type": "fiqa", "config": "default", "split": "test", "revision": "41b686a7f28c59bcaaa5791efd47c67c8ebe28be"}, "metrics": [{"type": "map_at_1", "value": 13.568}, {"type": "map_at_10", "value": 23.264000000000003}, 
{"type": "map_at_100", "value": 24.823999999999998}, {"type": "map_at_1000", "value": 25.013999999999996}, {"type": "map_at_3", "value": 19.724}, {"type": "map_at_5", "value": 21.772}, {"type": "mrr_at_1", "value": 27.315}, {"type": "mrr_at_10", "value": 35.935}, {"type": "mrr_at_100", "value": 36.929}, {"type": "mrr_at_1000", "value": 36.985}, {"type": "mrr_at_3", "value": 33.591}, {"type": "mrr_at_5", "value": 34.848}, {"type": "ndcg_at_1", "value": 27.315}, {"type": "ndcg_at_10", "value": 29.988}, {"type": "ndcg_at_100", "value": 36.41}, {"type": "ndcg_at_1000", "value": 40.184999999999995}, {"type": "ndcg_at_3", "value": 26.342}, {"type": "ndcg_at_5", "value": 27.68}, {"type": "precision_at_1", "value": 27.315}, {"type": "precision_at_10", "value": 8.565000000000001}, {"type": "precision_at_100", "value": 1.508}, {"type": "precision_at_1000", "value": 0.219}, {"type": "precision_at_3", "value": 17.849999999999998}, {"type": "precision_at_5", "value": 13.672999999999998}, {"type": "recall_at_1", "value": 13.568}, {"type": "recall_at_10", "value": 37.133}, {"type": "recall_at_100", "value": 61.475}, {"type": "recall_at_1000", "value": 84.372}, {"type": "recall_at_3", "value": 24.112000000000002}, {"type": "recall_at_5", "value": 29.507}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA", "type": "hotpotqa", "config": "default", "split": "test", "revision": "766870b35a1b9ca65e67a0d1913899973551fc6c"}, "metrics": [{"type": "map_at_1", "value": 30.878}, {"type": "map_at_10", "value": 40.868}, {"type": "map_at_100", "value": 41.693999999999996}, {"type": "map_at_1000", "value": 41.775}, {"type": "map_at_3", "value": 38.56}, {"type": "map_at_5", "value": 39.947}, {"type": "mrr_at_1", "value": 61.756}, {"type": "mrr_at_10", "value": 68.265}, {"type": "mrr_at_100", "value": 68.671}, {"type": "mrr_at_1000", "value": 68.694}, {"type": "mrr_at_3", "value": 66.78399999999999}, {"type": "mrr_at_5", "value": 67.704}, {"type": "ndcg_at_1", "value": 61.756}, {"type": "ndcg_at_10", "value": 49.931}, {"type": "ndcg_at_100", "value": 53.179}, {"type": "ndcg_at_1000", "value": 54.94799999999999}, {"type": "ndcg_at_3", "value": 46.103}, {"type": "ndcg_at_5", "value": 48.147}, {"type": "precision_at_1", "value": 61.756}, {"type": "precision_at_10", "value": 10.163}, {"type": "precision_at_100", "value": 1.2710000000000001}, {"type": "precision_at_1000", "value": 0.151}, {"type": "precision_at_3", "value": 28.179}, {"type": "precision_at_5", "value": 18.528}, {"type": "recall_at_1", "value": 30.878}, {"type": "recall_at_10", "value": 50.817}, {"type": "recall_at_100", "value": 63.544999999999995}, {"type": "recall_at_1000", "value": 75.361}, {"type": "recall_at_3", "value": 42.269}, {"type": "recall_at_5", "value": 46.32}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ImdbClassification", "type": "mteb/imdb", "config": "default", "split": "test", "revision": "8d743909f834c38949e8323a8a6ce8721ea6c7f4"}, "metrics": [{"type": "accuracy", "value": 64.04799999999999}, {"type": "ap", "value": 59.185251455339284}, {"type": "f1", "value": 63.947123181349255}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MSMARCO", "type": "msmarco", "config": "default", "split": "validation", "revision": "e6838a846e2408f22cf5cc337ebc83e0bcf77849"}, "metrics": [{"type": "map_at_1", "value": 18.9}, {"type": "map_at_10", "value": 29.748}, {"type": "map_at_100", "value": 30.976}, {"type": "map_at_1000", "value": 31.041}, {"type": "map_at_3", "value": 26.112999999999996}, {"type": 
"map_at_5", "value": 28.197}, {"type": "mrr_at_1", "value": 19.413}, {"type": "mrr_at_10", "value": 30.322}, {"type": "mrr_at_100", "value": 31.497000000000003}, {"type": "mrr_at_1000", "value": 31.555}, {"type": "mrr_at_3", "value": 26.729000000000003}, {"type": "mrr_at_5", "value": 28.788999999999998}, {"type": "ndcg_at_1", "value": 19.413}, {"type": "ndcg_at_10", "value": 36.048}, {"type": "ndcg_at_100", "value": 42.152}, {"type": "ndcg_at_1000", "value": 43.772}, {"type": "ndcg_at_3", "value": 28.642}, {"type": "ndcg_at_5", "value": 32.358}, {"type": "precision_at_1", "value": 19.413}, {"type": "precision_at_10", "value": 5.785}, {"type": "precision_at_100", "value": 0.8869999999999999}, {"type": "precision_at_1000", "value": 0.10300000000000001}, {"type": "precision_at_3", "value": 12.192}, {"type": "precision_at_5", "value": 9.189}, {"type": "recall_at_1", "value": 18.9}, {"type": "recall_at_10", "value": 55.457}, {"type": "recall_at_100", "value": 84.09100000000001}, {"type": "recall_at_1000", "value": 96.482}, {"type": "recall_at_3", "value": 35.359}, {"type": "recall_at_5", "value": 44.275}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "mteb/mtop_domain", "config": "en", "split": "test", "revision": "a7e2a951126a26fc8c6a69f835f33a346ba259e3"}, "metrics": [{"type": "accuracy", "value": 92.07706338349293}, {"type": "f1", "value": 91.56680443236652}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": "mteb/mtop_intent", "config": "en", "split": "test", "revision": "6299947a7777084cc2d4b64235bf7190381ce755"}, "metrics": [{"type": "accuracy", "value": 71.18559051527589}, {"type": "f1", "value": 52.42887061726789}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": "mteb/amazon_massive_intent", "config": "en", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 68.64828513786148}, {"type": "f1", "value": 66.54281381596097}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "mteb/amazon_massive_scenario", "config": "en", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 76.04236718224612}, {"type": "f1", "value": 75.89170458655639}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P", "type": "mteb/medrxiv-clustering-p2p", "config": "default", "split": "test", "revision": "dcefc037ef84348e49b0d29109e891c01067226b"}, "metrics": [{"type": "v_measure", "value": 32.0840369055247}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "3cd0e71dfbe09d4de0f9e5ecba43e7ce280959dc"}, "metrics": [{"type": "v_measure", "value": 29.448729560244537}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "3bdac13927fdc888b903db93b2ffdbd90b295a69"}, "metrics": [{"type": "map", "value": 31.340856463122375}, {"type": "mrr", "value": 32.398547669840916}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus", "type": "nfcorpus", "config": "default", "split": "test", "revision": "7eb63cc0c1eb59324d709ebed25fcab851fa7610"}, "metrics": [{"type": "map_at_1", "value": 5.526}, {"type": 
"map_at_10", "value": 11.745}, {"type": "map_at_100", "value": 14.831}, {"type": "map_at_1000", "value": 16.235}, {"type": "map_at_3", "value": 8.716}, {"type": "map_at_5", "value": 10.101}, {"type": "mrr_at_1", "value": 43.653}, {"type": "mrr_at_10", "value": 51.06699999999999}, {"type": "mrr_at_100", "value": 51.881}, {"type": "mrr_at_1000", "value": 51.912000000000006}, {"type": "mrr_at_3", "value": 49.02}, {"type": "mrr_at_5", "value": 50.288999999999994}, {"type": "ndcg_at_1", "value": 41.949999999999996}, {"type": "ndcg_at_10", "value": 32.083}, {"type": "ndcg_at_100", "value": 30.049999999999997}, {"type": "ndcg_at_1000", "value": 38.661}, {"type": "ndcg_at_3", "value": 37.940000000000005}, {"type": "ndcg_at_5", "value": 35.455999999999996}, {"type": "precision_at_1", "value": 43.344}, {"type": "precision_at_10", "value": 23.437}, {"type": "precision_at_100", "value": 7.829999999999999}, {"type": "precision_at_1000", "value": 2.053}, {"type": "precision_at_3", "value": 35.501}, {"type": "precision_at_5", "value": 30.464000000000002}, {"type": "recall_at_1", "value": 5.526}, {"type": "recall_at_10", "value": 15.445999999999998}, {"type": "recall_at_100", "value": 31.179000000000002}, {"type": "recall_at_1000", "value": 61.578}, {"type": "recall_at_3", "value": 9.71}, {"type": "recall_at_5", "value": 12.026}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ", "type": "nq", "config": "default", "split": "test", "revision": "6062aefc120bfe8ece5897809fb2e53bfe0d128c"}, "metrics": [{"type": "map_at_1", "value": 23.467}, {"type": "map_at_10", "value": 36.041000000000004}, {"type": "map_at_100", "value": 37.268}, {"type": "map_at_1000", "value": 37.322}, {"type": "map_at_3", "value": 32.09}, {"type": "map_at_5", "value": 34.414}, {"type": "mrr_at_1", "value": 26.738}, {"type": "mrr_at_10", "value": 38.665}, {"type": "mrr_at_100", "value": 39.64}, {"type": "mrr_at_1000", "value": 39.681}, {"type": "mrr_at_3", "value": 35.207}, {"type": "mrr_at_5", "value": 37.31}, {"type": "ndcg_at_1", "value": 26.709}, {"type": "ndcg_at_10", "value": 42.942}, {"type": "ndcg_at_100", "value": 48.296}, {"type": "ndcg_at_1000", "value": 49.651}, {"type": "ndcg_at_3", "value": 35.413}, {"type": "ndcg_at_5", "value": 39.367999999999995}, {"type": "precision_at_1", "value": 26.709}, {"type": "precision_at_10", "value": 7.306}, {"type": "precision_at_100", "value": 1.0290000000000001}, {"type": "precision_at_1000", "value": 0.116}, {"type": "precision_at_3", "value": 16.348}, {"type": "precision_at_5", "value": 12.068}, {"type": "recall_at_1", "value": 23.467}, {"type": "recall_at_10", "value": 61.492999999999995}, {"type": "recall_at_100", "value": 85.01100000000001}, {"type": "recall_at_1000", "value": 95.261}, {"type": "recall_at_3", "value": 41.952}, {"type": "recall_at_5", "value": 51.105999999999995}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval", "type": "quora", "config": "default", "split": "test", "revision": "6205996560df11e3a3da9ab4f926788fc30a7db4"}, "metrics": [{"type": "map_at_1", "value": 67.51700000000001}, {"type": "map_at_10", "value": 81.054}, {"type": "map_at_100", "value": 81.727}, {"type": "map_at_1000", "value": 81.75200000000001}, {"type": "map_at_3", "value": 78.018}, {"type": "map_at_5", "value": 79.879}, {"type": "mrr_at_1", "value": 77.52}, {"type": "mrr_at_10", "value": 84.429}, {"type": "mrr_at_100", "value": 84.58200000000001}, {"type": "mrr_at_1000", "value": 84.584}, {"type": "mrr_at_3", "value": 83.268}, {"type": "mrr_at_5", "value": 
84.013}, {"type": "ndcg_at_1", "value": 77.53}, {"type": "ndcg_at_10", "value": 85.277}, {"type": "ndcg_at_100", "value": 86.80499999999999}, {"type": "ndcg_at_1000", "value": 87.01}, {"type": "ndcg_at_3", "value": 81.975}, {"type": "ndcg_at_5", "value": 83.723}, {"type": "precision_at_1", "value": 77.53}, {"type": "precision_at_10", "value": 12.961}, {"type": "precision_at_100", "value": 1.502}, {"type": "precision_at_1000", "value": 0.156}, {"type": "precision_at_3", "value": 35.713}, {"type": "precision_at_5", "value": 23.574}, {"type": "recall_at_1", "value": 67.51700000000001}, {"type": "recall_at_10", "value": 93.486}, {"type": "recall_at_100", "value": 98.9}, {"type": "recall_at_1000", "value": 99.92999999999999}, {"type": "recall_at_3", "value": 84.17999999999999}, {"type": "recall_at_5", "value": 88.97500000000001}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering", "type": "mteb/reddit-clustering", "config": "default", "split": "test", "revision": "b2805658ae38990172679479369a78b86de8c390"}, "metrics": [{"type": "v_measure", "value": 48.225994608749915}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P", "type": "mteb/reddit-clustering-p2p", "config": "default", "split": "test", "revision": "385e3cb46b4cfa89021f56c4380204149d0efe33"}, "metrics": [{"type": "v_measure", "value": 53.17635557157765}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS", "type": "scidocs", "config": "default", "split": "test", "revision": "5c59ef3e437a0a9651c8fe6fde943e7dce59fba5"}, "metrics": [{"type": "map_at_1", "value": 3.988}, {"type": "map_at_10", "value": 9.4}, {"type": "map_at_100", "value": 10.968}, {"type": "map_at_1000", "value": 11.257}, {"type": "map_at_3", "value": 7.123}, {"type": "map_at_5", "value": 8.221}, {"type": "mrr_at_1", "value": 19.7}, {"type": "mrr_at_10", "value": 29.098000000000003}, {"type": "mrr_at_100", "value": 30.247}, {"type": "mrr_at_1000", "value": 30.318}, {"type": "mrr_at_3", "value": 26.55}, {"type": "mrr_at_5", "value": 27.915}, {"type": "ndcg_at_1", "value": 19.7}, {"type": "ndcg_at_10", "value": 16.176}, {"type": "ndcg_at_100", "value": 22.931}, {"type": "ndcg_at_1000", "value": 28.301}, {"type": "ndcg_at_3", "value": 16.142}, {"type": "ndcg_at_5", "value": 13.633999999999999}, {"type": "precision_at_1", "value": 19.7}, {"type": "precision_at_10", "value": 8.18}, {"type": "precision_at_100", "value": 1.8010000000000002}, {"type": "precision_at_1000", "value": 0.309}, {"type": "precision_at_3", "value": 15.1}, {"type": "precision_at_5", "value": 11.74}, {"type": "recall_at_1", "value": 3.988}, {"type": "recall_at_10", "value": 16.625}, {"type": "recall_at_100", "value": 36.61}, {"type": "recall_at_1000", "value": 62.805}, {"type": "recall_at_3", "value": 9.168}, {"type": "recall_at_5", "value": 11.902}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "20a6d6f312dd54037fe07a32d58e5e168867909d"}, "metrics": [{"type": "cos_sim_pearson", "value": 77.29330379162072}, {"type": "cos_sim_spearman", "value": 67.22953551111448}, {"type": "euclidean_pearson", "value": 71.44682700059415}, {"type": "euclidean_spearman", "value": 66.33178012153247}, {"type": "manhattan_pearson", "value": 71.46941734657887}, {"type": "manhattan_spearman", "value": 66.43234359835814}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": 
"fdf84275bb8ce4b49c971d02e84dd1abc677a50f"}, "metrics": [{"type": "cos_sim_pearson", "value": 75.40943196466576}, {"type": "cos_sim_spearman", "value": 66.59241013465915}, {"type": "euclidean_pearson", "value": 71.32500540796616}, {"type": "euclidean_spearman", "value": 67.86667467202591}, {"type": "manhattan_pearson", "value": 71.48209832089134}, {"type": "manhattan_spearman", "value": 67.94511626964879}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", "type": "mteb/sts13-sts", "config": "default", "split": "test", "revision": "1591bfcbe8c69d4bf7fe2a16e2451017832cafb9"}, "metrics": [{"type": "cos_sim_pearson", "value": 77.08302398877518}, {"type": "cos_sim_spearman", "value": 77.33151317062642}, {"type": "euclidean_pearson", "value": 76.77020279715008}, {"type": "euclidean_spearman", "value": 77.13893776083225}, {"type": "manhattan_pearson", "value": 76.76732290707477}, {"type": "manhattan_spearman", "value": 77.14500877396631}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "e2125984e7df8b7871f6ae9949cf6b6795e7c54b"}, "metrics": [{"type": "cos_sim_pearson", "value": 77.46886184932168}, {"type": "cos_sim_spearman", "value": 71.82815265534886}, {"type": "euclidean_pearson", "value": 75.19783284299076}, {"type": "euclidean_spearman", "value": 71.36479611710412}, {"type": "manhattan_pearson", "value": 75.30375233959337}, {"type": "manhattan_spearman", "value": 71.46280266488021}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "1cd7298cac12a96a373b6a2f18738bb3e739a9b6"}, "metrics": [{"type": "cos_sim_pearson", "value": 80.093017609484}, {"type": "cos_sim_spearman", "value": 80.65931167868882}, {"type": "euclidean_pearson", "value": 80.36786337117047}, {"type": "euclidean_spearman", "value": 81.30521389642827}, {"type": "manhattan_pearson", "value": 80.37922433220973}, {"type": "manhattan_spearman", "value": 81.30496664496285}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": "360a0b2dff98700d09e634a01e1cc1624d3e42cd"}, "metrics": [{"type": "cos_sim_pearson", "value": 77.98998347238742}, {"type": "cos_sim_spearman", "value": 78.91151365939403}, {"type": "euclidean_pearson", "value": 76.40510899217841}, {"type": "euclidean_spearman", "value": 76.8551459824213}, {"type": "manhattan_pearson", "value": 76.3986079603294}, {"type": "manhattan_spearman", "value": 76.8848053254288}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "9fc37e8c632af1c87a3d23e685d49552a02582a0"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.63510653472044}, {"type": "cos_sim_spearman", "value": 86.98674844768605}, {"type": "euclidean_pearson", "value": 85.205080538809}, {"type": "euclidean_spearman", "value": 85.53630494151886}, {"type": "manhattan_pearson", "value": 85.48612469885626}, {"type": "manhattan_spearman", "value": 85.81741413931921}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 66.7257987615171}, {"type": "cos_sim_spearman", "value": 67.30387805090024}, {"type": "euclidean_pearson", "value": 69.46877227885867}, {"type": 
"euclidean_spearman", "value": 69.33161798704344}, {"type": "manhattan_pearson", "value": 69.82773311626424}, {"type": "manhattan_spearman", "value": 69.57199940498796}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmark", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "8913289635987208e6e7c72789e4be2fe94b6abd"}, "metrics": [{"type": "cos_sim_pearson", "value": 79.37322139418472}, {"type": "cos_sim_spearman", "value": 77.5887175717799}, {"type": "euclidean_pearson", "value": 78.23006410562164}, {"type": "euclidean_spearman", "value": 77.18470385673044}, {"type": "manhattan_pearson", "value": 78.40868369362455}, {"type": "manhattan_spearman", "value": 77.36675823897656}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": "56a6d0140cf6356659e2a7c1413286a774468d44"}, "metrics": [{"type": "map", "value": 77.21233007730808}, {"type": "mrr", "value": 93.0502386139641}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact", "type": "scifact", "config": "default", "split": "test", "revision": "a75ae049398addde9b70f6b268875f5cbce99089"}, "metrics": [{"type": "map_at_1", "value": 54.567}, {"type": "map_at_10", "value": 63.653000000000006}, {"type": "map_at_100", "value": 64.282}, {"type": "map_at_1000", "value": 64.31099999999999}, {"type": "map_at_3", "value": 60.478}, {"type": "map_at_5", "value": 62.322}, {"type": "mrr_at_1", "value": 56.99999999999999}, {"type": "mrr_at_10", "value": 64.759}, {"type": "mrr_at_100", "value": 65.274}, {"type": "mrr_at_1000", "value": 65.301}, {"type": "mrr_at_3", "value": 62.333000000000006}, {"type": "mrr_at_5", "value": 63.817}, {"type": "ndcg_at_1", "value": 56.99999999999999}, {"type": "ndcg_at_10", "value": 68.28699999999999}, {"type": "ndcg_at_100", "value": 70.98400000000001}, {"type": "ndcg_at_1000", "value": 71.695}, {"type": "ndcg_at_3", "value": 62.656}, {"type": "ndcg_at_5", "value": 65.523}, {"type": "precision_at_1", "value": 56.99999999999999}, {"type": "precision_at_10", "value": 9.232999999999999}, {"type": "precision_at_100", "value": 1.0630000000000002}, {"type": "precision_at_1000", "value": 0.11199999999999999}, {"type": "precision_at_3", "value": 24.221999999999998}, {"type": "precision_at_5", "value": 16.333000000000002}, {"type": "recall_at_1", "value": 54.567}, {"type": "recall_at_10", "value": 81.45599999999999}, {"type": "recall_at_100", "value": 93.5}, {"type": "recall_at_1000", "value": 99.0}, {"type": "recall_at_3", "value": 66.228}, {"type": "recall_at_5", "value": 73.489}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "5a8256d0dff9c4bd3be3ba3e67e4e70173f802ea"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.74455445544554}, {"type": "cos_sim_ap", "value": 92.57836032673468}, {"type": "cos_sim_f1", "value": 87.0471464019851}, {"type": "cos_sim_precision", "value": 86.4039408866995}, {"type": "cos_sim_recall", "value": 87.7}, {"type": "dot_accuracy", "value": 99.56039603960396}, {"type": "dot_ap", "value": 82.47233353407186}, {"type": "dot_f1", "value": 76.78207739307537}, {"type": "dot_precision", "value": 78.21576763485477}, {"type": "dot_recall", "value": 75.4}, {"type": "euclidean_accuracy", "value": 99.73069306930694}, {"type": "euclidean_ap", "value": 91.70507666665775}, {"type": "euclidean_f1", 
"value": 86.26262626262626}, {"type": "euclidean_precision", "value": 87.14285714285714}, {"type": "euclidean_recall", "value": 85.39999999999999}, {"type": "manhattan_accuracy", "value": 99.73861386138614}, {"type": "manhattan_ap", "value": 91.96809459281754}, {"type": "manhattan_f1", "value": 86.6}, {"type": "manhattan_precision", "value": 86.6}, {"type": "manhattan_recall", "value": 86.6}, {"type": "max_accuracy", "value": 99.74455445544554}, {"type": "max_ap", "value": 92.57836032673468}, {"type": "max_f1", "value": 87.0471464019851}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClustering", "type": "mteb/stackexchange-clustering", "config": "default", "split": "test", "revision": "70a89468f6dccacc6aa2b12a6eac54e74328f235"}, "metrics": [{"type": "v_measure", "value": 60.85593925770172}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P", "type": "mteb/stackexchange-clustering-p2p", "config": "default", "split": "test", "revision": "d88009ab563dd0b16cfaf4436abaf97fa3550cf0"}, "metrics": [{"type": "v_measure", "value": 32.356772998237496}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions", "type": "mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "ef807ea29a75ec4f91b50fd4191cb4ee4589a9f9"}, "metrics": [{"type": "map", "value": 49.320607035290735}, {"type": "mrr", "value": 50.09196481622952}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEval", "type": "mteb/summeval", "config": "default", "split": "test", "revision": "8753c2788d36c01fc6f05d03fe3f7268d63f9122"}, "metrics": [{"type": "cos_sim_pearson", "value": 31.17573968015504}, {"type": "cos_sim_spearman", "value": 30.43371643155132}, {"type": "dot_pearson", "value": 30.164319483092743}, {"type": "dot_spearman", "value": 29.207082242868754}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID", "type": "trec-covid", "config": "default", "split": "test", "revision": "2c8041b2c07a79b6f7ba8fe6acc72e5d9f92d217"}, "metrics": [{"type": "map_at_1", "value": 0.22100000000000003}, {"type": "map_at_10", "value": 1.7229999999999999}, {"type": "map_at_100", "value": 9.195}, {"type": "map_at_1000", "value": 21.999}, {"type": "map_at_3", "value": 0.6479999999999999}, {"type": "map_at_5", "value": 0.964}, {"type": "mrr_at_1", "value": 86.0}, {"type": "mrr_at_10", "value": 90.667}, {"type": "mrr_at_100", "value": 90.858}, {"type": "mrr_at_1000", "value": 90.858}, {"type": "mrr_at_3", "value": 90.667}, {"type": "mrr_at_5", "value": 90.667}, {"type": "ndcg_at_1", "value": 82.0}, {"type": "ndcg_at_10", "value": 72.98}, {"type": "ndcg_at_100", "value": 52.868}, {"type": "ndcg_at_1000", "value": 46.541}, {"type": "ndcg_at_3", "value": 80.39699999999999}, {"type": "ndcg_at_5", "value": 76.303}, {"type": "precision_at_1", "value": 86.0}, {"type": "precision_at_10", "value": 75.8}, {"type": "precision_at_100", "value": 53.5}, {"type": "precision_at_1000", "value": 20.946}, {"type": "precision_at_3", "value": 85.333}, {"type": "precision_at_5", "value": 79.2}, {"type": "recall_at_1", "value": 0.22100000000000003}, {"type": "recall_at_10", "value": 1.9109999999999998}, {"type": "recall_at_100", "value": 12.437}, {"type": "recall_at_1000", "value": 43.606}, {"type": "recall_at_3", "value": 0.681}, {"type": "recall_at_5", "value": 1.023}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Touche2020", "type": "webis-touche2020", "config": "default", "split": "test", 
"revision": "527b7d77e16e343303e68cb6af11d6e18b9f7b3b"}, "metrics": [{"type": "map_at_1", "value": 2.5}, {"type": "map_at_10", "value": 9.568999999999999}, {"type": "map_at_100", "value": 15.653}, {"type": "map_at_1000", "value": 17.188}, {"type": "map_at_3", "value": 5.335999999999999}, {"type": "map_at_5", "value": 6.522}, {"type": "mrr_at_1", "value": 34.694}, {"type": "mrr_at_10", "value": 49.184}, {"type": "mrr_at_100", "value": 50.512}, {"type": "mrr_at_1000", "value": 50.512}, {"type": "mrr_at_3", "value": 46.259}, {"type": "mrr_at_5", "value": 48.299}, {"type": "ndcg_at_1", "value": 30.612000000000002}, {"type": "ndcg_at_10", "value": 24.45}, {"type": "ndcg_at_100", "value": 35.870999999999995}, {"type": "ndcg_at_1000", "value": 47.272999999999996}, {"type": "ndcg_at_3", "value": 28.528}, {"type": "ndcg_at_5", "value": 25.768}, {"type": "precision_at_1", "value": 34.694}, {"type": "precision_at_10", "value": 21.429000000000002}, {"type": "precision_at_100", "value": 7.265000000000001}, {"type": "precision_at_1000", "value": 1.504}, {"type": "precision_at_3", "value": 29.252}, {"type": "precision_at_5", "value": 24.898}, {"type": "recall_at_1", "value": 2.5}, {"type": "recall_at_10", "value": 15.844}, {"type": "recall_at_100", "value": 45.469}, {"type": "recall_at_1000", "value": 81.148}, {"type": "recall_at_3", "value": 6.496}, {"type": "recall_at_5", "value": 8.790000000000001}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "edfaf9da55d3dd50d43143d90c1ac476895ae6de"}, "metrics": [{"type": "accuracy", "value": 68.7272}, {"type": "ap", "value": 13.156450706152686}, {"type": "f1", "value": 52.814703437064395}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": "62146448f05be9e52a36b8ee9936447ea787eede"}, "metrics": [{"type": "accuracy", "value": 55.6677985285795}, {"type": "f1", "value": 55.9373937514999}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "091a54f9a36281ce7d6590ec8c75dd485e7e01d4"}, "metrics": [{"type": "v_measure", "value": 40.05809562275603}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 82.76807534124099}, {"type": "cos_sim_ap", "value": 62.37052608803734}, {"type": "cos_sim_f1", "value": 59.077414934916646}, {"type": "cos_sim_precision", "value": 52.07326892109501}, {"type": "cos_sim_recall", "value": 68.25857519788919}, {"type": "dot_accuracy", "value": 80.56267509089825}, {"type": "dot_ap", "value": 54.75349561321037}, {"type": "dot_f1", "value": 54.75483794372552}, {"type": "dot_precision", "value": 49.77336499028707}, {"type": "dot_recall", "value": 60.844327176781}, {"type": "euclidean_accuracy", "value": 82.476008821601}, {"type": "euclidean_ap", "value": 61.17417554210511}, {"type": "euclidean_f1", "value": 57.80318696022382}, {"type": "euclidean_precision", "value": 53.622207176709544}, {"type": "euclidean_recall", "value": 62.69129287598945}, {"type": "manhattan_accuracy", "value": 
82.48792990403528}, {"type": "manhattan_ap", "value": 61.044816292966544}, {"type": "manhattan_f1", "value": 58.03033951360462}, {"type": "manhattan_precision", "value": 53.36581045172719}, {"type": "manhattan_recall", "value": 63.58839050131926}, {"type": "max_accuracy", "value": 82.76807534124099}, {"type": "max_ap", "value": 62.37052608803734}, {"type": "max_f1", "value": 59.077414934916646}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": [{"type": "cos_sim_accuracy", "value": 87.97881010594946}, {"type": "cos_sim_ap", "value": 83.78748636891035}, {"type": "cos_sim_f1", "value": 75.94113995691386}, {"type": "cos_sim_precision", "value": 72.22029307590805}, {"type": "cos_sim_recall", "value": 80.06621496766245}, {"type": "dot_accuracy", "value": 85.69294058291614}, {"type": "dot_ap", "value": 78.15363722278026}, {"type": "dot_f1", "value": 72.08894926888564}, {"type": "dot_precision", "value": 67.28959487419075}, {"type": "dot_recall", "value": 77.62550046196489}, {"type": "euclidean_accuracy", "value": 87.73625179493149}, {"type": "euclidean_ap", "value": 83.19012184470559}, {"type": "euclidean_f1", "value": 75.5148064623461}, {"type": "euclidean_precision", "value": 72.63352535381551}, {"type": "euclidean_recall", "value": 78.6341238065907}, {"type": "manhattan_accuracy", "value": 87.74013272790779}, {"type": "manhattan_ap", "value": 83.23305405113403}, {"type": "manhattan_f1", "value": 75.63960775639607}, {"type": "manhattan_precision", "value": 72.563304569246}, {"type": "manhattan_recall", "value": 78.9882968894364}, {"type": "max_accuracy", "value": 87.97881010594946}, {"type": "max_ap", "value": 83.78748636891035}, {"type": "max_f1", "value": 75.94113995691386}]}]}]} | Muennighoff/SGPT-1.3B-weightedmean-msmarco-specb-bitfit | null | [
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"mteb",
"arxiv:2202.08904",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #mteb #arxiv-2202.08904 #model-index #endpoints_compatible #has_space #region-us
|
# SGPT-1.3B-weightedmean-msmarco-specb-bitfit
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to the eval folder or our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 62398 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SGPT-1.3B-weightedmean-msmarco-specb-bitfit",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to the eval folder or our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 62398 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #mteb #arxiv-2202.08904 #model-index #endpoints_compatible #has_space #region-us \n",
"# SGPT-1.3B-weightedmean-msmarco-specb-bitfit",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to the eval folder or our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 62398 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# SGPT-1.3B-weightedmean-nli-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
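A minimal usage sketch with `sentence-transformers` (assuming the published checkpoint loads directly via `SentenceTransformer`; the codebase above remains the canonical reference):

```python
from sentence_transformers import SentenceTransformer, util

# Load the checkpoint from the Hub; the weighted-mean pooling is part of the saved config.
model = SentenceTransformer("Muennighoff/SGPT-1.3B-weightedmean-nli-bitfit")

sentences = [
    "A man is eating food.",
    "A man is eating a piece of bread.",
]

# Encode and compare with cosine similarity, the similarity function used during training.
embeddings = model.encode(sentences, convert_to_tensor=True)
print(util.cos_sim(embeddings[0], embeddings[1]))
```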
## Evaluation Results
For eval results, refer to the eval folder or our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 93941 with parameters:
```
{'batch_size': 6}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 9394,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 0.0001
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 9395,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 2048, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
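The `pooling_mode_weightedmean_tokens` setting corresponds to position-weighted mean pooling: later tokens receive larger weights, which suits the causal attention of GPT-style models since later positions have seen more of the input. A simplified sketch of the idea (an illustration, not the library implementation):

```python
import torch

def weighted_mean_pooling(token_embeddings, attention_mask):
    # token_embeddings: (batch, seq_len, hidden); attention_mask: (batch, seq_len)
    # Weight token i by its 1-based position so that later tokens contribute more.
    positions = torch.arange(1, token_embeddings.size(1) + 1, device=token_embeddings.device)
    weights = positions.unsqueeze(0) * attention_mask            # (batch, seq_len)
    weights = weights.unsqueeze(-1).float()                      # (batch, seq_len, 1)
    return (token_embeddings * weights).sum(dim=1) / weights.sum(dim=1)
```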
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | Muennighoff/SGPT-1.3B-weightedmean-nli-bitfit | null | [
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"arxiv:2202.08904",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us
|
# SGPT-1.3B-weightedmean-nli-bitfit
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to the eval folder or our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 93941 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SGPT-1.3B-weightedmean-nli-bitfit",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to the eval folder or our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 93941 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us \n",
"# SGPT-1.3B-weightedmean-nli-bitfit",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to the eval folder or our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 93941 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# SGPT-1.3B-weightedmean-nli
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 93941 with parameters:
```
{'batch_size': 6}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
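For intuition, `MultipleNegativesRankingLoss` treats the other positives in the batch as negatives. A simplified PyTorch sketch of what it computes with `scale=20.0` and cosine similarity (an approximation, not the library code):

```python
import torch
import torch.nn.functional as F

def multiple_negatives_ranking_loss(anchor_emb, positive_emb, scale=20.0):
    # Scaled cosine-similarity matrix between every anchor and every positive in the batch.
    anchor_emb = F.normalize(anchor_emb, dim=-1)
    positive_emb = F.normalize(positive_emb, dim=-1)
    scores = anchor_emb @ positive_emb.T * scale                  # (batch, batch)
    # The matching positive sits on the diagonal; all other entries act as in-batch negatives.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```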
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 9394,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 1e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 9395,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 2048, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | Muennighoff/SGPT-1.3B-weightedmean-nli | null | [
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"arxiv:2202.08904",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us
|
# SGPT-1.3B-weightedmean-nli
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 93941 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SGPT-1.3B-weightedmean-nli",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 93941 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us \n",
"# SGPT-1.3B-weightedmean-nli",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 93941 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# SGPT-125M-lasttoken-msmarco-specb
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
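The `specb` suffix indicates training with special bracket tokens that distinguish queries from documents. As an illustration only (the faithful bracket handling for queries and documents is implemented in the codebase above), a rough sketch of asymmetric retrieval scoring with this checkpoint:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Muennighoff/SGPT-125M-lasttoken-msmarco-specb")

# NOTE: this skips the special-bracket pre-processing used during training;
# see the SGPT codebase for the exact query/document encoding.
query = "what is the capital of france"
docs = [
    "Paris is the capital and most populous city of France.",
    "Berlin is the capital of Germany.",
]

query_emb = model.encode(query, convert_to_tensor=True)
doc_embs = model.encode(docs, convert_to_tensor=True)

scores = util.cos_sim(query_emb, doc_embs)[0]
for doc, score in zip(docs, scores):
    print(f"{score:.3f}  {doc}")
```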
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 15600 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': True})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | Muennighoff/SGPT-125M-lasttoken-msmarco-specb | null | [
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"arxiv:2202.08904",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us
|
# SGPT-125M-lasttoken-msmarco-specb
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 15600 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SGPT-125M-lasttoken-msmarco-specb",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 15600 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us \n",
"# SGPT-125M-lasttoken-msmarco-specb",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 15600 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# SGPT-125M-lasttoken-nli
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8807 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 880,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 881,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': True})
)
```
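The `pooling_mode_lasttoken` setting takes the hidden state of the final non-padding token as the sentence embedding; with causal attention this is the only position that has attended to the full input. A simplified sketch assuming right-padded batches (an illustration, not the library implementation):

```python
import torch

def last_token_pooling(token_embeddings, attention_mask):
    # token_embeddings: (batch, seq_len, hidden); attention_mask: (batch, seq_len)
    last_idx = attention_mask.sum(dim=1) - 1                      # index of last non-padding token
    batch_idx = torch.arange(token_embeddings.size(0), device=token_embeddings.device)
    return token_embeddings[batch_idx, last_idx]                  # (batch, hidden)
```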
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | Muennighoff/SGPT-125M-lasttoken-nli | null | [
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"arxiv:2202.08904",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us
|
# SGPT-125M-lasttoken-nli
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SGPT-125M-lasttoken-nli",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us \n",
"# SGPT-125M-lasttoken-nli",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# SGPT-125M-learntmean-nli
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8807 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 880,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 881,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): WeightedMeanPooling()
)
```
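Unlike the fixed pooling modes, the `WeightedMeanPooling()` module above learns its pooling weights during training. The exact parameterization is defined in the codebase; a hypothetical sketch of one way such a layer can be set up (one learnable weight per position, normalized over non-padding tokens):

```python
import torch
import torch.nn as nn

class LearnedMeanPooling(nn.Module):
    # Hypothetical sketch only: the real WeightedMeanPooling used by this model
    # may be parameterized differently (see the SGPT codebase).
    def __init__(self, max_seq_length: int = 75):
        super().__init__()
        self.position_weights = nn.Parameter(torch.ones(max_seq_length))

    def forward(self, token_embeddings, attention_mask):
        # token_embeddings: (batch, seq_len, hidden); attention_mask: (batch, seq_len)
        w = self.position_weights[: token_embeddings.size(1)].unsqueeze(0) * attention_mask
        w = w / w.sum(dim=1, keepdim=True).clamp(min=1e-9)
        return (token_embeddings * w.unsqueeze(-1)).sum(dim=1)
```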
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | Muennighoff/SGPT-125M-learntmean-nli | null | [
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"arxiv:2202.08904",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us
|
# SGPT-125M-learntmean-nli
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SGPT-125M-learntmean-nli",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us \n",
"# SGPT-125M-learntmean-nli",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# SGPT-125M-mean-nli-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8807 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 880,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 0.0002
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 881,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | Muennighoff/SGPT-125M-mean-nli-bitfit | null | [
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2202.08904",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #transformers #arxiv-2202.08904 #endpoints_compatible #region-us
|
# SGPT-125M-mean-nli-bitfit
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SGPT-125M-mean-nli-bitfit",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #transformers #arxiv-2202.08904 #endpoints_compatible #region-us \n",
"# SGPT-125M-mean-nli-bitfit",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# SGPT-125M-mean-nli-linear5
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8807 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 880,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 881,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
(2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.GELU'})
(3): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.GELU'})
(4): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.GELU'})
(5): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.GELU'})
(6): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.GELU'})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | Muennighoff/SGPT-125M-mean-nli-linear5 | null | [
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"arxiv:2202.08904",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us
|
# SGPT-125M-mean-nli-linear5
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SGPT-125M-mean-nli-linear5",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us \n",
"# SGPT-125M-mean-nli-linear5",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# SGPT-125M-mean-nli-linearthenpool5
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8807 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 880,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 881,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.GELU', 'key_name': 'token_embeddings'})
(2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.GELU', 'key_name': 'token_embeddings'})
(3): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.GELU', 'key_name': 'token_embeddings'})
(4): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.GELU', 'key_name': 'token_embeddings'})
(5): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.GELU', 'key_name': 'token_embeddings'})
(6): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | Muennighoff/SGPT-125M-mean-nli-linearthenpool5 | null | [
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"arxiv:2202.08904",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us
|
# SGPT-125M-mean-nli-linearthenpool5
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SGPT-125M-mean-nli-linearthenpool5",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us \n",
"# SGPT-125M-mean-nli-linearthenpool5",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# SGPT-125M-mean-nli
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8807 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 880,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 881,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | Muennighoff/SGPT-125M-mean-nli | null | [
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2202.08904",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #transformers #arxiv-2202.08904 #endpoints_compatible #region-us
|
# SGPT-125M-mean-nli
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SGPT-125M-mean-nli",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #transformers #arxiv-2202.08904 #endpoints_compatible #region-us \n",
"# SGPT-125M-mean-nli",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# SGPT-125M-scratchmean-nli
**Trained from scratch only on NLI with reinitialized GPT-Neo weights**
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8807 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 880,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 881,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | Muennighoff/SGPT-125M-scratchmean-nli | null | [
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"arxiv:2202.08904",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us
|
# SGPT-125M-scratchmean-nli
Trained from scratch only on NLI with reinitialized GPT-Neo weights
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SGPT-125M-scratchmean-nli\n\n Trained from scratch only on NLI with reinitialized GPT-Neo weights",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us \n",
"# SGPT-125M-scratchmean-nli\n\n Trained from scratch only on NLI with reinitialized GPT-Neo weights",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# SGPT-125M-weightedmean-msmarco-asym
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 15600 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Asym(
(QRY-0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(DOCPOS-0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(DOCNEG-0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTNeoModel
)
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
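The `Asym` head above routes queries and documents through separate branches (`QRY`, `DOCPOS`, `DOCNEG`). A minimal retrieval-style sketch is given below, assuming the standard `sentence-transformers` convention of passing `{key: text}` dictionaries to `encode`; the query and passage texts are illustrative only.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer("Muennighoff/SGPT-125M-weightedmean-msmarco-asym")

# Dict inputs select which branch of the Asym module processes the text
query_emb = model.encode([{"QRY": "what is python used for"}])
doc_emb = model.encode([{"DOCPOS": "Python is a programming language used for web development and data science."}])

# Rank documents against the query by cosine similarity
print(cos_sim(query_emb, doc_emb))
```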
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
 | {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | Muennighoff/SGPT-125M-weightedmean-msmarco-asym | null | [
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"arxiv:2202.08904",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us
|
# SGPT-125M-weightedmean-msmarco-asym
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 15600 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SGPT-125M-weightedmean-msmarco-asym",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 15600 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us \n",
"# SGPT-125M-weightedmean-msmarco-asym",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 15600 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# SGPT-125M-weightedmean-msmarco-specb-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to the eval folder or our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 15600 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 0.0002
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
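The pooling layer above uses position-weighted mean pooling rather than plain mean pooling. The sketch below shows that pooling step in plain PyTorch, assuming weights that grow linearly with token position (1, 2, ..., L) as described in the SGPT paper; the function and tensor names are illustrative, since the `SentenceTransformer` loader applies this pooling automatically when the model is loaded.

```python
import torch

def weighted_mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Position-weighted mean over the sequence dimension.

    token_embeddings: (batch, seq_len, hidden), attention_mask: (batch, seq_len).
    Later tokens get larger weights, which suits causal (left-to-right)
    models like GPT-Neo where late positions have attended to the whole input.
    """
    weights = torch.arange(1, token_embeddings.size(1) + 1, device=token_embeddings.device)
    weights = weights.unsqueeze(0).unsqueeze(-1).expand_as(token_embeddings).float()
    mask = attention_mask.unsqueeze(-1).expand_as(token_embeddings).float()
    summed = torch.sum(token_embeddings * mask * weights, dim=1)
    denom = torch.clamp(torch.sum(mask * weights, dim=1), min=1e-9)
    return summed / denom
```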
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb"], "pipeline_tag": "sentence-similarity", "model-index": [{"name": "SGPT-125M-weightedmean-msmarco-specb-bitfit", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "mteb/amazon_counterfactual", "config": "en", "split": "test", "revision": "2d8a100785abf0ae21420d2a55b0c56e3e1ea996"}, "metrics": [{"type": "accuracy", "value": 61.23880597014926}, {"type": "ap", "value": 25.854431650388644}, {"type": "f1", "value": 55.751862762818604}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (de)", "type": "mteb/amazon_counterfactual", "config": "de", "split": "test", "revision": "2d8a100785abf0ae21420d2a55b0c56e3e1ea996"}, "metrics": [{"type": "accuracy", "value": 56.88436830835117}, {"type": "ap", "value": 72.67279104379772}, {"type": "f1", "value": 54.449840243786404}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en-ext)", "type": "mteb/amazon_counterfactual", "config": "en-ext", "split": "test", "revision": "2d8a100785abf0ae21420d2a55b0c56e3e1ea996"}, "metrics": [{"type": "accuracy", "value": 58.27586206896551}, {"type": "ap", "value": 14.067357642500387}, {"type": "f1", "value": 48.172318518691334}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (ja)", "type": "mteb/amazon_counterfactual", "config": "ja", "split": "test", "revision": "2d8a100785abf0ae21420d2a55b0c56e3e1ea996"}, "metrics": [{"type": "accuracy", "value": 54.64668094218415}, {"type": "ap", "value": 11.776694555054965}, {"type": "f1", "value": 44.526622834078765}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonPolarityClassification", "type": "mteb/amazon_polarity", "config": "default", "split": "test", "revision": "80714f8dcf8cefc218ef4f8c5a966dd83f75a0e1"}, "metrics": [{"type": "accuracy", "value": 65.401225}, {"type": "ap", "value": 60.22809958678552}, {"type": "f1", "value": 65.0251824898292}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (en)", "type": "mteb/amazon_reviews_multi", "config": "en", "split": "test", "revision": "c379a6705fec24a2493fa68e011692605f44e119"}, "metrics": [{"type": "accuracy", "value": 31.165999999999993}, {"type": "f1", "value": 30.908870050167437}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (de)", "type": "mteb/amazon_reviews_multi", "config": "de", "split": "test", "revision": "c379a6705fec24a2493fa68e011692605f44e119"}, "metrics": [{"type": "accuracy", "value": 24.79}, {"type": "f1", "value": 24.5833598854121}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (es)", "type": "mteb/amazon_reviews_multi", "config": "es", "split": "test", "revision": "c379a6705fec24a2493fa68e011692605f44e119"}, "metrics": [{"type": "accuracy", "value": 26.643999999999995}, {"type": "f1", "value": 26.39012792213563}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (fr)", "type": "mteb/amazon_reviews_multi", "config": "fr", "split": "test", "revision": "c379a6705fec24a2493fa68e011692605f44e119"}, "metrics": [{"type": "accuracy", "value": 26.386000000000003}, {"type": "f1", "value": 26.276867791454873}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (ja)", 
"type": "mteb/amazon_reviews_multi", "config": "ja", "split": "test", "revision": "c379a6705fec24a2493fa68e011692605f44e119"}, "metrics": [{"type": "accuracy", "value": 22.078000000000003}, {"type": "f1", "value": 21.797960290226843}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (zh)", "type": "mteb/amazon_reviews_multi", "config": "zh", "split": "test", "revision": "c379a6705fec24a2493fa68e011692605f44e119"}, "metrics": [{"type": "accuracy", "value": 24.274}, {"type": "f1", "value": 23.887054434822627}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna", "type": "arguana", "config": "default", "split": "test", "revision": "5b3e3697907184a9b77a3c99ee9ea1a9cbb1e4e3"}, "metrics": [{"type": "map_at_1", "value": 22.404}, {"type": "map_at_10", "value": 36.845}, {"type": "map_at_100", "value": 37.945}, {"type": "map_at_1000", "value": 37.966}, {"type": "map_at_3", "value": 31.78}, {"type": "map_at_5", "value": 34.608}, {"type": "mrr_at_1", "value": 22.902}, {"type": "mrr_at_10", "value": 37.034}, {"type": "mrr_at_100", "value": 38.134}, {"type": "mrr_at_1000", "value": 38.155}, {"type": "mrr_at_3", "value": 31.935000000000002}, {"type": "mrr_at_5", "value": 34.812}, {"type": "ndcg_at_1", "value": 22.404}, {"type": "ndcg_at_10", "value": 45.425}, {"type": "ndcg_at_100", "value": 50.354}, {"type": "ndcg_at_1000", "value": 50.873999999999995}, {"type": "ndcg_at_3", "value": 34.97}, {"type": "ndcg_at_5", "value": 40.081}, {"type": "precision_at_1", "value": 22.404}, {"type": "precision_at_10", "value": 7.303999999999999}, {"type": "precision_at_100", "value": 0.951}, {"type": "precision_at_1000", "value": 0.099}, {"type": "precision_at_3", "value": 14.746}, {"type": "precision_at_5", "value": 11.337}, {"type": "recall_at_1", "value": 22.404}, {"type": "recall_at_10", "value": 73.044}, {"type": "recall_at_100", "value": 95.092}, {"type": "recall_at_1000", "value": 99.075}, {"type": "recall_at_3", "value": 44.239}, {"type": "recall_at_5", "value": 56.686}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringP2P", "type": "mteb/arxiv-clustering-p2p", "config": "default", "split": "test", "revision": "0bbdb47bcbe3a90093699aefeed338a0f28a7ee8"}, "metrics": [{"type": "v_measure", "value": 39.70858340673288}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringS2S", "type": "mteb/arxiv-clustering-s2s", "config": "default", "split": "test", "revision": "b73bd54100e5abfa6e3a23dcafb46fe4d2438dc3"}, "metrics": [{"type": "v_measure", "value": 28.242847713721048}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB AskUbuntuDupQuestions", "type": "mteb/askubuntudupquestions-reranking", "config": "default", "split": "test", "revision": "4d853f94cd57d85ec13805aeeac3ae3e5eb4c49c"}, "metrics": [{"type": "map", "value": 55.83700395192393}, {"type": "mrr", "value": 70.3891307215407}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB BIOSSES", "type": "mteb/biosses-sts", "config": "default", "split": "test", "revision": "9ee918f184421b6bd48b78f6c714d86546106103"}, "metrics": [{"type": "cos_sim_pearson", "value": 79.25366801756223}, {"type": "cos_sim_spearman", "value": 75.20954502580506}, {"type": "euclidean_pearson", "value": 78.79900722991617}, {"type": "euclidean_spearman", "value": 77.79996549607588}, {"type": "manhattan_pearson", "value": 78.18408109480399}, {"type": "manhattan_spearman", "value": 76.85958262303106}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB 
Banking77Classification", "type": "mteb/banking77", "config": "default", "split": "test", "revision": "44fa15921b4c889113cc5df03dd4901b49161ab7"}, "metrics": [{"type": "accuracy", "value": 77.70454545454545}, {"type": "f1", "value": 77.6929000113803}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringP2P", "type": "mteb/biorxiv-clustering-p2p", "config": "default", "split": "test", "revision": "11d0121201d1f1f280e8cc8f3d98fb9c4d9f9c55"}, "metrics": [{"type": "v_measure", "value": 33.63260395543984}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringS2S", "type": "mteb/biorxiv-clustering-s2s", "config": "default", "split": "test", "revision": "c0fab014e1bcb8d3a5e31b2088972a1e01547dc1"}, "metrics": [{"type": "v_measure", "value": 27.038042665369925}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackAndroidRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "2b9f5791698b5be7bc5e10535c8690f20043c3db"}, "metrics": [{"type": "map_at_1", "value": 22.139}, {"type": "map_at_10", "value": 28.839}, {"type": "map_at_100", "value": 30.023}, {"type": "map_at_1000", "value": 30.153000000000002}, {"type": "map_at_3", "value": 26.521}, {"type": "map_at_5", "value": 27.775}, {"type": "mrr_at_1", "value": 26.466}, {"type": "mrr_at_10", "value": 33.495000000000005}, {"type": "mrr_at_100", "value": 34.416999999999994}, {"type": "mrr_at_1000", "value": 34.485}, {"type": "mrr_at_3", "value": 31.402}, {"type": "mrr_at_5", "value": 32.496}, {"type": "ndcg_at_1", "value": 26.466}, {"type": "ndcg_at_10", "value": 33.372}, {"type": "ndcg_at_100", "value": 38.7}, {"type": "ndcg_at_1000", "value": 41.696}, {"type": "ndcg_at_3", "value": 29.443}, {"type": "ndcg_at_5", "value": 31.121}, {"type": "precision_at_1", "value": 26.466}, {"type": "precision_at_10", "value": 6.037}, {"type": "precision_at_100", "value": 1.0670000000000002}, {"type": "precision_at_1000", "value": 0.16199999999999998}, {"type": "precision_at_3", "value": 13.782}, {"type": "precision_at_5", "value": 9.757}, {"type": "recall_at_1", "value": 22.139}, {"type": "recall_at_10", "value": 42.39}, {"type": "recall_at_100", "value": 65.427}, {"type": "recall_at_1000", "value": 86.04899999999999}, {"type": "recall_at_3", "value": 31.127}, {"type": "recall_at_5", "value": 35.717999999999996}, {"type": "map_at_1", "value": 20.652}, {"type": "map_at_10", "value": 27.558}, {"type": "map_at_100", "value": 28.473}, {"type": "map_at_1000", "value": 28.577}, {"type": "map_at_3", "value": 25.402}, {"type": "map_at_5", "value": 26.68}, {"type": "mrr_at_1", "value": 25.223000000000003}, {"type": "mrr_at_10", "value": 31.966}, {"type": "mrr_at_100", "value": 32.664}, {"type": "mrr_at_1000", "value": 32.724}, {"type": "mrr_at_3", "value": 30.074}, {"type": "mrr_at_5", "value": 31.249}, {"type": "ndcg_at_1", "value": 25.223000000000003}, {"type": "ndcg_at_10", "value": 31.694}, {"type": "ndcg_at_100", "value": 35.662}, {"type": "ndcg_at_1000", "value": 38.092}, {"type": "ndcg_at_3", "value": 28.294000000000004}, {"type": "ndcg_at_5", "value": 30.049}, {"type": "precision_at_1", "value": 25.223000000000003}, {"type": "precision_at_10", "value": 5.777}, {"type": "precision_at_100", "value": 0.9730000000000001}, {"type": "precision_at_1000", "value": 0.13999999999999999}, {"type": "precision_at_3", "value": 13.397}, {"type": "precision_at_5", "value": 9.605}, {"type": "recall_at_1", "value": 20.652}, {"type": "recall_at_10", "value": 39.367999999999995}, 
{"type": "recall_at_100", "value": 56.485}, {"type": "recall_at_1000", "value": 73.292}, {"type": "recall_at_3", "value": 29.830000000000002}, {"type": "recall_at_5", "value": 34.43}, {"type": "map_at_1", "value": 25.180000000000003}, {"type": "map_at_10", "value": 34.579}, {"type": "map_at_100", "value": 35.589999999999996}, {"type": "map_at_1000", "value": 35.68}, {"type": "map_at_3", "value": 31.735999999999997}, {"type": "map_at_5", "value": 33.479}, {"type": "mrr_at_1", "value": 29.467}, {"type": "mrr_at_10", "value": 37.967}, {"type": "mrr_at_100", "value": 38.800000000000004}, {"type": "mrr_at_1000", "value": 38.858}, {"type": "mrr_at_3", "value": 35.465}, {"type": "mrr_at_5", "value": 37.057}, {"type": "ndcg_at_1", "value": 29.467}, {"type": "ndcg_at_10", "value": 39.796}, {"type": "ndcg_at_100", "value": 44.531}, {"type": "ndcg_at_1000", "value": 46.666000000000004}, {"type": "ndcg_at_3", "value": 34.676}, {"type": "ndcg_at_5", "value": 37.468}, {"type": "precision_at_1", "value": 29.467}, {"type": "precision_at_10", "value": 6.601999999999999}, {"type": "precision_at_100", "value": 0.9900000000000001}, {"type": "precision_at_1000", "value": 0.124}, {"type": "precision_at_3", "value": 15.568999999999999}, {"type": "precision_at_5", "value": 11.172}, {"type": "recall_at_1", "value": 25.180000000000003}, {"type": "recall_at_10", "value": 52.269}, {"type": "recall_at_100", "value": 73.574}, {"type": "recall_at_1000", "value": 89.141}, {"type": "recall_at_3", "value": 38.522}, {"type": "recall_at_5", "value": 45.323}, {"type": "map_at_1", "value": 16.303}, {"type": "map_at_10", "value": 21.629}, {"type": "map_at_100", "value": 22.387999999999998}, {"type": "map_at_1000", "value": 22.489}, {"type": "map_at_3", "value": 19.608}, {"type": "map_at_5", "value": 20.774}, {"type": "mrr_at_1", "value": 17.740000000000002}, {"type": "mrr_at_10", "value": 23.214000000000002}, {"type": "mrr_at_100", "value": 23.97}, {"type": "mrr_at_1000", "value": 24.054000000000002}, {"type": "mrr_at_3", "value": 21.243000000000002}, {"type": "mrr_at_5", "value": 22.322}, {"type": "ndcg_at_1", "value": 17.740000000000002}, {"type": "ndcg_at_10", "value": 25.113000000000003}, {"type": "ndcg_at_100", "value": 29.287999999999997}, {"type": "ndcg_at_1000", "value": 32.204}, {"type": "ndcg_at_3", "value": 21.111}, {"type": "ndcg_at_5", "value": 23.061999999999998}, {"type": "precision_at_1", "value": 17.740000000000002}, {"type": "precision_at_10", "value": 3.955}, {"type": "precision_at_100", "value": 0.644}, {"type": "precision_at_1000", "value": 0.093}, {"type": "precision_at_3", "value": 8.851}, {"type": "precision_at_5", "value": 6.418}, {"type": "recall_at_1", "value": 16.303}, {"type": "recall_at_10", "value": 34.487}, {"type": "recall_at_100", "value": 54.413999999999994}, {"type": "recall_at_1000", "value": 77.158}, {"type": "recall_at_3", "value": 23.733}, {"type": "recall_at_5", "value": 28.381}, {"type": "map_at_1", "value": 10.133000000000001}, {"type": "map_at_10", "value": 15.665999999999999}, {"type": "map_at_100", "value": 16.592000000000002}, {"type": "map_at_1000", "value": 16.733999999999998}, {"type": "map_at_3", "value": 13.625000000000002}, {"type": "map_at_5", "value": 14.721}, {"type": "mrr_at_1", "value": 12.562000000000001}, {"type": "mrr_at_10", "value": 18.487000000000002}, {"type": "mrr_at_100", "value": 19.391}, {"type": "mrr_at_1000", "value": 19.487}, {"type": "mrr_at_3", "value": 16.418}, {"type": "mrr_at_5", "value": 17.599999999999998}, {"type": "ndcg_at_1", "value": 
12.562000000000001}, {"type": "ndcg_at_10", "value": 19.43}, {"type": "ndcg_at_100", "value": 24.546}, {"type": "ndcg_at_1000", "value": 28.193}, {"type": "ndcg_at_3", "value": 15.509999999999998}, {"type": "ndcg_at_5", "value": 17.322000000000003}, {"type": "precision_at_1", "value": 12.562000000000001}, {"type": "precision_at_10", "value": 3.794}, {"type": "precision_at_100", "value": 0.74}, {"type": "precision_at_1000", "value": 0.122}, {"type": "precision_at_3", "value": 7.546}, {"type": "precision_at_5", "value": 5.721}, {"type": "recall_at_1", "value": 10.133000000000001}, {"type": "recall_at_10", "value": 28.261999999999997}, {"type": "recall_at_100", "value": 51.742999999999995}, {"type": "recall_at_1000", "value": 78.075}, {"type": "recall_at_3", "value": 17.634}, {"type": "recall_at_5", "value": 22.128999999999998}, {"type": "map_at_1", "value": 19.991999999999997}, {"type": "map_at_10", "value": 27.346999999999998}, {"type": "map_at_100", "value": 28.582}, {"type": "map_at_1000", "value": 28.716}, {"type": "map_at_3", "value": 24.907}, {"type": "map_at_5", "value": 26.1}, {"type": "mrr_at_1", "value": 23.773}, {"type": "mrr_at_10", "value": 31.647}, {"type": "mrr_at_100", "value": 32.639}, {"type": "mrr_at_1000", "value": 32.706}, {"type": "mrr_at_3", "value": 29.195}, {"type": "mrr_at_5", "value": 30.484}, {"type": "ndcg_at_1", "value": 23.773}, {"type": "ndcg_at_10", "value": 32.322}, {"type": "ndcg_at_100", "value": 37.996}, {"type": "ndcg_at_1000", "value": 40.819}, {"type": "ndcg_at_3", "value": 27.876}, {"type": "ndcg_at_5", "value": 29.664}, {"type": "precision_at_1", "value": 23.773}, {"type": "precision_at_10", "value": 5.976999999999999}, {"type": "precision_at_100", "value": 1.055}, {"type": "precision_at_1000", "value": 0.15}, {"type": "precision_at_3", "value": 13.122}, {"type": "precision_at_5", "value": 9.451}, {"type": "recall_at_1", "value": 19.991999999999997}, {"type": "recall_at_10", "value": 43.106}, {"type": "recall_at_100", "value": 67.264}, {"type": "recall_at_1000", "value": 86.386}, {"type": "recall_at_3", "value": 30.392000000000003}, {"type": "recall_at_5", "value": 34.910999999999994}, {"type": "map_at_1", "value": 17.896}, {"type": "map_at_10", "value": 24.644}, {"type": "map_at_100", "value": 25.790000000000003}, {"type": "map_at_1000", "value": 25.913999999999998}, {"type": "map_at_3", "value": 22.694}, {"type": "map_at_5", "value": 23.69}, {"type": "mrr_at_1", "value": 21.346999999999998}, {"type": "mrr_at_10", "value": 28.594}, {"type": "mrr_at_100", "value": 29.543999999999997}, {"type": "mrr_at_1000", "value": 29.621}, {"type": "mrr_at_3", "value": 26.807}, {"type": "mrr_at_5", "value": 27.669}, {"type": "ndcg_at_1", "value": 21.346999999999998}, {"type": "ndcg_at_10", "value": 28.833}, {"type": "ndcg_at_100", "value": 34.272000000000006}, {"type": "ndcg_at_1000", "value": 37.355}, {"type": "ndcg_at_3", "value": 25.373}, {"type": "ndcg_at_5", "value": 26.756}, {"type": "precision_at_1", "value": 21.346999999999998}, {"type": "precision_at_10", "value": 5.2170000000000005}, {"type": "precision_at_100", "value": 0.954}, {"type": "precision_at_1000", "value": 0.13899999999999998}, {"type": "precision_at_3", "value": 11.948}, {"type": "precision_at_5", "value": 8.425}, {"type": "recall_at_1", "value": 17.896}, {"type": "recall_at_10", "value": 37.291000000000004}, {"type": "recall_at_100", "value": 61.138000000000005}, {"type": "recall_at_1000", "value": 83.212}, {"type": "recall_at_3", "value": 27.705999999999996}, {"type": "recall_at_5", 
"value": 31.234}, {"type": "map_at_1", "value": 17.195166666666665}, {"type": "map_at_10", "value": 23.329083333333333}, {"type": "map_at_100", "value": 24.30308333333333}, {"type": "map_at_1000", "value": 24.422416666666667}, {"type": "map_at_3", "value": 21.327416666666664}, {"type": "map_at_5", "value": 22.419999999999998}, {"type": "mrr_at_1", "value": 19.999916666666667}, {"type": "mrr_at_10", "value": 26.390166666666666}, {"type": "mrr_at_100", "value": 27.230999999999998}, {"type": "mrr_at_1000", "value": 27.308333333333334}, {"type": "mrr_at_3", "value": 24.4675}, {"type": "mrr_at_5", "value": 25.541083333333336}, {"type": "ndcg_at_1", "value": 19.999916666666667}, {"type": "ndcg_at_10", "value": 27.248666666666665}, {"type": "ndcg_at_100", "value": 32.00258333333334}, {"type": "ndcg_at_1000", "value": 34.9465}, {"type": "ndcg_at_3", "value": 23.58566666666667}, {"type": "ndcg_at_5", "value": 25.26341666666666}, {"type": "precision_at_1", "value": 19.999916666666667}, {"type": "precision_at_10", "value": 4.772166666666666}, {"type": "precision_at_100", "value": 0.847}, {"type": "precision_at_1000", "value": 0.12741666666666668}, {"type": "precision_at_3", "value": 10.756166666666669}, {"type": "precision_at_5", "value": 7.725416666666667}, {"type": "recall_at_1", "value": 17.195166666666665}, {"type": "recall_at_10", "value": 35.99083333333334}, {"type": "recall_at_100", "value": 57.467999999999996}, {"type": "recall_at_1000", "value": 78.82366666666667}, {"type": "recall_at_3", "value": 25.898499999999995}, {"type": "recall_at_5", "value": 30.084333333333333}, {"type": "map_at_1", "value": 16.779}, {"type": "map_at_10", "value": 21.557000000000002}, {"type": "map_at_100", "value": 22.338}, {"type": "map_at_1000", "value": 22.421}, {"type": "map_at_3", "value": 19.939}, {"type": "map_at_5", "value": 20.903}, {"type": "mrr_at_1", "value": 18.404999999999998}, {"type": "mrr_at_10", "value": 23.435}, {"type": "mrr_at_100", "value": 24.179000000000002}, {"type": "mrr_at_1000", "value": 24.25}, {"type": "mrr_at_3", "value": 21.907}, {"type": "mrr_at_5", "value": 22.781000000000002}, {"type": "ndcg_at_1", "value": 18.404999999999998}, {"type": "ndcg_at_10", "value": 24.515}, {"type": "ndcg_at_100", "value": 28.721000000000004}, {"type": "ndcg_at_1000", "value": 31.259999999999998}, {"type": "ndcg_at_3", "value": 21.508}, {"type": "ndcg_at_5", "value": 23.01}, {"type": "precision_at_1", "value": 18.404999999999998}, {"type": "precision_at_10", "value": 3.834}, {"type": "precision_at_100", "value": 0.641}, {"type": "precision_at_1000", "value": 0.093}, {"type": "precision_at_3", "value": 9.151}, {"type": "precision_at_5", "value": 6.503}, {"type": "recall_at_1", "value": 16.779}, {"type": "recall_at_10", "value": 31.730000000000004}, {"type": "recall_at_100", "value": 51.673}, {"type": "recall_at_1000", "value": 71.17599999999999}, {"type": "recall_at_3", "value": 23.518}, {"type": "recall_at_5", "value": 27.230999999999998}, {"type": "map_at_1", "value": 9.279}, {"type": "map_at_10", "value": 13.822000000000001}, {"type": "map_at_100", "value": 14.533}, {"type": "map_at_1000", "value": 14.649999999999999}, {"type": "map_at_3", "value": 12.396}, {"type": "map_at_5", "value": 13.214}, {"type": "mrr_at_1", "value": 11.149000000000001}, {"type": "mrr_at_10", "value": 16.139}, {"type": "mrr_at_100", "value": 16.872}, {"type": "mrr_at_1000", "value": 16.964000000000002}, {"type": "mrr_at_3", "value": 14.613000000000001}, {"type": "mrr_at_5", "value": 15.486}, {"type": "ndcg_at_1", "value": 
11.149000000000001}, {"type": "ndcg_at_10", "value": 16.82}, {"type": "ndcg_at_100", "value": 20.73}, {"type": "ndcg_at_1000", "value": 23.894000000000002}, {"type": "ndcg_at_3", "value": 14.11}, {"type": "ndcg_at_5", "value": 15.404000000000002}, {"type": "precision_at_1", "value": 11.149000000000001}, {"type": "precision_at_10", "value": 3.063}, {"type": "precision_at_100", "value": 0.587}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 6.699}, {"type": "precision_at_5", "value": 4.928}, {"type": "recall_at_1", "value": 9.279}, {"type": "recall_at_10", "value": 23.745}, {"type": "recall_at_100", "value": 41.873}, {"type": "recall_at_1000", "value": 64.982}, {"type": "recall_at_3", "value": 16.152}, {"type": "recall_at_5", "value": 19.409000000000002}, {"type": "map_at_1", "value": 16.36}, {"type": "map_at_10", "value": 21.927}, {"type": "map_at_100", "value": 22.889}, {"type": "map_at_1000", "value": 22.994}, {"type": "map_at_3", "value": 20.433}, {"type": "map_at_5", "value": 21.337}, {"type": "mrr_at_1", "value": 18.75}, {"type": "mrr_at_10", "value": 24.859}, {"type": "mrr_at_100", "value": 25.746999999999996}, {"type": "mrr_at_1000", "value": 25.829}, {"type": "mrr_at_3", "value": 23.383000000000003}, {"type": "mrr_at_5", "value": 24.297}, {"type": "ndcg_at_1", "value": 18.75}, {"type": "ndcg_at_10", "value": 25.372}, {"type": "ndcg_at_100", "value": 30.342999999999996}, {"type": "ndcg_at_1000", "value": 33.286}, {"type": "ndcg_at_3", "value": 22.627}, {"type": "ndcg_at_5", "value": 24.04}, {"type": "precision_at_1", "value": 18.75}, {"type": "precision_at_10", "value": 4.1419999999999995}, {"type": "precision_at_100", "value": 0.738}, {"type": "precision_at_1000", "value": 0.11100000000000002}, {"type": "precision_at_3", "value": 10.261000000000001}, {"type": "precision_at_5", "value": 7.164}, {"type": "recall_at_1", "value": 16.36}, {"type": "recall_at_10", "value": 32.949}, {"type": "recall_at_100", "value": 55.552}, {"type": "recall_at_1000", "value": 77.09899999999999}, {"type": "recall_at_3", "value": 25.538}, {"type": "recall_at_5", "value": 29.008}, {"type": "map_at_1", "value": 17.39}, {"type": "map_at_10", "value": 23.058}, {"type": "map_at_100", "value": 24.445}, {"type": "map_at_1000", "value": 24.637999999999998}, {"type": "map_at_3", "value": 21.037}, {"type": "map_at_5", "value": 21.966}, {"type": "mrr_at_1", "value": 19.96}, {"type": "mrr_at_10", "value": 26.301000000000002}, {"type": "mrr_at_100", "value": 27.297}, {"type": "mrr_at_1000", "value": 27.375}, {"type": "mrr_at_3", "value": 24.340999999999998}, {"type": "mrr_at_5", "value": 25.339}, {"type": "ndcg_at_1", "value": 19.96}, {"type": "ndcg_at_10", "value": 27.249000000000002}, {"type": "ndcg_at_100", "value": 32.997}, {"type": "ndcg_at_1000", "value": 36.359}, {"type": "ndcg_at_3", "value": 23.519000000000002}, {"type": "ndcg_at_5", "value": 24.915000000000003}, {"type": "precision_at_1", "value": 19.96}, {"type": "precision_at_10", "value": 5.356000000000001}, {"type": "precision_at_100", "value": 1.198}, {"type": "precision_at_1000", "value": 0.20400000000000001}, {"type": "precision_at_3", "value": 10.738}, {"type": "precision_at_5", "value": 7.904999999999999}, {"type": "recall_at_1", "value": 17.39}, {"type": "recall_at_10", "value": 35.254999999999995}, {"type": "recall_at_100", "value": 61.351}, {"type": "recall_at_1000", "value": 84.395}, {"type": "recall_at_3", "value": 25.194}, {"type": "recall_at_5", "value": 28.546}, {"type": "map_at_1", "value": 
14.238999999999999}, {"type": "map_at_10", "value": 19.323}, {"type": "map_at_100", "value": 19.994}, {"type": "map_at_1000", "value": 20.102999999999998}, {"type": "map_at_3", "value": 17.631}, {"type": "map_at_5", "value": 18.401}, {"type": "mrr_at_1", "value": 15.157000000000002}, {"type": "mrr_at_10", "value": 20.578}, {"type": "mrr_at_100", "value": 21.252}, {"type": "mrr_at_1000", "value": 21.346999999999998}, {"type": "mrr_at_3", "value": 18.762}, {"type": "mrr_at_5", "value": 19.713}, {"type": "ndcg_at_1", "value": 15.157000000000002}, {"type": "ndcg_at_10", "value": 22.468}, {"type": "ndcg_at_100", "value": 26.245}, {"type": "ndcg_at_1000", "value": 29.534}, {"type": "ndcg_at_3", "value": 18.981}, {"type": "ndcg_at_5", "value": 20.349999999999998}, {"type": "precision_at_1", "value": 15.157000000000002}, {"type": "precision_at_10", "value": 3.512}, {"type": "precision_at_100", "value": 0.577}, {"type": "precision_at_1000", "value": 0.091}, {"type": "precision_at_3", "value": 8.01}, {"type": "precision_at_5", "value": 5.656}, {"type": "recall_at_1", "value": 14.238999999999999}, {"type": "recall_at_10", "value": 31.038}, {"type": "recall_at_100", "value": 49.122}, {"type": "recall_at_1000", "value": 74.919}, {"type": "recall_at_3", "value": 21.436}, {"type": "recall_at_5", "value": 24.692}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ClimateFEVER", "type": "climate-fever", "config": "default", "split": "test", "revision": "392b78eb68c07badcd7c2cd8f39af108375dfcce"}, "metrics": [{"type": "map_at_1", "value": 8.828}, {"type": "map_at_10", "value": 14.982000000000001}, {"type": "map_at_100", "value": 16.495}, {"type": "map_at_1000", "value": 16.658}, {"type": "map_at_3", "value": 12.366000000000001}, {"type": "map_at_5", "value": 13.655000000000001}, {"type": "mrr_at_1", "value": 19.088}, {"type": "mrr_at_10", "value": 29.29}, {"type": "mrr_at_100", "value": 30.291}, {"type": "mrr_at_1000", "value": 30.342000000000002}, {"type": "mrr_at_3", "value": 25.907000000000004}, {"type": "mrr_at_5", "value": 27.840999999999998}, {"type": "ndcg_at_1", "value": 19.088}, {"type": "ndcg_at_10", "value": 21.858}, {"type": "ndcg_at_100", "value": 28.323999999999998}, {"type": "ndcg_at_1000", "value": 31.561}, {"type": "ndcg_at_3", "value": 17.175}, {"type": "ndcg_at_5", "value": 18.869}, {"type": "precision_at_1", "value": 19.088}, {"type": "precision_at_10", "value": 6.9190000000000005}, {"type": "precision_at_100", "value": 1.376}, {"type": "precision_at_1000", "value": 0.197}, {"type": "precision_at_3", "value": 12.703999999999999}, {"type": "precision_at_5", "value": 9.993}, {"type": "recall_at_1", "value": 8.828}, {"type": "recall_at_10", "value": 27.381}, {"type": "recall_at_100", "value": 50.0}, {"type": "recall_at_1000", "value": 68.355}, {"type": "recall_at_3", "value": 16.118}, {"type": "recall_at_5", "value": 20.587}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia", "type": "dbpedia-entity", "config": "default", "split": "test", "revision": "f097057d03ed98220bc7309ddb10b71a54d667d6"}, "metrics": [{"type": "map_at_1", "value": 5.586}, {"type": "map_at_10", "value": 10.040000000000001}, {"type": "map_at_100", "value": 12.55}, {"type": "map_at_1000", "value": 13.123999999999999}, {"type": "map_at_3", "value": 7.75}, {"type": "map_at_5", "value": 8.835999999999999}, {"type": "mrr_at_1", "value": 42.25}, {"type": "mrr_at_10", "value": 51.205999999999996}, {"type": "mrr_at_100", "value": 51.818}, {"type": "mrr_at_1000", "value": 51.855}, {"type": "mrr_at_3", 
"value": 48.875}, {"type": "mrr_at_5", "value": 50.488}, {"type": "ndcg_at_1", "value": 32.25}, {"type": "ndcg_at_10", "value": 22.718}, {"type": "ndcg_at_100", "value": 24.359}, {"type": "ndcg_at_1000", "value": 29.232000000000003}, {"type": "ndcg_at_3", "value": 25.974000000000004}, {"type": "ndcg_at_5", "value": 24.291999999999998}, {"type": "precision_at_1", "value": 42.25}, {"type": "precision_at_10", "value": 17.75}, {"type": "precision_at_100", "value": 5.032}, {"type": "precision_at_1000", "value": 1.117}, {"type": "precision_at_3", "value": 28.833}, {"type": "precision_at_5", "value": 24.25}, {"type": "recall_at_1", "value": 5.586}, {"type": "recall_at_10", "value": 14.16}, {"type": "recall_at_100", "value": 28.051}, {"type": "recall_at_1000", "value": 45.157000000000004}, {"type": "recall_at_3", "value": 8.758000000000001}, {"type": "recall_at_5", "value": 10.975999999999999}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB EmotionClassification", "type": "mteb/emotion", "config": "default", "split": "test", "revision": "829147f8f75a25f005913200eb5ed41fae320aa1"}, "metrics": [{"type": "accuracy", "value": 39.075}, {"type": "f1", "value": 35.01420354708222}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FEVER", "type": "fever", "config": "default", "split": "test", "revision": "1429cf27e393599b8b359b9b72c666f96b2525f9"}, "metrics": [{"type": "map_at_1", "value": 43.519999999999996}, {"type": "map_at_10", "value": 54.368}, {"type": "map_at_100", "value": 54.918}, {"type": "map_at_1000", "value": 54.942}, {"type": "map_at_3", "value": 51.712}, {"type": "map_at_5", "value": 53.33599999999999}, {"type": "mrr_at_1", "value": 46.955000000000005}, {"type": "mrr_at_10", "value": 58.219}, {"type": "mrr_at_100", "value": 58.73500000000001}, {"type": "mrr_at_1000", "value": 58.753}, {"type": "mrr_at_3", "value": 55.518}, {"type": "mrr_at_5", "value": 57.191}, {"type": "ndcg_at_1", "value": 46.955000000000005}, {"type": "ndcg_at_10", "value": 60.45}, {"type": "ndcg_at_100", "value": 63.047}, {"type": "ndcg_at_1000", "value": 63.712999999999994}, {"type": "ndcg_at_3", "value": 55.233}, {"type": "ndcg_at_5", "value": 58.072}, {"type": "precision_at_1", "value": 46.955000000000005}, {"type": "precision_at_10", "value": 8.267}, {"type": "precision_at_100", "value": 0.962}, {"type": "precision_at_1000", "value": 0.10300000000000001}, {"type": "precision_at_3", "value": 22.326999999999998}, {"type": "precision_at_5", "value": 14.940999999999999}, {"type": "recall_at_1", "value": 43.519999999999996}, {"type": "recall_at_10", "value": 75.632}, {"type": "recall_at_100", "value": 87.41600000000001}, {"type": "recall_at_1000", "value": 92.557}, {"type": "recall_at_3", "value": 61.597}, {"type": "recall_at_5", "value": 68.518}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA2018", "type": "fiqa", "config": "default", "split": "test", "revision": "41b686a7f28c59bcaaa5791efd47c67c8ebe28be"}, "metrics": [{"type": "map_at_1", "value": 9.549000000000001}, {"type": "map_at_10", "value": 15.762}, {"type": "map_at_100", "value": 17.142}, {"type": "map_at_1000", "value": 17.329}, {"type": "map_at_3", "value": 13.575000000000001}, {"type": "map_at_5", "value": 14.754000000000001}, {"type": "mrr_at_1", "value": 19.753}, {"type": "mrr_at_10", "value": 26.568}, {"type": "mrr_at_100", "value": 27.606}, {"type": "mrr_at_1000", "value": 27.68}, {"type": "mrr_at_3", "value": 24.203}, {"type": "mrr_at_5", "value": 25.668999999999997}, {"type": "ndcg_at_1", "value": 19.753}, 
{"type": "ndcg_at_10", "value": 21.118000000000002}, {"type": "ndcg_at_100", "value": 27.308}, {"type": "ndcg_at_1000", "value": 31.304}, {"type": "ndcg_at_3", "value": 18.319}, {"type": "ndcg_at_5", "value": 19.414}, {"type": "precision_at_1", "value": 19.753}, {"type": "precision_at_10", "value": 6.08}, {"type": "precision_at_100", "value": 1.204}, {"type": "precision_at_1000", "value": 0.192}, {"type": "precision_at_3", "value": 12.191}, {"type": "precision_at_5", "value": 9.383}, {"type": "recall_at_1", "value": 9.549000000000001}, {"type": "recall_at_10", "value": 26.131}, {"type": "recall_at_100", "value": 50.544999999999995}, {"type": "recall_at_1000", "value": 74.968}, {"type": "recall_at_3", "value": 16.951}, {"type": "recall_at_5", "value": 20.95}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA", "type": "hotpotqa", "config": "default", "split": "test", "revision": "766870b35a1b9ca65e67a0d1913899973551fc6c"}, "metrics": [{"type": "map_at_1", "value": 25.544}, {"type": "map_at_10", "value": 32.62}, {"type": "map_at_100", "value": 33.275}, {"type": "map_at_1000", "value": 33.344}, {"type": "map_at_3", "value": 30.851}, {"type": "map_at_5", "value": 31.868999999999996}, {"type": "mrr_at_1", "value": 51.087}, {"type": "mrr_at_10", "value": 57.704}, {"type": "mrr_at_100", "value": 58.175}, {"type": "mrr_at_1000", "value": 58.207}, {"type": "mrr_at_3", "value": 56.106}, {"type": "mrr_at_5", "value": 57.074000000000005}, {"type": "ndcg_at_1", "value": 51.087}, {"type": "ndcg_at_10", "value": 40.876000000000005}, {"type": "ndcg_at_100", "value": 43.762}, {"type": "ndcg_at_1000", "value": 45.423}, {"type": "ndcg_at_3", "value": 37.65}, {"type": "ndcg_at_5", "value": 39.305}, {"type": "precision_at_1", "value": 51.087}, {"type": "precision_at_10", "value": 8.304}, {"type": "precision_at_100", "value": 1.059}, {"type": "precision_at_1000", "value": 0.128}, {"type": "precision_at_3", "value": 22.875999999999998}, {"type": "precision_at_5", "value": 15.033}, {"type": "recall_at_1", "value": 25.544}, {"type": "recall_at_10", "value": 41.519}, {"type": "recall_at_100", "value": 52.957}, {"type": "recall_at_1000", "value": 64.132}, {"type": "recall_at_3", "value": 34.315}, {"type": "recall_at_5", "value": 37.583}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ImdbClassification", "type": "mteb/imdb", "config": "default", "split": "test", "revision": "8d743909f834c38949e8323a8a6ce8721ea6c7f4"}, "metrics": [{"type": "accuracy", "value": 58.6696}, {"type": "ap", "value": 55.3644880984279}, {"type": "f1", "value": 58.07942097405652}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MSMARCO", "type": "msmarco", "config": "default", "split": "validation", "revision": "e6838a846e2408f22cf5cc337ebc83e0bcf77849"}, "metrics": [{"type": "map_at_1", "value": 14.442}, {"type": "map_at_10", "value": 22.932}, {"type": "map_at_100", "value": 24.132}, {"type": "map_at_1000", "value": 24.213}, {"type": "map_at_3", "value": 20.002}, {"type": "map_at_5", "value": 21.636}, {"type": "mrr_at_1", "value": 14.841999999999999}, {"type": "mrr_at_10", "value": 23.416}, {"type": "mrr_at_100", "value": 24.593999999999998}, {"type": "mrr_at_1000", "value": 24.669}, {"type": "mrr_at_3", "value": 20.494}, {"type": "mrr_at_5", "value": 22.14}, {"type": "ndcg_at_1", "value": 14.841999999999999}, {"type": "ndcg_at_10", "value": 27.975}, {"type": "ndcg_at_100", "value": 34.143}, {"type": "ndcg_at_1000", "value": 36.370000000000005}, {"type": "ndcg_at_3", "value": 21.944}, {"type": 
"ndcg_at_5", "value": 24.881}, {"type": "precision_at_1", "value": 14.841999999999999}, {"type": "precision_at_10", "value": 4.537}, {"type": "precision_at_100", "value": 0.767}, {"type": "precision_at_1000", "value": 0.096}, {"type": "precision_at_3", "value": 9.322}, {"type": "precision_at_5", "value": 7.074}, {"type": "recall_at_1", "value": 14.442}, {"type": "recall_at_10", "value": 43.557}, {"type": "recall_at_100", "value": 72.904}, {"type": "recall_at_1000", "value": 90.40700000000001}, {"type": "recall_at_3", "value": 27.088}, {"type": "recall_at_5", "value": 34.144000000000005}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "mteb/mtop_domain", "config": "en", "split": "test", "revision": "a7e2a951126a26fc8c6a69f835f33a346ba259e3"}, "metrics": [{"type": "accuracy", "value": 86.95622435020519}, {"type": "f1", "value": 86.58363130708494}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (de)", "type": "mteb/mtop_domain", "config": "de", "split": "test", "revision": "a7e2a951126a26fc8c6a69f835f33a346ba259e3"}, "metrics": [{"type": "accuracy", "value": 62.73034657650043}, {"type": "f1", "value": 60.78623915840713}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (es)", "type": "mteb/mtop_domain", "config": "es", "split": "test", "revision": "a7e2a951126a26fc8c6a69f835f33a346ba259e3"}, "metrics": [{"type": "accuracy", "value": 67.54503002001334}, {"type": "f1", "value": 65.34879794116112}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (fr)", "type": "mteb/mtop_domain", "config": "fr", "split": "test", "revision": "a7e2a951126a26fc8c6a69f835f33a346ba259e3"}, "metrics": [{"type": "accuracy", "value": 65.35233322893829}, {"type": "f1", "value": 62.994001882446646}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (hi)", "type": "mteb/mtop_domain", "config": "hi", "split": "test", "revision": "a7e2a951126a26fc8c6a69f835f33a346ba259e3"}, "metrics": [{"type": "accuracy", "value": 45.37110075295806}, {"type": "f1", "value": 44.26285860740745}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (th)", "type": "mteb/mtop_domain", "config": "th", "split": "test", "revision": "a7e2a951126a26fc8c6a69f835f33a346ba259e3"}, "metrics": [{"type": "accuracy", "value": 55.276672694394215}, {"type": "f1", "value": 53.28388179869587}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": "mteb/mtop_intent", "config": "en", "split": "test", "revision": "6299947a7777084cc2d4b64235bf7190381ce755"}, "metrics": [{"type": "accuracy", "value": 62.25262197902417}, {"type": "f1", "value": 43.44084037148853}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (de)", "type": "mteb/mtop_intent", "config": "de", "split": "test", "revision": "6299947a7777084cc2d4b64235bf7190381ce755"}, "metrics": [{"type": "accuracy", "value": 49.56043956043956}, {"type": "f1", "value": 32.86333673498598}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (es)", "type": "mteb/mtop_intent", "config": "es", "split": "test", "revision": "6299947a7777084cc2d4b64235bf7190381ce755"}, "metrics": [{"type": "accuracy", "value": 49.93995997331555}, {"type": "f1", "value": 34.726671876888126}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB 
MTOPIntentClassification (fr)", "type": "mteb/mtop_intent", "config": "fr", "split": "test", "revision": "6299947a7777084cc2d4b64235bf7190381ce755"}, "metrics": [{"type": "accuracy", "value": 46.32947071719386}, {"type": "f1", "value": 32.325273615982795}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (hi)", "type": "mteb/mtop_intent", "config": "hi", "split": "test", "revision": "6299947a7777084cc2d4b64235bf7190381ce755"}, "metrics": [{"type": "accuracy", "value": 32.208676945141626}, {"type": "f1", "value": 21.32185122815139}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (th)", "type": "mteb/mtop_intent", "config": "th", "split": "test", "revision": "6299947a7777084cc2d4b64235bf7190381ce755"}, "metrics": [{"type": "accuracy", "value": 43.627486437613015}, {"type": "f1", "value": 27.04872922347508}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (af)", "type": "mteb/amazon_massive_intent", "config": "af", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 40.548083389374575}, {"type": "f1", "value": 39.490307545239716}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (am)", "type": "mteb/amazon_massive_intent", "config": "am", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 24.18291862811029}, {"type": "f1", "value": 23.437620034727473}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ar)", "type": "mteb/amazon_massive_intent", "config": "ar", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 30.134498991257562}, {"type": "f1", "value": 28.787175191531283}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (az)", "type": "mteb/amazon_massive_intent", "config": "az", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 35.88433086751849}, {"type": "f1", "value": 36.264500398782126}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (bn)", "type": "mteb/amazon_massive_intent", "config": "bn", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 29.17283120376597}, {"type": "f1", "value": 27.8101616531901}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (cy)", "type": "mteb/amazon_massive_intent", "config": "cy", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 41.788836583725626}, {"type": "f1", "value": 39.71413181054801}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (da)", "type": "mteb/amazon_massive_intent", "config": "da", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 44.176193678547406}, {"type": "f1", "value": 42.192499826552286}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (de)", "type": "mteb/amazon_massive_intent", "config": "de", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 42.07464694014795}, {"type": "f1", 
"value": 39.44188259183162}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (el)", "type": "mteb/amazon_massive_intent", "config": "el", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 36.254203093476804}, {"type": "f1", "value": 34.46592715936761}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": "mteb/amazon_massive_intent", "config": "en", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 61.40887693342301}, {"type": "f1", "value": 59.79854802683996}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (es)", "type": "mteb/amazon_massive_intent", "config": "es", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 42.679892400807}, {"type": "f1", "value": 42.04801248338172}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (fa)", "type": "mteb/amazon_massive_intent", "config": "fa", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 35.59179556153329}, {"type": "f1", "value": 34.045862930486166}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (fi)", "type": "mteb/amazon_massive_intent", "config": "fi", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 40.036987222595826}, {"type": "f1", "value": 38.117703439362785}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (fr)", "type": "mteb/amazon_massive_intent", "config": "fr", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 43.43981170141224}, {"type": "f1", "value": 42.7084388987865}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (he)", "type": "mteb/amazon_massive_intent", "config": "he", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 31.593813046402154}, {"type": "f1", "value": 29.98550522450782}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (hi)", "type": "mteb/amazon_massive_intent", "config": "hi", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 27.044384667114997}, {"type": "f1", "value": 27.313059184832667}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (hu)", "type": "mteb/amazon_massive_intent", "config": "hu", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 38.453261600538}, {"type": "f1", "value": 37.309189326110435}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (hy)", "type": "mteb/amazon_massive_intent", "config": "hy", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 27.979152656355076}, {"type": "f1", "value": 27.430939684346445}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (id)", "type": "mteb/amazon_massive_intent", "config": "id", "split": "test", 
"revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 43.97108271687963}, {"type": "f1", "value": 43.40585705688761}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (is)", "type": "mteb/amazon_massive_intent", "config": "is", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 40.302622730329524}, {"type": "f1", "value": 39.108052180520744}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (it)", "type": "mteb/amazon_massive_intent", "config": "it", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 45.474108944182916}, {"type": "f1", "value": 45.85950328241134}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ja)", "type": "mteb/amazon_massive_intent", "config": "ja", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 45.60860793544048}, {"type": "f1", "value": 43.94920708216737}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (jv)", "type": "mteb/amazon_massive_intent", "config": "jv", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 38.668459986550104}, {"type": "f1", "value": 37.6990034018859}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ka)", "type": "mteb/amazon_massive_intent", "config": "ka", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 25.6523201075992}, {"type": "f1", "value": 25.279084273189582}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (km)", "type": "mteb/amazon_massive_intent", "config": "km", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 28.295225285810353}, {"type": "f1", "value": 26.645825638771548}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (kn)", "type": "mteb/amazon_massive_intent", "config": "kn", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 23.480161398789505}, {"type": "f1", "value": 22.275241866506732}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ko)", "type": "mteb/amazon_massive_intent", "config": "ko", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 36.55682582380632}, {"type": "f1", "value": 36.004753171063605}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (lv)", "type": "mteb/amazon_massive_intent", "config": "lv", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 41.84936112979153}, {"type": "f1", "value": 41.38932672359119}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ml)", "type": "mteb/amazon_massive_intent", "config": "ml", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 24.90921318090114}, {"type": "f1", "value": 23.968687483768807}]}, {"task": {"type": "Classification"}, 
"dataset": {"name": "MTEB MassiveIntentClassification (mn)", "type": "mteb/amazon_massive_intent", "config": "mn", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 29.86213853396099}, {"type": "f1", "value": 29.977152075255407}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ms)", "type": "mteb/amazon_massive_intent", "config": "ms", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 42.42098184263618}, {"type": "f1", "value": 41.50877432664628}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (my)", "type": "mteb/amazon_massive_intent", "config": "my", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 25.131136516476126}, {"type": "f1", "value": 23.938932214086776}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (nb)", "type": "mteb/amazon_massive_intent", "config": "nb", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 39.81506388702084}, {"type": "f1", "value": 38.809586587791664}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (nl)", "type": "mteb/amazon_massive_intent", "config": "nl", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 43.62138533960995}, {"type": "f1", "value": 42.01386842914633}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (pl)", "type": "mteb/amazon_massive_intent", "config": "pl", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 42.19569603227976}, {"type": "f1", "value": 40.00556559825827}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (pt)", "type": "mteb/amazon_massive_intent", "config": "pt", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 45.20847343644923}, {"type": "f1", "value": 44.24115005029051}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ro)", "type": "mteb/amazon_massive_intent", "config": "ro", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 41.80901143241426}, {"type": "f1", "value": 40.474074848670085}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ru)", "type": "mteb/amazon_massive_intent", "config": "ru", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 35.96839273705447}, {"type": "f1", "value": 35.095456843621}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (sl)", "type": "mteb/amazon_massive_intent", "config": "sl", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 40.60524546065905}, {"type": "f1", "value": 39.302383051500136}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (sq)", "type": "mteb/amazon_massive_intent", "config": "sq", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": 
"accuracy", "value": 42.75722932078009}, {"type": "f1", "value": 41.53763931497389}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (sv)", "type": "mteb/amazon_massive_intent", "config": "sv", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 42.347007397444514}, {"type": "f1", "value": 41.04366017948627}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (sw)", "type": "mteb/amazon_massive_intent", "config": "sw", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 41.12306657700067}, {"type": "f1", "value": 39.712940473289024}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ta)", "type": "mteb/amazon_massive_intent", "config": "ta", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 24.603227975790183}, {"type": "f1", "value": 23.969236788828606}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (te)", "type": "mteb/amazon_massive_intent", "config": "te", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 25.03698722259583}, {"type": "f1", "value": 24.37196123281459}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (th)", "type": "mteb/amazon_massive_intent", "config": "th", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 35.40013449899126}, {"type": "f1", "value": 35.063600413688036}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (tl)", "type": "mteb/amazon_massive_intent", "config": "tl", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 41.19031607262945}, {"type": "f1", "value": 40.240432304273014}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (tr)", "type": "mteb/amazon_massive_intent", "config": "tr", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 36.405514458641555}, {"type": "f1", "value": 36.03844992856558}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ur)", "type": "mteb/amazon_massive_intent", "config": "ur", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 25.934767989240076}, {"type": "f1", "value": 25.2074457023531}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (vi)", "type": "mteb/amazon_massive_intent", "config": "vi", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 38.79959650302622}, {"type": "f1", "value": 37.160233794673125}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (zh-CN)", "type": "mteb/amazon_massive_intent", "config": "zh-CN", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 46.244115669132476}, {"type": "f1", "value": 44.367480561291906}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (zh-TW)", "type": 
"mteb/amazon_massive_intent", "config": "zh-TW", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 42.30665770006724}, {"type": "f1", "value": 41.9642223283514}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (af)", "type": "mteb/amazon_massive_scenario", "config": "af", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 43.2481506388702}, {"type": "f1", "value": 40.924230769590785}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (am)", "type": "mteb/amazon_massive_scenario", "config": "am", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 25.30262273032952}, {"type": "f1", "value": 24.937105830264066}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ar)", "type": "mteb/amazon_massive_scenario", "config": "ar", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 32.07128446536651}, {"type": "f1", "value": 31.80245816594883}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (az)", "type": "mteb/amazon_massive_scenario", "config": "az", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 36.681237390719566}, {"type": "f1", "value": 36.37219042508338}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (bn)", "type": "mteb/amazon_massive_scenario", "config": "bn", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 29.56624075319435}, {"type": "f1", "value": 28.386042056362758}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (cy)", "type": "mteb/amazon_massive_scenario", "config": "cy", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 42.1049092131809}, {"type": "f1", "value": 38.926150886991294}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (da)", "type": "mteb/amazon_massive_scenario", "config": "da", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 45.44384667114997}, {"type": "f1", "value": 42.578252395460005}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (de)", "type": "mteb/amazon_massive_scenario", "config": "de", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 43.211163416274374}, {"type": "f1", "value": 41.04465858304789}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (el)", "type": "mteb/amazon_massive_scenario", "config": "el", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 36.503026227303295}, {"type": "f1", "value": 34.49785095312759}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "mteb/amazon_massive_scenario", "config": "en", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 
69.73772696704773}, {"type": "f1", "value": 69.21759502909043}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (es)", "type": "mteb/amazon_massive_scenario", "config": "es", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 44.078681909885674}, {"type": "f1", "value": 43.05914426901129}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (fa)", "type": "mteb/amazon_massive_scenario", "config": "fa", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 32.61264290517821}, {"type": "f1", "value": 32.02463177462754}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (fi)", "type": "mteb/amazon_massive_scenario", "config": "fi", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 40.35642232683255}, {"type": "f1", "value": 38.13642481807678}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (fr)", "type": "mteb/amazon_massive_scenario", "config": "fr", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 45.06724949562878}, {"type": "f1", "value": 43.19827608343738}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (he)", "type": "mteb/amazon_massive_scenario", "config": "he", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 32.178883658372555}, {"type": "f1", "value": 29.979761884698775}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (hi)", "type": "mteb/amazon_massive_scenario", "config": "hi", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 26.903160726294555}, {"type": "f1", "value": 25.833010434083363}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (hu)", "type": "mteb/amazon_massive_scenario", "config": "hu", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 40.379959650302624}, {"type": "f1", "value": 37.93134355292882}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (hy)", "type": "mteb/amazon_massive_scenario", "config": "hy", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 28.375924680564896}, {"type": "f1", "value": 26.96255693013172}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (id)", "type": "mteb/amazon_massive_scenario", "config": "id", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 44.361129791526565}, {"type": "f1", "value": 43.54445012295126}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (is)", "type": "mteb/amazon_massive_scenario", "config": "is", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 39.290517821116346}, {"type": "f1", "value": 37.26982052174147}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (it)", 
"type": "mteb/amazon_massive_scenario", "config": "it", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 46.4694014794889}, {"type": "f1", "value": 44.060986162841566}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ja)", "type": "mteb/amazon_massive_scenario", "config": "ja", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 46.25756556825824}, {"type": "f1", "value": 45.625139456758816}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (jv)", "type": "mteb/amazon_massive_scenario", "config": "jv", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 41.12642905178212}, {"type": "f1", "value": 39.54392378396527}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ka)", "type": "mteb/amazon_massive_scenario", "config": "ka", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 24.72763954270343}, {"type": "f1", "value": 23.337743140804484}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (km)", "type": "mteb/amazon_massive_scenario", "config": "km", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 29.741089441829182}, {"type": "f1", "value": 27.570876190083748}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (kn)", "type": "mteb/amazon_massive_scenario", "config": "kn", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 23.850033624747816}, {"type": "f1", "value": 22.86733484540032}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ko)", "type": "mteb/amazon_massive_scenario", "config": "ko", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 36.56691324815064}, {"type": "f1", "value": 35.504081677134565}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (lv)", "type": "mteb/amazon_massive_scenario", "config": "lv", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 40.928043039677206}, {"type": "f1", "value": 39.108589131211254}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ml)", "type": "mteb/amazon_massive_scenario", "config": "ml", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 25.527908540685946}, {"type": "f1", "value": 25.333391622280477}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (mn)", "type": "mteb/amazon_massive_scenario", "config": "mn", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 29.105581708137183}, {"type": "f1", "value": 28.478235012692814}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ms)", "type": "mteb/amazon_massive_scenario", "config": "ms", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", 
"value": 43.78614660390047}, {"type": "f1", "value": 41.9640143926267}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (my)", "type": "mteb/amazon_massive_scenario", "config": "my", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 27.269670477471415}, {"type": "f1", "value": 26.228386764141852}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (nb)", "type": "mteb/amazon_massive_scenario", "config": "nb", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 39.018157363819775}, {"type": "f1", "value": 37.641949339321854}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (nl)", "type": "mteb/amazon_massive_scenario", "config": "nl", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 45.35978480161399}, {"type": "f1", "value": 42.6851176096831}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (pl)", "type": "mteb/amazon_massive_scenario", "config": "pl", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 41.89307330195023}, {"type": "f1", "value": 40.888710642615024}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (pt)", "type": "mteb/amazon_massive_scenario", "config": "pt", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 45.901143241425686}, {"type": "f1", "value": 44.496942353920545}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ro)", "type": "mteb/amazon_massive_scenario", "config": "ro", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 44.11566913248151}, {"type": "f1", "value": 41.953945105870616}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ru)", "type": "mteb/amazon_massive_scenario", "config": "ru", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 32.76395427034297}, {"type": "f1", "value": 31.436372571600934}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (sl)", "type": "mteb/amazon_massive_scenario", "config": "sl", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 40.504371217215876}, {"type": "f1", "value": 39.322752749628165}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (sq)", "type": "mteb/amazon_massive_scenario", "config": "sq", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 42.51849361129792}, {"type": "f1", "value": 41.4139297118463}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (sv)", "type": "mteb/amazon_massive_scenario", "config": "sv", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 42.293207800941495}, {"type": "f1", "value": 40.50409536806683}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB 
MassiveScenarioClassification (sw)", "type": "mteb/amazon_massive_scenario", "config": "sw", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 42.9993275050437}, {"type": "f1", "value": 41.045416224973266}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ta)", "type": "mteb/amazon_massive_scenario", "config": "ta", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 28.32548755884331}, {"type": "f1", "value": 27.276841995561867}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (te)", "type": "mteb/amazon_massive_scenario", "config": "te", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 26.593813046402154}, {"type": "f1", "value": 25.483878616197586}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (th)", "type": "mteb/amazon_massive_scenario", "config": "th", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 36.788836583725626}, {"type": "f1", "value": 34.603932909177686}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (tl)", "type": "mteb/amazon_massive_scenario", "config": "tl", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 42.5689307330195}, {"type": "f1", "value": 40.924469309079825}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (tr)", "type": "mteb/amazon_massive_scenario", "config": "tr", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 37.09482178883658}, {"type": "f1", "value": 37.949628822857164}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ur)", "type": "mteb/amazon_massive_scenario", "config": "ur", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 28.836583725622063}, {"type": "f1", "value": 27.806558655512344}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (vi)", "type": "mteb/amazon_massive_scenario", "config": "vi", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 37.357094821788834}, {"type": "f1", "value": 37.507918961038165}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (zh-CN)", "type": "mteb/amazon_massive_scenario", "config": "zh-CN", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 49.37794216543375}, {"type": "f1", "value": 47.20421153697707}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (zh-TW)", "type": "mteb/amazon_massive_scenario", "config": "zh-TW", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 44.42165433759248}, {"type": "f1", "value": 44.34741861198931}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P", "type": "mteb/medrxiv-clustering-p2p", "config": "default", "split": "test", "revision": "dcefc037ef84348e49b0d29109e891c01067226b"}, 
"metrics": [{"type": "v_measure", "value": 31.374938993074252}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "3cd0e71dfbe09d4de0f9e5ecba43e7ce280959dc"}, "metrics": [{"type": "v_measure", "value": 26.871455379644093}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "3bdac13927fdc888b903db93b2ffdbd90b295a69"}, "metrics": [{"type": "map", "value": 30.402396942935333}, {"type": "mrr", "value": 31.42600938803256}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus", "type": "nfcorpus", "config": "default", "split": "test", "revision": "7eb63cc0c1eb59324d709ebed25fcab851fa7610"}, "metrics": [{"type": "map_at_1", "value": 3.7740000000000005}, {"type": "map_at_10", "value": 7.614999999999999}, {"type": "map_at_100", "value": 9.574}, {"type": "map_at_1000", "value": 10.711}, {"type": "map_at_3", "value": 5.7540000000000004}, {"type": "map_at_5", "value": 6.6659999999999995}, {"type": "mrr_at_1", "value": 33.127}, {"type": "mrr_at_10", "value": 40.351}, {"type": "mrr_at_100", "value": 41.144}, {"type": "mrr_at_1000", "value": 41.202}, {"type": "mrr_at_3", "value": 38.029}, {"type": "mrr_at_5", "value": 39.190000000000005}, {"type": "ndcg_at_1", "value": 31.579}, {"type": "ndcg_at_10", "value": 22.792}, {"type": "ndcg_at_100", "value": 21.698999999999998}, {"type": "ndcg_at_1000", "value": 30.892999999999997}, {"type": "ndcg_at_3", "value": 26.828999999999997}, {"type": "ndcg_at_5", "value": 25.119000000000003}, {"type": "precision_at_1", "value": 33.127}, {"type": "precision_at_10", "value": 16.718}, {"type": "precision_at_100", "value": 5.7090000000000005}, {"type": "precision_at_1000", "value": 1.836}, {"type": "precision_at_3", "value": 24.768}, {"type": "precision_at_5", "value": 21.3}, {"type": "recall_at_1", "value": 3.7740000000000005}, {"type": "recall_at_10", "value": 10.302999999999999}, {"type": "recall_at_100", "value": 23.013}, {"type": "recall_at_1000", "value": 54.864999999999995}, {"type": "recall_at_3", "value": 6.554}, {"type": "recall_at_5", "value": 8.087}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ", "type": "nq", "config": "default", "split": "test", "revision": "6062aefc120bfe8ece5897809fb2e53bfe0d128c"}, "metrics": [{"type": "map_at_1", "value": 15.620999999999999}, {"type": "map_at_10", "value": 24.519}, {"type": "map_at_100", "value": 25.586}, {"type": "map_at_1000", "value": 25.662000000000003}, {"type": "map_at_3", "value": 21.619}, {"type": "map_at_5", "value": 23.232}, {"type": "mrr_at_1", "value": 17.497}, {"type": "mrr_at_10", "value": 26.301000000000002}, {"type": "mrr_at_100", "value": 27.235}, {"type": "mrr_at_1000", "value": 27.297}, {"type": "mrr_at_3", "value": 23.561}, {"type": "mrr_at_5", "value": 25.111}, {"type": "ndcg_at_1", "value": 17.497}, {"type": "ndcg_at_10", "value": 29.725}, {"type": "ndcg_at_100", "value": 34.824}, {"type": "ndcg_at_1000", "value": 36.907000000000004}, {"type": "ndcg_at_3", "value": 23.946}, {"type": "ndcg_at_5", "value": 26.739}, {"type": "precision_at_1", "value": 17.497}, {"type": "precision_at_10", "value": 5.2170000000000005}, {"type": "precision_at_100", "value": 0.8099999999999999}, {"type": "precision_at_1000", "value": 0.101}, {"type": "precision_at_3", "value": 11.114}, {"type": "precision_at_5", "value": 8.285}, {"type": "recall_at_1", "value": 
15.620999999999999}, {"type": "recall_at_10", "value": 43.999}, {"type": "recall_at_100", "value": 67.183}, {"type": "recall_at_1000", "value": 83.174}, {"type": "recall_at_3", "value": 28.720000000000002}, {"type": "recall_at_5", "value": 35.154}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval", "type": "quora", "config": "default", "split": "test", "revision": "6205996560df11e3a3da9ab4f926788fc30a7db4"}, "metrics": [{"type": "map_at_1", "value": 54.717000000000006}, {"type": "map_at_10", "value": 67.514}, {"type": "map_at_100", "value": 68.484}, {"type": "map_at_1000", "value": 68.523}, {"type": "map_at_3", "value": 64.169}, {"type": "map_at_5", "value": 66.054}, {"type": "mrr_at_1", "value": 62.46000000000001}, {"type": "mrr_at_10", "value": 71.503}, {"type": "mrr_at_100", "value": 71.91499999999999}, {"type": "mrr_at_1000", "value": 71.923}, {"type": "mrr_at_3", "value": 69.46799999999999}, {"type": "mrr_at_5", "value": 70.677}, {"type": "ndcg_at_1", "value": 62.480000000000004}, {"type": "ndcg_at_10", "value": 72.98}, {"type": "ndcg_at_100", "value": 76.023}, {"type": "ndcg_at_1000", "value": 76.512}, {"type": "ndcg_at_3", "value": 68.138}, {"type": "ndcg_at_5", "value": 70.458}, {"type": "precision_at_1", "value": 62.480000000000004}, {"type": "precision_at_10", "value": 11.373}, {"type": "precision_at_100", "value": 1.437}, {"type": "precision_at_1000", "value": 0.154}, {"type": "precision_at_3", "value": 29.622999999999998}, {"type": "precision_at_5", "value": 19.918}, {"type": "recall_at_1", "value": 54.717000000000006}, {"type": "recall_at_10", "value": 84.745}, {"type": "recall_at_100", "value": 96.528}, {"type": "recall_at_1000", "value": 99.39}, {"type": "recall_at_3", "value": 71.60600000000001}, {"type": "recall_at_5", "value": 77.511}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering", "type": "mteb/reddit-clustering", "config": "default", "split": "test", "revision": "b2805658ae38990172679479369a78b86de8c390"}, "metrics": [{"type": "v_measure", "value": 40.23390747226228}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P", "type": "mteb/reddit-clustering-p2p", "config": "default", "split": "test", "revision": "385e3cb46b4cfa89021f56c4380204149d0efe33"}, "metrics": [{"type": "v_measure", "value": 49.090518272935626}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS", "type": "scidocs", "config": "default", "split": "test", "revision": "5c59ef3e437a0a9651c8fe6fde943e7dce59fba5"}, "metrics": [{"type": "map_at_1", "value": 3.028}, {"type": "map_at_10", "value": 6.968000000000001}, {"type": "map_at_100", "value": 8.200000000000001}, {"type": "map_at_1000", "value": 8.432}, {"type": "map_at_3", "value": 5.3069999999999995}, {"type": "map_at_5", "value": 6.099}, {"type": "mrr_at_1", "value": 14.799999999999999}, {"type": "mrr_at_10", "value": 22.425}, {"type": "mrr_at_100", "value": 23.577}, {"type": "mrr_at_1000", "value": 23.669999999999998}, {"type": "mrr_at_3", "value": 20.233}, {"type": "mrr_at_5", "value": 21.318}, {"type": "ndcg_at_1", "value": 14.799999999999999}, {"type": "ndcg_at_10", "value": 12.206}, {"type": "ndcg_at_100", "value": 17.799}, {"type": "ndcg_at_1000", "value": 22.891000000000002}, {"type": "ndcg_at_3", "value": 12.128}, {"type": "ndcg_at_5", "value": 10.212}, {"type": "precision_at_1", "value": 14.799999999999999}, {"type": "precision_at_10", "value": 6.17}, {"type": "precision_at_100", "value": 1.428}, {"type": "precision_at_1000", "value": 
0.266}, {"type": "precision_at_3", "value": 11.333}, {"type": "precision_at_5", "value": 8.74}, {"type": "recall_at_1", "value": 3.028}, {"type": "recall_at_10", "value": 12.522}, {"type": "recall_at_100", "value": 28.975}, {"type": "recall_at_1000", "value": 54.038}, {"type": "recall_at_3", "value": 6.912999999999999}, {"type": "recall_at_5", "value": 8.883000000000001}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "20a6d6f312dd54037fe07a32d58e5e168867909d"}, "metrics": [{"type": "cos_sim_pearson", "value": 76.62983928119752}, {"type": "cos_sim_spearman", "value": 65.92910683118656}, {"type": "euclidean_pearson", "value": 71.10290039690963}, {"type": "euclidean_spearman", "value": 64.80076622426652}, {"type": "manhattan_pearson", "value": 70.8944726230188}, {"type": "manhattan_spearman", "value": 64.75082576033986}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "fdf84275bb8ce4b49c971d02e84dd1abc677a50f"}, "metrics": [{"type": "cos_sim_pearson", "value": 74.42679147085553}, {"type": "cos_sim_spearman", "value": 66.52980061546658}, {"type": "euclidean_pearson", "value": 74.87039477408763}, {"type": "euclidean_spearman", "value": 70.63397666902786}, {"type": "manhattan_pearson", "value": 74.97015137513088}, {"type": "manhattan_spearman", "value": 70.75951355434326}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", "type": "mteb/sts13-sts", "config": "default", "split": "test", "revision": "1591bfcbe8c69d4bf7fe2a16e2451017832cafb9"}, "metrics": [{"type": "cos_sim_pearson", "value": 75.62472426599543}, {"type": "cos_sim_spearman", "value": 76.1662886374236}, {"type": "euclidean_pearson", "value": 76.3297128081315}, {"type": "euclidean_spearman", "value": 77.19385151966563}, {"type": "manhattan_pearson", "value": 76.50363291423257}, {"type": "manhattan_spearman", "value": 77.37081896355399}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "e2125984e7df8b7871f6ae9949cf6b6795e7c54b"}, "metrics": [{"type": "cos_sim_pearson", "value": 74.48227705407035}, {"type": "cos_sim_spearman", "value": 69.04572664009687}, {"type": "euclidean_pearson", "value": 71.76138185714849}, {"type": "euclidean_spearman", "value": 68.93415452043307}, {"type": "manhattan_pearson", "value": 71.68010915543306}, {"type": "manhattan_spearman", "value": 68.99176321262806}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "1cd7298cac12a96a373b6a2f18738bb3e739a9b6"}, "metrics": [{"type": "cos_sim_pearson", "value": 78.1566527175902}, {"type": "cos_sim_spearman", "value": 79.23677712825851}, {"type": "euclidean_pearson", "value": 76.29138438696417}, {"type": "euclidean_spearman", "value": 77.20108266215374}, {"type": "manhattan_pearson", "value": 76.27464935799118}, {"type": "manhattan_spearman", "value": 77.15286174478099}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": "360a0b2dff98700d09e634a01e1cc1624d3e42cd"}, "metrics": [{"type": "cos_sim_pearson", "value": 75.068454465977}, {"type": "cos_sim_spearman", "value": 76.06792422441929}, {"type": "euclidean_pearson", "value": 70.64605440627699}, {"type": "euclidean_spearman", "value": 70.21776051117844}, {"type": 
"manhattan_pearson", "value": 70.32479295054918}, {"type": "manhattan_spearman", "value": 69.89782458638528}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (ko-ko)", "type": "mteb/sts17-crosslingual-sts", "config": "ko-ko", "split": "test", "revision": "9fc37e8c632af1c87a3d23e685d49552a02582a0"}, "metrics": [{"type": "cos_sim_pearson", "value": 39.43327289939437}, {"type": "cos_sim_spearman", "value": 52.386010275505654}, {"type": "euclidean_pearson", "value": 46.40999904885745}, {"type": "euclidean_spearman", "value": 51.00333465175934}, {"type": "manhattan_pearson", "value": 46.55753533133655}, {"type": "manhattan_spearman", "value": 51.07550440519388}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (ar-ar)", "type": "mteb/sts17-crosslingual-sts", "config": "ar-ar", "split": "test", "revision": "9fc37e8c632af1c87a3d23e685d49552a02582a0"}, "metrics": [{"type": "cos_sim_pearson", "value": 55.54431928210687}, {"type": "cos_sim_spearman", "value": 55.61674586076298}, {"type": "euclidean_pearson", "value": 58.07442713714088}, {"type": "euclidean_spearman", "value": 55.74066216931719}, {"type": "manhattan_pearson", "value": 57.84021675638542}, {"type": "manhattan_spearman", "value": 55.20365812536853}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-ar)", "type": "mteb/sts17-crosslingual-sts", "config": "en-ar", "split": "test", "revision": "9fc37e8c632af1c87a3d23e685d49552a02582a0"}, "metrics": [{"type": "cos_sim_pearson", "value": 11.378463868809098}, {"type": "cos_sim_spearman", "value": 8.209569244801065}, {"type": "euclidean_pearson", "value": 1.07041700730406}, {"type": "euclidean_spearman", "value": 2.2052197108931892}, {"type": "manhattan_pearson", "value": 0.7671300251104268}, {"type": "manhattan_spearman", "value": 3.430645020535567}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-de)", "type": "mteb/sts17-crosslingual-sts", "config": "en-de", "split": "test", "revision": "9fc37e8c632af1c87a3d23e685d49552a02582a0"}, "metrics": [{"type": "cos_sim_pearson", "value": 32.71403560929013}, {"type": "cos_sim_spearman", "value": 30.18181775929109}, {"type": "euclidean_pearson", "value": 25.57368595910298}, {"type": "euclidean_spearman", "value": 23.316649115731376}, {"type": "manhattan_pearson", "value": 24.144200325329614}, {"type": "manhattan_spearman", "value": 21.64621546338457}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "9fc37e8c632af1c87a3d23e685d49552a02582a0"}, "metrics": [{"type": "cos_sim_pearson", "value": 83.36340470799158}, {"type": "cos_sim_spearman", "value": 84.95398260629699}, {"type": "euclidean_pearson", "value": 80.69876969911644}, {"type": "euclidean_spearman", "value": 80.97451731130427}, {"type": "manhattan_pearson", "value": 80.65869354146945}, {"type": "manhattan_spearman", "value": 80.8540858718528}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-tr)", "type": "mteb/sts17-crosslingual-sts", "config": "en-tr", "split": "test", "revision": "9fc37e8c632af1c87a3d23e685d49552a02582a0"}, "metrics": [{"type": "cos_sim_pearson", "value": 1.9200044163754912}, {"type": "cos_sim_spearman", "value": 1.0393399782021342}, {"type": "euclidean_pearson", "value": 1.1376003191297994}, {"type": "euclidean_spearman", "value": 1.8947106671763914}, {"type": "manhattan_pearson", "value": 3.8362564474484335}, {"type": "manhattan_spearman", "value": 4.242750882792888}]}, {"task": {"type": "STS"}, 
"dataset": {"name": "MTEB STS17 (es-en)", "type": "mteb/sts17-crosslingual-sts", "config": "es-en", "split": "test", "revision": "9fc37e8c632af1c87a3d23e685d49552a02582a0"}, "metrics": [{"type": "cos_sim_pearson", "value": 26.561262451099577}, {"type": "cos_sim_spearman", "value": 28.776666666659906}, {"type": "euclidean_pearson", "value": 14.640410196999088}, {"type": "euclidean_spearman", "value": 16.10557011701786}, {"type": "manhattan_pearson", "value": 15.019405495911272}, {"type": "manhattan_spearman", "value": 15.37192083104197}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (es-es)", "type": "mteb/sts17-crosslingual-sts", "config": "es-es", "split": "test", "revision": "9fc37e8c632af1c87a3d23e685d49552a02582a0"}, "metrics": [{"type": "cos_sim_pearson", "value": 69.7544202001433}, {"type": "cos_sim_spearman", "value": 71.88444295144646}, {"type": "euclidean_pearson", "value": 73.84934185952773}, {"type": "euclidean_spearman", "value": 73.26911108021089}, {"type": "manhattan_pearson", "value": 74.04354196954574}, {"type": "manhattan_spearman", "value": 73.37650787943872}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (fr-en)", "type": "mteb/sts17-crosslingual-sts", "config": "fr-en", "split": "test", "revision": "9fc37e8c632af1c87a3d23e685d49552a02582a0"}, "metrics": [{"type": "cos_sim_pearson", "value": 27.70511842301491}, {"type": "cos_sim_spearman", "value": 26.339466714066447}, {"type": "euclidean_pearson", "value": 9.323158236506385}, {"type": "euclidean_spearman", "value": 7.32083231520273}, {"type": "manhattan_pearson", "value": 7.807399527573071}, {"type": "manhattan_spearman", "value": 5.525546663067113}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (it-en)", "type": "mteb/sts17-crosslingual-sts", "config": "it-en", "split": "test", "revision": "9fc37e8c632af1c87a3d23e685d49552a02582a0"}, "metrics": [{"type": "cos_sim_pearson", "value": 24.226521799447692}, {"type": "cos_sim_spearman", "value": 20.72992940458968}, {"type": "euclidean_pearson", "value": 6.753378617205011}, {"type": "euclidean_spearman", "value": 6.281654679029505}, {"type": "manhattan_pearson", "value": 7.087180250449323}, {"type": "manhattan_spearman", "value": 6.41611659259516}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (nl-en)", "type": "mteb/sts17-crosslingual-sts", "config": "nl-en", "split": "test", "revision": "9fc37e8c632af1c87a3d23e685d49552a02582a0"}, "metrics": [{"type": "cos_sim_pearson", "value": 29.131412364061234}, {"type": "cos_sim_spearman", "value": 25.053429612793547}, {"type": "euclidean_pearson", "value": 10.657141303962}, {"type": "euclidean_spearman", "value": 9.712124819778452}, {"type": "manhattan_pearson", "value": 12.481782693315688}, {"type": "manhattan_spearman", "value": 11.287958480905973}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 64.04750650962879}, {"type": "cos_sim_spearman", "value": 65.66183708171826}, {"type": "euclidean_pearson", "value": 66.90887604405887}, {"type": "euclidean_spearman", "value": 66.89814072484552}, {"type": "manhattan_pearson", "value": 67.31627110509089}, {"type": "manhattan_spearman", "value": 67.01048176165322}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (de)", "type": "mteb/sts22-crosslingual-sts", "config": "de", "split": "test", "revision": 
"2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 19.26519187000913}, {"type": "cos_sim_spearman", "value": 21.987647321429005}, {"type": "euclidean_pearson", "value": 17.850618752342946}, {"type": "euclidean_spearman", "value": 22.86669392885474}, {"type": "manhattan_pearson", "value": 18.16183594260708}, {"type": "manhattan_spearman", "value": 23.637510352837907}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (es)", "type": "mteb/sts22-crosslingual-sts", "config": "es", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 34.221261828226936}, {"type": "cos_sim_spearman", "value": 49.811823238907664}, {"type": "euclidean_pearson", "value": 44.50394399762147}, {"type": "euclidean_spearman", "value": 50.959184495072876}, {"type": "manhattan_pearson", "value": 45.83191034038624}, {"type": "manhattan_spearman", "value": 50.190409866117946}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (pl)", "type": "mteb/sts22-crosslingual-sts", "config": "pl", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 3.620381732096531}, {"type": "cos_sim_spearman", "value": 23.30843951799194}, {"type": "euclidean_pearson", "value": 0.965453312113125}, {"type": "euclidean_spearman", "value": 24.235967620790316}, {"type": "manhattan_pearson", "value": 1.4408922275701606}, {"type": "manhattan_spearman", "value": 25.161920137046096}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (tr)", "type": "mteb/sts22-crosslingual-sts", "config": "tr", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 16.69489628726267}, {"type": "cos_sim_spearman", "value": 34.66348380997687}, {"type": "euclidean_pearson", "value": 29.415825529188606}, {"type": "euclidean_spearman", "value": 38.33011033170646}, {"type": "manhattan_pearson", "value": 31.23273195263394}, {"type": "manhattan_spearman", "value": 39.10055785755795}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (ar)", "type": "mteb/sts22-crosslingual-sts", "config": "ar", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 9.134927430889528}, {"type": "cos_sim_spearman", "value": 28.18922448944151}, {"type": "euclidean_pearson", "value": 19.86814169549051}, {"type": "euclidean_spearman", "value": 27.519588644948627}, {"type": "manhattan_pearson", "value": 21.80949221238945}, {"type": "manhattan_spearman", "value": 28.25217200494078}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (ru)", "type": "mteb/sts22-crosslingual-sts", "config": "ru", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 3.6386482942352085}, {"type": "cos_sim_spearman", "value": 9.068119621940966}, {"type": "euclidean_pearson", "value": 0.8123129118737714}, {"type": "euclidean_spearman", "value": 9.173672890166147}, {"type": "manhattan_pearson", "value": 0.754518899822658}, {"type": "manhattan_spearman", "value": 8.431719541986524}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (zh)", "type": "mteb/sts22-crosslingual-sts", "config": "zh", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 2.972091574908432}, {"type": "cos_sim_spearman", "value": 25.48511383289232}, 
{"type": "euclidean_pearson", "value": 12.751569670148918}, {"type": "euclidean_spearman", "value": 24.940721642439286}, {"type": "manhattan_pearson", "value": 14.310238482989826}, {"type": "manhattan_spearman", "value": 24.69821216148647}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (fr)", "type": "mteb/sts22-crosslingual-sts", "config": "fr", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 54.4745185734135}, {"type": "cos_sim_spearman", "value": 67.66493409568727}, {"type": "euclidean_pearson", "value": 60.13580336797049}, {"type": "euclidean_spearman", "value": 66.12319300814538}, {"type": "manhattan_pearson", "value": 60.816210368708155}, {"type": "manhattan_spearman", "value": 65.70010026716766}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (de-en)", "type": "mteb/sts22-crosslingual-sts", "config": "de-en", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 49.37865412588201}, {"type": "cos_sim_spearman", "value": 53.07135629778897}, {"type": "euclidean_pearson", "value": 49.29201416711091}, {"type": "euclidean_spearman", "value": 50.54523702399645}, {"type": "manhattan_pearson", "value": 51.265764141268534}, {"type": "manhattan_spearman", "value": 51.979086403193605}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (es-en)", "type": "mteb/sts22-crosslingual-sts", "config": "es-en", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 44.925652392562135}, {"type": "cos_sim_spearman", "value": 49.51253904767726}, {"type": "euclidean_pearson", "value": 48.79346518897415}, {"type": "euclidean_spearman", "value": 51.47957870101565}, {"type": "manhattan_pearson", "value": 49.51314553898044}, {"type": "manhattan_spearman", "value": 51.895207893189166}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (it)", "type": "mteb/sts22-crosslingual-sts", "config": "it", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 45.241690321111875}, {"type": "cos_sim_spearman", "value": 48.24795739512037}, {"type": "euclidean_pearson", "value": 49.22719494399897}, {"type": "euclidean_spearman", "value": 49.64102442042809}, {"type": "manhattan_pearson", "value": 49.497887732970256}, {"type": "manhattan_spearman", "value": 49.940515338096304}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (pl-en)", "type": "mteb/sts22-crosslingual-sts", "config": "pl-en", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 36.42138324083909}, {"type": "cos_sim_spearman", "value": 36.79867489417801}, {"type": "euclidean_pearson", "value": 27.760612942610084}, {"type": "euclidean_spearman", "value": 29.140966500287625}, {"type": "manhattan_pearson", "value": 28.456674031350115}, {"type": "manhattan_spearman", "value": 27.46356370924497}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (zh-en)", "type": "mteb/sts22-crosslingual-sts", "config": "zh-en", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 26.55350664089358}, {"type": "cos_sim_spearman", "value": 28.681707196975008}, {"type": "euclidean_pearson", "value": 12.613577889195138}, {"type": "euclidean_spearman", "value": 13.589493311702933}, {"type": "manhattan_pearson", 
"value": 11.640157427420958}, {"type": "manhattan_spearman", "value": 10.345223941212415}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (es-it)", "type": "mteb/sts22-crosslingual-sts", "config": "es-it", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 38.54682179114309}, {"type": "cos_sim_spearman", "value": 45.782560880405704}, {"type": "euclidean_pearson", "value": 46.496857002368486}, {"type": "euclidean_spearman", "value": 48.21270426410012}, {"type": "manhattan_pearson", "value": 46.871839119374044}, {"type": "manhattan_spearman", "value": 47.556987773851525}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (de-fr)", "type": "mteb/sts22-crosslingual-sts", "config": "de-fr", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 35.12956772546032}, {"type": "cos_sim_spearman", "value": 32.96920218281008}, {"type": "euclidean_pearson", "value": 34.23140384382136}, {"type": "euclidean_spearman", "value": 32.19303153191447}, {"type": "manhattan_pearson", "value": 34.189468276600635}, {"type": "manhattan_spearman", "value": 34.887065709732376}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (de-pl)", "type": "mteb/sts22-crosslingual-sts", "config": "de-pl", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 30.507667380509634}, {"type": "cos_sim_spearman", "value": 20.447284723752716}, {"type": "euclidean_pearson", "value": 29.662041381794474}, {"type": "euclidean_spearman", "value": 20.939990379746757}, {"type": "manhattan_pearson", "value": 32.5112080506328}, {"type": "manhattan_spearman", "value": 23.773047901712495}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (fr-pl)", "type": "mteb/sts22-crosslingual-sts", "config": "fr-pl", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 71.10820459712156}, {"type": "cos_sim_spearman", "value": 61.97797868009122}, {"type": "euclidean_pearson", "value": 60.30910689156633}, {"type": "euclidean_spearman", "value": 61.97797868009122}, {"type": "manhattan_pearson", "value": 66.3405176964038}, {"type": "manhattan_spearman", "value": 61.97797868009122}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmark", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "8913289635987208e6e7c72789e4be2fe94b6abd"}, "metrics": [{"type": "cos_sim_pearson", "value": 76.53032504460737}, {"type": "cos_sim_spearman", "value": 75.33716094627373}, {"type": "euclidean_pearson", "value": 69.64662673290599}, {"type": "euclidean_spearman", "value": 67.30188896368857}, {"type": "manhattan_pearson", "value": 69.45096082050807}, {"type": "manhattan_spearman", "value": 67.0718727259371}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": "56a6d0140cf6356659e2a7c1413286a774468d44"}, "metrics": [{"type": "map", "value": 71.33941904192648}, {"type": "mrr", "value": 89.73766429648782}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact", "type": "scifact", "config": "default", "split": "test", "revision": "a75ae049398addde9b70f6b268875f5cbce99089"}, "metrics": [{"type": "map_at_1", "value": 43.333}, {"type": "map_at_10", "value": 52.364}, {"type": "map_at_100", "value": 53.184}, 
{"type": "map_at_1000", "value": 53.234}, {"type": "map_at_3", "value": 49.832}, {"type": "map_at_5", "value": 51.244}, {"type": "mrr_at_1", "value": 45.333}, {"type": "mrr_at_10", "value": 53.455}, {"type": "mrr_at_100", "value": 54.191}, {"type": "mrr_at_1000", "value": 54.235}, {"type": "mrr_at_3", "value": 51.556000000000004}, {"type": "mrr_at_5", "value": 52.622}, {"type": "ndcg_at_1", "value": 45.333}, {"type": "ndcg_at_10", "value": 56.899}, {"type": "ndcg_at_100", "value": 60.702}, {"type": "ndcg_at_1000", "value": 62.046}, {"type": "ndcg_at_3", "value": 52.451}, {"type": "ndcg_at_5", "value": 54.534000000000006}, {"type": "precision_at_1", "value": 45.333}, {"type": "precision_at_10", "value": 7.8}, {"type": "precision_at_100", "value": 0.987}, {"type": "precision_at_1000", "value": 0.11}, {"type": "precision_at_3", "value": 20.778}, {"type": "precision_at_5", "value": 13.866999999999999}, {"type": "recall_at_1", "value": 43.333}, {"type": "recall_at_10", "value": 69.69999999999999}, {"type": "recall_at_100", "value": 86.9}, {"type": "recall_at_1000", "value": 97.6}, {"type": "recall_at_3", "value": 57.81699999999999}, {"type": "recall_at_5", "value": 62.827999999999996}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "5a8256d0dff9c4bd3be3ba3e67e4e70173f802ea"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.7}, {"type": "cos_sim_ap", "value": 89.88577913120001}, {"type": "cos_sim_f1", "value": 84.62694041061593}, {"type": "cos_sim_precision", "value": 84.7542627883651}, {"type": "cos_sim_recall", "value": 84.5}, {"type": "dot_accuracy", "value": 99.24752475247524}, {"type": "dot_ap", "value": 56.81855467290009}, {"type": "dot_f1", "value": 56.084126189283936}, {"type": "dot_precision", "value": 56.16850551654965}, {"type": "dot_recall", "value": 56.00000000000001}, {"type": "euclidean_accuracy", "value": 99.7059405940594}, {"type": "euclidean_ap", "value": 90.12451226491524}, {"type": "euclidean_f1", "value": 84.44211629125196}, {"type": "euclidean_precision", "value": 88.66886688668868}, {"type": "euclidean_recall", "value": 80.60000000000001}, {"type": "manhattan_accuracy", "value": 99.7128712871287}, {"type": "manhattan_ap", "value": 90.67590584183216}, {"type": "manhattan_f1", "value": 84.85436893203884}, {"type": "manhattan_precision", "value": 82.45283018867924}, {"type": "manhattan_recall", "value": 87.4}, {"type": "max_accuracy", "value": 99.7128712871287}, {"type": "max_ap", "value": 90.67590584183216}, {"type": "max_f1", "value": 84.85436893203884}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClustering", "type": "mteb/stackexchange-clustering", "config": "default", "split": "test", "revision": "70a89468f6dccacc6aa2b12a6eac54e74328f235"}, "metrics": [{"type": "v_measure", "value": 52.74481093815175}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P", "type": "mteb/stackexchange-clustering-p2p", "config": "default", "split": "test", "revision": "d88009ab563dd0b16cfaf4436abaf97fa3550cf0"}, "metrics": [{"type": "v_measure", "value": 32.65999453562101}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions", "type": "mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "ef807ea29a75ec4f91b50fd4191cb4ee4589a9f9"}, "metrics": [{"type": "map", "value": 44.74498464555465}, 
{"type": "mrr", "value": 45.333879764026825}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEval", "type": "mteb/summeval", "config": "default", "split": "test", "revision": "8753c2788d36c01fc6f05d03fe3f7268d63f9122"}, "metrics": [{"type": "cos_sim_pearson", "value": "29,603788751645216"}, {"type": "cos_sim_spearman", "value": 29.705103354786033}, {"type": "dot_pearson", "value": 28.07425338095399}, {"type": "dot_spearman", "value": 26.841406359135366}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID", "type": "trec-covid", "config": "default", "split": "test", "revision": "2c8041b2c07a79b6f7ba8fe6acc72e5d9f92d217"}, "metrics": [{"type": "map_at_1", "value": 0.241}, {"type": "map_at_10", "value": 1.672}, {"type": "map_at_100", "value": 7.858999999999999}, {"type": "map_at_1000", "value": 17.616}, {"type": "map_at_3", "value": 0.631}, {"type": "map_at_5", "value": 0.968}, {"type": "mrr_at_1", "value": 90.0}, {"type": "mrr_at_10", "value": 92.952}, {"type": "mrr_at_100", "value": 93.036}, {"type": "mrr_at_1000", "value": 93.036}, {"type": "mrr_at_3", "value": 92.667}, {"type": "mrr_at_5", "value": 92.667}, {"type": "ndcg_at_1", "value": 83.0}, {"type": "ndcg_at_10", "value": 70.30199999999999}, {"type": "ndcg_at_100", "value": 48.149}, {"type": "ndcg_at_1000", "value": 40.709}, {"type": "ndcg_at_3", "value": 79.173}, {"type": "ndcg_at_5", "value": 75.347}, {"type": "precision_at_1", "value": 90.0}, {"type": "precision_at_10", "value": 72.6}, {"type": "precision_at_100", "value": 48.46}, {"type": "precision_at_1000", "value": 18.093999999999998}, {"type": "precision_at_3", "value": 84.0}, {"type": "precision_at_5", "value": 78.8}, {"type": "recall_at_1", "value": 0.241}, {"type": "recall_at_10", "value": 1.814}, {"type": "recall_at_100", "value": 11.141}, {"type": "recall_at_1000", "value": 37.708999999999996}, {"type": "recall_at_3", "value": 0.647}, {"type": "recall_at_5", "value": 1.015}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Touche2020", "type": "webis-touche2020", "config": "default", "split": "test", "revision": "527b7d77e16e343303e68cb6af11d6e18b9f7b3b"}, "metrics": [{"type": "map_at_1", "value": 2.782}, {"type": "map_at_10", "value": 9.06}, {"type": "map_at_100", "value": 14.571000000000002}, {"type": "map_at_1000", "value": 16.006999999999998}, {"type": "map_at_3", "value": 5.037}, {"type": "map_at_5", "value": 6.63}, {"type": "mrr_at_1", "value": 34.694}, {"type": "mrr_at_10", "value": 48.243}, {"type": "mrr_at_100", "value": 49.065}, {"type": "mrr_at_1000", "value": 49.065}, {"type": "mrr_at_3", "value": 44.897999999999996}, {"type": "mrr_at_5", "value": 46.428999999999995}, {"type": "ndcg_at_1", "value": 31.633}, {"type": "ndcg_at_10", "value": 22.972}, {"type": "ndcg_at_100", "value": 34.777}, {"type": "ndcg_at_1000", "value": 45.639}, {"type": "ndcg_at_3", "value": 26.398}, {"type": "ndcg_at_5", "value": 24.418}, {"type": "precision_at_1", "value": 34.694}, {"type": "precision_at_10", "value": 19.796}, {"type": "precision_at_100", "value": 7.224}, {"type": "precision_at_1000", "value": 1.4449999999999998}, {"type": "precision_at_3", "value": 26.531}, {"type": "precision_at_5", "value": 23.265}, {"type": "recall_at_1", "value": 2.782}, {"type": "recall_at_10", "value": 14.841}, {"type": "recall_at_100", "value": 44.86}, {"type": "recall_at_1000", "value": 78.227}, {"type": "recall_at_3", "value": 5.959}, {"type": "recall_at_5", "value": 8.969000000000001}]}, {"task": {"type": "Classification"}, "dataset": {"name": 
"MTEB ToxicConversationsClassification", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "edfaf9da55d3dd50d43143d90c1ac476895ae6de"}, "metrics": [{"type": "accuracy", "value": 62.657999999999994}, {"type": "ap", "value": 10.96353161716344}, {"type": "f1", "value": 48.294226423442645}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": "62146448f05be9e52a36b8ee9936447ea787eede"}, "metrics": [{"type": "accuracy", "value": 52.40803621958121}, {"type": "f1", "value": 52.61009636022186}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "091a54f9a36281ce7d6590ec8c75dd485e7e01d4"}, "metrics": [{"type": "v_measure", "value": 32.12697126747911}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 80.69976753889253}, {"type": "cos_sim_ap", "value": 54.74680676121268}, {"type": "cos_sim_f1", "value": 53.18923998590391}, {"type": "cos_sim_precision", "value": 47.93563413084904}, {"type": "cos_sim_recall", "value": 59.73614775725594}, {"type": "dot_accuracy", "value": 79.3348036001669}, {"type": "dot_ap", "value": 48.46902128933627}, {"type": "dot_f1", "value": 50.480109739369006}, {"type": "dot_precision", "value": 42.06084051345173}, {"type": "dot_recall", "value": 63.113456464379944}, {"type": "euclidean_accuracy", "value": 79.78780473266973}, {"type": "euclidean_ap", "value": 50.258327255164815}, {"type": "euclidean_f1", "value": 49.655838666827684}, {"type": "euclidean_precision", "value": 45.78044978846582}, {"type": "euclidean_recall", "value": 54.24802110817942}, {"type": "manhattan_accuracy", "value": 79.76992310901831}, {"type": "manhattan_ap", "value": 49.89892485714363}, {"type": "manhattan_f1", "value": 49.330433787341185}, {"type": "manhattan_precision", "value": 43.56175459874672}, {"type": "manhattan_recall", "value": 56.86015831134564}, {"type": "max_accuracy", "value": 80.69976753889253}, {"type": "max_ap", "value": 54.74680676121268}, {"type": "max_f1", "value": 53.18923998590391}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": [{"type": "cos_sim_accuracy", "value": 86.90573213800597}, {"type": "cos_sim_ap", "value": 81.05760818661524}, {"type": "cos_sim_f1", "value": 73.64688856729379}, {"type": "cos_sim_precision", "value": 69.46491946491946}, {"type": "cos_sim_recall", "value": 78.3646442870342}, {"type": "dot_accuracy", "value": 83.80680715644041}, {"type": "dot_ap", "value": 72.49774005947461}, {"type": "dot_f1", "value": 68.68460650173216}, {"type": "dot_precision", "value": 62.954647507858105}, {"type": "dot_recall", "value": 75.56205728364644}, {"type": "euclidean_accuracy", "value": 85.97430822369697}, {"type": "euclidean_ap", "value": 78.86101740829326}, {"type": "euclidean_f1", "value": 71.07960824663695}, {"type": "euclidean_precision", "value": 70.36897306270279}, {"type": "euclidean_recall", "value": 71.8047428395442}, 
{"type": "manhattan_accuracy", "value": 85.94132029339853}, {"type": "manhattan_ap", "value": 78.77876711171923}, {"type": "manhattan_f1", "value": 71.07869075515912}, {"type": "manhattan_precision", "value": 69.80697847067557}, {"type": "manhattan_recall", "value": 72.39759778256852}, {"type": "max_accuracy", "value": 86.90573213800597}, {"type": "max_ap", "value": 81.05760818661524}, {"type": "max_f1", "value": 73.64688856729379}]}]}]} | Muennighoff/SGPT-125M-weightedmean-msmarco-specb-bitfit | null | [
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"mteb",
"arxiv:2202.08904",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #mteb #arxiv-2202.08904 #model-index #endpoints_compatible #has_space #region-us
|
# SGPT-125M-weightedmean-msmarco-specb-bitfit
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to the eval folder or our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 15600 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SGPT-125M-weightedmean-msmarco-specb-bitfit",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to the eval folder or our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 15600 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #mteb #arxiv-2202.08904 #model-index #endpoints_compatible #has_space #region-us \n",
"# SGPT-125M-weightedmean-msmarco-specb-bitfit",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to the eval folder or our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 15600 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# SGPT-125M-weightedmean-msmarco-specb-bitfitwte
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
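Below is a minimal usage sketch, assuming the checkpoint loads directly with the `sentence-transformers` library (as the repository tags suggest). The example sentences are placeholders, and the query/document bracket formatting that the specb variants expect is documented in the SGPT codebase and omitted here.

```python
from sentence_transformers import SentenceTransformer

# Hedged sketch: load the published checkpoint and embed two placeholder sentences.
model = SentenceTransformer("Muennighoff/SGPT-125M-weightedmean-msmarco-specb-bitfitwte")

sentences = [
    "How do I bake sourdough bread?",
    "A sourdough starter needs several days of feeding before it is ready.",
]
embeddings = model.encode(sentences)  # numpy array of shape (2, 768) for this 768-dimensional model
print(embeddings.shape)
```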
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the following parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 15600 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit() method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 0.0002
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | Muennighoff/SGPT-125M-weightedmean-msmarco-specb-bitfitwte | null | [
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"arxiv:2202.08904",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us
|
# SGPT-125M-weightedmean-msmarco-specb-bitfitwte
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 15600 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SGPT-125M-weightedmean-msmarco-specb-bitfitwte",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 15600 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us \n",
"# SGPT-125M-weightedmean-msmarco-specb-bitfitwte",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 15600 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# SGPT-125M-weightedmean-msmarco-specb
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the following parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 15600 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit() method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
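To make these settings concrete, here is a hedged sketch of a comparable fine-tuning run using the classic `model.fit()` API of sentence-transformers. The two training pairs are hypothetical stand-ins for the MS MARCO query–passage pairs actually used, the output path is an arbitrary choice, and starting from this published checkpoint is only for illustration (the original run started from a GPT-Neo 125M base with weighted-mean pooling).

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Hypothetical query–passage pairs standing in for the real MS MARCO training data.
train_examples = [
    InputExample(texts=["what is a sentence embedding", "A sentence embedding maps a text to a fixed-size vector."]),
    InputExample(texts=["capital of france", "Paris is the capital and largest city of France."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)

model = SentenceTransformer("Muennighoff/SGPT-125M-weightedmean-msmarco-specb")  # illustrative starting point
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)  # cosine similarity is the default

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=1000,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
    scheduler="WarmupLinear",
    output_path="output/sgpt-msmarco-sketch",  # assumption: any writable local path
)
```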
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | Muennighoff/SGPT-125M-weightedmean-msmarco-specb | null | [
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"arxiv:2202.08904",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us
|
# SGPT-125M-weightedmean-msmarco-specb
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 15600 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SGPT-125M-weightedmean-msmarco-specb",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 15600 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us \n",
"# SGPT-125M-weightedmean-msmarco-specb",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 15600 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# SGPT-125M-weightedmean-msmarco
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the following parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 15600 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit() method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
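The `pooling_mode_weightedmean_tokens: True` entry above corresponds to position-weighted mean pooling, where later tokens receive linearly larger weights as described in the SGPT paper. The sketch below shows one way to reproduce that pooling step with plain `transformers` and `torch`; the helper function is our own and is not the authors' implementation.

```python
import torch
from transformers import AutoModel, AutoTokenizer

def weighted_mean_pool(hidden_states, attention_mask):
    # Position-weighted mean: weight token i by i (1-indexed), masking out padding.
    weights = torch.arange(1, hidden_states.size(1) + 1, device=hidden_states.device).float()
    weights = weights.unsqueeze(0).unsqueeze(-1) * attention_mask.unsqueeze(-1).float()
    return (hidden_states * weights).sum(dim=1) / weights.sum(dim=1)

tokenizer = AutoTokenizer.from_pretrained("Muennighoff/SGPT-125M-weightedmean-msmarco")
model = AutoModel.from_pretrained("Muennighoff/SGPT-125M-weightedmean-msmarco")
if tokenizer.pad_token is None:  # GPT-style tokenizers often lack a pad token
    tokenizer.pad_token = tokenizer.eos_token

batch = tokenizer(
    ["A query about semantic search.", "A slightly longer passage about semantic search systems."],
    padding=True, truncation=True, max_length=300, return_tensors="pt",
)
with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq_len, 768)
embeddings = weighted_mean_pool(hidden, batch["attention_mask"])  # (batch, 768)
print(embeddings.shape)
```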
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | Muennighoff/SGPT-125M-weightedmean-msmarco | null | [
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"arxiv:2202.08904",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us
|
# SGPT-125M-weightedmean-msmarco
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 15600 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SGPT-125M-weightedmean-msmarco",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 15600 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us \n",
"# SGPT-125M-weightedmean-msmarco",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 15600 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# SGPT-125M-weightedmean-nli-bitfit-linearthenpool1-noact
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
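As a hedged usage sketch with made-up documents, the snippet below embeds a query and a tiny corpus with this checkpoint and ranks the corpus by cosine similarity; it assumes a sentence-transformers version that provides `util.cos_sim`.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Muennighoff/SGPT-125M-weightedmean-nli-bitfit-linearthenpool1-noact")

query = "A man is playing a guitar."
docs = [
    "Someone is strumming an acoustic guitar on stage.",
    "A chef is chopping vegetables in a kitchen.",
]

query_emb = model.encode(query, convert_to_tensor=True)
doc_embs = model.encode(docs, convert_to_tensor=True)

# Rank the documents by cosine similarity to the query.
scores = util.cos_sim(query_emb, doc_embs)[0]
for doc, score in sorted(zip(docs, scores.tolist()), key=lambda x: x[1], reverse=True):
    print(f"{score:.3f}  {doc}")
```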
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the following parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8807 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit() method:
```
{
"epochs": 1,
"evaluation_steps": 880,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 0.0002
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 881,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Dense({'in_features': 768, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity', 'key_name': 'token_embeddings'})
(2): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | Muennighoff/SGPT-125M-weightedmean-nli-bitfit-linearthenpool1-noact | null | [
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"arxiv:2202.08904",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us
|
# SGPT-125M-weightedmean-nli-bitfit-linearthenpool1-noact
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SGPT-125M-weightedmean-nli-bitfit-linearthenpool1-noact",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us \n",
"# SGPT-125M-weightedmean-nli-bitfit-linearthenpool1-noact",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# SGPT-125M-weightedmean-nli-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to the eval folder or our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the following parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8807 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit() method:
```
{
"epochs": 1,
"evaluation_steps": 880,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 0.0002
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 881,
"weight_decay": 0.01
}
```
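The "bitfit" suffix in the model name refers to BitFit-style training, in which only bias terms are updated while all other weights stay frozen. As a rough illustration of that idea (not the authors' training script), the sketch below freezes everything except bias parameters on an illustrative GPT-Neo base before any fine-tuning would start.

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("EleutherAI/gpt-neo-125M")  # illustrative base model

# BitFit-style setup: keep gradients only for bias terms, freeze everything else.
trainable, frozen = 0, 0
for name, param in model.named_parameters():
    if name.endswith("bias"):
        param.requires_grad = True
        trainable += param.numel()
    else:
        param.requires_grad = False
        frozen += param.numel()

print(f"trainable bias parameters: {trainable:,} | frozen parameters: {frozen:,}")
```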
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb"], "pipeline_tag": "sentence-similarity", "model-index": [{"name": "SGPT-125M-weightedmean-nli-bitfit", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "mteb/amazon_counterfactual", "config": "en", "split": "test", "revision": "2d8a100785abf0ae21420d2a55b0c56e3e1ea996"}, "metrics": [{"type": "accuracy", "value": 65.88059701492537}, {"type": "ap", "value": 28.685493163579785}, {"type": "f1", "value": 59.79951005816335}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (de)", "type": "mteb/amazon_counterfactual", "config": "de", "split": "test", "revision": "2d8a100785abf0ae21420d2a55b0c56e3e1ea996"}, "metrics": [{"type": "accuracy", "value": 59.07922912205568}, {"type": "ap", "value": 73.91887421019034}, {"type": "f1", "value": 56.6316368658711}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en-ext)", "type": "mteb/amazon_counterfactual", "config": "en-ext", "split": "test", "revision": "2d8a100785abf0ae21420d2a55b0c56e3e1ea996"}, "metrics": [{"type": "accuracy", "value": 64.91754122938531}, {"type": "ap", "value": 16.360681214864226}, {"type": "f1", "value": 53.126592061523766}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (ja)", "type": "mteb/amazon_counterfactual", "config": "ja", "split": "test", "revision": "2d8a100785abf0ae21420d2a55b0c56e3e1ea996"}, "metrics": [{"type": "accuracy", "value": 56.423982869378996}, {"type": "ap", "value": 12.143003571907899}, {"type": "f1", "value": 45.76363777987471}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonPolarityClassification", "type": "mteb/amazon_polarity", "config": "default", "split": "test", "revision": "80714f8dcf8cefc218ef4f8c5a966dd83f75a0e1"}, "metrics": [{"type": "accuracy", "value": 74.938225}, {"type": "ap", "value": 69.58187110320567}, {"type": "f1", "value": 74.72744058439321}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (en)", "type": "mteb/amazon_reviews_multi", "config": "en", "split": "test", "revision": "c379a6705fec24a2493fa68e011692605f44e119"}, "metrics": [{"type": "accuracy", "value": 35.098}, {"type": "f1", "value": 34.73265651435726}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (de)", "type": "mteb/amazon_reviews_multi", "config": "de", "split": "test", "revision": "c379a6705fec24a2493fa68e011692605f44e119"}, "metrics": [{"type": "accuracy", "value": 24.516}, {"type": "f1", "value": 24.21748200448397}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (es)", "type": "mteb/amazon_reviews_multi", "config": "es", "split": "test", "revision": "c379a6705fec24a2493fa68e011692605f44e119"}, "metrics": [{"type": "accuracy", "value": 29.097999999999995}, {"type": "f1", "value": 28.620040162757093}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (fr)", "type": "mteb/amazon_reviews_multi", "config": "fr", "split": "test", "revision": "c379a6705fec24a2493fa68e011692605f44e119"}, "metrics": [{"type": "accuracy", "value": 27.395999999999997}, {"type": "f1", "value": 27.146888644986284}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (ja)", "type": 
"mteb/amazon_reviews_multi", "config": "ja", "split": "test", "revision": "c379a6705fec24a2493fa68e011692605f44e119"}, "metrics": [{"type": "accuracy", "value": 21.724}, {"type": "f1", "value": 21.37230564276654}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (zh)", "type": "mteb/amazon_reviews_multi", "config": "zh", "split": "test", "revision": "c379a6705fec24a2493fa68e011692605f44e119"}, "metrics": [{"type": "accuracy", "value": 23.976}, {"type": "f1", "value": 23.741137981755482}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna", "type": "arguana", "config": "default", "split": "test", "revision": "5b3e3697907184a9b77a3c99ee9ea1a9cbb1e4e3"}, "metrics": [{"type": "map_at_1", "value": 13.442000000000002}, {"type": "map_at_10", "value": 24.275}, {"type": "map_at_100", "value": 25.588}, {"type": "map_at_1000", "value": 25.659}, {"type": "map_at_3", "value": 20.092}, {"type": "map_at_5", "value": 22.439999999999998}, {"type": "ndcg_at_1", "value": 13.442000000000002}, {"type": "ndcg_at_10", "value": 31.04}, {"type": "ndcg_at_100", "value": 37.529}, {"type": "ndcg_at_1000", "value": 39.348}, {"type": "ndcg_at_3", "value": 22.342000000000002}, {"type": "ndcg_at_5", "value": 26.595999999999997}, {"type": "precision_at_1", "value": 13.442000000000002}, {"type": "precision_at_10", "value": 5.299}, {"type": "precision_at_100", "value": 0.836}, {"type": "precision_at_1000", "value": 0.098}, {"type": "precision_at_3", "value": 9.625}, {"type": "precision_at_5", "value": 7.852}, {"type": "recall_at_1", "value": 13.442000000000002}, {"type": "recall_at_10", "value": 52.986999999999995}, {"type": "recall_at_100", "value": 83.64200000000001}, {"type": "recall_at_1000", "value": 97.795}, {"type": "recall_at_3", "value": 28.876}, {"type": "recall_at_5", "value": 39.26}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringP2P", "type": "mteb/arxiv-clustering-p2p", "config": "default", "split": "test", "revision": "0bbdb47bcbe3a90093699aefeed338a0f28a7ee8"}, "metrics": [{"type": "v_measure", "value": 34.742482477870766}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringS2S", "type": "mteb/arxiv-clustering-s2s", "config": "default", "split": "test", "revision": "b73bd54100e5abfa6e3a23dcafb46fe4d2438dc3"}, "metrics": [{"type": "v_measure", "value": 24.67870651472156}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BlurbsClusteringS2S", "type": "slvnwhrl/blurbs-clustering-s2s", "config": "default", "split": "test", "revision": "9bfff9a7f8f6dc6ffc9da71c48dd48b68696471d"}, "metrics": [{"type": "v_measure", "value": 8.00311862863495}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB AskUbuntuDupQuestions", "type": "mteb/askubuntudupquestions-reranking", "config": "default", "split": "test", "revision": "4d853f94cd57d85ec13805aeeac3ae3e5eb4c49c"}, "metrics": [{"type": "map", "value": 52.63439984994702}, {"type": "mrr", "value": 65.75704612408214}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB BIOSSES", "type": "mteb/biosses-sts", "config": "default", "split": "test", "revision": "9ee918f184421b6bd48b78f6c714d86546106103"}, "metrics": [{"type": "cos_sim_pearson", "value": 72.78000135012542}, {"type": "cos_sim_spearman", "value": 70.92812216947605}, {"type": "euclidean_pearson", "value": 77.1169214949292}, {"type": "euclidean_spearman", "value": 77.10175681583313}, {"type": "manhattan_pearson", "value": 76.84527031837595}, {"type": "manhattan_spearman", "value": 
77.0704308008438}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB BUCC (de-en)", "type": "mteb/bucc-bitext-mining", "config": "de-en", "split": "test", "revision": "d51519689f32196a32af33b075a01d0e7c51e252"}, "metrics": [{"type": "accuracy", "value": 1.0960334029227559}, {"type": "f1", "value": 1.0925539318023658}, {"type": "precision", "value": 1.0908141962421711}, {"type": "recall", "value": 1.0960334029227559}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB BUCC (fr-en)", "type": "mteb/bucc-bitext-mining", "config": "fr-en", "split": "test", "revision": "d51519689f32196a32af33b075a01d0e7c51e252"}, "metrics": [{"type": "accuracy", "value": 0.02201188641866608}, {"type": "f1", "value": 0.02201188641866608}, {"type": "precision", "value": 0.02201188641866608}, {"type": "recall", "value": 0.02201188641866608}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB BUCC (ru-en)", "type": "mteb/bucc-bitext-mining", "config": "ru-en", "split": "test", "revision": "d51519689f32196a32af33b075a01d0e7c51e252"}, "metrics": [{"type": "accuracy", "value": 0.0}, {"type": "f1", "value": 0.0}, {"type": "precision", "value": 0.0}, {"type": "recall", "value": 0.0}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB BUCC (zh-en)", "type": "mteb/bucc-bitext-mining", "config": "zh-en", "split": "test", "revision": "d51519689f32196a32af33b075a01d0e7c51e252"}, "metrics": [{"type": "accuracy", "value": 0.0}, {"type": "f1", "value": 0.0}, {"type": "precision", "value": 0.0}, {"type": "recall", "value": 0.0}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Banking77Classification", "type": "mteb/banking77", "config": "default", "split": "test", "revision": "44fa15921b4c889113cc5df03dd4901b49161ab7"}, "metrics": [{"type": "accuracy", "value": 74.67857142857142}, {"type": "f1", "value": 74.61743413995573}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringP2P", "type": "mteb/biorxiv-clustering-p2p", "config": "default", "split": "test", "revision": "11d0121201d1f1f280e8cc8f3d98fb9c4d9f9c55"}, "metrics": [{"type": "v_measure", "value": 28.93427045246491}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringS2S", "type": "mteb/biorxiv-clustering-s2s", "config": "default", "split": "test", "revision": "c0fab014e1bcb8d3a5e31b2088972a1e01547dc1"}, "metrics": [{"type": "v_measure", "value": 23.080939123955474}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackAndroidRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "2b9f5791698b5be7bc5e10535c8690f20043c3db"}, "metrics": [{"type": "map_at_1", "value": 18.221999999999998}, {"type": "map_at_10", "value": 24.506}, {"type": "map_at_100", "value": 25.611}, {"type": "map_at_1000", "value": 25.758}, {"type": "map_at_3", "value": 22.264999999999997}, {"type": "map_at_5", "value": 23.698}, {"type": "ndcg_at_1", "value": 23.033}, {"type": "ndcg_at_10", "value": 28.719}, {"type": "ndcg_at_100", "value": 33.748}, {"type": "ndcg_at_1000", "value": 37.056}, {"type": "ndcg_at_3", "value": 25.240000000000002}, {"type": "ndcg_at_5", "value": 27.12}, {"type": "precision_at_1", "value": 23.033}, {"type": "precision_at_10", "value": 5.408}, {"type": "precision_at_100", "value": 1.004}, {"type": "precision_at_1000", "value": 0.158}, {"type": "precision_at_3", "value": 11.874}, {"type": "precision_at_5", "value": 8.927}, {"type": "recall_at_1", "value": 18.221999999999998}, {"type": "recall_at_10", "value": 
36.355}, {"type": "recall_at_100", "value": 58.724}, {"type": "recall_at_1000", "value": 81.33500000000001}, {"type": "recall_at_3", "value": 26.334000000000003}, {"type": "recall_at_5", "value": 31.4}, {"type": "map_at_1", "value": 12.058}, {"type": "map_at_10", "value": 16.051000000000002}, {"type": "map_at_100", "value": 16.772000000000002}, {"type": "map_at_1000", "value": 16.871}, {"type": "map_at_3", "value": 14.78}, {"type": "map_at_5", "value": 15.5}, {"type": "ndcg_at_1", "value": 15.35}, {"type": "ndcg_at_10", "value": 18.804000000000002}, {"type": "ndcg_at_100", "value": 22.346}, {"type": "ndcg_at_1000", "value": 25.007}, {"type": "ndcg_at_3", "value": 16.768}, {"type": "ndcg_at_5", "value": 17.692}, {"type": "precision_at_1", "value": 15.35}, {"type": "precision_at_10", "value": 3.51}, {"type": "precision_at_100", "value": 0.664}, {"type": "precision_at_1000", "value": 0.11100000000000002}, {"type": "precision_at_3", "value": 7.983}, {"type": "precision_at_5", "value": 5.656}, {"type": "recall_at_1", "value": 12.058}, {"type": "recall_at_10", "value": 23.644000000000002}, {"type": "recall_at_100", "value": 39.76}, {"type": "recall_at_1000", "value": 58.56}, {"type": "recall_at_3", "value": 17.541999999999998}, {"type": "recall_at_5", "value": 20.232}, {"type": "map_at_1", "value": 21.183}, {"type": "map_at_10", "value": 28.9}, {"type": "map_at_100", "value": 29.858}, {"type": "map_at_1000", "value": 29.953999999999997}, {"type": "map_at_3", "value": 26.58}, {"type": "map_at_5", "value": 27.912}, {"type": "ndcg_at_1", "value": 24.765}, {"type": "ndcg_at_10", "value": 33.339999999999996}, {"type": "ndcg_at_100", "value": 37.997}, {"type": "ndcg_at_1000", "value": 40.416000000000004}, {"type": "ndcg_at_3", "value": 29.044999999999998}, {"type": "ndcg_at_5", "value": 31.121}, {"type": "precision_at_1", "value": 24.765}, {"type": "precision_at_10", "value": 5.599}, {"type": "precision_at_100", "value": 0.8699999999999999}, {"type": "precision_at_1000", "value": 0.11499999999999999}, {"type": "precision_at_3", "value": 13.270999999999999}, {"type": "precision_at_5", "value": 9.367}, {"type": "recall_at_1", "value": 21.183}, {"type": "recall_at_10", "value": 43.875}, {"type": "recall_at_100", "value": 65.005}, {"type": "recall_at_1000", "value": 83.017}, {"type": "recall_at_3", "value": 32.232}, {"type": "recall_at_5", "value": 37.308}, {"type": "map_at_1", "value": 11.350999999999999}, {"type": "map_at_10", "value": 14.953}, {"type": "map_at_100", "value": 15.623000000000001}, {"type": "map_at_1000", "value": 15.716}, {"type": "map_at_3", "value": 13.603000000000002}, {"type": "map_at_5", "value": 14.343}, {"type": "ndcg_at_1", "value": 12.429}, {"type": "ndcg_at_10", "value": 17.319000000000003}, {"type": "ndcg_at_100", "value": 20.990000000000002}, {"type": "ndcg_at_1000", "value": 23.899}, {"type": "ndcg_at_3", "value": 14.605}, {"type": "ndcg_at_5", "value": 15.89}, {"type": "precision_at_1", "value": 12.429}, {"type": "precision_at_10", "value": 2.701}, {"type": "precision_at_100", "value": 0.48700000000000004}, {"type": "precision_at_1000", "value": 0.078}, {"type": "precision_at_3", "value": 6.026}, {"type": "precision_at_5", "value": 4.3839999999999995}, {"type": "recall_at_1", "value": 11.350999999999999}, {"type": "recall_at_10", "value": 23.536}, {"type": "recall_at_100", "value": 40.942}, {"type": "recall_at_1000", "value": 64.05}, {"type": "recall_at_3", "value": 16.195}, {"type": "recall_at_5", "value": 19.264}, {"type": "map_at_1", "value": 8.08}, {"type": "map_at_10", 
"value": 11.691}, {"type": "map_at_100", "value": 12.312}, {"type": "map_at_1000", "value": 12.439}, {"type": "map_at_3", "value": 10.344000000000001}, {"type": "map_at_5", "value": 10.996}, {"type": "ndcg_at_1", "value": 10.697}, {"type": "ndcg_at_10", "value": 14.48}, {"type": "ndcg_at_100", "value": 18.160999999999998}, {"type": "ndcg_at_1000", "value": 21.886}, {"type": "ndcg_at_3", "value": 11.872}, {"type": "ndcg_at_5", "value": 12.834000000000001}, {"type": "precision_at_1", "value": 10.697}, {"type": "precision_at_10", "value": 2.811}, {"type": "precision_at_100", "value": 0.551}, {"type": "precision_at_1000", "value": 0.10200000000000001}, {"type": "precision_at_3", "value": 5.804}, {"type": "precision_at_5", "value": 4.154}, {"type": "recall_at_1", "value": 8.08}, {"type": "recall_at_10", "value": 20.235}, {"type": "recall_at_100", "value": 37.525999999999996}, {"type": "recall_at_1000", "value": 65.106}, {"type": "recall_at_3", "value": 12.803999999999998}, {"type": "recall_at_5", "value": 15.498999999999999}, {"type": "map_at_1", "value": 13.908999999999999}, {"type": "map_at_10", "value": 19.256}, {"type": "map_at_100", "value": 20.286}, {"type": "map_at_1000", "value": 20.429}, {"type": "map_at_3", "value": 17.399}, {"type": "map_at_5", "value": 18.398999999999997}, {"type": "ndcg_at_1", "value": 17.421}, {"type": "ndcg_at_10", "value": 23.105999999999998}, {"type": "ndcg_at_100", "value": 28.128999999999998}, {"type": "ndcg_at_1000", "value": 31.480999999999998}, {"type": "ndcg_at_3", "value": 19.789}, {"type": "ndcg_at_5", "value": 21.237000000000002}, {"type": "precision_at_1", "value": 17.421}, {"type": "precision_at_10", "value": 4.331}, {"type": "precision_at_100", "value": 0.839}, {"type": "precision_at_1000", "value": 0.131}, {"type": "precision_at_3", "value": 9.4}, {"type": "precision_at_5", "value": 6.776}, {"type": "recall_at_1", "value": 13.908999999999999}, {"type": "recall_at_10", "value": 31.086999999999996}, {"type": "recall_at_100", "value": 52.946000000000005}, {"type": "recall_at_1000", "value": 76.546}, {"type": "recall_at_3", "value": 21.351}, {"type": "recall_at_5", "value": 25.264999999999997}, {"type": "map_at_1", "value": 12.598}, {"type": "map_at_10", "value": 17.304}, {"type": "map_at_100", "value": 18.209}, {"type": "map_at_1000", "value": 18.328}, {"type": "map_at_3", "value": 15.784}, {"type": "map_at_5", "value": 16.669999999999998}, {"type": "ndcg_at_1", "value": 15.867999999999999}, {"type": "ndcg_at_10", "value": 20.623}, {"type": "ndcg_at_100", "value": 25.093}, {"type": "ndcg_at_1000", "value": 28.498}, {"type": "ndcg_at_3", "value": 17.912}, {"type": "ndcg_at_5", "value": 19.198}, {"type": "precision_at_1", "value": 15.867999999999999}, {"type": "precision_at_10", "value": 3.7670000000000003}, {"type": "precision_at_100", "value": 0.716}, {"type": "precision_at_1000", "value": 0.11800000000000001}, {"type": "precision_at_3", "value": 8.638}, {"type": "precision_at_5", "value": 6.21}, {"type": "recall_at_1", "value": 12.598}, {"type": "recall_at_10", "value": 27.144000000000002}, {"type": "recall_at_100", "value": 46.817}, {"type": "recall_at_1000", "value": 71.86099999999999}, {"type": "recall_at_3", "value": 19.231}, {"type": "recall_at_5", "value": 22.716}, {"type": "map_at_1", "value": 12.738416666666666}, {"type": "map_at_10", "value": 17.235916666666668}, {"type": "map_at_100", "value": 18.063333333333333}, {"type": "map_at_1000", "value": 18.18433333333333}, {"type": "map_at_3", "value": 15.74775}, {"type": "map_at_5", "value": 
16.57825}, {"type": "ndcg_at_1", "value": 15.487416666666665}, {"type": "ndcg_at_10", "value": 20.290166666666668}, {"type": "ndcg_at_100", "value": 24.41291666666666}, {"type": "ndcg_at_1000", "value": 27.586333333333336}, {"type": "ndcg_at_3", "value": 17.622083333333332}, {"type": "ndcg_at_5", "value": 18.859916666666667}, {"type": "precision_at_1", "value": 15.487416666666665}, {"type": "precision_at_10", "value": 3.6226666666666665}, {"type": "precision_at_100", "value": 0.6820833333333334}, {"type": "precision_at_1000", "value": 0.11216666666666666}, {"type": "precision_at_3", "value": 8.163749999999999}, {"type": "precision_at_5", "value": 5.865416666666667}, {"type": "recall_at_1", "value": 12.738416666666666}, {"type": "recall_at_10", "value": 26.599416666666663}, {"type": "recall_at_100", "value": 45.41258333333334}, {"type": "recall_at_1000", "value": 68.7565}, {"type": "recall_at_3", "value": 19.008166666666668}, {"type": "recall_at_5", "value": 22.24991666666667}, {"type": "map_at_1", "value": 12.307}, {"type": "map_at_10", "value": 15.440000000000001}, {"type": "map_at_100", "value": 16.033}, {"type": "map_at_1000", "value": 16.14}, {"type": "map_at_3", "value": 14.393}, {"type": "map_at_5", "value": 14.856}, {"type": "ndcg_at_1", "value": 14.571000000000002}, {"type": "ndcg_at_10", "value": 17.685000000000002}, {"type": "ndcg_at_100", "value": 20.882}, {"type": "ndcg_at_1000", "value": 23.888}, {"type": "ndcg_at_3", "value": 15.739}, {"type": "ndcg_at_5", "value": 16.391}, {"type": "precision_at_1", "value": 14.571000000000002}, {"type": "precision_at_10", "value": 2.883}, {"type": "precision_at_100", "value": 0.49100000000000005}, {"type": "precision_at_1000", "value": 0.08}, {"type": "precision_at_3", "value": 7.0040000000000004}, {"type": "precision_at_5", "value": 4.693}, {"type": "recall_at_1", "value": 12.307}, {"type": "recall_at_10", "value": 22.566}, {"type": "recall_at_100", "value": 37.469}, {"type": "recall_at_1000", "value": 60.550000000000004}, {"type": "recall_at_3", "value": 16.742}, {"type": "recall_at_5", "value": 18.634}, {"type": "map_at_1", "value": 6.496}, {"type": "map_at_10", "value": 9.243}, {"type": "map_at_100", "value": 9.841}, {"type": "map_at_1000", "value": 9.946000000000002}, {"type": "map_at_3", "value": 8.395}, {"type": "map_at_5", "value": 8.872}, {"type": "ndcg_at_1", "value": 8.224}, {"type": "ndcg_at_10", "value": 11.24}, {"type": "ndcg_at_100", "value": 14.524999999999999}, {"type": "ndcg_at_1000", "value": 17.686}, {"type": "ndcg_at_3", "value": 9.617}, {"type": "ndcg_at_5", "value": 10.37}, {"type": "precision_at_1", "value": 8.224}, {"type": "precision_at_10", "value": 2.0820000000000003}, {"type": "precision_at_100", "value": 0.443}, {"type": "precision_at_1000", "value": 0.08499999999999999}, {"type": "precision_at_3", "value": 4.623}, {"type": "precision_at_5", "value": 3.331}, {"type": "recall_at_1", "value": 6.496}, {"type": "recall_at_10", "value": 15.310000000000002}, {"type": "recall_at_100", "value": 30.680000000000003}, {"type": "recall_at_1000", "value": 54.335}, {"type": "recall_at_3", "value": 10.691}, {"type": "recall_at_5", "value": 12.687999999999999}, {"type": "map_at_1", "value": 13.843}, {"type": "map_at_10", "value": 17.496000000000002}, {"type": "map_at_100", "value": 18.304000000000002}, {"type": "map_at_1000", "value": 18.426000000000002}, {"type": "map_at_3", "value": 16.225}, {"type": "map_at_5", "value": 16.830000000000002}, {"type": "ndcg_at_1", "value": 16.698}, {"type": "ndcg_at_10", "value": 20.301}, 
{"type": "ndcg_at_100", "value": 24.523}, {"type": "ndcg_at_1000", "value": 27.784}, {"type": "ndcg_at_3", "value": 17.822}, {"type": "ndcg_at_5", "value": 18.794}, {"type": "precision_at_1", "value": 16.698}, {"type": "precision_at_10", "value": 3.3579999999999997}, {"type": "precision_at_100", "value": 0.618}, {"type": "precision_at_1000", "value": 0.101}, {"type": "precision_at_3", "value": 7.898}, {"type": "precision_at_5", "value": 5.428999999999999}, {"type": "recall_at_1", "value": 13.843}, {"type": "recall_at_10", "value": 25.887999999999998}, {"type": "recall_at_100", "value": 45.028}, {"type": "recall_at_1000", "value": 68.991}, {"type": "recall_at_3", "value": 18.851000000000003}, {"type": "recall_at_5", "value": 21.462}, {"type": "map_at_1", "value": 13.757}, {"type": "map_at_10", "value": 19.27}, {"type": "map_at_100", "value": 20.461}, {"type": "map_at_1000", "value": 20.641000000000002}, {"type": "map_at_3", "value": 17.865000000000002}, {"type": "map_at_5", "value": 18.618000000000002}, {"type": "ndcg_at_1", "value": 16.996}, {"type": "ndcg_at_10", "value": 22.774}, {"type": "ndcg_at_100", "value": 27.675}, {"type": "ndcg_at_1000", "value": 31.145}, {"type": "ndcg_at_3", "value": 20.691000000000003}, {"type": "ndcg_at_5", "value": 21.741}, {"type": "precision_at_1", "value": 16.996}, {"type": "precision_at_10", "value": 4.545}, {"type": "precision_at_100", "value": 1.036}, {"type": "precision_at_1000", "value": 0.185}, {"type": "precision_at_3", "value": 10.145}, {"type": "precision_at_5", "value": 7.391}, {"type": "recall_at_1", "value": 13.757}, {"type": "recall_at_10", "value": 28.233999999999998}, {"type": "recall_at_100", "value": 51.05499999999999}, {"type": "recall_at_1000", "value": 75.35300000000001}, {"type": "recall_at_3", "value": 21.794}, {"type": "recall_at_5", "value": 24.614}, {"type": "map_at_1", "value": 9.057}, {"type": "map_at_10", "value": 12.720999999999998}, {"type": "map_at_100", "value": 13.450000000000001}, {"type": "map_at_1000", "value": 13.564000000000002}, {"type": "map_at_3", "value": 11.34}, {"type": "map_at_5", "value": 12.245000000000001}, {"type": "ndcg_at_1", "value": 9.797}, {"type": "ndcg_at_10", "value": 15.091}, {"type": "ndcg_at_100", "value": 18.886}, {"type": "ndcg_at_1000", "value": 22.29}, {"type": "ndcg_at_3", "value": 12.365}, {"type": "ndcg_at_5", "value": 13.931}, {"type": "precision_at_1", "value": 9.797}, {"type": "precision_at_10", "value": 2.477}, {"type": "precision_at_100", "value": 0.466}, {"type": "precision_at_1000", "value": 0.082}, {"type": "precision_at_3", "value": 5.299}, {"type": "precision_at_5", "value": 4.067}, {"type": "recall_at_1", "value": 9.057}, {"type": "recall_at_10", "value": 21.319}, {"type": "recall_at_100", "value": 38.999}, {"type": "recall_at_1000", "value": 65.374}, {"type": "recall_at_3", "value": 14.331}, {"type": "recall_at_5", "value": 17.916999999999998}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ClimateFEVER", "type": "climate-fever", "config": "default", "split": "test", "revision": "392b78eb68c07badcd7c2cd8f39af108375dfcce"}, "metrics": [{"type": "map_at_1", "value": 3.714}, {"type": "map_at_10", "value": 6.926}, {"type": "map_at_100", "value": 7.879}, {"type": "map_at_1000", "value": 8.032}, {"type": "map_at_3", "value": 5.504}, {"type": "map_at_5", "value": 6.357}, {"type": "ndcg_at_1", "value": 8.86}, {"type": "ndcg_at_10", "value": 11.007}, {"type": "ndcg_at_100", "value": 16.154}, {"type": "ndcg_at_1000", "value": 19.668}, {"type": "ndcg_at_3", "value": 8.103}, 
{"type": "ndcg_at_5", "value": 9.456000000000001}, {"type": "precision_at_1", "value": 8.86}, {"type": "precision_at_10", "value": 3.7199999999999998}, {"type": "precision_at_100", "value": 0.9169999999999999}, {"type": "precision_at_1000", "value": 0.156}, {"type": "precision_at_3", "value": 6.254}, {"type": "precision_at_5", "value": 5.380999999999999}, {"type": "recall_at_1", "value": 3.714}, {"type": "recall_at_10", "value": 14.382}, {"type": "recall_at_100", "value": 33.166000000000004}, {"type": "recall_at_1000", "value": 53.444}, {"type": "recall_at_3", "value": 7.523000000000001}, {"type": "recall_at_5", "value": 10.91}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia", "type": "dbpedia-entity", "config": "default", "split": "test", "revision": "f097057d03ed98220bc7309ddb10b71a54d667d6"}, "metrics": [{"type": "map_at_1", "value": 1.764}, {"type": "map_at_10", "value": 3.8600000000000003}, {"type": "map_at_100", "value": 5.457}, {"type": "map_at_1000", "value": 5.938000000000001}, {"type": "map_at_3", "value": 2.667}, {"type": "map_at_5", "value": 3.2199999999999998}, {"type": "ndcg_at_1", "value": 14.000000000000002}, {"type": "ndcg_at_10", "value": 10.868}, {"type": "ndcg_at_100", "value": 12.866}, {"type": "ndcg_at_1000", "value": 17.43}, {"type": "ndcg_at_3", "value": 11.943}, {"type": "ndcg_at_5", "value": 11.66}, {"type": "precision_at_1", "value": 19.25}, {"type": "precision_at_10", "value": 10.274999999999999}, {"type": "precision_at_100", "value": 3.527}, {"type": "precision_at_1000", "value": 0.9119999999999999}, {"type": "precision_at_3", "value": 14.917}, {"type": "precision_at_5", "value": 13.5}, {"type": "recall_at_1", "value": 1.764}, {"type": "recall_at_10", "value": 6.609}, {"type": "recall_at_100", "value": 17.616}, {"type": "recall_at_1000", "value": 33.085}, {"type": "recall_at_3", "value": 3.115}, {"type": "recall_at_5", "value": 4.605}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB EmotionClassification", "type": "mteb/emotion", "config": "default", "split": "test", "revision": "829147f8f75a25f005913200eb5ed41fae320aa1"}, "metrics": [{"type": "accuracy", "value": 42.225}, {"type": "f1", "value": 37.563516542112104}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FEVER", "type": "fever", "config": "default", "split": "test", "revision": "1429cf27e393599b8b359b9b72c666f96b2525f9"}, "metrics": [{"type": "map_at_1", "value": 11.497}, {"type": "map_at_10", "value": 15.744}, {"type": "map_at_100", "value": 16.3}, {"type": "map_at_1000", "value": 16.365}, {"type": "map_at_3", "value": 14.44}, {"type": "map_at_5", "value": 15.18}, {"type": "ndcg_at_1", "value": 12.346}, {"type": "ndcg_at_10", "value": 18.398999999999997}, {"type": "ndcg_at_100", "value": 21.399}, {"type": "ndcg_at_1000", "value": 23.442}, {"type": "ndcg_at_3", "value": 15.695}, {"type": "ndcg_at_5", "value": 17.027}, {"type": "precision_at_1", "value": 12.346}, {"type": "precision_at_10", "value": 2.798}, {"type": "precision_at_100", "value": 0.445}, {"type": "precision_at_1000", "value": 0.063}, {"type": "precision_at_3", "value": 6.586}, {"type": "precision_at_5", "value": 4.665}, {"type": "recall_at_1", "value": 11.497}, {"type": "recall_at_10", "value": 25.636}, {"type": "recall_at_100", "value": 39.894}, {"type": "recall_at_1000", "value": 56.181000000000004}, {"type": "recall_at_3", "value": 18.273}, {"type": "recall_at_5", "value": 21.474}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA2018", "type": "fiqa", "config": 
"default", "split": "test", "revision": "41b686a7f28c59bcaaa5791efd47c67c8ebe28be"}, "metrics": [{"type": "map_at_1", "value": 3.637}, {"type": "map_at_10", "value": 6.084}, {"type": "map_at_100", "value": 6.9190000000000005}, {"type": "map_at_1000", "value": 7.1080000000000005}, {"type": "map_at_3", "value": 5.071}, {"type": "map_at_5", "value": 5.5649999999999995}, {"type": "ndcg_at_1", "value": 7.407}, {"type": "ndcg_at_10", "value": 8.94}, {"type": "ndcg_at_100", "value": 13.594999999999999}, {"type": "ndcg_at_1000", "value": 18.29}, {"type": "ndcg_at_3", "value": 7.393}, {"type": "ndcg_at_5", "value": 7.854}, {"type": "precision_at_1", "value": 7.407}, {"type": "precision_at_10", "value": 2.778}, {"type": "precision_at_100", "value": 0.75}, {"type": "precision_at_1000", "value": 0.154}, {"type": "precision_at_3", "value": 5.144}, {"type": "precision_at_5", "value": 3.981}, {"type": "recall_at_1", "value": 3.637}, {"type": "recall_at_10", "value": 11.821}, {"type": "recall_at_100", "value": 30.18}, {"type": "recall_at_1000", "value": 60.207}, {"type": "recall_at_3", "value": 6.839}, {"type": "recall_at_5", "value": 8.649}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA", "type": "hotpotqa", "config": "default", "split": "test", "revision": "766870b35a1b9ca65e67a0d1913899973551fc6c"}, "metrics": [{"type": "map_at_1", "value": 9.676}, {"type": "map_at_10", "value": 13.350999999999999}, {"type": "map_at_100", "value": 13.919}, {"type": "map_at_1000", "value": 14.01}, {"type": "map_at_3", "value": 12.223}, {"type": "map_at_5", "value": 12.812000000000001}, {"type": "ndcg_at_1", "value": 19.352}, {"type": "ndcg_at_10", "value": 17.727}, {"type": "ndcg_at_100", "value": 20.837}, {"type": "ndcg_at_1000", "value": 23.412}, {"type": "ndcg_at_3", "value": 15.317}, {"type": "ndcg_at_5", "value": 16.436}, {"type": "precision_at_1", "value": 19.352}, {"type": "precision_at_10", "value": 3.993}, {"type": "precision_at_100", "value": 0.651}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 9.669}, {"type": "precision_at_5", "value": 6.69}, {"type": "recall_at_1", "value": 9.676}, {"type": "recall_at_10", "value": 19.966}, {"type": "recall_at_100", "value": 32.573}, {"type": "recall_at_1000", "value": 49.905}, {"type": "recall_at_3", "value": 14.504}, {"type": "recall_at_5", "value": 16.725}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ImdbClassification", "type": "mteb/imdb", "config": "default", "split": "test", "revision": "8d743909f834c38949e8323a8a6ce8721ea6c7f4"}, "metrics": [{"type": "accuracy", "value": 62.895999999999994}, {"type": "ap", "value": 58.47769349850157}, {"type": "f1", "value": 62.67885149592086}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MSMARCO", "type": "msmarco", "config": "default", "split": "validation", "revision": "e6838a846e2408f22cf5cc337ebc83e0bcf77849"}, "metrics": [{"type": "map_at_1", "value": 2.88}, {"type": "map_at_10", "value": 4.914000000000001}, {"type": "map_at_100", "value": 5.459}, {"type": "map_at_1000", "value": 5.538}, {"type": "map_at_3", "value": 4.087}, {"type": "map_at_5", "value": 4.518}, {"type": "ndcg_at_1", "value": 2.937}, {"type": "ndcg_at_10", "value": 6.273}, {"type": "ndcg_at_100", "value": 9.426}, {"type": "ndcg_at_1000", "value": 12.033000000000001}, {"type": "ndcg_at_3", "value": 4.513}, {"type": "ndcg_at_5", "value": 5.292}, {"type": "precision_at_1", "value": 2.937}, {"type": "precision_at_10", "value": 1.089}, {"type": "precision_at_100", 
"value": 0.27699999999999997}, {"type": "precision_at_1000", "value": 0.051000000000000004}, {"type": "precision_at_3", "value": 1.9290000000000003}, {"type": "precision_at_5", "value": 1.547}, {"type": "recall_at_1", "value": 2.88}, {"type": "recall_at_10", "value": 10.578}, {"type": "recall_at_100", "value": 26.267000000000003}, {"type": "recall_at_1000", "value": 47.589999999999996}, {"type": "recall_at_3", "value": 5.673}, {"type": "recall_at_5", "value": 7.545}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "mteb/mtop_domain", "config": "en", "split": "test", "revision": "a7e2a951126a26fc8c6a69f835f33a346ba259e3"}, "metrics": [{"type": "accuracy", "value": 81.51846785225717}, {"type": "f1", "value": 81.648869152345}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (de)", "type": "mteb/mtop_domain", "config": "de", "split": "test", "revision": "a7e2a951126a26fc8c6a69f835f33a346ba259e3"}, "metrics": [{"type": "accuracy", "value": 60.37475345167653}, {"type": "f1", "value": 58.452649375517026}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (es)", "type": "mteb/mtop_domain", "config": "es", "split": "test", "revision": "a7e2a951126a26fc8c6a69f835f33a346ba259e3"}, "metrics": [{"type": "accuracy", "value": 67.36824549699799}, {"type": "f1", "value": 65.35927434998516}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (fr)", "type": "mteb/mtop_domain", "config": "fr", "split": "test", "revision": "a7e2a951126a26fc8c6a69f835f33a346ba259e3"}, "metrics": [{"type": "accuracy", "value": 63.12871907297212}, {"type": "f1", "value": 61.37620329272278}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (hi)", "type": "mteb/mtop_domain", "config": "hi", "split": "test", "revision": "a7e2a951126a26fc8c6a69f835f33a346ba259e3"}, "metrics": [{"type": "accuracy", "value": 47.04553603442094}, {"type": "f1", "value": 46.20389912644561}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (th)", "type": "mteb/mtop_domain", "config": "th", "split": "test", "revision": "a7e2a951126a26fc8c6a69f835f33a346ba259e3"}, "metrics": [{"type": "accuracy", "value": 52.282097649186255}, {"type": "f1", "value": 50.75489206473579}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": "mteb/mtop_intent", "config": "en", "split": "test", "revision": "6299947a7777084cc2d4b64235bf7190381ce755"}, "metrics": [{"type": "accuracy", "value": 58.2421340629275}, {"type": "f1", "value": 40.11696046622642}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (de)", "type": "mteb/mtop_intent", "config": "de", "split": "test", "revision": "6299947a7777084cc2d4b64235bf7190381ce755"}, "metrics": [{"type": "accuracy", "value": 45.069033530571986}, {"type": "f1", "value": 30.468468273374967}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (es)", "type": "mteb/mtop_intent", "config": "es", "split": "test", "revision": "6299947a7777084cc2d4b64235bf7190381ce755"}, "metrics": [{"type": "accuracy", "value": 48.80920613742495}, {"type": "f1", "value": 32.65985375400447}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (fr)", "type": "mteb/mtop_intent", "config": "fr", "split": "test", "revision": 
"6299947a7777084cc2d4b64235bf7190381ce755"}, "metrics": [{"type": "accuracy", "value": 44.337613529595984}, {"type": "f1", "value": 29.302047435606436}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (hi)", "type": "mteb/mtop_intent", "config": "hi", "split": "test", "revision": "6299947a7777084cc2d4b64235bf7190381ce755"}, "metrics": [{"type": "accuracy", "value": 34.198637504481894}, {"type": "f1", "value": 22.063706032248408}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (th)", "type": "mteb/mtop_intent", "config": "th", "split": "test", "revision": "6299947a7777084cc2d4b64235bf7190381ce755"}, "metrics": [{"type": "accuracy", "value": 43.11030741410488}, {"type": "f1", "value": 26.92408933648504}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (af)", "type": "mteb/amazon_massive_intent", "config": "af", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 37.79421654337593}, {"type": "f1", "value": 36.81580701507746}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (am)", "type": "mteb/amazon_massive_intent", "config": "am", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 23.722259583053127}, {"type": "f1", "value": 23.235269695764273}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ar)", "type": "mteb/amazon_massive_intent", "config": "ar", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 29.64021519838601}, {"type": "f1", "value": 28.273175327650137}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (az)", "type": "mteb/amazon_massive_intent", "config": "az", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 39.4754539340955}, {"type": "f1", "value": 39.25997361415121}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (bn)", "type": "mteb/amazon_massive_intent", "config": "bn", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 26.550100874243444}, {"type": "f1", "value": 25.607924873522975}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (cy)", "type": "mteb/amazon_massive_intent", "config": "cy", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 38.78278412911904}, {"type": "f1", "value": 37.64180582626517}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (da)", "type": "mteb/amazon_massive_intent", "config": "da", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 43.557498318762605}, {"type": "f1", "value": 41.35305173800667}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (de)", "type": "mteb/amazon_massive_intent", "config": "de", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 40.39340954942838}, {"type": "f1", "value": 38.33393219528934}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB 
MassiveIntentClassification (el)", "type": "mteb/amazon_massive_intent", "config": "el", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 37.28648285137861}, {"type": "f1", "value": 36.64005906680284}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": "mteb/amazon_massive_intent", "config": "en", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 58.080026899798256}, {"type": "f1", "value": 56.49243881660991}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (es)", "type": "mteb/amazon_massive_intent", "config": "es", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 41.176866173503704}, {"type": "f1", "value": 40.66779962225799}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (fa)", "type": "mteb/amazon_massive_intent", "config": "fa", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 36.422326832548755}, {"type": "f1", "value": 34.6441738042885}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (fi)", "type": "mteb/amazon_massive_intent", "config": "fi", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 38.75588433086752}, {"type": "f1", "value": 37.26725894668694}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (fr)", "type": "mteb/amazon_massive_intent", "config": "fr", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 43.67182246133153}, {"type": "f1", "value": 42.351846624566605}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (he)", "type": "mteb/amazon_massive_intent", "config": "he", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 31.980497646267658}, {"type": "f1", "value": 30.557928872809008}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (hi)", "type": "mteb/amazon_massive_intent", "config": "hi", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 28.039677202420982}, {"type": "f1", "value": 28.428418145508306}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (hu)", "type": "mteb/amazon_massive_intent", "config": "hu", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 38.13718897108272}, {"type": "f1", "value": 37.057406988196874}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (hy)", "type": "mteb/amazon_massive_intent", "config": "hy", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 26.05245460659045}, {"type": "f1", "value": 25.25483953344816}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (id)", "type": "mteb/amazon_massive_intent", "config": "id", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 
41.156691324815064}, {"type": "f1", "value": 40.83715033247605}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (is)", "type": "mteb/amazon_massive_intent", "config": "is", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 38.62811028917284}, {"type": "f1", "value": 37.67691901246032}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (it)", "type": "mteb/amazon_massive_intent", "config": "it", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 44.0383322125084}, {"type": "f1", "value": 43.77259010877456}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ja)", "type": "mteb/amazon_massive_intent", "config": "ja", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 46.20712844653666}, {"type": "f1", "value": 44.66632875940824}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (jv)", "type": "mteb/amazon_massive_intent", "config": "jv", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 37.60591795561533}, {"type": "f1", "value": 36.581071742378015}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ka)", "type": "mteb/amazon_massive_intent", "config": "ka", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 24.47209145931405}, {"type": "f1", "value": 24.238209697895606}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (km)", "type": "mteb/amazon_massive_intent", "config": "km", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 26.23739071956961}, {"type": "f1", "value": 25.378783150845052}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (kn)", "type": "mteb/amazon_massive_intent", "config": "kn", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 17.831203765971754}, {"type": "f1", "value": 17.275078420466343}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ko)", "type": "mteb/amazon_massive_intent", "config": "ko", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 37.266308002689975}, {"type": "f1", "value": 36.92473791708214}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (lv)", "type": "mteb/amazon_massive_intent", "config": "lv", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 40.93140551445864}, {"type": "f1", "value": 40.825227889641965}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ml)", "type": "mteb/amazon_massive_intent", "config": "ml", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 17.88500336247478}, {"type": "f1", "value": 17.621569082971817}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (mn)", "type": "mteb/amazon_massive_intent", 
"config": "mn", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 32.975790181573636}, {"type": "f1", "value": 33.402014633349665}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ms)", "type": "mteb/amazon_massive_intent", "config": "ms", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 40.91123066577001}, {"type": "f1", "value": 40.09538559124075}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (my)", "type": "mteb/amazon_massive_intent", "config": "my", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 17.834566240753194}, {"type": "f1", "value": 17.006381849454314}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (nb)", "type": "mteb/amazon_massive_intent", "config": "nb", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 39.47881640887693}, {"type": "f1", "value": 37.819934317839305}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (nl)", "type": "mteb/amazon_massive_intent", "config": "nl", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 41.76193678547412}, {"type": "f1", "value": 40.281991759509694}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (pl)", "type": "mteb/amazon_massive_intent", "config": "pl", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 42.61936785474109}, {"type": "f1", "value": 40.83673914649905}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (pt)", "type": "mteb/amazon_massive_intent", "config": "pt", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 44.54270342972427}, {"type": "f1", "value": 43.45243164278448}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ro)", "type": "mteb/amazon_massive_intent", "config": "ro", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 39.96973772696705}, {"type": "f1", "value": 38.74209466530094}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ru)", "type": "mteb/amazon_massive_intent", "config": "ru", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 37.461331540013454}, {"type": "f1", "value": 36.91132021821187}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (sl)", "type": "mteb/amazon_massive_intent", "config": "sl", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 38.28850033624748}, {"type": "f1", "value": 37.37259394049676}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (sq)", "type": "mteb/amazon_massive_intent", "config": "sq", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 40.95494283792872}, {"type": "f1", "value": 39.767707902869084}]}, {"task": 
{"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (sv)", "type": "mteb/amazon_massive_intent", "config": "sv", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 41.85272360457296}, {"type": "f1", "value": 40.42848260365438}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (sw)", "type": "mteb/amazon_massive_intent", "config": "sw", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 38.328850033624754}, {"type": "f1", "value": 36.90334596675622}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ta)", "type": "mteb/amazon_massive_intent", "config": "ta", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 19.031607262945528}, {"type": "f1", "value": 18.66510306325761}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (te)", "type": "mteb/amazon_massive_intent", "config": "te", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 19.38466711499664}, {"type": "f1", "value": 19.186399376652535}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (th)", "type": "mteb/amazon_massive_intent", "config": "th", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 34.088769334229994}, {"type": "f1", "value": 34.20383086009429}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (tl)", "type": "mteb/amazon_massive_intent", "config": "tl", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 40.285810356422324}, {"type": "f1", "value": 39.361500249640414}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (tr)", "type": "mteb/amazon_massive_intent", "config": "tr", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 38.860121049092136}, {"type": "f1", "value": 37.81916859627235}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ur)", "type": "mteb/amazon_massive_intent", "config": "ur", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 27.834566240753194}, {"type": "f1", "value": 26.898389386106487}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (vi)", "type": "mteb/amazon_massive_intent", "config": "vi", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 38.70544720914593}, {"type": "f1", "value": 38.280026442024415}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (zh-CN)", "type": "mteb/amazon_massive_intent", "config": "zh-CN", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 45.78009414929387}, {"type": "f1", "value": 44.21526778674136}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (zh-TW)", "type": "mteb/amazon_massive_intent", "config": "zh-TW", "split": "test", "revision": 
"072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 42.32010759919301}, {"type": "f1", "value": 42.25772977490916}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (af)", "type": "mteb/amazon_massive_scenario", "config": "af", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 40.24546065904506}, {"type": "f1", "value": 38.79924050989544}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (am)", "type": "mteb/amazon_massive_scenario", "config": "am", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 25.68930733019502}, {"type": "f1", "value": 25.488166279162712}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ar)", "type": "mteb/amazon_massive_scenario", "config": "ar", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 32.39744451916611}, {"type": "f1", "value": 31.863029579075775}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (az)", "type": "mteb/amazon_massive_scenario", "config": "az", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 40.53127101546738}, {"type": "f1", "value": 39.707079033948936}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (bn)", "type": "mteb/amazon_massive_scenario", "config": "bn", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 27.23268325487559}, {"type": "f1", "value": 26.443653281858793}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (cy)", "type": "mteb/amazon_massive_scenario", "config": "cy", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 38.69872225958305}, {"type": "f1", "value": 36.55930387892567}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (da)", "type": "mteb/amazon_massive_scenario", "config": "da", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 44.75453934095494}, {"type": "f1", "value": 42.87356484024154}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (de)", "type": "mteb/amazon_massive_scenario", "config": "de", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 41.355077336919976}, {"type": "f1", "value": 39.82365179458047}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (el)", "type": "mteb/amazon_massive_scenario", "config": "el", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 38.43981170141224}, {"type": "f1", "value": 37.02538368296387}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "mteb/amazon_massive_scenario", "config": "en", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 66.33826496301278}, {"type": "f1", "value": 65.89634765029932}]}, {"task": 
{"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (es)", "type": "mteb/amazon_massive_scenario", "config": "es", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 44.17955615332885}, {"type": "f1", "value": 43.10228811620319}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (fa)", "type": "mteb/amazon_massive_scenario", "config": "fa", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 34.82851378614661}, {"type": "f1", "value": 33.95952441502803}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (fi)", "type": "mteb/amazon_massive_scenario", "config": "fi", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 40.561533288500335}, {"type": "f1", "value": 38.04939011733627}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (fr)", "type": "mteb/amazon_massive_scenario", "config": "fr", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 45.917955615332886}, {"type": "f1", "value": 44.65741971572902}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (he)", "type": "mteb/amazon_massive_scenario", "config": "he", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 32.08473436449227}, {"type": "f1", "value": 29.53932929808133}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (hi)", "type": "mteb/amazon_massive_scenario", "config": "hi", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 28.369199731002016}, {"type": "f1", "value": 27.52902837981212}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (hu)", "type": "mteb/amazon_massive_scenario", "config": "hu", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 39.49226630800269}, {"type": "f1", "value": 37.3272340470504}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (hy)", "type": "mteb/amazon_massive_scenario", "config": "hy", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 25.904505716207133}, {"type": "f1", "value": 24.547396574853444}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (id)", "type": "mteb/amazon_massive_scenario", "config": "id", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 40.95830531271016}, {"type": "f1", "value": 40.177843177422226}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (is)", "type": "mteb/amazon_massive_scenario", "config": "is", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 38.564223268325485}, {"type": "f1", "value": 37.35307758495248}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (it)", "type": "mteb/amazon_massive_scenario", "config": "it", "split": "test", 
"revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 46.58708809683928}, {"type": "f1", "value": 44.103900526804985}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ja)", "type": "mteb/amazon_massive_scenario", "config": "ja", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 46.24747814391393}, {"type": "f1", "value": 45.4107101796664}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (jv)", "type": "mteb/amazon_massive_scenario", "config": "jv", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 39.6570275722932}, {"type": "f1", "value": 38.82737576832412}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ka)", "type": "mteb/amazon_massive_scenario", "config": "ka", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 25.279085406859448}, {"type": "f1", "value": 23.662661686788493}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (km)", "type": "mteb/amazon_massive_scenario", "config": "km", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 28.97108271687962}, {"type": "f1", "value": 27.195758324189246}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (kn)", "type": "mteb/amazon_massive_scenario", "config": "kn", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 19.27370544720915}, {"type": "f1", "value": 18.694271924323637}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ko)", "type": "mteb/amazon_massive_scenario", "config": "ko", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 35.729657027572294}, {"type": "f1", "value": 34.38287006177308}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (lv)", "type": "mteb/amazon_massive_scenario", "config": "lv", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 39.57296570275723}, {"type": "f1", "value": 38.074945140886925}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ml)", "type": "mteb/amazon_massive_scenario", "config": "ml", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 19.895763281775388}, {"type": "f1", "value": 20.00931364846829}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (mn)", "type": "mteb/amazon_massive_scenario", "config": "mn", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 32.431069266980494}, {"type": "f1", "value": 31.395958664782576}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ms)", "type": "mteb/amazon_massive_scenario", "config": "ms", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 42.32347007397445}, {"type": "f1", "value": 40.81374026314701}]}, 
{"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (my)", "type": "mteb/amazon_massive_scenario", "config": "my", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 20.864156018829856}, {"type": "f1", "value": 20.409870408935436}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (nb)", "type": "mteb/amazon_massive_scenario", "config": "nb", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 40.47074646940148}, {"type": "f1", "value": 39.19044149415904}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (nl)", "type": "mteb/amazon_massive_scenario", "config": "nl", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 43.591123066577}, {"type": "f1", "value": 41.43420363064241}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (pl)", "type": "mteb/amazon_massive_scenario", "config": "pl", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 41.876260928043045}, {"type": "f1", "value": 41.192117676667614}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (pt)", "type": "mteb/amazon_massive_scenario", "config": "pt", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 46.30800268997983}, {"type": "f1", "value": 45.25536730126799}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ro)", "type": "mteb/amazon_massive_scenario", "config": "ro", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 42.525218560860786}, {"type": "f1", "value": 41.02418109296485}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ru)", "type": "mteb/amazon_massive_scenario", "config": "ru", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 35.94821788836584}, {"type": "f1", "value": 35.08598314806566}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (sl)", "type": "mteb/amazon_massive_scenario", "config": "sl", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 38.69199731002017}, {"type": "f1", "value": 37.68119408674127}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (sq)", "type": "mteb/amazon_massive_scenario", "config": "sq", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 40.474108944182916}, {"type": "f1", "value": 39.480530387013594}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (sv)", "type": "mteb/amazon_massive_scenario", "config": "sv", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 41.523201075991935}, {"type": "f1", "value": 40.20097996024383}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (sw)", "type": "mteb/amazon_massive_scenario", "config": "sw", "split": 
"test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 39.54942837928716}, {"type": "f1", "value": 38.185561243338064}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ta)", "type": "mteb/amazon_massive_scenario", "config": "ta", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 22.8782784129119}, {"type": "f1", "value": 22.239467186721456}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (te)", "type": "mteb/amazon_massive_scenario", "config": "te", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 20.51445864156019}, {"type": "f1", "value": 19.999047885530217}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (th)", "type": "mteb/amazon_massive_scenario", "config": "th", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 34.92602555480834}, {"type": "f1", "value": 33.24016717215723}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (tl)", "type": "mteb/amazon_massive_scenario", "config": "tl", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 40.74983187626093}, {"type": "f1", "value": 39.30274328728882}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (tr)", "type": "mteb/amazon_massive_scenario", "config": "tr", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 39.06859448554136}, {"type": "f1", "value": 39.21542039662971}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ur)", "type": "mteb/amazon_massive_scenario", "config": "ur", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 29.747814391392062}, {"type": "f1", "value": 28.261836892220447}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (vi)", "type": "mteb/amazon_massive_scenario", "config": "vi", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 38.02286482851379}, {"type": "f1", "value": 37.8742438608697}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (zh-CN)", "type": "mteb/amazon_massive_scenario", "config": "zh-CN", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 48.550773369199725}, {"type": "f1", "value": 46.7399625882649}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (zh-TW)", "type": "mteb/amazon_massive_scenario", "config": "zh-TW", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 45.17821116341628}, {"type": "f1", "value": 44.84809741811729}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P", "type": "mteb/medrxiv-clustering-p2p", "config": "default", "split": "test", "revision": "dcefc037ef84348e49b0d29109e891c01067226b"}, "metrics": [{"type": "v_measure", "value": 28.301902023313875}]}, {"task": {"type": "Clustering"}, 
"dataset": {"name": "MTEB MedrxivClusteringS2S", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "3cd0e71dfbe09d4de0f9e5ecba43e7ce280959dc"}, "metrics": [{"type": "v_measure", "value": 24.932123582259287}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "3bdac13927fdc888b903db93b2ffdbd90b295a69"}, "metrics": [{"type": "map", "value": 29.269341041468326}, {"type": "mrr", "value": 30.132140876875717}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus", "type": "nfcorpus", "config": "default", "split": "test", "revision": "7eb63cc0c1eb59324d709ebed25fcab851fa7610"}, "metrics": [{"type": "map_at_1", "value": 1.2269999999999999}, {"type": "map_at_10", "value": 3.081}, {"type": "map_at_100", "value": 4.104}, {"type": "map_at_1000", "value": 4.989}, {"type": "map_at_3", "value": 2.221}, {"type": "map_at_5", "value": 2.535}, {"type": "ndcg_at_1", "value": 15.015}, {"type": "ndcg_at_10", "value": 11.805}, {"type": "ndcg_at_100", "value": 12.452}, {"type": "ndcg_at_1000", "value": 22.284000000000002}, {"type": "ndcg_at_3", "value": 13.257}, {"type": "ndcg_at_5", "value": 12.199}, {"type": "precision_at_1", "value": 16.409000000000002}, {"type": "precision_at_10", "value": 9.102}, {"type": "precision_at_100", "value": 3.678}, {"type": "precision_at_1000", "value": 1.609}, {"type": "precision_at_3", "value": 12.797}, {"type": "precision_at_5", "value": 10.464}, {"type": "recall_at_1", "value": 1.2269999999999999}, {"type": "recall_at_10", "value": 5.838}, {"type": "recall_at_100", "value": 15.716}, {"type": "recall_at_1000", "value": 48.837}, {"type": "recall_at_3", "value": 2.828}, {"type": "recall_at_5", "value": 3.697}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ", "type": "nq", "config": "default", "split": "test", "revision": "6062aefc120bfe8ece5897809fb2e53bfe0d128c"}, "metrics": [{"type": "map_at_1", "value": 3.515}, {"type": "map_at_10", "value": 5.884}, {"type": "map_at_100", "value": 6.510000000000001}, {"type": "map_at_1000", "value": 6.598999999999999}, {"type": "map_at_3", "value": 4.8919999999999995}, {"type": "map_at_5", "value": 5.391}, {"type": "ndcg_at_1", "value": 4.056}, {"type": "ndcg_at_10", "value": 7.6259999999999994}, {"type": "ndcg_at_100", "value": 11.08}, {"type": "ndcg_at_1000", "value": 13.793}, {"type": "ndcg_at_3", "value": 5.537}, {"type": "ndcg_at_5", "value": 6.45}, {"type": "precision_at_1", "value": 4.056}, {"type": "precision_at_10", "value": 1.4569999999999999}, {"type": "precision_at_100", "value": 0.347}, {"type": "precision_at_1000", "value": 0.061}, {"type": "precision_at_3", "value": 2.6069999999999998}, {"type": "precision_at_5", "value": 2.086}, {"type": "recall_at_1", "value": 3.515}, {"type": "recall_at_10", "value": 12.312}, {"type": "recall_at_100", "value": 28.713}, {"type": "recall_at_1000", "value": 50.027}, {"type": "recall_at_3", "value": 6.701}, {"type": "recall_at_5", "value": 8.816}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval", "type": "quora", "config": "default", "split": "test", "revision": "6205996560df11e3a3da9ab4f926788fc30a7db4"}, "metrics": [{"type": "map_at_1", "value": 61.697}, {"type": "map_at_10", "value": 74.20400000000001}, {"type": "map_at_100", "value": 75.023}, {"type": "map_at_1000", "value": 75.059}, {"type": "map_at_3", "value": 71.265}, {"type": "map_at_5", "value": 73.001}, {"type": "ndcg_at_1", "value": 
70.95}, {"type": "ndcg_at_10", "value": 78.96}, {"type": "ndcg_at_100", "value": 81.26}, {"type": "ndcg_at_1000", "value": 81.679}, {"type": "ndcg_at_3", "value": 75.246}, {"type": "ndcg_at_5", "value": 77.092}, {"type": "precision_at_1", "value": 70.95}, {"type": "precision_at_10", "value": 11.998000000000001}, {"type": "precision_at_100", "value": 1.451}, {"type": "precision_at_1000", "value": 0.154}, {"type": "precision_at_3", "value": 32.629999999999995}, {"type": "precision_at_5", "value": 21.573999999999998}, {"type": "recall_at_1", "value": 61.697}, {"type": "recall_at_10", "value": 88.23299999999999}, {"type": "recall_at_100", "value": 96.961}, {"type": "recall_at_1000", "value": 99.401}, {"type": "recall_at_3", "value": 77.689}, {"type": "recall_at_5", "value": 82.745}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering", "type": "mteb/reddit-clustering", "config": "default", "split": "test", "revision": "b2805658ae38990172679479369a78b86de8c390"}, "metrics": [{"type": "v_measure", "value": 33.75741018380938}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P", "type": "mteb/reddit-clustering-p2p", "config": "default", "split": "test", "revision": "385e3cb46b4cfa89021f56c4380204149d0efe33"}, "metrics": [{"type": "v_measure", "value": 41.00799910099266}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS", "type": "scidocs", "config": "default", "split": "test", "revision": "5c59ef3e437a0a9651c8fe6fde943e7dce59fba5"}, "metrics": [{"type": "map_at_1", "value": 1.72}, {"type": "map_at_10", "value": 3.8240000000000003}, {"type": "map_at_100", "value": 4.727}, {"type": "map_at_1000", "value": 4.932}, {"type": "map_at_3", "value": 2.867}, {"type": "map_at_5", "value": 3.3230000000000004}, {"type": "ndcg_at_1", "value": 8.5}, {"type": "ndcg_at_10", "value": 7.133000000000001}, {"type": "ndcg_at_100", "value": 11.911}, {"type": "ndcg_at_1000", "value": 16.962}, {"type": "ndcg_at_3", "value": 6.763}, {"type": "ndcg_at_5", "value": 5.832}, {"type": "precision_at_1", "value": 8.5}, {"type": "precision_at_10", "value": 3.6799999999999997}, {"type": "precision_at_100", "value": 1.0670000000000002}, {"type": "precision_at_1000", "value": 0.22999999999999998}, {"type": "precision_at_3", "value": 6.2330000000000005}, {"type": "precision_at_5", "value": 5.0200000000000005}, {"type": "recall_at_1", "value": 1.72}, {"type": "recall_at_10", "value": 7.487000000000001}, {"type": "recall_at_100", "value": 21.683}, {"type": "recall_at_1000", "value": 46.688}, {"type": "recall_at_3", "value": 3.798}, {"type": "recall_at_5", "value": 5.113}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "20a6d6f312dd54037fe07a32d58e5e168867909d"}, "metrics": [{"type": "cos_sim_pearson", "value": 80.96286245858941}, {"type": "cos_sim_spearman", "value": 74.57093488947429}, {"type": "euclidean_pearson", "value": 75.50377970259402}, {"type": "euclidean_spearman", "value": 71.7498004622999}, {"type": "manhattan_pearson", "value": 75.3256836091382}, {"type": "manhattan_spearman", "value": 71.80676733410375}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "fdf84275bb8ce4b49c971d02e84dd1abc677a50f"}, "metrics": [{"type": "cos_sim_pearson", "value": 80.20938796088339}, {"type": "cos_sim_spearman", "value": 69.16914010333394}, {"type": "euclidean_pearson", "value": 
79.33415250097545}, {"type": "euclidean_spearman", "value": 71.46707320292745}, {"type": "manhattan_pearson", "value": 79.73669837981976}, {"type": "manhattan_spearman", "value": 71.87919511134902}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", "type": "mteb/sts13-sts", "config": "default", "split": "test", "revision": "1591bfcbe8c69d4bf7fe2a16e2451017832cafb9"}, "metrics": [{"type": "cos_sim_pearson", "value": 76.401935081936}, {"type": "cos_sim_spearman", "value": 77.23446219694267}, {"type": "euclidean_pearson", "value": 74.61017160439877}, {"type": "euclidean_spearman", "value": 75.85871531365609}, {"type": "manhattan_pearson", "value": 74.83034779539724}, {"type": "manhattan_spearman", "value": 75.95948993588429}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "e2125984e7df8b7871f6ae9949cf6b6795e7c54b"}, "metrics": [{"type": "cos_sim_pearson", "value": 75.35551963935667}, {"type": "cos_sim_spearman", "value": 70.98892671568665}, {"type": "euclidean_pearson", "value": 73.24467338564628}, {"type": "euclidean_spearman", "value": 71.97533151639425}, {"type": "manhattan_pearson", "value": 73.2776559359938}, {"type": "manhattan_spearman", "value": 72.2221421456084}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "1cd7298cac12a96a373b6a2f18738bb3e739a9b6"}, "metrics": [{"type": "cos_sim_pearson", "value": 79.05293131911803}, {"type": "cos_sim_spearman", "value": 79.7379478259805}, {"type": "euclidean_pearson", "value": 78.17016171851057}, {"type": "euclidean_spearman", "value": 78.76038607583105}, {"type": "manhattan_pearson", "value": 78.4994607532332}, {"type": "manhattan_spearman", "value": 79.13026720132872}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": "360a0b2dff98700d09e634a01e1cc1624d3e42cd"}, "metrics": [{"type": "cos_sim_pearson", "value": 76.04750373932828}, {"type": "cos_sim_spearman", "value": 77.93230986462234}, {"type": "euclidean_pearson", "value": 75.8320302521164}, {"type": "euclidean_spearman", "value": 76.83154481579385}, {"type": "manhattan_pearson", "value": 75.98713517720608}, {"type": "manhattan_spearman", "value": 76.95479705521507}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (ko-ko)", "type": "mteb/sts17-crosslingual-sts", "config": "ko-ko", "split": "test", "revision": "9fc37e8c632af1c87a3d23e685d49552a02582a0"}, "metrics": [{"type": "cos_sim_pearson", "value": 43.0464619152799}, {"type": "cos_sim_spearman", "value": 45.65606588928089}, {"type": "euclidean_pearson", "value": 45.69437788355499}, {"type": "euclidean_spearman", "value": 45.08552742346606}, {"type": "manhattan_pearson", "value": 45.87166698903681}, {"type": "manhattan_spearman", "value": 45.155963016434164}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (ar-ar)", "type": "mteb/sts17-crosslingual-sts", "config": "ar-ar", "split": "test", "revision": "9fc37e8c632af1c87a3d23e685d49552a02582a0"}, "metrics": [{"type": "cos_sim_pearson", "value": 53.27469278912148}, {"type": "cos_sim_spearman", "value": 54.16113207623789}, {"type": "euclidean_pearson", "value": 55.97026429327157}, {"type": "euclidean_spearman", "value": 54.71320909074608}, {"type": "manhattan_pearson", "value": 56.12511774278802}, {"type": "manhattan_spearman", "value": 55.22875659158676}]}, {"task": {"type": "STS"}, 
"dataset": {"name": "MTEB STS17 (en-ar)", "type": "mteb/sts17-crosslingual-sts", "config": "en-ar", "split": "test", "revision": "9fc37e8c632af1c87a3d23e685d49552a02582a0"}, "metrics": [{"type": "cos_sim_pearson", "value": 1.5482997790039945}, {"type": "cos_sim_spearman", "value": 1.7208386347363582}, {"type": "euclidean_pearson", "value": 6.727915670345885}, {"type": "euclidean_spearman", "value": 6.112826908474543}, {"type": "manhattan_pearson", "value": 4.94386093060865}, {"type": "manhattan_spearman", "value": 5.018174110623732}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-de)", "type": "mteb/sts17-crosslingual-sts", "config": "en-de", "split": "test", "revision": "9fc37e8c632af1c87a3d23e685d49552a02582a0"}, "metrics": [{"type": "cos_sim_pearson", "value": 27.5420218362265}, {"type": "cos_sim_spearman", "value": 25.483838431031007}, {"type": "euclidean_pearson", "value": 6.268684143856358}, {"type": "euclidean_spearman", "value": 5.877961421091679}, {"type": "manhattan_pearson", "value": 2.667237739227861}, {"type": "manhattan_spearman", "value": 2.5683839956554775}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "9fc37e8c632af1c87a3d23e685d49552a02582a0"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.32029757646663}, {"type": "cos_sim_spearman", "value": 87.32720847297225}, {"type": "euclidean_pearson", "value": 81.12594485791254}, {"type": "euclidean_spearman", "value": 81.1531079489332}, {"type": "manhattan_pearson", "value": 81.32899414704019}, {"type": "manhattan_spearman", "value": 81.3897040261192}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-tr)", "type": "mteb/sts17-crosslingual-sts", "config": "en-tr", "split": "test", "revision": "9fc37e8c632af1c87a3d23e685d49552a02582a0"}, "metrics": [{"type": "cos_sim_pearson", "value": 4.37162299241808}, {"type": "cos_sim_spearman", "value": 2.0879072561774543}, {"type": "euclidean_pearson", "value": 3.0725243785454595}, {"type": "euclidean_spearman", "value": 5.3721339279483535}, {"type": "manhattan_pearson", "value": 4.867795293367359}, {"type": "manhattan_spearman", "value": 7.9397069840018775}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (es-en)", "type": "mteb/sts17-crosslingual-sts", "config": "es-en", "split": "test", "revision": "9fc37e8c632af1c87a3d23e685d49552a02582a0"}, "metrics": [{"type": "cos_sim_pearson", "value": 20.306030448858603}, {"type": "cos_sim_spearman", "value": 21.93220782551375}, {"type": "euclidean_pearson", "value": 3.878631934602361}, {"type": "euclidean_spearman", "value": 5.171796902725965}, {"type": "manhattan_pearson", "value": 7.13020644036815}, {"type": "manhattan_spearman", "value": 7.707315591498748}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (es-es)", "type": "mteb/sts17-crosslingual-sts", "config": "es-es", "split": "test", "revision": "9fc37e8c632af1c87a3d23e685d49552a02582a0"}, "metrics": [{"type": "cos_sim_pearson", "value": 66.81873207478459}, {"type": "cos_sim_spearman", "value": 67.80273445636502}, {"type": "euclidean_pearson", "value": 70.60654682977268}, {"type": "euclidean_spearman", "value": 69.4566208379486}, {"type": "manhattan_pearson", "value": 70.9548461896642}, {"type": "manhattan_spearman", "value": 69.78323323058773}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (fr-en)", "type": "mteb/sts17-crosslingual-sts", "config": "fr-en", "split": "test", "revision": 
"9fc37e8c632af1c87a3d23e685d49552a02582a0"}, "metrics": [{"type": "cos_sim_pearson", "value": 21.366487281202602}, {"type": "cos_sim_spearman", "value": 18.90627528698481}, {"type": "euclidean_pearson", "value": 2.3390998579461995}, {"type": "euclidean_spearman", "value": 4.151213674012541}, {"type": "manhattan_pearson", "value": 2.234831868844863}, {"type": "manhattan_spearman", "value": 4.555291328501442}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (it-en)", "type": "mteb/sts17-crosslingual-sts", "config": "it-en", "split": "test", "revision": "9fc37e8c632af1c87a3d23e685d49552a02582a0"}, "metrics": [{"type": "cos_sim_pearson", "value": 20.73153177251085}, {"type": "cos_sim_spearman", "value": 16.3855949033176}, {"type": "euclidean_pearson", "value": 8.734648741714238}, {"type": "euclidean_spearman", "value": 10.75672244732182}, {"type": "manhattan_pearson", "value": 7.536654126608877}, {"type": "manhattan_spearman", "value": 8.330065460047296}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (nl-en)", "type": "mteb/sts17-crosslingual-sts", "config": "nl-en", "split": "test", "revision": "9fc37e8c632af1c87a3d23e685d49552a02582a0"}, "metrics": [{"type": "cos_sim_pearson", "value": 26.618435024084253}, {"type": "cos_sim_spearman", "value": 23.488974089577816}, {"type": "euclidean_pearson", "value": 3.1310350304707866}, {"type": "euclidean_spearman", "value": 3.1242598481634665}, {"type": "manhattan_pearson", "value": 1.1096752982707008}, {"type": "manhattan_spearman", "value": 1.4591693078765848}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 59.17638344661753}, {"type": "cos_sim_spearman", "value": 59.636760071130865}, {"type": "euclidean_pearson", "value": 56.68753290255448}, {"type": "euclidean_spearman", "value": 57.613280258574484}, {"type": "manhattan_pearson", "value": 56.92312052723706}, {"type": "manhattan_spearman", "value": 57.76774918418505}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (de)", "type": "mteb/sts22-crosslingual-sts", "config": "de", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 10.322254716987457}, {"type": "cos_sim_spearman", "value": 11.0033092996862}, {"type": "euclidean_pearson", "value": 6.006926471684402}, {"type": "euclidean_spearman", "value": 10.972140246688376}, {"type": "manhattan_pearson", "value": 5.933298751861177}, {"type": "manhattan_spearman", "value": 11.030111585680233}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (es)", "type": "mteb/sts22-crosslingual-sts", "config": "es", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 43.38031880545056}, {"type": "cos_sim_spearman", "value": 43.05358201410913}, {"type": "euclidean_pearson", "value": 42.72327196362553}, {"type": "euclidean_spearman", "value": 42.55163899944477}, {"type": "manhattan_pearson", "value": 44.01557499780587}, {"type": "manhattan_spearman", "value": 43.12473221615855}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (pl)", "type": "mteb/sts22-crosslingual-sts", "config": "pl", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 4.291290504363136}, {"type": "cos_sim_spearman", "value": 
14.912727487893479}, {"type": "euclidean_pearson", "value": 3.2855132112394485}, {"type": "euclidean_spearman", "value": 16.575204463951025}, {"type": "manhattan_pearson", "value": 3.2398776723465814}, {"type": "manhattan_spearman", "value": 16.841985772913855}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (tr)", "type": "mteb/sts22-crosslingual-sts", "config": "tr", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 4.102739498555817}, {"type": "cos_sim_spearman", "value": 3.818238576547375}, {"type": "euclidean_pearson", "value": 2.3181033496453556}, {"type": "euclidean_spearman", "value": 5.1826811802703565}, {"type": "manhattan_pearson", "value": 4.8006179265256455}, {"type": "manhattan_spearman", "value": 6.738401400306252}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (ar)", "type": "mteb/sts22-crosslingual-sts", "config": "ar", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 2.38765395226737}, {"type": "cos_sim_spearman", "value": 5.173899391162327}, {"type": "euclidean_pearson", "value": 3.0710263954769825}, {"type": "euclidean_spearman", "value": 5.04922290903982}, {"type": "manhattan_pearson", "value": 3.7826314109861703}, {"type": "manhattan_spearman", "value": 5.042238232170212}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (ru)", "type": "mteb/sts22-crosslingual-sts", "config": "ru", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 7.6735490672676345}, {"type": "cos_sim_spearman", "value": 3.3631215256878892}, {"type": "euclidean_pearson", "value": 4.64331702652217}, {"type": "euclidean_spearman", "value": 3.6129205171334324}, {"type": "manhattan_pearson", "value": 4.011231736076196}, {"type": "manhattan_spearman", "value": 3.233959766173701}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (zh)", "type": "mteb/sts22-crosslingual-sts", "config": "zh", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 0.06167614416104335}, {"type": "cos_sim_spearman", "value": 6.521685391703255}, {"type": "euclidean_pearson", "value": 4.884572579069032}, {"type": "euclidean_spearman", "value": 5.59058032900239}, {"type": "manhattan_pearson", "value": 6.139838096573897}, {"type": "manhattan_spearman", "value": 5.0060884837066215}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (fr)", "type": "mteb/sts22-crosslingual-sts", "config": "fr", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 53.19490347682836}, {"type": "cos_sim_spearman", "value": 54.56055727079527}, {"type": "euclidean_pearson", "value": 52.55574442039842}, {"type": "euclidean_spearman", "value": 52.94640154371587}, {"type": "manhattan_pearson", "value": 53.275993040454196}, {"type": "manhattan_spearman", "value": 53.174561503510155}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (de-en)", "type": "mteb/sts22-crosslingual-sts", "config": "de-en", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 51.151158530122146}, {"type": "cos_sim_spearman", "value": 53.926925081736655}, {"type": "euclidean_pearson", "value": 44.55629287737235}, {"type": "euclidean_spearman", "value": 46.222372143731384}, {"type": 
"manhattan_pearson", "value": 42.831322151459005}, {"type": "manhattan_spearman", "value": 45.70991764985799}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (es-en)", "type": "mteb/sts22-crosslingual-sts", "config": "es-en", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 30.36194885126792}, {"type": "cos_sim_spearman", "value": 32.739632941633836}, {"type": "euclidean_pearson", "value": 29.83135800843496}, {"type": "euclidean_spearman", "value": 31.114406001326923}, {"type": "manhattan_pearson", "value": 31.264502938148286}, {"type": "manhattan_spearman", "value": 33.3112040753475}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (it)", "type": "mteb/sts22-crosslingual-sts", "config": "it", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 35.23883630335275}, {"type": "cos_sim_spearman", "value": 33.67797082086704}, {"type": "euclidean_pearson", "value": 34.878640693874544}, {"type": "euclidean_spearman", "value": 33.525189235133496}, {"type": "manhattan_pearson", "value": 34.22761246389947}, {"type": "manhattan_spearman", "value": 32.713218497609176}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (pl-en)", "type": "mteb/sts22-crosslingual-sts", "config": "pl-en", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 19.809302548119547}, {"type": "cos_sim_spearman", "value": 20.540370202115497}, {"type": "euclidean_pearson", "value": 23.006803962133016}, {"type": "euclidean_spearman", "value": 22.96270653079511}, {"type": "manhattan_pearson", "value": 25.40168317585851}, {"type": "manhattan_spearman", "value": 25.421508137540865}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (zh-en)", "type": "mteb/sts22-crosslingual-sts", "config": "zh-en", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 20.393500955410488}, {"type": "cos_sim_spearman", "value": 26.705713693011603}, {"type": "euclidean_pearson", "value": 18.168376767724585}, {"type": "euclidean_spearman", "value": 19.260826601517245}, {"type": "manhattan_pearson", "value": 18.302619990671527}, {"type": "manhattan_spearman", "value": 19.4691037846159}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (es-it)", "type": "mteb/sts22-crosslingual-sts", "config": "es-it", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 36.58919983075148}, {"type": "cos_sim_spearman", "value": 35.989722099974045}, {"type": "euclidean_pearson", "value": 41.045112547574206}, {"type": "euclidean_spearman", "value": 39.322301680629835}, {"type": "manhattan_pearson", "value": 41.36802503205308}, {"type": "manhattan_spearman", "value": 40.76270030293609}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (de-fr)", "type": "mteb/sts22-crosslingual-sts", "config": "de-fr", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 26.350936227950083}, {"type": "cos_sim_spearman", "value": 25.108218032460343}, {"type": "euclidean_pearson", "value": 28.61681094744849}, {"type": "euclidean_spearman", "value": 27.350990203943592}, {"type": "manhattan_pearson", "value": 30.527977072984513}, {"type": "manhattan_spearman", "value": 26.403339990640813}]}, {"task": {"type": 
"STS"}, "dataset": {"name": "MTEB STS22 (de-pl)", "type": "mteb/sts22-crosslingual-sts", "config": "de-pl", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 20.056269198600322}, {"type": "cos_sim_spearman", "value": 20.939990379746757}, {"type": "euclidean_pearson", "value": 18.942765438962198}, {"type": "euclidean_spearman", "value": 21.709842967237446}, {"type": "manhattan_pearson", "value": 23.643909798655123}, {"type": "manhattan_spearman", "value": 23.58828328071473}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (fr-pl)", "type": "mteb/sts22-crosslingual-sts", "config": "fr-pl", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 19.563740271419395}, {"type": "cos_sim_spearman", "value": 5.634361698190111}, {"type": "euclidean_pearson", "value": 16.833522619239474}, {"type": "euclidean_spearman", "value": 16.903085094570333}, {"type": "manhattan_pearson", "value": 5.805392712660814}, {"type": "manhattan_spearman", "value": 16.903085094570333}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmark", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "8913289635987208e6e7c72789e4be2fe94b6abd"}, "metrics": [{"type": "cos_sim_pearson", "value": 80.00905671833966}, {"type": "cos_sim_spearman", "value": 79.54269211027272}, {"type": "euclidean_pearson", "value": 79.51954544247441}, {"type": "euclidean_spearman", "value": 78.93670303434288}, {"type": "manhattan_pearson", "value": 79.47610653340678}, {"type": "manhattan_spearman", "value": 79.07344156719613}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": "56a6d0140cf6356659e2a7c1413286a774468d44"}, "metrics": [{"type": "map", "value": 68.35710819755543}, {"type": "mrr", "value": 88.05442832403617}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact", "type": "scifact", "config": "default", "split": "test", "revision": "a75ae049398addde9b70f6b268875f5cbce99089"}, "metrics": [{"type": "map_at_1", "value": 21.556}, {"type": "map_at_10", "value": 27.982000000000003}, {"type": "map_at_100", "value": 28.937}, {"type": "map_at_1000", "value": 29.058}, {"type": "map_at_3", "value": 25.644}, {"type": "map_at_5", "value": 26.996}, {"type": "ndcg_at_1", "value": 23.333000000000002}, {"type": "ndcg_at_10", "value": 31.787}, {"type": "ndcg_at_100", "value": 36.647999999999996}, {"type": "ndcg_at_1000", "value": 39.936}, {"type": "ndcg_at_3", "value": 27.299}, {"type": "ndcg_at_5", "value": 29.659000000000002}, {"type": "precision_at_1", "value": 23.333000000000002}, {"type": "precision_at_10", "value": 4.867}, {"type": "precision_at_100", "value": 0.743}, {"type": "precision_at_1000", "value": 0.10200000000000001}, {"type": "precision_at_3", "value": 11.333}, {"type": "precision_at_5", "value": 8.133}, {"type": "recall_at_1", "value": 21.556}, {"type": "recall_at_10", "value": 42.333}, {"type": "recall_at_100", "value": 65.706}, {"type": "recall_at_1000", "value": 91.489}, {"type": "recall_at_3", "value": 30.361}, {"type": "recall_at_5", "value": 36.222}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "5a8256d0dff9c4bd3be3ba3e67e4e70173f802ea"}, "metrics": [{"type": 
"cos_sim_accuracy", "value": 99.49306930693069}, {"type": "cos_sim_ap", "value": 77.7308550291728}, {"type": "cos_sim_f1", "value": 71.78978681209718}, {"type": "cos_sim_precision", "value": 71.1897738446411}, {"type": "cos_sim_recall", "value": 72.39999999999999}, {"type": "dot_accuracy", "value": 99.08118811881188}, {"type": "dot_ap", "value": 30.267748833368234}, {"type": "dot_f1", "value": 34.335201222618444}, {"type": "dot_precision", "value": 34.994807892004154}, {"type": "dot_recall", "value": 33.7}, {"type": "euclidean_accuracy", "value": 99.51683168316832}, {"type": "euclidean_ap", "value": 78.64498778235628}, {"type": "euclidean_f1", "value": 73.09149972929075}, {"type": "euclidean_precision", "value": 79.69303423848878}, {"type": "euclidean_recall", "value": 67.5}, {"type": "manhattan_accuracy", "value": 99.53168316831683}, {"type": "manhattan_ap", "value": 79.45274878693958}, {"type": "manhattan_f1", "value": 74.19863373620599}, {"type": "manhattan_precision", "value": 78.18383167220377}, {"type": "manhattan_recall", "value": 70.6}, {"type": "max_accuracy", "value": 99.53168316831683}, {"type": "max_ap", "value": 79.45274878693958}, {"type": "max_f1", "value": 74.19863373620599}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClustering", "type": "mteb/stackexchange-clustering", "config": "default", "split": "test", "revision": "70a89468f6dccacc6aa2b12a6eac54e74328f235"}, "metrics": [{"type": "v_measure", "value": 44.59127540530939}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P", "type": "mteb/stackexchange-clustering-p2p", "config": "default", "split": "test", "revision": "d88009ab563dd0b16cfaf4436abaf97fa3550cf0"}, "metrics": [{"type": "v_measure", "value": 28.230204578753636}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions", "type": "mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "ef807ea29a75ec4f91b50fd4191cb4ee4589a9f9"}, "metrics": [{"type": "map", "value": 39.96520488022785}, {"type": "mrr", "value": 40.189248047703934}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEval", "type": "mteb/summeval", "config": "default", "split": "test", "revision": "8753c2788d36c01fc6f05d03fe3f7268d63f9122"}, "metrics": [{"type": "cos_sim_pearson", "value": 30.56303767714449}, {"type": "cos_sim_spearman", "value": 30.256847004390487}, {"type": "dot_pearson", "value": 29.453520030995005}, {"type": "dot_spearman", "value": 29.561732550926777}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID", "type": "trec-covid", "config": "default", "split": "test", "revision": "2c8041b2c07a79b6f7ba8fe6acc72e5d9f92d217"}, "metrics": [{"type": "map_at_1", "value": 0.11299999999999999}, {"type": "map_at_10", "value": 0.733}, {"type": "map_at_100", "value": 3.313}, {"type": "map_at_1000", "value": 7.355}, {"type": "map_at_3", "value": 0.28200000000000003}, {"type": "map_at_5", "value": 0.414}, {"type": "ndcg_at_1", "value": 42.0}, {"type": "ndcg_at_10", "value": 39.31}, {"type": "ndcg_at_100", "value": 26.904}, {"type": "ndcg_at_1000", "value": 23.778}, {"type": "ndcg_at_3", "value": 42.775999999999996}, {"type": "ndcg_at_5", "value": 41.554}, {"type": "precision_at_1", "value": 48.0}, {"type": "precision_at_10", "value": 43.0}, {"type": "precision_at_100", "value": 27.08}, {"type": "precision_at_1000", "value": 11.014}, {"type": "precision_at_3", "value": 48.0}, {"type": "precision_at_5", "value": 45.6}, {"type": 
"recall_at_1", "value": 0.11299999999999999}, {"type": "recall_at_10", "value": 0.976}, {"type": "recall_at_100", "value": 5.888}, {"type": "recall_at_1000", "value": 22.634999999999998}, {"type": "recall_at_3", "value": 0.329}, {"type": "recall_at_5", "value": 0.518}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Touche2020", "type": "webis-touche2020", "config": "default", "split": "test", "revision": "527b7d77e16e343303e68cb6af11d6e18b9f7b3b"}, "metrics": [{"type": "map_at_1", "value": 0.645}, {"type": "map_at_10", "value": 4.1160000000000005}, {"type": "map_at_100", "value": 7.527}, {"type": "map_at_1000", "value": 8.677999999999999}, {"type": "map_at_3", "value": 1.6019999999999999}, {"type": "map_at_5", "value": 2.6}, {"type": "ndcg_at_1", "value": 10.204}, {"type": "ndcg_at_10", "value": 12.27}, {"type": "ndcg_at_100", "value": 22.461000000000002}, {"type": "ndcg_at_1000", "value": 33.543}, {"type": "ndcg_at_3", "value": 9.982000000000001}, {"type": "ndcg_at_5", "value": 11.498}, {"type": "precision_at_1", "value": 10.204}, {"type": "precision_at_10", "value": 12.245000000000001}, {"type": "precision_at_100", "value": 5.286}, {"type": "precision_at_1000", "value": 1.2630000000000001}, {"type": "precision_at_3", "value": 10.884}, {"type": "precision_at_5", "value": 13.061}, {"type": "recall_at_1", "value": 0.645}, {"type": "recall_at_10", "value": 8.996}, {"type": "recall_at_100", "value": 33.666000000000004}, {"type": "recall_at_1000", "value": 67.704}, {"type": "recall_at_3", "value": 2.504}, {"type": "recall_at_5", "value": 4.95}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "edfaf9da55d3dd50d43143d90c1ac476895ae6de"}, "metrics": [{"type": "accuracy", "value": 62.7862}, {"type": "ap", "value": 10.958454618347831}, {"type": "f1", "value": 48.37243417046763}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": "62146448f05be9e52a36b8ee9936447ea787eede"}, "metrics": [{"type": "accuracy", "value": 54.821731748726656}, {"type": "f1", "value": 55.14729314789282}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "091a54f9a36281ce7d6590ec8c75dd485e7e01d4"}, "metrics": [{"type": "v_measure", "value": 28.24295128553035}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 81.5640460153782}, {"type": "cos_sim_ap", "value": 57.094095366921536}, {"type": "cos_sim_f1", "value": 55.29607083563918}, {"type": "cos_sim_precision", "value": 47.62631077216397}, {"type": "cos_sim_recall", "value": 65.91029023746702}, {"type": "dot_accuracy", "value": 78.81623651427549}, {"type": "dot_ap", "value": 47.42989400382077}, {"type": "dot_f1", "value": 51.25944584382871}, {"type": "dot_precision", "value": 42.55838271174625}, {"type": "dot_recall", "value": 64.43271767810026}, {"type": "euclidean_accuracy", "value": 80.29445073612685}, {"type": "euclidean_ap", "value": 53.42012231336148}, {"type": "euclidean_f1", "value": 51.867783563504645}, 
{"type": "euclidean_precision", "value": 45.4203013481364}, {"type": "euclidean_recall", "value": 60.4485488126649}, {"type": "manhattan_accuracy", "value": 80.2884901949097}, {"type": "manhattan_ap", "value": 53.43205271323232}, {"type": "manhattan_f1", "value": 52.014165559982295}, {"type": "manhattan_precision", "value": 44.796035074342356}, {"type": "manhattan_recall", "value": 62.00527704485488}, {"type": "max_accuracy", "value": 81.5640460153782}, {"type": "max_ap", "value": 57.094095366921536}, {"type": "max_f1", "value": 55.29607083563918}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": [{"type": "cos_sim_accuracy", "value": 86.63018589668955}, {"type": "cos_sim_ap", "value": 80.51063771262909}, {"type": "cos_sim_f1", "value": 72.70810586950793}, {"type": "cos_sim_precision", "value": 71.14123627790467}, {"type": "cos_sim_recall", "value": 74.3455497382199}, {"type": "dot_accuracy", "value": 82.41743315092948}, {"type": "dot_ap", "value": 69.2393381283664}, {"type": "dot_f1", "value": 65.61346624814597}, {"type": "dot_precision", "value": 59.43260638630257}, {"type": "dot_recall", "value": 73.22913458577148}, {"type": "euclidean_accuracy", "value": 86.49435324251951}, {"type": "euclidean_ap", "value": 80.28100477250926}, {"type": "euclidean_f1", "value": 72.58242344489099}, {"type": "euclidean_precision", "value": 67.44662568576906}, {"type": "euclidean_recall", "value": 78.56482907299045}, {"type": "manhattan_accuracy", "value": 86.59525749990297}, {"type": "manhattan_ap", "value": 80.37850832566262}, {"type": "manhattan_f1", "value": 72.59435321233073}, {"type": "manhattan_precision", "value": 68.19350473612991}, {"type": "manhattan_recall", "value": 77.60240221743148}, {"type": "max_accuracy", "value": 86.63018589668955}, {"type": "max_ap", "value": 80.51063771262909}, {"type": "max_f1", "value": 72.70810586950793}]}]}]} | Muennighoff/SGPT-125M-weightedmean-nli-bitfit | null | [
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"mteb",
"arxiv:2202.08904",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #mteb #arxiv-2202.08904 #model-index #endpoints_compatible #has_space #region-us
|
# SGPT-125M-weightedmean-nli-bitfit
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to the eval folder or our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SGPT-125M-weightedmean-nli-bitfit",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to the eval folder or our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #mteb #arxiv-2202.08904 #model-index #endpoints_compatible #has_space #region-us \n",
"# SGPT-125M-weightedmean-nli-bitfit",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to the eval folder or our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# SGPT-125M-weightedmean-nli
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
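Below is a minimal usage sketch (not taken from the codebase above): it assumes a sentence-transformers release that supports weighted-mean pooling and simply embeds a few sentences for cosine similarity; for the exact prompting, batching and evaluation setup, the codebase remains the authoritative reference.
```python
# Minimal sketch: load the model with sentence-transformers and compare sentences.
# Assumes a sentence-transformers version that implements
# pooling_mode_weightedmean_tokens (used by this model's pooling layer).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Muennighoff/SGPT-125M-weightedmean-nli")

sentences = [
    "A man is eating food.",
    "A man is eating a piece of bread.",
    "A cheetah is running behind its prey.",
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Pairwise cosine similarities between all sentence embeddings.
print(util.cos_sim(embeddings, embeddings))
```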
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the following parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8807 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit() method:
```
{
"epochs": 1,
"evaluation_steps": 880,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 881,
"weight_decay": 0.01
}
```
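To illustrate how the listed values fit together (this is a reconstruction, not the original training script), the sketch below wires the same NoDuplicatesDataLoader, MultipleNegativesRankingLoss and fit() arguments through the sentence-transformers API; the two InputExample pairs are placeholders for the real NLI data, and the model id is reused only to keep the example self-contained.
```python
# Illustrative reconstruction of the training call from the parameters listed above.
# The real run used the full NLI pair set (8807 batches of size 64) and an
# EmbeddingSimilarityEvaluator every 880 steps, both omitted here.
from sentence_transformers import SentenceTransformer, InputExample, losses, datasets

# For a real re-run you would start from a base GPT-Neo checkpoint rather than
# this finished model; it is loaded here only to make the sketch runnable.
model = SentenceTransformer("Muennighoff/SGPT-125M-weightedmean-nli")

# Placeholder (anchor, positive) pairs standing in for the NLI training data.
train_examples = [
    InputExample(texts=["A man is eating food.", "A man eats something."]),
    InputExample(texts=["A woman plays the violin.", "Someone is making music."]),
]

# NoDuplicatesDataLoader keeps duplicate texts out of a batch, which matters for
# the in-batch negatives used by MultipleNegativesRankingLoss.
train_dataloader = datasets.NoDuplicatesDataLoader(train_examples, batch_size=64)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)  # cos_sim is the default similarity

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=881,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```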
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
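The pooling layer above uses position-weighted mean pooling (pooling_mode_weightedmean_tokens): following the SGPT paper, token i receives weight i / (1 + 2 + ... + n), so later tokens contribute more, which fits a causal (left-to-right) model. The snippet below is a small illustrative re-implementation of that idea, not the library's internal code.
```python
# Illustrative position-weighted mean pooling over a causal LM's last hidden states:
# token i (1-indexed) gets a weight proportional to i, restricted to non-padding
# positions and normalized per sequence. This mirrors the idea behind
# pooling_mode_weightedmean_tokens, not the exact library implementation.
import torch

def weighted_mean_pool(last_hidden_state: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # last_hidden_state: (batch, seq_len, hidden); attention_mask: (batch, seq_len)
    seq_len = last_hidden_state.size(1)
    positions = torch.arange(1, seq_len + 1, device=last_hidden_state.device)
    weights = positions.unsqueeze(0) * attention_mask      # zero out padding positions
    weights = weights / weights.sum(dim=1, keepdim=True)   # normalize weights per sequence
    return (last_hidden_state * weights.unsqueeze(-1)).sum(dim=1)

# Tiny smoke test with random tensors.
hidden = torch.randn(2, 5, 768)
mask = torch.tensor([[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]])
print(weighted_mean_pool(hidden, mask).shape)  # torch.Size([2, 768])
```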
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | Muennighoff/SGPT-125M-weightedmean-nli | null | [
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"arxiv:2202.08904",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us
|
# SGPT-125M-weightedmean-nli
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SGPT-125M-weightedmean-nli",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us \n",
"# SGPT-125M-weightedmean-nli",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8807 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# SGPT-2.7B-weightedmean-msmarco-specb-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
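As a rough sketch only (assuming the hub id Muennighoff/SGPT-2.7B-weightedmean-msmarco-specb-bitfit): the "specb" variant additionally marks queries and documents with special bracket tokens, and the exact token handling is documented in the codebase above, so the plain encode() calls below are a simplification.
```python
# Simplified retrieval-style sketch: embed a query and candidate documents and
# rank them by cosine similarity. The "specb" query/document token handling from
# the SGPT codebase is NOT reproduced here.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Muennighoff/SGPT-2.7B-weightedmean-msmarco-specb-bitfit")

query = "How do solar panels generate electricity?"
documents = [
    "Photovoltaic cells convert sunlight directly into electrical current.",
    "The recipe calls for two cups of flour and a pinch of salt.",
]

query_emb = model.encode(query, convert_to_tensor=True)
doc_embs = model.encode(documents, convert_to_tensor=True)

# Higher cosine similarity = more relevant document.
print(util.cos_sim(query_emb, doc_embs))
```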
## Evaluation Results
For eval results, refer to the eval folder or our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the following parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 124796 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit() method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 7.5e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 2560, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
``` | {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb"], "pipeline_tag": "sentence-similarity", "model-index": [{"name": "SGPT-2.7B-weightedmean-msmarco-specb-bitfit", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "mteb/amazon_counterfactual", "config": "en", "split": "test", "revision": "2d8a100785abf0ae21420d2a55b0c56e3e1ea996"}, "metrics": [{"type": "accuracy", "value": 67.56716417910448}, {"type": "ap", "value": 30.75574629595259}, {"type": "f1", "value": 61.805121301858655}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonPolarityClassification", "type": "mteb/amazon_polarity", "config": "default", "split": "test", "revision": "80714f8dcf8cefc218ef4f8c5a966dd83f75a0e1"}, "metrics": [{"type": "accuracy", "value": 71.439575}, {"type": "ap", "value": 65.91341330532453}, {"type": "f1", "value": 70.90561852619555}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (en)", "type": "mteb/amazon_reviews_multi", "config": "en", "split": "test", "revision": "c379a6705fec24a2493fa68e011692605f44e119"}, "metrics": [{"type": "accuracy", "value": 35.748000000000005}, {"type": "f1", "value": 35.48576287186347}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna", "type": "arguana", "config": "default", "split": "test", "revision": "5b3e3697907184a9b77a3c99ee9ea1a9cbb1e4e3"}, "metrics": [{"type": "map_at_1", "value": 25.96}, {"type": "map_at_10", "value": 41.619}, {"type": "map_at_100", "value": 42.673}, {"type": "map_at_1000", "value": 42.684}, {"type": "map_at_3", "value": 36.569}, {"type": "map_at_5", "value": 39.397}, {"type": "mrr_at_1", "value": 26.316}, {"type": "mrr_at_10", "value": 41.772}, {"type": "mrr_at_100", "value": 42.82}, {"type": "mrr_at_1000", "value": 42.83}, {"type": "mrr_at_3", "value": 36.724000000000004}, {"type": "mrr_at_5", "value": 39.528999999999996}, {"type": "ndcg_at_1", "value": 25.96}, {"type": "ndcg_at_10", "value": 50.491}, {"type": "ndcg_at_100", "value": 54.864999999999995}, {"type": "ndcg_at_1000", "value": 55.10699999999999}, {"type": "ndcg_at_3", "value": 40.053}, {"type": "ndcg_at_5", "value": 45.134}, {"type": "precision_at_1", "value": 25.96}, {"type": "precision_at_10", "value": 7.8950000000000005}, {"type": "precision_at_100", "value": 0.9780000000000001}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 16.714000000000002}, {"type": "precision_at_5", "value": 12.489}, {"type": "recall_at_1", "value": 25.96}, {"type": "recall_at_10", "value": 78.947}, {"type": "recall_at_100", "value": 97.795}, {"type": "recall_at_1000", "value": 99.644}, {"type": "recall_at_3", "value": 50.141999999999996}, {"type": "recall_at_5", "value": 62.446999999999996}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringP2P", "type": "mteb/arxiv-clustering-p2p", "config": "default", "split": "test", "revision": "0bbdb47bcbe3a90093699aefeed338a0f28a7ee8"}, "metrics": [{"type": "v_measure", "value": 44.72125714642202}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringS2S", "type": "mteb/arxiv-clustering-s2s", "config": "default", "split": "test", "revision": "b73bd54100e5abfa6e3a23dcafb46fe4d2438dc3"}, "metrics": [{"type": "v_measure", "value": 35.081451519142064}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB AskUbuntuDupQuestions", "type": "mteb/askubuntudupquestions-reranking", "config": 
"default", "split": "test", "revision": "4d853f94cd57d85ec13805aeeac3ae3e5eb4c49c"}, "metrics": [{"type": "map", "value": 59.634661990392054}, {"type": "mrr", "value": 73.6813525040672}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB BIOSSES", "type": "mteb/biosses-sts", "config": "default", "split": "test", "revision": "9ee918f184421b6bd48b78f6c714d86546106103"}, "metrics": [{"type": "cos_sim_pearson", "value": 87.42754550496836}, {"type": "cos_sim_spearman", "value": 84.84289705838664}, {"type": "euclidean_pearson", "value": 85.59331970450859}, {"type": "euclidean_spearman", "value": 85.8525586184271}, {"type": "manhattan_pearson", "value": 85.41233134466698}, {"type": "manhattan_spearman", "value": 85.52303303767404}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Banking77Classification", "type": "mteb/banking77", "config": "default", "split": "test", "revision": "44fa15921b4c889113cc5df03dd4901b49161ab7"}, "metrics": [{"type": "accuracy", "value": 83.21753246753246}, {"type": "f1", "value": 83.15394543120915}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringP2P", "type": "mteb/biorxiv-clustering-p2p", "config": "default", "split": "test", "revision": "11d0121201d1f1f280e8cc8f3d98fb9c4d9f9c55"}, "metrics": [{"type": "v_measure", "value": 34.41414219680629}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringS2S", "type": "mteb/biorxiv-clustering-s2s", "config": "default", "split": "test", "revision": "c0fab014e1bcb8d3a5e31b2088972a1e01547dc1"}, "metrics": [{"type": "v_measure", "value": 30.533275862270028}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackAndroidRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "2b9f5791698b5be7bc5e10535c8690f20043c3db"}, "metrics": [{"type": "map_at_1", "value": 30.808999999999997}, {"type": "map_at_10", "value": 40.617}, {"type": "map_at_100", "value": 41.894999999999996}, {"type": "map_at_1000", "value": 42.025}, {"type": "map_at_3", "value": 37.0}, {"type": "map_at_5", "value": 38.993}, {"type": "mrr_at_1", "value": 37.482}, {"type": "mrr_at_10", "value": 46.497}, {"type": "mrr_at_100", "value": 47.144000000000005}, {"type": "mrr_at_1000", "value": 47.189}, {"type": "mrr_at_3", "value": 43.705}, {"type": "mrr_at_5", "value": 45.193}, {"type": "ndcg_at_1", "value": 37.482}, {"type": "ndcg_at_10", "value": 46.688}, {"type": "ndcg_at_100", "value": 51.726000000000006}, {"type": "ndcg_at_1000", "value": 53.825}, {"type": "ndcg_at_3", "value": 41.242000000000004}, {"type": "ndcg_at_5", "value": 43.657000000000004}, {"type": "precision_at_1", "value": 37.482}, {"type": "precision_at_10", "value": 8.827}, {"type": "precision_at_100", "value": 1.393}, {"type": "precision_at_1000", "value": 0.186}, {"type": "precision_at_3", "value": 19.361}, {"type": "precision_at_5", "value": 14.106}, {"type": "recall_at_1", "value": 30.808999999999997}, {"type": "recall_at_10", "value": 58.47}, {"type": "recall_at_100", "value": 80.51899999999999}, {"type": "recall_at_1000", "value": 93.809}, {"type": "recall_at_3", "value": 42.462}, {"type": "recall_at_5", "value": 49.385}, {"type": "map_at_1", "value": 26.962000000000003}, {"type": "map_at_10", "value": 36.93}, {"type": "map_at_100", "value": 38.102000000000004}, {"type": "map_at_1000", "value": 38.22}, {"type": "map_at_3", "value": 34.065}, {"type": "map_at_5", "value": 35.72}, {"type": "mrr_at_1", "value": 33.567}, {"type": "mrr_at_10", "value": 42.269}, {"type": 
"mrr_at_100", "value": 42.99}, {"type": "mrr_at_1000", "value": 43.033}, {"type": "mrr_at_3", "value": 40.064}, {"type": "mrr_at_5", "value": 41.258}, {"type": "ndcg_at_1", "value": 33.567}, {"type": "ndcg_at_10", "value": 42.405}, {"type": "ndcg_at_100", "value": 46.847}, {"type": "ndcg_at_1000", "value": 48.951}, {"type": "ndcg_at_3", "value": 38.312000000000005}, {"type": "ndcg_at_5", "value": 40.242}, {"type": "precision_at_1", "value": 33.567}, {"type": "precision_at_10", "value": 8.032}, {"type": "precision_at_100", "value": 1.295}, {"type": "precision_at_1000", "value": 0.17600000000000002}, {"type": "precision_at_3", "value": 18.662}, {"type": "precision_at_5", "value": 13.299}, {"type": "recall_at_1", "value": 26.962000000000003}, {"type": "recall_at_10", "value": 52.489}, {"type": "recall_at_100", "value": 71.635}, {"type": "recall_at_1000", "value": 85.141}, {"type": "recall_at_3", "value": 40.28}, {"type": "recall_at_5", "value": 45.757}, {"type": "map_at_1", "value": 36.318}, {"type": "map_at_10", "value": 47.97}, {"type": "map_at_100", "value": 49.003}, {"type": "map_at_1000", "value": 49.065999999999995}, {"type": "map_at_3", "value": 45.031}, {"type": "map_at_5", "value": 46.633}, {"type": "mrr_at_1", "value": 41.504999999999995}, {"type": "mrr_at_10", "value": 51.431000000000004}, {"type": "mrr_at_100", "value": 52.129000000000005}, {"type": "mrr_at_1000", "value": 52.161}, {"type": "mrr_at_3", "value": 48.934}, {"type": "mrr_at_5", "value": 50.42}, {"type": "ndcg_at_1", "value": 41.504999999999995}, {"type": "ndcg_at_10", "value": 53.676}, {"type": "ndcg_at_100", "value": 57.867000000000004}, {"type": "ndcg_at_1000", "value": 59.166}, {"type": "ndcg_at_3", "value": 48.516}, {"type": "ndcg_at_5", "value": 50.983999999999995}, {"type": "precision_at_1", "value": 41.504999999999995}, {"type": "precision_at_10", "value": 8.608}, {"type": "precision_at_100", "value": 1.1560000000000001}, {"type": "precision_at_1000", "value": 0.133}, {"type": "precision_at_3", "value": 21.462999999999997}, {"type": "precision_at_5", "value": 14.721}, {"type": "recall_at_1", "value": 36.318}, {"type": "recall_at_10", "value": 67.066}, {"type": "recall_at_100", "value": 85.34}, {"type": "recall_at_1000", "value": 94.491}, {"type": "recall_at_3", "value": 53.215999999999994}, {"type": "recall_at_5", "value": 59.214}, {"type": "map_at_1", "value": 22.167}, {"type": "map_at_10", "value": 29.543999999999997}, {"type": "map_at_100", "value": 30.579}, {"type": "map_at_1000", "value": 30.669999999999998}, {"type": "map_at_3", "value": 26.982}, {"type": "map_at_5", "value": 28.474}, {"type": "mrr_at_1", "value": 24.068}, {"type": "mrr_at_10", "value": 31.237}, {"type": "mrr_at_100", "value": 32.222}, {"type": "mrr_at_1000", "value": 32.292}, {"type": "mrr_at_3", "value": 28.776000000000003}, {"type": "mrr_at_5", "value": 30.233999999999998}, {"type": "ndcg_at_1", "value": 24.068}, {"type": "ndcg_at_10", "value": 33.973}, {"type": "ndcg_at_100", "value": 39.135}, {"type": "ndcg_at_1000", "value": 41.443999999999996}, {"type": "ndcg_at_3", "value": 29.018}, {"type": "ndcg_at_5", "value": 31.558999999999997}, {"type": "precision_at_1", "value": 24.068}, {"type": "precision_at_10", "value": 5.299}, {"type": "precision_at_100", "value": 0.823}, {"type": "precision_at_1000", "value": 0.106}, {"type": "precision_at_3", "value": 12.166}, {"type": "precision_at_5", "value": 8.767999999999999}, {"type": "recall_at_1", "value": 22.167}, {"type": "recall_at_10", "value": 46.115}, {"type": "recall_at_100", "value": 
69.867}, {"type": "recall_at_1000", "value": 87.234}, {"type": "recall_at_3", "value": 32.798}, {"type": "recall_at_5", "value": 38.951}, {"type": "map_at_1", "value": 12.033000000000001}, {"type": "map_at_10", "value": 19.314}, {"type": "map_at_100", "value": 20.562}, {"type": "map_at_1000", "value": 20.695}, {"type": "map_at_3", "value": 16.946}, {"type": "map_at_5", "value": 18.076999999999998}, {"type": "mrr_at_1", "value": 14.801}, {"type": "mrr_at_10", "value": 22.74}, {"type": "mrr_at_100", "value": 23.876}, {"type": "mrr_at_1000", "value": 23.949}, {"type": "mrr_at_3", "value": 20.211000000000002}, {"type": "mrr_at_5", "value": 21.573}, {"type": "ndcg_at_1", "value": 14.801}, {"type": "ndcg_at_10", "value": 24.038}, {"type": "ndcg_at_100", "value": 30.186}, {"type": "ndcg_at_1000", "value": 33.321}, {"type": "ndcg_at_3", "value": 19.431}, {"type": "ndcg_at_5", "value": 21.34}, {"type": "precision_at_1", "value": 14.801}, {"type": "precision_at_10", "value": 4.776}, {"type": "precision_at_100", "value": 0.897}, {"type": "precision_at_1000", "value": 0.133}, {"type": "precision_at_3", "value": 9.66}, {"type": "precision_at_5", "value": 7.239}, {"type": "recall_at_1", "value": 12.033000000000001}, {"type": "recall_at_10", "value": 35.098}, {"type": "recall_at_100", "value": 62.175000000000004}, {"type": "recall_at_1000", "value": 84.17099999999999}, {"type": "recall_at_3", "value": 22.61}, {"type": "recall_at_5", "value": 27.278999999999996}, {"type": "map_at_1", "value": 26.651000000000003}, {"type": "map_at_10", "value": 36.901}, {"type": "map_at_100", "value": 38.249}, {"type": "map_at_1000", "value": 38.361000000000004}, {"type": "map_at_3", "value": 33.891}, {"type": "map_at_5", "value": 35.439}, {"type": "mrr_at_1", "value": 32.724}, {"type": "mrr_at_10", "value": 42.504}, {"type": "mrr_at_100", "value": 43.391999999999996}, {"type": "mrr_at_1000", "value": 43.436}, {"type": "mrr_at_3", "value": 39.989999999999995}, {"type": "mrr_at_5", "value": 41.347}, {"type": "ndcg_at_1", "value": 32.724}, {"type": "ndcg_at_10", "value": 43.007}, {"type": "ndcg_at_100", "value": 48.601}, {"type": "ndcg_at_1000", "value": 50.697}, {"type": "ndcg_at_3", "value": 37.99}, {"type": "ndcg_at_5", "value": 40.083999999999996}, {"type": "precision_at_1", "value": 32.724}, {"type": "precision_at_10", "value": 7.872999999999999}, {"type": "precision_at_100", "value": 1.247}, {"type": "precision_at_1000", "value": 0.16199999999999998}, {"type": "precision_at_3", "value": 18.062}, {"type": "precision_at_5", "value": 12.666}, {"type": "recall_at_1", "value": 26.651000000000003}, {"type": "recall_at_10", "value": 55.674}, {"type": "recall_at_100", "value": 78.904}, {"type": "recall_at_1000", "value": 92.55799999999999}, {"type": "recall_at_3", "value": 41.36}, {"type": "recall_at_5", "value": 46.983999999999995}, {"type": "map_at_1", "value": 22.589000000000002}, {"type": "map_at_10", "value": 32.244}, {"type": "map_at_100", "value": 33.46}, {"type": "map_at_1000", "value": 33.593}, {"type": "map_at_3", "value": 29.21}, {"type": "map_at_5", "value": 31.019999999999996}, {"type": "mrr_at_1", "value": 28.425}, {"type": "mrr_at_10", "value": 37.282}, {"type": "mrr_at_100", "value": 38.187}, {"type": "mrr_at_1000", "value": 38.248}, {"type": "mrr_at_3", "value": 34.684}, {"type": "mrr_at_5", "value": 36.123}, {"type": "ndcg_at_1", "value": 28.425}, {"type": "ndcg_at_10", "value": 37.942}, {"type": "ndcg_at_100", "value": 43.443}, {"type": "ndcg_at_1000", "value": 45.995999999999995}, {"type": "ndcg_at_3", 
"value": 32.873999999999995}, {"type": "ndcg_at_5", "value": 35.325}, {"type": "precision_at_1", "value": 28.425}, {"type": "precision_at_10", "value": 7.1}, {"type": "precision_at_100", "value": 1.166}, {"type": "precision_at_1000", "value": 0.158}, {"type": "precision_at_3", "value": 16.02}, {"type": "precision_at_5", "value": 11.644}, {"type": "recall_at_1", "value": 22.589000000000002}, {"type": "recall_at_10", "value": 50.03999999999999}, {"type": "recall_at_100", "value": 73.973}, {"type": "recall_at_1000", "value": 91.128}, {"type": "recall_at_3", "value": 35.882999999999996}, {"type": "recall_at_5", "value": 42.187999999999995}, {"type": "map_at_1", "value": 23.190833333333334}, {"type": "map_at_10", "value": 31.504916666666666}, {"type": "map_at_100", "value": 32.64908333333334}, {"type": "map_at_1000", "value": 32.77075}, {"type": "map_at_3", "value": 28.82575}, {"type": "map_at_5", "value": 30.2755}, {"type": "mrr_at_1", "value": 27.427499999999995}, {"type": "mrr_at_10", "value": 35.36483333333334}, {"type": "mrr_at_100", "value": 36.23441666666666}, {"type": "mrr_at_1000", "value": 36.297583333333336}, {"type": "mrr_at_3", "value": 32.97966666666667}, {"type": "mrr_at_5", "value": 34.294583333333335}, {"type": "ndcg_at_1", "value": 27.427499999999995}, {"type": "ndcg_at_10", "value": 36.53358333333333}, {"type": "ndcg_at_100", "value": 41.64508333333333}, {"type": "ndcg_at_1000", "value": 44.14499999999999}, {"type": "ndcg_at_3", "value": 31.88908333333333}, {"type": "ndcg_at_5", "value": 33.98433333333333}, {"type": "precision_at_1", "value": 27.427499999999995}, {"type": "precision_at_10", "value": 6.481083333333333}, {"type": "precision_at_100", "value": 1.0610833333333334}, {"type": "precision_at_1000", "value": 0.14691666666666667}, {"type": "precision_at_3", "value": 14.656749999999999}, {"type": "precision_at_5", "value": 10.493583333333332}, {"type": "recall_at_1", "value": 23.190833333333334}, {"type": "recall_at_10", "value": 47.65175}, {"type": "recall_at_100", "value": 70.41016666666667}, {"type": "recall_at_1000", "value": 87.82708333333332}, {"type": "recall_at_3", "value": 34.637583333333325}, {"type": "recall_at_5", "value": 40.05008333333333}, {"type": "map_at_1", "value": 20.409}, {"type": "map_at_10", "value": 26.794}, {"type": "map_at_100", "value": 27.682000000000002}, {"type": "map_at_1000", "value": 27.783}, {"type": "map_at_3", "value": 24.461}, {"type": "map_at_5", "value": 25.668000000000003}, {"type": "mrr_at_1", "value": 22.853}, {"type": "mrr_at_10", "value": 29.296}, {"type": "mrr_at_100", "value": 30.103}, {"type": "mrr_at_1000", "value": 30.179000000000002}, {"type": "mrr_at_3", "value": 27.173000000000002}, {"type": "mrr_at_5", "value": 28.223}, {"type": "ndcg_at_1", "value": 22.853}, {"type": "ndcg_at_10", "value": 31.007}, {"type": "ndcg_at_100", "value": 35.581}, {"type": "ndcg_at_1000", "value": 38.147}, {"type": "ndcg_at_3", "value": 26.590999999999998}, {"type": "ndcg_at_5", "value": 28.43}, {"type": "precision_at_1", "value": 22.853}, {"type": "precision_at_10", "value": 5.031}, {"type": "precision_at_100", "value": 0.7939999999999999}, {"type": "precision_at_1000", "value": 0.11}, {"type": "precision_at_3", "value": 11.401}, {"type": "precision_at_5", "value": 8.16}, {"type": "recall_at_1", "value": 20.409}, {"type": "recall_at_10", "value": 41.766}, {"type": "recall_at_100", "value": 62.964}, {"type": "recall_at_1000", "value": 81.682}, {"type": "recall_at_3", "value": 29.281000000000002}, {"type": "recall_at_5", "value": 33.83}, 
{"type": "map_at_1", "value": 14.549000000000001}, {"type": "map_at_10", "value": 20.315}, {"type": "map_at_100", "value": 21.301000000000002}, {"type": "map_at_1000", "value": 21.425}, {"type": "map_at_3", "value": 18.132}, {"type": "map_at_5", "value": 19.429}, {"type": "mrr_at_1", "value": 17.86}, {"type": "mrr_at_10", "value": 23.860999999999997}, {"type": "mrr_at_100", "value": 24.737000000000002}, {"type": "mrr_at_1000", "value": 24.82}, {"type": "mrr_at_3", "value": 21.685}, {"type": "mrr_at_5", "value": 23.008}, {"type": "ndcg_at_1", "value": 17.86}, {"type": "ndcg_at_10", "value": 24.396}, {"type": "ndcg_at_100", "value": 29.328}, {"type": "ndcg_at_1000", "value": 32.486}, {"type": "ndcg_at_3", "value": 20.375}, {"type": "ndcg_at_5", "value": 22.411}, {"type": "precision_at_1", "value": 17.86}, {"type": "precision_at_10", "value": 4.47}, {"type": "precision_at_100", "value": 0.8099999999999999}, {"type": "precision_at_1000", "value": 0.125}, {"type": "precision_at_3", "value": 9.475}, {"type": "precision_at_5", "value": 7.170999999999999}, {"type": "recall_at_1", "value": 14.549000000000001}, {"type": "recall_at_10", "value": 33.365}, {"type": "recall_at_100", "value": 55.797}, {"type": "recall_at_1000", "value": 78.632}, {"type": "recall_at_3", "value": 22.229}, {"type": "recall_at_5", "value": 27.339000000000002}, {"type": "map_at_1", "value": 23.286}, {"type": "map_at_10", "value": 30.728}, {"type": "map_at_100", "value": 31.840000000000003}, {"type": "map_at_1000", "value": 31.953}, {"type": "map_at_3", "value": 28.302}, {"type": "map_at_5", "value": 29.615000000000002}, {"type": "mrr_at_1", "value": 27.239}, {"type": "mrr_at_10", "value": 34.408}, {"type": "mrr_at_100", "value": 35.335}, {"type": "mrr_at_1000", "value": 35.405}, {"type": "mrr_at_3", "value": 32.151999999999994}, {"type": "mrr_at_5", "value": 33.355000000000004}, {"type": "ndcg_at_1", "value": 27.239}, {"type": "ndcg_at_10", "value": 35.324}, {"type": "ndcg_at_100", "value": 40.866}, {"type": "ndcg_at_1000", "value": 43.584}, {"type": "ndcg_at_3", "value": 30.898999999999997}, {"type": "ndcg_at_5", "value": 32.812999999999995}, {"type": "precision_at_1", "value": 27.239}, {"type": "precision_at_10", "value": 5.896}, {"type": "precision_at_100", "value": 0.979}, {"type": "precision_at_1000", "value": 0.133}, {"type": "precision_at_3", "value": 13.713000000000001}, {"type": "precision_at_5", "value": 9.683}, {"type": "recall_at_1", "value": 23.286}, {"type": "recall_at_10", "value": 45.711}, {"type": "recall_at_100", "value": 70.611}, {"type": "recall_at_1000", "value": 90.029}, {"type": "recall_at_3", "value": 33.615}, {"type": "recall_at_5", "value": 38.41}, {"type": "map_at_1", "value": 23.962}, {"type": "map_at_10", "value": 31.942999999999998}, {"type": "map_at_100", "value": 33.384}, {"type": "map_at_1000", "value": 33.611000000000004}, {"type": "map_at_3", "value": 29.243000000000002}, {"type": "map_at_5", "value": 30.446}, {"type": "mrr_at_1", "value": 28.458}, {"type": "mrr_at_10", "value": 36.157000000000004}, {"type": "mrr_at_100", "value": 37.092999999999996}, {"type": "mrr_at_1000", "value": 37.163000000000004}, {"type": "mrr_at_3", "value": 33.86}, {"type": "mrr_at_5", "value": 35.086}, {"type": "ndcg_at_1", "value": 28.458}, {"type": "ndcg_at_10", "value": 37.201}, {"type": "ndcg_at_100", "value": 42.591}, {"type": "ndcg_at_1000", "value": 45.539}, {"type": "ndcg_at_3", "value": 32.889}, {"type": "ndcg_at_5", "value": 34.483000000000004}, {"type": "precision_at_1", "value": 28.458}, {"type": 
"precision_at_10", "value": 7.332}, {"type": "precision_at_100", "value": 1.437}, {"type": "precision_at_1000", "value": 0.233}, {"type": "precision_at_3", "value": 15.547}, {"type": "precision_at_5", "value": 11.146}, {"type": "recall_at_1", "value": 23.962}, {"type": "recall_at_10", "value": 46.751}, {"type": "recall_at_100", "value": 71.626}, {"type": "recall_at_1000", "value": 90.93900000000001}, {"type": "recall_at_3", "value": 34.138000000000005}, {"type": "recall_at_5", "value": 38.673}, {"type": "map_at_1", "value": 18.555}, {"type": "map_at_10", "value": 24.759}, {"type": "map_at_100", "value": 25.732}, {"type": "map_at_1000", "value": 25.846999999999998}, {"type": "map_at_3", "value": 22.646}, {"type": "map_at_5", "value": 23.791999999999998}, {"type": "mrr_at_1", "value": 20.148}, {"type": "mrr_at_10", "value": 26.695999999999998}, {"type": "mrr_at_100", "value": 27.605}, {"type": "mrr_at_1000", "value": 27.695999999999998}, {"type": "mrr_at_3", "value": 24.522}, {"type": "mrr_at_5", "value": 25.715}, {"type": "ndcg_at_1", "value": 20.148}, {"type": "ndcg_at_10", "value": 28.746}, {"type": "ndcg_at_100", "value": 33.57}, {"type": "ndcg_at_1000", "value": 36.584}, {"type": "ndcg_at_3", "value": 24.532}, {"type": "ndcg_at_5", "value": 26.484}, {"type": "precision_at_1", "value": 20.148}, {"type": "precision_at_10", "value": 4.529}, {"type": "precision_at_100", "value": 0.736}, {"type": "precision_at_1000", "value": 0.108}, {"type": "precision_at_3", "value": 10.351}, {"type": "precision_at_5", "value": 7.32}, {"type": "recall_at_1", "value": 18.555}, {"type": "recall_at_10", "value": 39.275999999999996}, {"type": "recall_at_100", "value": 61.511}, {"type": "recall_at_1000", "value": 84.111}, {"type": "recall_at_3", "value": 27.778999999999996}, {"type": "recall_at_5", "value": 32.591}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ClimateFEVER", "type": "climate-fever", "config": "default", "split": "test", "revision": "392b78eb68c07badcd7c2cd8f39af108375dfcce"}, "metrics": [{"type": "map_at_1", "value": 10.366999999999999}, {"type": "map_at_10", "value": 18.953999999999997}, {"type": "map_at_100", "value": 20.674999999999997}, {"type": "map_at_1000", "value": 20.868000000000002}, {"type": "map_at_3", "value": 15.486}, {"type": "map_at_5", "value": 17.347}, {"type": "mrr_at_1", "value": 23.257}, {"type": "mrr_at_10", "value": 35.419}, {"type": "mrr_at_100", "value": 36.361}, {"type": "mrr_at_1000", "value": 36.403}, {"type": "mrr_at_3", "value": 31.747999999999998}, {"type": "mrr_at_5", "value": 34.077}, {"type": "ndcg_at_1", "value": 23.257}, {"type": "ndcg_at_10", "value": 27.11}, {"type": "ndcg_at_100", "value": 33.981}, {"type": "ndcg_at_1000", "value": 37.444}, {"type": "ndcg_at_3", "value": 21.471999999999998}, {"type": "ndcg_at_5", "value": 23.769000000000002}, {"type": "precision_at_1", "value": 23.257}, {"type": "precision_at_10", "value": 8.704}, {"type": "precision_at_100", "value": 1.606}, {"type": "precision_at_1000", "value": 0.22499999999999998}, {"type": "precision_at_3", "value": 16.287}, {"type": "precision_at_5", "value": 13.068}, {"type": "recall_at_1", "value": 10.366999999999999}, {"type": "recall_at_10", "value": 33.706}, {"type": "recall_at_100", "value": 57.375}, {"type": "recall_at_1000", "value": 76.79}, {"type": "recall_at_3", "value": 20.18}, {"type": "recall_at_5", "value": 26.215}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia", "type": "dbpedia-entity", "config": "default", "split": "test", "revision": 
"f097057d03ed98220bc7309ddb10b71a54d667d6"}, "metrics": [{"type": "map_at_1", "value": 8.246}, {"type": "map_at_10", "value": 15.979}, {"type": "map_at_100", "value": 21.025}, {"type": "map_at_1000", "value": 22.189999999999998}, {"type": "map_at_3", "value": 11.997}, {"type": "map_at_5", "value": 13.697000000000001}, {"type": "mrr_at_1", "value": 60.75000000000001}, {"type": "mrr_at_10", "value": 68.70100000000001}, {"type": "mrr_at_100", "value": 69.1}, {"type": "mrr_at_1000", "value": 69.111}, {"type": "mrr_at_3", "value": 66.583}, {"type": "mrr_at_5", "value": 67.87100000000001}, {"type": "ndcg_at_1", "value": 49.75}, {"type": "ndcg_at_10", "value": 34.702}, {"type": "ndcg_at_100", "value": 37.607}, {"type": "ndcg_at_1000", "value": 44.322}, {"type": "ndcg_at_3", "value": 39.555}, {"type": "ndcg_at_5", "value": 36.684}, {"type": "precision_at_1", "value": 60.75000000000001}, {"type": "precision_at_10", "value": 26.625}, {"type": "precision_at_100", "value": 7.969999999999999}, {"type": "precision_at_1000", "value": 1.678}, {"type": "precision_at_3", "value": 41.833}, {"type": "precision_at_5", "value": 34.5}, {"type": "recall_at_1", "value": 8.246}, {"type": "recall_at_10", "value": 20.968}, {"type": "recall_at_100", "value": 42.065000000000005}, {"type": "recall_at_1000", "value": 63.671}, {"type": "recall_at_3", "value": 13.039000000000001}, {"type": "recall_at_5", "value": 16.042}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB EmotionClassification", "type": "mteb/emotion", "config": "default", "split": "test", "revision": "829147f8f75a25f005913200eb5ed41fae320aa1"}, "metrics": [{"type": "accuracy", "value": 49.214999999999996}, {"type": "f1", "value": 44.85952451163755}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FEVER", "type": "fever", "config": "default", "split": "test", "revision": "1429cf27e393599b8b359b9b72c666f96b2525f9"}, "metrics": [{"type": "map_at_1", "value": 56.769000000000005}, {"type": "map_at_10", "value": 67.30199999999999}, {"type": "map_at_100", "value": 67.692}, {"type": "map_at_1000", "value": 67.712}, {"type": "map_at_3", "value": 65.346}, {"type": "map_at_5", "value": 66.574}, {"type": "mrr_at_1", "value": 61.370999999999995}, {"type": "mrr_at_10", "value": 71.875}, {"type": "mrr_at_100", "value": 72.195}, {"type": "mrr_at_1000", "value": 72.206}, {"type": "mrr_at_3", "value": 70.04}, {"type": "mrr_at_5", "value": 71.224}, {"type": "ndcg_at_1", "value": 61.370999999999995}, {"type": "ndcg_at_10", "value": 72.731}, {"type": "ndcg_at_100", "value": 74.468}, {"type": "ndcg_at_1000", "value": 74.91600000000001}, {"type": "ndcg_at_3", "value": 69.077}, {"type": "ndcg_at_5", "value": 71.111}, {"type": "precision_at_1", "value": 61.370999999999995}, {"type": "precision_at_10", "value": 9.325999999999999}, {"type": "precision_at_100", "value": 1.03}, {"type": "precision_at_1000", "value": 0.108}, {"type": "precision_at_3", "value": 27.303}, {"type": "precision_at_5", "value": 17.525}, {"type": "recall_at_1", "value": 56.769000000000005}, {"type": "recall_at_10", "value": 85.06}, {"type": "recall_at_100", "value": 92.767}, {"type": "recall_at_1000", "value": 95.933}, {"type": "recall_at_3", "value": 75.131}, {"type": "recall_at_5", "value": 80.17}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA2018", "type": "fiqa", "config": "default", "split": "test", "revision": "41b686a7f28c59bcaaa5791efd47c67c8ebe28be"}, "metrics": [{"type": "map_at_1", "value": 15.753}, {"type": "map_at_10", "value": 25.875999999999998}, 
{"type": "map_at_100", "value": 27.415}, {"type": "map_at_1000", "value": 27.590999999999998}, {"type": "map_at_3", "value": 22.17}, {"type": "map_at_5", "value": 24.236}, {"type": "mrr_at_1", "value": 31.019000000000002}, {"type": "mrr_at_10", "value": 39.977000000000004}, {"type": "mrr_at_100", "value": 40.788999999999994}, {"type": "mrr_at_1000", "value": 40.832}, {"type": "mrr_at_3", "value": 37.088}, {"type": "mrr_at_5", "value": 38.655}, {"type": "ndcg_at_1", "value": 31.019000000000002}, {"type": "ndcg_at_10", "value": 33.286}, {"type": "ndcg_at_100", "value": 39.528999999999996}, {"type": "ndcg_at_1000", "value": 42.934}, {"type": "ndcg_at_3", "value": 29.29}, {"type": "ndcg_at_5", "value": 30.615}, {"type": "precision_at_1", "value": 31.019000000000002}, {"type": "precision_at_10", "value": 9.383}, {"type": "precision_at_100", "value": 1.6019999999999999}, {"type": "precision_at_1000", "value": 0.22200000000000003}, {"type": "precision_at_3", "value": 19.753}, {"type": "precision_at_5", "value": 14.815000000000001}, {"type": "recall_at_1", "value": 15.753}, {"type": "recall_at_10", "value": 40.896}, {"type": "recall_at_100", "value": 64.443}, {"type": "recall_at_1000", "value": 85.218}, {"type": "recall_at_3", "value": 26.526}, {"type": "recall_at_5", "value": 32.452999999999996}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA", "type": "hotpotqa", "config": "default", "split": "test", "revision": "766870b35a1b9ca65e67a0d1913899973551fc6c"}, "metrics": [{"type": "map_at_1", "value": 32.153999999999996}, {"type": "map_at_10", "value": 43.651}, {"type": "map_at_100", "value": 44.41}, {"type": "map_at_1000", "value": 44.487}, {"type": "map_at_3", "value": 41.239}, {"type": "map_at_5", "value": 42.659000000000006}, {"type": "mrr_at_1", "value": 64.30799999999999}, {"type": "mrr_at_10", "value": 71.22500000000001}, {"type": "mrr_at_100", "value": 71.57}, {"type": "mrr_at_1000", "value": 71.59100000000001}, {"type": "mrr_at_3", "value": 69.95}, {"type": "mrr_at_5", "value": 70.738}, {"type": "ndcg_at_1", "value": 64.30799999999999}, {"type": "ndcg_at_10", "value": 52.835}, {"type": "ndcg_at_100", "value": 55.840999999999994}, {"type": "ndcg_at_1000", "value": 57.484}, {"type": "ndcg_at_3", "value": 49.014}, {"type": "ndcg_at_5", "value": 51.01599999999999}, {"type": "precision_at_1", "value": 64.30799999999999}, {"type": "precision_at_10", "value": 10.77}, {"type": "precision_at_100", "value": 1.315}, {"type": "precision_at_1000", "value": 0.153}, {"type": "precision_at_3", "value": 30.223}, {"type": "precision_at_5", "value": 19.716}, {"type": "recall_at_1", "value": 32.153999999999996}, {"type": "recall_at_10", "value": 53.849000000000004}, {"type": "recall_at_100", "value": 65.75999999999999}, {"type": "recall_at_1000", "value": 76.705}, {"type": "recall_at_3", "value": 45.334}, {"type": "recall_at_5", "value": 49.291000000000004}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ImdbClassification", "type": "mteb/imdb", "config": "default", "split": "test", "revision": "8d743909f834c38949e8323a8a6ce8721ea6c7f4"}, "metrics": [{"type": "accuracy", "value": 63.5316}, {"type": "ap", "value": 58.90084300359825}, {"type": "f1", "value": 63.35727889030892}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MSMARCO", "type": "msmarco", "config": "default", "split": "validation", "revision": "e6838a846e2408f22cf5cc337ebc83e0bcf77849"}, "metrics": [{"type": "map_at_1", "value": 20.566000000000003}, {"type": "map_at_10", "value": 32.229}, 
{"type": "map_at_100", "value": 33.445}, {"type": "map_at_1000", "value": 33.501}, {"type": "map_at_3", "value": 28.504}, {"type": "map_at_5", "value": 30.681000000000004}, {"type": "mrr_at_1", "value": 21.218}, {"type": "mrr_at_10", "value": 32.816}, {"type": "mrr_at_100", "value": 33.986}, {"type": "mrr_at_1000", "value": 34.035}, {"type": "mrr_at_3", "value": 29.15}, {"type": "mrr_at_5", "value": 31.290000000000003}, {"type": "ndcg_at_1", "value": 21.218}, {"type": "ndcg_at_10", "value": 38.832}, {"type": "ndcg_at_100", "value": 44.743}, {"type": "ndcg_at_1000", "value": 46.138}, {"type": "ndcg_at_3", "value": 31.232}, {"type": "ndcg_at_5", "value": 35.099999999999994}, {"type": "precision_at_1", "value": 21.218}, {"type": "precision_at_10", "value": 6.186}, {"type": "precision_at_100", "value": 0.914}, {"type": "precision_at_1000", "value": 0.10300000000000001}, {"type": "precision_at_3", "value": 13.314}, {"type": "precision_at_5", "value": 9.943}, {"type": "recall_at_1", "value": 20.566000000000003}, {"type": "recall_at_10", "value": 59.192}, {"type": "recall_at_100", "value": 86.626}, {"type": "recall_at_1000", "value": 97.283}, {"type": "recall_at_3", "value": 38.492}, {"type": "recall_at_5", "value": 47.760000000000005}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "mteb/mtop_domain", "config": "en", "split": "test", "revision": "a7e2a951126a26fc8c6a69f835f33a346ba259e3"}, "metrics": [{"type": "accuracy", "value": 92.56269949840402}, {"type": "f1", "value": 92.1020975473988}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": "mteb/mtop_intent", "config": "en", "split": "test", "revision": "6299947a7777084cc2d4b64235bf7190381ce755"}, "metrics": [{"type": "accuracy", "value": 71.8467852257182}, {"type": "f1", "value": 53.652719348592015}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": "mteb/amazon_massive_intent", "config": "en", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 69.00806993947546}, {"type": "f1", "value": 67.41429618885515}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "mteb/amazon_massive_scenario", "config": "en", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 75.90114324142569}, {"type": "f1", "value": 76.25183590651454}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P", "type": "mteb/medrxiv-clustering-p2p", "config": "default", "split": "test", "revision": "dcefc037ef84348e49b0d29109e891c01067226b"}, "metrics": [{"type": "v_measure", "value": 31.350109978273395}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "3cd0e71dfbe09d4de0f9e5ecba43e7ce280959dc"}, "metrics": [{"type": "v_measure", "value": 28.768923695767327}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "3bdac13927fdc888b903db93b2ffdbd90b295a69"}, "metrics": [{"type": "map", "value": 31.716396735210754}, {"type": "mrr", "value": 32.88970538547634}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus", "type": "nfcorpus", "config": "default", "split": 
"test", "revision": "7eb63cc0c1eb59324d709ebed25fcab851fa7610"}, "metrics": [{"type": "map_at_1", "value": 5.604}, {"type": "map_at_10", "value": 12.379999999999999}, {"type": "map_at_100", "value": 15.791}, {"type": "map_at_1000", "value": 17.327}, {"type": "map_at_3", "value": 9.15}, {"type": "map_at_5", "value": 10.599}, {"type": "mrr_at_1", "value": 45.201}, {"type": "mrr_at_10", "value": 53.374}, {"type": "mrr_at_100", "value": 54.089}, {"type": "mrr_at_1000", "value": 54.123}, {"type": "mrr_at_3", "value": 51.44499999999999}, {"type": "mrr_at_5", "value": 52.59}, {"type": "ndcg_at_1", "value": 42.879}, {"type": "ndcg_at_10", "value": 33.891}, {"type": "ndcg_at_100", "value": 31.391999999999996}, {"type": "ndcg_at_1000", "value": 40.36}, {"type": "ndcg_at_3", "value": 39.076}, {"type": "ndcg_at_5", "value": 37.047000000000004}, {"type": "precision_at_1", "value": 44.582}, {"type": "precision_at_10", "value": 25.294}, {"type": "precision_at_100", "value": 8.285}, {"type": "precision_at_1000", "value": 2.1479999999999997}, {"type": "precision_at_3", "value": 36.120000000000005}, {"type": "precision_at_5", "value": 31.95}, {"type": "recall_at_1", "value": 5.604}, {"type": "recall_at_10", "value": 16.239}, {"type": "recall_at_100", "value": 32.16}, {"type": "recall_at_1000", "value": 64.513}, {"type": "recall_at_3", "value": 10.406}, {"type": "recall_at_5", "value": 12.684999999999999}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ", "type": "nq", "config": "default", "split": "test", "revision": "6062aefc120bfe8ece5897809fb2e53bfe0d128c"}, "metrics": [{"type": "map_at_1", "value": 25.881}, {"type": "map_at_10", "value": 39.501}, {"type": "map_at_100", "value": 40.615}, {"type": "map_at_1000", "value": 40.661}, {"type": "map_at_3", "value": 35.559000000000005}, {"type": "map_at_5", "value": 37.773}, {"type": "mrr_at_1", "value": 29.229}, {"type": "mrr_at_10", "value": 41.955999999999996}, {"type": "mrr_at_100", "value": 42.86}, {"type": "mrr_at_1000", "value": 42.893}, {"type": "mrr_at_3", "value": 38.562000000000005}, {"type": "mrr_at_5", "value": 40.542}, {"type": "ndcg_at_1", "value": 29.2}, {"type": "ndcg_at_10", "value": 46.703}, {"type": "ndcg_at_100", "value": 51.644}, {"type": "ndcg_at_1000", "value": 52.771}, {"type": "ndcg_at_3", "value": 39.141999999999996}, {"type": "ndcg_at_5", "value": 42.892}, {"type": "precision_at_1", "value": 29.2}, {"type": "precision_at_10", "value": 7.920000000000001}, {"type": "precision_at_100", "value": 1.0659999999999998}, {"type": "precision_at_1000", "value": 0.117}, {"type": "precision_at_3", "value": 18.105}, {"type": "precision_at_5", "value": 13.036}, {"type": "recall_at_1", "value": 25.881}, {"type": "recall_at_10", "value": 66.266}, {"type": "recall_at_100", "value": 88.116}, {"type": "recall_at_1000", "value": 96.58200000000001}, {"type": "recall_at_3", "value": 46.526}, {"type": "recall_at_5", "value": 55.154}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval", "type": "quora", "config": "default", "split": "test", "revision": "6205996560df11e3a3da9ab4f926788fc30a7db4"}, "metrics": [{"type": "map_at_1", "value": 67.553}, {"type": "map_at_10", "value": 81.34}, {"type": "map_at_100", "value": 82.002}, {"type": "map_at_1000", "value": 82.027}, {"type": "map_at_3", "value": 78.281}, {"type": "map_at_5", "value": 80.149}, {"type": "mrr_at_1", "value": 77.72}, {"type": "mrr_at_10", "value": 84.733}, {"type": "mrr_at_100", "value": 84.878}, {"type": "mrr_at_1000", "value": 84.879}, {"type": "mrr_at_3", 
"value": 83.587}, {"type": "mrr_at_5", "value": 84.32600000000001}, {"type": "ndcg_at_1", "value": 77.75}, {"type": "ndcg_at_10", "value": 85.603}, {"type": "ndcg_at_100", "value": 87.069}, {"type": "ndcg_at_1000", "value": 87.25}, {"type": "ndcg_at_3", "value": 82.303}, {"type": "ndcg_at_5", "value": 84.03699999999999}, {"type": "precision_at_1", "value": 77.75}, {"type": "precision_at_10", "value": 13.04}, {"type": "precision_at_100", "value": 1.5070000000000001}, {"type": "precision_at_1000", "value": 0.156}, {"type": "precision_at_3", "value": 35.903}, {"type": "precision_at_5", "value": 23.738}, {"type": "recall_at_1", "value": 67.553}, {"type": "recall_at_10", "value": 93.903}, {"type": "recall_at_100", "value": 99.062}, {"type": "recall_at_1000", "value": 99.935}, {"type": "recall_at_3", "value": 84.58099999999999}, {"type": "recall_at_5", "value": 89.316}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering", "type": "mteb/reddit-clustering", "config": "default", "split": "test", "revision": "b2805658ae38990172679479369a78b86de8c390"}, "metrics": [{"type": "v_measure", "value": 46.46887711230235}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P", "type": "mteb/reddit-clustering-p2p", "config": "default", "split": "test", "revision": "385e3cb46b4cfa89021f56c4380204149d0efe33"}, "metrics": [{"type": "v_measure", "value": 54.166876298246926}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS", "type": "scidocs", "config": "default", "split": "test", "revision": "5c59ef3e437a0a9651c8fe6fde943e7dce59fba5"}, "metrics": [{"type": "map_at_1", "value": 4.053}, {"type": "map_at_10", "value": 9.693999999999999}, {"type": "map_at_100", "value": 11.387}, {"type": "map_at_1000", "value": 11.654}, {"type": "map_at_3", "value": 7.053}, {"type": "map_at_5", "value": 8.439}, {"type": "mrr_at_1", "value": 19.900000000000002}, {"type": "mrr_at_10", "value": 29.359}, {"type": "mrr_at_100", "value": 30.484}, {"type": "mrr_at_1000", "value": 30.553}, {"type": "mrr_at_3", "value": 26.200000000000003}, {"type": "mrr_at_5", "value": 28.115000000000002}, {"type": "ndcg_at_1", "value": 19.900000000000002}, {"type": "ndcg_at_10", "value": 16.575}, {"type": "ndcg_at_100", "value": 23.655}, {"type": "ndcg_at_1000", "value": 28.853}, {"type": "ndcg_at_3", "value": 15.848}, {"type": "ndcg_at_5", "value": 14.026}, {"type": "precision_at_1", "value": 19.900000000000002}, {"type": "precision_at_10", "value": 8.450000000000001}, {"type": "precision_at_100", "value": 1.872}, {"type": "precision_at_1000", "value": 0.313}, {"type": "precision_at_3", "value": 14.667}, {"type": "precision_at_5", "value": 12.32}, {"type": "recall_at_1", "value": 4.053}, {"type": "recall_at_10", "value": 17.169999999999998}, {"type": "recall_at_100", "value": 38.025}, {"type": "recall_at_1000", "value": 63.571999999999996}, {"type": "recall_at_3", "value": 8.903}, {"type": "recall_at_5", "value": 12.477}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "20a6d6f312dd54037fe07a32d58e5e168867909d"}, "metrics": [{"type": "cos_sim_pearson", "value": 77.7548748519677}, {"type": "cos_sim_spearman", "value": 68.19926431966059}, {"type": "euclidean_pearson", "value": 71.69016204991725}, {"type": "euclidean_spearman", "value": 66.98099673026834}, {"type": "manhattan_pearson", "value": 71.62994072488664}, {"type": "manhattan_spearman", "value": 67.03435950744577}]}, {"task": {"type": 
"STS"}, "dataset": {"name": "MTEB STS12", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "fdf84275bb8ce4b49c971d02e84dd1abc677a50f"}, "metrics": [{"type": "cos_sim_pearson", "value": 75.91051402657887}, {"type": "cos_sim_spearman", "value": 66.99390786191645}, {"type": "euclidean_pearson", "value": 71.54128036454578}, {"type": "euclidean_spearman", "value": 69.25605675649068}, {"type": "manhattan_pearson", "value": 71.60981030780171}, {"type": "manhattan_spearman", "value": 69.27513670128046}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", "type": "mteb/sts13-sts", "config": "default", "split": "test", "revision": "1591bfcbe8c69d4bf7fe2a16e2451017832cafb9"}, "metrics": [{"type": "cos_sim_pearson", "value": 77.23835466417793}, {"type": "cos_sim_spearman", "value": 77.57623085766706}, {"type": "euclidean_pearson", "value": 77.5090992200725}, {"type": "euclidean_spearman", "value": 77.88601688144924}, {"type": "manhattan_pearson", "value": 77.39045060647423}, {"type": "manhattan_spearman", "value": 77.77552718279098}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "e2125984e7df8b7871f6ae9949cf6b6795e7c54b"}, "metrics": [{"type": "cos_sim_pearson", "value": 77.91692485139602}, {"type": "cos_sim_spearman", "value": 72.78258293483495}, {"type": "euclidean_pearson", "value": 74.64773017077789}, {"type": "euclidean_spearman", "value": 71.81662299104619}, {"type": "manhattan_pearson", "value": 74.71043337995533}, {"type": "manhattan_spearman", "value": 71.83960860845646}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "1cd7298cac12a96a373b6a2f18738bb3e739a9b6"}, "metrics": [{"type": "cos_sim_pearson", "value": 82.13422113617578}, {"type": "cos_sim_spearman", "value": 82.61707296911949}, {"type": "euclidean_pearson", "value": 81.42487480400861}, {"type": "euclidean_spearman", "value": 82.17970991273835}, {"type": "manhattan_pearson", "value": 81.41985055477845}, {"type": "manhattan_spearman", "value": 82.15823204362937}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": "360a0b2dff98700d09e634a01e1cc1624d3e42cd"}, "metrics": [{"type": "cos_sim_pearson", "value": 79.07989542843826}, {"type": "cos_sim_spearman", "value": 80.09839524406284}, {"type": "euclidean_pearson", "value": 76.43186028364195}, {"type": "euclidean_spearman", "value": 76.76720323266471}, {"type": "manhattan_pearson", "value": 76.4674747409161}, {"type": "manhattan_spearman", "value": 76.81797407068667}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "9fc37e8c632af1c87a3d23e685d49552a02582a0"}, "metrics": [{"type": "cos_sim_pearson", "value": 87.0420983224933}, {"type": "cos_sim_spearman", "value": 87.25017540413702}, {"type": "euclidean_pearson", "value": 84.56384596473421}, {"type": "euclidean_spearman", "value": 84.72557417564886}, {"type": "manhattan_pearson", "value": 84.7329954474549}, {"type": "manhattan_spearman", "value": 84.75071371008909}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 68.47031320016424}, 
{"type": "cos_sim_spearman", "value": 68.7486910762485}, {"type": "euclidean_pearson", "value": 71.30330985913915}, {"type": "euclidean_spearman", "value": 71.59666258520735}, {"type": "manhattan_pearson", "value": 71.4423884279027}, {"type": "manhattan_spearman", "value": 71.67460706861044}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmark", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "8913289635987208e6e7c72789e4be2fe94b6abd"}, "metrics": [{"type": "cos_sim_pearson", "value": 80.79514366062675}, {"type": "cos_sim_spearman", "value": 79.20585637461048}, {"type": "euclidean_pearson", "value": 78.6591557395699}, {"type": "euclidean_spearman", "value": 77.86455794285718}, {"type": "manhattan_pearson", "value": 78.67754806486865}, {"type": "manhattan_spearman", "value": 77.88178687200732}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": "56a6d0140cf6356659e2a7c1413286a774468d44"}, "metrics": [{"type": "map", "value": 77.71580844366375}, {"type": "mrr", "value": 93.04215845882513}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact", "type": "scifact", "config": "default", "split": "test", "revision": "a75ae049398addde9b70f6b268875f5cbce99089"}, "metrics": [{"type": "map_at_1", "value": 56.39999999999999}, {"type": "map_at_10", "value": 65.701}, {"type": "map_at_100", "value": 66.32000000000001}, {"type": "map_at_1000", "value": 66.34100000000001}, {"type": "map_at_3", "value": 62.641999999999996}, {"type": "map_at_5", "value": 64.342}, {"type": "mrr_at_1", "value": 58.667}, {"type": "mrr_at_10", "value": 66.45299999999999}, {"type": "mrr_at_100", "value": 66.967}, {"type": "mrr_at_1000", "value": 66.988}, {"type": "mrr_at_3", "value": 64.11099999999999}, {"type": "mrr_at_5", "value": 65.411}, {"type": "ndcg_at_1", "value": 58.667}, {"type": "ndcg_at_10", "value": 70.165}, {"type": "ndcg_at_100", "value": 72.938}, {"type": "ndcg_at_1000", "value": 73.456}, {"type": "ndcg_at_3", "value": 64.79}, {"type": "ndcg_at_5", "value": 67.28}, {"type": "precision_at_1", "value": 58.667}, {"type": "precision_at_10", "value": 9.4}, {"type": "precision_at_100", "value": 1.087}, {"type": "precision_at_1000", "value": 0.11299999999999999}, {"type": "precision_at_3", "value": 24.889}, {"type": "precision_at_5", "value": 16.667}, {"type": "recall_at_1", "value": 56.39999999999999}, {"type": "recall_at_10", "value": 83.122}, {"type": "recall_at_100", "value": 95.667}, {"type": "recall_at_1000", "value": 99.667}, {"type": "recall_at_3", "value": 68.378}, {"type": "recall_at_5", "value": 74.68299999999999}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "5a8256d0dff9c4bd3be3ba3e67e4e70173f802ea"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.76831683168317}, {"type": "cos_sim_ap", "value": 93.47124923047998}, {"type": "cos_sim_f1", "value": 88.06122448979592}, {"type": "cos_sim_precision", "value": 89.89583333333333}, {"type": "cos_sim_recall", "value": 86.3}, {"type": "dot_accuracy", "value": 99.57326732673268}, {"type": "dot_ap", "value": 84.06577868167207}, {"type": "dot_f1", "value": 77.82629791363416}, {"type": "dot_precision", "value": 75.58906691800189}, {"type": "dot_recall", "value": 80.2}, {"type": "euclidean_accuracy", "value": 99.74257425742574}, {"type": 
"euclidean_ap", "value": 92.1904681653555}, {"type": "euclidean_f1", "value": 86.74821610601427}, {"type": "euclidean_precision", "value": 88.46153846153845}, {"type": "euclidean_recall", "value": 85.1}, {"type": "manhattan_accuracy", "value": 99.74554455445545}, {"type": "manhattan_ap", "value": 92.4337790809948}, {"type": "manhattan_f1", "value": 86.86765457332653}, {"type": "manhattan_precision", "value": 88.81922675026124}, {"type": "manhattan_recall", "value": 85.0}, {"type": "max_accuracy", "value": 99.76831683168317}, {"type": "max_ap", "value": 93.47124923047998}, {"type": "max_f1", "value": 88.06122448979592}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClustering", "type": "mteb/stackexchange-clustering", "config": "default", "split": "test", "revision": "70a89468f6dccacc6aa2b12a6eac54e74328f235"}, "metrics": [{"type": "v_measure", "value": 59.194098673976484}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P", "type": "mteb/stackexchange-clustering-p2p", "config": "default", "split": "test", "revision": "d88009ab563dd0b16cfaf4436abaf97fa3550cf0"}, "metrics": [{"type": "v_measure", "value": 32.5744032578115}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions", "type": "mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "ef807ea29a75ec4f91b50fd4191cb4ee4589a9f9"}, "metrics": [{"type": "map", "value": 49.61186384154483}, {"type": "mrr", "value": 50.55424253034547}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEval", "type": "mteb/summeval", "config": "default", "split": "test", "revision": "8753c2788d36c01fc6f05d03fe3f7268d63f9122"}, "metrics": [{"type": "cos_sim_pearson", "value": 30.027210161713946}, {"type": "cos_sim_spearman", "value": 31.030178065751734}, {"type": "dot_pearson", "value": 30.09179785685587}, {"type": "dot_spearman", "value": 30.408303252207812}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID", "type": "trec-covid", "config": "default", "split": "test", "revision": "2c8041b2c07a79b6f7ba8fe6acc72e5d9f92d217"}, "metrics": [{"type": "map_at_1", "value": 0.22300000000000003}, {"type": "map_at_10", "value": 1.762}, {"type": "map_at_100", "value": 9.984}, {"type": "map_at_1000", "value": 24.265}, {"type": "map_at_3", "value": 0.631}, {"type": "map_at_5", "value": 0.9950000000000001}, {"type": "mrr_at_1", "value": 88.0}, {"type": "mrr_at_10", "value": 92.833}, {"type": "mrr_at_100", "value": 92.833}, {"type": "mrr_at_1000", "value": 92.833}, {"type": "mrr_at_3", "value": 92.333}, {"type": "mrr_at_5", "value": 92.833}, {"type": "ndcg_at_1", "value": 83.0}, {"type": "ndcg_at_10", "value": 75.17}, {"type": "ndcg_at_100", "value": 55.432}, {"type": "ndcg_at_1000", "value": 49.482}, {"type": "ndcg_at_3", "value": 82.184}, {"type": "ndcg_at_5", "value": 79.712}, {"type": "precision_at_1", "value": 88.0}, {"type": "precision_at_10", "value": 78.60000000000001}, {"type": "precision_at_100", "value": 56.56}, {"type": "precision_at_1000", "value": 22.334}, {"type": "precision_at_3", "value": 86.667}, {"type": "precision_at_5", "value": 83.6}, {"type": "recall_at_1", "value": 0.22300000000000003}, {"type": "recall_at_10", "value": 1.9879999999999998}, {"type": "recall_at_100", "value": 13.300999999999998}, {"type": "recall_at_1000", "value": 46.587}, {"type": "recall_at_3", "value": 0.6629999999999999}, {"type": "recall_at_5", "value": 1.079}]}, {"task": {"type": "Retrieval"}, "dataset": 
{"name": "MTEB Touche2020", "type": "webis-touche2020", "config": "default", "split": "test", "revision": "527b7d77e16e343303e68cb6af11d6e18b9f7b3b"}, "metrics": [{"type": "map_at_1", "value": 3.047}, {"type": "map_at_10", "value": 8.792}, {"type": "map_at_100", "value": 14.631}, {"type": "map_at_1000", "value": 16.127}, {"type": "map_at_3", "value": 4.673}, {"type": "map_at_5", "value": 5.897}, {"type": "mrr_at_1", "value": 38.775999999999996}, {"type": "mrr_at_10", "value": 49.271}, {"type": "mrr_at_100", "value": 50.181}, {"type": "mrr_at_1000", "value": 50.2}, {"type": "mrr_at_3", "value": 44.558}, {"type": "mrr_at_5", "value": 47.925000000000004}, {"type": "ndcg_at_1", "value": 35.714}, {"type": "ndcg_at_10", "value": 23.44}, {"type": "ndcg_at_100", "value": 35.345}, {"type": "ndcg_at_1000", "value": 46.495}, {"type": "ndcg_at_3", "value": 26.146}, {"type": "ndcg_at_5", "value": 24.878}, {"type": "precision_at_1", "value": 38.775999999999996}, {"type": "precision_at_10", "value": 20.816000000000003}, {"type": "precision_at_100", "value": 7.428999999999999}, {"type": "precision_at_1000", "value": 1.494}, {"type": "precision_at_3", "value": 25.85}, {"type": "precision_at_5", "value": 24.082}, {"type": "recall_at_1", "value": 3.047}, {"type": "recall_at_10", "value": 14.975}, {"type": "recall_at_100", "value": 45.943}, {"type": "recall_at_1000", "value": 80.31099999999999}, {"type": "recall_at_3", "value": 5.478000000000001}, {"type": "recall_at_5", "value": 8.294}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "edfaf9da55d3dd50d43143d90c1ac476895ae6de"}, "metrics": [{"type": "accuracy", "value": 68.84080000000002}, {"type": "ap", "value": 13.135219251019848}, {"type": "f1", "value": 52.849999421995506}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": "62146448f05be9e52a36b8ee9936447ea787eede"}, "metrics": [{"type": "accuracy", "value": 56.68647425014149}, {"type": "f1", "value": 56.97981427365949}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "091a54f9a36281ce7d6590ec8c75dd485e7e01d4"}, "metrics": [{"type": "v_measure", "value": 40.8911707239219}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 83.04226023722954}, {"type": "cos_sim_ap", "value": 63.681339908301325}, {"type": "cos_sim_f1", "value": 60.349184470480125}, {"type": "cos_sim_precision", "value": 53.437754271765655}, {"type": "cos_sim_recall", "value": 69.31398416886545}, {"type": "dot_accuracy", "value": 81.46271681468677}, {"type": "dot_ap", "value": 57.78072296265885}, {"type": "dot_f1", "value": 56.28769265132901}, {"type": "dot_precision", "value": 48.7993803253292}, {"type": "dot_recall", "value": 66.49076517150397}, {"type": "euclidean_accuracy", "value": 82.16606067830959}, {"type": "euclidean_ap", "value": 59.974530371203514}, {"type": "euclidean_f1", "value": 56.856023506366306}, {"type": "euclidean_precision", "value": 53.037916857012334}, {"type": 
"euclidean_recall", "value": 61.2664907651715}, {"type": "manhattan_accuracy", "value": 82.16606067830959}, {"type": "manhattan_ap", "value": 59.98962379571767}, {"type": "manhattan_f1", "value": 56.98153158451947}, {"type": "manhattan_precision", "value": 51.41158989598811}, {"type": "manhattan_recall", "value": 63.90501319261214}, {"type": "max_accuracy", "value": 83.04226023722954}, {"type": "max_ap", "value": 63.681339908301325}, {"type": "max_f1", "value": 60.349184470480125}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": [{"type": "cos_sim_accuracy", "value": 88.56871191834517}, {"type": "cos_sim_ap", "value": 84.80240716354544}, {"type": "cos_sim_f1", "value": 77.07765285922385}, {"type": "cos_sim_precision", "value": 74.84947406601378}, {"type": "cos_sim_recall", "value": 79.44256236526024}, {"type": "dot_accuracy", "value": 86.00923662048356}, {"type": "dot_ap", "value": 78.6556459012073}, {"type": "dot_f1", "value": 72.7583749109052}, {"type": "dot_precision", "value": 67.72823779193206}, {"type": "dot_recall", "value": 78.59562673236834}, {"type": "euclidean_accuracy", "value": 87.84103698529127}, {"type": "euclidean_ap", "value": 83.50424424952834}, {"type": "euclidean_f1", "value": 75.74496544549307}, {"type": "euclidean_precision", "value": 73.19402556369381}, {"type": "euclidean_recall", "value": 78.48013550970127}, {"type": "manhattan_accuracy", "value": 87.9225365777933}, {"type": "manhattan_ap", "value": 83.49479248597825}, {"type": "manhattan_f1", "value": 75.67748162447101}, {"type": "manhattan_precision", "value": 73.06810035842294}, {"type": "manhattan_recall", "value": 78.48013550970127}, {"type": "max_accuracy", "value": 88.56871191834517}, {"type": "max_ap", "value": 84.80240716354544}, {"type": "max_f1", "value": 77.07765285922385}]}]}]} | Muennighoff/SGPT-2.7B-weightedmean-msmarco-specb-bitfit | null | [
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"mteb",
"arxiv:2202.08904",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #mteb #arxiv-2202.08904 #model-index #endpoints_compatible #region-us
|
# SGPT-2.7B-weightedmean-msmarco-specb-bitfit
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to the eval folder or our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 124796 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SGPT-2.7B-weightedmean-msmarco-specb-bitfit",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to the eval folder or our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 124796 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #mteb #arxiv-2202.08904 #model-index #endpoints_compatible #region-us \n",
"# SGPT-2.7B-weightedmean-msmarco-specb-bitfit",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to the eval folder or our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 124796 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# SGPT-2.7B-weightedmean-nli-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
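As a quick orientation, the checkpoint can be loaded through the standard sentence-transformers API; the sketch below assumes that API and is only an illustration, with the linked codebase remaining the reference for intended usage.

```python
# Minimal illustration, assuming the standard sentence-transformers API;
# see the linked codebase for the reference usage.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Muennighoff/SGPT-2.7B-weightedmean-nli-bitfit")

sentences = ["A man is eating food.", "A man is eating a piece of bread."]
embeddings = model.encode(sentences)  # one 2560-dimensional vector per sentence
print(util.cos_sim(embeddings[0], embeddings[1]))  # cosine similarity of the pair
```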
## Evaluation Results
For evaluation results, refer to the eval folder of our codebase or to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 70456 with parameters:
```
{'batch_size': 8}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 7045,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 0.0002
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 7046,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 2560, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
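For readers unfamiliar with `pooling_mode_weightedmean_tokens`: it denotes position-weighted mean pooling, where later tokens receive linearly increasing weight before averaging. The snippet below sketches that idea under the assumption of linear position weights; it is not the library's exact implementation.

```python
# Sketch of position-weighted mean pooling (later tokens weighted more).
# Illustrative only; not the exact sentence-transformers implementation.
import torch

def weighted_mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (batch, seq_len, hidden); attention_mask: (batch, seq_len)
    weights = torch.arange(1, token_embeddings.size(1) + 1,
                           device=token_embeddings.device,
                           dtype=token_embeddings.dtype)
    weights = weights[None, :, None] * attention_mask[:, :, None]  # zero out padding positions
    return (token_embeddings * weights).sum(dim=1) / weights.sum(dim=1)

# Tiny shape check
pooled = weighted_mean_pool(torch.randn(2, 5, 8), torch.tensor([[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]]))
print(pooled.shape)  # torch.Size([2, 8])
```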
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | Muennighoff/SGPT-2.7B-weightedmean-nli-bitfit | null | [
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"arxiv:2202.08904",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us
|
# SGPT-2.7B-weightedmean-nli-bitfit
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to the eval folder or our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 70456 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SGPT-2.7B-weightedmean-nli-bitfit",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to the eval folder or our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 70456 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #gpt_neo #feature-extraction #sentence-similarity #arxiv-2202.08904 #endpoints_compatible #region-us \n",
"# SGPT-2.7B-weightedmean-nli-bitfit",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to the eval folder or our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 70456 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# SGPT-5.8B-weightedmean-msmarco-specb-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
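Note that the "specb" suffix indicates that queries and documents are marked with special bracket tokens before encoding; the linked codebase shows the reference handling. The sketch below deliberately omits that step and only assumes the generic sentence-transformers API, so treat it as an illustration rather than the intended usage.

```python
# Illustration only: assumes the generic sentence-transformers API and omits
# the query/document special-bracket handling that the "specb" variant expects
# (see the linked codebase for the reference usage).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Muennighoff/SGPT-5.8B-weightedmean-msmarco-specb-bitfit")

query = "what is the capital of france"
docs = [
    "Paris is the capital and most populous city of France.",
    "Berlin is the capital of Germany.",
]
scores = util.cos_sim(model.encode(query), model.encode(docs))
print(scores)  # the relevant passage should receive the higher score
```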
## Evaluation Results
For evaluation results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 249592 with parameters:
```
{'batch_size': 2, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
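For context, `MultipleNegativesRankingLoss` scores each query against every passage in the batch (in-batch negatives), scales the cosine similarities by the factor above, and applies a cross-entropy whose targets are the matching pairs. A rough, self-contained sketch of that computation (illustrative, not the library's code):

```python
# Rough sketch of MultipleNegativesRankingLoss with cosine similarity and
# scale=20: scaled in-batch similarity matrix + cross-entropy on the diagonal.
import torch
import torch.nn.functional as F

def mnr_loss(query_emb: torch.Tensor, passage_emb: torch.Tensor, scale: float = 20.0) -> torch.Tensor:
    # query_emb, passage_emb: (batch, dim); row i of passage_emb is the positive for query i
    q = F.normalize(query_emb, dim=-1)
    p = F.normalize(passage_emb, dim=-1)
    scores = q @ p.T * scale                         # (batch, batch) cosine similarities
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)           # diagonal entries are the positives

print(mnr_loss(torch.randn(4, 16), torch.randn(4, 16)))
```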
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 5e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTJModel
(1): Pooling({'word_embedding_dimension': 4096, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb"], "pipeline_tag": "sentence-similarity", "model-index": [{"name": "SGPT-5.8B-weightedmean-msmarco-specb-bitfit", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "mteb/amazon_counterfactual", "config": "en", "split": "test", "revision": "2d8a100785abf0ae21420d2a55b0c56e3e1ea996"}, "metrics": [{"type": "accuracy", "value": 69.22388059701493}, {"type": "ap", "value": 32.04724673950256}, {"type": "f1", "value": 63.25719825770428}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonPolarityClassification", "type": "mteb/amazon_polarity", "config": "default", "split": "test", "revision": "80714f8dcf8cefc218ef4f8c5a966dd83f75a0e1"}, "metrics": [{"type": "accuracy", "value": 71.26109999999998}, {"type": "ap", "value": 66.16336378255403}, {"type": "f1", "value": 70.89719145825303}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (en)", "type": "mteb/amazon_reviews_multi", "config": "en", "split": "test", "revision": "c379a6705fec24a2493fa68e011692605f44e119"}, "metrics": [{"type": "accuracy", "value": 39.19199999999999}, {"type": "f1", "value": 38.580766731113826}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna", "type": "arguana", "config": "default", "split": "test", "revision": "5b3e3697907184a9b77a3c99ee9ea1a9cbb1e4e3"}, "metrics": [{"type": "map_at_1", "value": 27.311999999999998}, {"type": "map_at_10", "value": 42.620000000000005}, {"type": "map_at_100", "value": 43.707}, {"type": "map_at_1000", "value": 43.714999999999996}, {"type": "map_at_3", "value": 37.624}, {"type": "map_at_5", "value": 40.498}, {"type": "mrr_at_1", "value": 27.667}, {"type": "mrr_at_10", "value": 42.737}, {"type": "mrr_at_100", "value": 43.823}, {"type": "mrr_at_1000", "value": 43.830999999999996}, {"type": "mrr_at_3", "value": 37.743}, {"type": "mrr_at_5", "value": 40.616}, {"type": "ndcg_at_1", "value": 27.311999999999998}, {"type": "ndcg_at_10", "value": 51.37500000000001}, {"type": "ndcg_at_100", "value": 55.778000000000006}, {"type": "ndcg_at_1000", "value": 55.96600000000001}, {"type": "ndcg_at_3", "value": 41.087}, {"type": "ndcg_at_5", "value": 46.269}, {"type": "precision_at_1", "value": 27.311999999999998}, {"type": "precision_at_10", "value": 7.945}, {"type": "precision_at_100", "value": 0.9820000000000001}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 17.046}, {"type": "precision_at_5", "value": 12.745000000000001}, {"type": "recall_at_1", "value": 27.311999999999998}, {"type": "recall_at_10", "value": 79.445}, {"type": "recall_at_100", "value": 98.151}, {"type": "recall_at_1000", "value": 99.57300000000001}, {"type": "recall_at_3", "value": 51.13799999999999}, {"type": "recall_at_5", "value": 63.727000000000004}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringP2P", "type": "mteb/arxiv-clustering-p2p", "config": "default", "split": "test", "revision": "0bbdb47bcbe3a90093699aefeed338a0f28a7ee8"}, "metrics": [{"type": "v_measure", "value": 45.59037428592033}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringS2S", "type": "mteb/arxiv-clustering-s2s", "config": "default", "split": "test", "revision": "b73bd54100e5abfa6e3a23dcafb46fe4d2438dc3"}, "metrics": [{"type": "v_measure", "value": 38.86371701986363}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB 
AskUbuntuDupQuestions", "type": "mteb/askubuntudupquestions-reranking", "config": "default", "split": "test", "revision": "4d853f94cd57d85ec13805aeeac3ae3e5eb4c49c"}, "metrics": [{"type": "map", "value": 61.625568691427766}, {"type": "mrr", "value": 75.83256386580486}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB BIOSSES", "type": "mteb/biosses-sts", "config": "default", "split": "test", "revision": "9ee918f184421b6bd48b78f6c714d86546106103"}, "metrics": [{"type": "cos_sim_pearson", "value": 89.96074355094802}, {"type": "cos_sim_spearman", "value": 86.2501580394454}, {"type": "euclidean_pearson", "value": 82.18427440380462}, {"type": "euclidean_spearman", "value": 80.14760935017947}, {"type": "manhattan_pearson", "value": 82.24621578156392}, {"type": "manhattan_spearman", "value": 80.00363016590163}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Banking77Classification", "type": "mteb/banking77", "config": "default", "split": "test", "revision": "44fa15921b4c889113cc5df03dd4901b49161ab7"}, "metrics": [{"type": "accuracy", "value": 84.49350649350649}, {"type": "f1", "value": 84.4249343233736}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringP2P", "type": "mteb/biorxiv-clustering-p2p", "config": "default", "split": "test", "revision": "11d0121201d1f1f280e8cc8f3d98fb9c4d9f9c55"}, "metrics": [{"type": "v_measure", "value": 36.551459722989385}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringS2S", "type": "mteb/biorxiv-clustering-s2s", "config": "default", "split": "test", "revision": "c0fab014e1bcb8d3a5e31b2088972a1e01547dc1"}, "metrics": [{"type": "v_measure", "value": 33.69901851846774}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackAndroidRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "2b9f5791698b5be7bc5e10535c8690f20043c3db"}, "metrics": [{"type": "map_at_1", "value": 30.499}, {"type": "map_at_10", "value": 41.208}, {"type": "map_at_100", "value": 42.638}, {"type": "map_at_1000", "value": 42.754}, {"type": "map_at_3", "value": 37.506}, {"type": "map_at_5", "value": 39.422000000000004}, {"type": "mrr_at_1", "value": 37.339}, {"type": "mrr_at_10", "value": 47.051}, {"type": "mrr_at_100", "value": 47.745}, {"type": "mrr_at_1000", "value": 47.786}, {"type": "mrr_at_3", "value": 44.086999999999996}, {"type": "mrr_at_5", "value": 45.711}, {"type": "ndcg_at_1", "value": 37.339}, {"type": "ndcg_at_10", "value": 47.666}, {"type": "ndcg_at_100", "value": 52.994}, {"type": "ndcg_at_1000", "value": 54.928999999999995}, {"type": "ndcg_at_3", "value": 41.982}, {"type": "ndcg_at_5", "value": 44.42}, {"type": "precision_at_1", "value": 37.339}, {"type": "precision_at_10", "value": 9.127}, {"type": "precision_at_100", "value": 1.4749999999999999}, {"type": "precision_at_1000", "value": 0.194}, {"type": "precision_at_3", "value": 20.076}, {"type": "precision_at_5", "value": 14.449000000000002}, {"type": "recall_at_1", "value": 30.499}, {"type": "recall_at_10", "value": 60.328}, {"type": "recall_at_100", "value": 82.57900000000001}, {"type": "recall_at_1000", "value": 95.074}, {"type": "recall_at_3", "value": 44.17}, {"type": "recall_at_5", "value": 50.94}, {"type": "map_at_1", "value": 30.613}, {"type": "map_at_10", "value": 40.781}, {"type": "map_at_100", "value": 42.018}, {"type": "map_at_1000", "value": 42.132999999999996}, {"type": "map_at_3", "value": 37.816}, {"type": "map_at_5", "value": 39.389}, {"type": "mrr_at_1", "value": 38.408}, {"type": 
"mrr_at_10", "value": 46.631}, {"type": "mrr_at_100", "value": 47.332}, {"type": "mrr_at_1000", "value": 47.368}, {"type": "mrr_at_3", "value": 44.384}, {"type": "mrr_at_5", "value": 45.661}, {"type": "ndcg_at_1", "value": 38.408}, {"type": "ndcg_at_10", "value": 46.379999999999995}, {"type": "ndcg_at_100", "value": 50.81}, {"type": "ndcg_at_1000", "value": 52.663000000000004}, {"type": "ndcg_at_3", "value": 42.18}, {"type": "ndcg_at_5", "value": 43.974000000000004}, {"type": "precision_at_1", "value": 38.408}, {"type": "precision_at_10", "value": 8.656}, {"type": "precision_at_100", "value": 1.3860000000000001}, {"type": "precision_at_1000", "value": 0.184}, {"type": "precision_at_3", "value": 20.276}, {"type": "precision_at_5", "value": 14.241999999999999}, {"type": "recall_at_1", "value": 30.613}, {"type": "recall_at_10", "value": 56.44}, {"type": "recall_at_100", "value": 75.044}, {"type": "recall_at_1000", "value": 86.426}, {"type": "recall_at_3", "value": 43.766}, {"type": "recall_at_5", "value": 48.998000000000005}, {"type": "map_at_1", "value": 37.370999999999995}, {"type": "map_at_10", "value": 49.718}, {"type": "map_at_100", "value": 50.737}, {"type": "map_at_1000", "value": 50.79}, {"type": "map_at_3", "value": 46.231}, {"type": "map_at_5", "value": 48.329}, {"type": "mrr_at_1", "value": 42.884}, {"type": "mrr_at_10", "value": 53.176}, {"type": "mrr_at_100", "value": 53.81700000000001}, {"type": "mrr_at_1000", "value": 53.845}, {"type": "mrr_at_3", "value": 50.199000000000005}, {"type": "mrr_at_5", "value": 52.129999999999995}, {"type": "ndcg_at_1", "value": 42.884}, {"type": "ndcg_at_10", "value": 55.826}, {"type": "ndcg_at_100", "value": 59.93000000000001}, {"type": "ndcg_at_1000", "value": 61.013}, {"type": "ndcg_at_3", "value": 49.764}, {"type": "ndcg_at_5", "value": 53.025999999999996}, {"type": "precision_at_1", "value": 42.884}, {"type": "precision_at_10", "value": 9.046999999999999}, {"type": "precision_at_100", "value": 1.212}, {"type": "precision_at_1000", "value": 0.135}, {"type": "precision_at_3", "value": 22.131999999999998}, {"type": "precision_at_5", "value": 15.524}, {"type": "recall_at_1", "value": 37.370999999999995}, {"type": "recall_at_10", "value": 70.482}, {"type": "recall_at_100", "value": 88.425}, {"type": "recall_at_1000", "value": 96.03399999999999}, {"type": "recall_at_3", "value": 54.43}, {"type": "recall_at_5", "value": 62.327999999999996}, {"type": "map_at_1", "value": 22.875999999999998}, {"type": "map_at_10", "value": 31.715}, {"type": "map_at_100", "value": 32.847}, {"type": "map_at_1000", "value": 32.922000000000004}, {"type": "map_at_3", "value": 29.049999999999997}, {"type": "map_at_5", "value": 30.396}, {"type": "mrr_at_1", "value": 24.52}, {"type": "mrr_at_10", "value": 33.497}, {"type": "mrr_at_100", "value": 34.455000000000005}, {"type": "mrr_at_1000", "value": 34.510000000000005}, {"type": "mrr_at_3", "value": 30.791}, {"type": "mrr_at_5", "value": 32.175}, {"type": "ndcg_at_1", "value": 24.52}, {"type": "ndcg_at_10", "value": 36.95}, {"type": "ndcg_at_100", "value": 42.238}, {"type": "ndcg_at_1000", "value": 44.147999999999996}, {"type": "ndcg_at_3", "value": 31.435000000000002}, {"type": "ndcg_at_5", "value": 33.839000000000006}, {"type": "precision_at_1", "value": 24.52}, {"type": "precision_at_10", "value": 5.9319999999999995}, {"type": "precision_at_100", "value": 0.901}, {"type": "precision_at_1000", "value": 0.11}, {"type": "precision_at_3", "value": 13.446}, {"type": "precision_at_5", "value": 9.469}, {"type": "recall_at_1", 
"value": 22.875999999999998}, {"type": "recall_at_10", "value": 51.38}, {"type": "recall_at_100", "value": 75.31099999999999}, {"type": "recall_at_1000", "value": 89.718}, {"type": "recall_at_3", "value": 36.26}, {"type": "recall_at_5", "value": 42.248999999999995}, {"type": "map_at_1", "value": 14.984}, {"type": "map_at_10", "value": 23.457}, {"type": "map_at_100", "value": 24.723}, {"type": "map_at_1000", "value": 24.846}, {"type": "map_at_3", "value": 20.873}, {"type": "map_at_5", "value": 22.357}, {"type": "mrr_at_1", "value": 18.159}, {"type": "mrr_at_10", "value": 27.431}, {"type": "mrr_at_100", "value": 28.449}, {"type": "mrr_at_1000", "value": 28.52}, {"type": "mrr_at_3", "value": 24.979000000000003}, {"type": "mrr_at_5", "value": 26.447}, {"type": "ndcg_at_1", "value": 18.159}, {"type": "ndcg_at_10", "value": 28.627999999999997}, {"type": "ndcg_at_100", "value": 34.741}, {"type": "ndcg_at_1000", "value": 37.516}, {"type": "ndcg_at_3", "value": 23.902}, {"type": "ndcg_at_5", "value": 26.294}, {"type": "precision_at_1", "value": 18.159}, {"type": "precision_at_10", "value": 5.485}, {"type": "precision_at_100", "value": 0.985}, {"type": "precision_at_1000", "value": 0.136}, {"type": "precision_at_3", "value": 11.774}, {"type": "precision_at_5", "value": 8.731}, {"type": "recall_at_1", "value": 14.984}, {"type": "recall_at_10", "value": 40.198}, {"type": "recall_at_100", "value": 67.11500000000001}, {"type": "recall_at_1000", "value": 86.497}, {"type": "recall_at_3", "value": 27.639000000000003}, {"type": "recall_at_5", "value": 33.595000000000006}, {"type": "map_at_1", "value": 29.067}, {"type": "map_at_10", "value": 39.457}, {"type": "map_at_100", "value": 40.83}, {"type": "map_at_1000", "value": 40.94}, {"type": "map_at_3", "value": 35.995}, {"type": "map_at_5", "value": 38.159}, {"type": "mrr_at_1", "value": 34.937000000000005}, {"type": "mrr_at_10", "value": 44.755}, {"type": "mrr_at_100", "value": 45.549}, {"type": "mrr_at_1000", "value": 45.589}, {"type": "mrr_at_3", "value": 41.947}, {"type": "mrr_at_5", "value": 43.733}, {"type": "ndcg_at_1", "value": 34.937000000000005}, {"type": "ndcg_at_10", "value": 45.573}, {"type": "ndcg_at_100", "value": 51.266999999999996}, {"type": "ndcg_at_1000", "value": 53.184}, {"type": "ndcg_at_3", "value": 39.961999999999996}, {"type": "ndcg_at_5", "value": 43.02}, {"type": "precision_at_1", "value": 34.937000000000005}, {"type": "precision_at_10", "value": 8.296000000000001}, {"type": "precision_at_100", "value": 1.32}, {"type": "precision_at_1000", "value": 0.167}, {"type": "precision_at_3", "value": 18.8}, {"type": "precision_at_5", "value": 13.763}, {"type": "recall_at_1", "value": 29.067}, {"type": "recall_at_10", "value": 58.298}, {"type": "recall_at_100", "value": 82.25099999999999}, {"type": "recall_at_1000", "value": 94.476}, {"type": "recall_at_3", "value": 42.984}, {"type": "recall_at_5", "value": 50.658}, {"type": "map_at_1", "value": 25.985999999999997}, {"type": "map_at_10", "value": 35.746}, {"type": "map_at_100", "value": 37.067}, {"type": "map_at_1000", "value": 37.191}, {"type": "map_at_3", "value": 32.599000000000004}, {"type": "map_at_5", "value": 34.239000000000004}, {"type": "mrr_at_1", "value": 31.735000000000003}, {"type": "mrr_at_10", "value": 40.515}, {"type": "mrr_at_100", "value": 41.459}, {"type": "mrr_at_1000", "value": 41.516}, {"type": "mrr_at_3", "value": 37.938}, {"type": "mrr_at_5", "value": 39.25}, {"type": "ndcg_at_1", "value": 31.735000000000003}, {"type": "ndcg_at_10", "value": 41.484}, {"type": 
"ndcg_at_100", "value": 47.047}, {"type": "ndcg_at_1000", "value": 49.427}, {"type": "ndcg_at_3", "value": 36.254999999999995}, {"type": "ndcg_at_5", "value": 38.375}, {"type": "precision_at_1", "value": 31.735000000000003}, {"type": "precision_at_10", "value": 7.66}, {"type": "precision_at_100", "value": 1.234}, {"type": "precision_at_1000", "value": 0.16}, {"type": "precision_at_3", "value": 17.427999999999997}, {"type": "precision_at_5", "value": 12.328999999999999}, {"type": "recall_at_1", "value": 25.985999999999997}, {"type": "recall_at_10", "value": 53.761}, {"type": "recall_at_100", "value": 77.149}, {"type": "recall_at_1000", "value": 93.342}, {"type": "recall_at_3", "value": 39.068000000000005}, {"type": "recall_at_5", "value": 44.693}, {"type": "map_at_1", "value": 24.949749999999998}, {"type": "map_at_10", "value": 34.04991666666667}, {"type": "map_at_100", "value": 35.26825}, {"type": "map_at_1000", "value": 35.38316666666667}, {"type": "map_at_3", "value": 31.181333333333335}, {"type": "map_at_5", "value": 32.77391666666667}, {"type": "mrr_at_1", "value": 29.402833333333334}, {"type": "mrr_at_10", "value": 38.01633333333333}, {"type": "mrr_at_100", "value": 38.88033333333334}, {"type": "mrr_at_1000", "value": 38.938500000000005}, {"type": "mrr_at_3", "value": 35.5175}, {"type": "mrr_at_5", "value": 36.93808333333333}, {"type": "ndcg_at_1", "value": 29.402833333333334}, {"type": "ndcg_at_10", "value": 39.403166666666664}, {"type": "ndcg_at_100", "value": 44.66408333333333}, {"type": "ndcg_at_1000", "value": 46.96283333333333}, {"type": "ndcg_at_3", "value": 34.46633333333334}, {"type": "ndcg_at_5", "value": 36.78441666666667}, {"type": "precision_at_1", "value": 29.402833333333334}, {"type": "precision_at_10", "value": 6.965833333333333}, {"type": "precision_at_100", "value": 1.1330833333333334}, {"type": "precision_at_1000", "value": 0.15158333333333335}, {"type": "precision_at_3", "value": 15.886666666666665}, {"type": "precision_at_5", "value": 11.360416666666667}, {"type": "recall_at_1", "value": 24.949749999999998}, {"type": "recall_at_10", "value": 51.29325}, {"type": "recall_at_100", "value": 74.3695}, {"type": "recall_at_1000", "value": 90.31299999999999}, {"type": "recall_at_3", "value": 37.580083333333334}, {"type": "recall_at_5", "value": 43.529666666666664}, {"type": "map_at_1", "value": 22.081999999999997}, {"type": "map_at_10", "value": 29.215999999999998}, {"type": "map_at_100", "value": 30.163}, {"type": "map_at_1000", "value": 30.269000000000002}, {"type": "map_at_3", "value": 26.942}, {"type": "map_at_5", "value": 28.236}, {"type": "mrr_at_1", "value": 24.847}, {"type": "mrr_at_10", "value": 31.918999999999997}, {"type": "mrr_at_100", "value": 32.817}, {"type": "mrr_at_1000", "value": 32.897}, {"type": "mrr_at_3", "value": 29.831000000000003}, {"type": "mrr_at_5", "value": 31.019999999999996}, {"type": "ndcg_at_1", "value": 24.847}, {"type": "ndcg_at_10", "value": 33.4}, {"type": "ndcg_at_100", "value": 38.354}, {"type": "ndcg_at_1000", "value": 41.045}, {"type": "ndcg_at_3", "value": 29.236}, {"type": "ndcg_at_5", "value": 31.258000000000003}, {"type": "precision_at_1", "value": 24.847}, {"type": "precision_at_10", "value": 5.353}, {"type": "precision_at_100", "value": 0.853}, {"type": "precision_at_1000", "value": 0.116}, {"type": "precision_at_3", "value": 12.679000000000002}, {"type": "precision_at_5", "value": 8.988}, {"type": "recall_at_1", "value": 22.081999999999997}, {"type": "recall_at_10", "value": 43.505}, {"type": "recall_at_100", "value": 
66.45400000000001}, {"type": "recall_at_1000", "value": 86.378}, {"type": "recall_at_3", "value": 32.163000000000004}, {"type": "recall_at_5", "value": 37.059999999999995}, {"type": "map_at_1", "value": 15.540000000000001}, {"type": "map_at_10", "value": 22.362000000000002}, {"type": "map_at_100", "value": 23.435}, {"type": "map_at_1000", "value": 23.564}, {"type": "map_at_3", "value": 20.143}, {"type": "map_at_5", "value": 21.324}, {"type": "mrr_at_1", "value": 18.892}, {"type": "mrr_at_10", "value": 25.942999999999998}, {"type": "mrr_at_100", "value": 26.883000000000003}, {"type": "mrr_at_1000", "value": 26.968999999999998}, {"type": "mrr_at_3", "value": 23.727}, {"type": "mrr_at_5", "value": 24.923000000000002}, {"type": "ndcg_at_1", "value": 18.892}, {"type": "ndcg_at_10", "value": 26.811}, {"type": "ndcg_at_100", "value": 32.066}, {"type": "ndcg_at_1000", "value": 35.166}, {"type": "ndcg_at_3", "value": 22.706}, {"type": "ndcg_at_5", "value": 24.508}, {"type": "precision_at_1", "value": 18.892}, {"type": "precision_at_10", "value": 4.942}, {"type": "precision_at_100", "value": 0.878}, {"type": "precision_at_1000", "value": 0.131}, {"type": "precision_at_3", "value": 10.748000000000001}, {"type": "precision_at_5", "value": 7.784000000000001}, {"type": "recall_at_1", "value": 15.540000000000001}, {"type": "recall_at_10", "value": 36.742999999999995}, {"type": "recall_at_100", "value": 60.525}, {"type": "recall_at_1000", "value": 82.57600000000001}, {"type": "recall_at_3", "value": 25.252000000000002}, {"type": "recall_at_5", "value": 29.872}, {"type": "map_at_1", "value": 24.453}, {"type": "map_at_10", "value": 33.363}, {"type": "map_at_100", "value": 34.579}, {"type": "map_at_1000", "value": 34.686}, {"type": "map_at_3", "value": 30.583}, {"type": "map_at_5", "value": 32.118}, {"type": "mrr_at_1", "value": 28.918}, {"type": "mrr_at_10", "value": 37.675}, {"type": "mrr_at_100", "value": 38.567}, {"type": "mrr_at_1000", "value": 38.632}, {"type": "mrr_at_3", "value": 35.260999999999996}, {"type": "mrr_at_5", "value": 36.576}, {"type": "ndcg_at_1", "value": 28.918}, {"type": "ndcg_at_10", "value": 38.736}, {"type": "ndcg_at_100", "value": 44.261}, {"type": "ndcg_at_1000", "value": 46.72}, {"type": "ndcg_at_3", "value": 33.81}, {"type": "ndcg_at_5", "value": 36.009}, {"type": "precision_at_1", "value": 28.918}, {"type": "precision_at_10", "value": 6.586}, {"type": "precision_at_100", "value": 1.047}, {"type": "precision_at_1000", "value": 0.13699999999999998}, {"type": "precision_at_3", "value": 15.360999999999999}, {"type": "precision_at_5", "value": 10.857999999999999}, {"type": "recall_at_1", "value": 24.453}, {"type": "recall_at_10", "value": 50.885999999999996}, {"type": "recall_at_100", "value": 75.03}, {"type": "recall_at_1000", "value": 92.123}, {"type": "recall_at_3", "value": 37.138}, {"type": "recall_at_5", "value": 42.864999999999995}, {"type": "map_at_1", "value": 24.57}, {"type": "map_at_10", "value": 33.672000000000004}, {"type": "map_at_100", "value": 35.244}, {"type": "map_at_1000", "value": 35.467}, {"type": "map_at_3", "value": 30.712}, {"type": "map_at_5", "value": 32.383}, {"type": "mrr_at_1", "value": 29.644}, {"type": "mrr_at_10", "value": 38.344}, {"type": "mrr_at_100", "value": 39.219}, {"type": "mrr_at_1000", "value": 39.282000000000004}, {"type": "mrr_at_3", "value": 35.771}, {"type": "mrr_at_5", "value": 37.273}, {"type": "ndcg_at_1", "value": 29.644}, {"type": "ndcg_at_10", "value": 39.567}, {"type": "ndcg_at_100", "value": 45.097}, {"type": "ndcg_at_1000", 
"value": 47.923}, {"type": "ndcg_at_3", "value": 34.768}, {"type": "ndcg_at_5", "value": 37.122}, {"type": "precision_at_1", "value": 29.644}, {"type": "precision_at_10", "value": 7.5889999999999995}, {"type": "precision_at_100", "value": 1.478}, {"type": "precision_at_1000", "value": 0.23500000000000001}, {"type": "precision_at_3", "value": 16.337}, {"type": "precision_at_5", "value": 12.055}, {"type": "recall_at_1", "value": 24.57}, {"type": "recall_at_10", "value": 51.00900000000001}, {"type": "recall_at_100", "value": 75.423}, {"type": "recall_at_1000", "value": 93.671}, {"type": "recall_at_3", "value": 36.925999999999995}, {"type": "recall_at_5", "value": 43.245}, {"type": "map_at_1", "value": 21.356}, {"type": "map_at_10", "value": 27.904}, {"type": "map_at_100", "value": 28.938000000000002}, {"type": "map_at_1000", "value": 29.036}, {"type": "map_at_3", "value": 25.726}, {"type": "map_at_5", "value": 26.935}, {"type": "mrr_at_1", "value": 22.551}, {"type": "mrr_at_10", "value": 29.259}, {"type": "mrr_at_100", "value": 30.272}, {"type": "mrr_at_1000", "value": 30.348000000000003}, {"type": "mrr_at_3", "value": 27.295}, {"type": "mrr_at_5", "value": 28.358}, {"type": "ndcg_at_1", "value": 22.551}, {"type": "ndcg_at_10", "value": 31.817}, {"type": "ndcg_at_100", "value": 37.164}, {"type": "ndcg_at_1000", "value": 39.82}, {"type": "ndcg_at_3", "value": 27.595999999999997}, {"type": "ndcg_at_5", "value": 29.568}, {"type": "precision_at_1", "value": 22.551}, {"type": "precision_at_10", "value": 4.917}, {"type": "precision_at_100", "value": 0.828}, {"type": "precision_at_1000", "value": 0.11399999999999999}, {"type": "precision_at_3", "value": 11.583}, {"type": "precision_at_5", "value": 8.133}, {"type": "recall_at_1", "value": 21.356}, {"type": "recall_at_10", "value": 42.489}, {"type": "recall_at_100", "value": 67.128}, {"type": "recall_at_1000", "value": 87.441}, {"type": "recall_at_3", "value": 31.165}, {"type": "recall_at_5", "value": 35.853}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ClimateFEVER", "type": "climate-fever", "config": "default", "split": "test", "revision": "392b78eb68c07badcd7c2cd8f39af108375dfcce"}, "metrics": [{"type": "map_at_1", "value": 12.306000000000001}, {"type": "map_at_10", "value": 21.523}, {"type": "map_at_100", "value": 23.358}, {"type": "map_at_1000", "value": 23.541}, {"type": "map_at_3", "value": 17.809}, {"type": "map_at_5", "value": 19.631}, {"type": "mrr_at_1", "value": 27.948}, {"type": "mrr_at_10", "value": 40.355000000000004}, {"type": "mrr_at_100", "value": 41.166000000000004}, {"type": "mrr_at_1000", "value": 41.203}, {"type": "mrr_at_3", "value": 36.819}, {"type": "mrr_at_5", "value": 38.958999999999996}, {"type": "ndcg_at_1", "value": 27.948}, {"type": "ndcg_at_10", "value": 30.462}, {"type": "ndcg_at_100", "value": 37.473}, {"type": "ndcg_at_1000", "value": 40.717999999999996}, {"type": "ndcg_at_3", "value": 24.646}, {"type": "ndcg_at_5", "value": 26.642}, {"type": "precision_at_1", "value": 27.948}, {"type": "precision_at_10", "value": 9.648}, {"type": "precision_at_100", "value": 1.7239999999999998}, {"type": "precision_at_1000", "value": 0.232}, {"type": "precision_at_3", "value": 18.48}, {"type": "precision_at_5", "value": 14.293}, {"type": "recall_at_1", "value": 12.306000000000001}, {"type": "recall_at_10", "value": 37.181}, {"type": "recall_at_100", "value": 61.148}, {"type": "recall_at_1000", "value": 79.401}, {"type": "recall_at_3", "value": 22.883}, {"type": "recall_at_5", "value": 28.59}]}, {"task": {"type": 
"Retrieval"}, "dataset": {"name": "MTEB DBPedia", "type": "dbpedia-entity", "config": "default", "split": "test", "revision": "f097057d03ed98220bc7309ddb10b71a54d667d6"}, "metrics": [{"type": "map_at_1", "value": 9.357}, {"type": "map_at_10", "value": 18.849}, {"type": "map_at_100", "value": 25.369000000000003}, {"type": "map_at_1000", "value": 26.950000000000003}, {"type": "map_at_3", "value": 13.625000000000002}, {"type": "map_at_5", "value": 15.956999999999999}, {"type": "mrr_at_1", "value": 67.75}, {"type": "mrr_at_10", "value": 74.734}, {"type": "mrr_at_100", "value": 75.1}, {"type": "mrr_at_1000", "value": 75.10900000000001}, {"type": "mrr_at_3", "value": 73.542}, {"type": "mrr_at_5", "value": 74.167}, {"type": "ndcg_at_1", "value": 55.375}, {"type": "ndcg_at_10", "value": 39.873999999999995}, {"type": "ndcg_at_100", "value": 43.098}, {"type": "ndcg_at_1000", "value": 50.69200000000001}, {"type": "ndcg_at_3", "value": 44.856}, {"type": "ndcg_at_5", "value": 42.138999999999996}, {"type": "precision_at_1", "value": 67.75}, {"type": "precision_at_10", "value": 31.1}, {"type": "precision_at_100", "value": 9.303}, {"type": "precision_at_1000", "value": 2.0060000000000002}, {"type": "precision_at_3", "value": 48.25}, {"type": "precision_at_5", "value": 40.949999999999996}, {"type": "recall_at_1", "value": 9.357}, {"type": "recall_at_10", "value": 23.832}, {"type": "recall_at_100", "value": 47.906}, {"type": "recall_at_1000", "value": 71.309}, {"type": "recall_at_3", "value": 14.512}, {"type": "recall_at_5", "value": 18.3}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB EmotionClassification", "type": "mteb/emotion", "config": "default", "split": "test", "revision": "829147f8f75a25f005913200eb5ed41fae320aa1"}, "metrics": [{"type": "accuracy", "value": 49.655}, {"type": "f1", "value": 45.51976190938951}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FEVER", "type": "fever", "config": "default", "split": "test", "revision": "1429cf27e393599b8b359b9b72c666f96b2525f9"}, "metrics": [{"type": "map_at_1", "value": 62.739999999999995}, {"type": "map_at_10", "value": 73.07000000000001}, {"type": "map_at_100", "value": 73.398}, {"type": "map_at_1000", "value": 73.41}, {"type": "map_at_3", "value": 71.33800000000001}, {"type": "map_at_5", "value": 72.423}, {"type": "mrr_at_1", "value": 67.777}, {"type": "mrr_at_10", "value": 77.873}, {"type": "mrr_at_100", "value": 78.091}, {"type": "mrr_at_1000", "value": 78.094}, {"type": "mrr_at_3", "value": 76.375}, {"type": "mrr_at_5", "value": 77.316}, {"type": "ndcg_at_1", "value": 67.777}, {"type": "ndcg_at_10", "value": 78.24}, {"type": "ndcg_at_100", "value": 79.557}, {"type": "ndcg_at_1000", "value": 79.814}, {"type": "ndcg_at_3", "value": 75.125}, {"type": "ndcg_at_5", "value": 76.834}, {"type": "precision_at_1", "value": 67.777}, {"type": "precision_at_10", "value": 9.832}, {"type": "precision_at_100", "value": 1.061}, {"type": "precision_at_1000", "value": 0.11}, {"type": "precision_at_3", "value": 29.433}, {"type": "precision_at_5", "value": 18.665000000000003}, {"type": "recall_at_1", "value": 62.739999999999995}, {"type": "recall_at_10", "value": 89.505}, {"type": "recall_at_100", "value": 95.102}, {"type": "recall_at_1000", "value": 96.825}, {"type": "recall_at_3", "value": 81.028}, {"type": "recall_at_5", "value": 85.28099999999999}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA2018", "type": "fiqa", "config": "default", "split": "test", "revision": "41b686a7f28c59bcaaa5791efd47c67c8ebe28be"}, 
"metrics": [{"type": "map_at_1", "value": 18.467}, {"type": "map_at_10", "value": 30.020999999999997}, {"type": "map_at_100", "value": 31.739}, {"type": "map_at_1000", "value": 31.934}, {"type": "map_at_3", "value": 26.003}, {"type": "map_at_5", "value": 28.338}, {"type": "mrr_at_1", "value": 35.339999999999996}, {"type": "mrr_at_10", "value": 44.108999999999995}, {"type": "mrr_at_100", "value": 44.993}, {"type": "mrr_at_1000", "value": 45.042}, {"type": "mrr_at_3", "value": 41.667}, {"type": "mrr_at_5", "value": 43.14}, {"type": "ndcg_at_1", "value": 35.339999999999996}, {"type": "ndcg_at_10", "value": 37.202}, {"type": "ndcg_at_100", "value": 43.852999999999994}, {"type": "ndcg_at_1000", "value": 47.235}, {"type": "ndcg_at_3", "value": 33.5}, {"type": "ndcg_at_5", "value": 34.985}, {"type": "precision_at_1", "value": 35.339999999999996}, {"type": "precision_at_10", "value": 10.247}, {"type": "precision_at_100", "value": 1.7149999999999999}, {"type": "precision_at_1000", "value": 0.232}, {"type": "precision_at_3", "value": 22.222}, {"type": "precision_at_5", "value": 16.573999999999998}, {"type": "recall_at_1", "value": 18.467}, {"type": "recall_at_10", "value": 44.080999999999996}, {"type": "recall_at_100", "value": 68.72200000000001}, {"type": "recall_at_1000", "value": 89.087}, {"type": "recall_at_3", "value": 30.567}, {"type": "recall_at_5", "value": 36.982}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA", "type": "hotpotqa", "config": "default", "split": "test", "revision": "766870b35a1b9ca65e67a0d1913899973551fc6c"}, "metrics": [{"type": "map_at_1", "value": 35.726}, {"type": "map_at_10", "value": 50.207}, {"type": "map_at_100", "value": 51.05499999999999}, {"type": "map_at_1000", "value": 51.12799999999999}, {"type": "map_at_3", "value": 47.576}, {"type": "map_at_5", "value": 49.172}, {"type": "mrr_at_1", "value": 71.452}, {"type": "mrr_at_10", "value": 77.41900000000001}, {"type": "mrr_at_100", "value": 77.711}, {"type": "mrr_at_1000", "value": 77.723}, {"type": "mrr_at_3", "value": 76.39399999999999}, {"type": "mrr_at_5", "value": 77.00099999999999}, {"type": "ndcg_at_1", "value": 71.452}, {"type": "ndcg_at_10", "value": 59.260999999999996}, {"type": "ndcg_at_100", "value": 62.424}, {"type": "ndcg_at_1000", "value": 63.951}, {"type": "ndcg_at_3", "value": 55.327000000000005}, {"type": "ndcg_at_5", "value": 57.416999999999994}, {"type": "precision_at_1", "value": 71.452}, {"type": "precision_at_10", "value": 12.061}, {"type": "precision_at_100", "value": 1.455}, {"type": "precision_at_1000", "value": 0.166}, {"type": "precision_at_3", "value": 34.36}, {"type": "precision_at_5", "value": 22.266}, {"type": "recall_at_1", "value": 35.726}, {"type": "recall_at_10", "value": 60.304}, {"type": "recall_at_100", "value": 72.75500000000001}, {"type": "recall_at_1000", "value": 82.978}, {"type": "recall_at_3", "value": 51.54}, {"type": "recall_at_5", "value": 55.665}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ImdbClassification", "type": "mteb/imdb", "config": "default", "split": "test", "revision": "8d743909f834c38949e8323a8a6ce8721ea6c7f4"}, "metrics": [{"type": "accuracy", "value": 66.63759999999999}, {"type": "ap", "value": 61.48938261286748}, {"type": "f1", "value": 66.35089269264965}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MSMARCO", "type": "msmarco", "config": "default", "split": "validation", "revision": "e6838a846e2408f22cf5cc337ebc83e0bcf77849"}, "metrics": [{"type": "map_at_1", "value": 20.842}, {"type": 
"map_at_10", "value": 32.992}, {"type": "map_at_100", "value": 34.236}, {"type": "map_at_1000", "value": 34.286}, {"type": "map_at_3", "value": 29.049000000000003}, {"type": "map_at_5", "value": 31.391999999999996}, {"type": "mrr_at_1", "value": 21.375}, {"type": "mrr_at_10", "value": 33.581}, {"type": "mrr_at_100", "value": 34.760000000000005}, {"type": "mrr_at_1000", "value": 34.803}, {"type": "mrr_at_3", "value": 29.704000000000004}, {"type": "mrr_at_5", "value": 32.015}, {"type": "ndcg_at_1", "value": 21.375}, {"type": "ndcg_at_10", "value": 39.905}, {"type": "ndcg_at_100", "value": 45.843}, {"type": "ndcg_at_1000", "value": 47.083999999999996}, {"type": "ndcg_at_3", "value": 31.918999999999997}, {"type": "ndcg_at_5", "value": 36.107}, {"type": "precision_at_1", "value": 21.375}, {"type": "precision_at_10", "value": 6.393}, {"type": "precision_at_100", "value": 0.935}, {"type": "precision_at_1000", "value": 0.104}, {"type": "precision_at_3", "value": 13.663}, {"type": "precision_at_5", "value": 10.324}, {"type": "recall_at_1", "value": 20.842}, {"type": "recall_at_10", "value": 61.17}, {"type": "recall_at_100", "value": 88.518}, {"type": "recall_at_1000", "value": 97.993}, {"type": "recall_at_3", "value": 39.571}, {"type": "recall_at_5", "value": 49.653999999999996}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "mteb/mtop_domain", "config": "en", "split": "test", "revision": "a7e2a951126a26fc8c6a69f835f33a346ba259e3"}, "metrics": [{"type": "accuracy", "value": 93.46557227542178}, {"type": "f1", "value": 92.87345917772146}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": "mteb/mtop_intent", "config": "en", "split": "test", "revision": "6299947a7777084cc2d4b64235bf7190381ce755"}, "metrics": [{"type": "accuracy", "value": 72.42134062927497}, {"type": "f1", "value": 55.03624810959269}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": "mteb/amazon_massive_intent", "config": "en", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 70.3866845998655}, {"type": "f1", "value": 68.9674519872921}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "mteb/amazon_massive_scenario", "config": "en", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 76.27774041694687}, {"type": "f1", "value": 76.72936190462792}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P", "type": "mteb/medrxiv-clustering-p2p", "config": "default", "split": "test", "revision": "dcefc037ef84348e49b0d29109e891c01067226b"}, "metrics": [{"type": "v_measure", "value": 31.511745925773337}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "3cd0e71dfbe09d4de0f9e5ecba43e7ce280959dc"}, "metrics": [{"type": "v_measure", "value": 28.764235987575365}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "3bdac13927fdc888b903db93b2ffdbd90b295a69"}, "metrics": [{"type": "map", "value": 32.29353136386601}, {"type": "mrr", "value": 33.536774455851685}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus", "type": 
"nfcorpus", "config": "default", "split": "test", "revision": "7eb63cc0c1eb59324d709ebed25fcab851fa7610"}, "metrics": [{"type": "map_at_1", "value": 5.702}, {"type": "map_at_10", "value": 13.642000000000001}, {"type": "map_at_100", "value": 17.503}, {"type": "map_at_1000", "value": 19.126}, {"type": "map_at_3", "value": 9.748}, {"type": "map_at_5", "value": 11.642}, {"type": "mrr_at_1", "value": 45.82}, {"type": "mrr_at_10", "value": 54.821}, {"type": "mrr_at_100", "value": 55.422000000000004}, {"type": "mrr_at_1000", "value": 55.452999999999996}, {"type": "mrr_at_3", "value": 52.373999999999995}, {"type": "mrr_at_5", "value": 53.937000000000005}, {"type": "ndcg_at_1", "value": 44.272}, {"type": "ndcg_at_10", "value": 36.213}, {"type": "ndcg_at_100", "value": 33.829}, {"type": "ndcg_at_1000", "value": 42.557}, {"type": "ndcg_at_3", "value": 40.814}, {"type": "ndcg_at_5", "value": 39.562000000000005}, {"type": "precision_at_1", "value": 45.511}, {"type": "precision_at_10", "value": 27.214}, {"type": "precision_at_100", "value": 8.941}, {"type": "precision_at_1000", "value": 2.1870000000000003}, {"type": "precision_at_3", "value": 37.874}, {"type": "precision_at_5", "value": 34.489}, {"type": "recall_at_1", "value": 5.702}, {"type": "recall_at_10", "value": 17.638}, {"type": "recall_at_100", "value": 34.419}, {"type": "recall_at_1000", "value": 66.41}, {"type": "recall_at_3", "value": 10.914}, {"type": "recall_at_5", "value": 14.032}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ", "type": "nq", "config": "default", "split": "test", "revision": "6062aefc120bfe8ece5897809fb2e53bfe0d128c"}, "metrics": [{"type": "map_at_1", "value": 30.567}, {"type": "map_at_10", "value": 45.01}, {"type": "map_at_100", "value": 46.091}, {"type": "map_at_1000", "value": 46.126}, {"type": "map_at_3", "value": 40.897}, {"type": "map_at_5", "value": 43.301}, {"type": "mrr_at_1", "value": 34.56}, {"type": "mrr_at_10", "value": 47.725}, {"type": "mrr_at_100", "value": 48.548}, {"type": "mrr_at_1000", "value": 48.571999999999996}, {"type": "mrr_at_3", "value": 44.361}, {"type": "mrr_at_5", "value": 46.351}, {"type": "ndcg_at_1", "value": 34.531}, {"type": "ndcg_at_10", "value": 52.410000000000004}, {"type": "ndcg_at_100", "value": 56.999}, {"type": "ndcg_at_1000", "value": 57.830999999999996}, {"type": "ndcg_at_3", "value": 44.734}, {"type": "ndcg_at_5", "value": 48.701}, {"type": "precision_at_1", "value": 34.531}, {"type": "precision_at_10", "value": 8.612}, {"type": "precision_at_100", "value": 1.118}, {"type": "precision_at_1000", "value": 0.12}, {"type": "precision_at_3", "value": 20.307}, {"type": "precision_at_5", "value": 14.519000000000002}, {"type": "recall_at_1", "value": 30.567}, {"type": "recall_at_10", "value": 72.238}, {"type": "recall_at_100", "value": 92.154}, {"type": "recall_at_1000", "value": 98.375}, {"type": "recall_at_3", "value": 52.437999999999995}, {"type": "recall_at_5", "value": 61.516999999999996}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval", "type": "quora", "config": "default", "split": "test", "revision": "6205996560df11e3a3da9ab4f926788fc30a7db4"}, "metrics": [{"type": "map_at_1", "value": 65.98}, {"type": "map_at_10", "value": 80.05600000000001}, {"type": "map_at_100", "value": 80.76299999999999}, {"type": "map_at_1000", "value": 80.786}, {"type": "map_at_3", "value": 76.848}, {"type": "map_at_5", "value": 78.854}, {"type": "mrr_at_1", "value": 75.86}, {"type": "mrr_at_10", "value": 83.397}, {"type": "mrr_at_100", "value": 83.555}, 
{"type": "mrr_at_1000", "value": 83.557}, {"type": "mrr_at_3", "value": 82.033}, {"type": "mrr_at_5", "value": 82.97}, {"type": "ndcg_at_1", "value": 75.88000000000001}, {"type": "ndcg_at_10", "value": 84.58099999999999}, {"type": "ndcg_at_100", "value": 86.151}, {"type": "ndcg_at_1000", "value": 86.315}, {"type": "ndcg_at_3", "value": 80.902}, {"type": "ndcg_at_5", "value": 82.953}, {"type": "precision_at_1", "value": 75.88000000000001}, {"type": "precision_at_10", "value": 12.986}, {"type": "precision_at_100", "value": 1.5110000000000001}, {"type": "precision_at_1000", "value": 0.156}, {"type": "precision_at_3", "value": 35.382999999999996}, {"type": "precision_at_5", "value": 23.555999999999997}, {"type": "recall_at_1", "value": 65.98}, {"type": "recall_at_10", "value": 93.716}, {"type": "recall_at_100", "value": 99.21799999999999}, {"type": "recall_at_1000", "value": 99.97}, {"type": "recall_at_3", "value": 83.551}, {"type": "recall_at_5", "value": 88.998}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering", "type": "mteb/reddit-clustering", "config": "default", "split": "test", "revision": "b2805658ae38990172679479369a78b86de8c390"}, "metrics": [{"type": "v_measure", "value": 40.45148482612238}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P", "type": "mteb/reddit-clustering-p2p", "config": "default", "split": "test", "revision": "385e3cb46b4cfa89021f56c4380204149d0efe33"}, "metrics": [{"type": "v_measure", "value": 55.749490673039126}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS", "type": "scidocs", "config": "default", "split": "test", "revision": "5c59ef3e437a0a9651c8fe6fde943e7dce59fba5"}, "metrics": [{"type": "map_at_1", "value": 4.903}, {"type": "map_at_10", "value": 11.926}, {"type": "map_at_100", "value": 13.916999999999998}, {"type": "map_at_1000", "value": 14.215}, {"type": "map_at_3", "value": 8.799999999999999}, {"type": "map_at_5", "value": 10.360999999999999}, {"type": "mrr_at_1", "value": 24.099999999999998}, {"type": "mrr_at_10", "value": 34.482}, {"type": "mrr_at_100", "value": 35.565999999999995}, {"type": "mrr_at_1000", "value": 35.619}, {"type": "mrr_at_3", "value": 31.433}, {"type": "mrr_at_5", "value": 33.243}, {"type": "ndcg_at_1", "value": 24.099999999999998}, {"type": "ndcg_at_10", "value": 19.872999999999998}, {"type": "ndcg_at_100", "value": 27.606}, {"type": "ndcg_at_1000", "value": 32.811}, {"type": "ndcg_at_3", "value": 19.497999999999998}, {"type": "ndcg_at_5", "value": 16.813}, {"type": "precision_at_1", "value": 24.099999999999998}, {"type": "precision_at_10", "value": 10.08}, {"type": "precision_at_100", "value": 2.122}, {"type": "precision_at_1000", "value": 0.337}, {"type": "precision_at_3", "value": 18.2}, {"type": "precision_at_5", "value": 14.62}, {"type": "recall_at_1", "value": 4.903}, {"type": "recall_at_10", "value": 20.438000000000002}, {"type": "recall_at_100", "value": 43.043}, {"type": "recall_at_1000", "value": 68.41000000000001}, {"type": "recall_at_3", "value": 11.068}, {"type": "recall_at_5", "value": 14.818000000000001}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "20a6d6f312dd54037fe07a32d58e5e168867909d"}, "metrics": [{"type": "cos_sim_pearson", "value": 78.58086597995997}, {"type": "cos_sim_spearman", "value": 69.63214182814991}, {"type": "euclidean_pearson", "value": 72.76175489042691}, {"type": "euclidean_spearman", "value": 67.84965161872971}, 
{"type": "manhattan_pearson", "value": 72.73812689782592}, {"type": "manhattan_spearman", "value": 67.83610439531277}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "fdf84275bb8ce4b49c971d02e84dd1abc677a50f"}, "metrics": [{"type": "cos_sim_pearson", "value": 75.13970861325006}, {"type": "cos_sim_spearman", "value": 67.5020551515597}, {"type": "euclidean_pearson", "value": 66.33415412418276}, {"type": "euclidean_spearman", "value": 66.82145056673268}, {"type": "manhattan_pearson", "value": 66.55489484006415}, {"type": "manhattan_spearman", "value": 66.95147433279057}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", "type": "mteb/sts13-sts", "config": "default", "split": "test", "revision": "1591bfcbe8c69d4bf7fe2a16e2451017832cafb9"}, "metrics": [{"type": "cos_sim_pearson", "value": 78.85850536483447}, {"type": "cos_sim_spearman", "value": 79.1633350177206}, {"type": "euclidean_pearson", "value": 72.74090561408477}, {"type": "euclidean_spearman", "value": 73.57374448302961}, {"type": "manhattan_pearson", "value": 72.92980654233226}, {"type": "manhattan_spearman", "value": 73.72777155112588}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "e2125984e7df8b7871f6ae9949cf6b6795e7c54b"}, "metrics": [{"type": "cos_sim_pearson", "value": 79.51125593897028}, {"type": "cos_sim_spearman", "value": 74.46048326701329}, {"type": "euclidean_pearson", "value": 70.87726087052985}, {"type": "euclidean_spearman", "value": 67.7721470654411}, {"type": "manhattan_pearson", "value": 71.05892792135637}, {"type": "manhattan_spearman", "value": 67.93472619779037}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "1cd7298cac12a96a373b6a2f18738bb3e739a9b6"}, "metrics": [{"type": "cos_sim_pearson", "value": 83.8299348880489}, {"type": "cos_sim_spearman", "value": 84.47194637929275}, {"type": "euclidean_pearson", "value": 78.68768462480418}, {"type": "euclidean_spearman", "value": 79.80526323901917}, {"type": "manhattan_pearson", "value": 78.6810718151946}, {"type": "manhattan_spearman", "value": 79.7820584821254}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": "360a0b2dff98700d09e634a01e1cc1624d3e42cd"}, "metrics": [{"type": "cos_sim_pearson", "value": 79.99206664843005}, {"type": "cos_sim_spearman", "value": 80.96089203722137}, {"type": "euclidean_pearson", "value": 71.31216213716365}, {"type": "euclidean_spearman", "value": 71.45258140049407}, {"type": "manhattan_pearson", "value": 71.26140340402836}, {"type": "manhattan_spearman", "value": 71.3896894666943}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "9fc37e8c632af1c87a3d23e685d49552a02582a0"}, "metrics": [{"type": "cos_sim_pearson", "value": 87.35697089594868}, {"type": "cos_sim_spearman", "value": 87.78202647220289}, {"type": "euclidean_pearson", "value": 84.20969668786667}, {"type": "euclidean_spearman", "value": 83.91876425459982}, {"type": "manhattan_pearson", "value": 84.24429755612542}, {"type": "manhattan_spearman", "value": 83.98826315103398}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": 
"test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 69.06962775868384}, {"type": "cos_sim_spearman", "value": 69.34889515492327}, {"type": "euclidean_pearson", "value": 69.28108180412313}, {"type": "euclidean_spearman", "value": 69.6437114853659}, {"type": "manhattan_pearson", "value": 69.39974983734993}, {"type": "manhattan_spearman", "value": 69.69057284482079}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmark", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "8913289635987208e6e7c72789e4be2fe94b6abd"}, "metrics": [{"type": "cos_sim_pearson", "value": 82.42553734213958}, {"type": "cos_sim_spearman", "value": 81.38977341532744}, {"type": "euclidean_pearson", "value": 76.47494587945522}, {"type": "euclidean_spearman", "value": 75.92794860531089}, {"type": "manhattan_pearson", "value": 76.4768777169467}, {"type": "manhattan_spearman", "value": 75.9252673228599}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": "56a6d0140cf6356659e2a7c1413286a774468d44"}, "metrics": [{"type": "map", "value": 80.78825425914722}, {"type": "mrr", "value": 94.60017197762296}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact", "type": "scifact", "config": "default", "split": "test", "revision": "a75ae049398addde9b70f6b268875f5cbce99089"}, "metrics": [{"type": "map_at_1", "value": 60.633}, {"type": "map_at_10", "value": 70.197}, {"type": "map_at_100", "value": 70.758}, {"type": "map_at_1000", "value": 70.765}, {"type": "map_at_3", "value": 67.082}, {"type": "map_at_5", "value": 69.209}, {"type": "mrr_at_1", "value": 63.333}, {"type": "mrr_at_10", "value": 71.17}, {"type": "mrr_at_100", "value": 71.626}, {"type": "mrr_at_1000", "value": 71.633}, {"type": "mrr_at_3", "value": 68.833}, {"type": "mrr_at_5", "value": 70.6}, {"type": "ndcg_at_1", "value": 63.333}, {"type": "ndcg_at_10", "value": 74.697}, {"type": "ndcg_at_100", "value": 76.986}, {"type": "ndcg_at_1000", "value": 77.225}, {"type": "ndcg_at_3", "value": 69.527}, {"type": "ndcg_at_5", "value": 72.816}, {"type": "precision_at_1", "value": 63.333}, {"type": "precision_at_10", "value": 9.9}, {"type": "precision_at_100", "value": 1.103}, {"type": "precision_at_1000", "value": 0.11199999999999999}, {"type": "precision_at_3", "value": 26.889000000000003}, {"type": "precision_at_5", "value": 18.2}, {"type": "recall_at_1", "value": 60.633}, {"type": "recall_at_10", "value": 87.36699999999999}, {"type": "recall_at_100", "value": 97.333}, {"type": "recall_at_1000", "value": 99.333}, {"type": "recall_at_3", "value": 73.656}, {"type": "recall_at_5", "value": 82.083}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "5a8256d0dff9c4bd3be3ba3e67e4e70173f802ea"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.76633663366337}, {"type": "cos_sim_ap", "value": 93.84024096781063}, {"type": "cos_sim_f1", "value": 88.08080808080808}, {"type": "cos_sim_precision", "value": 88.9795918367347}, {"type": "cos_sim_recall", "value": 87.2}, {"type": "dot_accuracy", "value": 99.46336633663367}, {"type": "dot_ap", "value": 75.78127156965245}, {"type": "dot_f1", "value": 71.41403865717193}, {"type": "dot_precision", "value": 72.67080745341616}, {"type": "dot_recall", "value": 
70.19999999999999}, {"type": "euclidean_accuracy", "value": 99.67524752475248}, {"type": "euclidean_ap", "value": 88.61274955249769}, {"type": "euclidean_f1", "value": 82.30852211434735}, {"type": "euclidean_precision", "value": 89.34426229508196}, {"type": "euclidean_recall", "value": 76.3}, {"type": "manhattan_accuracy", "value": 99.67722772277227}, {"type": "manhattan_ap", "value": 88.77516158012779}, {"type": "manhattan_f1", "value": 82.36536430834212}, {"type": "manhattan_precision", "value": 87.24832214765101}, {"type": "manhattan_recall", "value": 78.0}, {"type": "max_accuracy", "value": 99.76633663366337}, {"type": "max_ap", "value": 93.84024096781063}, {"type": "max_f1", "value": 88.08080808080808}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClustering", "type": "mteb/stackexchange-clustering", "config": "default", "split": "test", "revision": "70a89468f6dccacc6aa2b12a6eac54e74328f235"}, "metrics": [{"type": "v_measure", "value": 59.20812266121527}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P", "type": "mteb/stackexchange-clustering-p2p", "config": "default", "split": "test", "revision": "d88009ab563dd0b16cfaf4436abaf97fa3550cf0"}, "metrics": [{"type": "v_measure", "value": 33.954248554638056}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions", "type": "mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "ef807ea29a75ec4f91b50fd4191cb4ee4589a9f9"}, "metrics": [{"type": "map", "value": 51.52800990025549}, {"type": "mrr", "value": 52.360394915541974}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEval", "type": "mteb/summeval", "config": "default", "split": "test", "revision": "8753c2788d36c01fc6f05d03fe3f7268d63f9122"}, "metrics": [{"type": "cos_sim_pearson", "value": 30.737881131277355}, {"type": "cos_sim_spearman", "value": 31.45979323917254}, {"type": "dot_pearson", "value": 26.24686017962023}, {"type": "dot_spearman", "value": 25.006732878791745}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID", "type": "trec-covid", "config": "default", "split": "test", "revision": "2c8041b2c07a79b6f7ba8fe6acc72e5d9f92d217"}, "metrics": [{"type": "map_at_1", "value": 0.253}, {"type": "map_at_10", "value": 2.1399999999999997}, {"type": "map_at_100", "value": 12.873000000000001}, {"type": "map_at_1000", "value": 31.002000000000002}, {"type": "map_at_3", "value": 0.711}, {"type": "map_at_5", "value": 1.125}, {"type": "mrr_at_1", "value": 96.0}, {"type": "mrr_at_10", "value": 98.0}, {"type": "mrr_at_100", "value": 98.0}, {"type": "mrr_at_1000", "value": 98.0}, {"type": "mrr_at_3", "value": 98.0}, {"type": "mrr_at_5", "value": 98.0}, {"type": "ndcg_at_1", "value": 94.0}, {"type": "ndcg_at_10", "value": 84.881}, {"type": "ndcg_at_100", "value": 64.694}, {"type": "ndcg_at_1000", "value": 56.85}, {"type": "ndcg_at_3", "value": 90.061}, {"type": "ndcg_at_5", "value": 87.155}, {"type": "precision_at_1", "value": 96.0}, {"type": "precision_at_10", "value": 88.8}, {"type": "precision_at_100", "value": 65.7}, {"type": "precision_at_1000", "value": 25.080000000000002}, {"type": "precision_at_3", "value": 92.667}, {"type": "precision_at_5", "value": 90.0}, {"type": "recall_at_1", "value": 0.253}, {"type": "recall_at_10", "value": 2.292}, {"type": "recall_at_100", "value": 15.78}, {"type": "recall_at_1000", "value": 53.015}, {"type": "recall_at_3", "value": 0.7270000000000001}, {"type": "recall_at_5", "value": 
1.162}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Touche2020", "type": "webis-touche2020", "config": "default", "split": "test", "revision": "527b7d77e16e343303e68cb6af11d6e18b9f7b3b"}, "metrics": [{"type": "map_at_1", "value": 2.116}, {"type": "map_at_10", "value": 9.625}, {"type": "map_at_100", "value": 15.641}, {"type": "map_at_1000", "value": 17.127}, {"type": "map_at_3", "value": 4.316}, {"type": "map_at_5", "value": 6.208}, {"type": "mrr_at_1", "value": 32.653}, {"type": "mrr_at_10", "value": 48.083999999999996}, {"type": "mrr_at_100", "value": 48.631}, {"type": "mrr_at_1000", "value": 48.649}, {"type": "mrr_at_3", "value": 42.857}, {"type": "mrr_at_5", "value": 46.224}, {"type": "ndcg_at_1", "value": 29.592000000000002}, {"type": "ndcg_at_10", "value": 25.430999999999997}, {"type": "ndcg_at_100", "value": 36.344}, {"type": "ndcg_at_1000", "value": 47.676}, {"type": "ndcg_at_3", "value": 26.144000000000002}, {"type": "ndcg_at_5", "value": 26.304}, {"type": "precision_at_1", "value": 32.653}, {"type": "precision_at_10", "value": 24.082}, {"type": "precision_at_100", "value": 7.714}, {"type": "precision_at_1000", "value": 1.5310000000000001}, {"type": "precision_at_3", "value": 26.531}, {"type": "precision_at_5", "value": 26.939}, {"type": "recall_at_1", "value": 2.116}, {"type": "recall_at_10", "value": 16.794}, {"type": "recall_at_100", "value": 47.452}, {"type": "recall_at_1000", "value": 82.312}, {"type": "recall_at_3", "value": 5.306}, {"type": "recall_at_5", "value": 9.306000000000001}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "edfaf9da55d3dd50d43143d90c1ac476895ae6de"}, "metrics": [{"type": "accuracy", "value": 67.709}, {"type": "ap", "value": 13.541535578501716}, {"type": "f1", "value": 52.569619919446794}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": "62146448f05be9e52a36b8ee9936447ea787eede"}, "metrics": [{"type": "accuracy", "value": 56.850594227504246}, {"type": "f1", "value": 57.233377364910574}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "091a54f9a36281ce7d6590ec8c75dd485e7e01d4"}, "metrics": [{"type": "v_measure", "value": 39.463722986090474}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 84.09131549144662}, {"type": "cos_sim_ap", "value": 66.86677647503386}, {"type": "cos_sim_f1", "value": 62.94631710362049}, {"type": "cos_sim_precision", "value": 59.73933649289099}, {"type": "cos_sim_recall", "value": 66.51715039577837}, {"type": "dot_accuracy", "value": 80.27656911247541}, {"type": "dot_ap", "value": 54.291720398612085}, {"type": "dot_f1", "value": 54.77150537634409}, {"type": "dot_precision", "value": 47.58660957571039}, {"type": "dot_recall", "value": 64.5118733509235}, {"type": "euclidean_accuracy", "value": 82.76211480002385}, {"type": "euclidean_ap", "value": 62.430397690753296}, {"type": "euclidean_f1", "value": 59.191590539356774}, {"type": "euclidean_precision", 
"value": 56.296119971435374}, {"type": "euclidean_recall", "value": 62.401055408970976}, {"type": "manhattan_accuracy", "value": 82.7561542588067}, {"type": "manhattan_ap", "value": 62.41882051995577}, {"type": "manhattan_f1", "value": 59.32101002778785}, {"type": "manhattan_precision", "value": 54.71361711611321}, {"type": "manhattan_recall", "value": 64.77572559366754}, {"type": "max_accuracy", "value": 84.09131549144662}, {"type": "max_ap", "value": 66.86677647503386}, {"type": "max_f1", "value": 62.94631710362049}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": [{"type": "cos_sim_accuracy", "value": 88.79574649745798}, {"type": "cos_sim_ap", "value": 85.28960532524223}, {"type": "cos_sim_f1", "value": 77.98460043358001}, {"type": "cos_sim_precision", "value": 75.78090948714224}, {"type": "cos_sim_recall", "value": 80.32029565753002}, {"type": "dot_accuracy", "value": 85.5939767920208}, {"type": "dot_ap", "value": 76.14131706694056}, {"type": "dot_f1", "value": 72.70246298696868}, {"type": "dot_precision", "value": 65.27012127894156}, {"type": "dot_recall", "value": 82.04496458269172}, {"type": "euclidean_accuracy", "value": 86.72332828812046}, {"type": "euclidean_ap", "value": 80.84854809178995}, {"type": "euclidean_f1", "value": 72.47657499809551}, {"type": "euclidean_precision", "value": 71.71717171717171}, {"type": "euclidean_recall", "value": 73.25223283030489}, {"type": "manhattan_accuracy", "value": 86.7563162184189}, {"type": "manhattan_ap", "value": 80.87598895575626}, {"type": "manhattan_f1", "value": 72.54617892068092}, {"type": "manhattan_precision", "value": 68.49268225960881}, {"type": "manhattan_recall", "value": 77.10963966738528}, {"type": "max_accuracy", "value": 88.79574649745798}, {"type": "max_ap", "value": 85.28960532524223}, {"type": "max_f1", "value": 77.98460043358001}]}]}]} | Muennighoff/SGPT-5.8B-weightedmean-msmarco-specb-bitfit | null | [
"sentence-transformers",
"pytorch",
"gptj",
"feature-extraction",
"sentence-similarity",
"mteb",
"arxiv:2202.08904",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #pytorch #gptj #feature-extraction #sentence-similarity #mteb #arxiv-2202.08904 #model-index #endpoints_compatible #has_space #region-us
|
# SGPT-5.8B-weightedmean-msmarco-specb-bitfit
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 249592 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# SGPT-5.8B-weightedmean-msmarco-specb-bitfit",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 249592 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #gptj #feature-extraction #sentence-similarity #mteb #arxiv-2202.08904 #model-index #endpoints_compatible #has_space #region-us \n",
"# SGPT-5.8B-weightedmean-msmarco-specb-bitfit",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 249592 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# SGPT-5.8B-weightedmean-msmarco-specb-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
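As a rough orientation only (not taken from the codebase above, which defines the SGPT-specific "specb" query/document bracket tokens), the sketch below shows how the checkpoint could be loaded through the standard sentence-transformers API; it omits the specb prefix handling, so treat it as a minimal, assumption-laden example rather than the reference usage.

```python
# Minimal sketch (not the reference usage): load the checkpoint via the
# standard sentence-transformers API. The SGPT "specb" query/document
# special-token handling from the linked codebase is intentionally omitted.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Muennighoff/SGPT-5.8B-weightedmean-msmarco-specb-bitfit")

sentences = [
    "How do I install Python on Ubuntu?",
    "Use apt to install the python3 package on Ubuntu.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)  # (2, 4096) given the 4096-dim weighted-mean pooling below
```

Note that the 5.8B-parameter GPT-J backbone requires substantial GPU memory to load.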
## Evaluation Results
For evaluation results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 249592 with parameters:
```
{'batch_size': 2, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method (a consolidated training sketch follows this listing):
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 5e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
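
For orientation, the sketch below re-assembles the pieces listed above (the DataLoader, MultipleNegativesRankingLoss, and the fit() arguments) using the classic sentence-transformers training API. The two InputExample pairs are hypothetical placeholders rather than the MS MARCO data actually used, and the block as a whole is an assumption-based illustration, not the original training script.

```python
# Hypothetical re-assembly of the training setup listed above.
# The InputExample pairs are placeholders, not the real MS MARCO training data.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("Muennighoff/SGPT-5.8B-weightedmean-msmarco-specb-bitfit")

train_examples = [
    InputExample(texts=["what is python", "Python is a programming language."]),
    InputExample(texts=["capital of france", "Paris is the capital of France."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    scheduler="WarmupLinear",
    warmup_steps=1000,
    optimizer_params={"lr": 5e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```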
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTJModel
(1): Pooling({'word_embedding_dimension': 4096, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
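
Equivalently, a module stack like the one printed above could be assembled by hand. The sketch below assumes EleutherAI/gpt-j-6B as the GPT-J backbone (an assumption; the card only states that the Transformer model class is GPTJModel) and a sentence-transformers version whose Pooling module exposes `pooling_mode_weightedmean_tokens`, as shown in the dump.

```python
# Sketch: hand-building an equivalent SentenceTransformer module stack.
# "EleutherAI/gpt-j-6B" is assumed as the backbone; the card only states GPTJModel.
from sentence_transformers import SentenceTransformer, models

word_embedding_model = models.Transformer("EleutherAI/gpt-j-6B", max_seq_length=300)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),  # 4096 for GPT-J
    pooling_mode_mean_tokens=False,         # disabled, per the dump above
    pooling_mode_weightedmean_tokens=True,  # SGPT position-weighted mean pooling
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
```

In practice one would load the trained checkpoint directly rather than rebuilding the stack; the sketch only mirrors the printed architecture.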
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb"], "pipeline_tag": "sentence-similarity", "model-index": [{"name": "SGPT-5.8B-weightedmean-nli-bitfit", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "mteb/amazon_counterfactual", "config": "en", "split": "test", "revision": "2d8a100785abf0ae21420d2a55b0c56e3e1ea996"}, "metrics": [{"type": "accuracy", "value": 74.07462686567165}, {"type": "ap", "value": 37.44692407529112}, {"type": "f1", "value": 68.28971003916419}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (de)", "type": "mteb/amazon_counterfactual", "config": "de", "split": "test", "revision": "2d8a100785abf0ae21420d2a55b0c56e3e1ea996"}, "metrics": [{"type": "accuracy", "value": 66.63811563169165}, {"type": "ap", "value": 78.57252079915924}, {"type": "f1", "value": 64.5543087846584}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en-ext)", "type": "mteb/amazon_counterfactual", "config": "en-ext", "split": "test", "revision": "2d8a100785abf0ae21420d2a55b0c56e3e1ea996"}, "metrics": [{"type": "accuracy", "value": 77.21889055472263}, {"type": "ap", "value": 25.663426367826712}, {"type": "f1", "value": 64.26265688503176}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (ja)", "type": "mteb/amazon_counterfactual", "config": "ja", "split": "test", "revision": "2d8a100785abf0ae21420d2a55b0c56e3e1ea996"}, "metrics": [{"type": "accuracy", "value": 58.06209850107067}, {"type": "ap", "value": 14.028219107023915}, {"type": "f1", "value": 48.10387189660778}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonPolarityClassification", "type": "mteb/amazon_polarity", "config": "default", "split": "test", "revision": "80714f8dcf8cefc218ef4f8c5a966dd83f75a0e1"}, "metrics": [{"type": "accuracy", "value": 82.30920000000002}, {"type": "ap", "value": 76.88786578621213}, {"type": "f1", "value": 82.15455656065011}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (en)", "type": "mteb/amazon_reviews_multi", "config": "en", "split": "test", "revision": "c379a6705fec24a2493fa68e011692605f44e119"}, "metrics": [{"type": "accuracy", "value": 41.584}, {"type": "f1", "value": 41.203137944390114}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (de)", "type": "mteb/amazon_reviews_multi", "config": "de", "split": "test", "revision": "c379a6705fec24a2493fa68e011692605f44e119"}, "metrics": [{"type": "accuracy", "value": 35.288000000000004}, {"type": "f1", "value": 34.672995558518096}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (es)", "type": "mteb/amazon_reviews_multi", "config": "es", "split": "test", "revision": "c379a6705fec24a2493fa68e011692605f44e119"}, "metrics": [{"type": "accuracy", "value": 38.34}, {"type": "f1", "value": 37.608755629529455}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (fr)", "type": "mteb/amazon_reviews_multi", "config": "fr", "split": "test", "revision": "c379a6705fec24a2493fa68e011692605f44e119"}, "metrics": [{"type": "accuracy", "value": 37.839999999999996}, {"type": "f1", "value": 36.86898201563507}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (ja)", "type": 
"mteb/amazon_reviews_multi", "config": "ja", "split": "test", "revision": "c379a6705fec24a2493fa68e011692605f44e119"}, "metrics": [{"type": "accuracy", "value": 30.936000000000003}, {"type": "f1", "value": 30.49401738527071}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (zh)", "type": "mteb/amazon_reviews_multi", "config": "zh", "split": "test", "revision": "c379a6705fec24a2493fa68e011692605f44e119"}, "metrics": [{"type": "accuracy", "value": 33.75}, {"type": "f1", "value": 33.38338946025617}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna", "type": "arguana", "config": "default", "split": "test", "revision": "5b3e3697907184a9b77a3c99ee9ea1a9cbb1e4e3"}, "metrics": [{"type": "map_at_1", "value": 13.727}, {"type": "map_at_10", "value": 26.740000000000002}, {"type": "map_at_100", "value": 28.218}, {"type": "map_at_1000", "value": 28.246}, {"type": "map_at_3", "value": 21.728}, {"type": "map_at_5", "value": 24.371000000000002}, {"type": "ndcg_at_1", "value": 13.727}, {"type": "ndcg_at_10", "value": 35.07}, {"type": "ndcg_at_100", "value": 41.947}, {"type": "ndcg_at_1000", "value": 42.649}, {"type": "ndcg_at_3", "value": 24.484}, {"type": "ndcg_at_5", "value": 29.282999999999998}, {"type": "precision_at_1", "value": 13.727}, {"type": "precision_at_10", "value": 6.223}, {"type": "precision_at_100", "value": 0.9369999999999999}, {"type": "precision_at_1000", "value": 0.099}, {"type": "precision_at_3", "value": 10.835}, {"type": "precision_at_5", "value": 8.848}, {"type": "recall_at_1", "value": 13.727}, {"type": "recall_at_10", "value": 62.233000000000004}, {"type": "recall_at_100", "value": 93.67}, {"type": "recall_at_1000", "value": 99.14699999999999}, {"type": "recall_at_3", "value": 32.504}, {"type": "recall_at_5", "value": 44.239}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringP2P", "type": "mteb/arxiv-clustering-p2p", "config": "default", "split": "test", "revision": "0bbdb47bcbe3a90093699aefeed338a0f28a7ee8"}, "metrics": [{"type": "v_measure", "value": 40.553923271901695}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringS2S", "type": "mteb/arxiv-clustering-s2s", "config": "default", "split": "test", "revision": "b73bd54100e5abfa6e3a23dcafb46fe4d2438dc3"}, "metrics": [{"type": "v_measure", "value": 32.49323183712211}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB AskUbuntuDupQuestions", "type": "mteb/askubuntudupquestions-reranking", "config": "default", "split": "test", "revision": "4d853f94cd57d85ec13805aeeac3ae3e5eb4c49c"}, "metrics": [{"type": "map", "value": 55.89811361443445}, {"type": "mrr", "value": 70.16235764850724}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB BIOSSES", "type": "mteb/biosses-sts", "config": "default", "split": "test", "revision": "9ee918f184421b6bd48b78f6c714d86546106103"}, "metrics": [{"type": "cos_sim_pearson", "value": 82.50506557805856}, {"type": "cos_sim_spearman", "value": 79.50000423261176}, {"type": "euclidean_pearson", "value": 75.76190885392926}, {"type": "euclidean_spearman", "value": 76.7330737163434}, {"type": "manhattan_pearson", "value": 75.825318036112}, {"type": "manhattan_spearman", "value": 76.7415076434559}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB BUCC (de-en)", "type": "mteb/bucc-bitext-mining", "config": "de-en", "split": "test", "revision": "d51519689f32196a32af33b075a01d0e7c51e252"}, "metrics": [{"type": "accuracy", "value": 75.49060542797494}, {"type": "f1", "value": 
75.15379262352123}, {"type": "precision", "value": 74.99391092553932}, {"type": "recall", "value": 75.49060542797494}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB BUCC (fr-en)", "type": "mteb/bucc-bitext-mining", "config": "fr-en", "split": "test", "revision": "d51519689f32196a32af33b075a01d0e7c51e252"}, "metrics": [{"type": "accuracy", "value": 0.4182258419546555}, {"type": "f1", "value": 0.4182258419546555}, {"type": "precision", "value": 0.4182258419546555}, {"type": "recall", "value": 0.4182258419546555}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB BUCC (ru-en)", "type": "mteb/bucc-bitext-mining", "config": "ru-en", "split": "test", "revision": "d51519689f32196a32af33b075a01d0e7c51e252"}, "metrics": [{"type": "accuracy", "value": 0.013855213023900243}, {"type": "f1", "value": 0.0115460108532502}, {"type": "precision", "value": 0.010391409767925183}, {"type": "recall", "value": 0.013855213023900243}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB BUCC (zh-en)", "type": "mteb/bucc-bitext-mining", "config": "zh-en", "split": "test", "revision": "d51519689f32196a32af33b075a01d0e7c51e252"}, "metrics": [{"type": "accuracy", "value": 0.315955766192733}, {"type": "f1", "value": 0.315955766192733}, {"type": "precision", "value": 0.315955766192733}, {"type": "recall", "value": 0.315955766192733}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Banking77Classification", "type": "mteb/banking77", "config": "default", "split": "test", "revision": "44fa15921b4c889113cc5df03dd4901b49161ab7"}, "metrics": [{"type": "accuracy", "value": 81.74025974025973}, {"type": "f1", "value": 81.66568824876}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringP2P", "type": "mteb/biorxiv-clustering-p2p", "config": "default", "split": "test", "revision": "11d0121201d1f1f280e8cc8f3d98fb9c4d9f9c55"}, "metrics": [{"type": "v_measure", "value": 33.59451202614059}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringS2S", "type": "mteb/biorxiv-clustering-s2s", "config": "default", "split": "test", "revision": "c0fab014e1bcb8d3a5e31b2088972a1e01547dc1"}, "metrics": [{"type": "v_measure", "value": 29.128241446157165}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackAndroidRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "2b9f5791698b5be7bc5e10535c8690f20043c3db"}, "metrics": [{"type": "map_at_1", "value": 26.715}, {"type": "map_at_10", "value": 35.007}, {"type": "map_at_100", "value": 36.352000000000004}, {"type": "map_at_1000", "value": 36.51}, {"type": "map_at_3", "value": 32.257999999999996}, {"type": "map_at_5", "value": 33.595000000000006}, {"type": "ndcg_at_1", "value": 33.906}, {"type": "ndcg_at_10", "value": 40.353}, {"type": "ndcg_at_100", "value": 45.562999999999995}, {"type": "ndcg_at_1000", "value": 48.454}, {"type": "ndcg_at_3", "value": 36.349}, {"type": "ndcg_at_5", "value": 37.856}, {"type": "precision_at_1", "value": 33.906}, {"type": "precision_at_10", "value": 7.854}, {"type": "precision_at_100", "value": 1.29}, {"type": "precision_at_1000", "value": 0.188}, {"type": "precision_at_3", "value": 17.549}, {"type": "precision_at_5", "value": 12.561}, {"type": "recall_at_1", "value": 26.715}, {"type": "recall_at_10", "value": 49.508}, {"type": "recall_at_100", "value": 71.76599999999999}, {"type": "recall_at_1000", "value": 91.118}, {"type": "recall_at_3", "value": 37.356}, {"type": "recall_at_5", "value": 41.836}, {"type": 
"map_at_1", "value": 19.663}, {"type": "map_at_10", "value": 27.086}, {"type": "map_at_100", "value": 28.066999999999997}, {"type": "map_at_1000", "value": 28.18}, {"type": "map_at_3", "value": 24.819}, {"type": "map_at_5", "value": 26.332}, {"type": "ndcg_at_1", "value": 25.732}, {"type": "ndcg_at_10", "value": 31.613999999999997}, {"type": "ndcg_at_100", "value": 35.757}, {"type": "ndcg_at_1000", "value": 38.21}, {"type": "ndcg_at_3", "value": 28.332}, {"type": "ndcg_at_5", "value": 30.264000000000003}, {"type": "precision_at_1", "value": 25.732}, {"type": "precision_at_10", "value": 6.038}, {"type": "precision_at_100", "value": 1.034}, {"type": "precision_at_1000", "value": 0.149}, {"type": "precision_at_3", "value": 13.864}, {"type": "precision_at_5", "value": 10.241999999999999}, {"type": "recall_at_1", "value": 19.663}, {"type": "recall_at_10", "value": 39.585}, {"type": "recall_at_100", "value": 57.718}, {"type": "recall_at_1000", "value": 74.26700000000001}, {"type": "recall_at_3", "value": 29.845}, {"type": "recall_at_5", "value": 35.105}, {"type": "map_at_1", "value": 30.125}, {"type": "map_at_10", "value": 39.824}, {"type": "map_at_100", "value": 40.935}, {"type": "map_at_1000", "value": 41.019}, {"type": "map_at_3", "value": 37.144}, {"type": "map_at_5", "value": 38.647999999999996}, {"type": "ndcg_at_1", "value": 34.922}, {"type": "ndcg_at_10", "value": 45.072}, {"type": "ndcg_at_100", "value": 50.046}, {"type": "ndcg_at_1000", "value": 51.895}, {"type": "ndcg_at_3", "value": 40.251}, {"type": "ndcg_at_5", "value": 42.581}, {"type": "precision_at_1", "value": 34.922}, {"type": "precision_at_10", "value": 7.303999999999999}, {"type": "precision_at_100", "value": 1.0739999999999998}, {"type": "precision_at_1000", "value": 0.13}, {"type": "precision_at_3", "value": 17.994}, {"type": "precision_at_5", "value": 12.475999999999999}, {"type": "recall_at_1", "value": 30.125}, {"type": "recall_at_10", "value": 57.253}, {"type": "recall_at_100", "value": 79.35799999999999}, {"type": "recall_at_1000", "value": 92.523}, {"type": "recall_at_3", "value": 44.088}, {"type": "recall_at_5", "value": 49.893}, {"type": "map_at_1", "value": 16.298000000000002}, {"type": "map_at_10", "value": 21.479}, {"type": "map_at_100", "value": 22.387}, {"type": "map_at_1000", "value": 22.483}, {"type": "map_at_3", "value": 19.743}, {"type": "map_at_5", "value": 20.444000000000003}, {"type": "ndcg_at_1", "value": 17.740000000000002}, {"type": "ndcg_at_10", "value": 24.887}, {"type": "ndcg_at_100", "value": 29.544999999999998}, {"type": "ndcg_at_1000", "value": 32.417}, {"type": "ndcg_at_3", "value": 21.274}, {"type": "ndcg_at_5", "value": 22.399}, {"type": "precision_at_1", "value": 17.740000000000002}, {"type": "precision_at_10", "value": 3.932}, {"type": "precision_at_100", "value": 0.666}, {"type": "precision_at_1000", "value": 0.094}, {"type": "precision_at_3", "value": 8.927}, {"type": "precision_at_5", "value": 6.056}, {"type": "recall_at_1", "value": 16.298000000000002}, {"type": "recall_at_10", "value": 34.031}, {"type": "recall_at_100", "value": 55.769000000000005}, {"type": "recall_at_1000", "value": 78.19500000000001}, {"type": "recall_at_3", "value": 23.799999999999997}, {"type": "recall_at_5", "value": 26.562}, {"type": "map_at_1", "value": 10.958}, {"type": "map_at_10", "value": 16.999}, {"type": "map_at_100", "value": 17.979}, {"type": "map_at_1000", "value": 18.112000000000002}, {"type": "map_at_3", "value": 15.010000000000002}, {"type": "map_at_5", "value": 16.256999999999998}, {"type": 
"ndcg_at_1", "value": 14.179}, {"type": "ndcg_at_10", "value": 20.985}, {"type": "ndcg_at_100", "value": 26.216}, {"type": "ndcg_at_1000", "value": 29.675}, {"type": "ndcg_at_3", "value": 17.28}, {"type": "ndcg_at_5", "value": 19.301}, {"type": "precision_at_1", "value": 14.179}, {"type": "precision_at_10", "value": 3.968}, {"type": "precision_at_100", "value": 0.784}, {"type": "precision_at_1000", "value": 0.121}, {"type": "precision_at_3", "value": 8.541}, {"type": "precision_at_5", "value": 6.468}, {"type": "recall_at_1", "value": 10.958}, {"type": "recall_at_10", "value": 29.903000000000002}, {"type": "recall_at_100", "value": 53.413}, {"type": "recall_at_1000", "value": 78.74799999999999}, {"type": "recall_at_3", "value": 19.717000000000002}, {"type": "recall_at_5", "value": 24.817}, {"type": "map_at_1", "value": 21.217}, {"type": "map_at_10", "value": 29.677}, {"type": "map_at_100", "value": 30.928}, {"type": "map_at_1000", "value": 31.063000000000002}, {"type": "map_at_3", "value": 26.611}, {"type": "map_at_5", "value": 28.463}, {"type": "ndcg_at_1", "value": 26.083000000000002}, {"type": "ndcg_at_10", "value": 35.217}, {"type": "ndcg_at_100", "value": 40.715}, {"type": "ndcg_at_1000", "value": 43.559}, {"type": "ndcg_at_3", "value": 30.080000000000002}, {"type": "ndcg_at_5", "value": 32.701}, {"type": "precision_at_1", "value": 26.083000000000002}, {"type": "precision_at_10", "value": 6.622}, {"type": "precision_at_100", "value": 1.115}, {"type": "precision_at_1000", "value": 0.156}, {"type": "precision_at_3", "value": 14.629}, {"type": "precision_at_5", "value": 10.837}, {"type": "recall_at_1", "value": 21.217}, {"type": "recall_at_10", "value": 47.031}, {"type": "recall_at_100", "value": 70.378}, {"type": "recall_at_1000", "value": 89.704}, {"type": "recall_at_3", "value": 32.427}, {"type": "recall_at_5", "value": 39.31}, {"type": "map_at_1", "value": 19.274}, {"type": "map_at_10", "value": 26.398}, {"type": "map_at_100", "value": 27.711000000000002}, {"type": "map_at_1000", "value": 27.833000000000002}, {"type": "map_at_3", "value": 24.294}, {"type": "map_at_5", "value": 25.385}, {"type": "ndcg_at_1", "value": 24.886}, {"type": "ndcg_at_10", "value": 30.909}, {"type": "ndcg_at_100", "value": 36.941}, {"type": "ndcg_at_1000", "value": 39.838}, {"type": "ndcg_at_3", "value": 27.455000000000002}, {"type": "ndcg_at_5", "value": 28.828}, {"type": "precision_at_1", "value": 24.886}, {"type": "precision_at_10", "value": 5.6739999999999995}, {"type": "precision_at_100", "value": 1.0290000000000001}, {"type": "precision_at_1000", "value": 0.146}, {"type": "precision_at_3", "value": 13.242}, {"type": "precision_at_5", "value": 9.292}, {"type": "recall_at_1", "value": 19.274}, {"type": "recall_at_10", "value": 39.643}, {"type": "recall_at_100", "value": 66.091}, {"type": "recall_at_1000", "value": 86.547}, {"type": "recall_at_3", "value": 29.602}, {"type": "recall_at_5", "value": 33.561}, {"type": "map_at_1", "value": 18.653666666666666}, {"type": "map_at_10", "value": 25.606666666666666}, {"type": "map_at_100", "value": 26.669333333333334}, {"type": "map_at_1000", "value": 26.795833333333334}, {"type": "map_at_3", "value": 23.43433333333333}, {"type": "map_at_5", "value": 24.609666666666666}, {"type": "ndcg_at_1", "value": 22.742083333333333}, {"type": "ndcg_at_10", "value": 29.978333333333335}, {"type": "ndcg_at_100", "value": 34.89808333333333}, {"type": "ndcg_at_1000", "value": 37.806583333333336}, {"type": "ndcg_at_3", "value": 26.223666666666674}, {"type": "ndcg_at_5", "value": 
27.91033333333333}, {"type": "precision_at_1", "value": 22.742083333333333}, {"type": "precision_at_10", "value": 5.397083333333334}, {"type": "precision_at_100", "value": 0.9340000000000002}, {"type": "precision_at_1000", "value": 0.13691666666666663}, {"type": "precision_at_3", "value": 12.331083333333332}, {"type": "precision_at_5", "value": 8.805499999999999}, {"type": "recall_at_1", "value": 18.653666666666666}, {"type": "recall_at_10", "value": 39.22625000000001}, {"type": "recall_at_100", "value": 61.31049999999999}, {"type": "recall_at_1000", "value": 82.19058333333334}, {"type": "recall_at_3", "value": 28.517333333333333}, {"type": "recall_at_5", "value": 32.9565}, {"type": "map_at_1", "value": 16.07}, {"type": "map_at_10", "value": 21.509}, {"type": "map_at_100", "value": 22.335}, {"type": "map_at_1000", "value": 22.437}, {"type": "map_at_3", "value": 19.717000000000002}, {"type": "map_at_5", "value": 20.574}, {"type": "ndcg_at_1", "value": 18.865000000000002}, {"type": "ndcg_at_10", "value": 25.135999999999996}, {"type": "ndcg_at_100", "value": 29.483999999999998}, {"type": "ndcg_at_1000", "value": 32.303}, {"type": "ndcg_at_3", "value": 21.719}, {"type": "ndcg_at_5", "value": 23.039}, {"type": "precision_at_1", "value": 18.865000000000002}, {"type": "precision_at_10", "value": 4.263999999999999}, {"type": "precision_at_100", "value": 0.696}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 9.866999999999999}, {"type": "precision_at_5", "value": 6.902}, {"type": "recall_at_1", "value": 16.07}, {"type": "recall_at_10", "value": 33.661}, {"type": "recall_at_100", "value": 54.001999999999995}, {"type": "recall_at_1000", "value": 75.564}, {"type": "recall_at_3", "value": 23.956}, {"type": "recall_at_5", "value": 27.264}, {"type": "map_at_1", "value": 10.847}, {"type": "map_at_10", "value": 15.518}, {"type": "map_at_100", "value": 16.384}, {"type": "map_at_1000", "value": 16.506}, {"type": "map_at_3", "value": 14.093}, {"type": "map_at_5", "value": 14.868}, {"type": "ndcg_at_1", "value": 13.764999999999999}, {"type": "ndcg_at_10", "value": 18.766}, {"type": "ndcg_at_100", "value": 23.076}, {"type": "ndcg_at_1000", "value": 26.344}, {"type": "ndcg_at_3", "value": 16.150000000000002}, {"type": "ndcg_at_5", "value": 17.373}, {"type": "precision_at_1", "value": 13.764999999999999}, {"type": "precision_at_10", "value": 3.572}, {"type": "precision_at_100", "value": 0.6779999999999999}, {"type": "precision_at_1000", "value": 0.11199999999999999}, {"type": "precision_at_3", "value": 7.88}, {"type": "precision_at_5", "value": 5.712}, {"type": "recall_at_1", "value": 10.847}, {"type": "recall_at_10", "value": 25.141999999999996}, {"type": "recall_at_100", "value": 44.847}, {"type": "recall_at_1000", "value": 68.92099999999999}, {"type": "recall_at_3", "value": 17.721999999999998}, {"type": "recall_at_5", "value": 20.968999999999998}, {"type": "map_at_1", "value": 18.377}, {"type": "map_at_10", "value": 26.005}, {"type": "map_at_100", "value": 26.996}, {"type": "map_at_1000", "value": 27.116}, {"type": "map_at_3", "value": 23.712}, {"type": "map_at_5", "value": 24.859}, {"type": "ndcg_at_1", "value": 22.201}, {"type": "ndcg_at_10", "value": 30.635}, {"type": "ndcg_at_100", "value": 35.623}, {"type": "ndcg_at_1000", "value": 38.551}, {"type": "ndcg_at_3", "value": 26.565}, {"type": "ndcg_at_5", "value": 28.28}, {"type": "precision_at_1", "value": 22.201}, {"type": "precision_at_10", "value": 5.41}, {"type": "precision_at_100", "value": 0.88}, {"type": 
"precision_at_1000", "value": 0.125}, {"type": "precision_at_3", "value": 12.531}, {"type": "precision_at_5", "value": 8.806}, {"type": "recall_at_1", "value": 18.377}, {"type": "recall_at_10", "value": 40.908}, {"type": "recall_at_100", "value": 63.563}, {"type": "recall_at_1000", "value": 84.503}, {"type": "recall_at_3", "value": 29.793999999999997}, {"type": "recall_at_5", "value": 34.144999999999996}, {"type": "map_at_1", "value": 20.246}, {"type": "map_at_10", "value": 27.528000000000002}, {"type": "map_at_100", "value": 28.78}, {"type": "map_at_1000", "value": 29.002}, {"type": "map_at_3", "value": 25.226}, {"type": "map_at_5", "value": 26.355}, {"type": "ndcg_at_1", "value": 25.099}, {"type": "ndcg_at_10", "value": 32.421}, {"type": "ndcg_at_100", "value": 37.2}, {"type": "ndcg_at_1000", "value": 40.693}, {"type": "ndcg_at_3", "value": 28.768}, {"type": "ndcg_at_5", "value": 30.23}, {"type": "precision_at_1", "value": 25.099}, {"type": "precision_at_10", "value": 6.245}, {"type": "precision_at_100", "value": 1.269}, {"type": "precision_at_1000", "value": 0.218}, {"type": "precision_at_3", "value": 13.767999999999999}, {"type": "precision_at_5", "value": 9.881}, {"type": "recall_at_1", "value": 20.246}, {"type": "recall_at_10", "value": 41.336}, {"type": "recall_at_100", "value": 63.098}, {"type": "recall_at_1000", "value": 86.473}, {"type": "recall_at_3", "value": 30.069000000000003}, {"type": "recall_at_5", "value": 34.262}, {"type": "map_at_1", "value": 14.054}, {"type": "map_at_10", "value": 20.25}, {"type": "map_at_100", "value": 21.178}, {"type": "map_at_1000", "value": 21.288999999999998}, {"type": "map_at_3", "value": 18.584999999999997}, {"type": "map_at_5", "value": 19.536}, {"type": "ndcg_at_1", "value": 15.527}, {"type": "ndcg_at_10", "value": 23.745}, {"type": "ndcg_at_100", "value": 28.610999999999997}, {"type": "ndcg_at_1000", "value": 31.740000000000002}, {"type": "ndcg_at_3", "value": 20.461}, {"type": "ndcg_at_5", "value": 22.072}, {"type": "precision_at_1", "value": 15.527}, {"type": "precision_at_10", "value": 3.882}, {"type": "precision_at_100", "value": 0.6930000000000001}, {"type": "precision_at_1000", "value": 0.104}, {"type": "precision_at_3", "value": 9.181000000000001}, {"type": "precision_at_5", "value": 6.433}, {"type": "recall_at_1", "value": 14.054}, {"type": "recall_at_10", "value": 32.714}, {"type": "recall_at_100", "value": 55.723}, {"type": "recall_at_1000", "value": 79.72399999999999}, {"type": "recall_at_3", "value": 23.832}, {"type": "recall_at_5", "value": 27.754}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ClimateFEVER", "type": "climate-fever", "config": "default", "split": "test", "revision": "392b78eb68c07badcd7c2cd8f39af108375dfcce"}, "metrics": [{"type": "map_at_1", "value": 6.122}, {"type": "map_at_10", "value": 11.556}, {"type": "map_at_100", "value": 12.998000000000001}, {"type": "map_at_1000", "value": 13.202}, {"type": "map_at_3", "value": 9.657}, {"type": "map_at_5", "value": 10.585}, {"type": "ndcg_at_1", "value": 15.049000000000001}, {"type": "ndcg_at_10", "value": 17.574}, {"type": "ndcg_at_100", "value": 24.465999999999998}, {"type": "ndcg_at_1000", "value": 28.511999999999997}, {"type": "ndcg_at_3", "value": 13.931}, {"type": "ndcg_at_5", "value": 15.112}, {"type": "precision_at_1", "value": 15.049000000000001}, {"type": "precision_at_10", "value": 5.831}, {"type": "precision_at_100", "value": 1.322}, {"type": "precision_at_1000", "value": 0.20500000000000002}, {"type": "precision_at_3", "value": 10.749}, 
{"type": "precision_at_5", "value": 8.365}, {"type": "recall_at_1", "value": 6.122}, {"type": "recall_at_10", "value": 22.207}, {"type": "recall_at_100", "value": 47.08}, {"type": "recall_at_1000", "value": 70.182}, {"type": "recall_at_3", "value": 13.416}, {"type": "recall_at_5", "value": 16.672}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia", "type": "dbpedia-entity", "config": "default", "split": "test", "revision": "f097057d03ed98220bc7309ddb10b71a54d667d6"}, "metrics": [{"type": "map_at_1", "value": 4.672}, {"type": "map_at_10", "value": 10.534}, {"type": "map_at_100", "value": 14.798}, {"type": "map_at_1000", "value": 15.927}, {"type": "map_at_3", "value": 7.317}, {"type": "map_at_5", "value": 8.726}, {"type": "ndcg_at_1", "value": 36.5}, {"type": "ndcg_at_10", "value": 26.098}, {"type": "ndcg_at_100", "value": 29.215999999999998}, {"type": "ndcg_at_1000", "value": 36.254999999999995}, {"type": "ndcg_at_3", "value": 29.247}, {"type": "ndcg_at_5", "value": 27.692}, {"type": "precision_at_1", "value": 47.25}, {"type": "precision_at_10", "value": 22.625}, {"type": "precision_at_100", "value": 7.042}, {"type": "precision_at_1000", "value": 1.6129999999999998}, {"type": "precision_at_3", "value": 34.083000000000006}, {"type": "precision_at_5", "value": 29.5}, {"type": "recall_at_1", "value": 4.672}, {"type": "recall_at_10", "value": 15.638}, {"type": "recall_at_100", "value": 36.228}, {"type": "recall_at_1000", "value": 58.831}, {"type": "recall_at_3", "value": 8.578}, {"type": "recall_at_5", "value": 11.18}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB EmotionClassification", "type": "mteb/emotion", "config": "default", "split": "test", "revision": "829147f8f75a25f005913200eb5ed41fae320aa1"}, "metrics": [{"type": "accuracy", "value": 49.919999999999995}, {"type": "f1", "value": 45.37973678791632}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FEVER", "type": "fever", "config": "default", "split": "test", "revision": "1429cf27e393599b8b359b9b72c666f96b2525f9"}, "metrics": [{"type": "map_at_1", "value": 25.801000000000002}, {"type": "map_at_10", "value": 33.941}, {"type": "map_at_100", "value": 34.73}, {"type": "map_at_1000", "value": 34.793}, {"type": "map_at_3", "value": 31.705}, {"type": "map_at_5", "value": 33.047}, {"type": "ndcg_at_1", "value": 27.933000000000003}, {"type": "ndcg_at_10", "value": 38.644}, {"type": "ndcg_at_100", "value": 42.594}, {"type": "ndcg_at_1000", "value": 44.352000000000004}, {"type": "ndcg_at_3", "value": 34.199}, {"type": "ndcg_at_5", "value": 36.573}, {"type": "precision_at_1", "value": 27.933000000000003}, {"type": "precision_at_10", "value": 5.603000000000001}, {"type": "precision_at_100", "value": 0.773}, {"type": "precision_at_1000", "value": 0.094}, {"type": "precision_at_3", "value": 14.171}, {"type": "precision_at_5", "value": 9.786999999999999}, {"type": "recall_at_1", "value": 25.801000000000002}, {"type": "recall_at_10", "value": 50.876}, {"type": "recall_at_100", "value": 69.253}, {"type": "recall_at_1000", "value": 82.907}, {"type": "recall_at_3", "value": 38.879000000000005}, {"type": "recall_at_5", "value": 44.651999999999994}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA2018", "type": "fiqa", "config": "default", "split": "test", "revision": "41b686a7f28c59bcaaa5791efd47c67c8ebe28be"}, "metrics": [{"type": "map_at_1", "value": 9.142}, {"type": "map_at_10", "value": 13.841999999999999}, {"type": "map_at_100", "value": 14.960999999999999}, {"type": "map_at_1000", 
"value": 15.187000000000001}, {"type": "map_at_3", "value": 11.966000000000001}, {"type": "map_at_5", "value": 12.921}, {"type": "ndcg_at_1", "value": 18.364}, {"type": "ndcg_at_10", "value": 18.590999999999998}, {"type": "ndcg_at_100", "value": 24.153}, {"type": "ndcg_at_1000", "value": 29.104000000000003}, {"type": "ndcg_at_3", "value": 16.323}, {"type": "ndcg_at_5", "value": 17.000999999999998}, {"type": "precision_at_1", "value": 18.364}, {"type": "precision_at_10", "value": 5.216}, {"type": "precision_at_100", "value": 1.09}, {"type": "precision_at_1000", "value": 0.193}, {"type": "precision_at_3", "value": 10.751}, {"type": "precision_at_5", "value": 7.932}, {"type": "recall_at_1", "value": 9.142}, {"type": "recall_at_10", "value": 22.747}, {"type": "recall_at_100", "value": 44.585}, {"type": "recall_at_1000", "value": 75.481}, {"type": "recall_at_3", "value": 14.602}, {"type": "recall_at_5", "value": 17.957}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA", "type": "hotpotqa", "config": "default", "split": "test", "revision": "766870b35a1b9ca65e67a0d1913899973551fc6c"}, "metrics": [{"type": "map_at_1", "value": 18.677}, {"type": "map_at_10", "value": 26.616}, {"type": "map_at_100", "value": 27.605}, {"type": "map_at_1000", "value": 27.711999999999996}, {"type": "map_at_3", "value": 24.396}, {"type": "map_at_5", "value": 25.627}, {"type": "ndcg_at_1", "value": 37.352999999999994}, {"type": "ndcg_at_10", "value": 33.995}, {"type": "ndcg_at_100", "value": 38.423}, {"type": "ndcg_at_1000", "value": 40.947}, {"type": "ndcg_at_3", "value": 29.885}, {"type": "ndcg_at_5", "value": 31.874999999999996}, {"type": "precision_at_1", "value": 37.352999999999994}, {"type": "precision_at_10", "value": 7.539999999999999}, {"type": "precision_at_100", "value": 1.107}, {"type": "precision_at_1000", "value": 0.145}, {"type": "precision_at_3", "value": 18.938}, {"type": "precision_at_5", "value": 12.943}, {"type": "recall_at_1", "value": 18.677}, {"type": "recall_at_10", "value": 37.698}, {"type": "recall_at_100", "value": 55.354000000000006}, {"type": "recall_at_1000", "value": 72.255}, {"type": "recall_at_3", "value": 28.406}, {"type": "recall_at_5", "value": 32.357}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ImdbClassification", "type": "mteb/imdb", "config": "default", "split": "test", "revision": "8d743909f834c38949e8323a8a6ce8721ea6c7f4"}, "metrics": [{"type": "accuracy", "value": 74.3292}, {"type": "ap", "value": 68.30186110189658}, {"type": "f1", "value": 74.20709636944783}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MSMARCO", "type": "msmarco", "config": "default", "split": "validation", "revision": "e6838a846e2408f22cf5cc337ebc83e0bcf77849"}, "metrics": [{"type": "map_at_1", "value": 6.889000000000001}, {"type": "map_at_10", "value": 12.321}, {"type": "map_at_100", "value": 13.416}, {"type": "map_at_1000", "value": 13.525}, {"type": "map_at_3", "value": 10.205}, {"type": "map_at_5", "value": 11.342}, {"type": "ndcg_at_1", "value": 7.092}, {"type": "ndcg_at_10", "value": 15.827}, {"type": "ndcg_at_100", "value": 21.72}, {"type": "ndcg_at_1000", "value": 24.836}, {"type": "ndcg_at_3", "value": 11.393}, {"type": "ndcg_at_5", "value": 13.462}, {"type": "precision_at_1", "value": 7.092}, {"type": "precision_at_10", "value": 2.7969999999999997}, {"type": "precision_at_100", "value": 0.583}, {"type": "precision_at_1000", "value": 0.08499999999999999}, {"type": "precision_at_3", "value": 5.019}, {"type": "precision_at_5", "value": 4.06}, 
{"type": "recall_at_1", "value": 6.889000000000001}, {"type": "recall_at_10", "value": 26.791999999999998}, {"type": "recall_at_100", "value": 55.371}, {"type": "recall_at_1000", "value": 80.12899999999999}, {"type": "recall_at_3", "value": 14.573}, {"type": "recall_at_5", "value": 19.557}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "mteb/mtop_domain", "config": "en", "split": "test", "revision": "a7e2a951126a26fc8c6a69f835f33a346ba259e3"}, "metrics": [{"type": "accuracy", "value": 89.6374829001368}, {"type": "f1", "value": 89.20878379358307}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (de)", "type": "mteb/mtop_domain", "config": "de", "split": "test", "revision": "a7e2a951126a26fc8c6a69f835f33a346ba259e3"}, "metrics": [{"type": "accuracy", "value": 84.54212454212454}, {"type": "f1", "value": 82.81080100037023}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (es)", "type": "mteb/mtop_domain", "config": "es", "split": "test", "revision": "a7e2a951126a26fc8c6a69f835f33a346ba259e3"}, "metrics": [{"type": "accuracy", "value": 86.46430953969313}, {"type": "f1", "value": 86.00019824223267}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (fr)", "type": "mteb/mtop_domain", "config": "fr", "split": "test", "revision": "a7e2a951126a26fc8c6a69f835f33a346ba259e3"}, "metrics": [{"type": "accuracy", "value": 81.31850923896022}, {"type": "f1", "value": 81.07860454762863}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (hi)", "type": "mteb/mtop_domain", "config": "hi", "split": "test", "revision": "a7e2a951126a26fc8c6a69f835f33a346ba259e3"}, "metrics": [{"type": "accuracy", "value": 58.23234134098243}, {"type": "f1", "value": 56.63845098081841}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (th)", "type": "mteb/mtop_domain", "config": "th", "split": "test", "revision": "a7e2a951126a26fc8c6a69f835f33a346ba259e3"}, "metrics": [{"type": "accuracy", "value": 72.28571428571429}, {"type": "f1", "value": 70.95796714592039}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": "mteb/mtop_intent", "config": "en", "split": "test", "revision": "6299947a7777084cc2d4b64235bf7190381ce755"}, "metrics": [{"type": "accuracy", "value": 70.68171454628363}, {"type": "f1", "value": 52.57188062729139}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (de)", "type": "mteb/mtop_intent", "config": "de", "split": "test", "revision": "6299947a7777084cc2d4b64235bf7190381ce755"}, "metrics": [{"type": "accuracy", "value": 60.521273598196665}, {"type": "f1", "value": 42.70492970339204}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (es)", "type": "mteb/mtop_intent", "config": "es", "split": "test", "revision": "6299947a7777084cc2d4b64235bf7190381ce755"}, "metrics": [{"type": "accuracy", "value": 64.32288192128087}, {"type": "f1", "value": 45.97360620220273}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (fr)", "type": "mteb/mtop_intent", "config": "fr", "split": "test", "revision": "6299947a7777084cc2d4b64235bf7190381ce755"}, "metrics": [{"type": "accuracy", "value": 58.67209520826808}, {"type": "f1", "value": 42.82844991304579}]}, {"task": {"type": "Classification"}, "dataset": 
{"name": "MTEB MTOPIntentClassification (hi)", "type": "mteb/mtop_intent", "config": "hi", "split": "test", "revision": "6299947a7777084cc2d4b64235bf7190381ce755"}, "metrics": [{"type": "accuracy", "value": 41.95769092864826}, {"type": "f1", "value": 28.914127631431263}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (th)", "type": "mteb/mtop_intent", "config": "th", "split": "test", "revision": "6299947a7777084cc2d4b64235bf7190381ce755"}, "metrics": [{"type": "accuracy", "value": 55.28390596745027}, {"type": "f1", "value": 38.33899250561289}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": "mteb/amazon_massive_intent", "config": "en", "split": "test", "revision": "072a486a144adf7f4479a4a0dddb2152e161e1ea"}, "metrics": [{"type": "accuracy", "value": 70.00336247478144}, {"type": "f1", "value": 68.72041942191649}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "mteb/amazon_massive_scenario", "config": "en", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 75.0268997982515}, {"type": "f1", "value": 75.29844481506652}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P", "type": "mteb/medrxiv-clustering-p2p", "config": "default", "split": "test", "revision": "dcefc037ef84348e49b0d29109e891c01067226b"}, "metrics": [{"type": "v_measure", "value": 30.327566856300813}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "3cd0e71dfbe09d4de0f9e5ecba43e7ce280959dc"}, "metrics": [{"type": "v_measure", "value": 28.01650210863619}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "3bdac13927fdc888b903db93b2ffdbd90b295a69"}, "metrics": [{"type": "map", "value": 31.11041256752524}, {"type": "mrr", "value": 32.14172939750204}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus", "type": "nfcorpus", "config": "default", "split": "test", "revision": "7eb63cc0c1eb59324d709ebed25fcab851fa7610"}, "metrics": [{"type": "map_at_1", "value": 3.527}, {"type": "map_at_10", "value": 9.283}, {"type": "map_at_100", "value": 11.995000000000001}, {"type": "map_at_1000", "value": 13.33}, {"type": "map_at_3", "value": 6.223}, {"type": "map_at_5", "value": 7.68}, {"type": "ndcg_at_1", "value": 36.223}, {"type": "ndcg_at_10", "value": 28.255999999999997}, {"type": "ndcg_at_100", "value": 26.355}, {"type": "ndcg_at_1000", "value": 35.536}, {"type": "ndcg_at_3", "value": 31.962000000000003}, {"type": "ndcg_at_5", "value": 30.61}, {"type": "precision_at_1", "value": 37.771}, {"type": "precision_at_10", "value": 21.889}, {"type": "precision_at_100", "value": 7.1080000000000005}, {"type": "precision_at_1000", "value": 1.989}, {"type": "precision_at_3", "value": 30.857}, {"type": "precision_at_5", "value": 27.307}, {"type": "recall_at_1", "value": 3.527}, {"type": "recall_at_10", "value": 14.015}, {"type": "recall_at_100", "value": 28.402}, {"type": "recall_at_1000", "value": 59.795}, {"type": "recall_at_3", "value": 7.5969999999999995}, {"type": "recall_at_5", "value": 10.641}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ", "type": "nq", "config": "default", "split": "test", "revision": 
"6062aefc120bfe8ece5897809fb2e53bfe0d128c"}, "metrics": [{"type": "map_at_1", "value": 11.631}, {"type": "map_at_10", "value": 19.532}, {"type": "map_at_100", "value": 20.821}, {"type": "map_at_1000", "value": 20.910999999999998}, {"type": "map_at_3", "value": 16.597}, {"type": "map_at_5", "value": 18.197}, {"type": "ndcg_at_1", "value": 13.413}, {"type": "ndcg_at_10", "value": 24.628}, {"type": "ndcg_at_100", "value": 30.883}, {"type": "ndcg_at_1000", "value": 33.216}, {"type": "ndcg_at_3", "value": 18.697}, {"type": "ndcg_at_5", "value": 21.501}, {"type": "precision_at_1", "value": 13.413}, {"type": "precision_at_10", "value": 4.571}, {"type": "precision_at_100", "value": 0.812}, {"type": "precision_at_1000", "value": 0.10300000000000001}, {"type": "precision_at_3", "value": 8.845}, {"type": "precision_at_5", "value": 6.889000000000001}, {"type": "recall_at_1", "value": 11.631}, {"type": "recall_at_10", "value": 38.429}, {"type": "recall_at_100", "value": 67.009}, {"type": "recall_at_1000", "value": 84.796}, {"type": "recall_at_3", "value": 22.74}, {"type": "recall_at_5", "value": 29.266}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval", "type": "quora", "config": "default", "split": "test", "revision": "6205996560df11e3a3da9ab4f926788fc30a7db4"}, "metrics": [{"type": "map_at_1", "value": 66.64}, {"type": "map_at_10", "value": 80.394}, {"type": "map_at_100", "value": 81.099}, {"type": "map_at_1000", "value": 81.122}, {"type": "map_at_3", "value": 77.289}, {"type": "map_at_5", "value": 79.25999999999999}, {"type": "ndcg_at_1", "value": 76.85}, {"type": "ndcg_at_10", "value": 84.68}, {"type": "ndcg_at_100", "value": 86.311}, {"type": "ndcg_at_1000", "value": 86.49900000000001}, {"type": "ndcg_at_3", "value": 81.295}, {"type": "ndcg_at_5", "value": 83.199}, {"type": "precision_at_1", "value": 76.85}, {"type": "precision_at_10", "value": 12.928999999999998}, {"type": "precision_at_100", "value": 1.51}, {"type": "precision_at_1000", "value": 0.156}, {"type": "precision_at_3", "value": 35.557}, {"type": "precision_at_5", "value": 23.576}, {"type": "recall_at_1", "value": 66.64}, {"type": "recall_at_10", "value": 93.059}, {"type": "recall_at_100", "value": 98.922}, {"type": "recall_at_1000", "value": 99.883}, {"type": "recall_at_3", "value": 83.49499999999999}, {"type": "recall_at_5", "value": 88.729}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering", "type": "mteb/reddit-clustering", "config": "default", "split": "test", "revision": "b2805658ae38990172679479369a78b86de8c390"}, "metrics": [{"type": "v_measure", "value": 42.17131361041068}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P", "type": "mteb/reddit-clustering-p2p", "config": "default", "split": "test", "revision": "385e3cb46b4cfa89021f56c4380204149d0efe33"}, "metrics": [{"type": "v_measure", "value": 48.01815621479994}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS", "type": "scidocs", "config": "default", "split": "test", "revision": "5c59ef3e437a0a9651c8fe6fde943e7dce59fba5"}, "metrics": [{"type": "map_at_1", "value": 3.198}, {"type": "map_at_10", "value": 7.550999999999999}, {"type": "map_at_100", "value": 9.232}, {"type": "map_at_1000", "value": 9.51}, {"type": "map_at_3", "value": 5.2940000000000005}, {"type": "map_at_5", "value": 6.343999999999999}, {"type": "ndcg_at_1", "value": 15.8}, {"type": "ndcg_at_10", "value": 13.553999999999998}, {"type": "ndcg_at_100", "value": 20.776}, {"type": "ndcg_at_1000", "value": 
26.204}, {"type": "ndcg_at_3", "value": 12.306000000000001}, {"type": "ndcg_at_5", "value": 10.952}, {"type": "precision_at_1", "value": 15.8}, {"type": "precision_at_10", "value": 7.180000000000001}, {"type": "precision_at_100", "value": 1.762}, {"type": "precision_at_1000", "value": 0.307}, {"type": "precision_at_3", "value": 11.333}, {"type": "precision_at_5", "value": 9.62}, {"type": "recall_at_1", "value": 3.198}, {"type": "recall_at_10", "value": 14.575}, {"type": "recall_at_100", "value": 35.758}, {"type": "recall_at_1000", "value": 62.317}, {"type": "recall_at_3", "value": 6.922000000000001}, {"type": "recall_at_5", "value": 9.767000000000001}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "20a6d6f312dd54037fe07a32d58e5e168867909d"}, "metrics": [{"type": "cos_sim_pearson", "value": 84.5217161312271}, {"type": "cos_sim_spearman", "value": 79.58562467776268}, {"type": "euclidean_pearson", "value": 76.69364353942403}, {"type": "euclidean_spearman", "value": 74.68959282070473}, {"type": "manhattan_pearson", "value": 76.81159265133732}, {"type": "manhattan_spearman", "value": 74.7519444048176}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "fdf84275bb8ce4b49c971d02e84dd1abc677a50f"}, "metrics": [{"type": "cos_sim_pearson", "value": 83.70403706922605}, {"type": "cos_sim_spearman", "value": 74.28502198729447}, {"type": "euclidean_pearson", "value": 83.32719404608066}, {"type": "euclidean_spearman", "value": 75.92189433460788}, {"type": "manhattan_pearson", "value": 83.35841543005293}, {"type": "manhattan_spearman", "value": 75.94458615451978}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", "type": "mteb/sts13-sts", "config": "default", "split": "test", "revision": "1591bfcbe8c69d4bf7fe2a16e2451017832cafb9"}, "metrics": [{"type": "cos_sim_pearson", "value": 84.94127878986795}, {"type": "cos_sim_spearman", "value": 85.35148434923192}, {"type": "euclidean_pearson", "value": 81.71127467071571}, {"type": "euclidean_spearman", "value": 82.88240481546771}, {"type": "manhattan_pearson", "value": 81.72826221967252}, {"type": "manhattan_spearman", "value": 82.90725064625128}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "e2125984e7df8b7871f6ae9949cf6b6795e7c54b"}, "metrics": [{"type": "cos_sim_pearson", "value": 83.1474704168523}, {"type": "cos_sim_spearman", "value": 79.20612995350827}, {"type": "euclidean_pearson", "value": 78.85993329596555}, {"type": "euclidean_spearman", "value": 78.91956572744715}, {"type": "manhattan_pearson", "value": 78.89999720522347}, {"type": "manhattan_spearman", "value": 78.93956842550107}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "1cd7298cac12a96a373b6a2f18738bb3e739a9b6"}, "metrics": [{"type": "cos_sim_pearson", "value": 84.81255514055894}, {"type": "cos_sim_spearman", "value": 85.5217140762934}, {"type": "euclidean_pearson", "value": 82.15024353784499}, {"type": "euclidean_spearman", "value": 83.04155334389833}, {"type": "manhattan_pearson", "value": 82.18598945053624}, {"type": "manhattan_spearman", "value": 83.07248357693301}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": 
"360a0b2dff98700d09e634a01e1cc1624d3e42cd"}, "metrics": [{"type": "cos_sim_pearson", "value": 80.63248465157822}, {"type": "cos_sim_spearman", "value": 82.53853238521991}, {"type": "euclidean_pearson", "value": 78.33936863828221}, {"type": "euclidean_spearman", "value": 79.16305579487414}, {"type": "manhattan_pearson", "value": 78.3888359870894}, {"type": "manhattan_spearman", "value": 79.18504473136467}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "9fc37e8c632af1c87a3d23e685d49552a02582a0"}, "metrics": [{"type": "cos_sim_pearson", "value": 90.09066290639687}, {"type": "cos_sim_spearman", "value": 90.43893699357069}, {"type": "euclidean_pearson", "value": 82.39520777222396}, {"type": "euclidean_spearman", "value": 81.23948185395952}, {"type": "manhattan_pearson", "value": 82.35529784653383}, {"type": "manhattan_spearman", "value": 81.12681522483975}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": "test", "revision": "2de6ce8c1921b71a755b262c6b57fef195dd7906"}, "metrics": [{"type": "cos_sim_pearson", "value": 63.52752323046846}, {"type": "cos_sim_spearman", "value": 63.19719780439462}, {"type": "euclidean_pearson", "value": 58.29085490641428}, {"type": "euclidean_spearman", "value": 58.975178656335046}, {"type": "manhattan_pearson", "value": 58.183542772416985}, {"type": "manhattan_spearman", "value": 59.190630462178994}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmark", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "8913289635987208e6e7c72789e4be2fe94b6abd"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.45100366635687}, {"type": "cos_sim_spearman", "value": 85.66816193002651}, {"type": "euclidean_pearson", "value": 81.87976731329091}, {"type": "euclidean_spearman", "value": 82.01382867690964}, {"type": "manhattan_pearson", "value": 81.88260155706726}, {"type": "manhattan_spearman", "value": 82.05258597906492}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": "56a6d0140cf6356659e2a7c1413286a774468d44"}, "metrics": [{"type": "map", "value": 77.53549990038017}, {"type": "mrr", "value": 93.37474163454556}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact", "type": "scifact", "config": "default", "split": "test", "revision": "a75ae049398addde9b70f6b268875f5cbce99089"}, "metrics": [{"type": "map_at_1", "value": 31.167}, {"type": "map_at_10", "value": 40.778}, {"type": "map_at_100", "value": 42.063}, {"type": "map_at_1000", "value": 42.103}, {"type": "map_at_3", "value": 37.12}, {"type": "map_at_5", "value": 39.205}, {"type": "ndcg_at_1", "value": 33.667}, {"type": "ndcg_at_10", "value": 46.662}, {"type": "ndcg_at_100", "value": 51.995999999999995}, {"type": "ndcg_at_1000", "value": 53.254999999999995}, {"type": "ndcg_at_3", "value": 39.397999999999996}, {"type": "ndcg_at_5", "value": 42.934}, {"type": "precision_at_1", "value": 33.667}, {"type": "precision_at_10", "value": 7.1}, {"type": "precision_at_100", "value": 0.993}, {"type": "precision_at_1000", "value": 0.11}, {"type": "precision_at_3", "value": 16.111}, {"type": "precision_at_5", "value": 11.600000000000001}, {"type": "recall_at_1", "value": 31.167}, {"type": "recall_at_10", "value": 63.744}, {"type": "recall_at_100", "value": 87.156}, {"type": 
"recall_at_1000", "value": 97.556}, {"type": "recall_at_3", "value": 44.0}, {"type": "recall_at_5", "value": 52.556000000000004}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "5a8256d0dff9c4bd3be3ba3e67e4e70173f802ea"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.55148514851486}, {"type": "cos_sim_ap", "value": 80.535236573428}, {"type": "cos_sim_f1", "value": 75.01331912626532}, {"type": "cos_sim_precision", "value": 80.27366020524515}, {"type": "cos_sim_recall", "value": 70.39999999999999}, {"type": "dot_accuracy", "value": 99.04851485148515}, {"type": "dot_ap", "value": 28.505358821499726}, {"type": "dot_f1", "value": 36.36363636363637}, {"type": "dot_precision", "value": 37.160751565762006}, {"type": "dot_recall", "value": 35.6}, {"type": "euclidean_accuracy", "value": 99.4990099009901}, {"type": "euclidean_ap", "value": 74.95819047075476}, {"type": "euclidean_f1", "value": 71.15489874110564}, {"type": "euclidean_precision", "value": 78.59733978234583}, {"type": "euclidean_recall", "value": 65.0}, {"type": "manhattan_accuracy", "value": 99.50198019801981}, {"type": "manhattan_ap", "value": 75.02070096015086}, {"type": "manhattan_f1", "value": 71.20535714285712}, {"type": "manhattan_precision", "value": 80.55555555555556}, {"type": "manhattan_recall", "value": 63.800000000000004}, {"type": "max_accuracy", "value": 99.55148514851486}, {"type": "max_ap", "value": 80.535236573428}, {"type": "max_f1", "value": 75.01331912626532}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClustering", "type": "mteb/stackexchange-clustering", "config": "default", "split": "test", "revision": "70a89468f6dccacc6aa2b12a6eac54e74328f235"}, "metrics": [{"type": "v_measure", "value": 54.13314692311623}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P", "type": "mteb/stackexchange-clustering-p2p", "config": "default", "split": "test", "revision": "d88009ab563dd0b16cfaf4436abaf97fa3550cf0"}, "metrics": [{"type": "v_measure", "value": 31.115181648287145}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions", "type": "mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "ef807ea29a75ec4f91b50fd4191cb4ee4589a9f9"}, "metrics": [{"type": "map", "value": 44.771112666694336}, {"type": "mrr", "value": 45.30415764790765}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEval", "type": "mteb/summeval", "config": "default", "split": "test", "revision": "8753c2788d36c01fc6f05d03fe3f7268d63f9122"}, "metrics": [{"type": "cos_sim_pearson", "value": 30.849429597669374}, {"type": "cos_sim_spearman", "value": 30.384175038360194}, {"type": "dot_pearson", "value": 29.030383429536823}, {"type": "dot_spearman", "value": 28.03273624951732}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID", "type": "trec-covid", "config": "default", "split": "test", "revision": "2c8041b2c07a79b6f7ba8fe6acc72e5d9f92d217"}, "metrics": [{"type": "map_at_1", "value": 0.19499999999999998}, {"type": "map_at_10", "value": 1.0959999999999999}, {"type": "map_at_100", "value": 5.726}, {"type": "map_at_1000", "value": 13.611999999999998}, {"type": "map_at_3", "value": 0.45399999999999996}, {"type": "map_at_5", "value": 0.67}, {"type": "ndcg_at_1", "value": 71.0}, {"type": "ndcg_at_10", "value": 55.352999999999994}, 
{"type": "ndcg_at_100", "value": 40.797}, {"type": "ndcg_at_1000", "value": 35.955999999999996}, {"type": "ndcg_at_3", "value": 63.263000000000005}, {"type": "ndcg_at_5", "value": 60.14000000000001}, {"type": "precision_at_1", "value": 78.0}, {"type": "precision_at_10", "value": 56.99999999999999}, {"type": "precision_at_100", "value": 41.199999999999996}, {"type": "precision_at_1000", "value": 16.154}, {"type": "precision_at_3", "value": 66.667}, {"type": "precision_at_5", "value": 62.8}, {"type": "recall_at_1", "value": 0.19499999999999998}, {"type": "recall_at_10", "value": 1.3639999999999999}, {"type": "recall_at_100", "value": 9.317}, {"type": "recall_at_1000", "value": 33.629999999999995}, {"type": "recall_at_3", "value": 0.49300000000000005}, {"type": "recall_at_5", "value": 0.756}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Touche2020", "type": "webis-touche2020", "config": "default", "split": "test", "revision": "527b7d77e16e343303e68cb6af11d6e18b9f7b3b"}, "metrics": [{"type": "map_at_1", "value": 1.335}, {"type": "map_at_10", "value": 6.293}, {"type": "map_at_100", "value": 10.928}, {"type": "map_at_1000", "value": 12.359}, {"type": "map_at_3", "value": 3.472}, {"type": "map_at_5", "value": 4.935}, {"type": "ndcg_at_1", "value": 19.387999999999998}, {"type": "ndcg_at_10", "value": 16.178}, {"type": "ndcg_at_100", "value": 28.149}, {"type": "ndcg_at_1000", "value": 39.845000000000006}, {"type": "ndcg_at_3", "value": 19.171}, {"type": "ndcg_at_5", "value": 17.864}, {"type": "precision_at_1", "value": 20.408}, {"type": "precision_at_10", "value": 14.49}, {"type": "precision_at_100", "value": 6.306000000000001}, {"type": "precision_at_1000", "value": 1.3860000000000001}, {"type": "precision_at_3", "value": 21.088}, {"type": "precision_at_5", "value": 18.367}, {"type": "recall_at_1", "value": 1.335}, {"type": "recall_at_10", "value": 10.825999999999999}, {"type": "recall_at_100", "value": 39.251000000000005}, {"type": "recall_at_1000", "value": 74.952}, {"type": "recall_at_3", "value": 4.9110000000000005}, {"type": "recall_at_5", "value": 7.312}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "edfaf9da55d3dd50d43143d90c1ac476895ae6de"}, "metrics": [{"type": "accuracy", "value": 69.93339999999999}, {"type": "ap", "value": 13.87476602492533}, {"type": "f1", "value": 53.867357615848555}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": "62146448f05be9e52a36b8ee9936447ea787eede"}, "metrics": [{"type": "accuracy", "value": 62.43916242218449}, {"type": "f1", "value": 62.870386304954685}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "091a54f9a36281ce7d6590ec8c75dd485e7e01d4"}, "metrics": [{"type": "v_measure", "value": 37.202082549859796}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 83.65023544137807}, {"type": "cos_sim_ap", "value": 65.99787692764193}, {"type": "cos_sim_f1", "value": 62.10650887573965}, 
{"type": "cos_sim_precision", "value": 56.30901287553648}, {"type": "cos_sim_recall", "value": 69.23482849604221}, {"type": "dot_accuracy", "value": 79.10830303391549}, {"type": "dot_ap", "value": 48.80109642320246}, {"type": "dot_f1", "value": 51.418744625967314}, {"type": "dot_precision", "value": 40.30253107683091}, {"type": "dot_recall", "value": 71.00263852242745}, {"type": "euclidean_accuracy", "value": 82.45812719794957}, {"type": "euclidean_ap", "value": 60.09969493259607}, {"type": "euclidean_f1", "value": 57.658573789246226}, {"type": "euclidean_precision", "value": 55.62913907284768}, {"type": "euclidean_recall", "value": 59.84168865435356}, {"type": "manhattan_accuracy", "value": 82.46408773916671}, {"type": "manhattan_ap", "value": 60.116199786815116}, {"type": "manhattan_f1", "value": 57.683903860160235}, {"type": "manhattan_precision", "value": 53.41726618705036}, {"type": "manhattan_recall", "value": 62.69129287598945}, {"type": "max_accuracy", "value": 83.65023544137807}, {"type": "max_ap", "value": 65.99787692764193}, {"type": "max_f1", "value": 62.10650887573965}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": [{"type": "cos_sim_accuracy", "value": 88.34943920518494}, {"type": "cos_sim_ap", "value": 84.5428891020442}, {"type": "cos_sim_f1", "value": 77.09709933923172}, {"type": "cos_sim_precision", "value": 74.83150952967607}, {"type": "cos_sim_recall", "value": 79.50415768401602}, {"type": "dot_accuracy", "value": 84.53448208949432}, {"type": "dot_ap", "value": 73.96328242371995}, {"type": "dot_f1", "value": 70.00553786515299}, {"type": "dot_precision", "value": 63.58777665995976}, {"type": "dot_recall", "value": 77.86418232214352}, {"type": "euclidean_accuracy", "value": 86.87662514068381}, {"type": "euclidean_ap", "value": 81.45499631520235}, {"type": "euclidean_f1", "value": 73.46567109816063}, {"type": "euclidean_precision", "value": 69.71037533697381}, {"type": "euclidean_recall", "value": 77.6485987064983}, {"type": "manhattan_accuracy", "value": 86.88244654014825}, {"type": "manhattan_ap", "value": 81.47180273946366}, {"type": "manhattan_f1", "value": 73.44624393136418}, {"type": "manhattan_precision", "value": 70.80385852090032}, {"type": "manhattan_recall", "value": 76.29350169387126}, {"type": "max_accuracy", "value": 88.34943920518494}, {"type": "max_ap", "value": 84.5428891020442}, {"type": "max_f1", "value": 77.09709933923172}]}]}]} | Muennighoff/SGPT-5.8B-weightedmean-nli-bitfit | null | [
"sentence-transformers",
"pytorch",
"gptj",
"feature-extraction",
"sentence-similarity",
"mteb",
"arxiv:2202.08904",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.08904"
] | [] | TAGS
#sentence-transformers #pytorch #gptj #feature-extraction #sentence-similarity #mteb #arxiv-2202.08904 #model-index #endpoints_compatible #has_space #region-us
|
# SGPT-5.8B-weightedmean-msmarco-specb-bitfit
## Usage
For usage instructions, refer to our codebase: URL
## Evaluation Results
For eval results, refer to our paper: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 249592 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
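As an illustration, a setup like the one described here (a MultipleNegativesRankingLoss fed by a DataLoader through the fit() method) is typically wired together as in the sketch below. The stand-in encoder name, batch size, and toy training pairs are assumptions for readability, not the actual configuration of this 5.8B model.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Small public encoder as a stand-in for the actual 5.8B model (assumption).
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Toy positive pairs; the real run used a DataLoader of length 249592.
train_examples = [
    InputExample(texts=["A man is eating food.", "A man is eating a meal."]),
    InputExample(texts=["A plane is taking off.", "An airplane is departing."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# MultipleNegativesRankingLoss treats the other pairs in a batch as negatives.
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=10,
)
```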
## Full Model Architecture
## Citing & Authors
| [
"# SGPT-5.8B-weightedmean-msmarco-specb-bitfit",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 249592 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #gptj #feature-extraction #sentence-similarity #mteb #arxiv-2202.08904 #model-index #endpoints_compatible #has_space #region-us \n",
"# SGPT-5.8B-weightedmean-msmarco-specb-bitfit",
"## Usage\n\nFor usage instructions, refer to our codebase: URL",
"## Evaluation Results\n\nFor eval results, refer to our paper: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 249592 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
text-classification | transformers | My First Model
- for wolf classification | {} | Mulin/my_wolf_model | null | [
"transformers",
"tf",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #tf #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
| My First Model
- for wolf classification | [] | [
"TAGS\n#transformers #tf #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 0k (uncased)
This is the seed 0 intermediate checkpoint at 0k steps of the MultiBERTs (pretrained BERT) model, pretrained on English using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multiberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-0k')
model = BertModel.from_pretrained("multiberts-seed-0-0k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
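If you only need sentence- or token-level features, the returned object from the snippet above already contains them; a short continuation (using the names defined there, and assuming the standard BERT-base hidden size of 768):

```python
# Continuing from the snippet above: inspect the encoder outputs.
last_hidden = output.last_hidden_state    # shape: (batch_size, seq_len, 768)
cls_embedding = last_hidden[:, 0]         # representation of the [CLS] token
print(last_hidden.shape, cls_embedding.shape)
```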
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
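In code, the selection rule above amounts to something like the following illustrative sketch (not the actual pretraining implementation):

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Illustrative sketch of the 15% / 80-10-10 masking rule described above."""
    masked = list(tokens)
    for i in range(len(tokens)):
        if random.random() < mask_prob:      # 15% of tokens are selected
            r = random.random()
            if r < 0.8:                      # 80% of those: replace with [MASK]
                masked[i] = "[MASK]"
            elif r < 0.9:                    # 10%: replace with a random token
                masked[i] = random.choice(vocab)
            # remaining 10%: leave the token unchanged
    return masked
```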
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
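In Hugging Face / PyTorch terms, that optimisation setup corresponds roughly to the configuration sketched below; the step counts come from this paragraph, while details such as per-parameter weight-decay grouping are simplified away.

```python
from torch.optim import AdamW
from transformers import get_linear_schedule_with_warmup

# `model` is the BERT model being pretrained (simplified: no per-parameter weight-decay groups).
optimizer = AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,
    num_training_steps=2_000_000,
)
```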
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-0k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 1000k (uncased)
Seed 0 intermediate checkpoint 1000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multiberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT-2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Use the full repository id of this intermediate checkpoint on the Hub.
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-1000k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-1000k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)  # output.last_hidden_state holds the per-token features
```
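Because the checkpoint was trained with an MLM head, it can also be queried directly for masked-token predictions. The
sketch below uses the `fill-mask` pipeline and assumes the full Hub id from this card's metadata; if the MLM head
weights were not exported with this checkpoint, the pipeline would initialize them randomly, so treat the outputs as
illustrative only.
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="MultiBertGunjanPatrick/multiberts-seed-0-1000k")
# Prints the top candidate tokens for the [MASK] position with their scores.
print(unmasker("The man worked as a [MASK]."))
```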
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
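To see this format concretely, you can encode a sentence pair with the tokenizer and decode the result; the special
tokens appear exactly as above. The Hub id is taken from this card's metadata and the sentences are invented for the
example.
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-1000k")
encoded = tokenizer("The cat sat on the mat.", "It quickly fell asleep.")
print(tokenizer.decode(encoded["input_ids"]))
# [CLS] the cat sat on the mat. [SEP] it quickly fell asleep. [SEP]
```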
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-1000k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 100k (uncased)
Seed 0 intermediate checkpoint 100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multiberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
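As a concrete, purely illustrative sketch of that idea, the snippet below extracts the pooled [CLS] representation for
a couple of toy labelled sentences and fits a simple scikit-learn classifier on top. The Hub id comes from this card's
metadata; the texts, labels and choice of classifier are assumptions made for the example.
```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import BertModel, BertTokenizer

name = "MultiBertGunjanPatrick/multiberts-seed-0-100k"
tokenizer = BertTokenizer.from_pretrained(name)
model = BertModel.from_pretrained(name).eval()

texts = ["a delightful little film", "a dull, lifeless script"]  # toy sentences
labels = [1, 0]                                                  # toy labels

with torch.no_grad():
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    features = model(**batch).pooler_output                      # one vector per sentence

clf = LogisticRegression().fit(features.numpy(), labels)
```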
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT-2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Use the full repository id of this intermediate checkpoint on the Hub.
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-100k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-100k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)  # output.last_hidden_state holds the per-token features
```
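Since next sentence prediction is part of the pretraining objective, the raw checkpoint can also be loaded with an NSP
head, as sketched below. This assumes the NSP head weights are present in the exported checkpoint; the example
sentences are invented for illustration.
```python
import torch
from transformers import BertForNextSentencePrediction, BertTokenizer

name = "MultiBertGunjanPatrick/multiberts-seed-0-100k"
tokenizer = BertTokenizer.from_pretrained(name)
model = BertForNextSentencePrediction.from_pretrained(name)

first = "The storm knocked out the power."
second = "Candles were the only light left in the house."
inputs = tokenizer(first, second, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
# Index 0 = "second sentence follows the first", index 1 = "random sentence".
print(torch.softmax(logits, dim=-1))
```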
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-100k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 1100k (uncased)
This is the 1100k-step intermediate checkpoint of the seed 0 MultiBERTs (pretrained BERT) model, pretrained on English text with a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
# note: on the Hugging Face Hub this checkpoint is hosted under MultiBertGunjanPatrick/multiberts-seed-0-1100k
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-1100k')
model = BertModel.from_pretrained("multiberts-seed-0-1100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
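The raw checkpoint can also be used for the masked language modeling objective it was trained on. Below is a hedged sketch, not part of the original card: it assumes the MLM head weights are present in this pretraining checkpoint, and the example sentence is an arbitrary placeholder.

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-1100k')
mlm_model = BertForMaskedLM.from_pretrained('multiberts-seed-0-1100k')

text = "The capital of France is [MASK]."
encoded = tokenizer(text, return_tensors='pt')
with torch.no_grad():
    logits = mlm_model(**encoded).logits

# find the [MASK] position and decode the highest-scoring token for it
mask_pos = (encoded['input_ids'][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```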
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short illustrative sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
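A minimal, illustrative sketch of this 15% / 80-10-10 masking rule is shown below. It is a reimplementation for clarity, not the original pretraining data pipeline; the function name and the `-100` ignore-index convention are assumptions.

```python
import random

def mask_tokens(token_ids, mask_id, vocab_size, mlm_prob=0.15):
    """Apply the 80/10/10 masking rule to a list of token ids."""
    masked = list(token_ids)
    labels = [-100] * len(token_ids)          # -100: position is ignored by the MLM loss
    for i, tok in enumerate(token_ids):
        if random.random() >= mlm_prob:       # ~85% of tokens are not selected at all
            continue
        labels[i] = tok                       # the model must predict the original token here
        r = random.random()
        if r < 0.8:                           # 80% of selected tokens -> [MASK]
            masked[i] = mask_id
        elif r < 0.9:                         # 10% -> a random vocabulary token
            masked[i] = random.randrange(vocab_size)
        # remaining 10%: the token is left unchanged
    return masked, labels
```

In the original procedure the random replacement is drawn so that it differs from the token it replaces; the sketch above does not enforce that detail.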
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
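The schedule described above (10,000 warmup steps, then linear decay over the two million training steps) can be approximated in PyTorch as sketched below; the use of `AdamW` and of Hugging Face's `get_linear_schedule_with_warmup` are assumptions standing in for the original TensorFlow pretraining code.

```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained('multiberts-seed-0-1100k')
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4,
                              betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(optimizer,
                                            num_warmup_steps=10_000,
                                            num_training_steps=2_000_000)
# inside the training loop: loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```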
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-1100k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 Checkpoint 1100k (uncased)
Seed 0 intermediate checkpoint 1100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 Checkpoint 1100k (uncased)\nSeed 0 intermediate checkpoint 1100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 Checkpoint 1100k (uncased)\nSeed 0 intermediate checkpoint 1100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 1200k (uncased)
This is the 1200k-step intermediate checkpoint of the seed 0 MultiBERTs (pretrained BERT) model, pretrained on English text with a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
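As a concrete illustration of that last point, the hedged sketch below extracts [CLS] features with this checkpoint and trains a simple scikit-learn classifier on top of them; the toy sentences, labels and the scikit-learn dependency are assumptions added for illustration.

```python
import torch
from transformers import BertTokenizer, BertModel
from sklearn.linear_model import LogisticRegression

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-1200k')
model = BertModel.from_pretrained('multiberts-seed-0-1200k')
model.eval()

texts = ["a great movie", "a terrible movie"]   # toy labelled sentences (placeholder data)
labels = [1, 0]

with torch.no_grad():
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
    features = model(**enc).last_hidden_state[:, 0, :]   # the [CLS] token representation

classifier = LogisticRegression().fit(features.numpy(), labels)
print(classifier.predict(features.numpy()))
```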
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
# note: on the Hugging Face Hub this checkpoint is hosted under MultiBertGunjanPatrick/multiberts-seed-0-1200k
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-1200k')
model = BertModel.from_pretrained("multiberts-seed-0-1200k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
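The raw checkpoint can likewise be queried for the next sentence prediction objective; a hedged sketch follows, assuming the NSP head weights are available in this pretraining checkpoint and using two made-up sentences.

```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-1200k')
nsp_model = BertForNextSentencePrediction.from_pretrained('multiberts-seed-0-1200k')

prompt = "The weather turned cold overnight."
candidate = "Snow started falling before dawn."
encoded = tokenizer(prompt, candidate, return_tensors='pt')

with torch.no_grad():
    logits = nsp_model(**encoded).logits
# index 0: "candidate follows prompt", index 1: "candidate is a random sentence"
print(torch.softmax(logits, dim=-1))
```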
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
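A hedged sketch of how a WordPiece tokenizer for this checkpoint produces the sentence-pair layout shown above (the special-token layout only, not the original data pipeline):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-1200k')
encoded = tokenizer("Sentence A", "Sentence B")

print(tokenizer.convert_ids_to_tokens(encoded['input_ids']))
# ['[CLS]', 'sentence', 'a', '[SEP]', 'sentence', 'b', '[SEP]']
print(encoded['token_type_ids'])   # 0 for segment A tokens, 1 for segment B tokens
```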
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-1200k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 Checkpoint 1200k (uncased)
Seed 0 intermediate checkpoint 1200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 Checkpoint 1200k (uncased)\nSeed 0 intermediate checkpoint 1200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 Checkpoint 1200k (uncased)\nSeed 0 intermediate checkpoint 1200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 120k (uncased)
This is the 120k-step intermediate checkpoint of the seed 0 MultiBERTs (pretrained BERT) model, pretrained on English text with a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
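As a hedged illustration of that fine-tuning use, the sketch below loads this checkpoint into a sequence classification head; the two toy sentences, the binary labels and the single gradient step are placeholders rather than a full training recipe.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-120k')
model = BertForSequenceClassification.from_pretrained('multiberts-seed-0-120k', num_labels=2)

batch = tokenizer(["a great movie", "a terrible movie"], padding=True, return_tensors='pt')
labels = torch.tensor([1, 0])

outputs = model(**batch, labels=labels)   # forward pass returns the loss and the logits
outputs.loss.backward()                   # an optimizer step would follow in real fine-tuning
print(outputs.loss.item(), outputs.logits.shape)
```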
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
# note: on the Hugging Face Hub this checkpoint is hosted under MultiBertGunjanPatrick/multiberts-seed-0-120k
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-120k')
model = BertModel.from_pretrained("multiberts-seed-0-120k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
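For a BERT-base-sized checkpoint such as this one, `output.last_hidden_state` holds one vector per input token (shape `(batch, sequence_length, 768)`), and `output.pooler_output` holds a single vector per sequence that is often used as a sentence-level feature; the hidden size of 768 is an assumption based on the BERT-base architecture that MultiBERTs reproduces.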
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
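For a rough sense of scale, these numbers imply an upper bound on the total number of token positions processed during pretraining (an estimate assuming every sequence in every batch is the full 512 tokens):

```python
# two million steps x 256 sequences per batch x 512 tokens per sequence
steps, batch_size, seq_len = 2_000_000, 256, 512
print(f"{steps * batch_size * seq_len:,} token positions")   # 262,144,000,000
```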
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-120k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 Checkpoint 120k (uncased)
Seed 0 intermediate checkpoint 120k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 Checkpoint 120k (uncased)\nSeed 0 intermediate checkpoint 120k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 Checkpoint 120k (uncased)\nSeed 0 intermediate checkpoint 120k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 1300k (uncased)
Seed 0 intermediate checkpoint 1300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
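For illustration, a minimal sequence-classification fine-tuning sketch could look as follows (the repository name is taken from this card's metadata; the toy texts and labels are hypothetical):
```python
# Minimal fine-tuning sketch (illustrative only, not the original training setup).
import torch
from transformers import BertTokenizer, BertForSequenceClassification

name = "MultiBertGunjanPatrick/multiberts-seed-0-1300k"
tokenizer = BertTokenizer.from_pretrained(name)
model = BertForSequenceClassification.from_pretrained(name, num_labels=2)  # new, randomly initialized head

texts = ["A great movie.", "A terrible movie."]  # hypothetical toy data
labels = torch.tensor([1, 0])
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

outputs = model(**inputs, labels=labels)
outputs.loss.backward()  # one illustrative gradient step; optimizer and training loop omitted
```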
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-0-1300k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-1300k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
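The 80/10/10 replacement scheme above can be sketched roughly as follows (a simplified illustration, not the original pretraining code, which also handles special tokens and whole-word pieces):
```python
# Simplified sketch of the MLM corruption described above.
import random

def corrupt(token_ids, vocab_size, mask_id, mlm_prob=0.15):
    inputs, labels = list(token_ids), [-100] * len(token_ids)  # -100 = position ignored by the loss
    for i, tok in enumerate(token_ids):
        if random.random() < mlm_prob:        # 15% of tokens are selected for prediction
            labels[i] = tok                   # the model must recover the original token
            r = random.random()
            if r < 0.8:                       # 80% of selected tokens become [MASK]
                inputs[i] = mask_id
            elif r < 0.9:                     # 10% become a random vocabulary token
                inputs[i] = random.randrange(vocab_size)
            # the remaining 10% are left unchanged
    return inputs, labels
```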
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-1300k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 Checkpoint 1300k (uncased)
Seed 0 intermediate checkpoint 1300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 Checkpoint 1300k (uncased)\nSeed 0 intermediate checkpoint 1300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 Checkpoint 1300k (uncased)\nSeed 0 intermediate checkpoint 1300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 1400k (uncased)
Seed 0 intermediate checkpoint 1400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-0-1400k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-1400k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
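The raw checkpoint can also be queried for masked language modeling, roughly as follows (a sketch using the repository name from this card's metadata; predictions from an intermediate checkpoint may be noisy):
```python
# Sketch: masked language modeling with the raw checkpoint.
import torch
from transformers import BertTokenizer, BertForMaskedLM

name = "MultiBertGunjanPatrick/multiberts-seed-0-1400k"
tokenizer = BertTokenizer.from_pretrained(name)
model = BertForMaskedLM.from_pretrained(name)

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
mask_positions = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_positions].argmax(-1)))  # most likely filler token(s)
```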
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-1400k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 Checkpoint 1400k (uncased)
Seed 0 intermediate checkpoint 1400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 Checkpoint 1400k (uncased)\nSeed 0 intermediate checkpoint 1400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 Checkpoint 1400k (uncased)\nSeed 0 intermediate checkpoint 1400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 140k (uncased)
Seed 0 intermediate checkpoint 140k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-0-140k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-140k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
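Next sentence prediction with the raw checkpoint can be sketched as follows (illustrative; in the Transformers implementation, index 0 means "sentence B follows sentence A"):
```python
# Sketch: next sentence prediction with the raw checkpoint.
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

name = "MultiBertGunjanPatrick/multiberts-seed-0-140k"
tokenizer = BertTokenizer.from_pretrained(name)
model = BertForNextSentencePrediction.from_pretrained(name)

inputs = tokenizer("She opened the door.", "The room was dark.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(torch.softmax(logits, dim=-1))  # [P(is next), P(is random)]
```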
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
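The 15% / 80% / 10% / 10% scheme above is easy to reproduce. The following is a small illustrative sketch (mirroring the dynamic masking in 🤗 Transformers' `DataCollatorForLanguageModeling`, not the original TensorFlow training code); the `bert-base-uncased` tokenizer stands in here for any WordPiece tokenizer with a `[MASK]` token.
```python
import torch
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")  # stand-in WordPiece tokenizer

def mask_tokens(input_ids, mlm_probability=0.15):
    labels = input_ids.clone()
    # choose 15% of the non-special tokens as prediction targets
    probability_matrix = torch.full(labels.shape, mlm_probability)
    special_tokens_mask = torch.tensor(
        [tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True) for ids in labels.tolist()],
        dtype=torch.bool,
    )
    probability_matrix.masked_fill_(special_tokens_mask, value=0.0)
    masked_indices = torch.bernoulli(probability_matrix).bool()
    labels[~masked_indices] = -100  # loss is only computed on the masked positions

    # 80% of the chosen tokens become [MASK]
    indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    input_ids[indices_replaced] = tokenizer.mask_token_id

    # 10% become a random token; the remaining 10% are left unchanged
    indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced
    input_ids[indices_random] = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)[indices_random]
    return input_ids, labels

batch = tokenizer(["The quick brown fox jumps over the lazy dog."], return_tensors="pt")
masked_ids, labels = mask_tokens(batch["input_ids"].clone())
```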
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
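The schedule described above (linear warmup over 10,000 steps, then linear decay to zero over the remaining steps) can be reproduced with the standard scheduler helper. This is a hedged sketch: PyTorch's `AdamW` stands in for the original TensorFlow Adam-with-weight-decay setup, and the single placeholder parameter stands in for `model.parameters()`.
```python
import torch
from transformers import get_linear_schedule_with_warmup

total_steps, warmup_steps = 2_000_000, 10_000  # hyperparameters reported above

params = [torch.nn.Parameter(torch.zeros(1))]  # placeholder; use model.parameters() in practice
optimizer = torch.optim.AdamW(params, lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=warmup_steps, num_training_steps=total_steps
)

optimizer.step()   # one training step ...
scheduler.step()   # ... followed by one scheduler step
print(scheduler.get_last_lr())  # ~1e-08 after the first warmup step, peaking at 1e-4 at step 10,000
```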
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-140k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 Checkpoint 140k (uncased)
Seed 0 intermediate checkpoint 140k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
Each MultiBERTs model is a transformer model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 Checkpoint 140k (uncased)\nSeed 0 intermediate checkpoint 140k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 Checkpoint 140k (uncased)\nSeed 0 intermediate checkpoint 140k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 1500k (uncased)
Seed 0 intermediate checkpoint 1500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
Each MultiBERTs model is a transformer model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
# note: on the Hugging Face Hub this checkpoint is hosted under the MultiBertGunjanPatrick namespace
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-0-1500k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-1500k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
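Because the pretraining checkpoint also contains the next-sentence-prediction head, the NSP objective described above can be queried directly. A minimal sketch, assuming the `MultiBertGunjanPatrick/multiberts-seed-0-1500k` Hub id from this card's metadata (the sentences are placeholders, and the unused MLM head is simply dropped when loading):
```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

repo = "MultiBertGunjanPatrick/multiberts-seed-0-1500k"  # assumed hub id for this checkpoint
tokenizer = BertTokenizer.from_pretrained(repo)
model = BertForNextSentencePrediction.from_pretrained(repo)

prompt = "The sky was getting darker by the minute."
next_sentence = "Heavy rain started to fall shortly after."
encoding = tokenizer(prompt, next_sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**encoding).logits
# index 0 = "B follows A", index 1 = "B is a random sentence"
print(torch.softmax(logits, dim=-1))
```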
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-1500k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 Checkpoint 1500k (uncased)
Seed 0 intermediate checkpoint 1500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
Each MultiBERTs model is a transformer model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 Checkpoint 1500k (uncased)\nSeed 0 intermediate checkpoint 1500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 Checkpoint 1500k (uncased)\nSeed 0 intermediate checkpoint 1500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 1600k (uncased)
Seed 0 intermediate checkpoint 1600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
Each MultiBERTs model is a transformer model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
# note: on the Hugging Face Hub this checkpoint is hosted under the MultiBertGunjanPatrick namespace
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-0-1600k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-1600k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
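Since the card stresses that the checkpoint is mostly intended to be fine-tuned, here is a hedged sketch of plugging the pretrained encoder into a sequence-classification head. The Hub id is taken from this card's metadata; the two-label setup and the toy batch are placeholders, and the classification head is freshly initialised on top of the pretrained weights.
```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

repo = "MultiBertGunjanPatrick/multiberts-seed-0-1600k"  # assumed hub id for this checkpoint
tokenizer = BertTokenizer.from_pretrained(repo)
model = BertForSequenceClassification.from_pretrained(repo, num_labels=2)  # new head, pretrained encoder

batch = tokenizer(["great movie", "terrible movie"], padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])
loss = model(**batch, labels=labels).loss  # standard cross-entropy over the two labels
loss.backward()  # in practice, wrap this in a full training loop (e.g. the Trainer API)
```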
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
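For a rough sense of scale, the figures above bound how much text each checkpoint has seen: two million steps at batch size 256 and sequence length 512 amount to about 262 billion token positions (an upper bound, since some positions are padding). A one-line check:
```python
steps, batch_size, seq_len = 2_000_000, 256, 512
print(f"{steps * batch_size * seq_len:,} token positions")  # 262,144,000,000 (~262B) by the final 2M-step checkpoint
```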
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-1600k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 Checkpoint 1600k (uncased)
Seed 0 intermediate checkpoint 1600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
Each MultiBERTs model is a transformer model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 Checkpoint 1600k (uncased)\nSeed 0 intermediate checkpoint 1600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 Checkpoint 1600k (uncased)\nSeed 0 intermediate checkpoint 1600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 160k (uncased)
Seed 0 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multiberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-160k")  # repository id from this card's metadata
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-160k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a toy sketch of this procedure follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
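For concreteness, here is a minimal, illustrative PyTorch sketch of this 80/10/10 masking scheme. It is not the original training code: the function name is made up, the vocabulary size of 30,000 comes from the preprocessing description above, and the `[MASK]` token id of 103 is an assumption based on the standard uncased BERT WordPiece vocabulary.

```python
import torch

def mask_tokens(input_ids, mask_token_id=103, vocab_size=30000, mlm_probability=0.15):
    """Toy BERT-style masking: select 15% of the tokens, replace 80% of the
    selected tokens with [MASK], 10% with a random token, and keep 10% as is."""
    input_ids = input_ids.clone()
    labels = input_ids.clone()

    # Select 15% of the tokens as prediction targets.
    masked_indices = torch.bernoulli(torch.full(labels.shape, mlm_probability)).bool()
    labels[~masked_indices] = -100  # the MLM loss is only computed on selected tokens

    # 80% of the selected tokens are replaced by [MASK].
    replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    input_ids[replaced] = mask_token_id

    # Half of the remaining selected tokens (10% overall) become a random token.
    randomized = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~replaced
    input_ids[randomized] = torch.randint(vocab_size, labels.shape, dtype=torch.long)[randomized]

    # The final 10% of the selected tokens are left unchanged.
    return input_ids, labels
```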
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-160k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 Checkpoint 160k (uncased)
Seed 0 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
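A minimal sketch is shown below; the repository id is taken from this card's metadata and may need to be adjusted if the checkpoint is hosted under a different name.

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-160k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-160k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)  # token-level features are in output.last_hidden_state
```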
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 Checkpoint 160k (uncased)\nSeed 0 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 Checkpoint 160k (uncased)\nSeed 0 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 1700k (uncased)
Seed 0 intermediate checkpoint 1700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multiberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-1700k")  # repository id from this card's metadata
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-1700k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
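As an illustration only, the optimizer and schedule described above map roughly onto the following PyTorch/`transformers` sketch. The original pretraining ran on TPUs and is not reproduced here; the two-million-step horizon and the 10,000 warmup steps are taken from the paragraph above, while the use of decoupled weight decay (`torch.optim.AdamW`) and the repository id are assumptions made for the example.

```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

# Illustrative model instance; any BERT-base-sized model would do for this sketch.
model = BertForPreTraining.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-1700k")

# Adam with lr = 1e-4, beta1 = 0.9, beta2 = 0.999 and weight decay 0.01, as described above.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)

# 10,000 warmup steps, then linear decay of the learning rate over the two-million-step schedule.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
```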
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-1700k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 Checkpoint 1700k (uncased)
Seed 0 intermediate checkpoint 1700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
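A minimal sketch (the repository id below comes from this card's metadata and may need adjusting if the checkpoint is hosted under a different name):

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-1700k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-1700k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)
```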
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 Checkpoint 1700k (uncased)\nSeed 0 intermediate checkpoint 1700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 Checkpoint 1700k (uncased)\nSeed 0 intermediate checkpoint 1700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 1800k (uncased)
Seed 0 intermediate checkpoint 1800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multiberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
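To make the NSP input format concrete, the short sketch below (illustrative, not part of the original card) shows how a sentence pair is packed into a single input. The example sentences are made up, and the tokenizer is assumed to ship with this checkpoint; any standard uncased BERT WordPiece tokenizer would produce the same layout.

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-1800k")

sentence_a = "The cat sat on the mat."
sentence_b = "It was a sunny afternoon."  # may or may not actually follow sentence A

encoded = tokenizer(sentence_a, sentence_b, return_tensors="pt")
print(tokenizer.decode(encoded["input_ids"][0]))  # [CLS] sentence A [SEP] sentence B [SEP]
print(encoded["token_type_ids"])  # 0 for [CLS] + sentence A + [SEP], 1 for sentence B + [SEP]
```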
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-1800k")  # repository id from this card's metadata
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-1800k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
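In recent versions of `transformers`, the `output` object above exposes `output.last_hidden_state` (one feature vector per input token; `output.last_hidden_state[:, 0]` is the `[CLS]` representation) and `output.pooler_output` (a pooled sentence-level vector). Older versions return a plain tuple instead, so the exact attribute access is an assumption about the library version.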
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-1800k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 Checkpoint 1800k (uncased)
Seed 0 intermediate checkpoint 1800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
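A minimal sketch, assuming the checkpoint is available under the repository id from this card's metadata:

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-1800k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-1800k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)
```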
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 Checkpoint 1800k (uncased)\nSeed 0 intermediate checkpoint 1800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 Checkpoint 1800k (uncased)\nSeed 0 intermediate checkpoint 1800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 180k (uncased)
Seed 0 intermediate checkpoint 180k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
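As a quick illustration of the MLM objective described above, masked-word prediction can be exercised directly through the fill-mask pipeline. This is only a sketch; it assumes the masked-language-modeling head weights in this pretraining checkpoint load cleanly into `BertForMaskedLM`:
```python
from transformers import pipeline

# Repository id taken from this model's Hub listing.
unmasker = pipeline("fill-mask", model="MultiBertGunjanPatrick/multiberts-seed-0-180k")
print(unmasker("The capital of France is [MASK]."))  # top candidate tokens with scores
```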
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-180k")  # full Hub repository id
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-180k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
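A minimal sketch of this 80%/10%/10% masking scheme for a single tokenized sequence, closely following the logic of `DataCollatorForLanguageModeling` in the transformers library (it modifies `input_ids` in place):
```python
import torch

def mask_tokens(input_ids, tokenizer, mlm_probability=0.15):
    # input_ids: 1-D LongTensor of token ids for one sequence (with [CLS]/[SEP] already added).
    labels = input_ids.clone()

    # Select 15% of the positions for prediction, never masking special tokens.
    probability_matrix = torch.full(labels.shape, mlm_probability)
    special_tokens_mask = torch.tensor(
        tokenizer.get_special_tokens_mask(labels.tolist(), already_has_special_tokens=True),
        dtype=torch.bool,
    )
    probability_matrix.masked_fill_(special_tokens_mask, value=0.0)
    masked_indices = torch.bernoulli(probability_matrix).bool()
    labels[~masked_indices] = -100  # loss is only computed on the selected positions

    # 80% of the selected positions become [MASK].
    indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    input_ids[indices_replaced] = tokenizer.mask_token_id

    # 10% become a random token; the remaining 10% keep the original token.
    indices_random = (
        torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced
    )
    input_ids[indices_random] = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)[indices_random]

    return input_ids, labels
```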
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
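The original runs used the TensorFlow BERT pretraining code on TPUs; a rough PyTorch approximation of the reported optimizer settings (the BERT-base configuration is an assumption) would look like:
```python
import torch
from transformers import BertConfig, BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining(BertConfig())  # BERT-base sized config by default
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,
    num_training_steps=2_000_000,  # two million steps, as reported above
)
```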
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-180k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 Checkpoint 180k (uncased)
Seed 0 intermediate checkpoint 180k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 Checkpoint 180k (uncased)\nSeed 0 intermediate checkpoint 180k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 Checkpoint 180k (uncased)\nSeed 0 intermediate checkpoint 180k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 1900k (uncased)
Seed 0 intermediate checkpoint 1900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
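A sketch of that "frozen features plus standard classifier" recipe; the two training sentences and their labels are invented purely for illustration:
```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import BertTokenizer, BertModel

repo_id = "MultiBertGunjanPatrick/multiberts-seed-0-1900k"
tokenizer = BertTokenizer.from_pretrained(repo_id)
model = BertModel.from_pretrained(repo_id)
model.eval()

def embed(sentences):
    # Mean-pool the last hidden state into one fixed-size vector per sentence.
    enc = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state
    mask = enc["attention_mask"].unsqueeze(-1).float()
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

train_texts = ["a wonderful, heartfelt film", "a dull and lifeless sequel"]
train_labels = [1, 0]
classifier = LogisticRegression().fit(embed(train_texts), train_labels)
print(classifier.predict(embed(["an absolute joy to watch"])))
```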
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-1900k")  # full Hub repository id
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-1900k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
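To make the packing format above concrete, here is how the tokenizer builds a sentence pair (a sketch; the example sentences are arbitrary):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-1900k")
enc = tokenizer("The man went to the store.", "He bought a gallon of milk.")

# Tokens come out as [CLS] sentence A [SEP] sentence B [SEP];
# token_type_ids are 0 over segment A (including [CLS] and its [SEP]) and 1 over segment B.
print(tokenizer.convert_ids_to_tokens(enc["input_ids"]))
print(enc["token_type_ids"])
```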
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-1900k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 Checkpoint 1900k (uncased)
Seed 0 intermediate checkpoint 1900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 Checkpoint 1900k (uncased)\nSeed 0 intermediate checkpoint 1900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 Checkpoint 1900k (uncased)\nSeed 0 intermediate checkpoint 1900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 2000k (uncased)
Seed 0 intermediate checkpoint 2000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
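A sketch of querying the next-sentence-prediction head described above; it assumes the sentence-relationship weights of this pretraining checkpoint load into `BertForNextSentencePrediction`:
```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

repo_id = "MultiBertGunjanPatrick/multiberts-seed-0-2000k"
tokenizer = BertTokenizer.from_pretrained(repo_id)
model = BertForNextSentencePrediction.from_pretrained(repo_id)

enc = tokenizer("The man went to the store.", "He bought a gallon of milk.", return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits
# In transformers, index 0 scores "sentence B follows sentence A" and index 1 scores "random sentence".
print(torch.softmax(logits, dim=-1))
```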
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-2000k")  # full Hub repository id
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-2000k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
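The 80/10/10 replacement rule above can be illustrated with a short sketch. This is only an illustration of the procedure on pre-tokenized text, not the original preprocessing code; the `mask_tokens` helper and its defaults are hypothetical.
```python
import random

def mask_tokens(tokens, mask_prob=0.15, vocab=None, mask_token="[MASK]"):
    """Illustrative 80/10/10 masking over an already-tokenized sequence."""
    vocab = vocab or tokens  # stand-in vocabulary for sampling random tokens
    masked, labels = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            labels.append(tok)                # prediction target: the original token
            r = random.random()
            if r < 0.8:
                masked.append(mask_token)     # 80%: replace with [MASK]
            elif r < 0.9:
                masked.append(random.choice(vocab))  # 10%: random (ideally different) token
            else:
                masked.append(tok)            # 10%: keep the token as is
        else:
            masked.append(tok)
            labels.append(None)               # not a prediction target
    return masked, labels

print(mask_tokens("the quick brown fox jumps over the lazy dog".split()))
```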
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
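In PyTorch terms, the optimizer and schedule described above correspond roughly to the sketch below. It is an illustration of the stated hyperparameters (with AdamW standing in for Adam plus decoupled weight decay), not the original TPU training code.
```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-2000k")

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,                 # learning rate from the card
    betas=(0.9, 0.999),      # beta_1, beta_2
    weight_decay=0.01,
)
# 10,000 warmup steps, then linear decay over the full two million steps.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
```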
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-2000k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 Checkpoint 2000k (uncased)
Seed 0 intermediate checkpoint 2000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 Checkpoint 2000k (uncased)\nSeed 0 intermediate checkpoint 2000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 Checkpoint 2000k (uncased)\nSeed 0 intermediate checkpoint 2000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 200k (uncased)
Seed 0 intermediate checkpoint 200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
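As a concrete illustration of that last point, the frozen checkpoint can supply sentence features for a separate downstream classifier. The toy dataset and the choice of the `[CLS]` vector below are illustrative assumptions, not part of the original card.
```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-200k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-200k").eval()

texts = ["the movie was great", "the movie was terrible"]  # toy labelled data
labels = [1, 0]

with torch.no_grad():
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    feats = model(**enc).last_hidden_state[:, 0, :].numpy()  # [CLS] features

clf = LogisticRegression().fit(feats, labels)
print(clf.predict(feats))
```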
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-200k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-200k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
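The checkpoint was also trained with the NSP head, so sentence-pair relationships can be scored directly. A minimal sketch, assuming the NSP weights load into `BertForNextSentencePrediction`:
```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-200k")
model = BertForNextSentencePrediction.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-200k")

sentence_a = "The weather was terrible this morning."
sentence_b = "So we decided to stay inside."
inputs = tokenizer(sentence_a, sentence_b, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # index 0: "is the next sentence", index 1: "is random"
print(torch.softmax(logits, dim=-1))
```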
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-200k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 Checkpoint 200k (uncased)
Seed 0 intermediate checkpoint 200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 Checkpoint 200k (uncased)\nSeed 0 intermediate checkpoint 200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 Checkpoint 200k (uncased)\nSeed 0 intermediate checkpoint 200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 20k (uncased)
Seed 0 intermediate checkpoint 20k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-20k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-20k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
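For reference, that probe is roughly of the following form; this is a hedged sketch using the fill-mask pipeline (it assumes the checkpoint loads through `AutoModelForMaskedLM`), and the exact snippet lives in the bert-base-uncased card.
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="MultiBertGunjanPatrick/multiberts-seed-0-20k")

for template in ["The man worked as a [MASK].", "The woman worked as a [MASK]."]:
    print(template)
    for pred in unmasker(template)[:3]:
        print("  ", pred["token_str"], round(pred["score"], 3))
```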
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
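That packing is what the tokenizer produces when given two segments; a quick check (assuming the full Hub repo id):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-20k")
encoded = tokenizer("Sentence A", "Sentence B")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))  # [CLS] ... [SEP] ... [SEP]
print(encoded["token_type_ids"])                              # segment ids for A vs. B
```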
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-20k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 Checkpoint 20k (uncased)
Seed 0 intermediate checkpoint 20k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 Checkpoint 20k (uncased)\nSeed 0 intermediate checkpoint 20k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 Checkpoint 20k (uncased)\nSeed 0 intermediate checkpoint 20k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 300k (uncased)
Seed 0 intermediate checkpoint 300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# load the tokenizer and encoder weights of this intermediate checkpoint
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-300k')
model = BertModel.from_pretrained("multiberts-seed-0-300k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')  # tokenize and return PyTorch tensors
output = model(**encoded_input)                       # contextual features for every input token
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
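For illustration, the 15% / 80-10-10 rule above can be sketched in PyTorch as below. This is an approximation written for this card (special tokens and whole-word handling are ignored, the function name is arbitrary, and the `-100` ignore-index is just the usual Hugging Face convention), not the original TensorFlow preprocessing:
```python
import torch

def mask_tokens(input_ids, mask_token_id, vocab_size, mlm_prob=0.15):
    """Illustrative MLM masking: 15% of positions, split 80/10/10."""
    labels = input_ids.clone()

    # choose 15% of the positions as prediction targets
    masked = torch.bernoulli(torch.full(labels.shape, mlm_prob)).bool()
    labels[~masked] = -100  # loss is only computed on masked positions

    # 80% of the chosen positions: replace with [MASK]
    use_mask = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked
    input_ids[use_mask] = mask_token_id

    # 10% of the chosen positions: replace with a random token
    use_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked & ~use_mask
    input_ids[use_random] = torch.randint(vocab_size, labels.shape)[use_random]

    # the remaining 10% keep their original token
    return input_ids, labels
```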
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
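These hyperparameters translate roughly into the PyTorch configuration sketched below. This is illustrative only: the original run used TensorFlow on TPUs, BERT's Adam-with-weight-decay differs slightly from `torch.optim.AdamW`, and the tiny `BertConfig` is just a placeholder so the snippet stands alone:
```python
import torch
from transformers import BertConfig, BertForPreTraining

# tiny placeholder model, only to show the optimizer/schedule wiring
model = BertForPreTraining(BertConfig(hidden_size=128, num_hidden_layers=2,
                                      num_attention_heads=2, intermediate_size=256))

num_train_steps, num_warmup_steps = 2_000_000, 10_000

# Adam with lr 1e-4, betas (0.9, 0.999) and weight decay 0.01, as described above
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4,
                              betas=(0.9, 0.999), weight_decay=0.01)

def lr_lambda(step):
    # linear warmup for the first 10,000 steps, then linear decay to zero at 2M steps
    if step < num_warmup_steps:
        return step / max(1, num_warmup_steps)
    return max(0.0, (num_train_steps - step) / max(1, num_train_steps - num_warmup_steps))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```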
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-300k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 Checkpoint 300k (uncased)
Seed 0 intermediate checkpoint 300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 Checkpoint 300k (uncased)\nSeed 0 intermediate checkpoint 300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 Checkpoint 300k (uncased)\nSeed 0 intermediate checkpoint 300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 400k (uncased)
Seed 0 intermediate checkpoint 400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-400k')
model = BertModel.from_pretrained("multiberts-seed-0-400k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
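Beyond feature extraction, the fine-tuning use case described under intended uses & limitations can be sketched as follows. This is illustrative only: the classification head is randomly initialized, the two labels and example sentences are placeholders, and no training loop is shown:
```python
from transformers import BertForSequenceClassification, BertTokenizer

# attach a (randomly initialized) classification head on top of this checkpoint
model = BertForSequenceClassification.from_pretrained("multiberts-seed-0-400k", num_labels=2)
tokenizer = BertTokenizer.from_pretrained("multiberts-seed-0-400k")

batch = tokenizer(["great movie", "terrible movie"], padding=True, return_tensors="pt")
outputs = model(**batch)
print(outputs.logits.shape)  # torch.Size([2, 2]): one logit per placeholder label
```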
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
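The `[CLS] Sentence A [SEP] Sentence B [SEP]` pair format described above is produced automatically by the tokenizer when it is given two texts; the short check below is illustrative and reuses the checkpoint identifier from the usage snippet earlier in this card:
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("multiberts-seed-0-400k")

# encode a sentence pair; [CLS] and [SEP] are inserted automatically
enc = tokenizer("The cat sat on the mat.", "It fell asleep there.")
print(tokenizer.convert_ids_to_tokens(enc["input_ids"]))
print(enc["token_type_ids"])  # 0s for sentence A, 1s for sentence B
```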
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-400k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 Checkpoint 400k (uncased)
Seed 0 intermediate checkpoint 400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 Checkpoint 400k (uncased)\nSeed 0 intermediate checkpoint 400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 Checkpoint 400k (uncased)\nSeed 0 intermediate checkpoint 400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 40k (uncased)
Seed 0 intermediate checkpoint 40k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-40k')
model = BertModel.from_pretrained("multiberts-seed-0-40k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
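The `output` object returned above is a standard BERT encoder output; for example (hidden size 768 assumes the usual BERT-base configuration used by MultiBERTs):
```python
last_hidden = output.last_hidden_state  # shape: (batch, sequence_length, 768)
cls_embedding = last_hidden[:, 0]       # representation of the [CLS] token
pooled = output.pooler_output           # shape: (batch, 768), tanh-pooled [CLS]
print(last_hidden.shape, pooled.shape)
```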
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
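That referenced snippet is essentially the standard fill-mask pipeline shown below. Note this is only a sketch: it assumes the masked-language-modeling head was exported together with this intermediate checkpoint, which may not be the case; if it is not, run the snippet against the linked bert-base-uncased checkpoint instead:
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="multiberts-seed-0-40k")  # assumes an MLM head is available
unmasker("The man worked as a [MASK].")
```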
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-40k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 Checkpoint 40k (uncased)
Seed 0 intermediate checkpoint 40k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 Checkpoint 40k (uncased)\nSeed 0 intermediate checkpoint 40k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 Checkpoint 40k (uncased)\nSeed 0 intermediate checkpoint 40k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 500k (uncased)
Seed 0 intermediate checkpoint 500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multiberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-500k')
model = BertModel.from_pretrained("multiberts-seed-0-500k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
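The card also mentions next sentence prediction; a rough sketch of using the checkpoint for that task is shown below. It reuses the short checkpoint name from the snippet above and assumes the NSP head weights are present in the checkpoint (otherwise they would be freshly initialized and the scores would be meaningless):

```python
from transformers import BertTokenizer, BertForNextSentencePrediction
import torch

tokenizer = BertTokenizer.from_pretrained("multiberts-seed-0-500k")
model = BertForNextSentencePrediction.from_pretrained("multiberts-seed-0-500k")

prompt = "The sky was clear all afternoon."
next_sentence = "People walked their dogs in the park."
encoding = tokenizer(prompt, next_sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**encoding).logits  # shape: (1, 2)

# index 0: "sentence B follows sentence A", index 1: "sentence B is a random sentence"
print(torch.softmax(logits, dim=-1))
```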
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short code sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
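As a rough illustration, the per-token decision described above can be sketched as follows (a minimal sketch, not the original preprocessing code; the `vocab` list and the use of Python's `random` module are assumptions made for the example):

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Apply the 80/10/10 masking rule to a list of WordPiece tokens."""
    inputs, labels = [], []
    for token in tokens:
        if random.random() < mask_prob:   # 15% of tokens are selected for masking
            labels.append(token)          # the model has to predict the original token
            roll = random.random()
            if roll < 0.8:                # 80% of selected tokens become [MASK]
                inputs.append("[MASK]")
            elif roll < 0.9:              # 10% become a random vocabulary token
                inputs.append(random.choice(vocab))  # the real pipeline also ensures it differs from the original
            else:                         # 10% are left unchanged
                inputs.append(token)
        else:
            inputs.append(token)
            labels.append(None)           # unselected tokens do not contribute to the MLM loss
    return inputs, labels
```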
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-500k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 Checkpoint 500k (uncased)
Seed 0 intermediate checkpoint 500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 Checkpoint 500k (uncased)\nSeed 0 intermediate checkpoint 500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 Checkpoint 500k (uncased)\nSeed 0 intermediate checkpoint 500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 600k (uncased)
Seed 0 intermediate checkpoint 600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multiberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-600k')
model = BertModel.from_pretrained("multiberts-seed-0-600k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
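Since the checkpoint was pretrained with an MLM objective, it can also be loaded for masked-token prediction. The sketch below reuses the short checkpoint name from the snippet above and assumes the MLM head weights are included in the checkpoint; the example sentence and decoding logic are illustrative only:

```python
from transformers import BertTokenizer, BertForMaskedLM
import torch

tokenizer = BertTokenizer.from_pretrained("multiberts-seed-0-600k")
model = BertForMaskedLM.from_pretrained("multiberts-seed-0-600k")

text = "The capital of France is [MASK]."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# locate the masked position and decode the highest-scoring token for it
mask_positions = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
predicted_ids = logits[0, mask_positions].argmax(dim=-1)
print(tokenizer.decode(predicted_ids))
```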
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
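Expressed in PyTorch terms, that optimizer and schedule would look roughly like the sketch below (an approximation of the setup described above, not the original TensorFlow training code; loading `BertForPreTraining` from this checkpoint is only an assumption made to keep the snippet self-contained):

```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained("multiberts-seed-0-600k")  # MLM + NSP heads
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,
    betas=(0.9, 0.999),
    weight_decay=0.01,
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,       # linear warmup over the first 10k steps
    num_training_steps=2_000_000,  # two million steps in total
)
# in the training loop: loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```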
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-600k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 Checkpoint 600k (uncased)
Seed 0 intermediate checkpoint 600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 Checkpoint 600k (uncased)\nSeed 0 intermediate checkpoint 600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 Checkpoint 600k (uncased)\nSeed 0 intermediate checkpoint 600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 60k (uncased)
Seed 0 intermediate checkpoint 60k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multiberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-60k')
model = BertModel.from_pretrained("multiberts-seed-0-60k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
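The `output` object returned above carries the features this section refers to; for instance, the token-level and pooled representations can be read off directly (shapes assume the BERT-base hidden size of 768):

```python
# continues from the snippet above
last_hidden_state = output.last_hidden_state  # shape: (batch_size, sequence_length, 768)
pooled_output = output.pooler_output          # shape: (batch_size, 768), built from the [CLS] token
print(last_hidden_state.shape, pooled_output.shape)
```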
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short sketch of this scheme follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
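As an illustration only, and not the original TensorFlow pretraining code, the 80/10/10 replacement scheme above can be sketched in Python roughly as follows; the `tokenizer` argument and the `-100` ignore index for unmasked positions are assumptions of this sketch.

```python
import random

def mask_tokens(token_ids, tokenizer, mlm_probability=0.15):
    """Toy sketch of BERT-style masking; special-token handling is omitted."""
    labels = list(token_ids)
    for i in range(len(token_ids)):
        if random.random() < mlm_probability:
            roll = random.random()
            if roll < 0.8:
                token_ids[i] = tokenizer.mask_token_id                 # 80%: replace with [MASK]
            elif roll < 0.9:
                token_ids[i] = random.randrange(tokenizer.vocab_size)  # 10%: random token
            # remaining 10%: leave the original token in place
        else:
            labels[i] = -100                                           # unmasked positions are ignored in the loss
    return token_ids, labels
```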
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-60k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 Checkpoint 60k (uncased)
Seed 0 intermediate checkpoint 60k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 Checkpoint 60k (uncased)\nSeed 0 intermediate checkpoint 60k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 Checkpoint 60k (uncased)\nSeed 0 intermediate checkpoint 60k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 700k (uncased)
Seed 0 intermediate checkpoint 700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
  the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the 700k-step intermediate checkpoint from the Hugging Face Hub.
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-700k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-700k")

# Encode any text and run a forward pass to obtain contextual features.
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
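Because the raw checkpoint was pretrained with an MLM head, you can also query it through the `fill-mask` pipeline. This is a sketch, assuming the pretraining MLM head weights are shipped with this checkpoint; if they are not, the head will be freshly initialized and the predictions will be meaningless.

```python
from transformers import pipeline

# Top predictions for the masked position, with scores.
unmasker = pipeline('fill-mask', model='MultiBertGunjanPatrick/multiberts-seed-0-700k')
print(unmasker("The capital of France is [MASK]."))
```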
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-700k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 Checkpoint 700k (uncased)
Seed 0 intermediate checkpoint 700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 Checkpoint 700k (uncased)\nSeed 0 intermediate checkpoint 700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 Checkpoint 700k (uncased)\nSeed 0 intermediate checkpoint 700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 800k (uncased)
Seed 0 intermediate checkpoint 800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
  the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the 800k-step intermediate checkpoint from the Hugging Face Hub.
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-800k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-800k")

# Encode any text and run a forward pass to obtain contextual features.
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
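The checkpoint was also pretrained with the NSP objective, so it can be loaded with a next-sentence-prediction head; the snippet below is an illustrative sketch and assumes the NSP head weights are present in this checkpoint.

```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-800k")
model = BertForNextSentencePrediction.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-800k")

prompt = "The storm knocked out the power."
next_sentence = "Candles were the only light in the house."
encoding = tokenizer(prompt, next_sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**encoding).logits  # index 0: "is the next sentence", index 1: "is a random sentence"
print(torch.softmax(logits, dim=-1))
```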
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
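For reference, the optimizer and learning-rate schedule described above can be approximated in PyTorch as follows; this is an illustrative sketch (the original pretraining used TensorFlow on TPUs), and `AdamW` stands in for Adam with decoupled weight decay.

```python
import torch
from transformers import BertModel, get_linear_schedule_with_warmup

model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-800k")

# lr=1e-4, betas=(0.9, 0.999) and weight decay 0.01, as in the recipe above.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)

# 10,000 warmup steps followed by linear decay over the two million total steps.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
```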
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-800k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 Checkpoint 800k (uncased)
Seed 0 intermediate checkpoint 800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 Checkpoint 800k (uncased)\nSeed 0 intermediate checkpoint 800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 Checkpoint 800k (uncased)\nSeed 0 intermediate checkpoint 800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 80k (uncased)
Seed 0 intermediate checkpoint (80k steps) of the MultiBERTs (pretrained BERT) model, pretrained on English-language data using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multiberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-80k')
model = BertModel.from_pretrained("multiberts-seed-0-80k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
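The same checkpoint can also be probed through its pretrained MLM head. This is a minimal sketch, assuming the same model identifier as in the snippet above resolves; it is not part of the original card:
```python
from transformers import pipeline

# Sketch: query the pretrained MLM head (identifier assumed to match the snippet above).
unmasker = pipeline("fill-mask", model="multiberts-seed-0-80k")
for prediction in unmasker("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```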
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the combined length of the two
"sentences" is less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
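For illustration, the 80/10/10 replacement rule in the list above can be sketched roughly as follows. This is a simplified, hypothetical re-implementation (token-level, no special-token handling), not the original training code:
```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Simplified sketch of the masking rule: 15% of tokens are selected; of those,
    80% become [MASK], 10% become a random token, and 10% are left unchanged."""
    inputs, labels = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            labels.append(tok)            # the MLM loss predicts the original token here
            roll = random.random()
            if roll < 0.8:
                inputs.append("[MASK]")
            elif roll < 0.9:
                inputs.append(random.choice(vocab))
            else:
                inputs.append(tok)
        else:
            labels.append(None)           # position ignored by the MLM loss
            inputs.append(tok)
    return inputs, labels
```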
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
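A rough PyTorch sketch of this optimization setup (Adam-style optimizer, 1e-4 peak learning rate, 0.01 weight decay, 10,000 warmup steps, linear decay over the 2M training steps) is shown below. The original run used Google's TensorFlow BERT code, so this is only an approximation with current library helpers:
```python
import torch
from transformers import BertModel, get_linear_schedule_with_warmup

model = BertModel.from_pretrained("multiberts-seed-0-80k")  # identifier as in the snippet above
optimizer = torch.optim.AdamW(                              # Adam with decoupled weight decay
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
# In the training loop: loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```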
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-80k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 Checkpoint 80k (uncased)
Seed 0 intermediate checkpoint 80k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the combined length of the two
"sentences" is less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 Checkpoint 80k (uncased)\nSeed 0 intermediate checkpoint 80k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 Checkpoint 80k (uncased)\nSeed 0 intermediate checkpoint 80k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 0 Checkpoint 900k (uncased)
Seed 0 intermediate checkpoint (900k steps) of the MultiBERTs (pretrained BERT) model, pretrained on English-language data using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multiberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-900k')
model = BertModel.from_pretrained("multiberts-seed-0-900k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
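Since the card points to fine-tuning as the main use, here is a minimal, hypothetical sketch of loading this checkpoint for sequence classification. The label count is a placeholder and no training loop is shown; it is not part of the original card:
```python
from transformers import BertTokenizer, BertForSequenceClassification

# Hypothetical: attach a (randomly initialized) 2-class classification head to the encoder.
tokenizer = BertTokenizer.from_pretrained("multiberts-seed-0-900k")
model = BertForSequenceClassification.from_pretrained("multiberts-seed-0-900k", num_labels=2)

inputs = tokenizer("This checkpoint is an intermediate one.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 2]) -- meaningless until the head is fine-tuned
```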
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the combined length of the two
"sentences" is less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
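The sentence-pair input layout described above can be reproduced directly with the tokenizer; a small illustrative check (the sentences are made up):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("multiberts-seed-0-900k")
# Encoding a pair yields the [CLS] sentence A [SEP] sentence B [SEP] layout described above.
encoded = tokenizer("He went to the store.", "He bought a gallon of milk.")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
print(encoded["token_type_ids"])  # 0s for sentence A tokens, 1s for sentence B tokens
```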
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0-900k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 Checkpoint 900k (uncased)
Seed 0 intermediate checkpoint 900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the combined length of the two
"sentences" is less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 Checkpoint 900k (uncased)\nSeed 0 intermediate checkpoint 900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-0 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 Checkpoint 900k (uncased)\nSeed 0 intermediate checkpoint 900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-0. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 0 (uncased)
Seed 0 MultiBERTs (pretrained BERT) model, pretrained on English-language data using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0')
model = BertModel.from_pretrained("multiberts-seed-0")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
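Because the model was also pretrained with the NSP objective, it can be loaded with a next-sentence-prediction head. A minimal sketch, with made-up sentences:
```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained("multiberts-seed-0")
model = BertForNextSentencePrediction.from_pretrained("multiberts-seed-0")

prompt = "The storm knocked out power across the city."
follow_up = "Crews worked through the night to restore it."
encoding = tokenizer(prompt, follow_up, return_tensors="pt")
logits = model(**encoding).logits
# Index 0 = "follow_up really follows prompt", index 1 = "follow_up is a random sentence".
print(torch.softmax(logits, dim=-1))
```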
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the combined length of the two
"sentences" is less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-0 | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 (uncased)
Seed 0 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the combined length of the two
"sentences" is less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 (uncased)\n\nSeed 0 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 (uncased)\n\nSeed 0 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 1 Checkpoint 0k (uncased)
Seed 1 intermediate checkpoint 0k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multiberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-0k')
model = BertModel.from_pretrained("multiberts-seed-1-0k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-1-0k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 0k (uncased)
Seed 1 intermediate checkpoint 0k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
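For example, using the checkpoint name of this model (as given in the full card for this checkpoint):

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-0k')
model = BertModel.from_pretrained("multiberts-seed-1-0k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```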
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
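```
[CLS] Sentence A [SEP] Sentence B [SEP]
```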
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 0k (uncased)\nSeed 1 intermediate checkpoint 0k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 0k (uncased)\nSeed 1 intermediate checkpoint 0k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 1 Checkpoint 1000k (uncased)
Seed 1 intermediate checkpoint 1000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multiberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-1000k')
model = BertModel.from_pretrained("multiberts-seed-1-1000k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-1-1000k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 1000k (uncased)
Seed 1 intermediate checkpoint 1000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
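For example, using the checkpoint name of this model (as given in the full card for this checkpoint):

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-1000k')
model = BertModel.from_pretrained("multiberts-seed-1-1000k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```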
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
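```
[CLS] Sentence A [SEP] Sentence B [SEP]
```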
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 1000k (uncased)\nSeed 1 intermediate checkpoint 1000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 1000k (uncased)\nSeed 1 intermediate checkpoint 1000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 1 Checkpoint 100k (uncased)
Seed 1 intermediate checkpoint 100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multiberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-100k')
model = BertModel.from_pretrained("multiberts-seed-1-100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
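As a rough illustration of the 80%/10%/10% rule above, here is a small self-contained sketch of how such token corruption could be implemented. It is an approximation for explanatory purposes, not the actual MultiBERTs preprocessing code.
```python
import random

def corrupt_tokens(tokens, vocab, mask_token="[MASK]", mlm_prob=0.15):
    """Illustrative 80/10/10 masking, not the original pretraining implementation."""
    inputs, targets = [], []
    for tok in tokens:
        if random.random() < mlm_prob:        # 15% of tokens are selected for prediction
            targets.append(tok)               # the model must recover the original token
            r = random.random()
            if r < 0.8:                       # 80% of selected tokens -> [MASK]
                inputs.append(mask_token)
            elif r < 0.9:                     # 10% -> a random (ideally different) token
                inputs.append(random.choice(vocab))
            else:                             # 10% -> left unchanged
                inputs.append(tok)
        else:
            inputs.append(tok)
            targets.append(None)              # not part of the MLM loss
    return inputs, targets
```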
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-1-100k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 100k (uncased)
Seed 1 intermediate checkpoint 100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 100k (uncased)\nSeed 1 intermediate checkpoint 100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 100k (uncased)\nSeed 1 intermediate checkpoint 100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 1 Checkpoint 1100k (uncased)
Seed 1 intermediate checkpoint 1100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-1100k')
model = BertModel.from_pretrained("multiberts-seed-1-1100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
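Since the checkpoint was also pretrained with the next sentence prediction objective, its NSP head can be probed as well. The snippet below is a hedged sketch: the sentences are arbitrary examples, the short model name may need to be replaced by the full Hub id (for example `MultiBertGunjanPatrick/multiberts-seed-1-1100k`), and the label convention (index 0 meaning "sentence B follows sentence A") should be checked against the transformers documentation.
```python
# Hedged sketch: probing the next sentence prediction (NSP) head of this checkpoint.
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-1100k')
model = BertForNextSentencePrediction.from_pretrained('multiberts-seed-1-1100k')

prompt = "The sky was clear that evening."
candidate = "We decided to go for a walk."
encoding = tokenizer(prompt, candidate, return_tensors='pt')

logits = model(**encoding).logits
# Softmax over the two NSP classes; index 0 ~ "is the next sentence".
print(torch.softmax(logits, dim=-1))
```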
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
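For reference, the `[CLS] Sentence A [SEP] Sentence B [SEP]` layout above is exactly what the tokenizer produces when it is given a pair of texts. A small sketch, assuming the tokenizer is available under the name used earlier in this card:
```python
# Hedged sketch: how a sentence pair is packed into the [CLS] A [SEP] B [SEP] format.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-1100k')
encoded = tokenizer("How old are you?", "I am six years old.")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
# Expected: ['[CLS]', 'how', 'old', 'are', 'you', '?', '[SEP]', 'i', 'am', 'six', 'years', 'old', '.', '[SEP]']
print(encoded["token_type_ids"])  # 0 for segment A, 1 for segment B
```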
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
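The optimization setup reported above (Adam with a 1e-4 peak learning rate, \\(\beta_{1}=0.9\\), \\(\beta_{2}=0.999\\), weight decay 0.01, 10,000 warmup steps and linear decay over the two million steps) corresponds roughly to the following sketch. This is an illustration of the schedule using standard PyTorch/transformers utilities, not the original TPU training code.
```python
# Hedged sketch of the reported optimizer and learning-rate schedule.
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained('multiberts-seed-1-1100k')
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,               # peak learning rate
    betas=(0.9, 0.999),    # beta_1, beta_2 as reported
    weight_decay=0.01,
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,       # linear warmup
    num_training_steps=2_000_000,  # two million steps, linear decay after warmup
)
```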
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-1-1100k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 1100k (uncased)
Seed 1 intermediate checkpoint 1100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 1100k (uncased)\nSeed 1 intermediate checkpoint 1100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 1100k (uncased)\nSeed 1 intermediate checkpoint 1100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 1 Checkpoint 1200k (uncased)
Seed 1 intermediate checkpoint 1200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
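As a concrete example of the fine-tuning use case, a freshly initialised sequence-classification head can be attached to this checkpoint. This is a hedged sketch; the full Hub id (for example `MultiBertGunjanPatrick/multiberts-seed-1-1200k`) may be required, and `num_labels` depends on the downstream task.
```python
# Hedged sketch: preparing this checkpoint for sequence-classification fine-tuning.
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-1200k')
model = BertForSequenceClassification.from_pretrained(
    'multiberts-seed-1-1200k',
    num_labels=2,  # e.g. a binary classification task
)
# The encoder weights come from the pretrained checkpoint; the classification
# head is randomly initialised and must be trained on labelled downstream data.
```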
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-1200k')
model = BertModel.from_pretrained("multiberts-seed-1-1200k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
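Taken together, the figures above imply that in expectation about 12% of all tokens end up as `[MASK]`, roughly 1.5% are replaced by a random token, and roughly 1.5% are selected but left unchanged, while the full 15% contribute to the MLM loss. A quick arithmetic check:
```python
# Quick check of the expected per-token outcomes implied by the masking percentages.
mlm_prob = 0.15
masked = mlm_prob * 0.80      # 0.12  -> ~12% of all tokens become [MASK]
randomized = mlm_prob * 0.10  # 0.015 -> ~1.5% replaced by a random token
unchanged = mlm_prob * 0.10   # 0.015 -> ~1.5% selected but left as-is
assert abs((masked + randomized + unchanged) - mlm_prob) < 1e-9
print(masked, randomized, unchanged)
```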
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-1-1200k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 1200k (uncased)
Seed 1 intermediate checkpoint 1200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 1200k (uncased)\nSeed 1 intermediate checkpoint 1200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 1200k (uncased)\nSeed 1 intermediate checkpoint 1200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 1 Checkpoint 120k (uncased)
Seed 1 intermediate checkpoint 120k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
# full repo id under which this checkpoint is listed on the Hub
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-120k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-120k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the combined length of the two
"sentences" is less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short code sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
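For illustration, the 80% / 10% / 10% corruption above can be sketched in a few lines of PyTorch. This is a simplified re-implementation written for this card, not the original preprocessing code; the `[MASK]` token id (103 in the standard BERT uncased vocabulary) and the fact that special tokens are not excluded are simplifying assumptions.

```python
import torch

def mask_tokens(input_ids, mask_token_id=103, vocab_size=30000, mlm_prob=0.15):
    """Sketch of the 80/10/10 MLM corruption described above."""
    labels = input_ids.clone()
    # Select 15% of the tokens as prediction targets.
    selected = torch.bernoulli(torch.full(labels.shape, mlm_prob)).bool()
    labels[~selected] = -100  # the MLM loss is only computed on selected positions
    # 80% of the selected tokens are replaced by [MASK].
    masked = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & selected
    input_ids[masked] = mask_token_id
    # Half of the remainder (10% overall) are replaced by a random token.
    randomized = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & selected & ~masked
    input_ids[randomized] = torch.randint(vocab_size, labels.shape)[randomized]
    # The remaining 10% are left unchanged.
    return input_ids, labels
```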
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
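Roughly equivalent settings can be expressed with standard PyTorch and Hugging Face Transformers utilities. The sketch below is only an approximation of the setup described above (the original training ran on TPUs with its own implementation): `AdamW` stands in for Adam with weight decay, and the small `nn.Linear` module is a placeholder for the model being pretrained.

```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # placeholder for the BERT model being pretrained

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,             # learning rate of 1e-4
    betas=(0.9, 0.999),  # beta_1 = 0.9, beta_2 = 0.999
    weight_decay=0.01,   # weight decay of 0.01
)
# 10,000 warmup steps, then linear decay over the two million total steps.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
```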
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-1-120k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 120k (uncased)
Seed 1 intermediate checkpoint 120k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the combined length of the two
"sentences" is less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 120k (uncased)\nSeed 1 intermediate checkpoint 120k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 120k (uncased)\nSeed 1 intermediate checkpoint 120k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 1 Checkpoint 1300k (uncased)
Seed 1 intermediate checkpoint 1300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
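As a concrete illustration of the NSP objective above, the checkpoint can be probed with `BertForNextSentencePrediction`. This is a sketch rather than an officially documented usage, and it assumes the NSP head weights are included in the checkpoint; the repo id is the one this checkpoint is listed under on the Hub.

```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

name = "MultiBertGunjanPatrick/multiberts-seed-1-1300k"
tokenizer = BertTokenizer.from_pretrained(name)
model = BertForNextSentencePrediction.from_pretrained(name)

inputs = tokenizer("The cat sat on the mat.", "It soon fell asleep in the sun.", return_tensors="pt")
probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # index 0: "B follows A", index 1: "B is a random sentence"
```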
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
# full repo id under which this checkpoint is listed on the Hub
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-1300k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-1300k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
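Continuing from the snippet above, `output` exposes the usual BERT features. A common convention (not something prescribed by the MultiBERTs release) is to take the final hidden state of the `[CLS]` token as a sentence-level feature:

```python
token_features = output.last_hidden_state  # shape: (batch, sequence_length, hidden_size)
sentence_feature = token_features[:, 0]    # the [CLS] vector
print(token_features.shape, sentence_feature.shape)
```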
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the combined length of the two
"sentences" is less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-1-1300k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 1300k (uncased)
Seed 1 intermediate checkpoint 1300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the combined length of the two
"sentences" is less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 1300k (uncased)\nSeed 1 intermediate checkpoint 1300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 1300k (uncased)\nSeed 1 intermediate checkpoint 1300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 1 Checkpoint 1400k (uncased)
Seed 1 intermediate checkpoint 1400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
# full repo id under which this checkpoint is listed on the Hub
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-1400k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-1400k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
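Since the card states that the raw model can be used for masked language modeling, the checkpoint can also be probed with the `fill-mask` pipeline. This is a sketch and assumes the MLM head weights ship with the checkpoint; the repo id is the one this checkpoint is listed under on the Hub.

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="MultiBertGunjanPatrick/multiberts-seed-1-1400k")
print(unmasker("Paris is the [MASK] of France."))
```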
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the combined length of the two
"sentences" is less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-1-1400k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 1400k (uncased)
Seed 1 intermediate checkpoint 1400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the combined length of the two
"sentences" is less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 1400k (uncased)\nSeed 1 intermediate checkpoint 1400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 1400k (uncased)\nSeed 1 intermediate checkpoint 1400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 1 Checkpoint 140k (uncased)
Seed 1 intermediate checkpoint 140k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and the 140k intermediate checkpoint
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-140k')
model = BertModel.from_pretrained("multiberts-seed-1-140k")

# Encode a sample sentence and extract its contextual features
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)  # output.last_hidden_state holds the token-level features
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
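As a rough illustration, here is a minimal sketch of that 15% / 80-10-10 masking rule (this is not the original preprocessing code; the `[MASK]` id and vocabulary size are assumed from the standard uncased WordPiece vocabulary, and the example token ids are arbitrary):
```python
import random

MASK_ID, VOCAB_SIZE = 103, 30000  # assumed: [MASK] id and vocab size of the uncased WordPiece vocab

def mask_tokens(token_ids, mlm_prob=0.15):
    """Apply the 15% / 80-10-10 masking rule and return (inputs, labels)."""
    inputs, labels = list(token_ids), [-100] * len(token_ids)  # -100 = position ignored by the MLM loss
    for i, tok in enumerate(token_ids):
        if random.random() < mlm_prob:
            labels[i] = tok                                # the model must predict the original token
            r = random.random()
            if r < 0.8:
                inputs[i] = MASK_ID                        # 80%: replace with [MASK]
            elif r < 0.9:
                inputs[i] = random.randrange(VOCAB_SIZE)   # 10%: replace with a random token
            # remaining 10%: leave the token unchanged
    return inputs, labels

print(mask_tokens([2023, 2003, 1037, 7099, 6251]))
```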
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
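As a sketch of what that schedule implies (not the original training code), the learning rate at a given step can be computed as linear warmup to the peak over 10,000 steps followed by linear decay to zero at step 2,000,000:
```python
PEAK_LR, WARMUP_STEPS, TOTAL_STEPS = 1e-4, 10_000, 2_000_000  # values quoted above

def learning_rate(step):
    """Linear warmup to PEAK_LR, then linear decay to zero at the end of training."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    return PEAK_LR * (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS)

for s in (0, 5_000, 10_000, 1_000_000, 2_000_000):
    print(s, f"{learning_rate(s):.2e}")
```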
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-1-140k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 140k (uncased)
Seed 1 intermediate checkpoint 140k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 140k (uncased)\nSeed 1 intermediate checkpoint 140k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 140k (uncased)\nSeed 1 intermediate checkpoint 140k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 1 Checkpoint 1500k (uncased)
Seed 1 intermediate checkpoint 1500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and the 1500k intermediate checkpoint
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-1500k')
model = BertModel.from_pretrained("multiberts-seed-1-1500k")

# Encode a sample sentence and extract its contextual features
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)  # output.last_hidden_state holds the token-level features
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
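For a rough sense of scale (simple arithmetic; this assumes every sequence is packed to the full 512 tokens, so it is an upper bound):
```python
steps, batch_size, seq_len = 2_000_000, 256, 512

sequences = steps * batch_size           # 512,000,000 sequences seen during pretraining
token_positions = sequences * seq_len    # ~2.6e11 token positions (upper bound)

print(f"{sequences:,} sequences, ~{token_positions:.2e} token positions")
```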
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-1-1500k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 1500k (uncased)
Seed 1 intermediate checkpoint 1500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 1500k (uncased)\nSeed 1 intermediate checkpoint 1500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 1500k (uncased)\nSeed 1 intermediate checkpoint 1500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 1 Checkpoint 1600k (uncased)
Seed 1 intermediate checkpoint 1600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and the 1600k intermediate checkpoint
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-1600k')
model = BertModel.from_pretrained("multiberts-seed-1-1600k")

# Encode a sample sentence and extract its contextual features
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)  # output.last_hidden_state holds the token-level features
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
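For illustration only, here is a simplified Python sketch of that 15% / 80-10-10 rule. The function name and the toy vocabulary are made up for the example, and the real pretraining pipeline operates on WordPiece ids rather than strings.

```python
import random

def bert_style_mask(tokens, vocab, mask_token="[MASK]", mlm_prob=0.15):
    """Roughly apply the BERT masking rule to a list of tokens."""
    masked, labels = list(tokens), [None] * len(tokens)
    for i, token in enumerate(tokens):
        if random.random() < mlm_prob:      # 15% of tokens are selected for prediction
            labels[i] = token               # the model must recover the original token
            r = random.random()
            if r < 0.8:                     # 80% of those: replace with [MASK]
                masked[i] = mask_token
            elif r < 0.9:                   # 10%: replace with a (different) random token
                masked[i] = random.choice([v for v in vocab if v != token])
            # remaining 10%: leave the token unchanged
    return masked, labels

tokens = "the quick brown fox jumps over the lazy dog".split()
print(bert_style_mask(tokens, vocab=tokens))
```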
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
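In PyTorch terms, that optimization setup corresponds roughly to the sketch below. Pretraining actually used the TensorFlow BERT code; `AdamW` and `get_linear_schedule_with_warmup` are only the closest Hugging Face equivalents, and the freshly initialized model here is a stand-in.

```python
import torch
from transformers import BertConfig, BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining(BertConfig())  # BERT-base sized model, randomly initialized

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,                 # peak learning rate
    betas=(0.9, 0.999),
    weight_decay=0.01,
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,        # linear warmup over the first 10k steps
    num_training_steps=2_000_000,   # two million steps, linear decay afterwards
)
```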
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-1-1600k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 1600k (uncased)
Seed 1 intermediate checkpoint 1600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 1600k (uncased)\nSeed 1 intermediate checkpoint 1600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 1600k (uncased)\nSeed 1 intermediate checkpoint 1600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 1 Checkpoint 160k (uncased)
Seed 1 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-1-160k')
model = BertModel.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-1-160k')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
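The checkpoint was also trained with the NSP head, so you can score whether one sentence plausibly follows another. The following is a minimal sketch, again assuming the pretraining heads are present in the uploaded weights; the two sentences are arbitrary examples.

```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

model_name = 'MultiBertGunjanPatrick/multiberts-seed-1-160k'
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForNextSentencePrediction.from_pretrained(model_name)

sentence_a = "The weather was terrible this morning."
sentence_b = "So we decided to stay indoors."
inputs = tokenizer(sentence_a, sentence_b, return_tensors='pt')

with torch.no_grad():
    logits = model(**inputs).logits

# index 0 = "sentence B follows sentence A", index 1 = "sentence B is random"
print(torch.softmax(logits, dim=-1))
```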
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
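You can reproduce that packing with the tokenizer itself; the `token_type_ids` it returns mark which segment each token belongs to. A small illustration follows (the two sentences are arbitrary, and the commented output is what a standard uncased BERT WordPiece vocabulary would be expected to give):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-1-160k')

encoded = tokenizer("How are you?", "I am fine, thanks.")
print(tokenizer.convert_ids_to_tokens(encoded['input_ids']))
# ['[CLS]', 'how', 'are', 'you', '?', '[SEP]', 'i', 'am', 'fine', ',', 'thanks', '.', '[SEP]']
print(encoded['token_type_ids'])
# [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]
```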
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-1-160k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 160k (uncased)
Seed 1 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 160k (uncased)\nSeed 1 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 160k (uncased)\nSeed 1 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 1 Checkpoint 1700k (uncased)
Seed 1 intermediate checkpoint 1700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
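As an illustration of that fine-tuning path, the sketch below puts a sequence-classification head on top of this checkpoint. The head is newly initialized and would still have to be trained; the two example texts and their labels are made up.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

model_name = 'MultiBertGunjanPatrick/multiberts-seed-1-1700k'
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(model_name, num_labels=2)

batch = tokenizer(["great movie!", "terrible plot"], padding=True, return_tensors='pt')
labels = torch.tensor([1, 0])  # hypothetical labels: 1 = positive, 0 = negative

outputs = model(**batch, labels=labels)
outputs.loss.backward()  # a real training loop would follow with optimizer.step()
print(outputs.logits)
```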
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-1-1700k')
model = BertModel.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-1-1700k')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
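The `output` object above exposes `last_hidden_state` (one 768-dimensional vector per token) and `pooler_output` (a single vector derived from the `[CLS]` token). One simple way to turn this into a fixed-size sentence feature for a downstream classifier is mean pooling over the non-padding tokens, sketched here as a continuation of the previous snippet:

```python
import torch

token_embeddings = output.last_hidden_state        # shape: (batch, seq_len, 768)
attention_mask = encoded_input['attention_mask']    # shape: (batch, seq_len)

mask = attention_mask.unsqueeze(-1).float()          # zero out padding positions
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)                      # torch.Size([1, 768])
```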
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-1-1700k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 1700k (uncased)
Seed 1 intermediate checkpoint 1700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 1700k (uncased)\nSeed 1 intermediate checkpoint 1700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 1700k (uncased)\nSeed 1 intermediate checkpoint 1700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 1 Checkpoint 1800k (uncased)
This is the seed-1 intermediate checkpoint at 1800k steps of the MultiBERTs (pretrained BERT) model, pretrained on English text using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# The checkpoint is hosted on the Hub under its full repository id.
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-1800k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-1800k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)
```
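As a small illustration (not part of the original card), with a recent version of `transformers` the returned `output` exposes the token-level features directly, and mean-pooling them is one simple way to obtain a sentence-level vector:

```python
# Token-level features: one 768-dimensional vector per input token (BERT-base hidden size).
last_hidden_state = output.last_hidden_state        # shape: (1, sequence_length, 768)

# A simple sentence-level feature: mean-pool the token vectors over the sequence dimension.
sentence_embedding = last_hidden_state.mean(dim=1)  # shape: (1, 768)
print(last_hidden_state.shape, sentence_embedding.shape)
```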
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a minimal sketch of this rule is given after the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
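The sketch below illustrates this 80/10/10 rule. It is illustrative only (the `mask_tokens` helper is hypothetical, not the original training code), it assumes a WordPiece `tokenizer` such as the one loaded in the usage snippet above, and it does not enforce the extra requirement that the random replacement token differ from the original one.

```python
import random

def mask_tokens(token_ids, tokenizer, mlm_probability=0.15):
    """Illustrative BERT-style masking: 15% of tokens selected, then 80/10/10 corruption."""
    labels = [-100] * len(token_ids)           # -100 marks positions ignored by the MLM loss
    for i, token_id in enumerate(token_ids):
        if token_id in tokenizer.all_special_ids:
            continue                            # never mask [CLS], [SEP] or [PAD]
        if random.random() < mlm_probability:   # 15% of the tokens are masked
            labels[i] = token_id                # the model must predict the original token
            roll = random.random()
            if roll < 0.8:                      # 80% of the time: replace with [MASK]
                token_ids[i] = tokenizer.mask_token_id
            elif roll < 0.9:                    # 10% of the time: replace with a random token
                token_ids[i] = random.randrange(tokenizer.vocab_size)
            # remaining 10%: leave the token unchanged
    return token_ids, labels
```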
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
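For reference, this optimizer and schedule can be approximated with standard PyTorch and `transformers` utilities. The sketch below is not the original TPU training code: it assumes `model` is the `BertModel` loaded earlier, and it uses `torch.optim.AdamW` as a stand-in for Adam with decoupled weight decay.

```python
import torch
from transformers import get_linear_schedule_with_warmup

num_train_steps = 2_000_000   # two million steps, as described above
warmup_steps = 10_000         # linear warmup, then linear decay

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,
    betas=(0.9, 0.999),
    weight_decay=0.01,
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=warmup_steps,
    num_training_steps=num_train_steps,
)
# In a training loop: loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```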
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-1-1800k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 1800k (uncased)
Seed 1 intermediate checkpoint 1800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 1800k (uncased)\nSeed 1 intermediate checkpoint 1800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 1800k (uncased)\nSeed 1 intermediate checkpoint 1800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 1 Checkpoint 180k (uncased)
This is the seed-1 intermediate checkpoint at 180k steps of the MultiBERTs (pretrained BERT) model, pretrained on English text using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
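As an illustration of that fine-tuning workflow, the sketch below adds a fresh classification head on top of this checkpoint. It is only a sketch: the label count and hyperparameters are placeholders, and `my_tokenized_dataset` is a hypothetical dataset that does not exist in this card.

```python
from transformers import (BertForSequenceClassification, BertTokenizer,
                          Trainer, TrainingArguments)

model_name = "MultiBertGunjanPatrick/multiberts-seed-1-180k"
tokenizer = BertTokenizer.from_pretrained(model_name)
# A randomly initialized classification head is added on top of the pretrained encoder.
model = BertForSequenceClassification.from_pretrained(model_name, num_labels=2)

training_args = TrainingArguments(
    output_dir="multiberts-seed-1-180k-finetuned",
    num_train_epochs=3,               # placeholder hyperparameters
    per_device_train_batch_size=16,
    learning_rate=3e-5,
)
# trainer = Trainer(model=model, args=training_args, train_dataset=my_tokenized_dataset)
# trainer.train()
```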
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# The checkpoint is hosted on the Hub under its full repository id.
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-180k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-180k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)
```
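For the masked language modeling use mentioned above, the checkpoint can also be queried through the `fill-mask` pipeline. This is a small illustration that assumes the checkpoint's MLM head is available and uses the full Hub repository id:

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="MultiBertGunjanPatrick/multiberts-seed-1-180k")
# Returns the highest-scoring candidate tokens for the [MASK] position, with scores.
print(unmasker("The goal of life is [MASK]."))
```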
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a sketch using current `transformers` utilities follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
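In current versions of `transformers` this dynamic masking scheme is implemented by `DataCollatorForLanguageModeling`; the sketch below reproduces it for illustration and is not the original preprocessing pipeline:

```python
from transformers import BertTokenizer, DataCollatorForLanguageModeling

tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-180k")
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=True,
    mlm_probability=0.15,  # 15% of tokens selected; the 80/10/10 split is applied internally
)

batch = collator([tokenizer("An example sentence to be dynamically masked.")])
print(batch["input_ids"])  # some ids replaced by [MASK] or random tokens
print(batch["labels"])     # original ids at masked positions, -100 elsewhere
```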
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-1-180k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 180k (uncased)
Seed 1 intermediate checkpoint 180k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 180k (uncased)\nSeed 1 intermediate checkpoint 180k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 180k (uncased)\nSeed 1 intermediate checkpoint 180k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
null | transformers | # MultiBERTs Seed 1 Checkpoint 1900k (uncased)
This is the seed-1 intermediate checkpoint at 1900k steps of the MultiBERTs (pretrained BERT) model, pretrained on English text using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# The checkpoint is hosted on the Hub under its full repository id.
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-1900k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-1900k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)
```
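Since next sentence prediction is the second pretraining objective, the checkpoint can also be queried with the NSP head. The example below is a sketch that assumes the NSP head is present in the checkpoint and uses the full Hub repository id; the two sentences are arbitrary examples:

```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

model_name = "MultiBertGunjanPatrick/multiberts-seed-1-1900k"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForNextSentencePrediction.from_pretrained(model_name)

prompt = "The children went to the playground."
next_sentence = "They played on the swings until sunset."
encoding = tokenizer(prompt, next_sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**encoding).logits
# Index 0 scores "next_sentence follows prompt", index 1 scores "it does not".
print(torch.softmax(logits, dim=-1))
```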
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
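Taken together, these rates mean that in expectation roughly 12% of all input tokens are shown as `[MASK]` (0.15 × 0.8), about 1.5% are swapped for a random token, and about 1.5% are left unchanged while still having to be predicted.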
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
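At 256 sequences of 512 tokens per step, two million steps correspond to roughly 262 billion token positions processed over the course of pretraining (256 × 512 × 2,000,000).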
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | MultiBertGunjanPatrick/multiberts-seed-1-1900k | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 1900k (uncased)
Seed 1 intermediate checkpoint 1900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps, and linear decay of the learning rate afterwards.
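A rough sketch of an equivalent optimization setup in PyTorch, using AdamW (Adam with decoupled weight decay) and the linear warmup/decay schedule from the Transformers library. This is an approximation of the setup described above, not the original training code:

```python
import torch
from transformers import BertConfig, BertForPreTraining, get_linear_schedule_with_warmup

# A freshly initialised BERT-base model stands in for the MultiBERTs model being pretrained.
model = BertForPreTraining(BertConfig())

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,             # learning rate from the card
    betas=(0.9, 0.999),  # beta_1 and beta_2 from the card
    weight_decay=0.01,
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,       # warmup for 10,000 steps
    num_training_steps=2_000_000,  # two million steps in total
)
```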
### BibTeX entry and citation info
<a href="URL">
	<img width="300px" src="URL">
</a>