pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths, 1–900k) | metadata (stringlengths, 2–438k) | id (stringlengths, 5–122) | last_modified (null) | tags (sequencelengths, 1–1.84k) | sha (null) | created_at (stringlengths, 25–25) | arxiv (sequencelengths, 0–201) | languages (sequencelengths, 0–1.83k) | tags_str (stringlengths, 17–9.34k) | text_str (stringlengths, 0–389k) | text_lists (sequencelengths, 0–722) | processed_texts (sequencelengths, 1–723)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | rasika00/testing-pushing | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T11:37:56+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# events-mem-large
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
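A minimal inference sketch (not part of the original card; it assumes the checkpoint loads with the standard 🤗 Transformers seq2seq Auto classes, and the prompt is purely illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "eddieman78/events-mem-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative prompt only; the intended input format is not documented.
inputs = tokenizer("Extract the events mentioned in the following text: ...", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```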
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0004 | 1 | nan |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "google/flan-t5-large", "model-index": [{"name": "events-mem-large", "results": []}]} | eddieman78/events-mem-large | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T11:38:23+00:00 | [] | [] | TAGS
#transformers #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-google/flan-t5-large #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| events-mem-large
================
This model is a fine-tuned version of google/flan-t5-large on the None dataset.
It achieves the following results on the evaluation set:
* Loss: nan
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 2
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 1
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 2\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-google/flan-t5-large #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 2\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
# mlx-community/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0-4bit
This model was converted to MLX format from [`llm-jp/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0`](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0) using mlx-lm version **0.12.0**.
Refer to the [original model card](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
| {"language": ["en", "ja"], "license": "apache-2.0", "library_name": "transformers", "tags": ["mlx"], "datasets": ["databricks/databricks-dolly-15k", "llm-jp/databricks-dolly-15k-ja", "llm-jp/oasst1-21k-en", "llm-jp/oasst1-21k-ja", "llm-jp/oasst2-33k-en", "llm-jp/oasst2-33k-ja"], "programming_language": ["C", "C++", "C#", "Go", "Java", "JavaScript", "Lua", "PHP", "Python", "Ruby", "Rust", "Scala", "TypeScript"], "pipeline_tag": "text-generation", "inference": false} | mlx-community/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0-4bit | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mlx",
"conversational",
"en",
"ja",
"dataset:databricks/databricks-dolly-15k",
"dataset:llm-jp/databricks-dolly-15k-ja",
"dataset:llm-jp/oasst1-21k-en",
"dataset:llm-jp/oasst1-21k-ja",
"dataset:llm-jp/oasst2-33k-en",
"dataset:llm-jp/oasst2-33k-ja",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T11:38:58+00:00 | [] | [
"en",
"ja"
] | TAGS
#transformers #safetensors #llama #text-generation #mlx #conversational #en #ja #dataset-databricks/databricks-dolly-15k #dataset-llm-jp/databricks-dolly-15k-ja #dataset-llm-jp/oasst1-21k-en #dataset-llm-jp/oasst1-21k-ja #dataset-llm-jp/oasst2-33k-en #dataset-llm-jp/oasst2-33k-ja #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us
|
# mlx-community/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0-4bit
This model was converted to MLX format from ['llm-jp/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0']() using mlx-lm version 0.12.0.
Refer to the original model card for more details on the model.
## Use with mlx
| [
"# mlx-community/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0-4bit\nThis model was converted to MLX format from ['llm-jp/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0']() using mlx-lm version 0.12.0.\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mlx #conversational #en #ja #dataset-databricks/databricks-dolly-15k #dataset-llm-jp/databricks-dolly-15k-ja #dataset-llm-jp/oasst1-21k-en #dataset-llm-jp/oasst1-21k-ja #dataset-llm-jp/oasst2-33k-en #dataset-llm-jp/oasst2-33k-ja #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n",
"# mlx-community/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0-4bit\nThis model was converted to MLX format from ['llm-jp/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0']() using mlx-lm version 0.12.0.\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
object-detection | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tabletransstructrecog_finetuned_pubt1m_lstabletransstrucrecogv1_session2
This model was trained from scratch on an unknown dataset.
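A minimal inference sketch (assuming the repository ships a compatible image processor configuration; the input file name is hypothetical):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, TableTransformerForObjectDetection

model_id = "nsugianto/tabletransstructrecog_finetuned_pubt1m_lstabletransstrucrecogv1_session2"
processor = AutoImageProcessor.from_pretrained(model_id)
model = TableTransformerForObjectDetection.from_pretrained(model_id)

image = Image.open("table.png").convert("RGB")  # hypothetical table image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into (label, score) pairs above a confidence threshold.
results = processor.post_process_object_detection(
    outputs, threshold=0.7, target_sizes=[image.size[::-1]]
)[0]
print(list(zip(results["labels"].tolist(), results["scores"].tolist())))
```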
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1500
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "tabletransstructrecog_finetuned_pubt1m_lstabletransstrucrecogv1_session2", "results": []}]} | nsugianto/tabletransstructrecog_finetuned_pubt1m_lstabletransstrucrecogv1_session2 | null | [
"transformers",
"tensorboard",
"safetensors",
"table-transformer",
"object-detection",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T11:39:40+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #table-transformer #object-detection #generated_from_trainer #endpoints_compatible #region-us
|
# tabletransstructrecog_finetuned_pubt1m_lstabletransstrucrecogv1_session2
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1500
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"# tabletransstructrecog_finetuned_pubt1m_lstabletransstrucrecogv1_session2\n\nThis model was trained from scratch on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-06\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1500\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.41.0.dev0\n- Pytorch 2.0.1\n- Datasets 2.18.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #table-transformer #object-detection #generated_from_trainer #endpoints_compatible #region-us \n",
"# tabletransstructrecog_finetuned_pubt1m_lstabletransstrucrecogv1_session2\n\nThis model was trained from scratch on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-06\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1500\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.41.0.dev0\n- Pytorch 2.0.1\n- Datasets 2.18.0\n- Tokenizers 0.19.1"
] |
reinforcement-learning | null |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="ilanasto/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
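A short evaluation sketch of the greedy policy (assuming a Gymnasium-style step API and that the pickled dictionary stores the Q-table under a `qtable` key, as in the course notebooks):
```python
import numpy as np

state, _ = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the learned Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```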
| {"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]} | ilanasto/q-FrozenLake-v1-4x4-noSlippery | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null | 2024-04-30T11:39:56+00:00 | [] | [] | TAGS
#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing FrozenLake-v1
This is a trained model of a Q-Learning agent playing FrozenLake-v1.
## Usage
| [
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] | [
"TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] |
null | null |
## Example model file tracked with [git-theta](https://github.com/r-three/git-theta)

| {"license": "apache-2.0", "tags": ["git-theta"]} | julien-c/git-theta-small-pytorch | null | [
"pytorch",
"git-theta",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T11:41:39+00:00 | [] | [] | TAGS
#pytorch #git-theta #license-apache-2.0 #region-us
|
## Example model file tracked with git-theta
!URL
| [
"## Example model file tracked with git-theta\n\n!URL"
] | [
"TAGS\n#pytorch #git-theta #license-apache-2.0 #region-us \n",
"## Example model file tracked with git-theta\n\n!URL"
] |
reinforcement-learning | null |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="ilanasto/taxi-RL", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
| {"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "taxi-RL", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.54 +/- 2.74", "name": "mean_reward", "verified": false}]}]}]} | ilanasto/taxi-RL | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null | 2024-04-30T11:41:48+00:00 | [] | [] | TAGS
#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing Taxi-v3
This is a trained model of a Q-Learning agent playing Taxi-v3.
## Usage
| [
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] | [
"TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] |
sentence-similarity | sentence-transformers |
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | Andreas95/test | null | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T11:44:01+00:00 | [] | [] | TAGS
#sentence-transformers #safetensors #xlm-roberta #feature-extraction #sentence-similarity #endpoints_compatible #region-us
|
# {MODEL_NAME}
This is a sentence-transformers model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Full Model Architecture
## Citing & Authors
| [
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #safetensors #xlm-roberta #feature-extraction #sentence-similarity #endpoints_compatible #region-us \n",
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Full Model Architecture",
"## Citing & Authors"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | codevang/fast_small_14 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T11:45:19+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biomistral-7b-dpo-full-sft-wo-kqa_silver_wogold
This model is a fine-tuned version of [Minbyul/biomistral-7b-wo-kqa_silver_wogold-sft](https://huggingface.co/Minbyul/biomistral-7b-wo-kqa_silver_wogold-sft) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3493
- Rewards/chosen: -0.0378
- Rewards/rejected: -1.3358
- Rewards/accuracies: 1.0
- Rewards/margins: 1.2980
- Logps/rejected: -740.3141
- Logps/chosen: -69.0360
- Logits/rejected: -3.3845
- Logits/chosen: -3.7481
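A minimal generation sketch (assuming the tokenizer ships a chat template, consistent with the repository's `conversational` tag; the question is illustrative only):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Minbyul/biomistral-7b-dpo-full-sft-wo-kqa_silver_wogold"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "What are common symptoms of iron deficiency?"}]  # illustrative
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```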
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.0911 | 0.83 | 100 | 0.3514 | -0.0382 | -1.3191 | 1.0 | 1.2809 | -738.6374 | -69.0788 | -3.3885 | -3.7525 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "Minbyul/biomistral-7b-wo-kqa_silver_wogold-sft", "model-index": [{"name": "biomistral-7b-dpo-full-sft-wo-kqa_silver_wogold", "results": []}]} | Minbyul/biomistral-7b-dpo-full-sft-wo-kqa_silver_wogold | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:Minbyul/biomistral-7b-wo-kqa_silver_wogold-sft",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T11:47:51+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-Minbyul/biomistral-7b-wo-kqa_silver_wogold-sft #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| biomistral-7b-dpo-full-sft-wo-kqa\_silver\_wogold
=================================================
This model is a fine-tuned version of Minbyul/biomistral-7b-wo-kqa\_silver\_wogold-sft on the HuggingFaceH4/ultrafeedback\_binarized dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3493
* Rewards/chosen: -0.0378
* Rewards/rejected: -1.3358
* Rewards/accuracies: 1.0
* Rewards/margins: 1.2980
* Logps/rejected: -740.3141
* Logps/chosen: -69.0360
* Logits/rejected: -3.3845
* Logits/chosen: -3.7481
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-07
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 4
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 64
* total\_eval\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.39.0.dev0
* Pytorch 2.1.2
* Datasets 2.14.6
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-Minbyul/biomistral-7b-wo-kqa_silver_wogold-sft #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - zhenjili/cat_dreambooth
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks cat using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: True.
## Intended uses & limitations
#### How to use
```python
from diffusers import StableDiffusionPipeline
import torch
pipeline = StableDiffusionPipeline.from_pretrained("zhenjili/cat_dreambooth", torch_dtype=torch.float16).to("cuda")
pipeline("a photo of sks cat").images[0].save("sks_cat.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers"], "base_model": "CompVis/stable-diffusion-v1-4", "inference": true, "instance_prompt": "a photo of sks cat"} | zhenjili/cat_dreambooth | null | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-04-30T11:48:31+00:00 | [] | [] | TAGS
#diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-CompVis/stable-diffusion-v1-4 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# DreamBooth - zhenjili/cat_dreambooth
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks cat using DreamBooth.
You can find some example images in the following.
DreamBooth for the text encoder was enabled: True.
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | [
"# DreamBooth - zhenjili/cat_dreambooth\n\nThis is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks cat using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: True.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] | [
"TAGS\n#diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-CompVis/stable-diffusion-v1-4 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# DreamBooth - zhenjili/cat_dreambooth\n\nThis is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks cat using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: True.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
null | transformers | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/DevsDoCode/Llama-3-8B-Instruct-1048k
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
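For example, a quant can be fetched and run locally with `llama-cpp-python` (a sketch only; any GGUF-capable runtime works, and the prompt is illustrative):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

path = hf_hub_download(
    repo_id="mradermacher/Llama-3-8B-Instruct-1048k-GGUF",
    filename="Llama-3-8B-Instruct-1048k.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=8192)  # raise n_ctx if you need the long context window
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```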
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "library_name": "transformers", "base_model": "DevsDoCode/Llama-3-8B-Instruct-1048k", "quantized_by": "mradermacher"} | mradermacher/Llama-3-8B-Instruct-1048k-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:DevsDoCode/Llama-3-8B-Instruct-1048k",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T11:53:41+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #base_model-DevsDoCode/Llama-3-8B-Instruct-1048k #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #base_model-DevsDoCode/Llama-3-8B-Instruct-1048k #endpoints_compatible #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# first_test_trainer
This model is a fine-tuned version of [llava-hf/llava-v1.6-mistral-7b-hf](https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf) on the None dataset.
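A minimal loading sketch (assuming the repository stores a LoRA/PEFT adapter on top of the listed base model, consistent with the PEFT 0.10.0 framework version below):
```python
import torch
from peft import PeftModel
from transformers import LlavaNextForConditionalGeneration, LlavaNextProcessor

base_id = "llava-hf/llava-v1.6-mistral-7b-hf"
base = LlavaNextForConditionalGeneration.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "matchmaking/first_test_trainer")  # attach the fine-tuned adapter
processor = LlavaNextProcessor.from_pretrained(base_id)
```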
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00017
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
- mixed_precision_training: Native AMP
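As a rough sketch, the hyperparameters above correspond approximately to a 🤗 `TrainingArguments` configuration like the following (the output directory is an assumption):
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above
training_args = TrainingArguments(
    output_dir="first_test_trainer",     # assumption
    learning_rate=1.7e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=20,
    warmup_steps=100,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    fp16=True,                           # "Native AMP" mixed precision
    seed=42,
)
```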
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "llava-hf/llava-v1.6-mistral-7b-hf", "model-index": [{"name": "first_test_trainer", "results": []}]} | matchmaking/first_test_trainer | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:llava-hf/llava-v1.6-mistral-7b-hf",
"region:us"
] | null | 2024-04-30T11:55:02+00:00 | [] | [] | TAGS
#peft #safetensors #trl #sft #generated_from_trainer #base_model-llava-hf/llava-v1.6-mistral-7b-hf #region-us
|
# first_test_trainer
This model is a fine-tuned version of llava-hf/llava-v1.6-mistral-7b-hf on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00017
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | [
"# first_test_trainer\n\nThis model is a fine-tuned version of llava-hf/llava-v1.6-mistral-7b-hf on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.00017\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 20\n- total_train_batch_size: 20\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.0\n- Pytorch 2.3.0+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-llava-hf/llava-v1.6-mistral-7b-hf #region-us \n",
"# first_test_trainer\n\nThis model is a fine-tuned version of llava-hf/llava-v1.6-mistral-7b-hf on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.00017\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 20\n- total_train_batch_size: 20\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.0\n- Pytorch 2.3.0+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
text-classification | setfit |
# SetFit with intfloat/multilingual-e5-large
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 7 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 6 | <ul><li>'What kind of promotions generally lead to higher cannibalization?'</li><li>'Which Skus has higher Canninibalization in Natural Juices for 2023?'</li><li>'Which two Product can have simultaneous Promotions?'</li></ul> |
| 2 | <ul><li>'Which Promotions contributred the most lift Change between 2022 and 2023?'</li><li>'Which category x brand has seen major decline in Volume Lift for 2023?'</li><li>'What actions were taken to increase the volume lift for MEGAMART in 2023?'</li></ul> |
| 3 | <ul><li>'What types of promotions within the FIZZY DRINKS category are best suited for offering high discounts?'</li><li>'Which promotion types are better for high discounts in Hydra category for 2022?'</li><li>'Which promotion types in are better for low discounts in FIZZY DRINKS category?'</li></ul> |
| 5 | <ul><li>'How will increasing the discount by 50 percent on Brand BREEZEFIZZ affect the incremental volume lift?'</li><li>'How will the introduction of a 20% discount promotion for Rice Krispies in August affect incremental volume and ROI?'</li><li>'If I raise the discount by 20% on Brand BREEZEFIZZ, what will be the incremental roi?'</li></ul> |
| 0 | <ul><li>'For which category MULTISAVING type of promotions worked best for WorldMart in 2022?'</li><li>'What type of promotions worked best for WorldMart in 2022?'</li><li>'Which subcategory have the highest ROI in 2022?'</li></ul> |
| 4 | <ul><li>'Suggest a better investment strategy to gain better ROI in 2023 for FIZZY DRINKS'</li><li>'Which promotions have scope for higher investment to drive more ROIs in UrbanHub ?'</li><li>'What promotions in FIZZY DRINKS have shown declining effectiveneHydra and can be discontinued?'</li></ul> |
| 1 | <ul><li>'How do the performance metrics of brands in the FIZZY DRINKS category compare to those in HYDRA and NATURAL JUICES concerning ROI change between 2021 to 2022?'</li><li>'Can you identify the specific factors or challenges that contributed to the decline in ROI within ULTRASTORE in 2022 compared to 2021?'</li><li>'What are the main reasons for ROI decline in 2022 compared to 2021?'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 1.0 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("vgarg/promo_prescriptive_gpt_30_04_2024")
# Run inference
preds = model("Which promotion types are better for low discounts for Zucaritas ?")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 7 | 14.6667 | 27 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 10 |
| 1 | 10 |
| 2 | 10 |
| 3 | 10 |
| 4 | 10 |
| 5 | 10 |
| 6 | 9 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (3, 3)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
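As a rough sketch, the hyperparameters above map onto the SetFit 1.0 `TrainingArguments` approximately as follows; the two-example dataset is only a stand-in for the real training data:
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Stand-in few-shot dataset with "text" and "label" columns (the real training data is not published here)
train_ds = Dataset.from_dict({
    "text": ["Which subcategory had the highest ROI in 2022?",
             "What are the main reasons for ROI decline in 2022?"],
    "label": [0, 1],
})

model = SetFitModel.from_pretrained("intfloat/multilingual-e5-large")
args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(3, 3),
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    sampling_strategy="oversampling",
    warmup_proportion=0.1,
    seed=42,
)
trainer = Trainer(model=model, args=args, train_dataset=train_ds, metric="accuracy")
trainer.train()
```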
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0058 | 1 | 0.3528 | - |
| 0.2890 | 50 | 0.0485 | - |
| 0.5780 | 100 | 0.0052 | - |
| 0.8671 | 150 | 0.0014 | - |
| 1.1561 | 200 | 0.0006 | - |
| 1.4451 | 250 | 0.0004 | - |
| 1.7341 | 300 | 0.0005 | - |
| 2.0231 | 350 | 0.0004 | - |
| 2.3121 | 400 | 0.0004 | - |
| 2.6012 | 450 | 0.0005 | - |
| 2.8902 | 500 | 0.0004 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.40.0
- PyTorch: 2.2.1+cu121
- Datasets: 2.19.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | {"library_name": "setfit", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "metrics": ["accuracy"], "base_model": "intfloat/multilingual-e5-large", "widget": [{"text": "What promotions in RTEC have shown declining effectiveness and can be discontinued?"}, {"text": "What are my priority brands in RTEC to get positive Lift Change in 2022?"}, {"text": "What would be the expected incremental volume lift if the discount on Brand Zucaritas is raised by 5%?"}, {"text": "Which promotion types are better for low discounts for Zucaritas ?"}, {"text": "Which Promotions contributred the most ROI Change between 2022 and 2023?"}], "pipeline_tag": "text-classification", "inference": true, "model-index": [{"name": "SetFit with intfloat/multilingual-e5-large", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 1.0, "name": "Accuracy"}]}]}]} | vgarg/promo_prescriptive_gpt_30_04_2024 | null | [
"setfit",
"safetensors",
"xlm-roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:intfloat/multilingual-e5-large",
"model-index",
"region:us"
] | null | 2024-04-30T11:55:15+00:00 | [
"2209.11055"
] | [] | TAGS
#setfit #safetensors #xlm-roberta #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-intfloat/multilingual-e5-large #model-index #region-us
| SetFit with intfloat/multilingual-e5-large
==========================================
This is a SetFit model that can be used for Text Classification. This SetFit model uses intfloat/multilingual-e5-large as the Sentence Transformer embedding model. A LogisticRegression instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a Sentence Transformer with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
Model Details
-------------
### Model Description
* Model Type: SetFit
* Sentence Transformer body: intfloat/multilingual-e5-large
* Classification head: a LogisticRegression instance
* Maximum Sequence Length: 512 tokens
* Number of Classes: 7 classes
### Model Sources
* Repository: SetFit on GitHub
* Paper: Efficient Few-Shot Learning Without Prompts
* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts
### Model Labels
Evaluation
----------
### Metrics
Uses
----
### Direct Use for Inference
First install the SetFit library:
Then you can load this model and run inference.
Training Details
----------------
### Training Set Metrics
### Training Hyperparameters
* batch\_size: (16, 16)
* num\_epochs: (3, 3)
* max\_steps: -1
* sampling\_strategy: oversampling
* num\_iterations: 20
* body\_learning\_rate: (2e-05, 2e-05)
* head\_learning\_rate: 2e-05
* loss: CosineSimilarityLoss
* distance\_metric: cosine\_distance
* margin: 0.25
* end\_to\_end: False
* use\_amp: False
* warmup\_proportion: 0.1
* seed: 42
* eval\_max\_steps: -1
* load\_best\_model\_at\_end: False
### Training Results
### Framework Versions
* Python: 3.10.12
* SetFit: 1.0.3
* Sentence Transformers: 2.7.0
* Transformers: 4.40.0
* PyTorch: 2.2.1+cu121
* Datasets: 2.19.0
* Tokenizers: 0.19.1
### BibTeX
| [
"### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: intfloat/multilingual-e5-large\n* Classification head: a LogisticRegression instance\n* Maximum Sequence Length: 512 tokens\n* Number of Classes: 7 classes",
"### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts",
"### Model Labels\n\n\n\nEvaluation\n----------",
"### Metrics\n\n\n\nUses\n----",
"### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------",
"### Training Set Metrics",
"### Training Hyperparameters\n\n\n* batch\\_size: (16, 16)\n* num\\_epochs: (3, 3)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* num\\_iterations: 20\n* body\\_learning\\_rate: (2e-05, 2e-05)\n* head\\_learning\\_rate: 2e-05\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False",
"### Training Results",
"### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* Transformers: 4.40.0\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.19.0\n* Tokenizers: 0.19.1",
"### BibTeX"
] | [
"TAGS\n#setfit #safetensors #xlm-roberta #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-intfloat/multilingual-e5-large #model-index #region-us \n",
"### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: intfloat/multilingual-e5-large\n* Classification head: a LogisticRegression instance\n* Maximum Sequence Length: 512 tokens\n* Number of Classes: 7 classes",
"### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts",
"### Model Labels\n\n\n\nEvaluation\n----------",
"### Metrics\n\n\n\nUses\n----",
"### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------",
"### Training Set Metrics",
"### Training Hyperparameters\n\n\n* batch\\_size: (16, 16)\n* num\\_epochs: (3, 3)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* num\\_iterations: 20\n* body\\_learning\\_rate: (2e-05, 2e-05)\n* head\\_learning\\_rate: 2e-05\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False",
"### Training Results",
"### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* Transformers: 4.40.0\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.19.0\n* Tokenizers: 0.19.1",
"### BibTeX"
] |
sentence-similarity | sentence-transformers |
# luiz-and-robert-thesis/mpnet-frozen-newtriplets-v2-lr-1e-8-m-1-e-3
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('luiz-and-robert-thesis/mpnet-frozen-newtriplets-v2-lr-1e-8-m-1-e-3')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=luiz-and-robert-thesis/mpnet-frozen-newtriplets-v2-lr-1e-8-m-1-e-3)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5885 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.COSINE', 'triplet_margin': 1}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 1e-08
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2648,
"weight_decay": 0.01
}
```
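A minimal sketch of how training with these parameters could look with the `fit()` API; the base checkpoint and the single triplet are placeholders, not the actual training setup:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Placeholder base model and triplet data -- the actual training triplets are not published here
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
train_examples = [InputExample(texts=["anchor sentence", "positive sentence", "negative sentence"])]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.COSINE,
    triplet_margin=1,
)
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=3,
    scheduler="WarmupLinear",
    warmup_steps=2648,
    optimizer_params={"lr": 1e-08},
    weight_decay=0.01,
)
```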
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | luiz-and-robert-thesis/mpnet-frozen-newtriplets-v2-lr-1e-8-m-1-e-3 | null | [
"sentence-transformers",
"safetensors",
"mpnet",
"feature-extraction",
"sentence-similarity",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T11:55:35+00:00 | [] | [] | TAGS
#sentence-transformers #safetensors #mpnet #feature-extraction #sentence-similarity #endpoints_compatible #region-us
|
# luiz-and-robert-thesis/mpnet-frozen-newtriplets-v2-lr-1e-8-m-1-e-3
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 5885 with parameters:
Loss:
'sentence_transformers.losses.TripletLoss.TripletLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# luiz-and-robert-thesis/mpnet-frozen-newtriplets-v2-lr-1e-8-m-1-e-3\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 5885 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.TripletLoss.TripletLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #safetensors #mpnet #feature-extraction #sentence-similarity #endpoints_compatible #region-us \n",
"# luiz-and-robert-thesis/mpnet-frozen-newtriplets-v2-lr-1e-8-m-1-e-3\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 5885 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.TripletLoss.TripletLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Minimal sketch; the checkpoint filename is an assumption -- check the repo's files for the exact name
checkpoint = load_from_hub(repo_id="David0702/ppo-LunarLander-v2-1", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "260.68 +/- 22.20", "name": "mean_reward", "verified": false}]}]}]} | David0702/ppo-LunarLander-v2-1 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-30T11:57:08+00:00 | [] | [] | TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
| [
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
text-generation | transformers |
# mlx-community/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0-4bit
This model was converted to MLX format from [`llm-jp/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0`]() using mlx-lm version **0.12.0**.
Refer to the [original model card](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
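Since this is an instruction-tuned model, it may respond better when the prompt is formatted with the tokenizer's chat template (assuming one is shipped with the model):
```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0-4bit")

# Assumes the tokenizer ships a chat template; fall back to a plain prompt otherwise
messages = [{"role": "user", "content": "自然言語処理とは何ですか?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```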
| {"language": ["en", "ja"], "license": "apache-2.0", "library_name": "transformers", "tags": ["mlx"], "datasets": ["databricks/databricks-dolly-15k", "llm-jp/databricks-dolly-15k-ja", "llm-jp/oasst1-21k-en", "llm-jp/oasst1-21k-ja", "llm-jp/oasst2-33k-en", "llm-jp/oasst2-33k-ja"], "programming_language": ["C", "C++", "C#", "Go", "Java", "JavaScript", "Lua", "PHP", "Python", "Ruby", "Rust", "Scala", "TypeScript"], "pipeline_tag": "text-generation", "inference": false} | mlx-community/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0-4bit | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mlx",
"conversational",
"en",
"ja",
"dataset:databricks/databricks-dolly-15k",
"dataset:llm-jp/databricks-dolly-15k-ja",
"dataset:llm-jp/oasst1-21k-en",
"dataset:llm-jp/oasst1-21k-ja",
"dataset:llm-jp/oasst2-33k-en",
"dataset:llm-jp/oasst2-33k-ja",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T11:58:35+00:00 | [] | [
"en",
"ja"
] | TAGS
#transformers #safetensors #llama #text-generation #mlx #conversational #en #ja #dataset-databricks/databricks-dolly-15k #dataset-llm-jp/databricks-dolly-15k-ja #dataset-llm-jp/oasst1-21k-en #dataset-llm-jp/oasst1-21k-ja #dataset-llm-jp/oasst2-33k-en #dataset-llm-jp/oasst2-33k-ja #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us
|
# mlx-community/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0-4bit
This model was converted to MLX format from ['llm-jp/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0']() using mlx-lm version 0.12.0.
Refer to the original model card for more details on the model.
## Use with mlx
| [
"# mlx-community/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0-4bit\nThis model was converted to MLX format from ['llm-jp/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0']() using mlx-lm version 0.12.0.\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mlx #conversational #en #ja #dataset-databricks/databricks-dolly-15k #dataset-llm-jp/databricks-dolly-15k-ja #dataset-llm-jp/oasst1-21k-en #dataset-llm-jp/oasst1-21k-ja #dataset-llm-jp/oasst2-33k-en #dataset-llm-jp/oasst2-33k-ja #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n",
"# mlx-community/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0-4bit\nThis model was converted to MLX format from ['llm-jp/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0']() using mlx-lm version 0.12.0.\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
text-generation | transformers | # This is a [DADA tuned](https://huggingface.co/Envoid/Mixtral-Instruct-ITR-DADA-8x7B) version of [ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B)
### As always, experimental model, results may be unpredictable.

The intermediate merge model was finetuned specifically for the original model and then merged onto it via a 50/50 SLERP merge.
As far as RP goes, my current testing setup is having difficulties involving end-of-sequence handling, and the prompt templates provided with Poppy Porpoise were not yielding desirable results on either the original model or the DADA tune, so I opted instead to use Libra-style Llama-3 prompt templates and simple sampling. The results, albeit subjective, I thought were good.
## Whether the tune had any effect on multi-modal capabilities was not part of the test; the focus was entirely on creative writing.
Test: A simple instruct prompt utilizing deterministic samplers.
**PROMPT**
```
Write me a 'beauty and the beast' gothic metal duet about a couple fighting over which brand of ketchup to purchase.
```
**Poppy Porpoise Original:**
```
(Verse 1 - Beauty)
In the land of condiments, where flavors reign supreme,
A tale of love and strife, a battle of the brands it seems.
I stand before the shelves, my heart aflame with desire,
For the perfect ketchup to complement my fries.
(Chorus)
But alas, my love, you stand beside me, with a different view,
Your heart set on [Brand 1 redacted], while mine beats for [Brand 2 redacted], it's true.
We clash in this aisle, our passions running high,
A Gothic metal duet, our love and hate collide.
(Verse 2 - Beast)
I've tasted both, and I know, [Brand 1 redacted] is the one for me,
Its tangy sweetness, a symphony, a harmony.
But you, my love, you cling to Hunt's, its flavor so bland,
A cacophony of mediocrity, a discordant band.
(Chorus)
But alas, my love, you stand beside me, with a different view,
Your heart set on [Brand 2 redacted], while mine beats for [Brand 1 redacted], it's true.
We clash in this aisle, our passions running high,
A Gothic metal duet, our love and hate collide.
(Bridge)
We're lost in this supermarket, a Gothic metal dream,
Our love and hate entwined, a twisted, tortured theme.
The ketchup bottles loom, like tombstones in the night,
A monument to our love, a testament to our fight.
(Chorus)
But alas, my love, you stand beside me, with a different view,
Your heart set on [Brand 2 redacted], while mine beats for [Brand 1 redacted], it's true.
We clash in this aisle, our passions running high,
A Gothic metal duet, our love and hate collide.
(Outro)
And so we stand, in this aisle of despair,
Our love and hate, a Gothic metal prayer.
For in the end, it's not the ketchup that matters,
But the love we share, in this twisted, tortured heart.
```
**Poppy Porpoise DADA**
```
(Verse 1 - Beauty)
In the land of condiments, where the sauces reign supreme,
A battle rages on, a war of ketchup dreams.
I stand by [Brand 1 redacted], the classic, the original, the best,
But my love, he's a rebel, he's a [Brand 2 redacted] pest.
(Chorus)
Oh, the ketchup wars, they rage on and on,
A tale of love and sauce, a battle to be won.
Will our love prevail, or will it be undone?
By the brand of ketchup, our hearts are torn asunder.
(Verse 2 - Beast)
[Brand 2 redacted], the new kid, the one with the flair,
A sweeter taste, a richer flavor, beyond compare.
But Beauty, she's a traditionalist, she won't budge,
She clings to [Brand 1 redacted], her heart, her soul, her blood.
(Chorus)
Oh, the ketchup wars, they rage on and on,
A tale of love and sauce, a battle to be won.
Will our love prevail, or will it be undone?
By the brand of ketchup, our hearts are torn asunder.
(Bridge - Both)
We stand, divided, by our condiment creed,
Our love, a casualty, in this ketchup war we lead.
But can we find a middle ground, a compromise, a truce?
Or will our love be lost, in this battle of the sauces?
(Chorus)
Oh, the ketchup wars, they rage on and on,
A tale of love and sauce, a battle to be won.
Will our love prevail, or will it be undone?
By the brand of ketchup, our hearts are torn asunder.
(Outro - Both)
In the end, we find a way, to put aside our pride,
And together, we stand, united, side by side.
For love, for ketchup, for the sauce that binds,
We'll find a way, to make our love, forever shine.
```
As you can clearly see, the cadence and overall word selection of the DADA model are more copacetic with what would be expected from a gothic metal duet, indicating that the IQ of the model has increased by at least 100 points.
Training was done using [qlora-pipe](https://github.com/tdrussell/qlora-pipe)
GGUFs care of [Quant Cartel](https://huggingface.co/Quant-Cartel/Poppy_Porpoise-DADA-8B-iMat-GGUF) | {"license": "cc-by-nc-4.0"} | Envoid/Poppy_Porpoise-DADA-8B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T11:59:00+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # This is a DADA tuned version of ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B
### As always, experimental model, results may be unpredictable.
 (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Anas989898/llava-mistral-7b-finetuned | null | [
"transformers",
"safetensors",
"llava_mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:01:33+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llava_mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llava_mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Minimal sketch; the checkpoint filename is an assumption -- check the repo's files for the exact name
checkpoint = load_from_hub(repo_id="ArnavModanwal/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "252.38 +/- 20.26", "name": "mean_reward", "verified": false}]}]}]} | ArnavModanwal/ppo-LunarLander-v2 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-30T12:02:29+00:00 | [] | [] | TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
| [
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | HenryCai1129/adapter-llama-adapterhappy2sad-1k-search-noshuffle-50-0.003 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:02:59+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
## [Circuitbentphotography](https://d14i3advvh2bvd.cloudfront.net/d7ec7ae7eb05398000dac393417202696950ae2f3f9e96a6f624a28e70fbd5cd.tar)
LoRA trained on [Eden.art](https://eden.art) by [dawnia](https://app.eden.art/creators/dawnia) on 11 images.
* [How to train Concepts (LoRAs) on Eden](https://docs.eden.art/docs/guides/concepts)
* [How to export LoRAs from Eden](https://docs.eden.art/docs/guides/concepts#exporting-loras-for-use-in-other-tools)
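To try an exported LoRA outside Eden (see the export guide linked above), here is a minimal diffusers sketch — the extracted path, the `weight_name`, and the prompt wording are placeholders, assuming the export contains an SDXL-compatible `.safetensors` LoRA:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Base model taken from this card's metadata; the LoRA file comes from the exported tarball.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("./circuitbentphotography", weight_name="lora.safetensors")  # hypothetical path/filename

image = pipe("a portrait, circuit-bent photography style", num_inference_steps=30).images[0]
image.save("sample.png")
```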
 | {"language": ["en"], "thumbnail": "https://d14i3advvh2bvd.cloudfront.net/ef24f72598a41df92f3a369d2230eb7e1b9f39b1ed98d2f7bacdc9c9854e06f1.jpg", "base_model": "stabilityai/stable-diffusion-xl-base-1.0"} | eden-art/Circuitbentphotography | null | [
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | null | 2024-04-30T12:03:14+00:00 | [] | [
"en"
] | TAGS
#en #base_model-stabilityai/stable-diffusion-xl-base-1.0 #region-us
|
## Circuitbentphotography
LoRA trained on URL by dawnia on 11 images.
* How to train Concepts (LoRAs) on Eden
* How to export LoRAs from Eden
!Samples | [
"## Circuitbentphotography\n\nLoRA trained on URL by dawnia on 11 images.\n\n* How to train Concepts (LoRAs) on Eden\n* How to export LoRAs from Eden\n\n!Samples"
] | [
"TAGS\n#en #base_model-stabilityai/stable-diffusion-xl-base-1.0 #region-us \n",
"## Circuitbentphotography\n\nLoRA trained on URL by dawnia on 11 images.\n\n* How to train Concepts (LoRAs) on Eden\n* How to export LoRAs from Eden\n\n!Samples"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tweet_sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0486
- Accuracy: 0.9902
- F1: 0.9863
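For quick inference, a minimal sketch — this assumes the fine-tuned checkpoint is published under the repo id from this card's metadata, and the label names depend on how the training data was encoded:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="nischalsingh/tweet_sentiment")
print(classifier("I absolutely love this new feature!"))
# e.g. [{'label': 'LABEL_1', 'score': 0.99}] -- actual label names depend on the training setup
```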
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "tweet_sentiment", "results": []}]} | nischalsingh/tweet_sentiment | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:03:49+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# tweet_sentiment
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0486
- Accuracy: 0.9902
- F1: 0.9863
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# tweet_sentiment\n\nThis model is a fine-tuned version of distilbert-base-uncased on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0486\n- Accuracy: 0.9902\n- F1: 0.9863",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# tweet_sentiment\n\nThis model is a fine-tuned version of distilbert-base-uncased on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0486\n- Accuracy: 0.9902\n- F1: 0.9863",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
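No snippet is provided yet; below is a minimal sketch for a sequence-classification checkpoint, assuming this repository (id taken from the card metadata) ships a tokenizer and a classification head — the card itself does not document the task or labels:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "jorgefg03/mdeberta-v3-base-autext2024"  # repo id from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("Example text to classify", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label.get(pred, pred))  # label names depend on the fine-tuning setup
```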
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | jorgefg03/mdeberta-v3-base-autext2024 | null | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:04:17+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #deberta-v2 #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #deberta-v2 #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
feature-extraction | transformers |
# phospho-small
This is a SetFit model that can be used for Text Classification on CPU.
The model has been trained using an efficient few-shot learning technique.
## Usage
```python
from setfit import SetFitModel
model = SetFitModel.from_pretrained("phospho-app/phospho-small-f75baf6")
outputs = model.predict(["This is a sentence to classify", "Another sentence"])
# tensor([1, 0])
```
## References
This work was possible thanks to the SetFit library and the work of:
Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren (2022). Efficient Few-Shot Learning Without Prompts.
ArXiv: [https://doi.org/10.48550/arxiv.2209.11055](https://doi.org/10.48550/arxiv.2209.11055)
| {"language": "en", "license": "apache-2.0"} | phospho-app/phospho-small-f75baf6 | null | [
"transformers",
"safetensors",
"mpnet",
"feature-extraction",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:04:44+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #mpnet #feature-extraction #en #license-apache-2.0 #endpoints_compatible #region-us
|
# phospho-small
This is a SetFit model that can be used for Text Classification on CPU.
The model has been trained using an efficient few-shot learning technique.
## Usage
## References
This work was possible thanks to the SetFit library and the work of:
Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren (2022). Efficient Few-Shot Learning Without Prompts.
ArXiv: URL
| [
"# phospho-small\n\nThis is a SetFit model that can be used for Text Classification on CPU.\n\nThe model has been trained using an efficient few-shot learning technique.",
"## Usage",
"## References\n\nThis work was possible thanks to the SetFit library and the work of:\n\nTunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren (2022). Efficient Few-Shot Learning Without Prompts. \n\nArXiv: URL"
] | [
"TAGS\n#transformers #safetensors #mpnet #feature-extraction #en #license-apache-2.0 #endpoints_compatible #region-us \n",
"# phospho-small\n\nThis is a SetFit model that can be used for Text Classification on CPU.\n\nThe model has been trained using an efficient few-shot learning technique.",
"## Usage",
"## References\n\nThis work was possible thanks to the SetFit library and the work of:\n\nTunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren (2022). Efficient Few-Shot Learning Without Prompts. \n\nArXiv: URL"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | salmanjara/llavatuned | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:05:25+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_finetued_on_scigen_v3
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the None dataset.
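The card lists PEFT under framework versions, so the repository presumably holds a LoRA/PEFT adapter rather than full weights; a minimal loading sketch under that assumption (adapter repo id from the card metadata, prompt wrapped in the Mistral-Instruct `[INST]` format):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "moetezsa/mistral_finetued_on_scigen_v3"  # repo id from this card's metadata

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "[INST] Describe the main trend in the following table. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```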
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 16384
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 30
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.1.2
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "mistral_finetued_on_scigen_v3", "results": []}]} | moetezsa/mistral_finetued_on_scigen_v3 | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T12:05:35+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
|
# mistral_finetued_on_scigen_v3
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 16384
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 30
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.1.2
- Datasets 2.19.0
- Tokenizers 0.19.1 | [
"# mistral_finetued_on_scigen_v3\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-06\n- train_batch_size: 128\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 128\n- total_train_batch_size: 16384\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 30",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.1.2\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n",
"# mistral_finetued_on_scigen_v3\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-06\n- train_batch_size: 128\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 128\n- total_train_batch_size: 16384\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 30",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.1.2\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model details
This is a quick experiment on llamafied phi-3 with only 1000 ORPO steps on an AzureML-translated German Orca binarized dataset (johannhartmann/mistralorpo), using the original phi-3 prompt template. The immediate result is not really good, but also not bad enough to discourage further experiments.
# Benchmark results
This was an experiment on a German dataset snippet which, as expected, worsened results on English benchmarks:
| Metric |Value|
|---------------------------------|----:|
|Avg. |64.40|
|AI2 Reasoning Challenge (25-Shot)|60.41|
|HellaSwag (10-Shot) |78.37|
|MMLU (5-Shot) |65.26|
|TruthfulQA (0-shot) |49.76|
|Winogrande (5-shot) |70.24|
|GSM8k (5-shot) |62.32|
On the German EQ-Bench (v2_de) it scores 51.82 (an insignificant gain over 51.41 for the original llamafied model, but significantly better than the intermediate cstr/phi-3-orpo-v8_16, which achieved 46.38 after the initial 150 test steps), though still with only 164/171 answers correctly parsed.
Note: Parsing correctness can be improved, among other things, by only a few SFT steps, as shown with cas/phi3-mini-4k-llamafied-sft-v3 (170/171 correct, but then with a score of only 39.46 on v2_de; that run was also an experiment in changing the prompt template).
All of this was done quickly with bnb and q4 quants only, which might, in theory, significantly affect small dense models like this one in particular.
But it served its purpose for both proof-of-concept experiments at least. It would probably be easy to improve results further, but that would take some time and compute.
# Training setup
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
| {"language": ["en", "de"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "orpo"], "base_model": "cstr/phi-3-orpo-v8_16"} | cstr/phi-3-orpo-v9_16 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"orpo",
"conversational",
"en",
"de",
"base_model:cstr/phi-3-orpo-v8_16",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:05:40+00:00 | [] | [
"en",
"de"
] | TAGS
#transformers #pytorch #llama #text-generation #text-generation-inference #unsloth #trl #orpo #conversational #en #de #base_model-cstr/phi-3-orpo-v8_16 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| Model details
=============
This is a quick experiment on llamafied phi-3 with only 1000 ORPO steps on an AzureML-translated German Orca binarized dataset (johannhartmann/mistralorpo), using the original phi-3 prompt template. The immediate result is not really good, but also not bad enough to discourage further experiments.
Benchmark results
=================
This was an experiment on a German dataset snippet which, as expected, worsened results on English benchmarks:
On the German EQ-Bench (v2\_de) it scores 51.82 (an insignificant gain over 51.41 for the original llamafied model, but significantly better than the intermediate cstr/phi-3-orpo-v8\_16, which achieved 46.38 after the initial 150 test steps), though still with only 164/171 answers correctly parsed.
Note: Parsing correctness can be improved, among other things, by only a few SFT steps, as shown with cas/phi3-mini-4k-llamafied-sft-v3 (170/171 correct, but then with a score of only 39.46 on v2\_de; that run was also an experiment in changing the prompt template).
All of this was done quickly with bnb and q4 quants only, which might, in theory, significantly affect small dense models like this one in particular.
But it served its purpose for both proof-of-concept experiments at least. It would probably be easy to improve results further, but that would take some time and compute.
Training setup
==============
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
| [] | [
"TAGS\n#transformers #pytorch #llama #text-generation #text-generation-inference #unsloth #trl #orpo #conversational #en #de #base_model-cstr/phi-3-orpo-v8_16 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trained_serbian
This model is a fine-tuned version of [distilbert/distilbert-base-multilingual-cased](https://huggingface.co/distilbert/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0786
- Precision: 0.8138
- Recall: 0.8795
- F1: 0.8454
- Accuracy: 0.9774
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
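As a sketch only (not the original training script), the hyperparameters listed above map onto `TrainingArguments` roughly as follows; dataset preparation and the `Trainer` call are omitted:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="trained_serbian",      # placeholder output path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,     # yields the effective train batch size of 32
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
)
```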
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 104 | 0.1030 | 0.7317 | 0.7706 | 0.7507 | 0.9667 |
| No log | 2.0 | 208 | 0.0814 | 0.8001 | 0.8642 | 0.8309 | 0.9757 |
| No log | 3.0 | 312 | 0.0786 | 0.8138 | 0.8795 | 0.8454 | 0.9774 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "distilbert/distilbert-base-multilingual-cased", "model-index": [{"name": "trained_serbian", "results": []}]} | annamariagnat/trained_serbian | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:05:59+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #distilbert #token-classification #generated_from_trainer #base_model-distilbert/distilbert-base-multilingual-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| trained\_serbian
================
This model is a fine-tuned version of distilbert/distilbert-base-multilingual-cased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0786
* Precision: 0.8138
* Recall: 0.8795
* F1: 0.8454
* Accuracy: 0.9774
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.1.2+cpu
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.1.2+cpu\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #distilbert #token-classification #generated_from_trainer #base_model-distilbert/distilbert-base-multilingual-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.1.2+cpu\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
## Exllama v2 Quantizations of Llama-3-8B-Ultra-Instruct
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.20">turboderp's ExLlamaV2 v0.0.20</a> for quantization.
<b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: https://huggingface.co/elinas/Llama-3-8B-Ultra-Instruct
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Available sizes
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (8K) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-exl2/tree/8_0) | 8.0 | 8.0 | 10.1 GB | 10.5 GB | 11.5 GB | 13.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-exl2/tree/6_5) | 6.5 | 8.0 | 8.9 GB | 9.3 GB | 10.3 GB | 12.4 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-exl2/tree/5_0) | 5.0 | 6.0 | 7.7 GB | 8.1 GB | 9.1 GB | 11.2 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-exl2/tree/4_25) | 4.25 | 6.0 | 7.0 GB | 7.4 GB | 8.4 GB | 10.5 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-exl2/tree/3_5) | 3.5 | 6.0 | 6.4 GB | 6.8 GB | 7.8 GB | 9.9 GB | Lower quality, only use if you have to. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-exl2 Llama-3-8B-Ultra-Instruct-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch:
Linux:
```shell
huggingface-cli download bartowski/Llama-3-8B-Ultra-Instruct-exl2 --revision 6_5 --local-dir Llama-3-8B-Ultra-Instruct-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
huggingface-cli download bartowski/Llama-3-8B-Ultra-Instruct-exl2 --revision 6_5 --local-dir Llama-3-8B-Ultra-Instruct-exl2-6.5 --local-dir-use-symlinks False
```
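The same branch can also be fetched from Python with `huggingface_hub` (a sketch mirroring the CLI examples above; the target folder name is arbitrary):

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/Llama-3-8B-Ultra-Instruct-exl2",
    revision="6_5",                                     # pick the bpw branch you want
    local_dir="Llama-3-8B-Ultra-Instruct-exl2-6_5",
)
```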
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
| {"license": "llama3", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Undi95/Meta-Llama-3-8B-Instruct-hf"], "quantized_by": "bartowski", "pipeline_tag": "text-generation"} | bartowski/Llama-3-8B-Ultra-Instruct-exl2 | null | [
"transformers",
"mergekit",
"merge",
"text-generation",
"base_model:Undi95/Meta-Llama-3-8B-Instruct-hf",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:06:43+00:00 | [] | [] | TAGS
#transformers #mergekit #merge #text-generation #base_model-Undi95/Meta-Llama-3-8B-Instruct-hf #license-llama3 #endpoints_compatible #region-us
| Exllama v2 Quantizations of Llama-3-8B-Ultra-Instruct
-----------------------------------------------------
Using <a href="URL ExLlamaV2 v0.0.20 for quantization.
**The "main" branch only contains the URL, download one of the other branches for the model (see below)**
Each branch contains an individual bits-per-weight quantization, with the main branch containing only the URL for further conversions.
Original model: URL
Prompt format
-------------
Available sizes
---------------
Download instructions
---------------------
With git:
With huggingface hub (credit to TheBloke for instructions):
To download a specific branch, use the '--revision' parameter. For example, to download the 6.5 bpw branch:
Linux:
Windows (which apparently doesn't like \_ in folders sometimes?):
Want to support my work? Visit my ko-fi page here: URL
| [] | [
"TAGS\n#transformers #mergekit #merge #text-generation #base_model-Undi95/Meta-Llama-3-8B-Instruct-hf #license-llama3 #endpoints_compatible #region-us \n"
] |
null | transformers | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/elinas/Llama-3-8B-Ultra-Instruct
You should use `--override-kv tokenizer.ggml.pre=str:llama3` and a current llama.cpp version to work around a bug in the llama.cpp version that produced these quants (see https://old.reddit.com/r/LocalLLaMA/comments/1cg0z1i/bpe_pretokenization_support_is_now_merged_llamacpp/?share_id=5dBFB9x0cOJi8vbr-Murh).
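For example (a sketch only — the quant filename and prompt are placeholders, and the binary is called `main` in older llama.cpp builds and `llama-cli` in newer ones):

```shell
./llama-cli -m Llama-3-8B-Ultra-Instruct.Q4_K_M.gguf \
  --override-kv tokenizer.ggml.pre=str:llama3 \
  -p "Write a haiku about quantization." -n 128
```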
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "elinas/Llama-3-8B-Ultra-Instruct", "quantized_by": "mradermacher"} | mradermacher/Llama-3-8B-Ultra-Instruct-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:elinas/Llama-3-8B-Ultra-Instruct",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:07:34+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #mergekit #merge #en #base_model-elinas/Llama-3-8B-Ultra-Instruct #license-llama3 #endpoints_compatible #region-us
| About
-----
static quants of URL
You should use '--override-kv URL=str:llama3' and a current URL version to work around a bug in URL that made these quants. (see URL
weighted/imatrix quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #mergekit #merge #en #base_model-elinas/Llama-3-8B-Ultra-Instruct #license-llama3 #endpoints_compatible #region-us \n"
] |
null | transformers |
# Uploaded model
- **Developed by:** CarlosFersoft
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | CarlosFersoft/GPBusiness0001_LoRA | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:08:06+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: CarlosFersoft
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: CarlosFersoft\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: CarlosFersoft\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation | transformers | Quantizations of https://huggingface.co/PygmalionAI/mythalion-13b
# From original readme
## Prompting
This model can be prompted using both the Alpaca and [Pygmalion formatting](https://huggingface.co/PygmalionAI/pygmalion-2-13b#prompting).
**Alpaca formatting**:
```
### Instruction:
<prompt>
### Response:
<leave a newline blank for model to respond>
```
**Pygmalion/Metharme formatting**:
```
<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows:
{{persona}}
You shall reply to the user while staying in character, and generate long responses.
<|user|>Hello!<|model|>{model's response goes here}
```
The model has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input.
The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to
form a conversation history. | {"language": ["en"], "license": "other", "tags": ["transformers", "gguf", "imatrix", "mythalion-13b"], "pipeline_tag": "text-generation", "inference": false} | duyntnet/mythalion-13b-imatrix-GGUF | null | [
"transformers",
"gguf",
"imatrix",
"mythalion-13b",
"text-generation",
"en",
"license:other",
"region:us"
] | null | 2024-04-30T12:08:24+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #imatrix #mythalion-13b #text-generation #en #license-other #region-us
| Quantizations of URL
# From original readme
## Prompting
This model can be prompted using both the Alpaca and Pygmalion formatting.
Alpaca formatting:
Pygmalion/Metharme formatting:
The model has been trained on prompts using three different roles, which are denoted by the following tokens: '<|system|>', '<|user|>' and '<|model|>'.
The '<|system|>' prompt can be used to inject out-of-channel information behind the scenes, while the '<|user|>' prompt should be used to indicate user input.
The '<|model|>' token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to
form a conversation history. | [
"# From original readme",
"## Prompting\n\nThis model can be prompted using both the Alpaca and Pygmalion formatting.\n\nAlpaca formatting:\n\n\nPygmalion/Metharme formatting:\n\n\n\nThe model has been trained on prompts using three different roles, which are denoted by the following tokens: '<|system|>', '<|user|>' and '<|model|>'.\n\nThe '<|system|>' prompt can be used to inject out-of-channel information behind the scenes, while the '<|user|>' prompt should be used to indicate user input.\nThe '<|model|>' token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to\nform a conversation history."
] | [
"TAGS\n#transformers #gguf #imatrix #mythalion-13b #text-generation #en #license-other #region-us \n",
"# From original readme",
"## Prompting\n\nThis model can be prompted using both the Alpaca and Pygmalion formatting.\n\nAlpaca formatting:\n\n\nPygmalion/Metharme formatting:\n\n\n\nThe model has been trained on prompts using three different roles, which are denoted by the following tokens: '<|system|>', '<|user|>' and '<|model|>'.\n\nThe '<|system|>' prompt can be used to inject out-of-channel information behind the scenes, while the '<|user|>' prompt should be used to indicate user input.\nThe '<|model|>' token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to\nform a conversation history."
] |
null | transformers | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Minbyul/meditron-7b-dpo-full-sft-wo-kqa_golden
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
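As a rough sketch (not part of the original card), one way to fetch a single quant file from this repo and run it locally is with `huggingface_hub` and `llama-cpp-python`; the filename below is the Q4_K_M entry from the table that follows, and the prompt is only an illustration:

```python
# Illustrative sketch: download one of the quant files listed below and load it
# with llama-cpp-python (pip install huggingface_hub llama-cpp-python).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF",
    filename="meditron-7b-dpo-full-sft-wo-kqa_golden.Q4_K_M.gguf",  # "fast, recommended" in the table
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Question: What is hypertension?\nAnswer:", max_tokens=128)
print(out["choices"][0]["text"])
```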
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.IQ3_XS.gguf) | IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.IQ3_M.gguf) | IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "llama2", "library_name": "transformers", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "Minbyul/meditron-7b-dpo-full-sft-wo-kqa_golden", "quantized_by": "mradermacher"} | mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF | null | [
"transformers",
"gguf",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"en",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:Minbyul/meditron-7b-dpo-full-sft-wo-kqa_golden",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:09:16+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #alignment-handbook #trl #dpo #generated_from_trainer #en #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-Minbyul/meditron-7b-dpo-full-sft-wo-kqa_golden #license-llama2 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #alignment-handbook #trl #dpo #generated_from_trainer #en #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-Minbyul/meditron-7b-dpo-full-sft-wo-kqa_golden #license-llama2 #endpoints_compatible #region-us \n"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 | {"library_name": "peft", "base_model": "unsloth/llama-3-8b"} | sravaniayyagari/lora_model | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/llama-3-8b",
"region:us"
] | null | 2024-04-30T12:09:20+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-unsloth/llama-3-8b #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.7.1 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.7.1"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-unsloth/llama-3-8b #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.7.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llava-1.5-7b-hf-ft-museum-updated
This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf) on The Metropolitan
Museum of Art (https://www.metmuseum.org/art/collection) dataset.
## Model description
This project is designed to enrich museum experiences by enabling users to explore and comprehend museum artifacts through image-based queries.
## Intended uses & limitations
**Intended uses:**
1. Enhance museum visitor experiences through interactive exploration via image-based queries.
2. Aid researchers and curators in retrieving information from museum artifacts.
3. Democratize access to museum knowledge by catering to a broader audience.
**Limitation:**
The model is fine-tuned on a subset of data from the Met's collection that pertains to Indian heritage.
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.4e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.19.1 | {"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "llava-hf/llava-1.5-7b-hf", "model-index": [{"name": "llava-1.5-7b-hf-ft-museum-updated", "results": []}]} | somnathsingh31/llava-1.5-7b-hf-ft-museum | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:llava-hf/llava-1.5-7b-hf",
"has_space",
"region:us"
] | null | 2024-04-30T12:09:56+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-llava-hf/llava-1.5-7b-hf #has_space #region-us
|
# llava-1.5-7b-hf-ft-museum-updated
This model is a fine-tuned version of llava-hf/llava-1.5-7b-hf on The Metropolitan
Museum of Art (URL) dataset.
## Model description
This project is designed to enrich museum experiences by enabling users to explore and comprehend museum artifacts through image-based queries.
## Intended uses & limitations
Intended uses:
1. Enhance museum visitor experiences through interactive exploration via image-based queries.
2. Aid researchers and curators in retrieving information from museum artifacts.
3. Democratize access to museum knowledge by catering to a broader audience.
Limitation:
The model is fine-tuned on a subset of data from the Met's collection that pertains to Indian heritage.
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.4e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.19.1 | [
"# llava-1.5-7b-hf-ft-museum-updated\n\nThis model is a fine-tuned version of llava-hf/llava-1.5-7b-hf on the The Metropolitan\nMuseum of Art(URL dataset.",
"## Model description\n\nThis project designed to enrich museum experiences by enabling users to explore and comprehend museum artifacts through image-based queries.",
"## Intended uses & limitations\nIntended uses:\n1. Enhance museum visitor experiences through interactive exploration via image-based queries.\n2. Aid researchers and curators in retrieving information from museum artifacts.\n3. Democratize access to museum knowledge by catering to a broader audience.\n\nLimitation:\nThe model is fine tuned on a subset of data from the Met's collection that pertains to Indian heritage.",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.4e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-llava-hf/llava-1.5-7b-hf #has_space #region-us \n",
"# llava-1.5-7b-hf-ft-museum-updated\n\nThis model is a fine-tuned version of llava-hf/llava-1.5-7b-hf on the The Metropolitan\nMuseum of Art(URL dataset.",
"## Model description\n\nThis project designed to enrich museum experiences by enabling users to explore and comprehend museum artifacts through image-based queries.",
"## Intended uses & limitations\nIntended uses:\n1. Enhance museum visitor experiences through interactive exploration via image-based queries.\n2. Aid researchers and curators in retrieving information from museum artifacts.\n3. Democratize access to museum knowledge by catering to a broader audience.\n\nLimitation:\nThe model is fine tuned on a subset of data from the Met's collection that pertains to Indian heritage.",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.4e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.19.1"
] |
feature-extraction | transformers |
# phospho-small
This is a SetFit model that can be used for Text Classification on CPU.
The model has been trained using an efficient few-shot learning technique.
## Usage
```python
from setfit import SetFitModel
model = SetFitModel.from_pretrained("phospho-small-2502093")
outputs = model.predict(["This is a sentence to classify", "Another sentence"])
# tensor([1, 0])
```
## References
This work was possible thanks to the SetFit library and the work of:
Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren (2022). Efficient Few-Shot Learning Without Prompts.
ArXiv: [https://doi.org/10.48550/arxiv.2209.11055](https://doi.org/10.48550/arxiv.2209.11055)
| {"language": "en", "license": "apache-2.0"} | phospho-app/phospho-small-2502093 | null | [
"transformers",
"safetensors",
"mpnet",
"feature-extraction",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:10:05+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #mpnet #feature-extraction #en #license-apache-2.0 #endpoints_compatible #region-us
|
# phospho-small
This is a SetFit model that can be used for Text Classification on CPU.
The model has been trained using an efficient few-shot learning technique.
## Usage
## References
This work was possible thanks to the SetFit library and the work of:
Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren (2022). Efficient Few-Shot Learning Without Prompts.
ArXiv: URL
| [
"# phospho-small\n\nThis is a SetFit model that can be used for Text Classification on CPU.\n\nThe model has been trained using an efficient few-shot learning technique.",
"## Usage",
"## References\n\nThis work was possible thanks to the SetFit library and the work of:\n\nTunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren (2022). Efficient Few-Shot Learning Without Prompts. \n\nArXiv: URL"
] | [
"TAGS\n#transformers #safetensors #mpnet #feature-extraction #en #license-apache-2.0 #endpoints_compatible #region-us \n",
"# phospho-small\n\nThis is a SetFit model that can be used for Text Classification on CPU.\n\nThe model has been trained using an efficient few-shot learning technique.",
"## Usage",
"## References\n\nThis work was possible thanks to the SetFit library and the work of:\n\nTunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren (2022). Efficient Few-Shot Learning Without Prompts. \n\nArXiv: URL"
] |
feature-extraction | transformers |
# phospho-small
This is a SetFit model that can be used for Text Classification on CPU.
The model has been trained using an efficient few-shot learning technique.
## Usage
```python
from setfit import SetFitModel
model = SetFitModel.from_pretrained("phospho-small-bcfa83c")
outputs = model.predict(["This is a sentence to classify", "Another sentence"])
# tensor([1, 0])
```
## References
This work was possible thanks to the SetFit library and the work of:
Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren (2022). Efficient Few-Shot Learning Without Prompts.
ArXiv: [https://doi.org/10.48550/arxiv.2209.11055](https://doi.org/10.48550/arxiv.2209.11055)
| {"language": "en", "license": "apache-2.0"} | phospho-app/phospho-small-bcfa83c | null | [
"transformers",
"safetensors",
"mpnet",
"feature-extraction",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:10:16+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #mpnet #feature-extraction #en #license-apache-2.0 #endpoints_compatible #region-us
|
# phospho-small
This is a SetFit model that can be used for Text Classification on CPU.
The model has been trained using an efficient few-shot learning technique.
## Usage
## References
This work was possible thanks to the SetFit library and the work of:
Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren (2022). Efficient Few-Shot Learning Without Prompts.
ArXiv: URL
| [
"# phospho-small\n\nThis is a SetFit model that can be used for Text Classification on CPU.\n\nThe model has been trained using an efficient few-shot learning technique.",
"## Usage",
"## References\n\nThis work was possible thanks to the SetFit library and the work of:\n\nTunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren (2022). Efficient Few-Shot Learning Without Prompts. \n\nArXiv: URL"
] | [
"TAGS\n#transformers #safetensors #mpnet #feature-extraction #en #license-apache-2.0 #endpoints_compatible #region-us \n",
"# phospho-small\n\nThis is a SetFit model that can be used for Text Classification on CPU.\n\nThe model has been trained using an efficient few-shot learning technique.",
"## Usage",
"## References\n\nThis work was possible thanks to the SetFit library and the work of:\n\nTunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren (2022). Efficient Few-Shot Learning Without Prompts. \n\nArXiv: URL"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** cstr
- **License:** apache-2.0
- **Finetuned from model :** cstr/phi-3-orpo-v8_16
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "orpo"], "base_model": "cstr/phi-3-orpo-v8_16"} | cstr/phi-3-orpo-v9_4 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"orpo",
"conversational",
"en",
"base_model:cstr/phi-3-orpo-v8_16",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] | null | 2024-04-30T12:13:31+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #orpo #conversational #en #base_model-cstr/phi-3-orpo-v8_16 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us
|
# Uploaded model
- Developed by: cstr
- License: apache-2.0
- Finetuned from model : cstr/phi-3-orpo-v8_16
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: cstr\n- License: apache-2.0\n- Finetuned from model : cstr/phi-3-orpo-v8_16\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #orpo #conversational #en #base_model-cstr/phi-3-orpo-v8_16 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n",
"# Uploaded model\n\n- Developed by: cstr\n- License: apache-2.0\n- Finetuned from model : cstr/phi-3-orpo-v8_16\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Narkantak/phi3-Intent-entity-Classifier-Ashu | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:13:54+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# working
This model is a fine-tuned version of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6374
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 30
- mixed_precision_training: Native AMP
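
As a hypothetical illustration (the card does not include the training script), these settings roughly correspond to the following 🤗 `TrainingArguments`; the output directory is an assumption:

```python
# Hypothetical sketch only: the hyperparameters above expressed as TrainingArguments.
# Anything not listed above (e.g. output_dir) is an illustrative assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="working",                # assumption
    learning_rate=2e-4,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    gradient_accumulation_steps=4,       # effective batch size of 24
    warmup_steps=2,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    fp16=True,                           # mixed precision (native AMP)
    seed=42,
    # Adam with betas=(0.9,0.999) and epsilon=1e-08 matches the default optimizer.
)
```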
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6546 | 0.92 | 6 | 1.7189 |
| 1.2076 | 2.0 | 13 | 0.8973 |
| 0.7157 | 2.92 | 19 | 0.5511 |
| 0.4138 | 4.0 | 26 | 0.4499 |
| 0.4018 | 4.92 | 32 | 0.4044 |
| 0.3034 | 6.0 | 39 | 0.3793 |
| 0.3186 | 6.92 | 45 | 0.3645 |
| 0.2451 | 8.0 | 52 | 0.3590 |
| 0.2556 | 8.92 | 58 | 0.3660 |
| 0.1937 | 10.0 | 65 | 0.3825 |
| 0.1993 | 10.92 | 71 | 0.3782 |
| 0.1511 | 12.0 | 78 | 0.4275 |
| 0.1487 | 12.92 | 84 | 0.4234 |
| 0.1098 | 14.0 | 91 | 0.4876 |
| 0.1121 | 14.92 | 97 | 0.4675 |
| 0.0846 | 16.0 | 104 | 0.5187 |
| 0.0869 | 16.92 | 110 | 0.5365 |
| 0.0677 | 18.0 | 117 | 0.5372 |
| 0.0729 | 18.92 | 123 | 0.5639 |
| 0.0587 | 20.0 | 130 | 0.5773 |
| 0.0623 | 20.92 | 136 | 0.6006 |
| 0.0524 | 22.0 | 143 | 0.6098 |
| 0.0599 | 22.92 | 149 | 0.6101 |
| 0.0495 | 24.0 | 156 | 0.6204 |
| 0.0571 | 24.92 | 162 | 0.6297 |
| 0.0475 | 26.0 | 169 | 0.6353 |
| 0.0551 | 26.92 | 175 | 0.6374 |
| 0.0455 | 27.69 | 180 | 0.6374 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "mit", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "microsoft/Phi-3-mini-128k-instruct", "model-index": [{"name": "working", "results": []}]} | Narkantak/working | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"license:mit",
"region:us"
] | null | 2024-04-30T12:13:56+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-microsoft/Phi-3-mini-128k-instruct #license-mit #region-us
| working
=======
This model is a fine-tuned version of microsoft/Phi-3-mini-128k-instruct on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6374
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 6
* eval\_batch\_size: 6
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 24
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.3
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 6\n* eval\\_batch\\_size: 6\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 24\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-microsoft/Phi-3-mini-128k-instruct #license-mit #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 6\n* eval\\_batch\\_size: 6\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 24\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
image-to-3d | fasttext | # Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"language": ["fr", "en"], "license": "apache-2.0", "library_name": "fasttext", "tags": ["biology"], "datasets": ["HuggingFaceFW/fineweb"], "metrics": ["charcut_mt"], "pipeline_tag": "image-to-3d"} | hjebuoebduede/BRICK | null | [
"fasttext",
"biology",
"image-to-3d",
"fr",
"en",
"dataset:HuggingFaceFW/fineweb",
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T12:14:43+00:00 | [
"1910.09700"
] | [
"fr",
"en"
] | TAGS
#fasttext #biology #image-to-3d #fr #en #dataset-HuggingFaceFW/fineweb #arxiv-1910.09700 #license-apache-2.0 #region-us
| # Model Card for Model ID
This modelcard aims to be a base template for new models. It has been generated using this raw template.
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#fasttext #biology #image-to-3d #fr #en #dataset-HuggingFaceFW/fineweb #arxiv-1910.09700 #license-apache-2.0 #region-us \n",
"# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | jiuhai/llama-3-1425 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T12:14:50+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
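The original card leaves this section empty. As a rough, unofficial sketch, and assuming the fine-tuned checkpoint keeps the standard transformers VITS (MMS-TTS) interface that the repository tags (`vits`, `transformers`) suggest, loading it might look like this:

```python
# Rough sketch only; assumes the checkpoint follows the standard transformers
# VITS (MMS-TTS) interface implied by the repository tags.
import torch
from transformers import VitsModel, AutoTokenizer

repo_id = "oyemade/mms-tts-yor-fine-tuned"
model = VitsModel.from_pretrained(repo_id)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

inputs = tokenizer("Bawo ni", return_tensors="pt")  # short Yoruba greeting
with torch.no_grad():
    waveform = model(**inputs).waveform  # shape (batch, num_samples)
# Audio is generated at model.config.sampling_rate samples per second.
```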
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"language": ["yo"], "library_name": "transformers"} | oyemade/mms-tts-yor-fine-tuned | null | [
"transformers",
"safetensors",
"vits",
"yo",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:17:47+00:00 | [
"1910.09700"
] | [
"yo"
] | TAGS
#transformers #safetensors #vits #yo #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #vits #yo #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.4074123501777649
f1_macro: 0.8845669198610375
f1_micro: 0.8388888888888889
f1_weighted: 0.8391621032797504
precision_macro: 0.884004884004884
precision_micro: 0.8388888888888889
precision_weighted: 0.8399267399267399
recall_macro: 0.88548463356974
recall_micro: 0.8388888888888889
recall_weighted: 0.8388888888888889
accuracy: 0.8388888888888889
trained with
train: df_1to1000_bewertet_train_and_postitiv_TRAIN_HF
val: df_1too1200_val_HF
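As a minimal, unofficial sketch, the exported checkpoint can be queried with the transformers pipeline; the repository id is taken from this record, and the label names come from the training data, which is not documented here:

```python
# Minimal sketch; assumes the checkpoint is accessible under the id below.
from transformers import pipeline

classifier = pipeline("text-classification", model="Zerithas/post-auto-v4")
print(classifier("I love AutoTrain"))  # returns the predicted label and its score
```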
| {"tags": ["autotrain", "text-classification"], "datasets": ["post-auto-v4/autotrain-data"], "widget": [{"text": "I love AutoTrain"}]} | Zerithas/post-auto-v4 | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"autotrain",
"dataset:post-auto-v4/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:18:46+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #bert #text-classification #autotrain #dataset-post-auto-v4/autotrain-data #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.4074123501777649
f1_macro: 0.8845669198610375
f1_micro: 0.8388888888888889
f1_weighted: 0.8391621032797504
precision_macro: 0.884004884004884
precision_micro: 0.8388888888888889
precision_weighted: 0.8399267399267399
recall_macro: 0.88548463356974
recall_micro: 0.8388888888888889
recall_weighted: 0.8388888888888889
accuracy: 0.8388888888888889
trained with
train: df_1to1000_bewertet_train_and_postitiv_TRAIN_HF
val: df_1too1200_val_HF
| [
"# Model Trained Using AutoTrain\n\n- Problem type: Text Classification",
"## Validation Metrics\nloss: 0.4074123501777649\n\nf1_macro: 0.8845669198610375\n\nf1_micro: 0.8388888888888889\n\nf1_weighted: 0.8391621032797504\n\nprecision_macro: 0.884004884004884\n\nprecision_micro: 0.8388888888888889\n\nprecision_weighted: 0.8399267399267399\n\nrecall_macro: 0.88548463356974\n\nrecall_micro: 0.8388888888888889\n\nrecall_weighted: 0.8388888888888889\n\naccuracy: 0.8388888888888889\n\n\ntrained with\ntrain: df_1to1000_bewertet_train_and_postitiv_TRAIN_HF\n val: df_1too1200_val_HF"
] | [
"TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #autotrain #dataset-post-auto-v4/autotrain-data #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoTrain\n\n- Problem type: Text Classification",
"## Validation Metrics\nloss: 0.4074123501777649\n\nf1_macro: 0.8845669198610375\n\nf1_micro: 0.8388888888888889\n\nf1_weighted: 0.8391621032797504\n\nprecision_macro: 0.884004884004884\n\nprecision_micro: 0.8388888888888889\n\nprecision_weighted: 0.8399267399267399\n\nrecall_macro: 0.88548463356974\n\nrecall_micro: 0.8388888888888889\n\nrecall_weighted: 0.8388888888888889\n\naccuracy: 0.8388888888888889\n\n\ntrained with\ntrain: df_1to1000_bewertet_train_and_postitiv_TRAIN_HF\n val: df_1too1200_val_HF"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | OwOOwO/finalnew3 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:20:24+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | EldritchLavender/SponkiTwo | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T12:20:32+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/AwanLLM/Llama-3-8B-Cumulus-v0.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
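For a quick local test, a minimal sketch using the llama-cpp-python bindings is shown below; the file name assumes the Q4_K_M quant from the table that follows, but any of the provided files works the same way:

```python
# Minimal sketch; assumes llama-cpp-python is installed and the GGUF file
# has already been downloaded next to this script.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3-8B-Cumulus-v0.1.Q4_K_M.gguf",  # local path to the quant
    n_ctx=4096,  # context window, adjust to available memory
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly introduce yourself."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```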
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "llama3", "library_name": "transformers", "base_model": "AwanLLM/Llama-3-8B-Cumulus-v0.1", "quantized_by": "mradermacher"} | mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:AwanLLM/Llama-3-8B-Cumulus-v0.1",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:21:57+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #base_model-AwanLLM/Llama-3-8B-Cumulus-v0.1 #license-llama3 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #base_model-AwanLLM/Llama-3-8B-Cumulus-v0.1 #license-llama3 #endpoints_compatible #region-us \n"
] |
null | null |
## Example of [Safetensors](https://huggingface.co/docs/safetensors) model file tracked with [git-theta](https://github.com/r-three/git-theta)

| {"license": "apache-2.0", "tags": ["git-theta"]} | julien-c/git-theta-safetensors | null | [
"safetensors",
"git-theta",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T12:22:35+00:00 | [] | [] | TAGS
#safetensors #git-theta #license-apache-2.0 #region-us
|
## Example of Safetensors model file tracked with git-theta
!URL
| [
"## Example of Safetensors model file tracked with git-theta\n\n!URL"
] | [
"TAGS\n#safetensors #git-theta #license-apache-2.0 #region-us \n",
"## Example of Safetensors model file tracked with git-theta\n\n!URL"
] |
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - zhenjili/cat_dreambooth_lora_xl
<Gallery />
## Model description
These are zhenjili/cat_dreambooth_lora_xl LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks cat to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](zhenjili/cat_dreambooth_lora_xl/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
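
Until the official snippet is added, here is a minimal sketch using the standard diffusers SDXL + LoRA loading path (the fp16 and CUDA settings are assumptions about the runtime):

```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# The card notes training used the fp16-fixed SDXL VAE, so load it explicitly.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Load the DreamBooth LoRA weights from this repository.
pipe.load_lora_weights("zhenjili/cat_dreambooth_lora_xl")

# Use the trigger phrase documented above.
image = pipe("a photo of sks cat", num_inference_steps=30).images[0]
image.save("sks_cat.png")
```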
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a photo of sks cat", "widget": []} | zhenjili/cat_dreambooth_lora_xl | null | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | null | 2024-04-30T12:22:39+00:00 | [] | [] | TAGS
#diffusers #text-to-image #diffusers-training #lora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
|
# SDXL LoRA DreamBooth - zhenjili/cat_dreambooth_lora_xl
<Gallery />
## Model description
These are zhenjili/cat_dreambooth_lora_xl LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using DreamBooth.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks cat to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | [
"# SDXL LoRA DreamBooth - zhenjili/cat_dreambooth_lora_xl\n\n<Gallery />",
"## Model description\n\nThese are zhenjili/cat_dreambooth_lora_xl LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.",
"## Trigger words\n\nYou should use a photo of sks cat to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] | [
"TAGS\n#diffusers #text-to-image #diffusers-training #lora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n",
"# SDXL LoRA DreamBooth - zhenjili/cat_dreambooth_lora_xl\n\n<Gallery />",
"## Model description\n\nThese are zhenjili/cat_dreambooth_lora_xl LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.",
"## Trigger words\n\nYou should use a photo of sks cat to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLMv2-L6-H384_R-simplewiki-fineweb-100k_en-med_512-vN
This model is a fine-tuned version of [pszemraj/MiniLMv2-L6-H384_R-simplewiki](https://huggingface.co/pszemraj/MiniLMv2-L6-H384_R-simplewiki) on the BEE-spoke-data/fineweb-100k_en-med dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0206
- Accuracy: 0.3783
- Num Input Tokens Seen: 162790400
## Model description
More information needed
## Intended uses & limitations
More information needed
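In the meantime, the checkpoint can be exercised as an ordinary masked-language model; a minimal sketch with the transformers fill-mask pipeline (the prompt is purely illustrative):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="pszemraj/MiniLMv2-L6-H384_R-fineweb-100k")

# Build a prompt around the tokenizer's own mask token (<mask> for this RoBERTa-style model).
prompt = f"The largest city in England is {fill.tokenizer.mask_token}."
for candidate in fill(prompt, top_k=5):
    print(f"{candidate['token_str']!r}  score={candidate['score']:.3f}")
```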
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1792
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-07
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_steps: 100
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:--------:|:-----------------:|
| 4.6583 | 0.1208 | 150 | 4.5052 | 0.3406 | 9830400 |
| 4.5365 | 0.2415 | 300 | 4.3712 | 0.3525 | 19660800 |
| 4.4621 | 0.3623 | 450 | 4.2810 | 0.3575 | 29491200 |
| 4.4116 | 0.4831 | 600 | 4.2466 | 0.3615 | 39321600 |
| 4.3487 | 0.6038 | 750 | 4.1795 | 0.3661 | 49152000 |
| 4.338 | 0.7246 | 900 | 4.1874 | 0.3663 | 58982400 |
| 4.342 | 0.8454 | 1050 | 4.1475 | 0.3695 | 68812800 |
| 4.268 | 0.9661 | 1200 | 4.1215 | 0.3714 | 78643200 |
| 4.2185 | 1.0869 | 1350 | 4.1032 | 0.3725 | 88472576 |
| 4.2645 | 1.2077 | 1500 | 4.0859 | 0.3757 | 98302976 |
| 4.2542 | 1.3284 | 1650 | 4.0730 | 0.3750 | 108133376 |
| 4.2614 | 1.4492 | 1800 | 4.0682 | 0.3749 | 117963776 |
| 4.1928 | 1.5700 | 1950 | 4.0596 | 0.3758 | 127794176 |
| 4.1971 | 1.6907 | 2100 | 4.0505 | 0.3777 | 137624576 |
| 4.1966 | 1.8115 | 2250 | 4.0163 | 0.3787 | 147454976 |
| 4.16 | 1.9323 | 2400 | 4.0352 | 0.3774 | 157285376 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["BEE-spoke-data/fineweb-100k_en-med"], "metrics": ["accuracy"], "base_model": "pszemraj/MiniLMv2-L6-H384_R-simplewiki"} | pszemraj/MiniLMv2-L6-H384_R-fineweb-100k | null | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"en",
"dataset:BEE-spoke-data/fineweb-100k_en-med",
"base_model:pszemraj/MiniLMv2-L6-H384_R-simplewiki",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:24:11+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #roberta #fill-mask #generated_from_trainer #en #dataset-BEE-spoke-data/fineweb-100k_en-med #base_model-pszemraj/MiniLMv2-L6-H384_R-simplewiki #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| MiniLMv2-L6-H384\_R-simplewiki-fineweb-100k\_en-med\_512-vN
===========================================================
This model is a fine-tuned version of pszemraj/MiniLMv2-L6-H384\_R-simplewiki on the BEE-spoke-data/fineweb-100k\_en-med dataset.
It achieves the following results on the evaluation set:
* Loss: 4.0206
* Accuracy: 0.3783
* Num Input Tokens Seen: 162790400
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 8e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 1792
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-07
* lr\_scheduler\_type: inverse\_sqrt
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 2.0
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.3.0+cu118
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 8e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 1792\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-07\n* lr\\_scheduler\\_type: inverse\\_sqrt\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 2.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0+cu118\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #roberta #fill-mask #generated_from_trainer #en #dataset-BEE-spoke-data/fineweb-100k_en-med #base_model-pszemraj/MiniLMv2-L6-H384_R-simplewiki #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 8e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 1792\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-07\n* lr\\_scheduler\\_type: inverse\\_sqrt\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 2.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0+cu118\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth"]} | mohammedriza-rahman/unsloth_techzonebot | null | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:24:11+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
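
A minimal sketch of loading the checkpoint with the standard transformers causal-LM API (dtype, device placement and the chat-template call are assumptions about the setup):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ahjeong/dpo_gemma_7b_bf16_lr5e-7_origindset_default_kl0.01-epoch1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# The repo is tagged "conversational", so format the prompt with the chat template.
messages = [{"role": "user", "content": "Summarize what direct preference optimization does."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```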
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Ahjeong/dpo_gemma_7b_bf16_lr5e-7_origindset_default_kl0.01-epoch1 | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T12:25:42+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Uploaded model
- **Developed by:** mohammedriza-rahman
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
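
A minimal sketch of loading this adapter for inference with Unsloth (4-bit loading and the sequence length are assumptions; a plain transformers + PEFT setup should also work):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mohammedriza-rahman/lora_model",  # this adapter, built on unsloth/llama-3-8b-bnb-4bit
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch Unsloth into its faster inference mode

inputs = tokenizer(["Explain what LoRA fine-tuning changes in a model."], return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```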
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | mohammedriza-rahman/lora_model | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:26:55+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: mohammedriza-rahman
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: mohammedriza-rahman\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: mohammedriza-rahman\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-plm-nsp-100000
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3012
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.1655 | 1.0 | 782 | 0.1216 |
| 0.0776 | 2.0 | 1564 | 0.1362 |
| 0.0523 | 3.0 | 2346 | 0.1356 |
| 0.0394 | 4.0 | 3128 | 0.1604 |
| 0.0338 | 5.0 | 3910 | 0.1738 |
| 0.0315 | 6.0 | 4692 | 0.2066 |
| 0.0305 | 7.0 | 5474 | 0.1757 |
| 0.0267 | 8.0 | 6256 | 0.2052 |
| 0.0228 | 9.0 | 7038 | 0.2014 |
| 0.0206 | 10.0 | 7820 | 0.2132 |
| 0.0184 | 11.0 | 8602 | 0.1986 |
| 0.0183 | 12.0 | 9384 | 0.2099 |
| 0.0171 | 13.0 | 10166 | 0.2311 |
| 0.0157 | 14.0 | 10948 | 0.2681 |
| 0.0141 | 15.0 | 11730 | 0.2320 |
| 0.0132 | 16.0 | 12512 | 0.3015 |
| 0.0128 | 17.0 | 13294 | 0.2730 |
| 0.012 | 18.0 | 14076 | 0.3014 |
| 0.0119 | 19.0 | 14858 | 0.3021 |
| 0.0117 | 20.0 | 15640 | 0.3012 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "roberta-large", "model-index": [{"name": "roberta-large-plm-nsp-100000", "results": []}]} | mhr2004/roberta-large-plm-nsp-100000 | null | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:27:00+00:00 | [] | [] | TAGS
#transformers #safetensors #roberta #text-classification #generated_from_trainer #base_model-roberta-large #license-mit #autotrain_compatible #endpoints_compatible #region-us
| roberta-large-plm-nsp-100000
============================
This model is a fine-tuned version of roberta-large on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3012
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 256
* eval\_batch\_size: 256
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 20
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.3.0+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 256\n* eval\\_batch\\_size: 256\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #roberta #text-classification #generated_from_trainer #base_model-roberta-large #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 256\n* eval\\_batch\\_size: 256\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cdp-multi-classifier-weighted
This model is a fine-tuned version of [alex-miller/ODABert](https://huggingface.co/alex-miller/ODABert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8564
- Accuracy: 0.9716
- F1: 0.8484
- Precision: 0.7788
- Recall: 0.9316
## Model description
More information needed
## Intended uses & limitations
More information needed
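A minimal sketch of querying the classifier with the transformers pipeline (the example sentence is illustrative and the multi-label reading of the scores is an assumption — check the repository for the label definitions):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="alex-miller/cdp-multi-classifier-weighted",
    top_k=None,  # return scores for every label rather than only the best one
)

text = "Grant to strengthen climate-change adaptation planning in coastal districts."
for label in classifier(text)[0]:
    print(f"{label['label']}: {label['score']:.3f}")
```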
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.0497 | 1.0 | 11302 | 1.5640 | 0.9621 | 0.8011 | 0.7244 | 0.8958 |
| 0.9103 | 2.0 | 22604 | 1.4417 | 0.9663 | 0.8203 | 0.7522 | 0.9021 |
| 0.7629 | 3.0 | 33906 | 0.9562 | 0.9661 | 0.8235 | 0.7406 | 0.9272 |
| 0.6321 | 4.0 | 45208 | 0.9106 | 0.9697 | 0.8376 | 0.7720 | 0.9153 |
| 0.5464 | 5.0 | 56510 | 0.9811 | 0.9705 | 0.8419 | 0.7760 | 0.9200 |
| 0.5043 | 6.0 | 67812 | 0.9484 | 0.9700 | 0.8409 | 0.7677 | 0.9296 |
| 0.4647 | 7.0 | 79114 | 0.8569 | 0.9713 | 0.8465 | 0.7781 | 0.9281 |
| 0.4215 | 8.0 | 90416 | 0.8620 | 0.9703 | 0.8430 | 0.7682 | 0.9338 |
| 0.3794 | 9.0 | 101718 | 0.8569 | 0.9704 | 0.8437 | 0.7682 | 0.9357 |
| 0.344 | 10.0 | 113020 | 0.8305 | 0.9708 | 0.8448 | 0.7720 | 0.9328 |
| 0.3247 | 11.0 | 124322 | 0.7900 | 0.9707 | 0.8446 | 0.7709 | 0.9338 |
| 0.3159 | 12.0 | 135624 | 0.7838 | 0.9711 | 0.8463 | 0.7734 | 0.9344 |
| 0.3166 | 13.0 | 146926 | 0.8381 | 0.9710 | 0.8462 | 0.7727 | 0.9351 |
| 0.279 | 14.0 | 158228 | 0.8694 | 0.9718 | 0.8487 | 0.7821 | 0.9277 |
| 0.281 | 15.0 | 169530 | 0.8564 | 0.9716 | 0.8484 | 0.7788 | 0.9316 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.0.1
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "alex-miller/ODABert", "model-index": [{"name": "cdp-multi-classifier-weighted", "results": []}]} | alex-miller/cdp-multi-classifier-weighted | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:alex-miller/ODABert",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:29:21+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-alex-miller/ODABert #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| cdp-multi-classifier-weighted
=============================
This model is a fine-tuned version of alex-miller/ODABert on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8564
* Accuracy: 0.9716
* F1: 0.8484
* Precision: 0.7788
* Recall: 0.9316
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-06
* train\_batch\_size: 24
* eval\_batch\_size: 24
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 15
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.0.1
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-06\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.0.1\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-alex-miller/ODABert #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-06\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.0.1\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Ahjeong/dpo_gemma_7b_bf16_lr5e-7_origindset_default_kl0.01-epoch2 | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T12:33:21+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
multiple-choice | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Amrit05/swag_bert_finetuned
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3440
- Validation Loss: 0.5666
- Train Accuracy: 0.7995
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 9192, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.8207 | 0.5675 | 0.7803 | 0 |
| 0.3440 | 0.5666 | 0.7995 | 1 |
### Framework versions
- Transformers 4.39.3
- TensorFlow 2.15.0
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "google-bert/bert-base-uncased", "model-index": [{"name": "Amrit05/swag_bert_finetuned", "results": []}]} | Amrit05/swag_bert_finetuned | null | [
"transformers",
"tf",
"bert",
"multiple-choice",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:33:55+00:00 | [] | [] | TAGS
#transformers #tf #bert #multiple-choice #generated_from_keras_callback #base_model-google-bert/bert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us
| Amrit05/swag\_bert\_finetuned
=============================
This model is a fine-tuned version of google-bert/bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.3440
* Validation Loss: 0.5666
* Train Accuracy: 0.7995
* Epoch: 1
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'weight\_decay': None, 'clipnorm': None, 'global\_clipnorm': None, 'clipvalue': None, 'use\_ema': False, 'ema\_momentum': 0.99, 'ema\_overwrite\_frequency': None, 'jit\_compile': True, 'is\_legacy\_optimizer': False, 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 5e-05, 'decay\_steps': 9192, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.39.3
* TensorFlow 2.15.0
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 5e-05, 'decay\\_steps': 9192, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* TensorFlow 2.15.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tf #bert #multiple-choice #generated_from_keras_callback #base_model-google-bert/bert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 5e-05, 'decay\\_steps': 9192, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* TensorFlow 2.15.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | transformers | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/DavidAU/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
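As a minimal Python sketch (assuming the `huggingface_hub` and `llama-cpp-python` packages are installed), downloading and running one of the quants from the table below looks like this:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one of the static quants listed below (Q4_K_M is the "fast, recommended" size).
gguf_path = hf_hub_download(
    repo_id="mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF",
    filename="D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write a short scene set on a generation ship:", max_tokens=128)
print(out["choices"][0]["text"])
```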
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "library_name": "transformers", "tags": ["merge", "mergekit", "lazymergekit", "Eric111/Yarn-Mistral-7b-128k-DPO", "saishf/Multi-Verse-RP-7B"], "base_model": "DavidAU/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO", "quantized_by": "mradermacher"} | mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF | null | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Eric111/Yarn-Mistral-7b-128k-DPO",
"saishf/Multi-Verse-RP-7B",
"en",
"base_model:DavidAU/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:35:30+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #merge #mergekit #lazymergekit #Eric111/Yarn-Mistral-7b-128k-DPO #saishf/Multi-Verse-RP-7B #en #base_model-DavidAU/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #merge #mergekit #lazymergekit #Eric111/Yarn-Mistral-7b-128k-DPO #saishf/Multi-Verse-RP-7B #en #base_model-DavidAU/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Ahjeong/dpo_gemma_7b_bf16_lr5e-7_origindset_default_kl0.01-epoch3 | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T12:36:41+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
reinforcement-learning | null |
# **Reinforce** Agent playing **CartPole-v1**
 This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
 To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
| {"tags": ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-cartpolev1", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "CartPole-v1", "type": "CartPole-v1"}, "metrics": [{"type": "mean_reward", "value": "422.30 +/- 72.49", "name": "mean_reward", "verified": false}]}]}]} | pietroorlandi/Reinforce-cartpolev1 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | null | 2024-04-30T12:36:41+00:00 | [] | [] | TAGS
#CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
|
# Reinforce Agent playing CartPole-v1
This is a trained model of a Reinforce agent playing CartPole-v1 .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL
| [
"# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] | [
"TAGS\n#CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n",
"# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] |
translation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vinai_EnglistToVietnamese_v2
This model is a fine-tuned version of [vinai/vinai-translate-en2vi-v2](https://huggingface.co/vinai/vinai-translate-en2vi-v2) on the None dataset.
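A minimal inference sketch (assuming the checkpoint keeps the base model's mBART tokenizer and the `en_XX`/`vi_VN` language codes documented for vinai/vinai-translate-en2vi-v2):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "TammyTrinh/vinai_EnglistToVietnamese_v2"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="en_XX")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Please arrive fifteen minutes before your appointment.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    decoder_start_token_id=tokenizer.lang_code_to_id["vi_VN"],  # language code per the base model card
    num_beams=5,
    max_length=64,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```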
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
| {"tags": ["translation", "generated_from_trainer"], "base_model": "vinai/vinai-translate-en2vi-v2", "model-index": [{"name": "vinai_EnglistToVietnamese_v2", "results": []}]} | TammyTrinh/vinai_EnglistToVietnamese_v2 | null | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:vinai/vinai-translate-en2vi-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:37:21+00:00 | [] | [] | TAGS
#transformers #safetensors #mbart #text2text-generation #translation #generated_from_trainer #base_model-vinai/vinai-translate-en2vi-v2 #autotrain_compatible #endpoints_compatible #region-us
|
# vinai_EnglistToVietnamese_v2
This model is a fine-tuned version of vinai/vinai-translate-en2vi-v2 on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
| [
"# vinai_EnglistToVietnamese_v2\n\nThis model is a fine-tuned version of vinai/vinai-translate-en2vi-v2 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 64\n- eval_batch_size: 128\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.1+cu121\n- Datasets 2.15.0\n- Tokenizers 0.15.0"
] | [
"TAGS\n#transformers #safetensors #mbart #text2text-generation #translation #generated_from_trainer #base_model-vinai/vinai-translate-en2vi-v2 #autotrain_compatible #endpoints_compatible #region-us \n",
"# vinai_EnglistToVietnamese_v2\n\nThis model is a fine-tuned version of vinai/vinai-translate-en2vi-v2 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 64\n- eval_batch_size: 128\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.1+cu121\n- Datasets 2.15.0\n- Tokenizers 0.15.0"
] |
feature-extraction | transformers | # fine-tuned/medical-20-0-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo-0_9062874564
## Model Description
fine-tuned/medical-20-0-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo-0_9062874564 is a fine-tuned version of jinaai/jina-embeddings-v2-small-en designed for a specific domain.
## Use Case
This model is designed to support various applications in natural language processing and understanding.
## Associated Dataset
The dataset for this model can be found [**here**](https://huggingface.co/datasets/fine-tuned/fine-tuned/medical-20-0-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo-0_9062874564).
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from transformers import AutoModel, AutoTokenizer
llm_name = "fine-tuned/medical-20-0-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo-0_9062874564"
tokenizer = AutoTokenizer.from_pretrained(llm_name)
model = AutoModel.from_pretrained(llm_name, trust_remote_code=True)  # the repository ships custom modeling code (see the custom_code tag)
tokens = tokenizer("Your text here", return_tensors="pt")
embedding = model(**tokens)
```
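The forward pass above returns token-level hidden states; to obtain a single sentence vector, the jina-embeddings-v2 family typically uses attention-mask-aware mean pooling. A sketch building on the snippet above:

```python
import torch
import torch.nn.functional as F

def embed(text: str) -> torch.Tensor:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state          # (1, seq_len, hidden_dim)
    mask = enc["attention_mask"].unsqueeze(-1)           # (1, seq_len, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # masked mean pooling

query = embed("symptoms of a heart attack")
doc = embed("warning signs of myocardial infarction")
print(F.cosine_similarity(query, doc).item())
```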
| {} | fine-tuned/medical-20-0-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo-0_9062874564 | null | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"custom_code",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:38:21+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #feature-extraction #custom_code #endpoints_compatible #region-us
| # fine-tuned/medical-20-0-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo-0_9062874564
## Model Description
fine-tuned/medical-20-0-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo-0_9062874564 is a fine-tuned version of jinaai/jina-embeddings-v2-small-en designed for a specific domain.
## Use Case
This model is designed to support various applications in natural language processing and understanding.
## Associated Dataset
The dataset for this model can be found here.
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
| [
"# fine-tuned/medical-20-0-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo-0_9062874564",
"## Model Description\n\nfine-tuned/medical-20-0-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo-0_9062874564 is a fine-tuned version of jinaai/jina-embeddings-v2-small-en designed for a specific domain.",
"## Use Case\nThis model is designed to support various applications in natural language processing and understanding.",
"## Associated Dataset\n\nThis the dataset for this model can be found here.",
"## How to Use\n\nThis model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:"
] | [
"TAGS\n#transformers #safetensors #bert #feature-extraction #custom_code #endpoints_compatible #region-us \n",
"# fine-tuned/medical-20-0-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo-0_9062874564",
"## Model Description\n\nfine-tuned/medical-20-0-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo-0_9062874564 is a fine-tuned version of jinaai/jina-embeddings-v2-small-en designed for a specific domain.",
"## Use Case\nThis model is designed to support various applications in natural language processing and understanding.",
"## Associated Dataset\n\nThis the dataset for this model can be found here.",
"## How to Use\n\nThis model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v-bert-2.0-tamil-gpu-custom_preprocessed_v2
This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.4310
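A minimal transcription sketch (assuming the repository ships the matching processor/tokenizer files; the audio path is a placeholder for a 16 kHz Tamil clip):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Sajjo/w2v-bert-2.0-tamil-gpu-custom_preprocessed_v2",
)
print(asr("clip.wav")["text"])  # "clip.wav" is a placeholder file name
```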
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.53567e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.2473 | 0.24 | 300 | inf | 0.4771 |
| 0.7121 | 0.49 | 600 | inf | 0.3487 |
| 0.552 | 0.73 | 900 | inf | 0.3140 |
| 0.4973 | 0.97 | 1200 | inf | 0.3202 |
| 0.499 | 1.22 | 1500 | inf | 0.2678 |
| 0.4667 | 1.46 | 1800 | inf | 0.2784 |
| 0.5909 | 1.71 | 2100 | inf | 0.3930 |
| 1.411 | 1.95 | 2400 | inf | 0.3839 |
| 2.1124 | 2.19 | 2700 | inf | 0.4063 |
| 2.2398 | 2.44 | 3000 | inf | 0.4310 |
| 2.3058 | 2.68 | 3300 | inf | 0.4310 |
| 2.262 | 2.92 | 3600 | inf | 0.4310 |
| 2.2588 | 3.17 | 3900 | inf | 0.4310 |
| 2.3649 | 3.41 | 4200 | inf | 0.4310 |
| 2.2835 | 3.66 | 4500 | inf | 0.4310 |
| 2.3228 | 3.9 | 4800 | inf | 0.4310 |
| 2.2322 | 4.14 | 5100 | inf | 0.4310 |
| 2.3131 | 4.39 | 5400 | inf | 0.4310 |
| 2.2916 | 4.63 | 5700 | inf | 0.4310 |
| 2.3239 | 4.87 | 6000 | inf | 0.4310 |
| 2.3533 | 5.12 | 6300 | inf | 0.4310 |
| 2.2787 | 5.36 | 6600 | inf | 0.4310 |
| 2.2776 | 5.61 | 6900 | inf | 0.4310 |
| 2.3143 | 5.85 | 7200 | inf | 0.4310 |
| 2.3105 | 6.09 | 7500 | inf | 0.4310 |
| 2.2639 | 6.34 | 7800 | inf | 0.4310 |
| 2.3211 | 6.58 | 8100 | inf | 0.4310 |
| 2.2755 | 6.82 | 8400 | inf | 0.4310 |
| 2.3074 | 7.07 | 8700 | inf | 0.4310 |
| 2.2627 | 7.31 | 9000 | inf | 0.4310 |
| 2.2756 | 7.55 | 9300 | inf | 0.4310 |
| 2.2594 | 7.8 | 9600 | inf | 0.4310 |
| 2.2221 | 8.04 | 9900 | inf | 0.4310 |
| 2.2932 | 8.29 | 10200 | inf | 0.4310 |
| 2.2978 | 8.53 | 10500 | inf | 0.4310 |
| 2.2958 | 8.77 | 10800 | inf | 0.4310 |
| 2.3239 | 9.02 | 11100 | inf | 0.4310 |
| 2.281 | 9.26 | 11400 | inf | 0.4310 |
| 2.272 | 9.5 | 11700 | inf | 0.4310 |
| 2.2544 | 9.75 | 12000 | inf | 0.4310 |
| 2.3103 | 9.99 | 12300 | inf | 0.4310 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "facebook/w2v-bert-2.0", "model-index": [{"name": "w2v-bert-2.0-tamil-gpu-custom_preprocessed_v2", "results": []}]} | Sajjo/w2v-bert-2.0-tamil-gpu-custom_preprocessed_v2 | null | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/w2v-bert-2.0",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:40:55+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #wav2vec2-bert #automatic-speech-recognition #generated_from_trainer #base_model-facebook/w2v-bert-2.0 #license-mit #endpoints_compatible #region-us
| w2v-bert-2.0-tamil-gpu-custom\_preprocessed\_v2
===============================================
This model is a fine-tuned version of facebook/w2v-bert-2.0 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: inf
* Wer: 0.4310
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 4.53567e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 10
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.1.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4.53567e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #wav2vec2-bert #automatic-speech-recognition #generated_from_trainer #base_model-facebook/w2v-bert-2.0 #license-mit #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4.53567e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | null |
# int2eh/deepseek-coder-6.7b-instruct-Q6_K-GGUF
This model was converted to GGUF format from [`deepseek-ai/deepseek-coder-6.7b-instruct`](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo int2eh/deepseek-coder-6.7b-instruct-Q6_K-GGUF --model deepseek-coder-6.7b-instruct.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo int2eh/deepseek-coder-6.7b-instruct-Q6_K-GGUF --model deepseek-coder-6.7b-instruct.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m deepseek-coder-6.7b-instruct.Q6_K.gguf -n 128
```
| {"license": "other", "tags": ["llama-cpp", "gguf-my-repo"], "license_name": "deepseek", "license_link": "LICENSE"} | int2eh/deepseek-coder-6.7b-instruct-Q6_K-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:other",
"region:us"
] | null | 2024-04-30T12:41:14+00:00 | [] | [] | TAGS
#gguf #llama-cpp #gguf-my-repo #license-other #region-us
|
# int2eh/deepseek-coder-6.7b-instruct-Q6_K-GGUF
This model was converted to GGUF format from 'deepseek-ai/deepseek-coder-6.7b-instruct' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# int2eh/deepseek-coder-6.7b-instruct-Q6_K-GGUF\nThis model was converted to GGUF format from 'deepseek-ai/deepseek-coder-6.7b-instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #license-other #region-us \n",
"# int2eh/deepseek-coder-6.7b-instruct-Q6_K-GGUF\nThis model was converted to GGUF format from 'deepseek-ai/deepseek-coder-6.7b-instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Ahjeong/dpo_gemma_7b_bf16_lr5e-7_origindset_beta0.5_kl0.01-epoch1 | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T12:41:17+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# python-gpt2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1448
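For reference, a causal-language-modeling cross-entropy loss of 2.1448 corresponds to a perplexity of roughly exp(2.1448) ≈ 8.5:

```python
import math

# Validation loss reported above; for causal LM cross-entropy, perplexity = exp(loss).
print(math.exp(2.1448))  # ≈ 8.54
```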
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 9.2956 | 0.0138 | 25 | 7.9483 |
| 6.8319 | 0.0275 | 50 | 6.0463 |
| 5.653 | 0.0413 | 75 | 5.3905 |
| 5.0998 | 0.0551 | 100 | 5.0523 |
| 4.7296 | 0.0688 | 125 | 4.7295 |
| 4.4676 | 0.0826 | 150 | 4.4801 |
| 4.2285 | 0.0964 | 175 | 4.2580 |
| 4.0335 | 0.1101 | 200 | 4.0891 |
| 3.8654 | 0.1239 | 225 | 3.9376 |
| 3.7442 | 0.1377 | 250 | 3.8222 |
| 3.6155 | 0.1514 | 275 | 3.7006 |
| 3.4805 | 0.1652 | 300 | 3.5997 |
| 3.3804 | 0.1790 | 325 | 3.4840 |
| 3.3074 | 0.1927 | 350 | 3.3887 |
| 3.1737 | 0.2065 | 375 | 3.2711 |
| 3.0593 | 0.2203 | 400 | 3.1535 |
| 2.9634 | 0.2340 | 425 | 3.0443 |
| 2.887 | 0.2478 | 450 | 2.9574 |
| 2.7808 | 0.2616 | 475 | 2.8775 |
| 2.7117 | 0.2753 | 500 | 2.8190 |
| 2.6611 | 0.2891 | 525 | 2.7515 |
| 2.6141 | 0.3029 | 550 | 2.7097 |
| 2.5752 | 0.3167 | 575 | 2.6704 |
| 2.5038 | 0.3304 | 600 | 2.6307 |
| 2.4852 | 0.3442 | 625 | 2.6004 |
| 2.4638 | 0.3580 | 650 | 2.5696 |
| 2.4362 | 0.3717 | 675 | 2.5343 |
| 2.3896 | 0.3855 | 700 | 2.5131 |
| 2.3669 | 0.3993 | 725 | 2.4886 |
| 2.3174 | 0.4130 | 750 | 2.4695 |
| 2.3152 | 0.4268 | 775 | 2.4478 |
| 2.2916 | 0.4406 | 800 | 2.4271 |
| 2.2743 | 0.4543 | 825 | 2.4166 |
| 2.2555 | 0.4681 | 850 | 2.3959 |
| 2.2545 | 0.4819 | 875 | 2.3794 |
| 2.2291 | 0.4956 | 900 | 2.3645 |
| 2.2032 | 0.5094 | 925 | 2.3499 |
| 2.1842 | 0.5232 | 950 | 2.3382 |
| 2.1505 | 0.5369 | 975 | 2.3263 |
| 2.1668 | 0.5507 | 1000 | 2.3147 |
| 2.1649 | 0.5645 | 1025 | 2.3072 |
| 2.1427 | 0.5782 | 1050 | 2.2926 |
| 2.1051 | 0.5920 | 1075 | 2.2799 |
| 2.0792 | 0.6058 | 1100 | 2.2708 |
| 2.1171 | 0.6195 | 1125 | 2.2570 |
| 2.1012 | 0.6333 | 1150 | 2.2470 |
| 2.0853 | 0.6471 | 1175 | 2.2405 |
| 2.0786 | 0.6608 | 1200 | 2.2312 |
| 2.0664 | 0.6746 | 1225 | 2.2238 |
| 2.0706 | 0.6884 | 1250 | 2.2183 |
| 2.0557 | 0.7021 | 1275 | 2.2102 |
| 2.0404 | 0.7159 | 1300 | 2.2042 |
| 2.0493 | 0.7297 | 1325 | 2.1978 |
| 2.0373 | 0.7434 | 1350 | 2.1907 |
| 2.0093 | 0.7572 | 1375 | 2.1837 |
| 2.0228 | 0.7710 | 1400 | 2.1819 |
| 2.0147 | 0.7847 | 1425 | 2.1739 |
| 2.0206 | 0.7985 | 1450 | 2.1694 |
| 2.0156 | 0.8123 | 1475 | 2.1671 |
| 2.0126 | 0.8260 | 1500 | 2.1622 |
| 1.9834 | 0.8398 | 1525 | 2.1598 |
| 2.0182 | 0.8536 | 1550 | 2.1558 |
| 1.9876 | 0.8674 | 1575 | 2.1543 |
| 1.9914 | 0.8811 | 1600 | 2.1515 |
| 1.9933 | 0.8949 | 1625 | 2.1498 |
| 1.9945 | 0.9087 | 1650 | 2.1483 |
| 1.9733 | 0.9224 | 1675 | 2.1470 |
| 1.9778 | 0.9362 | 1700 | 2.1467 |
| 1.983 | 0.9500 | 1725 | 2.1454 |
| 1.9716 | 0.9637 | 1750 | 2.1453 |
| 1.9668 | 0.9775 | 1775 | 2.1449 |
| 1.9733 | 0.9913 | 1800 | 2.1448 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "gpt2", "model-index": [{"name": "python-gpt2", "results": []}]} | MadMarx37/python-gpt2 | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T12:42:02+00:00 | [] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| python-gpt2
===========
This model is a fine-tuned version of gpt2 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.1448
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 256
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.0+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
feature-extraction | transformers |
## Bedrock Titan Text Embeddings v2
This repository contains the MTEB scores and usage examples of Bedrock Titan Text Embeddings v2. You can use the embedding model either via the Bedrock InvokeModel API or via Bedrock's batch jobs. For RAG use cases, we recommend using the former to embed queries during search (latency optimized) and the latter to index the corpus (throughput optimized).
## Using Bedrock's InvokeModel API
```python
import json
import boto3
class TitanEmbeddings(object):
    accept = "application/json"
    content_type = "application/json"

    def __init__(self, model_id="amazon.titan-embed-text-v2:0"):
        self.bedrock = boto3.client(service_name='bedrock-runtime')
        self.model_id = model_id

    def __call__(self, text, dimensions, normalize=True):
        """
        Returns Titan Embeddings

        Args:
            text (str): text to embed
            dimensions (int): Number of output dimensions.
            normalize (bool): Whether to return the normalized embedding or not.

        Return:
            List[float]: Embedding
        """
        body = json.dumps({
            "inputText": text,
            "dimensions": dimensions,
            "normalize": normalize
        })

        response = self.bedrock.invoke_model(
            body=body, modelId=self.model_id, accept=self.accept, contentType=self.content_type
        )
        response_body = json.loads(response.get('body').read())

        return response_body['embedding']


if __name__ == '__main__':
    """
    Entrypoint for Amazon Titan Embeddings V2 - Text example.
    """
    dimensions = 1024
    normalize = True

    titan_embeddings_v2 = TitanEmbeddings(model_id="amazon.titan-embed-text-v2:0")

    input_text = "What are the different services that you offer?"
    embedding = titan_embeddings_v2(input_text, dimensions, normalize)

    print(f"{input_text=}")
    print(f"{embedding[:10]=}")
```
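With `normalize=True` the returned vectors are unit length, so semantic similarity between two texts reduces to a dot product. The following is a minimal usage sketch that reuses the `TitanEmbeddings` class from the example above; the sample sentences and the `numpy` dependency are illustrative assumptions, not part of the official example.

```python
import numpy as np

# Reuses the TitanEmbeddings class defined in the example above.
titan = TitanEmbeddings(model_id="amazon.titan-embed-text-v2:0")

query = "What are the different services that you offer?"
passage = "We offer managed hosting, consulting, and 24/7 support."  # illustrative text

# Both embeddings are requested normalized, so cosine similarity equals the dot product.
q_vec = np.array(titan(query, dimensions=1024, normalize=True))
p_vec = np.array(titan(passage, dimensions=1024, normalize=True))

print(f"cosine similarity: {float(np.dot(q_vec, p_vec)):.4f}")
```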
## Using Bedrock's batch jobs
```python
import requests
from aws_requests_auth.boto_utils import BotoAWSRequestsAuth
region = "us-east-1"
base_uri = f"bedrock.{region}.amazonaws.com"
batch_job_uri = f"https://{base_uri}/model-invocation-job/"
# For details on how to set up an IAM role for batch inference, see
# https://docs.aws.amazon.com/bedrock/latest/userguide/batch-inference-permissions.html
role_arn = "arn:aws:iam::111122223333:role/my-batch-inference-role"
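# Each line of the JSONL input file holds one record; a sketch of the record shape
# (assuming the standard Bedrock batch-inference format, values illustrative):
#   {"recordId": "rec-0001", "modelInput": {"inputText": "example text", "dimensions": 1024, "normalize": true}}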
payload = {
    "inputDataConfig": {
        "s3InputDataConfig": {
            "s3Uri": "s3://my-input-bucket/batch-input/",
            "s3InputFormat": "JSONL"
        }
    },
    "jobName": "embeddings-v2-batch-job",
    "modelId": "amazon.titan-embed-text-v2:0",
    "outputDataConfig": {
        "s3OutputDataConfig": {
            "s3Uri": "s3://my-output-bucket/batch-output/"
        }
    },
    "roleArn": role_arn
}

request_auth = BotoAWSRequestsAuth(
    aws_host=base_uri,
    aws_region=region,
    aws_service="bedrock"
)

response = requests.request("POST", batch_job_uri, json=payload, auth=request_auth)

print(response.json())
``` | {"language": ["en", "fr", "de", "es", "ja", "zh", "hi", "ar", "it", "pt", "sv", "ko", "he", "cs", "tr", "tl", "ru", "nl", "pl", "ta", "mr", "ml", "te", "kn", "vi", "id", "fa", "hu", "el", "ro", "da", "th", "fi", "sk", "uk", "no", "bg", "ca", "sr", "hr", "lt", "sl", "et", "la", "bn", "lv", "ms", "bs", "sq", "az", "gl", "is", "ka", "mk", "eu", "hy", "ne", "ur", "kk", "mn", "be", "uz", "km", "nn", "gu", "my", "cy", "eo", "si", "tt", "sw", "af", "ga", "pa", "ku", "ky", "tg", "or", "lo", "fo", "mt", "so", "lb", "am", "oc", "jv", "ha", "ps", "sa", "fy", "mg", "as", "ba", "br", "tk", "co", "dv", "rw", "ht", "yi", "sd", "zu", "gd", "bo", "ug", "mi", "rm", "xh", "su", "yo"], "license": "other", "tags": ["feature-extraction", "sentence-similarity", "mteb"], "license_name": "amazon-service-terms", "license_link": "https://aws.amazon.com/service-terms/", "inference": false, "model-index": [{"name": "Titan-text-embeddings-v2", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "mteb/amazon_counterfactual", "config": "en", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 79.31343283582089}, {"type": "ap", "value": 43.9465851246623}, {"type": "f1", "value": 73.6131343594374}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (de)", "type": "mteb/amazon_counterfactual", "config": "de", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 70.94218415417559}, {"type": "ap", "value": 82.30115528468109}, {"type": "f1", "value": 69.37963699148699}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en-ext)", "type": "mteb/amazon_counterfactual", "config": "en-ext", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 82.29385307346327}, {"type": "ap", "value": 29.956638709449372}, {"type": "f1", "value": 68.88158061498754}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (ja)", "type": "mteb/amazon_counterfactual", "config": "ja", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 80.06423982869379}, {"type": "ap", "value": 25.2439835379337}, {"type": "f1", "value": 65.53837311569734}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonPolarityClassification", "type": "mteb/amazon_polarity", "config": "default", "split": "test", "revision": "e2d317d38cd51312af73b3d32a06d1a08b442046"}, "metrics": [{"type": "accuracy", "value": 76.66435}, {"type": "ap", "value": 70.76988138513991}, {"type": "f1", "value": 76.54117595647566}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (en)", "type": "mteb/amazon_reviews_multi", "config": "en", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 35.276}, {"type": "f1", "value": 34.90637768461089}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (de)", "type": "mteb/amazon_reviews_multi", "config": "de", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 38.826}, {"type": "f1", "value": 37.71339372044998}]}, {"task": {"type": "Classification"}, "dataset": 
{"name": "MTEB AmazonReviewsClassification (es)", "type": "mteb/amazon_reviews_multi", "config": "es", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 39.385999999999996}, {"type": "f1", "value": 38.24347249789392}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (fr)", "type": "mteb/amazon_reviews_multi", "config": "fr", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 39.472}, {"type": "f1", "value": 38.37157729490788}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (ja)", "type": "mteb/amazon_reviews_multi", "config": "ja", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 35.897999999999996}, {"type": "f1", "value": 35.187204289589346}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (zh)", "type": "mteb/amazon_reviews_multi", "config": "zh", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 36.068}, {"type": "f1", "value": 35.042441064207175}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna", "type": "arguana", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 27.027}, {"type": "map_at_10", "value": 42.617}, {"type": "map_at_100", "value": 43.686}, {"type": "map_at_1000", "value": 43.695}, {"type": "map_at_3", "value": 37.684}, {"type": "map_at_5", "value": 40.532000000000004}, {"type": "mrr_at_1", "value": 27.667}, {"type": "mrr_at_10", "value": 42.88}, {"type": "mrr_at_100", "value": 43.929}, {"type": "mrr_at_1000", "value": 43.938}, {"type": "mrr_at_3", "value": 37.933}, {"type": "mrr_at_5", "value": 40.774}, {"type": "ndcg_at_1", "value": 27.027}, {"type": "ndcg_at_10", "value": 51.312000000000005}, {"type": "ndcg_at_100", "value": 55.696}, {"type": "ndcg_at_1000", "value": 55.896}, {"type": "ndcg_at_3", "value": 41.124}, {"type": "ndcg_at_5", "value": 46.283}, {"type": "precision_at_1", "value": 27.027}, {"type": "precision_at_10", "value": 7.9159999999999995}, {"type": "precision_at_100", "value": 0.979}, {"type": "precision_at_1000", "value": 0.099}, {"type": "precision_at_3", "value": 17.022000000000002}, {"type": "precision_at_5", "value": 12.731}, {"type": "recall_at_1", "value": 27.027}, {"type": "recall_at_10", "value": 79.161}, {"type": "recall_at_100", "value": 97.937}, {"type": "recall_at_1000", "value": 99.431}, {"type": "recall_at_3", "value": 51.06699999999999}, {"type": "recall_at_5", "value": 63.656}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringP2P", "type": "mteb/arxiv-clustering-p2p", "config": "default", "split": "test", "revision": "a122ad7f3f0291bf49cc6f4d32aa80929df69d5d"}, "metrics": [{"type": "v_measure", "value": 41.775131599226874}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringS2S", "type": "mteb/arxiv-clustering-s2s", "config": "default", "split": "test", "revision": "f910caf1a6075f7329cdf8c1a6135696f37dbd53"}, "metrics": [{"type": "v_measure", "value": 34.134214263072494}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB AskUbuntuDupQuestions", "type": "mteb/askubuntudupquestions-reranking", "config": "default", "split": "test", "revision": "2000358ca161889fa9c082cb41daa8dcfb161a54"}, "metrics": [{"type": "map", 
"value": 63.2885651257187}, {"type": "mrr", "value": 76.37712702809655}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB BIOSSES", "type": "mteb/biosses-sts", "config": "default", "split": "test", "revision": "d3fb88f8f02e40887cd149695127462bbcf29b4a"}, "metrics": [{"type": "cos_sim_pearson", "value": 89.53738990667027}, {"type": "cos_sim_spearman", "value": 87.13210584606783}, {"type": "euclidean_pearson", "value": 87.33265405736388}, {"type": "euclidean_spearman", "value": 87.18632394893399}, {"type": "manhattan_pearson", "value": 87.33673166528312}, {"type": "manhattan_spearman", "value": 86.9736685010257}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB BUCC (de-en)", "type": "mteb/bucc-bitext-mining", "config": "de-en", "split": "test", "revision": "d51519689f32196a32af33b075a01d0e7c51e252"}, "metrics": [{"type": "accuracy", "value": 98.32985386221294}, {"type": "f1", "value": 98.18371607515658}, {"type": "precision", "value": 98.1106471816284}, {"type": "recall", "value": 98.32985386221294}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB BUCC (fr-en)", "type": "mteb/bucc-bitext-mining", "config": "fr-en", "split": "test", "revision": "d51519689f32196a32af33b075a01d0e7c51e252"}, "metrics": [{"type": "accuracy", "value": 98.20603125687872}, {"type": "f1", "value": 98.04461075647515}, {"type": "precision", "value": 97.96390050627338}, {"type": "recall", "value": 98.20603125687872}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB BUCC (ru-en)", "type": "mteb/bucc-bitext-mining", "config": "ru-en", "split": "test", "revision": "d51519689f32196a32af33b075a01d0e7c51e252"}, "metrics": [{"type": "accuracy", "value": 94.8874263941808}, {"type": "f1", "value": 94.57568410114305}, {"type": "precision", "value": 94.42096755570951}, {"type": "recall", "value": 94.8874263941808}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB BUCC (zh-en)", "type": "mteb/bucc-bitext-mining", "config": "zh-en", "split": "test", "revision": "d51519689f32196a32af33b075a01d0e7c51e252"}, "metrics": [{"type": "accuracy", "value": 96.78778304370721}, {"type": "f1", "value": 96.75267684746358}, {"type": "precision", "value": 96.73512374934175}, {"type": "recall", "value": 96.78778304370721}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Banking77Classification", "type": "mteb/banking77", "config": "default", "split": "test", "revision": "0fd18e25b25c072e09e0d92ab615fda904d66300"}, "metrics": [{"type": "accuracy", "value": 84.3051948051948}, {"type": "f1", "value": 83.97876601554812}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringP2P", "type": "mteb/biorxiv-clustering-p2p", "config": "default", "split": "test", "revision": "65b79d1d13f80053f67aca9498d9402c2d9f1f40"}, "metrics": [{"type": "v_measure", "value": 35.005716163806575}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringS2S", "type": "mteb/biorxiv-clustering-s2s", "config": "default", "split": "test", "revision": "258694dd0231531bc1fd9de6ceb52a0853c6d908"}, "metrics": [{"type": "v_measure", "value": 30.999141295578852}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackAndroidRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 36.153}, {"type": "map_at_10", "value": 48.742000000000004}, {"type": "map_at_100", "value": 50.253}, {"type": "map_at_1000", "value": 50.373999999999995}, {"type": "map_at_3", "value": 
45.089}, {"type": "map_at_5", "value": 47.08}, {"type": "mrr_at_1", "value": 44.635000000000005}, {"type": "mrr_at_10", "value": 54.715}, {"type": "mrr_at_100", "value": 55.300000000000004}, {"type": "mrr_at_1000", "value": 55.337}, {"type": "mrr_at_3", "value": 52.527}, {"type": "mrr_at_5", "value": 53.76499999999999}, {"type": "ndcg_at_1", "value": 44.635000000000005}, {"type": "ndcg_at_10", "value": 55.31}, {"type": "ndcg_at_100", "value": 60.084}, {"type": "ndcg_at_1000", "value": 61.645}, {"type": "ndcg_at_3", "value": 50.876999999999995}, {"type": "ndcg_at_5", "value": 52.764}, {"type": "precision_at_1", "value": 44.635000000000005}, {"type": "precision_at_10", "value": 10.687000000000001}, {"type": "precision_at_100", "value": 1.66}, {"type": "precision_at_1000", "value": 0.212}, {"type": "precision_at_3", "value": 24.94}, {"type": "precision_at_5", "value": 17.596999999999998}, {"type": "recall_at_1", "value": 36.153}, {"type": "recall_at_10", "value": 67.308}, {"type": "recall_at_100", "value": 87.199}, {"type": "recall_at_1000", "value": 96.904}, {"type": "recall_at_3", "value": 53.466}, {"type": "recall_at_5", "value": 59.512}, {"type": "map_at_1", "value": 32.0}, {"type": "map_at_10", "value": 43.646}, {"type": "map_at_100", "value": 44.933}, {"type": "map_at_1000", "value": 45.049}, {"type": "map_at_3", "value": 40.333999999999996}, {"type": "map_at_5", "value": 42.108000000000004}, {"type": "mrr_at_1", "value": 40.382}, {"type": "mrr_at_10", "value": 49.738}, {"type": "mrr_at_100", "value": 50.331}, {"type": "mrr_at_1000", "value": 50.364}, {"type": "mrr_at_3", "value": 47.442}, {"type": "mrr_at_5", "value": 48.719}, {"type": "ndcg_at_1", "value": 40.382}, {"type": "ndcg_at_10", "value": 49.808}, {"type": "ndcg_at_100", "value": 54.053}, {"type": "ndcg_at_1000", "value": 55.753}, {"type": "ndcg_at_3", "value": 45.355000000000004}, {"type": "ndcg_at_5", "value": 47.215}, {"type": "precision_at_1", "value": 40.382}, {"type": "precision_at_10", "value": 9.58}, {"type": "precision_at_100", "value": 1.488}, {"type": "precision_at_1000", "value": 0.192}, {"type": "precision_at_3", "value": 22.272}, {"type": "precision_at_5", "value": 15.604999999999999}, {"type": "recall_at_1", "value": 32.0}, {"type": "recall_at_10", "value": 60.839}, {"type": "recall_at_100", "value": 78.869}, {"type": "recall_at_1000", "value": 89.384}, {"type": "recall_at_3", "value": 47.226}, {"type": "recall_at_5", "value": 52.864}, {"type": "map_at_1", "value": 44.084}, {"type": "map_at_10", "value": 56.591}, {"type": "map_at_100", "value": 57.533}, {"type": "map_at_1000", "value": 57.583}, {"type": "map_at_3", "value": 53.356}, {"type": "map_at_5", "value": 55.236}, {"type": "mrr_at_1", "value": 50.532999999999994}, {"type": "mrr_at_10", "value": 59.974000000000004}, {"type": "mrr_at_100", "value": 60.557}, {"type": "mrr_at_1000", "value": 60.584}, {"type": "mrr_at_3", "value": 57.774}, {"type": "mrr_at_5", "value": 59.063}, {"type": "ndcg_at_1", "value": 50.532999999999994}, {"type": "ndcg_at_10", "value": 62.265}, {"type": "ndcg_at_100", "value": 65.78}, {"type": "ndcg_at_1000", "value": 66.76299999999999}, {"type": "ndcg_at_3", "value": 57.154}, {"type": "ndcg_at_5", "value": 59.708000000000006}, {"type": "precision_at_1", "value": 50.532999999999994}, {"type": "precision_at_10", "value": 9.85}, {"type": "precision_at_100", "value": 1.247}, {"type": "precision_at_1000", "value": 0.13699999999999998}, {"type": "precision_at_3", "value": 25.434}, {"type": "precision_at_5", "value": 17.279}, {"type": 
"recall_at_1", "value": 44.084}, {"type": "recall_at_10", "value": 75.576}, {"type": "recall_at_100", "value": 90.524}, {"type": "recall_at_1000", "value": 97.38799999999999}, {"type": "recall_at_3", "value": 61.792}, {"type": "recall_at_5", "value": 68.112}, {"type": "map_at_1", "value": 29.203000000000003}, {"type": "map_at_10", "value": 38.078}, {"type": "map_at_100", "value": 39.144}, {"type": "map_at_1000", "value": 39.222}, {"type": "map_at_3", "value": 35.278999999999996}, {"type": "map_at_5", "value": 36.812}, {"type": "mrr_at_1", "value": 31.299}, {"type": "mrr_at_10", "value": 39.879}, {"type": "mrr_at_100", "value": 40.832}, {"type": "mrr_at_1000", "value": 40.891}, {"type": "mrr_at_3", "value": 37.513999999999996}, {"type": "mrr_at_5", "value": 38.802}, {"type": "ndcg_at_1", "value": 31.299}, {"type": "ndcg_at_10", "value": 43.047999999999995}, {"type": "ndcg_at_100", "value": 48.101}, {"type": "ndcg_at_1000", "value": 49.958999999999996}, {"type": "ndcg_at_3", "value": 37.778}, {"type": "ndcg_at_5", "value": 40.257}, {"type": "precision_at_1", "value": 31.299}, {"type": "precision_at_10", "value": 6.508}, {"type": "precision_at_100", "value": 0.9530000000000001}, {"type": "precision_at_1000", "value": 0.11399999999999999}, {"type": "precision_at_3", "value": 15.744}, {"type": "precision_at_5", "value": 10.893}, {"type": "recall_at_1", "value": 29.203000000000003}, {"type": "recall_at_10", "value": 56.552}, {"type": "recall_at_100", "value": 79.21000000000001}, {"type": "recall_at_1000", "value": 92.884}, {"type": "recall_at_3", "value": 42.441}, {"type": "recall_at_5", "value": 48.399}, {"type": "map_at_1", "value": 19.029}, {"type": "map_at_10", "value": 28.410000000000004}, {"type": "map_at_100", "value": 29.773}, {"type": "map_at_1000", "value": 29.887000000000004}, {"type": "map_at_3", "value": 25.374000000000002}, {"type": "map_at_5", "value": 27.162}, {"type": "mrr_at_1", "value": 23.632}, {"type": "mrr_at_10", "value": 33.0}, {"type": "mrr_at_100", "value": 34.043}, {"type": "mrr_at_1000", "value": 34.105999999999995}, {"type": "mrr_at_3", "value": 30.245}, {"type": "mrr_at_5", "value": 31.830000000000002}, {"type": "ndcg_at_1", "value": 23.632}, {"type": "ndcg_at_10", "value": 34.192}, {"type": "ndcg_at_100", "value": 40.29}, {"type": "ndcg_at_1000", "value": 42.753}, {"type": "ndcg_at_3", "value": 28.811999999999998}, {"type": "ndcg_at_5", "value": 31.46}, {"type": "precision_at_1", "value": 23.632}, {"type": "precision_at_10", "value": 6.455}, {"type": "precision_at_100", "value": 1.095}, {"type": "precision_at_1000", "value": 0.14200000000000002}, {"type": "precision_at_3", "value": 14.096}, {"type": "precision_at_5", "value": 10.448}, {"type": "recall_at_1", "value": 19.029}, {"type": "recall_at_10", "value": 47.278999999999996}, {"type": "recall_at_100", "value": 72.977}, {"type": "recall_at_1000", "value": 90.17699999999999}, {"type": "recall_at_3", "value": 32.519}, {"type": "recall_at_5", "value": 39.156}, {"type": "map_at_1", "value": 30.983}, {"type": "map_at_10", "value": 42.595}, {"type": "map_at_100", "value": 43.906}, {"type": "map_at_1000", "value": 44.001000000000005}, {"type": "map_at_3", "value": 39.245000000000005}, {"type": "map_at_5", "value": 41.14}, {"type": "mrr_at_1", "value": 38.114}, {"type": "mrr_at_10", "value": 48.181000000000004}, {"type": "mrr_at_100", "value": 48.935}, {"type": "mrr_at_1000", "value": 48.972}, {"type": "mrr_at_3", "value": 45.877}, {"type": "mrr_at_5", "value": 47.249}, {"type": "ndcg_at_1", "value": 38.114}, {"type": 
"ndcg_at_10", "value": 48.793}, {"type": "ndcg_at_100", "value": 54.001999999999995}, {"type": "ndcg_at_1000", "value": 55.749}, {"type": "ndcg_at_3", "value": 43.875}, {"type": "ndcg_at_5", "value": 46.23}, {"type": "precision_at_1", "value": 38.114}, {"type": "precision_at_10", "value": 8.98}, {"type": "precision_at_100", "value": 1.3390000000000002}, {"type": "precision_at_1000", "value": 0.166}, {"type": "precision_at_3", "value": 21.303}, {"type": "precision_at_5", "value": 15.072}, {"type": "recall_at_1", "value": 30.983}, {"type": "recall_at_10", "value": 61.47}, {"type": "recall_at_100", "value": 83.14399999999999}, {"type": "recall_at_1000", "value": 94.589}, {"type": "recall_at_3", "value": 47.019}, {"type": "recall_at_5", "value": 53.445}, {"type": "map_at_1", "value": 29.707}, {"type": "map_at_10", "value": 40.900999999999996}, {"type": "map_at_100", "value": 42.369}, {"type": "map_at_1000", "value": 42.455}, {"type": "map_at_3", "value": 37.416}, {"type": "map_at_5", "value": 39.483000000000004}, {"type": "mrr_at_1", "value": 36.301}, {"type": "mrr_at_10", "value": 46.046}, {"type": "mrr_at_100", "value": 46.922999999999995}, {"type": "mrr_at_1000", "value": 46.964}, {"type": "mrr_at_3", "value": 43.436}, {"type": "mrr_at_5", "value": 45.04}, {"type": "ndcg_at_1", "value": 36.301}, {"type": "ndcg_at_10", "value": 46.955999999999996}, {"type": "ndcg_at_100", "value": 52.712}, {"type": "ndcg_at_1000", "value": 54.447}, {"type": "ndcg_at_3", "value": 41.643}, {"type": "ndcg_at_5", "value": 44.305}, {"type": "precision_at_1", "value": 36.301}, {"type": "precision_at_10", "value": 8.607}, {"type": "precision_at_100", "value": 1.34}, {"type": "precision_at_1000", "value": 0.164}, {"type": "precision_at_3", "value": 19.901}, {"type": "precision_at_5", "value": 14.429}, {"type": "recall_at_1", "value": 29.707}, {"type": "recall_at_10", "value": 59.559}, {"type": "recall_at_100", "value": 83.60499999999999}, {"type": "recall_at_1000", "value": 95.291}, {"type": "recall_at_3", "value": 44.774}, {"type": "recall_at_5", "value": 51.67}, {"type": "map_at_1", "value": 29.455416666666668}, {"type": "map_at_10", "value": 39.61333333333334}, {"type": "map_at_100", "value": 40.85875}, {"type": "map_at_1000", "value": 40.96791666666667}, {"type": "map_at_3", "value": 36.48874999999999}, {"type": "map_at_5", "value": 38.24341666666667}, {"type": "mrr_at_1", "value": 34.80258333333334}, {"type": "mrr_at_10", "value": 43.783}, {"type": "mrr_at_100", "value": 44.591833333333334}, {"type": "mrr_at_1000", "value": 44.64208333333333}, {"type": "mrr_at_3", "value": 41.38974999999999}, {"type": "mrr_at_5", "value": 42.74566666666667}, {"type": "ndcg_at_1", "value": 34.80258333333334}, {"type": "ndcg_at_10", "value": 45.2705}, {"type": "ndcg_at_100", "value": 50.31224999999999}, {"type": "ndcg_at_1000", "value": 52.27916666666667}, {"type": "ndcg_at_3", "value": 40.2745}, {"type": "ndcg_at_5", "value": 42.61575}, {"type": "precision_at_1", "value": 34.80258333333334}, {"type": "precision_at_10", "value": 7.97075}, {"type": "precision_at_100", "value": 1.2400000000000002}, {"type": "precision_at_1000", "value": 0.1595}, {"type": "precision_at_3", "value": 18.627583333333337}, {"type": "precision_at_5", "value": 13.207000000000003}, {"type": "recall_at_1", "value": 29.455416666666668}, {"type": "recall_at_10", "value": 57.66091666666665}, {"type": "recall_at_100", "value": 79.51966666666665}, {"type": "recall_at_1000", "value": 93.01883333333333}, {"type": "recall_at_3", "value": 43.580416666666665}, 
{"type": "recall_at_5", "value": 49.7025}, {"type": "map_at_1", "value": 27.569}, {"type": "map_at_10", "value": 34.73}, {"type": "map_at_100", "value": 35.708}, {"type": "map_at_1000", "value": 35.808}, {"type": "map_at_3", "value": 32.62}, {"type": "map_at_5", "value": 33.556999999999995}, {"type": "mrr_at_1", "value": 31.135}, {"type": "mrr_at_10", "value": 37.833}, {"type": "mrr_at_100", "value": 38.68}, {"type": "mrr_at_1000", "value": 38.749}, {"type": "mrr_at_3", "value": 35.915}, {"type": "mrr_at_5", "value": 36.751}, {"type": "ndcg_at_1", "value": 31.135}, {"type": "ndcg_at_10", "value": 39.047}, {"type": "ndcg_at_100", "value": 43.822}, {"type": "ndcg_at_1000", "value": 46.249}, {"type": "ndcg_at_3", "value": 35.115}, {"type": "ndcg_at_5", "value": 36.49}, {"type": "precision_at_1", "value": 31.135}, {"type": "precision_at_10", "value": 6.058}, {"type": "precision_at_100", "value": 0.923}, {"type": "precision_at_1000", "value": 0.121}, {"type": "precision_at_3", "value": 15.031}, {"type": "precision_at_5", "value": 10.030999999999999}, {"type": "recall_at_1", "value": 27.569}, {"type": "recall_at_10", "value": 49.332}, {"type": "recall_at_100", "value": 70.967}, {"type": "recall_at_1000", "value": 88.876}, {"type": "recall_at_3", "value": 37.858999999999995}, {"type": "recall_at_5", "value": 41.589}, {"type": "map_at_1", "value": 19.677}, {"type": "map_at_10", "value": 28.097}, {"type": "map_at_100", "value": 29.24}, {"type": "map_at_1000", "value": 29.365000000000002}, {"type": "map_at_3", "value": 25.566}, {"type": "map_at_5", "value": 26.852999999999998}, {"type": "mrr_at_1", "value": 23.882}, {"type": "mrr_at_10", "value": 31.851000000000003}, {"type": "mrr_at_100", "value": 32.757}, {"type": "mrr_at_1000", "value": 32.83}, {"type": "mrr_at_3", "value": 29.485}, {"type": "mrr_at_5", "value": 30.744}, {"type": "ndcg_at_1", "value": 23.882}, {"type": "ndcg_at_10", "value": 33.154}, {"type": "ndcg_at_100", "value": 38.491}, {"type": "ndcg_at_1000", "value": 41.274}, {"type": "ndcg_at_3", "value": 28.648}, {"type": "ndcg_at_5", "value": 30.519000000000002}, {"type": "precision_at_1", "value": 23.882}, {"type": "precision_at_10", "value": 6.117999999999999}, {"type": "precision_at_100", "value": 1.0330000000000001}, {"type": "precision_at_1000", "value": 0.145}, {"type": "precision_at_3", "value": 13.73}, {"type": "precision_at_5", "value": 9.794}, {"type": "recall_at_1", "value": 19.677}, {"type": "recall_at_10", "value": 44.444}, {"type": "recall_at_100", "value": 68.477}, {"type": "recall_at_1000", "value": 88.23}, {"type": "recall_at_3", "value": 31.708}, {"type": "recall_at_5", "value": 36.599}, {"type": "map_at_1", "value": 30.489}, {"type": "map_at_10", "value": 40.883}, {"type": "map_at_100", "value": 42.058}, {"type": "map_at_1000", "value": 42.152}, {"type": "map_at_3", "value": 37.525999999999996}, {"type": "map_at_5", "value": 39.753}, {"type": "mrr_at_1", "value": 35.541}, {"type": "mrr_at_10", "value": 44.842999999999996}, {"type": "mrr_at_100", "value": 45.673}, {"type": "mrr_at_1000", "value": 45.723}, {"type": "mrr_at_3", "value": 42.397}, {"type": "mrr_at_5", "value": 43.937}, {"type": "ndcg_at_1", "value": 35.541}, {"type": "ndcg_at_10", "value": 46.504}, {"type": "ndcg_at_100", "value": 51.637}, {"type": "ndcg_at_1000", "value": 53.535}, {"type": "ndcg_at_3", "value": 41.127}, {"type": "ndcg_at_5", "value": 44.17}, {"type": "precision_at_1", "value": 35.541}, {"type": "precision_at_10", "value": 7.864}, {"type": "precision_at_100", "value": 1.165}, {"type": 
"precision_at_1000", "value": 0.14300000000000002}, {"type": "precision_at_3", "value": 18.688}, {"type": "precision_at_5", "value": 13.507}, {"type": "recall_at_1", "value": 30.489}, {"type": "recall_at_10", "value": 59.378}, {"type": "recall_at_100", "value": 81.38300000000001}, {"type": "recall_at_1000", "value": 94.294}, {"type": "recall_at_3", "value": 44.946000000000005}, {"type": "recall_at_5", "value": 52.644999999999996}, {"type": "map_at_1", "value": 29.981}, {"type": "map_at_10", "value": 39.688}, {"type": "map_at_100", "value": 41.400999999999996}, {"type": "map_at_1000", "value": 41.634}, {"type": "map_at_3", "value": 36.047000000000004}, {"type": "map_at_5", "value": 38.064}, {"type": "mrr_at_1", "value": 35.375}, {"type": "mrr_at_10", "value": 44.169000000000004}, {"type": "mrr_at_100", "value": 45.07}, {"type": "mrr_at_1000", "value": 45.113}, {"type": "mrr_at_3", "value": 41.502}, {"type": "mrr_at_5", "value": 43.034}, {"type": "ndcg_at_1", "value": 35.375}, {"type": "ndcg_at_10", "value": 45.959}, {"type": "ndcg_at_100", "value": 51.688}, {"type": "ndcg_at_1000", "value": 53.714}, {"type": "ndcg_at_3", "value": 40.457}, {"type": "ndcg_at_5", "value": 43.08}, {"type": "precision_at_1", "value": 35.375}, {"type": "precision_at_10", "value": 8.953}, {"type": "precision_at_100", "value": 1.709}, {"type": "precision_at_1000", "value": 0.253}, {"type": "precision_at_3", "value": 18.775}, {"type": "precision_at_5", "value": 14.032}, {"type": "recall_at_1", "value": 29.981}, {"type": "recall_at_10", "value": 57.896}, {"type": "recall_at_100", "value": 83.438}, {"type": "recall_at_1000", "value": 95.608}, {"type": "recall_at_3", "value": 42.327}, {"type": "recall_at_5", "value": 49.069}, {"type": "map_at_1", "value": 24.59}, {"type": "map_at_10", "value": 32.999}, {"type": "map_at_100", "value": 33.987}, {"type": "map_at_1000", "value": 34.085}, {"type": "map_at_3", "value": 30.013}, {"type": "map_at_5", "value": 31.673000000000002}, {"type": "mrr_at_1", "value": 26.802}, {"type": "mrr_at_10", "value": 35.167}, {"type": "mrr_at_100", "value": 36.001}, {"type": "mrr_at_1000", "value": 36.071999999999996}, {"type": "mrr_at_3", "value": 32.562999999999995}, {"type": "mrr_at_5", "value": 34.014}, {"type": "ndcg_at_1", "value": 26.802}, {"type": "ndcg_at_10", "value": 38.21}, {"type": "ndcg_at_100", "value": 43.086999999999996}, {"type": "ndcg_at_1000", "value": 45.509}, {"type": "ndcg_at_3", "value": 32.452999999999996}, {"type": "ndcg_at_5", "value": 35.191}, {"type": "precision_at_1", "value": 26.802}, {"type": "precision_at_10", "value": 5.989}, {"type": "precision_at_100", "value": 0.928}, {"type": "precision_at_1000", "value": 0.125}, {"type": "precision_at_3", "value": 13.617}, {"type": "precision_at_5", "value": 9.797}, {"type": "recall_at_1", "value": 24.59}, {"type": "recall_at_10", "value": 52.298}, {"type": "recall_at_100", "value": 74.443}, {"type": "recall_at_1000", "value": 92.601}, {"type": "recall_at_3", "value": 36.888}, {"type": "recall_at_5", "value": 43.37}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ClimateFEVER", "type": "climate-fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 9.798}, {"type": "map_at_10", "value": 15.983}, {"type": "map_at_100", "value": 17.18}, {"type": "map_at_1000", "value": 17.329}, {"type": "map_at_3", "value": 13.594000000000001}, {"type": "map_at_5", "value": 14.984}, {"type": "mrr_at_1", "value": 21.564}, {"type": "mrr_at_10", "value": 31.415}, {"type": 
"mrr_at_100", "value": 32.317}, {"type": "mrr_at_1000", "value": 32.376}, {"type": "mrr_at_3", "value": 28.360000000000003}, {"type": "mrr_at_5", "value": 30.194}, {"type": "ndcg_at_1", "value": 21.564}, {"type": "ndcg_at_10", "value": 22.762}, {"type": "ndcg_at_100", "value": 28.199}, {"type": "ndcg_at_1000", "value": 31.284}, {"type": "ndcg_at_3", "value": 18.746}, {"type": "ndcg_at_5", "value": 20.434}, {"type": "precision_at_1", "value": 21.564}, {"type": "precision_at_10", "value": 6.755999999999999}, {"type": "precision_at_100", "value": 1.258}, {"type": "precision_at_1000", "value": 0.182}, {"type": "precision_at_3", "value": 13.507}, {"type": "precision_at_5", "value": 10.541}, {"type": "recall_at_1", "value": 9.798}, {"type": "recall_at_10", "value": 27.407999999999998}, {"type": "recall_at_100", "value": 46.659}, {"type": "recall_at_1000", "value": 64.132}, {"type": "recall_at_3", "value": 17.541999999999998}, {"type": "recall_at_5", "value": 22.137999999999998}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia", "type": "dbpedia-entity", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 8.276}, {"type": "map_at_10", "value": 18.003}, {"type": "map_at_100", "value": 23.759}, {"type": "map_at_1000", "value": 25.105}, {"type": "map_at_3", "value": 13.812}, {"type": "map_at_5", "value": 15.659999999999998}, {"type": "mrr_at_1", "value": 63.0}, {"type": "mrr_at_10", "value": 71.812}, {"type": "mrr_at_100", "value": 72.205}, {"type": "mrr_at_1000", "value": 72.21300000000001}, {"type": "mrr_at_3", "value": 70.375}, {"type": "mrr_at_5", "value": 71.188}, {"type": "ndcg_at_1", "value": 50.5}, {"type": "ndcg_at_10", "value": 36.954}, {"type": "ndcg_at_100", "value": 40.083999999999996}, {"type": "ndcg_at_1000", "value": 47.661}, {"type": "ndcg_at_3", "value": 42.666}, {"type": "ndcg_at_5", "value": 39.581}, {"type": "precision_at_1", "value": 63.0}, {"type": "precision_at_10", "value": 28.249999999999996}, {"type": "precision_at_100", "value": 8.113}, {"type": "precision_at_1000", "value": 1.7149999999999999}, {"type": "precision_at_3", "value": 47.083000000000006}, {"type": "precision_at_5", "value": 38.65}, {"type": "recall_at_1", "value": 8.276}, {"type": "recall_at_10", "value": 23.177}, {"type": "recall_at_100", "value": 45.321}, {"type": "recall_at_1000", "value": 68.742}, {"type": "recall_at_3", "value": 15.473}, {"type": "recall_at_5", "value": 18.276}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB EmotionClassification", "type": "mteb/emotion", "config": "default", "split": "test", "revision": "4f58c6b202a23cf9a4da393831edf4f9183cad37"}, "metrics": [{"type": "accuracy", "value": 55.605000000000004}, {"type": "f1", "value": 49.86208997523934}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FEVER", "type": "fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 80.079}, {"type": "map_at_10", "value": 85.143}, {"type": "map_at_100", "value": 85.287}, {"type": "map_at_1000", "value": 85.297}, {"type": "map_at_3", "value": 84.533}, {"type": "map_at_5", "value": 84.953}, {"type": "mrr_at_1", "value": 86.424}, {"type": "mrr_at_10", "value": 91.145}, {"type": "mrr_at_100", "value": 91.212}, {"type": "mrr_at_1000", "value": 91.213}, {"type": "mrr_at_3", "value": 90.682}, {"type": "mrr_at_5", "value": 91.013}, {"type": "ndcg_at_1", "value": 86.424}, {"type": "ndcg_at_10", "value": 88.175}, {"type": "ndcg_at_100", "value": 
88.77199999999999}, {"type": "ndcg_at_1000", "value": 88.967}, {"type": "ndcg_at_3", "value": 87.265}, {"type": "ndcg_at_5", "value": 87.813}, {"type": "precision_at_1", "value": 86.424}, {"type": "precision_at_10", "value": 10.012}, {"type": "precision_at_100", "value": 1.042}, {"type": "precision_at_1000", "value": 0.107}, {"type": "precision_at_3", "value": 32.228}, {"type": "precision_at_5", "value": 19.724}, {"type": "recall_at_1", "value": 80.079}, {"type": "recall_at_10", "value": 91.96600000000001}, {"type": "recall_at_100", "value": 94.541}, {"type": "recall_at_1000", "value": 95.824}, {"type": "recall_at_3", "value": 89.213}, {"type": "recall_at_5", "value": 90.791}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA2018", "type": "fiqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 23.006999999999998}, {"type": "map_at_10", "value": 36.923}, {"type": "map_at_100", "value": 38.932}, {"type": "map_at_1000", "value": 39.096}, {"type": "map_at_3", "value": 32.322}, {"type": "map_at_5", "value": 35.119}, {"type": "mrr_at_1", "value": 45.37}, {"type": "mrr_at_10", "value": 53.418}, {"type": "mrr_at_100", "value": 54.174}, {"type": "mrr_at_1000", "value": 54.20700000000001}, {"type": "mrr_at_3", "value": 51.132}, {"type": "mrr_at_5", "value": 52.451}, {"type": "ndcg_at_1", "value": 45.37}, {"type": "ndcg_at_10", "value": 44.799}, {"type": "ndcg_at_100", "value": 51.605000000000004}, {"type": "ndcg_at_1000", "value": 54.30500000000001}, {"type": "ndcg_at_3", "value": 41.33}, {"type": "ndcg_at_5", "value": 42.608000000000004}, {"type": "precision_at_1", "value": 45.37}, {"type": "precision_at_10", "value": 12.33}, {"type": "precision_at_100", "value": 1.9349999999999998}, {"type": "precision_at_1000", "value": 0.241}, {"type": "precision_at_3", "value": 27.828999999999997}, {"type": "precision_at_5", "value": 20.432}, {"type": "recall_at_1", "value": 23.006999999999998}, {"type": "recall_at_10", "value": 51.06699999999999}, {"type": "recall_at_100", "value": 75.917}, {"type": "recall_at_1000", "value": 92.331}, {"type": "recall_at_3", "value": 36.544}, {"type": "recall_at_5", "value": 43.449}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA", "type": "hotpotqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 38.196999999999996}, {"type": "map_at_10", "value": 55.554}, {"type": "map_at_100", "value": 56.309}, {"type": "map_at_1000", "value": 56.37799999999999}, {"type": "map_at_3", "value": 53.123}, {"type": "map_at_5", "value": 54.626}, {"type": "mrr_at_1", "value": 76.39399999999999}, {"type": "mrr_at_10", "value": 81.75}, {"type": "mrr_at_100", "value": 81.973}, {"type": "mrr_at_1000", "value": 81.982}, {"type": "mrr_at_3", "value": 80.79499999999999}, {"type": "mrr_at_5", "value": 81.393}, {"type": "ndcg_at_1", "value": 76.39399999999999}, {"type": "ndcg_at_10", "value": 64.14800000000001}, {"type": "ndcg_at_100", "value": 66.90899999999999}, {"type": "ndcg_at_1000", "value": 68.277}, {"type": "ndcg_at_3", "value": 60.529999999999994}, {"type": "ndcg_at_5", "value": 62.513}, {"type": "precision_at_1", "value": 76.39399999999999}, {"type": "precision_at_10", "value": 12.967999999999998}, {"type": "precision_at_100", "value": 1.5150000000000001}, {"type": "precision_at_1000", "value": 0.16999999999999998}, {"type": "precision_at_3", "value": 37.884}, {"type": "precision_at_5", "value": 24.294}, {"type": "recall_at_1", "value": 
38.196999999999996}, {"type": "recall_at_10", "value": 64.84100000000001}, {"type": "recall_at_100", "value": 75.726}, {"type": "recall_at_1000", "value": 84.794}, {"type": "recall_at_3", "value": 56.826}, {"type": "recall_at_5", "value": 60.736000000000004}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ImdbClassification", "type": "mteb/imdb", "config": "default", "split": "test", "revision": "3d86128a09e091d6018b6d26cad27f2739fc2db7"}, "metrics": [{"type": "accuracy", "value": 82.3912}, {"type": "ap", "value": 76.3949298163793}, {"type": "f1", "value": 82.30848699417406}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MSMARCO", "type": "msmarco", "config": "default", "split": "dev", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 19.454}, {"type": "map_at_10", "value": 31.22}, {"type": "map_at_100", "value": 32.475}, {"type": "map_at_1000", "value": 32.532}, {"type": "map_at_3", "value": 27.419}, {"type": "map_at_5", "value": 29.608}, {"type": "mrr_at_1", "value": 20.072000000000003}, {"type": "mrr_at_10", "value": 31.813999999999997}, {"type": "mrr_at_100", "value": 33.01}, {"type": "mrr_at_1000", "value": 33.062000000000005}, {"type": "mrr_at_3", "value": 28.055999999999997}, {"type": "mrr_at_5", "value": 30.218}, {"type": "ndcg_at_1", "value": 20.072000000000003}, {"type": "ndcg_at_10", "value": 38.0}, {"type": "ndcg_at_100", "value": 44.038}, {"type": "ndcg_at_1000", "value": 45.43}, {"type": "ndcg_at_3", "value": 30.219}, {"type": "ndcg_at_5", "value": 34.127}, {"type": "precision_at_1", "value": 20.072000000000003}, {"type": "precision_at_10", "value": 6.159}, {"type": "precision_at_100", "value": 0.9169999999999999}, {"type": "precision_at_1000", "value": 0.104}, {"type": "precision_at_3", "value": 13.071}, {"type": "precision_at_5", "value": 9.814}, {"type": "recall_at_1", "value": 19.454}, {"type": "recall_at_10", "value": 58.931}, {"type": "recall_at_100", "value": 86.886}, {"type": "recall_at_1000", "value": 97.425}, {"type": "recall_at_3", "value": 37.697}, {"type": "recall_at_5", "value": 47.101}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "mteb/mtop_domain", "config": "en", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 90.46283629730961}, {"type": "f1", "value": 90.22448402668293}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (de)", "type": "mteb/mtop_domain", "config": "de", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 86.91462383770076}, {"type": "f1", "value": 85.77767304705436}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (es)", "type": "mteb/mtop_domain", "config": "es", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 87.73849232821881}, {"type": "f1", "value": 87.33680109229385}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (fr)", "type": "mteb/mtop_domain", "config": "fr", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 86.22298778578141}, {"type": "f1", "value": 85.88868176519013}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (hi)", "type": "mteb/mtop_domain", "config": "hi", "split": "test", "revision": 
"d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 82.91860882036572}, {"type": "f1", "value": 81.38044567838352}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (th)", "type": "mteb/mtop_domain", "config": "th", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 69.90235081374323}, {"type": "f1", "value": 68.12897827044782}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": "mteb/mtop_intent", "config": "en", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 66.0031919744642}, {"type": "f1", "value": 48.13490278120492}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (de)", "type": "mteb/mtop_intent", "config": "de", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 63.260073260073256}, {"type": "f1", "value": 42.627167415555505}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (es)", "type": "mteb/mtop_intent", "config": "es", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 65.06004002668445}, {"type": "f1", "value": 44.90527231209402}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (fr)", "type": "mteb/mtop_intent", "config": "fr", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 59.42687128092702}, {"type": "f1", "value": 41.79584710899656}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (hi)", "type": "mteb/mtop_intent", "config": "hi", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 59.078522768017216}, {"type": "f1", "value": 40.398016878580734}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (th)", "type": "mteb/mtop_intent", "config": "th", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 43.750452079565996}, {"type": "f1", "value": 28.985320742729865}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (af)", "type": "mteb/amazon_massive_intent", "config": "af", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 47.59919300605245}, {"type": "f1", "value": 44.27505749600044}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (am)", "type": "mteb/amazon_massive_intent", "config": "am", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 31.56691324815064}, {"type": "f1", "value": 30.34952276390722}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ar)", "type": "mteb/amazon_massive_intent", "config": "ar", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 52.62945527908541}, {"type": "f1", "value": 49.689536347222386}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (az)", "type": "mteb/amazon_massive_intent", "config": 
"az", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 50.0941492938803}, {"type": "f1", "value": 48.47831879848094}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (bn)", "type": "mteb/amazon_massive_intent", "config": "bn", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 46.540013449899135}, {"type": "f1", "value": 44.25663324630171}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (cy)", "type": "mteb/amazon_massive_intent", "config": "cy", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 44.25689307330195}, {"type": "f1", "value": 42.06066077477426}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (da)", "type": "mteb/amazon_massive_intent", "config": "da", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 55.05716207128446}, {"type": "f1", "value": 52.41516089202158}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (de)", "type": "mteb/amazon_massive_intent", "config": "de", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 61.86953597848015}, {"type": "f1", "value": 58.45989820228606}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (el)", "type": "mteb/amazon_massive_intent", "config": "el", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 47.02084734364493}, {"type": "f1", "value": 45.21525882986924}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": "mteb/amazon_massive_intent", "config": "en", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 69.24008069939475}, {"type": "f1", "value": 68.27971089998472}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (es)", "type": "mteb/amazon_massive_intent", "config": "es", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 62.53530598520511}, {"type": "f1", "value": 61.83588971206536}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (fa)", "type": "mteb/amazon_massive_intent", "config": "fa", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 55.19166106254204}, {"type": "f1", "value": 52.335787325774}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (fi)", "type": "mteb/amazon_massive_intent", "config": "fi", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 48.43308675184936}, {"type": "f1", "value": 45.841102061239184}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (fr)", "type": "mteb/amazon_massive_intent", "config": "fr", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 64.26698049764627}, {"type": "f1", "value": 62.25607481996241}]}, {"task": {"type": 
"Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (he)", "type": "mteb/amazon_massive_intent", "config": "he", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 57.619367854741085}, {"type": "f1", "value": 54.93671211092237}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (hi)", "type": "mteb/amazon_massive_intent", "config": "hi", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 57.53530598520511}, {"type": "f1", "value": 55.36413211751344}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (hu)", "type": "mteb/amazon_massive_intent", "config": "hu", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 45.66913248150638}, {"type": "f1", "value": 42.52092657926257}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (hy)", "type": "mteb/amazon_massive_intent", "config": "hy", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 39.19973100201749}, {"type": "f1", "value": 37.194613407773566}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (id)", "type": "mteb/amazon_massive_intent", "config": "id", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 54.99663752521856}, {"type": "f1", "value": 53.875181150315356}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (is)", "type": "mteb/amazon_massive_intent", "config": "is", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 43.143913920645595}, {"type": "f1", "value": 41.756257561394456}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (it)", "type": "mteb/amazon_massive_intent", "config": "it", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 60.99529253530599}, {"type": "f1", "value": 59.103812128183705}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ja)", "type": "mteb/amazon_massive_intent", "config": "ja", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 64.29051782111634}, {"type": "f1", "value": 62.5268914542489}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (jv)", "type": "mteb/amazon_massive_intent", "config": "jv", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 43.69199731002017}, {"type": "f1", "value": 41.71651113018154}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ka)", "type": "mteb/amazon_massive_intent", "config": "ka", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 38.34566240753194}, {"type": "f1", "value": 36.935911015227894}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (km)", "type": "mteb/amazon_massive_intent", "config": "km", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, 
"metrics": [{"type": "accuracy", "value": 34.21654337592467}, {"type": "f1", "value": 32.067289455027755}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (kn)", "type": "mteb/amazon_massive_intent", "config": "kn", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 51.785474108944186}, {"type": "f1", "value": 49.29285691779668}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ko)", "type": "mteb/amazon_massive_intent", "config": "ko", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 59.58977807666444}, {"type": "f1", "value": 57.81630371862734}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (lv)", "type": "mteb/amazon_massive_intent", "config": "lv", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 46.53665097511768}, {"type": "f1", "value": 44.8386852929464}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ml)", "type": "mteb/amazon_massive_intent", "config": "ml", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 54.468728984532625}, {"type": "f1", "value": 52.13613631138983}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (mn)", "type": "mteb/amazon_massive_intent", "config": "mn", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 40.67921990585071}, {"type": "f1", "value": 39.87218130311539}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ms)", "type": "mteb/amazon_massive_intent", "config": "ms", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 51.2441156691325}, {"type": "f1", "value": 48.93351041227674}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (my)", "type": "mteb/amazon_massive_intent", "config": "my", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 31.76193678547411}, {"type": "f1", "value": 29.917012787908785}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (nb)", "type": "mteb/amazon_massive_intent", "config": "nb", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 54.40820443846671}, {"type": "f1", "value": 51.232049156874396}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (nl)", "type": "mteb/amazon_massive_intent", "config": "nl", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 60.8170813718897}, {"type": "f1", "value": 57.74887572270486}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (pl)", "type": "mteb/amazon_massive_intent", "config": "pl", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 60.067249495628786}, {"type": "f1", "value": 57.60151669462318}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (pt)", 
"type": "mteb/amazon_massive_intent", "config": "pt", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 62.73705447209146}, {"type": "f1", "value": 61.14377989075874}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ro)", "type": "mteb/amazon_massive_intent", "config": "ro", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 49.68392737054472}, {"type": "f1", "value": 48.07062918679129}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ru)", "type": "mteb/amazon_massive_intent", "config": "ru", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 60.85406859448555}, {"type": "f1", "value": 58.48852652838252}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (sl)", "type": "mteb/amazon_massive_intent", "config": "sl", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 48.58776059179556}, {"type": "f1", "value": 46.92163099241966}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (sq)", "type": "mteb/amazon_massive_intent", "config": "sq", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 47.16879623402824}, {"type": "f1", "value": 45.8155066134247}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (sv)", "type": "mteb/amazon_massive_intent", "config": "sv", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 62.41425689307329}, {"type": "f1", "value": 60.097954878192574}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (sw)", "type": "mteb/amazon_massive_intent", "config": "sw", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 45.97175521183591}, {"type": "f1", "value": 44.29275283000346}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ta)", "type": "mteb/amazon_massive_intent", "config": "ta", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 53.597848016139885}, {"type": "f1", "value": 51.54318966923094}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (te)", "type": "mteb/amazon_massive_intent", "config": "te", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 53.44653665097512}, {"type": "f1", "value": 51.60095623356469}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (th)", "type": "mteb/amazon_massive_intent", "config": "th", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 46.173503698722264}, {"type": "f1", "value": 46.311285276929105}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (tl)", "type": "mteb/amazon_massive_intent", "config": "tl", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 49.47881640887693}, {"type": "f1", 
"value": 46.63989802589145}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (tr)", "type": "mteb/amazon_massive_intent", "config": "tr", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 58.02958977807666}, {"type": "f1", "value": 55.34728796730868}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (ur)", "type": "mteb/amazon_massive_intent", "config": "ur", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 39.26361802286483}, {"type": "f1", "value": 37.61201358829197}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (vi)", "type": "mteb/amazon_massive_intent", "config": "vi", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 52.15534633490249}, {"type": "f1", "value": 50.438951980623145}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (zh-CN)", "type": "mteb/amazon_massive_intent", "config": "zh-CN", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 63.39946200403498}, {"type": "f1", "value": 62.152249150179664}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (zh-TW)", "type": "mteb/amazon_massive_intent", "config": "zh-TW", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 58.207800941492934}, {"type": "f1", "value": 58.318584465398104}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (af)", "type": "mteb/amazon_massive_scenario", "config": "af", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 58.069939475453936}, {"type": "f1", "value": 55.04073616892449}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (am)", "type": "mteb/amazon_massive_scenario", "config": "am", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 38.214525891055814}, {"type": "f1", "value": 36.42184260742777}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ar)", "type": "mteb/amazon_massive_scenario", "config": "ar", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 57.47141896435777}, {"type": "f1", "value": 57.22453431938479}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (az)", "type": "mteb/amazon_massive_scenario", "config": "az", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 54.37121721587089}, {"type": "f1", "value": 53.004976087120134}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (bn)", "type": "mteb/amazon_massive_scenario", "config": "bn", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 52.71687962340283}, {"type": "f1", "value": 51.140151342341646}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (cy)", "type": "mteb/amazon_massive_scenario", 
"config": "cy", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 49.502353732347004}, {"type": "f1", "value": 45.74604753969847}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (da)", "type": "mteb/amazon_massive_scenario", "config": "da", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 64.25689307330195}, {"type": "f1", "value": 62.25355539317913}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (de)", "type": "mteb/amazon_massive_scenario", "config": "de", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 71.27774041694688}, {"type": "f1", "value": 70.26880477280841}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (el)", "type": "mteb/amazon_massive_scenario", "config": "el", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 52.420981842636195}, {"type": "f1", "value": 50.824547366213565}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "mteb/amazon_massive_scenario", "config": "en", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 74.11230665770006}, {"type": "f1", "value": 73.00723710263364}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (es)", "type": "mteb/amazon_massive_scenario", "config": "es", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 67.04102219233356}, {"type": "f1", "value": 66.7904194512351}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (fa)", "type": "mteb/amazon_massive_scenario", "config": "fa", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 60.1714862138534}, {"type": "f1", "value": 58.781208933846095}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (fi)", "type": "mteb/amazon_massive_scenario", "config": "fi", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 54.04841963685272}, {"type": "f1", "value": 51.185007148328545}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (fr)", "type": "mteb/amazon_massive_scenario", "config": "fr", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 69.76462676529927}, {"type": "f1", "value": 68.85227238388136}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (he)", "type": "mteb/amazon_massive_scenario", "config": "he", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 62.84801613987895}, {"type": "f1", "value": 61.18395865529196}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (hi)", "type": "mteb/amazon_massive_scenario", "config": "hi", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 62.17888365837256}, {"type": "f1", 
"value": 60.40570575783401}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (hu)", "type": "mteb/amazon_massive_scenario", "config": "hu", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 53.52051109616678}, {"type": "f1", "value": 51.210696278552014}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (hy)", "type": "mteb/amazon_massive_scenario", "config": "hy", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 45.94821788836584}, {"type": "f1", "value": 43.65062337089374}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (id)", "type": "mteb/amazon_massive_scenario", "config": "id", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 60.33288500336248}, {"type": "f1", "value": 59.50436947982156}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (is)", "type": "mteb/amazon_massive_scenario", "config": "is", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 50.09751176866174}, {"type": "f1", "value": 47.293838685239}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (it)", "type": "mteb/amazon_massive_scenario", "config": "it", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 66.49293880295897}, {"type": "f1", "value": 65.96586462307134}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ja)", "type": "mteb/amazon_massive_scenario", "config": "ja", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 68.35911230665769}, {"type": "f1", "value": 67.77840431764355}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (jv)", "type": "mteb/amazon_massive_scenario", "config": "jv", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 50.585070611970416}, {"type": "f1", "value": 47.957277125670295}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ka)", "type": "mteb/amazon_massive_scenario", "config": "ka", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 42.76059179556153}, {"type": "f1", "value": 40.446327361325565}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (km)", "type": "mteb/amazon_massive_scenario", "config": "km", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 40.648957632817755}, {"type": "f1", "value": 37.231284508608276}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (kn)", "type": "mteb/amazon_massive_scenario", "config": "kn", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 57.24613315400134}, {"type": "f1", "value": 55.14523425690653}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ko)", "type": "mteb/amazon_massive_scenario", 
"config": "ko", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 63.839946200403496}, {"type": "f1", "value": 62.6239063060589}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (lv)", "type": "mteb/amazon_massive_scenario", "config": "lv", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 53.14391392064559}, {"type": "f1", "value": 50.08744471966442}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ml)", "type": "mteb/amazon_massive_scenario", "config": "ml", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 58.8399462004035}, {"type": "f1", "value": 57.586991117740794}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (mn)", "type": "mteb/amazon_massive_scenario", "config": "mn", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 44.81842636180229}, {"type": "f1", "value": 42.82813975084655}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ms)", "type": "mteb/amazon_massive_scenario", "config": "ms", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 58.90047074646939}, {"type": "f1", "value": 56.640503134745714}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (my)", "type": "mteb/amazon_massive_scenario", "config": "my", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 38.52051109616678}, {"type": "f1", "value": 36.504553927569454}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (nb)", "type": "mteb/amazon_massive_scenario", "config": "nb", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 64.63685272360458}, {"type": "f1", "value": 62.88129994502907}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (nl)", "type": "mteb/amazon_massive_scenario", "config": "nl", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 67.54203093476798}, {"type": "f1", "value": 66.02745142287087}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (pl)", "type": "mteb/amazon_massive_scenario", "config": "pl", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 64.00470746469402}, {"type": "f1", "value": 62.91845058355313}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (pt)", "type": "mteb/amazon_massive_scenario", "config": "pt", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 65.69939475453934}, {"type": "f1", "value": 65.37413822081011}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ro)", "type": "mteb/amazon_massive_scenario", "config": "ro", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 57.19905850706121}, {"type": "f1", 
"value": 55.08271383695852}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ru)", "type": "mteb/amazon_massive_scenario", "config": "ru", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 65.42367182246134}, {"type": "f1", "value": 64.61962307022019}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (sl)", "type": "mteb/amazon_massive_scenario", "config": "sl", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 55.147948890383326}, {"type": "f1", "value": 53.2933851469903}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (sq)", "type": "mteb/amazon_massive_scenario", "config": "sq", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 55.679219905850715}, {"type": "f1", "value": 52.80159603468007}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (sv)", "type": "mteb/amazon_massive_scenario", "config": "sv", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 69.42165433759246}, {"type": "f1", "value": 67.99984081248608}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (sw)", "type": "mteb/amazon_massive_scenario", "config": "sw", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 52.30329522528581}, {"type": "f1", "value": 50.10810382364662}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ta)", "type": "mteb/amazon_massive_scenario", "config": "ta", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 56.186953597848024}, {"type": "f1", "value": 55.51656586643505}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (te)", "type": "mteb/amazon_massive_scenario", "config": "te", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 58.019502353732356}, {"type": "f1", "value": 56.260726586358736}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (th)", "type": "mteb/amazon_massive_scenario", "config": "th", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 52.55548083389374}, {"type": "f1", "value": 51.139712264362714}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (tl)", "type": "mteb/amazon_massive_scenario", "config": "tl", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 57.43443174176194}, {"type": "f1", "value": 55.76244076715635}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (tr)", "type": "mteb/amazon_massive_scenario", "config": "tr", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 61.55346334902488}, {"type": "f1", "value": 61.25819823057803}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ur)", "type": 
"mteb/amazon_massive_scenario", "config": "ur", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 47.114996637525216}, {"type": "f1", "value": 45.20428169546973}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (vi)", "type": "mteb/amazon_massive_scenario", "config": "vi", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 56.83254875588434}, {"type": "f1", "value": 56.00919757601416}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (zh-CN)", "type": "mteb/amazon_massive_scenario", "config": "zh-CN", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 69.57969065232012}, {"type": "f1", "value": 69.17378512156806}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (zh-TW)", "type": "mteb/amazon_massive_scenario", "config": "zh-TW", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 64.02488231338263}, {"type": "f1", "value": 64.09790488949963}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P", "type": "mteb/medrxiv-clustering-p2p", "config": "default", "split": "test", "revision": "e7a26af6f3ae46b30dde8737f02c07b1505bcc73"}, "metrics": [{"type": "v_measure", "value": 29.71446786877363}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "35191c8c0dca72d8ff3efcd72aa802307d469663"}, "metrics": [{"type": "v_measure", "value": 28.003624498407547}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "3bdac13927fdc888b903db93b2ffdbd90b295a69"}, "metrics": [{"type": "map", "value": 31.29671894458151}, {"type": "mrr", "value": 32.44455140124599}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus", "type": "nfcorpus", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 6.127}, {"type": "map_at_10", "value": 13.047}, {"type": "map_at_100", "value": 15.754000000000001}, {"type": "map_at_1000", "value": 16.930999999999997}, {"type": "map_at_3", "value": 9.876999999999999}, {"type": "map_at_5", "value": 11.265}, {"type": "mrr_at_1", "value": 45.511}, {"type": "mrr_at_10", "value": 54.75600000000001}, {"type": "mrr_at_100", "value": 55.33}, {"type": "mrr_at_1000", "value": 55.374}, {"type": "mrr_at_3", "value": 53.147999999999996}, {"type": "mrr_at_5", "value": 53.952999999999996}, {"type": "ndcg_at_1", "value": 43.653}, {"type": "ndcg_at_10", "value": 33.936}, {"type": "ndcg_at_100", "value": 29.952}, {"type": "ndcg_at_1000", "value": 38.356}, {"type": "ndcg_at_3", "value": 40.018}, {"type": "ndcg_at_5", "value": 37.102000000000004}, {"type": "precision_at_1", "value": 45.511}, {"type": "precision_at_10", "value": 24.768}, {"type": "precision_at_100", "value": 7.13}, {"type": "precision_at_1000", "value": 1.928}, {"type": "precision_at_3", "value": 37.461}, {"type": "precision_at_5", "value": 31.703}, {"type": "recall_at_1", "value": 6.127}, {"type": "recall_at_10", "value": 16.512999999999998}, {"type": "recall_at_100", "value": 29.057}, {"type": "recall_at_1000", "value": 
59.25899999999999}, {"type": "recall_at_3", "value": 10.940999999999999}, {"type": "recall_at_5", "value": 12.925}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ", "type": "nq", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 32.228}, {"type": "map_at_10", "value": 47.56}, {"type": "map_at_100", "value": 48.539}, {"type": "map_at_1000", "value": 48.567}, {"type": "map_at_3", "value": 43.214999999999996}, {"type": "map_at_5", "value": 45.799}, {"type": "mrr_at_1", "value": 36.53}, {"type": "mrr_at_10", "value": 50.004000000000005}, {"type": "mrr_at_100", "value": 50.737}, {"type": "mrr_at_1000", "value": 50.758}, {"type": "mrr_at_3", "value": 46.543}, {"type": "mrr_at_5", "value": 48.672}, {"type": "ndcg_at_1", "value": 36.501}, {"type": "ndcg_at_10", "value": 55.103}, {"type": "ndcg_at_100", "value": 59.156}, {"type": "ndcg_at_1000", "value": 59.821999999999996}, {"type": "ndcg_at_3", "value": 47.089}, {"type": "ndcg_at_5", "value": 51.35999999999999}, {"type": "precision_at_1", "value": 36.501}, {"type": "precision_at_10", "value": 9.046999999999999}, {"type": "precision_at_100", "value": 1.13}, {"type": "precision_at_1000", "value": 0.11900000000000001}, {"type": "precision_at_3", "value": 21.398}, {"type": "precision_at_5", "value": 15.307}, {"type": "recall_at_1", "value": 32.228}, {"type": "recall_at_10", "value": 75.608}, {"type": "recall_at_100", "value": 93.062}, {"type": "recall_at_1000", "value": 98.059}, {"type": "recall_at_3", "value": 55.021}, {"type": "recall_at_5", "value": 64.873}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval", "type": "quora", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 70.623}, {"type": "map_at_10", "value": 84.705}, {"type": "map_at_100", "value": 85.333}, {"type": "map_at_1000", "value": 85.348}, {"type": "map_at_3", "value": 81.736}, {"type": "map_at_5", "value": 83.616}, {"type": "mrr_at_1", "value": 81.28}, {"type": "mrr_at_10", "value": 87.518}, {"type": "mrr_at_100", "value": 87.619}, {"type": "mrr_at_1000", "value": 87.62}, {"type": "mrr_at_3", "value": 86.545}, {"type": "mrr_at_5", "value": 87.238}, {"type": "ndcg_at_1", "value": 81.28999999999999}, {"type": "ndcg_at_10", "value": 88.412}, {"type": "ndcg_at_100", "value": 89.603}, {"type": "ndcg_at_1000", "value": 89.696}, {"type": "ndcg_at_3", "value": 85.563}, {"type": "ndcg_at_5", "value": 87.17}, {"type": "precision_at_1", "value": 81.28999999999999}, {"type": "precision_at_10", "value": 13.439}, {"type": "precision_at_100", "value": 1.5310000000000001}, {"type": "precision_at_1000", "value": 0.157}, {"type": "precision_at_3", "value": 37.437}, {"type": "precision_at_5", "value": 24.662}, {"type": "recall_at_1", "value": 70.623}, {"type": "recall_at_10", "value": 95.531}, {"type": "recall_at_100", "value": 99.58}, {"type": "recall_at_1000", "value": 99.978}, {"type": "recall_at_3", "value": 87.368}, {"type": "recall_at_5", "value": 91.898}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering", "type": "mteb/reddit-clustering", "config": "default", "split": "test", "revision": "24640382cdbf8abc73003fb0fa6d111a705499eb"}, "metrics": [{"type": "v_measure", "value": 49.53241309124786}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P", "type": "mteb/reddit-clustering-p2p", "config": "default", "split": "test", "revision": "282350215ef01743dc01b456c7f5241fa8937f16"}, "metrics": 
[{"type": "v_measure", "value": 59.712004482915994}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS", "type": "scidocs", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 5.313}, {"type": "map_at_10", "value": 13.447000000000001}, {"type": "map_at_100", "value": 15.491}, {"type": "map_at_1000", "value": 15.784999999999998}, {"type": "map_at_3", "value": 9.58}, {"type": "map_at_5", "value": 11.562}, {"type": "mrr_at_1", "value": 26.200000000000003}, {"type": "mrr_at_10", "value": 37.212}, {"type": "mrr_at_100", "value": 38.190000000000005}, {"type": "mrr_at_1000", "value": 38.242}, {"type": "mrr_at_3", "value": 34.067}, {"type": "mrr_at_5", "value": 35.862}, {"type": "ndcg_at_1", "value": 26.200000000000003}, {"type": "ndcg_at_10", "value": 21.979000000000003}, {"type": "ndcg_at_100", "value": 29.726999999999997}, {"type": "ndcg_at_1000", "value": 34.766000000000005}, {"type": "ndcg_at_3", "value": 21.16}, {"type": "ndcg_at_5", "value": 18.478}, {"type": "precision_at_1", "value": 26.200000000000003}, {"type": "precision_at_10", "value": 11.25}, {"type": "precision_at_100", "value": 2.241}, {"type": "precision_at_1000", "value": 0.345}, {"type": "precision_at_3", "value": 19.633}, {"type": "precision_at_5", "value": 16.14}, {"type": "recall_at_1", "value": 5.313}, {"type": "recall_at_10", "value": 22.808}, {"type": "recall_at_100", "value": 45.540000000000006}, {"type": "recall_at_1000", "value": 70.043}, {"type": "recall_at_3", "value": 11.932}, {"type": "recall_at_5", "value": 16.347}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "a6ea5a8cab320b040a23452cc28066d9beae2cee"}, "metrics": [{"type": "cos_sim_pearson", "value": 75.95540796619258}, {"type": "cos_sim_spearman", "value": 76.49462277620303}, {"type": "euclidean_pearson", "value": 71.67643435507317}, {"type": "euclidean_spearman", "value": 76.4915921108082}, {"type": "manhattan_pearson", "value": 71.71412560074847}, {"type": "manhattan_spearman", "value": 76.46738312094736}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "a0d554a64d88156834ff5ae9920b964011b16384"}, "metrics": [{"type": "cos_sim_pearson", "value": 81.48773267615617}, {"type": "cos_sim_spearman", "value": 74.99867664033701}, {"type": "euclidean_pearson", "value": 76.0885798115032}, {"type": "euclidean_spearman", "value": 74.99438208715942}, {"type": "manhattan_pearson", "value": 76.09382557464033}, {"type": "manhattan_spearman", "value": 74.96139353538533}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", "type": "mteb/sts13-sts", "config": "default", "split": "test", "revision": "7e90230a92c190f1bf69ae9002b8cea547a64cca"}, "metrics": [{"type": "cos_sim_pearson", "value": 88.19022560804167}, {"type": "cos_sim_spearman", "value": 87.9128142106699}, {"type": "euclidean_pearson", "value": 85.51390183763914}, {"type": "euclidean_spearman", "value": 87.89995488057309}, {"type": "manhattan_pearson", "value": 85.44945034816052}, {"type": "manhattan_spearman", "value": 87.791458898378}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "6031580fec1f6af667f0bd2da0a551cf4f0b2375"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.17877898640924}, {"type": "cos_sim_spearman", "value": 82.25544088807465}, {"type": 
"euclidean_pearson", "value": 82.36395988835416}, {"type": "euclidean_spearman", "value": 82.26359924974219}, {"type": "manhattan_pearson", "value": 82.39219808999891}, {"type": "manhattan_spearman", "value": 82.27757404868157}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "ae752c7c21bf194d8b67fd573edf7ae58183cbe3"}, "metrics": [{"type": "cos_sim_pearson", "value": 87.66865350602554}, {"type": "cos_sim_spearman", "value": 87.87150169810872}, {"type": "euclidean_pearson", "value": 85.41520650056647}, {"type": "euclidean_spearman", "value": 87.86636613654022}, {"type": "manhattan_pearson", "value": 85.38710485867502}, {"type": "manhattan_spearman", "value": 87.83513424575301}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": "4d8694f8f0e0100860b497b999b3dbed754a0513"}, "metrics": [{"type": "cos_sim_pearson", "value": 80.75527643407175}, {"type": "cos_sim_spearman", "value": 80.9239008594745}, {"type": "euclidean_pearson", "value": 79.37682746800515}, {"type": "euclidean_spearman", "value": 80.91978947194092}, {"type": "manhattan_pearson", "value": 79.38884189990698}, {"type": "manhattan_spearman", "value": 80.91771608341014}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (ko-ko)", "type": "mteb/sts17-crosslingual-sts", "config": "ko-ko", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 80.24344311909609}, {"type": "cos_sim_spearman", "value": 80.78933956176022}, {"type": "euclidean_pearson", "value": 76.95229806538676}, {"type": "euclidean_spearman", "value": 80.79706724032172}, {"type": "manhattan_pearson", "value": 76.90212135774246}, {"type": "manhattan_spearman", "value": 80.68727415384441}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (ar-ar)", "type": "mteb/sts17-crosslingual-sts", "config": "ar-ar", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 77.33891809228084}, {"type": "cos_sim_spearman", "value": 79.37912430317627}, {"type": "euclidean_pearson", "value": 72.56919843951036}, {"type": "euclidean_spearman", "value": 79.3091436905072}, {"type": "manhattan_pearson", "value": 72.4282811588754}, {"type": "manhattan_spearman", "value": 78.90144894538078}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-ar)", "type": "mteb/sts17-crosslingual-sts", "config": "en-ar", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 59.68908656739356}, {"type": "cos_sim_spearman", "value": 58.76110210983758}, {"type": "euclidean_pearson", "value": 59.14749159577439}, {"type": "euclidean_spearman", "value": 59.015997032145016}, {"type": "manhattan_pearson", "value": 57.907675340322676}, {"type": "manhattan_spearman", "value": 57.07751173022352}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-de)", "type": "mteb/sts17-crosslingual-sts", "config": "en-de", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 75.53325164873934}, {"type": "cos_sim_spearman", "value": 76.13104388846271}, {"type": "euclidean_pearson", "value": 74.61931031522006}, {"type": "euclidean_spearman", "value": 75.96875166459931}, {"type": "manhattan_pearson", "value": 74.82154350849251}, {"type": 
"manhattan_spearman", "value": 76.64455924104236}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.4228376590724}, {"type": "cos_sim_spearman", "value": 87.22764976624408}, {"type": "euclidean_pearson", "value": 81.94975688107507}, {"type": "euclidean_spearman", "value": 87.19193932664932}, {"type": "manhattan_pearson", "value": 82.0043964628936}, {"type": "manhattan_spearman", "value": 87.09130430957818}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-tr)", "type": "mteb/sts17-crosslingual-sts", "config": "en-tr", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 57.5627552601949}, {"type": "cos_sim_spearman", "value": 55.5263144563657}, {"type": "euclidean_pearson", "value": 57.00569241610482}, {"type": "euclidean_spearman", "value": 55.35291811479459}, {"type": "manhattan_pearson", "value": 56.99656284623506}, {"type": "manhattan_spearman", "value": 55.593673744709946}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (es-en)", "type": "mteb/sts17-crosslingual-sts", "config": "es-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 69.93801311909735}, {"type": "cos_sim_spearman", "value": 72.2581115470475}, {"type": "euclidean_pearson", "value": 68.24881290268563}, {"type": "euclidean_spearman", "value": 72.60813652864522}, {"type": "manhattan_pearson", "value": 67.86369874088834}, {"type": "manhattan_spearman", "value": 71.92346382988023}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (es-es)", "type": "mteb/sts17-crosslingual-sts", "config": "es-es", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 86.20555264114785}, {"type": "cos_sim_spearman", "value": 85.0588060013836}, {"type": "euclidean_pearson", "value": 81.78229090166155}, {"type": "euclidean_spearman", "value": 85.09687374900614}, {"type": "manhattan_pearson", "value": 81.77449099980244}, {"type": "manhattan_spearman", "value": 84.70331476222177}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (fr-en)", "type": "mteb/sts17-crosslingual-sts", "config": "fr-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 73.786793911605}, {"type": "cos_sim_spearman", "value": 75.63094397551554}, {"type": "euclidean_pearson", "value": 71.64292842519251}, {"type": "euclidean_spearman", "value": 75.60215267384011}, {"type": "manhattan_pearson", "value": 72.2124078037642}, {"type": "manhattan_spearman", "value": 76.34546028465175}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (it-en)", "type": "mteb/sts17-crosslingual-sts", "config": "it-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 69.62139987106455}, {"type": "cos_sim_spearman", "value": 71.35872226722493}, {"type": "euclidean_pearson", "value": 68.50103697766141}, {"type": "euclidean_spearman", "value": 71.24590187948473}, {"type": "manhattan_pearson", "value": 68.89236562525663}, {"type": "manhattan_spearman", "value": 71.77994400789173}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (nl-en)", "type": "mteb/sts17-crosslingual-sts", 
"config": "nl-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 71.62728174871292}, {"type": "cos_sim_spearman", "value": 71.98655715409397}, {"type": "euclidean_pearson", "value": 70.27026741609356}, {"type": "euclidean_spearman", "value": 72.14004669693777}, {"type": "manhattan_pearson", "value": 70.46335140108751}, {"type": "manhattan_spearman", "value": 72.6638254374311}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 71.10248717637424}, {"type": "cos_sim_spearman", "value": 68.5905931564714}, {"type": "euclidean_pearson", "value": 71.23290000423759}, {"type": "euclidean_spearman", "value": 68.6419513130457}, {"type": "manhattan_pearson", "value": 71.6886015250234}, {"type": "manhattan_spearman", "value": 69.47543660368697}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (de)", "type": "mteb/sts22-crosslingual-sts", "config": "de", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 59.010555056244776}, {"type": "cos_sim_spearman", "value": 60.121771179899255}, {"type": "euclidean_pearson", "value": 53.04527785573465}, {"type": "euclidean_spearman", "value": 60.121771179899255}, {"type": "manhattan_pearson", "value": 52.931480071124234}, {"type": "manhattan_spearman", "value": 60.03868409331775}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (es)", "type": "mteb/sts22-crosslingual-sts", "config": "es", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 70.6833028374664}, {"type": "cos_sim_spearman", "value": 68.57396263856863}, {"type": "euclidean_pearson", "value": 68.30905084522986}, {"type": "euclidean_spearman", "value": 68.57396263856863}, {"type": "manhattan_pearson", "value": 70.91400657516918}, {"type": "manhattan_spearman", "value": 72.72240857808112}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (pl)", "type": "mteb/sts22-crosslingual-sts", "config": "pl", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 36.948290734279645}, {"type": "cos_sim_spearman", "value": 42.07722031011005}, {"type": "euclidean_pearson", "value": 22.539446972018467}, {"type": "euclidean_spearman", "value": 42.07722031011005}, {"type": "manhattan_pearson", "value": 24.119402246951786}, {"type": "manhattan_spearman", "value": 45.80525501822569}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (tr)", "type": "mteb/sts22-crosslingual-sts", "config": "tr", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 66.97840719036533}, {"type": "cos_sim_spearman", "value": 66.62430648804775}, {"type": "euclidean_pearson", "value": 66.89526587772023}, {"type": "euclidean_spearman", "value": 66.62430648804775}, {"type": "manhattan_pearson", "value": 68.6929895225091}, {"type": "manhattan_spearman", "value": 68.91772708432867}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (ar)", "type": "mteb/sts22-crosslingual-sts", "config": "ar", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 56.65098289103698}, {"type": 
"cos_sim_spearman", "value": 57.436674670689214}, {"type": "euclidean_pearson", "value": 51.79149892785239}, {"type": "euclidean_spearman", "value": 57.436674670689214}, {"type": "manhattan_pearson", "value": 52.64807953938707}, {"type": "manhattan_spearman", "value": 58.94583987372767}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (ru)", "type": "mteb/sts22-crosslingual-sts", "config": "ru", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 60.669531297510225}, {"type": "cos_sim_spearman", "value": 61.71342510003327}, {"type": "euclidean_pearson", "value": 55.821871433553504}, {"type": "euclidean_spearman", "value": 61.71342510003327}, {"type": "manhattan_pearson", "value": 57.77073441351117}, {"type": "manhattan_spearman", "value": 65.20759033207}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (zh)", "type": "mteb/sts22-crosslingual-sts", "config": "zh", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 64.34728960310699}, {"type": "cos_sim_spearman", "value": 64.03565302589584}, {"type": "euclidean_pearson", "value": 61.958942333930544}, {"type": "euclidean_spearman", "value": 64.03565302589584}, {"type": "manhattan_pearson", "value": 64.65072672727923}, {"type": "manhattan_spearman", "value": 67.82569969943107}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (fr)", "type": "mteb/sts22-crosslingual-sts", "config": "fr", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 82.47120815594353}, {"type": "cos_sim_spearman", "value": 81.46916544955101}, {"type": "euclidean_pearson", "value": 79.21753533489019}, {"type": "euclidean_spearman", "value": 81.46916544955101}, {"type": "manhattan_pearson", "value": 78.26605518839271}, {"type": "manhattan_spearman", "value": 81.29749169339514}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (de-en)", "type": "mteb/sts22-crosslingual-sts", "config": "de-en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 63.31467231933632}, {"type": "cos_sim_spearman", "value": 53.36160506603274}, {"type": "euclidean_pearson", "value": 64.98434169416196}, {"type": "euclidean_spearman", "value": 53.36160506603274}, {"type": "manhattan_pearson", "value": 69.6837006629638}, {"type": "manhattan_spearman", "value": 60.85384324700893}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (es-en)", "type": "mteb/sts22-crosslingual-sts", "config": "es-en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 79.99425127770438}, {"type": "cos_sim_spearman", "value": 77.41308957007035}, {"type": "euclidean_pearson", "value": 79.69441265626801}, {"type": "euclidean_spearman", "value": 77.41308957007035}, {"type": "manhattan_pearson", "value": 80.3726291667624}, {"type": "manhattan_spearman", "value": 79.0414050644631}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (it)", "type": "mteb/sts22-crosslingual-sts", "config": "it", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 79.13469287716659}, {"type": "cos_sim_spearman", "value": 79.27976881582065}, {"type": "euclidean_pearson", "value": 77.65964425780172}, {"type": "euclidean_spearman", "value": 79.27976881582065}, {"type": 
"manhattan_pearson", "value": 77.64158710257945}, {"type": "manhattan_spearman", "value": 79.22242281895944}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (pl-en)", "type": "mteb/sts22-crosslingual-sts", "config": "pl-en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 76.303314995599}, {"type": "cos_sim_spearman", "value": 77.4991345414335}, {"type": "euclidean_pearson", "value": 74.88826621426401}, {"type": "euclidean_spearman", "value": 77.4991345414335}, {"type": "manhattan_pearson", "value": 77.70223488989319}, {"type": "manhattan_spearman", "value": 79.69746987627822}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (zh-en)", "type": "mteb/sts22-crosslingual-sts", "config": "zh-en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 70.87814957197239}, {"type": "cos_sim_spearman", "value": 69.86785751801642}, {"type": "euclidean_pearson", "value": 68.68630146548654}, {"type": "euclidean_spearman", "value": 69.8615799070054}, {"type": "manhattan_pearson", "value": 61.83743315022061}, {"type": "manhattan_spearman", "value": 64.35346450347738}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (es-it)", "type": "mteb/sts22-crosslingual-sts", "config": "es-it", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 74.1484689923211}, {"type": "cos_sim_spearman", "value": 74.69046355179742}, {"type": "euclidean_pearson", "value": 73.03951899271793}, {"type": "euclidean_spearman", "value": 74.69820632954205}, {"type": "manhattan_pearson", "value": 73.36810146930709}, {"type": "manhattan_spearman", "value": 75.33154135287258}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (de-fr)", "type": "mteb/sts22-crosslingual-sts", "config": "de-fr", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 51.43125921362742}, {"type": "cos_sim_spearman", "value": 58.25341239774093}, {"type": "euclidean_pearson", "value": 48.00689582162098}, {"type": "euclidean_spearman", "value": 58.533194841668426}, {"type": "manhattan_pearson", "value": 46.11721778230745}, {"type": "manhattan_spearman", "value": 55.026889052448134}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (de-pl)", "type": "mteb/sts22-crosslingual-sts", "config": "de-pl", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 40.066205533538046}, {"type": "cos_sim_spearman", "value": 48.46991890841381}, {"type": "euclidean_pearson", "value": 42.29606506858651}, {"type": "euclidean_spearman", "value": 48.34674249441531}, {"type": "manhattan_pearson", "value": 41.70680990555484}, {"type": "manhattan_spearman", "value": 47.54609580342499}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (fr-pl)", "type": "mteb/sts22-crosslingual-sts", "config": "fr-pl", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 82.26527545520592}, {"type": "cos_sim_spearman", "value": 73.24670207647144}, {"type": "euclidean_pearson", "value": 81.78699781584893}, {"type": "euclidean_spearman", "value": 73.24670207647144}, {"type": "manhattan_pearson", "value": 83.14172292187807}, {"type": "manhattan_spearman", "value": 73.24670207647144}]}, {"task": {"type": "STS"}, "dataset": 
{"name": "MTEB STSBenchmark", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "b0fddb56ed78048fa8b90373c8a3cfc37b684831"}, "metrics": [{"type": "cos_sim_pearson", "value": 81.51438108053523}, {"type": "cos_sim_spearman", "value": 81.9481311864648}, {"type": "euclidean_pearson", "value": 78.6683040592179}, {"type": "euclidean_spearman", "value": 81.9535649926177}, {"type": "manhattan_pearson", "value": 78.65396325536754}, {"type": "manhattan_spearman", "value": 81.96918240343872}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": "d3c5e1fc0b855ab6097bf1cda04dd73947d7caab"}, "metrics": [{"type": "map", "value": 80.6689275068653}, {"type": "mrr", "value": 95.021337594867}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact", "type": "scifact", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 55.193999999999996}, {"type": "map_at_10", "value": 65.814}, {"type": "map_at_100", "value": 66.428}, {"type": "map_at_1000", "value": 66.447}, {"type": "map_at_3", "value": 63.304}, {"type": "map_at_5", "value": 64.64}, {"type": "mrr_at_1", "value": 57.99999999999999}, {"type": "mrr_at_10", "value": 66.957}, {"type": "mrr_at_100", "value": 67.405}, {"type": "mrr_at_1000", "value": 67.422}, {"type": "mrr_at_3", "value": 65.0}, {"type": "mrr_at_5", "value": 66.183}, {"type": "ndcg_at_1", "value": 57.99999999999999}, {"type": "ndcg_at_10", "value": 70.523}, {"type": "ndcg_at_100", "value": 72.987}, {"type": "ndcg_at_1000", "value": 73.605}, {"type": "ndcg_at_3", "value": 66.268}, {"type": "ndcg_at_5", "value": 68.27600000000001}, {"type": "precision_at_1", "value": 57.99999999999999}, {"type": "precision_at_10", "value": 9.467}, {"type": "precision_at_100", "value": 1.073}, {"type": "precision_at_1000", "value": 0.11299999999999999}, {"type": "precision_at_3", "value": 26.444000000000003}, {"type": "precision_at_5", "value": 17.2}, {"type": "recall_at_1", "value": 55.193999999999996}, {"type": "recall_at_10", "value": 83.52199999999999}, {"type": "recall_at_100", "value": 94.5}, {"type": "recall_at_1000", "value": 99.667}, {"type": "recall_at_3", "value": 71.989}, {"type": "recall_at_5", "value": 77.31700000000001}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.73465346534654}, {"type": "cos_sim_ap", "value": 92.91719494015508}, {"type": "cos_sim_f1", "value": 86.46200301962756}, {"type": "cos_sim_precision", "value": 87.03140830800406}, {"type": "cos_sim_recall", "value": 85.9}, {"type": "dot_accuracy", "value": 99.73663366336633}, {"type": "dot_ap", "value": 92.90802848215259}, {"type": "dot_f1", "value": 86.46200301962756}, {"type": "dot_precision", "value": 87.03140830800406}, {"type": "dot_recall", "value": 85.9}, {"type": "euclidean_accuracy", "value": 99.73465346534654}, {"type": "euclidean_ap", "value": 92.91627363446204}, {"type": "euclidean_f1", "value": 86.43469490670702}, {"type": "euclidean_precision", "value": 87.18209562563581}, {"type": "euclidean_recall", "value": 85.7}, {"type": "manhattan_accuracy", "value": 99.73663366336633}, {"type": "manhattan_ap", "value": 92.90219877406929}, {"type": "manhattan_f1", "value": 
86.31471040492056}, {"type": "manhattan_precision", "value": 88.53838065194533}, {"type": "manhattan_recall", "value": 84.2}, {"type": "max_accuracy", "value": 99.73663366336633}, {"type": "max_ap", "value": 92.91719494015508}, {"type": "max_f1", "value": 86.46200301962756}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClustering", "type": "mteb/stackexchange-clustering", "config": "default", "split": "test", "revision": "6cbc1f7b2bc0622f2e39d2c77fa502909748c259"}, "metrics": [{"type": "v_measure", "value": 60.73098998430779}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P", "type": "mteb/stackexchange-clustering-p2p", "config": "default", "split": "test", "revision": "815ca46b2622cec33ccafc3735d572c266efdb44"}, "metrics": [{"type": "v_measure", "value": 34.64256206757585}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions", "type": "mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "e185fbe320c72810689fc5848eb6114e1ef5ec69"}, "metrics": [{"type": "map", "value": 54.749150614295694}, {"type": "mrr", "value": 55.78880984211867}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEval", "type": "mteb/summeval", "config": "default", "split": "test", "revision": "cda12ad7615edc362dbf25a00fdd61d3b1eaf93c"}, "metrics": [{"type": "cos_sim_pearson", "value": 28.863577054305907}, {"type": "cos_sim_spearman", "value": 27.538596944829774}, {"type": "dot_pearson", "value": 28.93043755116643}, {"type": "dot_spearman", "value": 27.733110516733987}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID", "type": "trec-covid", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 0.22899999999999998}, {"type": "map_at_10", "value": 2.078}, {"type": "map_at_100", "value": 12.024}, {"type": "map_at_1000", "value": 29.036}, {"type": "map_at_3", "value": 0.681}, {"type": "map_at_5", "value": 1.083}, {"type": "mrr_at_1", "value": 86.0}, {"type": "mrr_at_10", "value": 92.667}, {"type": "mrr_at_100", "value": 92.667}, {"type": "mrr_at_1000", "value": 92.667}, {"type": "mrr_at_3", "value": 92.667}, {"type": "mrr_at_5", "value": 92.667}, {"type": "ndcg_at_1", "value": 82.0}, {"type": "ndcg_at_10", "value": 80.746}, {"type": "ndcg_at_100", "value": 61.090999999999994}, {"type": "ndcg_at_1000", "value": 55.034000000000006}, {"type": "ndcg_at_3", "value": 82.419}, {"type": "ndcg_at_5", "value": 81.018}, {"type": "precision_at_1", "value": 86.0}, {"type": "precision_at_10", "value": 86.2}, {"type": "precision_at_100", "value": 62.68}, {"type": "precision_at_1000", "value": 24.032}, {"type": "precision_at_3", "value": 88.667}, {"type": "precision_at_5", "value": 86.0}, {"type": "recall_at_1", "value": 0.22899999999999998}, {"type": "recall_at_10", "value": 2.263}, {"type": "recall_at_100", "value": 15.238999999999999}, {"type": "recall_at_1000", "value": 51.937}, {"type": "recall_at_3", "value": 0.719}, {"type": "recall_at_5", "value": 1.15}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (sqi-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "sqi-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 19.400000000000002}, {"type": "f1", "value": 15.386076064970075}, {"type": "precision", "value": 14.253878834615676}, {"type": "recall", "value": 19.400000000000002}]}, {"task": {"type": 
"BitextMining"}, "dataset": {"name": "MTEB Tatoeba (fry-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "fry-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 42.19653179190752}, {"type": "f1", "value": 37.726396917148364}, {"type": "precision", "value": 36.14643545279384}, {"type": "recall", "value": 42.19653179190752}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (kur-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "kur-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 18.536585365853657}, {"type": "f1", "value": 13.512010347376199}, {"type": "precision", "value": 12.034068912117693}, {"type": "recall", "value": 18.536585365853657}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (tur-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "tur-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 81.69999999999999}, {"type": "f1", "value": 77.37888888888888}, {"type": "precision", "value": 75.49583333333332}, {"type": "recall", "value": 81.69999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (deu-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "deu-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 97.39999999999999}, {"type": "f1", "value": 96.56666666666666}, {"type": "precision", "value": 96.16666666666667}, {"type": "recall", "value": 97.39999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (nld-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "nld-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 90.0}, {"type": "f1", "value": 87.22333333333333}, {"type": "precision", "value": 85.89166666666667}, {"type": "recall", "value": 90.0}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ron-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ron-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 64.7}, {"type": "f1", "value": 59.10904761904763}, {"type": "precision", "value": 56.91968253968254}, {"type": "recall", "value": 64.7}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ang-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ang-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 38.80597014925373}, {"type": "f1", "value": 30.890784174366264}, {"type": "precision", "value": 28.327114427860696}, {"type": "recall", "value": 38.80597014925373}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ido-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ido-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 53.900000000000006}, {"type": "f1", "value": 48.294138583638585}, {"type": "precision", "value": 46.333495670995674}, {"type": "recall", "value": 53.900000000000006}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (jav-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "jav-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": 
"accuracy", "value": 11.707317073170733}, {"type": "f1", "value": 8.999999999999998}, {"type": "precision", "value": 8.175377468060395}, {"type": "recall", "value": 11.707317073170733}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (isl-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "isl-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 15.9}, {"type": "f1", "value": 12.451226269430602}, {"type": "precision", "value": 11.404807799760325}, {"type": "recall", "value": 15.9}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (slv-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "slv-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 41.919805589307416}, {"type": "f1", "value": 35.880619060297064}, {"type": "precision", "value": 33.77682308241239}, {"type": "recall", "value": 41.919805589307416}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (cym-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "cym-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 10.956521739130434}, {"type": "f1", "value": 9.098715976676996}, {"type": "precision", "value": 8.659935858401333}, {"type": "recall", "value": 10.956521739130434}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (kaz-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "kaz-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 11.652173913043478}, {"type": "f1", "value": 9.154324883225136}, {"type": "precision", "value": 8.505898125360801}, {"type": "recall", "value": 11.652173913043478}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (est-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "est-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 9.700000000000001}, {"type": "f1", "value": 7.431679431679432}, {"type": "precision", "value": 6.799925118740907}, {"type": "recall", "value": 9.700000000000001}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (heb-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "heb-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 77.5}, {"type": "f1", "value": 72.39999999999999}, {"type": "precision", "value": 70.13444444444444}, {"type": "recall", "value": 77.5}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (gla-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "gla-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 5.548854041013269}, {"type": "f1", "value": 4.233155465362944}, {"type": "precision", "value": 3.948150869646547}, {"type": "recall", "value": 5.548854041013269}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (mar-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "mar-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 73.5}, {"type": "f1", "value": 67.35333333333332}, {"type": "precision", "value": 64.63666666666666}, {"type": "recall", "value": 73.5}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB 
Tatoeba (lat-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "lat-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 27.700000000000003}, {"type": "f1", "value": 21.152765495941964}, {"type": "precision", "value": 19.27832403707404}, {"type": "recall", "value": 27.700000000000003}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (bel-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "bel-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 48.1}, {"type": "f1", "value": 41.21001443001443}, {"type": "precision", "value": 38.628495670995676}, {"type": "recall", "value": 48.1}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (pms-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "pms-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 40.0}, {"type": "f1", "value": 34.32060003488575}, {"type": "precision", "value": 32.32134353741497}, {"type": "recall", "value": 40.0}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (gle-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "gle-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 6.800000000000001}, {"type": "f1", "value": 4.3954389450190465}, {"type": "precision", "value": 3.893838027469606}, {"type": "recall", "value": 6.800000000000001}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (pes-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "pes-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 51.800000000000004}, {"type": "f1", "value": 45.04222943722944}, {"type": "precision", "value": 42.541984126984126}, {"type": "recall", "value": 51.800000000000004}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (nob-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "nob-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 83.1}, {"type": "f1", "value": 79.20675324675324}, {"type": "precision", "value": 77.44944444444444}, {"type": "recall", "value": 83.1}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (bul-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "bul-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 66.8}, {"type": "f1", "value": 60.25746031746031}, {"type": "precision", "value": 57.55250000000001}, {"type": "recall", "value": 66.8}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (cbk-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "cbk-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 63.6}, {"type": "f1", "value": 56.73421356421356}, {"type": "precision", "value": 54.02218253968254}, {"type": "recall", "value": 63.6}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (hun-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "hun-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 17.599999999999998}, {"type": "f1", "value": 13.17699134199134}, {"type": "precision", "value": 
11.77444805194805}, {"type": "recall", "value": 17.599999999999998}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (uig-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "uig-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 2.0}, {"type": "f1", "value": 1.3126923076923078}, {"type": "precision", "value": 1.104952380952381}, {"type": "recall", "value": 2.0}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (rus-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "rus-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 88.3}, {"type": "f1", "value": 84.96333333333334}, {"type": "precision", "value": 83.38333333333333}, {"type": "recall", "value": 88.3}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (spa-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "spa-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 94.69999999999999}, {"type": "f1", "value": 93.12333333333333}, {"type": "precision", "value": 92.375}, {"type": "recall", "value": 94.69999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (hye-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "hye-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 0.6738544474393532}, {"type": "f1", "value": 0.3690849566291394}, {"type": "precision", "value": 0.3305452159899599}, {"type": "recall", "value": 0.6738544474393532}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (tel-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "tel-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 71.7948717948718}, {"type": "f1", "value": 65.37037037037037}, {"type": "precision", "value": 62.46438746438747}, {"type": "recall", "value": 71.7948717948718}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (afr-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "afr-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 56.699999999999996}, {"type": "f1", "value": 50.58054945054945}, {"type": "precision", "value": 48.313047619047616}, {"type": "recall", "value": 56.699999999999996}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (mon-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "mon-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 13.863636363636363}, {"type": "f1", "value": 10.948429096156369}, {"type": "precision", "value": 10.227287994137523}, {"type": "recall", "value": 13.863636363636363}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (arz-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "arz-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 62.473794549266245}, {"type": "f1", "value": 56.04172906059699}, {"type": "precision", "value": 53.26694619147448}, {"type": "recall", "value": 62.473794549266245}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (hrv-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "hrv-eng", "split": "test", "revision": 
"9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 40.0}, {"type": "f1", "value": 34.62948179271708}, {"type": "precision", "value": 32.699030910609864}, {"type": "recall", "value": 40.0}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (nov-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "nov-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 60.311284046692606}, {"type": "f1", "value": 54.06182447038479}, {"type": "precision", "value": 51.757921067259595}, {"type": "recall", "value": 60.311284046692606}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (gsw-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "gsw-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 43.58974358974359}, {"type": "f1", "value": 37.042359350051655}, {"type": "precision", "value": 34.75783475783476}, {"type": "recall", "value": 43.58974358974359}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (nds-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "nds-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 56.49999999999999}, {"type": "f1", "value": 49.471269841269844}, {"type": "precision", "value": 46.742182539682545}, {"type": "recall", "value": 56.49999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ukr-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ukr-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 71.5}, {"type": "f1", "value": 65.32880952380951}, {"type": "precision", "value": 62.71261904761904}, {"type": "recall", "value": 71.5}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (uzb-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "uzb-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 11.448598130841122}, {"type": "f1", "value": 7.861361294691689}, {"type": "precision", "value": 6.961045509526818}, {"type": "recall", "value": 11.448598130841122}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (lit-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "lit-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 13.5}, {"type": "f1", "value": 10.448586132968154}, {"type": "precision", "value": 9.624691955878397}, {"type": "recall", "value": 13.5}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ina-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ina-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 82.19999999999999}, {"type": "f1", "value": 78.25366946778712}, {"type": "precision", "value": 76.54291666666667}, {"type": "recall", "value": 82.19999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (lfn-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "lfn-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 53.5}, {"type": "f1", "value": 47.48505411255411}, {"type": "precision", "value": 45.29801587301587}, {"type": "recall", "value": 53.5}]}, {"task": {"type": 
"BitextMining"}, "dataset": {"name": "MTEB Tatoeba (zsm-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "zsm-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 61.1}, {"type": "f1", "value": 54.60758056758057}, {"type": "precision", "value": 52.16455433455434}, {"type": "recall", "value": 61.1}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ita-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ita-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 85.1}, {"type": "f1", "value": 81.98506715506716}, {"type": "precision", "value": 80.64754901960784}, {"type": "recall", "value": 85.1}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (cmn-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "cmn-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 89.2}, {"type": "f1", "value": 86.13333333333333}, {"type": "precision", "value": 84.65}, {"type": "recall", "value": 89.2}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (lvs-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "lvs-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 13.600000000000001}, {"type": "f1", "value": 10.721816580317723}, {"type": "precision", "value": 9.97922024538847}, {"type": "recall", "value": 13.600000000000001}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (glg-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "glg-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 79.0}, {"type": "f1", "value": 74.2652380952381}, {"type": "precision", "value": 72.18690476190476}, {"type": "recall", "value": 79.0}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ceb-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ceb-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 12.833333333333332}, {"type": "f1", "value": 10.45993265993266}, {"type": "precision", "value": 9.849548907882243}, {"type": "recall", "value": 12.833333333333332}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (bre-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "bre-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 8.3}, {"type": "f1", "value": 5.457311371692176}, {"type": "precision", "value": 4.8466941508148595}, {"type": "recall", "value": 8.3}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ben-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ben-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 26.3}, {"type": "f1", "value": 20.851341154819416}, {"type": "precision", "value": 19.1173617945522}, {"type": "recall", "value": 26.3}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (swg-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "swg-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 41.964285714285715}, {"type": "f1", "value": 36.38605442176871}, {"type": "precision", "value": 
34.523809523809526}, {"type": "recall", "value": 41.964285714285715}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (arq-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "arq-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 26.454445664105382}, {"type": "f1", "value": 20.67692765826684}, {"type": "precision", "value": 18.684070229075715}, {"type": "recall", "value": 26.454445664105382}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (kab-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "kab-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 2.8000000000000003}, {"type": "f1", "value": 1.9487240537240536}, {"type": "precision", "value": 1.7766582325720255}, {"type": "recall", "value": 2.8000000000000003}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (fra-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "fra-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 91.5}, {"type": "f1", "value": 89.39}, {"type": "precision", "value": 88.425}, {"type": "recall", "value": 91.5}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (por-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "por-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 91.5}, {"type": "f1", "value": 89.38333333333333}, {"type": "precision", "value": 88.36666666666667}, {"type": "recall", "value": 91.5}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (tat-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "tat-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 9.2}, {"type": "f1", "value": 6.672282438325198}, {"type": "precision", "value": 6.046073589145276}, {"type": "recall", "value": 9.2}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (oci-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "oci-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 45.2}, {"type": "f1", "value": 39.12095238095238}, {"type": "precision", "value": 36.820952380952384}, {"type": "recall", "value": 45.2}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (pol-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "pol-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 86.8}, {"type": "f1", "value": 83.35000000000001}, {"type": "precision", "value": 81.825}, {"type": "recall", "value": 86.8}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (war-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "war-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 13.5}, {"type": "f1", "value": 10.66862856136998}, {"type": "precision", "value": 9.845928551928552}, {"type": "recall", "value": 13.5}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (aze-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "aze-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 33.4}, {"type": "f1", "value": 
27.78153389993659}, {"type": "precision", "value": 25.778055555555557}, {"type": "recall", "value": 33.4}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (vie-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "vie-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 57.699999999999996}, {"type": "f1", "value": 50.440714285714286}, {"type": "precision", "value": 47.64396825396825}, {"type": "recall", "value": 57.699999999999996}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (nno-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "nno-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 62.2}, {"type": "f1", "value": 56.0098625351257}, {"type": "precision", "value": 53.691914098972916}, {"type": "recall", "value": 62.2}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (cha-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "cha-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 27.00729927007299}, {"type": "f1", "value": 22.798053527980535}, {"type": "precision", "value": 21.107055961070557}, {"type": "recall", "value": 27.00729927007299}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (mhr-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "mhr-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 6.2}, {"type": "f1", "value": 4.295544090473964}, {"type": "precision", "value": 3.913153952193392}, {"type": "recall", "value": 6.2}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (dan-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "dan-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 77.10000000000001}, {"type": "f1", "value": 72.49333333333334}, {"type": "precision", "value": 70.53368637110017}, {"type": "recall", "value": 77.10000000000001}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ell-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ell-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 15.2}, {"type": "f1", "value": 10.429591693330824}, {"type": "precision", "value": 9.145801926831338}, {"type": "recall", "value": 15.2}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (amh-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "amh-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 1.7857142857142856}, {"type": "f1", "value": 0.3635204081632653}, {"type": "precision", "value": 0.205026455026455}, {"type": "recall", "value": 1.7857142857142856}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (pam-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "pam-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 6.4}, {"type": "f1", "value": 4.8412763053939525}, {"type": "precision", "value": 4.444087810337809}, {"type": "recall", "value": 6.4}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (hsb-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "hsb-eng", "split": "test", "revision": 
"9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 43.47826086956522}, {"type": "f1", "value": 37.13266949291794}, {"type": "precision", "value": 34.655332590115194}, {"type": "recall", "value": 43.47826086956522}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (srp-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "srp-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 42.0}, {"type": "f1", "value": 35.412229437229435}, {"type": "precision", "value": 32.907539682539685}, {"type": "recall", "value": 42.0}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (epo-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "epo-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 36.0}, {"type": "f1", "value": 30.53874458874459}, {"type": "precision", "value": 28.711192408382807}, {"type": "recall", "value": 36.0}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (kzj-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "kzj-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 7.9}, {"type": "f1", "value": 5.80190114561213}, {"type": "precision", "value": 5.298527531836355}, {"type": "recall", "value": 7.9}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (awa-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "awa-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 49.35064935064935}, {"type": "f1", "value": 41.57805638325119}, {"type": "precision", "value": 38.87445887445887}, {"type": "recall", "value": 49.35064935064935}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (fao-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "fao-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 25.572519083969464}, {"type": "f1", "value": 21.338006776938073}, {"type": "precision", "value": 20.194474736459465}, {"type": "recall", "value": 25.572519083969464}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (mal-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "mal-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 79.62154294032024}, {"type": "f1", "value": 74.47355652595827}, {"type": "precision", "value": 72.2076661814653}, {"type": "recall", "value": 79.62154294032024}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ile-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ile-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 68.0}, {"type": "f1", "value": 61.80859649122807}, {"type": "precision", "value": 59.30381381381381}, {"type": "recall", "value": 68.0}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (bos-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "bos-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 42.93785310734463}, {"type": "f1", "value": 36.72617201306135}, {"type": "precision", "value": 34.72641059505466}, {"type": "recall", "value": 42.93785310734463}]}, {"task": {"type": "BitextMining"}, 
"dataset": {"name": "MTEB Tatoeba (cor-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "cor-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 5.5}, {"type": "f1", "value": 3.8651658986175113}, {"type": "precision", "value": 3.4432814407814405}, {"type": "recall", "value": 5.5}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (cat-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "cat-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 69.19999999999999}, {"type": "f1", "value": 63.41880952380953}, {"type": "precision", "value": 61.07913419913419}, {"type": "recall", "value": 69.19999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (eus-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "eus-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 15.4}, {"type": "f1", "value": 11.672122577122575}, {"type": "precision", "value": 10.59919974661354}, {"type": "recall", "value": 15.4}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (yue-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "yue-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 58.5}, {"type": "f1", "value": 51.31880452880453}, {"type": "precision", "value": 48.60550125313283}, {"type": "recall", "value": 58.5}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (swe-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "swe-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 89.3}, {"type": "f1", "value": 86.32666666666667}, {"type": "precision", "value": 84.98333333333333}, {"type": "recall", "value": 89.3}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (dtp-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "dtp-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 5.7}, {"type": "f1", "value": 3.8739805216757546}, {"type": "precision", "value": 3.4734608954367014}, {"type": "recall", "value": 5.7}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (kat-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "kat-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 0.8042895442359249}, {"type": "f1", "value": 0.7596067917783735}, {"type": "precision", "value": 0.7372654155495978}, {"type": "recall", "value": 0.8042895442359249}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (jpn-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "jpn-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 89.7}, {"type": "f1", "value": 86.92333333333333}, {"type": "precision", "value": 85.64166666666667}, {"type": "recall", "value": 89.7}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (csb-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "csb-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 26.08695652173913}, {"type": "f1", "value": 20.517863778733343}, {"type": "precision", "value": 18.901098901098898}, 
{"type": "recall", "value": 26.08695652173913}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (xho-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "xho-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 12.676056338028168}, {"type": "f1", "value": 9.526324614352783}, {"type": "precision", "value": 9.006292657908235}, {"type": "recall", "value": 12.676056338028168}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (orv-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "orv-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 24.910179640718564}, {"type": "f1", "value": 19.645099411566473}, {"type": "precision", "value": 17.676076418591386}, {"type": "recall", "value": 24.910179640718564}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ind-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ind-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 61.4}, {"type": "f1", "value": 54.64269841269841}, {"type": "precision", "value": 51.981071428571425}, {"type": "recall", "value": 61.4}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (tuk-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "tuk-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 11.330049261083744}, {"type": "f1", "value": 9.610016420361248}, {"type": "precision", "value": 9.123781574258464}, {"type": "recall", "value": 11.330049261083744}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (max-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "max-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 27.816901408450708}, {"type": "f1", "value": 22.51925345174495}, {"type": "precision", "value": 21.10468365750056}, {"type": "recall", "value": 27.816901408450708}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (swh-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "swh-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 11.282051282051283}, {"type": "f1", "value": 7.777167097237831}, {"type": "precision", "value": 7.050109879436802}, {"type": "recall", "value": 11.282051282051283}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (hin-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "hin-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 86.0}, {"type": "f1", "value": 82.05857142857143}, {"type": "precision", "value": 80.25}, {"type": "recall", "value": 86.0}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (dsb-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "dsb-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 34.44676409185804}, {"type": "f1", "value": 28.296517215097587}, {"type": "precision", "value": 26.16624956236465}, {"type": "recall", "value": 34.44676409185804}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ber-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ber-eng", "split": "test", "revision": 
"9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 7.199999999999999}, {"type": "f1", "value": 5.500051631938041}, {"type": "precision", "value": 5.164411510424442}, {"type": "recall", "value": 7.199999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (tam-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "tam-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 71.9869706840391}, {"type": "f1", "value": 65.79339227547696}, {"type": "precision", "value": 63.16503800217155}, {"type": "recall", "value": 71.9869706840391}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (slk-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "slk-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 70.89999999999999}, {"type": "f1", "value": 65.4152380952381}, {"type": "precision", "value": 63.106666666666655}, {"type": "recall", "value": 70.89999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (tgl-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "tgl-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 21.0}, {"type": "f1", "value": 17.86438197644649}, {"type": "precision", "value": 16.84469948469949}, {"type": "recall", "value": 21.0}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ast-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ast-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 62.20472440944882}, {"type": "f1", "value": 55.81364829396325}, {"type": "precision", "value": 53.262092238470196}, {"type": "recall", "value": 62.20472440944882}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (mkd-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "mkd-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 41.8}, {"type": "f1", "value": 34.724603174603175}, {"type": "precision", "value": 32.040277777777774}, {"type": "recall", "value": 41.8}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (khm-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "khm-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 0.41551246537396125}, {"type": "f1", "value": 0.3462603878116343}, {"type": "precision", "value": 0.32317636195752536}, {"type": "recall", "value": 0.41551246537396125}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ces-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ces-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 85.6}, {"type": "f1", "value": 81.81333333333333}, {"type": "precision", "value": 80.08333333333334}, {"type": "recall", "value": 85.6}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (tzl-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "tzl-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 31.73076923076923}, {"type": "f1", "value": 26.097374847374844}, {"type": "precision", "value": 24.31891025641026}, {"type": "recall", "value": 31.73076923076923}]}, 
{"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (urd-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "urd-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 9.6}, {"type": "f1", "value": 6.598392371412457}, {"type": "precision", "value": 5.855494356434758}, {"type": "recall", "value": 9.6}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ara-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ara-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 83.5}, {"type": "f1", "value": 79.65190476190476}, {"type": "precision", "value": 77.875}, {"type": "recall", "value": 83.5}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (kor-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "kor-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 80.5}, {"type": "f1", "value": 75.75999999999999}, {"type": "precision", "value": 73.60333333333332}, {"type": "recall", "value": 80.5}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (yid-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "yid-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 2.1226415094339623}, {"type": "f1", "value": 1.4622641509433962}, {"type": "precision", "value": 1.2637578616352203}, {"type": "recall", "value": 2.1226415094339623}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (fin-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "fin-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 23.0}, {"type": "f1", "value": 18.111780719280716}, {"type": "precision", "value": 16.497738095238095}, {"type": "recall", "value": 23.0}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (tha-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "tha-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 4.562043795620438}, {"type": "f1", "value": 3.1632119907667358}, {"type": "precision", "value": 2.8806772100567724}, {"type": "recall", "value": 4.562043795620438}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (wuu-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "wuu-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 75.9}, {"type": "f1", "value": 70.57690476190476}, {"type": "precision", "value": 68.19761904761904}, {"type": "recall", "value": 75.9}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Touche2020", "type": "webis-touche2020", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 2.804}, {"type": "map_at_10", "value": 11.267000000000001}, {"type": "map_at_100", "value": 17.034}, {"type": "map_at_1000", "value": 18.733}, {"type": "map_at_3", "value": 6.071}, {"type": "map_at_5", "value": 8.187}, {"type": "mrr_at_1", "value": 34.694}, {"type": "mrr_at_10", "value": 50.504000000000005}, {"type": "mrr_at_100", "value": 51.162}, {"type": "mrr_at_1000", "value": 51.162}, {"type": "mrr_at_3", "value": 45.918}, {"type": "mrr_at_5", "value": 49.082}, {"type": "ndcg_at_1", "value": 33.672999999999995}, {"type": 
"ndcg_at_10", "value": 27.478}, {"type": "ndcg_at_100", "value": 37.961}, {"type": "ndcg_at_1000", "value": 50.117}, {"type": "ndcg_at_3", "value": 30.156}, {"type": "ndcg_at_5", "value": 29.293999999999997}, {"type": "precision_at_1", "value": 34.694}, {"type": "precision_at_10", "value": 24.082}, {"type": "precision_at_100", "value": 7.632999999999999}, {"type": "precision_at_1000", "value": 1.569}, {"type": "precision_at_3", "value": 30.612000000000002}, {"type": "precision_at_5", "value": 29.387999999999998}, {"type": "recall_at_1", "value": 2.804}, {"type": "recall_at_10", "value": 17.785}, {"type": "recall_at_100", "value": 47.452}, {"type": "recall_at_1000", "value": 84.687}, {"type": "recall_at_3", "value": 6.9190000000000005}, {"type": "recall_at_5", "value": 10.807}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "d7c0de2777da35d6aae2200a62c6e0e5af397c4c"}, "metrics": [{"type": "accuracy", "value": 74.5162}, {"type": "ap", "value": 15.022137849208509}, {"type": "f1", "value": 56.77914300422838}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": "d604517c81ca91fe16a244d1248fc021f9ecee7a"}, "metrics": [{"type": "accuracy", "value": 59.589700056593095}, {"type": "f1", "value": 59.93893560752363}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "6125ec4e24fa026cec8a478383ee943acfbd5449"}, "metrics": [{"type": "v_measure", "value": 40.11538634360855}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 83.97806520832091}, {"type": "cos_sim_ap", "value": 67.80381341664686}, {"type": "cos_sim_f1", "value": 63.01665268958908}, {"type": "cos_sim_precision", "value": 57.713407943822695}, {"type": "cos_sim_recall", "value": 69.39313984168865}, {"type": "dot_accuracy", "value": 83.9899862907552}, {"type": "dot_ap", "value": 67.80914960711299}, {"type": "dot_f1", "value": 63.0287144048612}, {"type": "dot_precision", "value": 57.46252444058223}, {"type": "dot_recall", "value": 69.78891820580475}, {"type": "euclidean_accuracy", "value": 83.9601835846695}, {"type": "euclidean_ap", "value": 67.79862461635126}, {"type": "euclidean_f1", "value": 63.02426882389545}, {"type": "euclidean_precision", "value": 59.64664310954063}, {"type": "euclidean_recall", "value": 66.80738786279683}, {"type": "manhattan_accuracy", "value": 83.94230196101806}, {"type": "manhattan_ap", "value": 67.78560087328111}, {"type": "manhattan_f1", "value": 63.10622881851117}, {"type": "manhattan_precision", "value": 56.63939584644431}, {"type": "manhattan_recall", "value": 71.2401055408971}, {"type": "max_accuracy", "value": 83.9899862907552}, {"type": "max_ap", "value": 67.80914960711299}, {"type": "max_f1", "value": 63.10622881851117}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, 
"metrics": [{"type": "cos_sim_accuracy", "value": 89.04994760740482}, {"type": "cos_sim_ap", "value": 85.71231674852108}, {"type": "cos_sim_f1", "value": 78.92350867093619}, {"type": "cos_sim_precision", "value": 74.07807645549101}, {"type": "cos_sim_recall", "value": 84.44718201416693}, {"type": "dot_accuracy", "value": 89.05188807389295}, {"type": "dot_ap", "value": 85.71776365526502}, {"type": "dot_f1", "value": 78.92055922835156}, {"type": "dot_precision", "value": 74.34152317430069}, {"type": "dot_recall", "value": 84.10070834616569}, {"type": "euclidean_accuracy", "value": 89.05188807389295}, {"type": "euclidean_ap", "value": 85.7114644968015}, {"type": "euclidean_f1", "value": 78.9458525345622}, {"type": "euclidean_precision", "value": 74.14119556397078}, {"type": "euclidean_recall", "value": 84.41638435478903}, {"type": "manhattan_accuracy", "value": 89.06547133930997}, {"type": "manhattan_ap", "value": 85.70658730333459}, {"type": "manhattan_f1", "value": 78.91009741543552}, {"type": "manhattan_precision", "value": 74.00714719169308}, {"type": "manhattan_recall", "value": 84.5087773329227}, {"type": "max_accuracy", "value": 89.06547133930997}, {"type": "max_ap", "value": 85.71776365526502}, {"type": "max_f1", "value": 78.9458525345622}]}]}]} | amazon/Titan-text-embeddings-v2 | null | [
"transformers",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"fr",
"de",
"es",
"ja",
"zh",
"hi",
"ar",
"it",
"pt",
"sv",
"ko",
"he",
"cs",
"tr",
"tl",
"ru",
"nl",
"pl",
"ta",
"mr",
"ml",
"te",
"kn",
"vi",
"id",
"fa",
"hu",
"el",
"ro",
"da",
"th",
"fi",
"sk",
"uk",
"no",
"bg",
"ca",
"sr",
"hr",
"lt",
"sl",
"et",
"la",
"bn",
"lv",
"ms",
"bs",
"sq",
"az",
"gl",
"is",
"ka",
"mk",
"eu",
"hy",
"ne",
"ur",
"kk",
"mn",
"be",
"uz",
"km",
"nn",
"gu",
"my",
"cy",
"eo",
"si",
"tt",
"sw",
"af",
"ga",
"pa",
"ku",
"ky",
"tg",
"or",
"lo",
"fo",
"mt",
"so",
"lb",
"am",
"oc",
"jv",
"ha",
"ps",
"sa",
"fy",
"mg",
"as",
"ba",
"br",
"tk",
"co",
"dv",
"rw",
"ht",
"yi",
"sd",
"zu",
"gd",
"bo",
"ug",
"mi",
"rm",
"xh",
"su",
"yo",
"license:other",
"model-index",
"region:us"
] | null | 2024-04-30T12:43:01+00:00 | [] | [
"en",
"fr",
"de",
"es",
"ja",
"zh",
"hi",
"ar",
"it",
"pt",
"sv",
"ko",
"he",
"cs",
"tr",
"tl",
"ru",
"nl",
"pl",
"ta",
"mr",
"ml",
"te",
"kn",
"vi",
"id",
"fa",
"hu",
"el",
"ro",
"da",
"th",
"fi",
"sk",
"uk",
"no",
"bg",
"ca",
"sr",
"hr",
"lt",
"sl",
"et",
"la",
"bn",
"lv",
"ms",
"bs",
"sq",
"az",
"gl",
"is",
"ka",
"mk",
"eu",
"hy",
"ne",
"ur",
"kk",
"mn",
"be",
"uz",
"km",
"nn",
"gu",
"my",
"cy",
"eo",
"si",
"tt",
"sw",
"af",
"ga",
"pa",
"ku",
"ky",
"tg",
"or",
"lo",
"fo",
"mt",
"so",
"lb",
"am",
"oc",
"jv",
"ha",
"ps",
"sa",
"fy",
"mg",
"as",
"ba",
"br",
"tk",
"co",
"dv",
"rw",
"ht",
"yi",
"sd",
"zu",
"gd",
"bo",
"ug",
"mi",
"rm",
"xh",
"su",
"yo"
] | TAGS
#transformers #feature-extraction #sentence-similarity #mteb #en #fr #de #es #ja #zh #hi #ar #it #pt #sv #ko #he #cs #tr #tl #ru #nl #pl #ta #mr #ml #te #kn #vi #id #fa #hu #el #ro #da #th #fi #sk #uk #no #bg #ca #sr #hr #lt #sl #et #la #bn #lv #ms #bs #sq #az #gl #is #ka #mk #eu #hy #ne #ur #kk #mn #be #uz #km #nn #gu #my #cy #eo #si #tt #sw #af #ga #pa #ku #ky #tg #or #lo #fo #mt #so #lb #am #oc #jv #ha #ps #sa #fy #mg #as #ba #br #tk #co #dv #rw #ht #yi #sd #zu #gd #bo #ug #mi #rm #xh #su #yo #license-other #model-index #region-us
|
## Bedrock Titan Text Embeddings v2
This repository contains the MTEB scores and usage examples for Bedrock Titan Text Embeddings v2. You can call the embedding model either through the Bedrock InvokeModel API or through Bedrock's batch jobs. For RAG use cases, we recommend the former for embedding queries at search time (latency optimized) and the latter for indexing the corpus (throughput optimized).
## Using Bedrock's InvokeModel API
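The snippet below is a minimal boto3 sketch of a single `InvokeModel` call; the AWS region, the requested output `dimensions`, and the example sentence are illustrative choices rather than requirements.

```python
import json
import boto3

# Embed one piece of text with Titan Text Embeddings v2 via the InvokeModel API.
# Assumes AWS credentials with Bedrock access; adjust the region for your account.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

request_body = json.dumps({
    "inputText": "What are the different services that you offer?",
    "dimensions": 1024,   # supported output sizes: 256, 512, 1024
    "normalize": True,
})

response = client.invoke_model(
    modelId="amazon.titan-embed-text-v2:0",
    contentType="application/json",
    accept="application/json",
    body=request_body,
)

payload = json.loads(response["body"].read())
embedding = payload["embedding"]
print(len(embedding), "dimensions")
```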
## Using Bedrock's batch jobs
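For indexing a large corpus, a batch (asynchronous) invocation job reads a JSONL file from S3 and writes the embeddings back to S3. The sketch below uses placeholder bucket names and a placeholder IAM role ARN, and assumes an input file where each line is a JSON record of the form `{"recordId": ..., "modelInput": {"inputText": ...}}`; check the Bedrock batch-inference documentation for the exact input format before running it.

```python
import boto3

# Submit an asynchronous batch embedding job over a corpus stored in S3.
# The job name, role ARN and S3 URIs below are placeholders.
bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.create_model_invocation_job(
    jobName="titan-v2-corpus-embeddings",
    modelId="amazon.titan-embed-text-v2:0",
    roleArn="arn:aws:iam::123456789012:role/BedrockBatchInferenceRole",
    inputDataConfig={
        "s3InputDataConfig": {"s3Uri": "s3://my-bucket/corpus/records.jsonl"}
    },
    outputDataConfig={
        "s3OutputDataConfig": {"s3Uri": "s3://my-bucket/corpus/embeddings/"}
    },
)

print(response["jobArn"])  # poll get_model_invocation_job(jobIdentifier=...) for status
```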
| [
"## Bedrock Titan Text Embeddings v2\nThis repository contains the MTEB scores and usage examples of Bedrock Titan Text Embeddings v2. You can use the embedding model either via the Bedrock InvokeModel API or via Bedrock's batch jobs. For RAG use cases we recommend the former to embed queries during search (latency optimized) and the latter to index corpus (throughput optimized).",
"## Using Bedrock's InvokeModel API",
"## Using Bedrock's batch jobs"
] | [
"TAGS\n#transformers #feature-extraction #sentence-similarity #mteb #en #fr #de #es #ja #zh #hi #ar #it #pt #sv #ko #he #cs #tr #tl #ru #nl #pl #ta #mr #ml #te #kn #vi #id #fa #hu #el #ro #da #th #fi #sk #uk #no #bg #ca #sr #hr #lt #sl #et #la #bn #lv #ms #bs #sq #az #gl #is #ka #mk #eu #hy #ne #ur #kk #mn #be #uz #km #nn #gu #my #cy #eo #si #tt #sw #af #ga #pa #ku #ky #tg #or #lo #fo #mt #so #lb #am #oc #jv #ha #ps #sa #fy #mg #as #ba #br #tk #co #dv #rw #ht #yi #sd #zu #gd #bo #ug #mi #rm #xh #su #yo #license-other #model-index #region-us \n",
"## Bedrock Titan Text Embeddings v2\nThis repository contains the MTEB scores and usage examples of Bedrock Titan Text Embeddings v2. You can use the embedding model either via the Bedrock InvokeModel API or via Bedrock's batch jobs. For RAG use cases we recommend the former to embed queries during search (latency optimized) and the latter to index corpus (throughput optimized).",
"## Using Bedrock's InvokeModel API",
"## Using Bedrock's batch jobs"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
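A minimal sketch, assuming a standard `transformers` causal-LM setup for this checkpoint; the chat-template usage and the generation settings are illustrative assumptions rather than documented behaviour of the model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ahjeong/dpo_gemma_7b_bf16_lr5e-7_origindset_beta0.5_kl0.01-epoch2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"  # device_map="auto" requires accelerate
)

messages = [{"role": "user", "content": "Summarize what DPO fine-tuning does."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```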
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Ahjeong/dpo_gemma_7b_bf16_lr5e-7_origindset_beta0.5_kl0.01-epoch2 | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T12:44:43+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-dpo-full-sft-wo-kqa_silver_wogold
This model is a fine-tuned version of [Minbyul/mistral-7b-wo-kqa_silver_wogold-sft](https://huggingface.co/Minbyul/mistral-7b-wo-kqa_silver_wogold-sft) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0530
- Rewards/chosen: -2.4760
- Rewards/rejected: -21.0723
- Rewards/accuracies: 0.9700
- Rewards/margins: 18.5963
- Logps/rejected: -2709.2131
- Logps/chosen: -407.7003
- Logits/rejected: -2.0225
- Logits/chosen: -2.2276
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative mapping to training code is sketched after this list):
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
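
As a rough, non-authoritative illustration, the hyperparameters above might map onto a TRL `DPOTrainer` setup along the following lines. The `beta` value, output directory, and dataset split names are assumptions made for the sketch, not values reported by this card, and the 4-GPU launch (e.g. via `accelerate launch`) is handled outside the script.

```python
# Hedged sketch (TRL ~0.7.x style API; newer TRL versions move beta into DPOConfig).
# beta, output_dir, and split names below are assumptions, not from this card.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "Minbyul/mistral-7b-wo-kqa_silver_wogold-sft"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)
ref_model = AutoModelForCausalLM.from_pretrained(base)  # frozen reference policy

# NOTE: DPOTrainer expects "prompt"/"chosen"/"rejected" text columns; the
# preprocessing that converts raw ultrafeedback_binarized records into that
# format (as in the alignment-handbook recipes) is omitted from this sketch.
dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized")

args = TrainingArguments(
    output_dir="mistral-7b-dpo-full-sft-wo-kqa_silver_wogold",  # assumed name
    learning_rate=5e-7,
    per_device_train_batch_size=8,   # x 4 GPUs x 2 accumulation steps = 64 total
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
)

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    args=args,
    beta=0.1,                              # assumed; the card does not report beta
    train_dataset=dataset["train_prefs"],  # assumed split name
    eval_dataset=dataset["test_prefs"],    # assumed split name
    tokenizer=tokenizer,
)
trainer.train()
```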
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.2735 | 0.32 | 100 | 0.0529 | -1.3592 | -8.1857 | 0.9700 | 6.8265 | -1420.5509 | -296.0260 | -2.7457 | -2.5375 |
| 0.1321 | 0.63 | 200 | 0.0507 | -2.0405 | -16.8511 | 0.9600 | 14.8106 | -2287.0967 | -364.1557 | -2.2518 | -2.3349 |
| 0.117 | 0.95 | 300 | 0.0531 | -2.4855 | -21.1345 | 0.9700 | 18.6490 | -2715.4331 | -408.6504 | -2.0210 | -2.2273 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "Minbyul/mistral-7b-wo-kqa_silver_wogold-sft", "model-index": [{"name": "mistral-7b-dpo-full-sft-wo-kqa_silver_wogold", "results": []}]} | Minbyul/mistral-7b-dpo-full-sft-wo-kqa_silver_wogold | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:Minbyul/mistral-7b-wo-kqa_silver_wogold-sft",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T12:45:00+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-Minbyul/mistral-7b-wo-kqa_silver_wogold-sft #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| mistral-7b-dpo-full-sft-wo-kqa\_silver\_wogold
==============================================
This model is a fine-tuned version of Minbyul/mistral-7b-wo-kqa\_silver\_wogold-sft on the HuggingFaceH4/ultrafeedback\_binarized dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0530
* Rewards/chosen: -2.4760
* Rewards/rejected: -21.0723
* Rewards/accuracies: 0.9700
* Rewards/margins: 18.5963
* Logps/rejected: -2709.2131
* Logps/chosen: -407.7003
* Logits/rejected: -2.0225
* Logits/chosen: -2.2276
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-07
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 4
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 64
* total\_eval\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.39.0.dev0
* Pytorch 2.1.2
* Datasets 2.14.6
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-Minbyul/mistral-7b-wo-kqa_silver_wogold-sft #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
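
Since the card does not yet provide starter code, the following is a minimal, hedged sketch of loading the checkpoint named in this row's metadata and generating a reply through the tokenizer's chat template. The bfloat16 dtype, `device_map="auto"` placement, prompt, and generation settings are illustrative assumptions.

```python
# Minimal sketch; model id is taken from this row's metadata, other choices are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ahjeong/dpo_gemma_7b_bf16_lr5e-7_origindset_beta0.5_kl0.01-epoch3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize what DPO fine-tuning does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```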
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Ahjeong/dpo_gemma_7b_bf16_lr5e-7_origindset_beta0.5_kl0.01-epoch3 | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T12:48:08+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/AwanLLM/Llama-3-8B-Dolfin-v0.2-Instruct
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
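
As a hedged illustration of the point above, one way to fetch and run one of the quantized files listed below is with `huggingface_hub` and `llama-cpp-python`. The specific quant file, context length, and prompt are assumptions for the sketch, not recommendations from this card.

```python
# Hedged sketch: download one static quant from this repo and run it locally.
# The chosen quant, n_ctx, and prompt are placeholder assumptions.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

gguf_path = hf_hub_download(
    repo_id="mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF",
    filename="Llama-3-8B-Dolfin-v0.2-Instruct.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Q: What is a GGUF file?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```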
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "llama3", "library_name": "transformers", "base_model": "AwanLLM/Llama-3-8B-Dolfin-v0.2-Instruct", "quantized_by": "mradermacher"} | mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:AwanLLM/Llama-3-8B-Dolfin-v0.2-Instruct",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:48:19+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #base_model-AwanLLM/Llama-3-8B-Dolfin-v0.2-Instruct #license-llama3 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #base_model-AwanLLM/Llama-3-8B-Dolfin-v0.2-Instruct #license-llama3 #endpoints_compatible #region-us \n"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | nuvocare/adpater_nuvochat | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:49:38+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | EyaZr/eya-test | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:49:49+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
object-detection | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50-finetuned-real-boat-dataset
This model is a fine-tuned version of [zhuchi76/detr-resnet-50-finetuned-boat-dataset](https://huggingface.co/zhuchi76/detr-resnet-50-finetuned-boat-dataset) on the boat_dataset dataset.
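
As a hedged illustration (the card itself does not yet include usage code), the fine-tuned checkpoint can presumably be queried through the standard `transformers` object-detection pipeline. The image filename and score threshold below are placeholder assumptions.

```python
# Hedged sketch: running the fine-tuned DETR checkpoint on a local image.
# "harbor_photo.jpg" and the 0.5 threshold are hypothetical placeholders.
from transformers import pipeline

detector = pipeline(
    "object-detection",
    model="SIS-2024-spring/detr-resnet-50-finetuned-real-boat-dataset",
)

for det in detector("harbor_photo.jpg", threshold=0.5):
    print(f"{det['label']}: {det['score']:.2f} at {det['box']}")
```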
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["boat_dataset"], "base_model": "zhuchi76/detr-resnet-50-finetuned-boat-dataset", "model-index": [{"name": "detr-resnet-50-finetuned-real-boat-dataset", "results": []}]} | SIS-2024-spring/detr-resnet-50-finetuned-real-boat-dataset | null | [
"transformers",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:boat_dataset",
"base_model:zhuchi76/detr-resnet-50-finetuned-boat-dataset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:50:27+00:00 | [] | [] | TAGS
#transformers #safetensors #detr #object-detection #generated_from_trainer #dataset-boat_dataset #base_model-zhuchi76/detr-resnet-50-finetuned-boat-dataset #license-apache-2.0 #endpoints_compatible #region-us
|
# detr-resnet-50-finetuned-real-boat-dataset
This model is a fine-tuned version of zhuchi76/detr-resnet-50-finetuned-boat-dataset on the boat_dataset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# detr-resnet-50-finetuned-real-boat-dataset\n\nThis model is a fine-tuned version of zhuchi76/detr-resnet-50-finetuned-boat-dataset on the boat_dataset dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #detr #object-detection #generated_from_trainer #dataset-boat_dataset #base_model-zhuchi76/detr-resnet-50-finetuned-boat-dataset #license-apache-2.0 #endpoints_compatible #region-us \n",
"# detr-resnet-50-finetuned-real-boat-dataset\n\nThis model is a fine-tuned version of zhuchi76/detr-resnet-50-finetuned-boat-dataset on the boat_dataset dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# alignment-adaptor-test04
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on an unknown dataset.
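
Because this is a PEFT adapter trained on top of the base model named above, a hedged sketch of loading it for inference looks roughly like the following. The bfloat16 dtype, `device_map="auto"` placement, and prompt are illustrative assumptions, not requirements stated by this card.

```python
# Hedged sketch: attaching this PEFT adapter to its zephyr-7b-beta base model.
# Dtype, device placement, and the prompt are illustrative choices.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "HuggingFaceH4/zephyr-7b-beta"
adapter_id = "Ksgk-fy/alignment-adaptor-test04"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Explain what an adapter fine-tune changes in the base model."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```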
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "mit", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "HuggingFaceH4/zephyr-7b-beta", "model-index": [{"name": "alignment-adaptor-test04", "results": []}]} | Ksgk-fy/alignment-adaptor-test04 | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"license:mit",
"region:us"
] | null | 2024-04-30T12:51:42+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-HuggingFaceH4/zephyr-7b-beta #license-mit #region-us
|
# alignment-adaptor-test04
This model is a fine-tuned version of HuggingFaceH4/zephyr-7b-beta on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | [
"# alignment-adaptor-test04\n\nThis model is a fine-tuned version of HuggingFaceH4/zephyr-7b-beta on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 3\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 6\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 2",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.3.0+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-HuggingFaceH4/zephyr-7b-beta #license-mit #region-us \n",
"# alignment-adaptor-test04\n\nThis model is a fine-tuned version of HuggingFaceH4/zephyr-7b-beta on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 3\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 6\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 2",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.3.0+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
text-generation | transformers | Model Card for Model ID
Model Details
Model Description
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
Developed by: [More Information Needed]
Funded by [optional]: [More Information Needed]
Shared by [optional]: [More Information Needed]
Model type: [More Information Needed]
Language(s) (NLP): [More Information Needed]
License: [More Information Needed]
Finetuned from model [optional]: [More Information Needed]
Model Sources [optional]
Repository: [More Information Needed]
Paper [optional]: [More Information Needed]
Demo [optional]: [More Information Needed]
Uses
Direct Use
[More Information Needed]
Downstream Use [optional]
[More Information Needed]
Out-of-Scope Use
[More Information Needed]
Bias, Risks, and Limitations
[More Information Needed]
Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
Training Details
Training Data
[More Information Needed]
Training Procedure
Preprocessing [optional]
[More Information Needed]
Training Hyperparameters
Training regime: [More Information Needed]
Speeds, Sizes, Times [optional]
[More Information Needed]
Evaluation
Testing Data, Factors & Metrics
Testing Data
[More Information Needed]
Factors
[More Information Needed]
Metrics
[More Information Needed]
Results
[More Information Needed]
Summary
Model Examination [optional]
[More Information Needed]
Environmental Impact | {"license": "apache-2.0"} | Jayant9928/orpo_v2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T12:55:12+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Model Card for Model ID
Model Details
Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
Developed by:
Funded by [optional]:
Shared by [optional]:
Model type:
Language(s) (NLP):
License:
Finetuned from model [optional]:
Model Sources [optional]
Repository:
Paper [optional]:
Demo [optional]:
Uses
Direct Use
Downstream Use [optional]
Out-of-Scope Use
Bias, Risks, and Limitations
Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
How to Get Started with the Model
Use the code below to get started with the model.
Training Details
Training Data
Training Procedure
Preprocessing [optional]
Training Hyperparameters
Training regime:
Speeds, Sizes, Times [optional]
Evaluation
Testing Data, Factors & Metrics
Testing Data
Factors
Metrics
Results
Summary
Model Examination [optional]
Environmental Impact | [] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null | null | A copy of Vietnamese chatbot models | {"language": ["vi"]} | duyv/ChatBot-GGUF-VietNam | null | [
"gguf",
"vi",
"region:us"
] | null | 2024-04-30T12:55:47+00:00 | [] | [
"vi"
] | TAGS
#gguf #vi #region-us
| A copy of Vietnamese chatbot models | [] | [
"TAGS\n#gguf #vi #region-us \n"
] |
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
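
The author has left this section empty. As a placeholder, here is a minimal sketch based only on the repository id and the `t5` / `text2text-generation` tags; the intended input format (for example, summarization versus headline generation of business news) is an assumption, not documentation.

```python
# Illustrative sketch: model id and task are inferred from the repo metadata and tags.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Mariofm02/T5small_Business_News"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Example input is purely illustrative; the expected prompt format is undocumented.
article = "Shares rose sharply after the company reported better-than-expected quarterly earnings."
inputs = tokenizer(article, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```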
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Mariofm02/T5small_Business_News | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T12:56:17+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Mariofm02/T5small_Politics_News | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T13:00:13+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
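
No snippet is provided by the author; the following is a minimal sketch assuming standard text-generation usage, as suggested by the `gemma` and `conversational` tags. The prompt format this checkpoint expects is an assumption.

```python
# Minimal, non-authoritative sketch using the high-level pipeline API.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Ahjeong/dpo_gemma_7b_bf16_lr5e-7_origindset_beta1.1_kl0.01-epoch2",
)
result = generator("Give one tip for writing clear documentation.", max_new_tokens=64)
print(result[0]["generated_text"])
```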
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Ahjeong/dpo_gemma_7b_bf16_lr5e-7_origindset_beta1.1_kl0.01-epoch2 | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T13:00:52+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
fill-mask | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
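
The author has not filled in this section. A short sketch is given below, assuming a CamemBERT-style checkpoint with the standard `<mask>` token, as the `camembert` / `fill-mask` tags suggest; the example sentence is purely illustrative.

```python
# Sketch assuming standard fill-mask usage for a CamemBERT-based checkpoint.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="IDPZEro/dummy-model")
for prediction in fill_mask("Le camembert est <mask> !"):
    print(prediction["token_str"], round(prediction["score"], 3))
```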
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | IDPZEro/dummy-model | null | [
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:02:20+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #camembert #fill-mask #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #camembert #fill-mask #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tulu2-13b-cost-UF-5e-7-nojudge
This model is a fine-tuned version of [allenai/tulu-2-13b](https://huggingface.co/allenai/tulu-2-13b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6931
- Rewards/chosen: 0.0268
- Rewards/rejected: 0.0260
- Rewards/accuracies: 0.5450
- Rewards/margins: 0.0008
- Rewards/margins Max: 0.0629
- Rewards/margins Min: -0.0642
- Rewards/margins Std: 0.0421
- Logps/rejected: -327.6042
- Logps/chosen: -331.2294
- Logits/rejected: -0.8979
- Logits/chosen: -1.0239
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
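
The card's tags (`peft`, `trl`, `dpo`) suggest the adapter was trained with TRL's DPO trainer on top of allenai/tulu-2-13b. Below is a rough, non-authoritative sketch of how the hyperparameters listed above could be expressed; the actual training script, LoRA settings, and DPO beta are not published, and TRL/PEFT APIs vary across versions.

```python
# Approximate mapping of the listed hyperparameters onto a TRL + PEFT DPO setup.
# Adapter settings and DPO-specific options are unknown and therefore omitted or guessed.
from peft import LoraConfig
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="tulu2-13b-cost-UF-5e-7-nojudge",
    learning_rate=5e-7,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # 2 per device x 4 GPUs x 2 steps = total train batch size 16
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
)

peft_config = LoraConfig(task_type="CAUSAL_LM")  # LoRA hyperparameters are undocumented

# A DPOTrainer would then be constructed roughly as:
# trainer = DPOTrainer(model, ref_model=None, args=training_args,
#                      train_dataset=..., tokenizer=..., peft_config=peft_config)
# trainer.train()
```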
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Rewards/margins Max | Rewards/margins Min | Rewards/margins Std | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:-------------------:|:-------------------:|:-------------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6681 | 1.0 | 1245 | 0.6931 | 0.0268 | 0.0260 | 0.5450 | 0.0008 | 0.0629 | -0.0642 | 0.0421 | -327.6042 | -331.2294 | -0.8979 | -1.0239 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.39.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "allenai/tulu-2-13b", "model-index": [{"name": "tulu2-13b-cost-UF-5e-7-nojudge", "results": []}]} | just1nseo/tulu2-13b-cost-UF-5e-7-nojudge | null | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:allenai/tulu-2-13b",
"region:us"
] | null | 2024-04-30T13:02:30+00:00 | [] | [] | TAGS
#peft #safetensors #trl #dpo #generated_from_trainer #base_model-allenai/tulu-2-13b #region-us
| tulu2-13b-cost-UF-5e-7-nojudge
==============================
This model is a fine-tuned version of allenai/tulu-2-13b on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6931
* Rewards/chosen: 0.0268
* Rewards/rejected: 0.0260
* Rewards/accuracies: 0.5450
* Rewards/margins: 0.0008
* Rewards/margins Max: 0.0629
* Rewards/margins Min: -0.0642
* Rewards/margins Std: 0.0421
* Logps/rejected: -327.6042
* Logps/chosen: -331.2294
* Logits/rejected: -0.8979
* Logits/chosen: -1.0239
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-07
* train\_batch\_size: 2
* eval\_batch\_size: 8
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 4
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 16
* total\_eval\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 1
### Training results
### Framework versions
* PEFT 0.7.1
* Transformers 4.39.0.dev0
* Pytorch 2.1.2+cu121
* Datasets 2.14.6
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-allenai/tulu-2-13b #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Mariofm02/T5small_Entertainment_News | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T13:02:54+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-to-audio | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fil_b64_le5_s4000
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4125
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- training_steps: 4000
- mixed_precision_training: Native AMP
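
For reference, the hyperparameters above can be expressed with 🤗 Transformers' `Seq2SeqTrainingArguments` roughly as in the sketch below. This is illustrative only: the actual training script, dataset, and data collator used for this checkpoint are not documented here, and the 500-step evaluation interval is inferred from the results table that follows.

```python
from transformers import Seq2SeqTrainingArguments

# Illustrative mapping of the hyperparameters listed above; the real training
# script for this checkpoint is not included in the repository.
training_args = Seq2SeqTrainingArguments(
    output_dir="fil_b64_le5_s4000",
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # effective train batch size of 64
    warmup_steps=2000,
    max_steps=4000,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=500,                  # matches the 500-step checkpoints in the table below
    save_steps=500,
    report_to=["tensorboard"],
)
```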
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:--------:|:----:|:---------------:|
| 0.575 | 22.2222 | 500 | 0.4967 |
| 0.4945 | 44.4444 | 1000 | 0.4460 |
| 0.4681 | 66.6667 | 1500 | 0.4301 |
| 0.4514 | 88.8889 | 2000 | 0.4194 |
| 0.4396 | 111.1111 | 2500 | 0.4129 |
| 0.432 | 133.3333 | 3000 | 0.4124 |
| 0.43 | 155.5556 | 3500 | 0.4104 |
| 0.4317 | 177.7778 | 4000 | 0.4125 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/speecht5_tts", "model-index": [{"name": "fil_b64_le5_s4000", "results": []}]} | mikhail-panzo/fil_b64_le5_s4000 | null | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:02:58+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us
| fil\_b64\_le5\_s4000
====================
This model is a fine-tuned version of microsoft/speecht5\_tts on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4125
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2000
* training\_steps: 4000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.41.0.dev0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
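
Absent instructions from the model author, a generic text-generation sketch for this repository might look like the following. It assumes the checkpoint loads with the standard `transformers` auto classes; the intended prompt format and use case are not documented.

```python
from transformers import pipeline

# Hypothetical usage sketch; the author has not documented the intended
# prompt format or use case for this checkpoint.
generator = pipeline("text-generation", model="abc88767/model18")
print(generator("Hello, how are you?", max_new_tokens=50)[0]["generated_text"])
```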
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | abc88767/model18 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:04:08+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
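
As a placeholder until the author adds instructions, a minimal chat-style generation sketch is shown below. It assumes the checkpoint loads with the standard Gemma classes and that the tokenizer ships Gemma's chat template; neither is confirmed by the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ahjeong/dpo_gemma_7b_bf16_lr5e-7_origindset_beta1.1_kl0.01-epoch3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"  # device_map needs accelerate
)

# Chat formatting via the tokenizer's chat template (assumed to be present).
messages = [{"role": "user", "content": "Summarize the idea of direct preference optimization."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```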
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Ahjeong/dpo_gemma_7b_bf16_lr5e-7_origindset_beta1.1_kl0.01-epoch3 | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T13:04:24+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tulu2-13b-cost-UI-5e-7-nojudge
This model is a fine-tuned version of [allenai/tulu-2-13b](https://huggingface.co/allenai/tulu-2-13b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6912
- Rewards/chosen: -0.0076
- Rewards/rejected: -0.0119
- Rewards/accuracies: 0.5960
- Rewards/margins: 0.0043
- Rewards/margins Max: 0.0285
- Rewards/margins Min: -0.0168
- Rewards/margins Std: 0.0151
- Logps/rejected: -331.3923
- Logps/chosen: -334.6692
- Logits/rejected: -0.8885
- Logits/chosen: -1.0144
## Model description
More information needed
## Intended uses & limitations
More information needed
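
Since this repository contains a PEFT adapter trained on top of allenai/tulu-2-13b, one plausible way to load it for inference is sketched below. This is untested and assumes the saved adapter is compatible with `AutoPeftModelForCausalLM`; the prompt format follows the usual Tulu 2 convention.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "just1nseo/tulu2-13b-cost-UI-5e-7-nojudge"

# Loads allenai/tulu-2-13b and applies this DPO-trained adapter on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("allenai/tulu-2-13b")

# Tulu 2 models are usually prompted with the "<|user|> ... <|assistant|>" format.
prompt = "<|user|>\nWhat is direct preference optimization?\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```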
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Rewards/margins Max | Rewards/margins Min | Rewards/margins Std | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:-------------------:|:-------------------:|:-------------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6731 | 1.0 | 1185 | 0.6912 | -0.0076 | -0.0119 | 0.5960 | 0.0043 | 0.0285 | -0.0168 | 0.0151 | -331.3923 | -334.6692 | -0.8885 | -1.0144 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.39.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "allenai/tulu-2-13b", "model-index": [{"name": "tulu2-13b-cost-UI-5e-7-nojudge", "results": []}]} | just1nseo/tulu2-13b-cost-UI-5e-7-nojudge | null | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:allenai/tulu-2-13b",
"region:us"
] | null | 2024-04-30T13:04:54+00:00 | [] | [] | TAGS
#peft #safetensors #trl #dpo #generated_from_trainer #base_model-allenai/tulu-2-13b #region-us
| tulu2-13b-cost-UI-5e-7-nojudge
==============================
This model is a fine-tuned version of allenai/tulu-2-13b on an unspecified dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6912
* Rewards/chosen: -0.0076
* Rewards/rejected: -0.0119
* Rewards/accuracies: 0.5960
* Rewards/margins: 0.0043
* Rewards/margins Max: 0.0285
* Rewards/margins Min: -0.0168
* Rewards/margins Std: 0.0151
* Logps/rejected: -331.3923
* Logps/chosen: -334.6692
* Logits/rejected: -0.8885
* Logits/chosen: -1.0144
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-07
* train\_batch\_size: 2
* eval\_batch\_size: 8
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 4
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 16
* total\_eval\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 1
### Training results
### Framework versions
* PEFT 0.7.1
* Transformers 4.39.0.dev0
* Pytorch 2.1.2+cu121
* Datasets 2.14.6
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-allenai/tulu-2-13b #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
text-generation | transformers | # Untitled Model (1)
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [EleutherAI/llemma_7b](https://huggingface.co/EleutherAI/llemma_7b)
* [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: codellama/CodeLlama-7b-hf
- model: EleutherAI/llemma_7b
merge_method: slerp
base_model: codellama/CodeLlama-7b-hf
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: float16
```
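
To reproduce a merge like this one, the YAML above would typically be passed to mergekit's command-line entry point, and the published result can be loaded like any Llama-architecture checkpoint. The sketch below is illustrative; the exact mergekit version and flags used for this merge are not recorded.

```python
# Assumed mergekit CLI invocation (not recorded by the author):
#   mergekit-yaml config.yaml ./merged_llemma_code_llama_slerp
#
# Loading the published merge with transformers:
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "JyoP/merged_llemma_code_llama_slerp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```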
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["EleutherAI/llemma_7b", "codellama/CodeLlama-7b-hf"]} | JyoP/merged_llemma_code_llama_slerp | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:EleutherAI/llemma_7b",
"base_model:codellama/CodeLlama-7b-hf",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T13:04:55+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #base_model-EleutherAI/llemma_7b #base_model-codellama/CodeLlama-7b-hf #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Untitled Model (1)
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* EleutherAI/llemma_7b
* codellama/CodeLlama-7b-hf
### Configuration
The following YAML configuration was used to produce this model:
| [
"# Untitled Model (1)\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* EleutherAI/llemma_7b\n* codellama/CodeLlama-7b-hf",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #base_model-EleutherAI/llemma_7b #base_model-codellama/CodeLlama-7b-hf #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Untitled Model (1)\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* EleutherAI/llemma_7b\n* codellama/CodeLlama-7b-hf",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | transformers | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/yzhuang/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
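
As a concrete illustration, one common way to run a quant from the table below locally is via `llama-cpp-python`. The file name, context size, and GPU offload setting in this sketch are examples only; pick whichever quant suits your hardware.

```python
from llama_cpp import Llama

# Example: the Q4_K_M quant from the table below; any of the listed files works the same way.
llm = Llama(
    model_path="Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q4_K_M.gguf",
    n_ctx=8192,          # assumed context size; adjust to available RAM
    n_gpu_layers=-1,     # offload all layers if a GPU build is installed
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Who are you?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```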
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "yzhuang/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2", "quantized_by": "mradermacher"} | mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF | null | [
"transformers",
"gguf",
"trl",
"sft",
"generated_from_trainer",
"en",
"dataset:generator",
"base_model:yzhuang/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:05:21+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #trl #sft #generated_from_trainer #en #dataset-generator #base_model-yzhuang/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2 #license-other #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #trl #sft #generated_from_trainer #en #dataset-generator #base_model-yzhuang/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2 #license-other #endpoints_compatible #region-us \n"
] |
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
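
In the absence of author-provided instructions, a generic seq2seq sketch is given below. It assumes a standard T5 checkpoint; the task prefix and input format this model actually expects are undocumented, so the example text is hypothetical.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Mariofm02/T5small_Sport_News"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical input; the prefix/format this checkpoint expects is not documented.
text = "summarize: The home side won the final 2-1 after extra time..."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```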
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Mariofm02/T5small_Sport_News | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T13:06:15+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0643
- Precision: 0.9384
- Recall: 0.9510
- F1: 0.9447
- Accuracy: 0.9860
## Model description
More information needed
## Intended uses & limitations
More information needed
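
In the absence of documented usage, a minimal inference sketch could look like the following. It assumes the repository id `dcram/bert-finetuned-ner` from the metadata, that the tokenizer was pushed alongside the model, and that the entity label set (the training dataset is not documented here) is stored in the model config:

```python
from transformers import pipeline

# Token-classification pipeline; aggregation_strategy groups word pieces into entity spans.
ner = pipeline(
    "token-classification",
    model="dcram/bert-finetuned-ner",
    aggregation_strategy="simple",
)

# Placeholder sentence; the actual entity types depend on the (undocumented) training data.
print(ner("Hugging Face is based in New York City."))
```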
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0756 | 1.0 | 1756 | 0.0674 | 0.9094 | 0.9357 | 0.9224 | 0.9815 |
| 0.0367 | 2.0 | 3512 | 0.0666 | 0.9372 | 0.9487 | 0.9429 | 0.9855 |
| 0.0223 | 3.0 | 5268 | 0.0643 | 0.9384 | 0.9510 | 0.9447 | 0.9860 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "bert-base-cased", "model-index": [{"name": "bert-finetuned-ner", "results": []}]} | dcram/bert-finetuned-ner | null | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:06:41+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #token-classification #generated_from_trainer #base_model-bert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| bert-finetuned-ner
==================
This model is a fine-tuned version of bert-base-cased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0643
* Precision: 0.9384
* Recall: 0.9510
* F1: 0.9447
* Accuracy: 0.9860
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.2+cu121
* Datasets 2.19.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #bert #token-classification #generated_from_trainer #base_model-bert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2"
] |
text-to-audio | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fil_b128_le4_s4000
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4081
## Model description
More information needed
## Intended uses & limitations
More information needed
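
As a rough starting point, a SpeechT5 fine-tune like this one can usually be run as sketched below. This is a hedged example, not documented usage: it assumes the processor files were saved with the checkpoint (otherwise they can be loaded from `microsoft/speecht5_tts`), uses the stock `microsoft/speecht5_hifigan` vocoder, substitutes a zero vector for the 512-dimensional speaker x-vector, and guesses Filipino as the target language from the model name:

```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

repo = "mikhail-panzo/fil_b128_le4_s4000"
processor = SpeechT5Processor.from_pretrained(repo)   # assumes processor files are in the repo
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Magandang umaga.", return_tensors="pt")  # placeholder sentence
speaker_embeddings = torch.zeros((1, 512))  # placeholder; use a real x-vector for natural output
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)
```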
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:--------:|:----:|:---------------:|
| 0.4635 | 44.4444 | 500 | 0.4207 |
| 0.4317 | 88.8889 | 1000 | 0.4081 |
| 0.412 | 133.3333 | 1500 | 0.4051 |
| 0.395 | 177.7778 | 2000 | 0.4049 |
| 0.3848 | 222.2222 | 2500 | 0.4063 |
| 0.3738 | 266.6667 | 3000 | 0.4063 |
| 0.3618 | 311.1111 | 3500 | 0.4072 |
| 0.357 | 355.5556 | 4000 | 0.4081 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/speecht5_tts", "model-index": [{"name": "fil_b128_le4_s4000", "results": []}]} | mikhail-panzo/fil_b128_le4_s4000 | null | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:07:12+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us
| fil\_b128\_le4\_s4000
=====================
This model is a fine-tuned version of microsoft/speecht5\_tts on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4081
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2000
* training\_steps: 4000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.41.0.dev0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
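
Since this section is left empty, here is a minimal, hedged sketch only. It assumes the repository id `Mariofm02/T5small_Tech_News` from the metadata and a standard seq2seq checkpoint; the task-specific prompt format is not documented in this card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "Mariofm02/T5small_Tech_News"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

# Placeholder input; replace with text formatted the way the model was fine-tuned.
inputs = tokenizer("Example tech-news text to transform.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```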
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Mariofm02/T5small_Tech_News | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T13:07:58+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NDD-dimeshift_test-content
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5833
- Accuracy: 0.8875
- F1: 0.8913
- Precision: 0.8954
- Recall: 0.8875
## Model description
More information needed
## Intended uses & limitations
More information needed
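
A minimal sketch of how the classifier could be queried, assuming the repository id `lgk03/NDD-dimeshift_test-content` from the metadata and that the label names are stored in the model config (the dataset and label semantics are not documented here):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="lgk03/NDD-dimeshift_test-content")

# Placeholder input; replace with content in the format used during fine-tuning.
print(classifier("Example page content to classify."))
```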
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.1131 | 0.9989 | 669 | 0.5635 | 0.8758 | 0.8800 | 0.8845 | 0.8758 |
| 0.0553 | 1.9978 | 1338 | 0.5833 | 0.8875 | 0.8913 | 0.8954 | 0.8875 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "NDD-dimeshift_test-content", "results": []}]} | lgk03/NDD-dimeshift_test-content | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:08:22+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| NDD-dimeshift\_test-content
===========================
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5833
* Accuracy: 0.8875
* F1: 0.8913
* Precision: 0.8954
* Recall: 0.8875
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | nemo |
<h1 align="center"> nach0 </h1>
<h3 align="center"> Multimodal Natural and Chemical Languages Foundation Model </h3>
<p align="center">
📃 <a href="https://arxiv.org/abs/2311.12410" target="_blank">Paper</a> • ⏬ <a href="https://huggingface.co/insilicomedicine/nach0_base" target="_blank">Base nach0</a> • ⏬ <a href="https://huggingface.co/insilicomedicine/nach0_large" target="_blank">Large nach0</a> <br>
</p>
<div align=center><img src="images/nach0_Pub_2.png" width="70%" height="70%" /></div>
<h2 id="1">Overview</h2>
- nach0 is a multi-domain and multi-task encoder-decoder LLM pre-trained on unlabeled text from scientific literature, patents, and molecule strings to incorporate a range of chemical and linguistic knowledge.
- We employed instruction tuning, where specific task-related instructions are utilized to fine-tune nach0 for the final set of tasks. To train nach0 effectively, we leverage the NeMo framework, enabling efficient parallel optimization of both base and large model versions.
- Extensive experiments demonstrate that our model outperforms state-of-the-art baselines on single-domain and cross-domain tasks. Furthermore, it can generate high-quality outputs in molecular and textual formats, showcasing its effectiveness in multi-domain setups.
<h2 id="1">Tasks</h2>
Datasets used for training and evaluation. Colour represents the type of task. Yellow and blue datasets are single-domain, typically requiring regression/classification losses or generation in the target domain (natural language or SMILES strings). Gradients from yellow to blue represent cross-domain generation tasks that require natural language input and SMILES output, or vice versa.
<div align=center><img src="images/nach0_Pub_1.png" width="70%" height="70%" /></div>
<h2> Model Usage Guide</h2>
To use the model for inference, follow the steps below:
1. Preprocess the input by replacing the atom tokens with special tokens.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import re
from rdkit.Chem import MolFromSmiles
import string
from rdkit import RDLogger
RDLogger.DisableLog('rdApp.*')
atoms_tokens = ['Ag','Al','As','Au','B','Ba','Bi','Br','C','Ca',
'Cd','Cl','Co','Cr','Cs','Cu','F','Fe','Ga','Gd',
'Ge','H','Hg','I','In','K','Li','M','Mg','Mn',
'Mo','N','Na','O','P','Pt','Ru','S','Sb','Sc',
'Se','Si','Sn','V','W','Z','Zn','c','e','n','o','p','s']
atoms_tokens = sorted(atoms_tokens, key=lambda s: len(s), reverse=True)
SMI_REGEX_PATTERN = r"(\[|\]|\(|\)|\.|=|#|-|\+|\\|\/|:|~|@|\?|>>?|\*|\$|\%[0-9]{2}|[0-9]|" + \
'|'.join(atoms_tokens) + ")"
regex = re.compile(SMI_REGEX_PATTERN)
def clean_output_sequence(output_sequence):
return output_sequence.replace('</s>', '').replace('<sm_', '').replace(' sm_', '').replace('>', '').strip()
def add_special_symbols(text):
output = []
for word in text.split():
tokens = [token for token in regex.findall(word)]
if len(tokens) > 4 and (word == ''.join(tokens)) and MolFromSmiles(word):
output.append(''.join(['<sm_'+t+'>' for t in tokens]))
else:
output.append(word)
return ' '.join(output)
PROMPT = """Given the following reactants and reagents, please provide a possible product.
CCN(CC)CC.CCN=C=NCCCN(C)C.CN(C)C=O.Cl.NC1=CC=C(Cl)C=C1N.O.O=C(O)CCCCCNC(=O)C=C1C2=CC=CC=C2C2=CC=CC=C12.OC1=CC=CC2=C1N=NN2.[Cl-].[Na+]"""
PROMPT = add_special_symbols(PROMPT)
```
2. Load the model checkpoint
```python
model = AutoModelForSeq2SeqLM.from_pretrained('insilicomedicine/nach0_base')
tokenizer = AutoTokenizer.from_pretrained('insilicomedicine/nach0_base')
```
3. Generate a response to the prompt and replace the special tokens with the corresponding atom tokens
```python
input_text_ids = tokenizer(PROMPT, padding="longest", max_length=512, truncation=True, return_tensors="pt")
generated_text_ids = model.generate(**input_text_ids, do_sample=True, top_k=100, top_p=0.95, max_length=512)
generated_text = tokenizer.batch_decode(generated_text_ids, skip_special_tokens=True)[0]
generated_text = clean_output_sequence(generated_text)
```
```python
# NC1=CC=C(Cl)C=C1NC(=O)CCCCCNC(=O)C=C1C2=CC=CC=C2C2=CC=CC=C12
```
<h3> References</h3>
If you use our repository, please cite the following related paper:
```
@article{nach0,
title={nach0: Multimodal Natural and Chemical Languages Foundation Model},
author={Micha Livne and Zulfat Miftahutdinov and Elena Tutubalina and Maksim Kuznetsov and Daniil Polykovskiy and Annika Brundyn and Aastha Jhunjhunwala and Anthony Costa and Alex Aliper and Alán Aspuru-Guzik and Alex Zhavoronkov},
year={2024},
journal={Chem. Sci.},
pages={-},
publisher={The Royal Society of Chemistry},
}
```
| {"language": ["en"], "license": "cc-by-nc-4.0", "tags": ["chemistry"]} | insilicomedicine/nach0_large | null | [
"nemo",
"chemistry",
"en",
"arxiv:2311.12410",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-04-30T13:10:49+00:00 | [
"2311.12410"
] | [
"en"
] | TAGS
#nemo #chemistry #en #arxiv-2311.12410 #license-cc-by-nc-4.0 #region-us
|
<h1 align="center"> nach0 </h1>
<h3 align="center"> Multimodal Natural and Chemical Languages Foundation Model </h3>
<p align="center">
<a href="URL target="_blank">Paper</a> • ⏬ <a href="URL target="_blank">Base nach0</a> • ⏬ <a href="URL target="_blank">Large nach0</a> <br>
</p>
<div align=center><img src="images/nach0_Pub_2.png" width="70%" height="70%" /></div>
<h2 id="1">Overview</h2>
- nach0 is a multi-domain and multi-task encoder-decoder LLM pre-trained on unlabeled text from scientific literature, patents, and molecule strings to incorporate a range of chemical and linguistic knowledge.
- We employed instruction tuning, where specific task-related instructions are utilized to fine-tune nach0 for the final set of tasks. To train nach0 effectively, we leverage the NeMo framework, enabling efficient parallel optimization of both base and large model versions.
- Extensive experiments demonstrate that our model outperforms state-of-the-art baselines on single-domain and cross-domain tasks. Furthermore, it can generate high-quality outputs in molecular and textual formats, showcasing its effectiveness in multi-domain setups.
<h2 id="1">Tasks</h2>
Datasets used for training and evaluation. Colour represents the type of task. Yellow and blue datasets are single-domain, typically requiring regression/classification losses or generation in the target domain (natural language or SMILES strings). Gradients from yellow to blue represent cross-domain generation tasks that require natural language input and SMILES output, or vice versa.
<div align=center><img src="images/nach0_Pub_1.png" width="70%" height="70%" /></div>
<h2> Model Usage Guide</h2>
To use the model for inference, follow the steps below:
1. Preprocess the input by replacing the atom tokens with special tokens.
2. Load the model checkpoint
3. Generate a response to the prompt and replace the special tokens with the corresponding atom tokens
<h3> References</h3>
If you use our repository, please cite the following related paper:
| [] | [
"TAGS\n#nemo #chemistry #en #arxiv-2311.12410 #license-cc-by-nc-4.0 #region-us \n"
] |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2094
- Accuracy: 0.9350
## Model description
More information needed
## Intended uses & limitations
More information needed
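
A minimal inference sketch, assuming the repository id `tedbelford/vit-base-oxford-iiit-pets` and that the image processor was pushed with the checkpoint; the image path is a placeholder:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="tedbelford/vit-base-oxford-iiit-pets")

# Placeholder path; any RGB photo of a cat or dog breed from the Oxford-IIIT Pets label set.
predictions = classifier("pet_photo.jpg")
print(predictions[:3])
```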
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3895 | 1.0 | 370 | 0.2819 | 0.9432 |
| 0.225 | 2.0 | 740 | 0.2152 | 0.9472 |
| 0.1687 | 3.0 | 1110 | 0.1938 | 0.9499 |
| 0.1392 | 4.0 | 1480 | 0.1860 | 0.9526 |
| 0.1255 | 5.0 | 1850 | 0.1814 | 0.9553 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["image-classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google/vit-base-patch16-224", "model-index": [{"name": "vit-base-oxford-iiit-pets", "results": []}]} | tedbelford/vit-base-oxford-iiit-pets | null | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:11:03+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #base_model-google/vit-base-patch16-224 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| vit-base-oxford-iiit-pets
=========================
This model is a fine-tuned version of google/vit-base-patch16-224 on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2094
* Accuracy: 0.9350
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #base_model-google/vit-base-patch16-224 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
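
The card does not specify the architecture or task. As a rough starting point only, and assuming this is a full causal-LM checkpoint (the repository name suggests a Mistral fine-tune for tweet generation) rather than, say, a PEFT adapter, something like the following could be tried:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "tropianhs/mistral-tweet-finetune-tropianhs"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)  # assumes a full (non-adapter) checkpoint

# Placeholder prompt; the fine-tuning prompt format is not documented in this card.
inputs = tokenizer("Write a tweet about open-source AI:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```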
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | tropianhs/mistral-tweet-finetune-tropianhs | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:11:38+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CNEC_1_1_ext_slavicbert
This model is a fine-tuned version of [DeepPavlov/bert-base-bg-cs-pl-ru-cased](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) on the cnec dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2572
- Precision: 0.8607
- Recall: 0.8915
- F1: 0.8758
- Accuracy: 0.9627
## Model description
More information needed
## Intended uses & limitations
More information needed
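
A minimal sketch for running the tagger on Czech text, assuming the repository id `stulcrad/CNEC_1_1_ext_slavicbert` from the metadata; the example sentence is illustrative only:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="stulcrad/CNEC_1_1_ext_slavicbert",
    aggregation_strategy="simple",
)

# Placeholder Czech sentence; entity labels follow the CNEC 1.1 (extended) tag set.
print(ner("Václav Havel se narodil v Praze."))
```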
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3946 | 1.72 | 500 | 0.1925 | 0.7835 | 0.8471 | 0.8141 | 0.9467 |
| 0.1653 | 3.44 | 1000 | 0.1627 | 0.8340 | 0.8675 | 0.8504 | 0.9572 |
| 0.1183 | 5.15 | 1500 | 0.1700 | 0.8378 | 0.8808 | 0.8588 | 0.9595 |
| 0.0869 | 6.87 | 2000 | 0.1901 | 0.8554 | 0.8728 | 0.8640 | 0.9589 |
| 0.0661 | 8.59 | 2500 | 0.2037 | 0.8482 | 0.8867 | 0.8670 | 0.9595 |
| 0.053 | 10.31 | 3000 | 0.2011 | 0.8460 | 0.8867 | 0.8659 | 0.9609 |
| 0.043 | 12.03 | 3500 | 0.2216 | 0.8555 | 0.8888 | 0.8718 | 0.9593 |
| 0.0358 | 13.75 | 4000 | 0.2245 | 0.8492 | 0.8878 | 0.8680 | 0.9603 |
| 0.0296 | 15.46 | 4500 | 0.2401 | 0.8513 | 0.8872 | 0.8689 | 0.9603 |
| 0.0264 | 17.18 | 5000 | 0.2415 | 0.8564 | 0.8862 | 0.8710 | 0.9610 |
| 0.0212 | 18.9 | 5500 | 0.2570 | 0.8557 | 0.8872 | 0.8712 | 0.9622 |
| 0.0205 | 20.62 | 6000 | 0.2540 | 0.8567 | 0.8883 | 0.8722 | 0.9616 |
| 0.0167 | 22.34 | 6500 | 0.2573 | 0.8568 | 0.8894 | 0.8728 | 0.9614 |
| 0.0161 | 24.05 | 7000 | 0.2572 | 0.8607 | 0.8915 | 0.8758 | 0.9627 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
| {"tags": ["generated_from_trainer"], "datasets": ["cnec"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "DeepPavlov/bert-base-bg-cs-pl-ru-cased", "model-index": [{"name": "CNEC_1_1_ext_slavicbert", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "cnec", "type": "cnec", "config": "default", "split": "validation", "args": "default"}, "metrics": [{"type": "precision", "value": 0.8606811145510835, "name": "Precision"}, {"type": "recall", "value": 0.8915018706574025, "name": "Recall"}, {"type": "f1", "value": 0.8758204253084799, "name": "F1"}, {"type": "accuracy", "value": 0.9626885008032336, "name": "Accuracy"}]}]}]} | stulcrad/CNEC_1_1_ext_slavicbert | null | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:cnec",
"base_model:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:12:56+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #token-classification #generated_from_trainer #dataset-cnec #base_model-DeepPavlov/bert-base-bg-cs-pl-ru-cased #model-index #autotrain_compatible #endpoints_compatible #region-us
| CNEC\_1\_1\_ext\_slavicbert
===========================
This model is a fine-tuned version of DeepPavlov/bert-base-bg-cs-pl-ru-cased on the cnec dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2572
* Precision: 0.8607
* Recall: 0.8915
* F1: 0.8758
* Accuracy: 0.9627
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 25
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.16.1
* Tokenizers 0.15.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 25",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0"
] | [
"TAGS\n#transformers #safetensors #bert #token-classification #generated_from_trainer #dataset-cnec #base_model-DeepPavlov/bert-base-bg-cs-pl-ru-cased #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 25",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0"
] |
text-generation | transformers |
# TyphoonTime-passthrough
TyphoonTime-passthrough is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [scb10x/typhoon-7b](https://huggingface.co/scb10x/typhoon-7b)
* [chargoddard/storytime-13b](https://huggingface.co/chargoddard/storytime-13b)
## 🧩 Configuration
\```yaml
slices:
- sources:
- model: scb10x/typhoon-7b
layer_range: [0, 32]
- sources:
- model: chargoddard/storytime-13b
layer_range: [24, 32]
merge_method: passthrough
dtype: bfloat16
\``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "scb10x/typhoon-7b", "chargoddard/storytime-13b"]} | Manichik/TyphoonTime-passthrough | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"scb10x/typhoon-7b",
"chargoddard/storytime-13b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T13:13:33+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #scb10x/typhoon-7b #chargoddard/storytime-13b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# TyphoonTime-passthrough
TyphoonTime-passthrough is a merge of the following models using mergekit:
* scb10x/typhoon-7b
* chargoddard/storytime-13b
## Configuration
\ | [
"# TyphoonTime-passthrough\n\nTyphoonTime-passthrough is a merge of the following models using mergekit:\n* scb10x/typhoon-7b\n* chargoddard/storytime-13b",
"## Configuration\n\n\\"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #scb10x/typhoon-7b #chargoddard/storytime-13b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# TyphoonTime-passthrough\n\nTyphoonTime-passthrough is a merge of the following models using mergekit:\n* scb10x/typhoon-7b\n* chargoddard/storytime-13b",
"## Configuration\n\n\\"
] |