Datasets:
modelId (string, 5-138 chars) | author (string, 2-42 chars) | last_modified (date, 2020-02-15 to 2025-04-28) | downloads (int64, 0-223M) | likes (int64, 0-11.7k) | library_name (string, 438 classes) | tags (sequence, 1-4.05k items) | pipeline_tag (string, 54 classes) | createdAt (date, 2022-03-02 to 2025-04-28) | card (string, 11-1.01M chars)
---|---|---|---|---|---|---|---|---|---|
zhiyuanyou/DeQA-Score-LoRA-Mix3 | zhiyuanyou | "2025-03-25T14:16:13" | 21 | 0 | transformers | [
"transformers",
"mplug_owl2",
"image-to-text",
"en",
"arxiv:2501.11561",
"base_model:MAGAer13/mplug-owl2-llama2-7b",
"base_model:finetune:MAGAer13/mplug-owl2-llama2-7b",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-to-text | "2025-01-15T08:07:00" | ---
base_model:
- MAGAer13/mplug-owl2-llama2-7b
language:
- en
license: mit
library_name: transformers
pipeline_tag: image-to-text
---
# DeQA-Score-LoRA-Mix3
DeQA-Score (
[project page](https://depictqa.github.io/deqa-score/) /
[codes](https://github.com/zhiyuanyou/DeQA-Score) /
[paper](https://arxiv.org/abs/2501.11561)
) model weights, LoRA fine-tuned on the KonIQ, SPAQ, and KADID datasets.
This work is under our [DepictQA project](https://depictqa.github.io/).
## Non-reference IQA Results (PLCC / SRCC)
| | Fine-tune | KonIQ | SPAQ | KADID | PIPAL | LIVE-Wild | AGIQA | TID2013 | CSIQ |
|--------------|-----------|-----------|----------|----------|----------|-----------|----------|----------|----------|
| Q-Align (Baseline) | Fully | 0.945 / 0.938 | 0.933 / 0.931 | 0.935 / 0.934 | 0.409 / 0.420 | 0.887 / 0.883 | 0.788 / 0.733 | 0.829 / 0.808 | 0.876 / 0.845 |
| DeQA-Score (Ours) | LoRA | **0.956 / 0.944** | **0.939 / 0.935** | **0.953 / 0.951** | **0.481 / 0.481** | **0.903 / 0.890** | **0.806 / 0.754** | **0.851 / 0.821** | **0.900 / 0.860** |
If you find our work useful for your research and applications, please cite using the following BibTeX:
```bibtex
@inproceedings{deqa_score,
title={Teaching Large Language Models to Regress Accurate Image Quality Scores using Score Distribution},
author={You, Zhiyuan and Cai, Xin and Gu, Jinjin and Xue, Tianfan and Dong, Chao},
booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
year={2025},
}
``` |
mlfoundations-dev/llama3-1_8b_4o_annotated_olympiads | mlfoundations-dev | "2025-02-04T01:28:32" | 3,842 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-01T20:45:48" | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: llama3-1_8b_4o_annotated_olympiads
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-1_8b_4o_annotated_olympiads
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/4o_annotated_olympiads dataset.
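For quick inference, a standard `transformers` text-generation pipeline should work; the snippet below is a minimal sketch (the prompt and generation settings are illustrative, not from this card):
```python
# Minimal inference sketch; assumes a GPU with enough memory for a 7B model.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mlfoundations-dev/llama3-1_8b_4o_annotated_olympiads",
    device_map="auto",
)
out = generator("Prove that the sum of two even integers is even.", max_new_tokens=128)
print(out[0]["generated_text"])
```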
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 3
- total_train_batch_size: 96
- total_eval_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.0.2
- Tokenizers 0.20.3
|
ArthurMor4is/vit-base-patch16-224-finetuned-covid_ct_set_full | ArthurMor4is | "2023-08-15T13:27:03" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-08-14T13:41:52" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-finetuned-covid_ct_set_full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-covid_ct_set_full
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1225
- Accuracy: 0.9627
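The card omits a usage example; a minimal sketch with the standard image-classification pipeline follows (the image path is a placeholder):
```python
# Minimal sketch using the standard transformers image-classification pipeline.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="ArthurMor4is/vit-base-patch16-224-finetuned-covid_ct_set_full",
)
print(classifier("ct_scan.png"))  # placeholder path to a CT image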
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4343 | 0.99 | 29 | 0.1945 | 0.9298 |
| 0.2353 | 1.98 | 58 | 0.2052 | 0.9290 |
| 0.1395 | 2.97 | 87 | 0.2567 | 0.9075 |
| 0.1399 | 4.0 | 117 | 0.1225 | 0.9627 |
| 0.1186 | 4.96 | 145 | 0.1531 | 0.9521 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Free188/llama-merge-ch_alpaca_lora-quantized-7b | Free188 | "2023-04-22T14:54:11" | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"chemistry",
"text-classification",
"aa",
"dataset:fka/awesome-chatgpt-prompts",
"region:us"
] | text-classification | "2023-04-22T14:51:35" | ---
datasets:
- fka/awesome-chatgpt-prompts
language:
- aa
metrics:
- accuracy
library_name: adapter-transformers
pipeline_tag: text-classification
tags:
- chemistry
--- |
Katsie011/t5-small-finetuned-xsum | Katsie011 | "2023-04-15T15:19:43" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-04-15T07:47:38" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
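No usage snippet is provided; as a minimal sketch, the model should work with the standard summarization pipeline (the input text is a placeholder):
```python
# Minimal sketch using the standard transformers summarization pipeline.
from transformers import pipeline

summarizer = pipeline("summarization", model="Katsie011/t5-small-finetuned-xsum")
article = "The full text of a news article goes here..."  # placeholder input
print(summarizer(article, max_length=60, min_length=10))
```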
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
InternationalOlympiadAI/miniSD-diffusers | InternationalOlympiadAI | "2024-08-08T21:09:54" | 64 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-08-08T21:07:24" | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
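In the absence of author-provided code, here is a minimal sketch inferred from the `diffusers:StableDiffusionPipeline` tag (prompt and dtype are illustrative):
```python
# Minimal sketch, assuming the repo loads as a standard StableDiffusionPipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "InternationalOlympiadAI/miniSD-diffusers", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("out.png")
```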
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
baby-dev/e40209b6-0413-40cd-b61f-65c437733d04 | baby-dev | "2025-02-23T10:52:54" | 0 | 0 | peft | [
"peft",
"bloom",
"generated_from_trainer",
"base_model:bigscience/bloomz-560m",
"base_model:adapter:bigscience/bloomz-560m",
"region:us"
] | null | "2025-02-23T10:52:49" | ---
library_name: peft
tags:
- generated_from_trainer
base_model: bigscience/bloomz-560m
model-index:
- name: baby-dev/e40209b6-0413-40cd-b61f-65c437733d04
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# baby-dev/e40209b6-0413-40cd-b61f-65c437733d04
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1723
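As a PEFT adapter, the checkpoint is presumably loaded on top of the bloomz-560m base; a minimal sketch, assuming the standard PEFT workflow:
```python
# Minimal sketch: load the adapter onto its base model with PEFT (assumed workflow).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")
model = PeftModel.from_pretrained(base, "baby-dev/e40209b6-0413-40cd-b61f-65c437733d04")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m")
```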
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3 | stefan-it | "2023-10-17T22:57:31" | 8 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] | token-classification | "2023-10-06T23:57:41" | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: — Dramatiſch war der Stoff vor Sophokles von Äſchylos behandelt worden in
den Θροῇσσαι , denen vielleicht in der Trilogie das Stüc>"OnJw» κοίσις vorherging
, das Stück Σαλαμίνιαι folgte .
---
# Fine-tuned Flair Model on AjMC German NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC German](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmByT5 as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done via a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Thus, the inference widget does not currently work with hmByT5 on the Model Hub and is disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
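Local inference still works; a minimal sketch, assuming `byt5_embeddings.py` from the linked hmBench repository is importable on your path (the example sentence is illustrative):
```python
# Minimal local-inference sketch; byt5_embeddings.py must be on the Python path.
from byt5_embeddings import ByT5Embedding  # custom class required to deserialize the model
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load(
    "stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3"
)
sentence = Sentence("Sophokles behandelte den Stoff nach Aischylos .")
tagger.predict(sentence)
print(sentence.to_tagged_string())
```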
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.8892][1] | [0.8913][2] | [0.8867][3] | [0.8843][4] | [0.8828][5] | 88.69 ± 0.31 |
| bs4-e10-lr0.00015 | [0.8786][6] | [0.8793][7] | [0.883][8] | [0.8807][9] | [0.8722][10] | 87.88 ± 0.36 |
| bs8-e10-lr0.00016 | [0.8602][11] | [0.8684][12] | [0.8643][13] | [0.8643][14] | [0.8623][15] | 86.39 ± 0.27 |
| bs8-e10-lr0.00015 | [0.8551][16] | [0.8707][17] | [0.8599][18] | [0.8609][19] | [0.8612][20] | 86.16 ± 0.51 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
AmelieSchreiber/esm2_t6_8M_finetuned_cafa5 | AmelieSchreiber | "2023-08-29T10:48:38" | 109 | 0 | transformers | [
"transformers",
"pytorch",
"esm",
"text-classification",
"esm2",
"protein language model",
"pLM",
"biology",
"multilabel sequence classification",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-08-27T15:52:57" | ---
license: mit
language:
- en
library_name: transformers
tags:
- esm
- esm2
- protein language model
- pLM
- biology
- multilabel sequence classification
metrics:
- f1
- precision
- recall
---
# ESM-2 Fine-tuned CAFA-5
## ESM-2 for Protein Function Prediction
This is an experimental model fine-tuned from the
[esm2_t6_8M_UR50D](https://huggingface.co/facebook/esm2_t6_8M_UR50D) model
for multi-label classification. In particular, the model is fine-tuned on the CAFA-5 protein sequence dataset available
[here](https://huggingface.co/datasets/AmelieSchreiber/cafa_5). More precisely, the `train_sequences.fasta` file contains the
protein sequences used for training, and the
`train_terms.tsv` file contains the gene ontology protein function labels for each protein sequence. For more details on using
ESM-2 models for multi-label sequence classification, [see here](https://huggingface.co/docs/transformers/model_doc/esm).
Due to the potentially complicated class weighting required by the hierarchical ontology, further fine-tuning will be necessary.
## Training
The training/validation split of the data for this model is available [here](https://huggingface.co/datasets/AmelieSchreiber/cafa_5_train_val_split_1).
Macro
```
Epoch 5/5
Training loss: 0.06925179701577704
Validation Precision: 0.9821931289359406
Validation Recall: 0.999934039607066
Validation MultilabelF1Score: 0.9907671213150024
Validation AUROC: 0.5831210653861931
```
Micro
```
Validation Precision: 0.9822020821532512
Validation Recall: 0.9999363677941498
```
## Using the model
First, download the `train_sequences.fasta` file and the `train_terms.tsv` file, and provide the local paths in the code below:
```python
import os
import numpy as np
import torch
from transformers import AutoTokenizer, EsmForSequenceClassification, AdamW
from torch.nn.functional import binary_cross_entropy_with_logits
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, precision_score, recall_score
# from accelerate import Accelerator
from Bio import SeqIO
# Step 1: Data Preprocessing (Replace with your local paths)
fasta_file = "data/train_sequences.fasta"
tsv_file = "data/train_terms.tsv"
fasta_data = {}
tsv_data = {}
for record in SeqIO.parse(fasta_file, "fasta"):
    fasta_data[record.id] = str(record.seq)
with open(tsv_file, 'r') as f:
    for line in f:
        parts = line.strip().split("\t")
        tsv_data[parts[0]] = parts[1:]
# tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
seq_length = 1022
# tokenized_data = tokenizer(list(fasta_data.values()), padding=True, truncation=True, return_tensors="pt", max_length=seq_length)
unique_terms = list(set(term for terms in tsv_data.values() for term in terms))
```
Second, download the file `go-basic.obo` [from here](https://huggingface.co/datasets/AmelieSchreiber/cafa_5)
and store it locally, then provide the local path in the code below:
```python
import torch
from transformers import AutoTokenizer, EsmForSequenceClassification
from sklearn.metrics import precision_recall_fscore_support
# 1. Parsing the go-basic.obo file
def parse_obo_file(file_path):
    with open(file_path, 'r') as f:
        data = f.read().split("[Term]")
    terms = []
    for entry in data[1:]:
        lines = entry.strip().split("\n")
        term = {}
        for line in lines:
            if line.startswith("id:"):
                term["id"] = line.split("id:")[1].strip()
            elif line.startswith("name:"):
                term["name"] = line.split("name:")[1].strip()
            elif line.startswith("namespace:"):
                term["namespace"] = line.split("namespace:")[1].strip()
            elif line.startswith("def:"):
                term["definition"] = line.split("def:")[1].split('"')[1]
        terms.append(term)
    return terms
parsed_terms = parse_obo_file("go-basic.obo") # Replace `go-basic.obo` with your path
# 2. Load the saved model and tokenizer
model_path = "AmelieSchreiber/esm2_t6_8M_finetuned_cafa5"
loaded_model = EsmForSequenceClassification.from_pretrained(model_path)
loaded_tokenizer = AutoTokenizer.from_pretrained(model_path)
# 3. The predict_protein_function function
def predict_protein_function(sequence, model, tokenizer, go_terms):
    inputs = tokenizer(sequence, return_tensors="pt", padding=True, truncation=True, max_length=1022)
    model.eval()
    with torch.no_grad():
        outputs = model(**inputs)
        predictions = torch.sigmoid(outputs.logits)
    predicted_indices = torch.where(predictions > 0.05)[1].tolist()
    functions = []
    for idx in predicted_indices:
        term_id = unique_terms[idx]  # Use the unique_terms list from your training script
        for term in go_terms:
            if term["id"] == term_id:
                functions.append(term["name"])
                break
    return functions
# 4. Predicting protein function for an example sequence
example_sequence = "MAYLGSLVQRRLELASGDRLEASLGVGSELDVRGDRVKAVGSLDLEEGRLEQAGVSMA" # Replace with your protein sequence
predicted_functions = predict_protein_function(example_sequence, loaded_model, loaded_tokenizer, parsed_terms)
print(predicted_functions)
```
|
huggingtweets/alexisgallagher | huggingtweets | "2021-05-21T18:10:44" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05" | ---
language: en
thumbnail: https://www.huggingtweets.com/alexisgallagher/1616871355671/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1274068177215827968/g9sB0dE1_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">alexis 🤖 AI Bot </div>
<div style="font-size: 15px">@alexisgallagher bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@alexisgallagher's tweets](https://twitter.com/alexisgallagher).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 104 |
| Short tweets | 232 |
| Tweets kept | 2914 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/28ak07sx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @alexisgallagher's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1kmu6pnu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1kmu6pnu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/alexisgallagher')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
sardoo8/finetuning-sentiment-model-3000-samples | sardoo8 | "2024-06-26T10:48:42" | 98 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-26T10:41:48" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3197
- Accuracy: 0.87
- F1: 0.8704
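A minimal usage sketch with the standard text-classification pipeline (the example input is illustrative):
```python
# Minimal sketch using the standard transformers text-classification pipeline.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="sardoo8/finetuning-sentiment-model-3000-samples",
)
print(classifier("I really enjoyed this movie!"))
```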
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.0
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.13.3
|
LoneStriker/dolphin-2.9-llama3-70b-2.4bpw-h6-exl2 | LoneStriker | "2024-04-25T21:33:34" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:abacusai/SystemChat-1.1",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | "2024-04-25T21:23:58" | ---
license: llama3
language:
- en
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- HuggingFaceH4/ultrachat_200k
- microsoft/orca-math-word-problems-200k
- abacusai/SystemChat-1.1
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
---
# Dolphin 2.9 Llama 3 70b 🐬
Curated and trained by Eric Hartford, Lucas Atkins, and Fernando Fernandes, with help from the community of Cognitive Computations
Discord: https://discord.gg/8fbBeC7ZGx
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />
Our appreciation for the sponsors of Dolphin 2.9:
- [Crusoe Cloud](https://crusoe.ai/) - provided excellent on-demand 8xH100 node
This model is based on Llama-3-70b and is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE).
The base model has 8k context, and the qLoRA fine-tuning used an 8k sequence length.
It took 2.5 days on an 8xH100 node provided by Crusoe Cloud.
This model was trained FFT on all parameters, using the ChatML prompt template format.
example:
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
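In practice you would not hand-build this string; below is a minimal sketch of rendering it with a tokenizer's chat template (this assumes the repo's tokenizer carries the ChatML template; the exl2 weights themselves load through ExLlamaV2, which is not shown here):
```python
# Minimal sketch: render the ChatML prompt via the tokenizer's chat template.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("LoneStriker/dolphin-2.9-llama3-70b-2.4bpw-h6-exl2")
messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Write a haiku about the sea."},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # should match the ChatML format shown above
```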
Dolphin-2.9 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.
Dolphin is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models. You are responsible for any content you create using this model. Enjoy responsibly.
Dolphin is licensed according to Meta's Llama license. I grant permission for any use, including commercial, that complies with Meta's Llama-3 license. Dolphin was trained on data generated from GPT4, among other models.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## Evals

## Quants
- https://huggingface.co/crusoeai/dolphin-2.9-llama3-70b-GGUF
- https://huggingface.co/crusoeai/dolphin2.9-llama3-70b-2.25bpw-exl2
- https://huggingface.co/crusoeai/dolphin2.9-llama3-70b-2.5bpw-exl2
- https://huggingface.co/crusoeai/dolphin2.9-llama3-70b-4.5bpw-exl2
|
nhung02/89a05e93-2200-4be6-b952-303a03b55798 | nhung02 | "2025-01-25T15:54:18" | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Coder-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Coder-1.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-25T15:39:36" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 89a05e93-2200-4be6-b952-303a03b55798
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 1b5ffad465136296_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/1b5ffad465136296_train_data.json
  type:
    field_input: input
    field_instruction: instruction
    field_output: output
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung02/89a05e93-2200-4be6-b952-303a03b55798
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/1b5ffad465136296_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9b90f9ef-707a-4ff2-89f3-7f6ad5325643
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 9b90f9ef-707a-4ff2-89f3-7f6ad5325643
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 89a05e93-2200-4be6-b952-303a03b55798
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6178
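A minimal sketch of loading the LoRA adapter (and optionally merging it into the base for deployment), assuming the standard PEFT workflow:
```python
# Minimal sketch: attach the LoRA adapter to its base model, then merge (assumed workflow).
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2.5-Coder-1.5B-Instruct")
model = PeftModel.from_pretrained(base, "nhung02/89a05e93-2200-4be6-b952-303a03b55798")
merged = model.merge_and_unload()  # folds the LoRA weights into the base for inference
```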
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6319 | 0.9112 | 200 | 0.6178 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Wood0529/StockDSR1-1.5B | Wood0529 | "2025-02-28T16:08:01" | 0 | 0 | null | [
"gguf",
"qwen2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-28T16:03:16" | | Step | Training Loss |
|-----:|--------------:|
| 1 | 1.235900 |
| 2 | 1.241400 |
| 3 | 1.240100 |
| 4 | 1.209100 |
| 5 | 1.175600 |
| 6 | 1.184200 |
| 7 | 1.119700 |
| 8 | 1.134800 |
| 9 | 1.112700 |
| 10 | 1.110000 |
| 11 | 1.093000 |
| 12 | 1.081100 |
| 13 | 1.059400 |
| 14 | 1.075500 |
| 15 | 1.038300 |
| 16 | 1.064300 |
| 17 | 1.031100 |
| 18 | 1.021900 |
| 19 | 1.003600 |
| 20 | 1.011500 |
| 21 | 1.013300 |
| 22 | 1.001500 |
| 23 | 0.987100 |
| 24 | 0.970400 |
| 25 | 0.963100 |
| 26 | 0.964500 |
| 27 | 0.935400 |
| 28 | 0.952800 |
| 29 | 0.985400 |
| 30 | 1.002500 |
| 31 | 0.993600 |
| 32 | 0.992000 |
| 33 | 0.945400 |
| 34 | 0.948600 |
| 35 | 0.909800 |
| 36 | 0.953800 |
| 37 | 0.946600 |
| 38 | 0.951000 |
| 39 | 0.942400 |
| 40 | 0.932900 |
| 41 | 0.969600 |
| 42 | 0.928800 |
| 43 | 0.944500 |
| 44 | 0.941800 |
| 45 | 0.914500 |
| 46 | 0.946700 |
| 47 | 0.935600 |
| 48 | 0.942100 |
| 49 | 0.932600 |
| 50 | 0.904400 |
| 51 | 0.960200 |
| 52 | 0.943500 |
| 53 | 0.949000 |
| 54 | 0.955200 |
| 55 | 0.955700 |
| 56 | 0.955200 |
| 57 | 0.946700 |
| 58 | 0.920500 |
| 59 | 0.926900 |
| 60 | 0.928600 |
| 61 | 0.933700 |
| 62 | 0.906900 |
| 63 | 0.934200 |
| 64 | 0.920800 |
| 65 | 0.941200 |
| 66 | 0.924700 |
| 67 | 0.914700 |
| 68 | 0.923500 |
| 69 | 0.945200 |
| 70 | 0.931700 |
| 71 | 0.939300 |
| 72 | 0.956000 |
| 73 | 0.957700 |
| 74 | 0.930700 |
| 75 | 0.936200 | |
glif-loradex-trainer/Swap_agrawal14_kuki_retro_orange | glif-loradex-trainer | "2025-03-28T07:01:33" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | "2025-03-28T07:01:23" | |
HPLT/translate-uk-en-v2.0-hplt_opus | HPLT | "2025-04-06T23:19:09" | 0 | 0 | null | [
"translation",
"uk",
"en",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:18:51" |
---
language:
- uk
- en
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the Ukrainian-English (uk->en) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: Ukrainian
* Target language: English
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.uk-en.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
kostiantynk1205/b5eeaba0-eed5-4278-8fa1-631cd40316b6 | kostiantynk1205 | "2025-01-14T07:01:32" | 21 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M",
"base_model:adapter:unsloth/SmolLM-360M",
"license:apache-2.0",
"region:us"
] | null | "2025-01-14T06:47:34" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b5eeaba0-eed5-4278-8fa1-631cd40316b6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 44550b09c3b3c037_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/44550b09c3b3c037_train_data.json
  type:
    field_input: post
    field_instruction: query
    field_output: summary
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk1205/b5eeaba0-eed5-4278-8fa1-631cd40316b6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/44550b09c3b3c037_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 35396ba4-675d-4701-920b-9b7f4a9ad59c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 35396ba4-675d-4701-920b-9b7f4a9ad59c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b5eeaba0-eed5-4278-8fa1-631cd40316b6
This model is a fine-tuned version of [unsloth/SmolLM-360M](https://huggingface.co/unsloth/SmolLM-360M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0002 | 3 | nan |
| 0.0 | 0.0004 | 6 | nan |
| 0.0 | 0.0006 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
MonteXiaofeng/Tranport-llama3_1_8B_instruct | MonteXiaofeng | "2024-09-29T03:18:57" | 12 | 0 | null | [
"safetensors",
"llama",
"交通运输",
"语言模型",
"chatmodel",
"dataset:BAAI/IndustryInstruction_Transportation",
"dataset:BAAI/IndustryInstruction",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2024-09-23T08:24:33" | ---
license: apache-2.0
datasets:
- BAAI/IndustryInstruction_Transportation
- BAAI/IndustryInstruction
base_model:
- meta-llama/Meta-Llama-3.1-8B-Instruct
tags:
- 交通运输
- 语言模型
- chatmodel
---
This model is fine-tuned from llama3.1-8b-instruct on the [BAAI/IndustryInstruction_Transportation](https://huggingface.co/datasets/BAAI/IndustryInstruction_Transportation) dataset; for dataset details, see the repo: [BAAI/IndustryInstruction](https://huggingface.co/datasets/BAAI/IndustryInstruction)
## training params
The training framework is llama-factory, template=llama3
```
learning_rate=1e-5
lr_scheduler_type=cosine
max_length=2048
warmup_ratio=0.05
batch_size=64
epoch=10
```
The best checkpoint was selected by evaluation loss.
## evaluation
Since the only other instruction dataset I found in the transportation domain is [DUOMO-Lab/Transgpt_sft_v2](https://huggingface.co/datasets/DUOMO-Lab/Transgpt_sft_v2), and in order to remove the influence of the base model, I also fine-tuned llama3.1-8b-instruct on that data and compared the result against our model.
The evaluation method: on the validation set of each dataset, GPT-4 compares the two models' responses and labels each pair as good, tie, or loss. The evaluation results are as follows:

## How to use
```python
# !/usr/bin/env python
# -*- coding:utf-8 -*-
# ==================================================================
# [Author] : xiaofeng
# [Descriptions] :
# ==================================================================
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
llama3_jinja = """{% if messages[0]['role'] == 'system' %}
{% set offset = 1 %}
{% else %}
{% set offset = 0 %}
{% endif %}
{{ bos_token }}
{% for message in messages %}
{% if (message['role'] == 'user') != (loop.index0 % 2 == offset) %}
{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}
{% endif %}
{{ '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n' + message['content'] | trim + '<|eot_id|>' }}
{% endfor %}
{% if add_generation_prompt %}
{{ '<|start_header_id|>' + 'assistant' + '<|end_header_id|>\n\n' }}
{% endif %}"""
dtype = torch.bfloat16
model_dir = "MonteXiaofeng/Tranport-llama3_1_8B_instruct"
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    device_map="cuda",
    torch_dtype=dtype,
)
tokenizer = AutoTokenizer.from_pretrained(model_dir)
tokenizer.chat_template = llama3_jinja # update template
message = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "私人交通工具的发展对经济有什么影响?"},
]
prompt = tokenizer.apply_chat_template(
    message, tokenize=False, add_generation_prompt=True
)
print(prompt)
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
prompt_length = len(inputs[0])
print(f"prompt_length:{prompt_length}")
generating_args = {
    "do_sample": True,
    "temperature": 1.0,
    "top_p": 0.5,
    "top_k": 15,
    "max_new_tokens": 512,
}
generate_output = model.generate(input_ids=inputs.to(model.device), **generating_args)
response_ids = generate_output[:, prompt_length:]
response = tokenizer.batch_decode(
    response_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
print(response)
"""
私人交通工具的发展对经济有着深远的影响。首先,私人交通工具的发展可以促进汽车制造业的繁荣。随着私人交通工具的需求增加,汽车制造商将面临更大的市场需求,从而带动产业链的发展,创造就业机会,增加经济收入。其次,私人交通工具的发展也会带动相关行业的发展,如燃料供应、维修服务和保险等。这些行业的发展将为经济增长做出贡献。此外,私人交通工具的发展还会促进城市交通的便利性,提高人们的生活质量,从而带动消费,刺激经济发展。然而,私人交通工具的发展也会带来一些负面影响,如交通拥堵和环境污染等问题。因此,政府需要采取相应的政策措施来平衡经济发展和环境保护的需要。总的来说,私人交通工具的发展对经济有着重要的影响,需要综合考虑各种因素进行合理规划和管理。
"""
``` |
mradermacher/TimeZero-ActivityNet-7B-GGUF | mradermacher | "2025-04-11T20:47:36" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:wwwyyy/TimeZero-ActivityNet-7B",
"base_model:quantized:wwwyyy/TimeZero-ActivityNet-7B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-11T20:35:04" | ---
base_model: wwwyyy/TimeZero-ActivityNet-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/wwwyyy/TimeZero-ActivityNet-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/TimeZero-ActivityNet-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
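A minimal sketch with `llama-cpp-python`, assuming the Q4_K_M file has been downloaded locally (prompt and settings are illustrative):
```python
# Minimal sketch: run a GGUF quant locally with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="TimeZero-ActivityNet-7B.Q4_K_M.gguf", n_ctx=4096)
out = llm("Describe the main activity shown between 10s and 25s.", max_tokens=128)
print(out["choices"][0]["text"])
```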
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TimeZero-ActivityNet-7B-GGUF/resolve/main/TimeZero-ActivityNet-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/TimeZero-ActivityNet-7B-GGUF/resolve/main/TimeZero-ActivityNet-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/TimeZero-ActivityNet-7B-GGUF/resolve/main/TimeZero-ActivityNet-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TimeZero-ActivityNet-7B-GGUF/resolve/main/TimeZero-ActivityNet-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/TimeZero-ActivityNet-7B-GGUF/resolve/main/TimeZero-ActivityNet-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/TimeZero-ActivityNet-7B-GGUF/resolve/main/TimeZero-ActivityNet-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TimeZero-ActivityNet-7B-GGUF/resolve/main/TimeZero-ActivityNet-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TimeZero-ActivityNet-7B-GGUF/resolve/main/TimeZero-ActivityNet-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/TimeZero-ActivityNet-7B-GGUF/resolve/main/TimeZero-ActivityNet-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/TimeZero-ActivityNet-7B-GGUF/resolve/main/TimeZero-ActivityNet-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/TimeZero-ActivityNet-7B-GGUF/resolve/main/TimeZero-ActivityNet-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/TimeZero-ActivityNet-7B-GGUF/resolve/main/TimeZero-ActivityNet-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
CharlesLi/llama_2_sky_safe_o1_4o_reflect_4000_500_full | CharlesLi | "2025-01-13T09:25:12" | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:generator",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-7b-chat-hf",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-13T08:47:29" | ---
library_name: transformers
license: llama2
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: llama_2_sky_safe_o1_4o_reflect_4000_500_full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama_2_sky_safe_o1_4o_reflect_4000_500_full
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6698
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8123 | 0.3396 | 100 | 0.7191 |
| 0.6702 | 0.6791 | 200 | 0.6827 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
Frank0303/Earningscall_Sentiment_model | Frank0303 | "2024-06-06T13:04:47" | 4 | 0 | transformers | [
"transformers",
"roberta",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-06T12:56:13" | ---
license: apache-2.0
---
|
Best000/e865856d-f1c7-4ec3-b176-119c1f7bc31a | Best000 | "2025-01-19T04:58:36" | 11 | 0 | peft | [
"peft",
"safetensors",
"opt",
"axolotl",
"generated_from_trainer",
"base_model:facebook/opt-350m",
"base_model:adapter:facebook/opt-350m",
"license:other",
"region:us"
] | null | "2025-01-19T04:57:17" | ---
library_name: peft
license: other
base_model: facebook/opt-350m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e865856d-f1c7-4ec3-b176-119c1f7bc31a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: facebook/opt-350m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 09f2e168361c64eb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/09f2e168361c64eb_train_data.json
type:
field_input: rejected
field_instruction: prompt
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/e865856d-f1c7-4ec3-b176-119c1f7bc31a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/09f2e168361c64eb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ec5094d5-fc78-4451-abdc-291157d3224b
wandb_project: Birthday-SN56-16-Gradients-On-Demand
wandb_run: your_name
wandb_runid: ec5094d5-fc78-4451-abdc-291157d3224b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e865856d-f1c7-4ec3-b176-119c1f7bc31a
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9448
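A minimal sketch for trying the adapter, assuming standard PEFT usage on top of the base model named above (the prompt is a placeholder):
```python
# Load the LoRA adapter onto the facebook/opt-350m base model
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
model = PeftModel.from_pretrained(base, "Best000/e865856d-f1c7-4ec3-b176-119c1f7bc31a")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```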
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.7566 | 0.0003 | 1 | 3.2728 |
| 13.6193 | 0.0010 | 3 | 3.2565 |
| 12.4585 | 0.0020 | 6 | 3.1574 |
| 13.1228 | 0.0030 | 9 | 2.9448 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
bowilleatyou/5cdadc98-3b5a-4b9e-9104-11c92b402361 | bowilleatyou | "2025-04-04T11:32:08" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-04T11:17:38" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ISTA-DASLab/DeepSeek-R1-Distill-Llama-70B-HIGGS-4bit | ISTA-DASLab | "2025-02-12T10:41:12" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"higgs",
"region:us"
] | text-generation | "2025-02-12T10:22:15" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
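Since the card leaves this section blank, here is a hedged, generic sketch. It assumes the checkpoint loads like any pre-quantized transformers model; HIGGS quantization support in transformers additionally depends on the FLUTE kernels, which may not be available on every setup:
```python
# Hedged sketch: generic loading of a pre-quantized checkpoint.
# HIGGS support may require extra kernels (e.g. pip install flute-kernel).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ISTA-DASLab/DeepSeek-R1-Distill-Llama-70B-HIGGS-4bit"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("What is 17 * 23?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```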
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Liamdu/ppo-SnowballTarget | Liamdu | "2024-02-26T12:44:50" | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | "2024-02-26T12:44:45" | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Liamdu/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
haejiness/tmp-ner | haejiness | "2024-11-25T13:48:14" | 179 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-11-25T13:47:52" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fzzhang/pearl_lora_b8 | fzzhang | "2024-02-20T07:28:43" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:louisbrulenaudet/Pearl-7B-slerp",
"base_model:adapter:louisbrulenaudet/Pearl-7B-slerp",
"license:apache-2.0",
"region:us"
] | null | "2024-02-20T04:01:16" | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: louisbrulenaudet/Pearl-7B-slerp
model-index:
- name: pearl_lora_b8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pearl_lora_b8
This model is a fine-tuned version of [louisbrulenaudet/Pearl-7B-slerp](https://huggingface.co/louisbrulenaudet/Pearl-7B-slerp) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 |
iloncka/spnasnet_100.rmsp_in1k_ep_20 | iloncka | "2023-12-25T15:02:09" | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | "2023-12-25T14:59:12" | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
research-backup/roberta-large-semeval2012-average-no-mask-prompt-a-loob-conceptnet-validated | research-backup | "2022-09-21T09:02:10" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2022-09-21T08:31:53" | ---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-average-no-mask-prompt-a-loob-conceptnet-validated
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.8261309523809524
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6417112299465241
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6409495548961425
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.7871039466370205
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.946
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5921052631578947
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6527777777777778
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9100497212596053
- name: F1 (macro)
type: f1_macro
value: 0.9039162913439194
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8556338028169014
- name: F1 (macro)
type: f1_macro
value: 0.6945383312136448
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6852654387865655
- name: F1 (macro)
type: f1_macro
value: 0.6774872040266507
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9572233428392571
- name: F1 (macro)
type: f1_macro
value: 0.879744388826254
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9037919147602632
- name: F1 (macro)
type: f1_macro
value: 0.9024843094207563
---
# relbert/roberta-large-semeval2012-average-no-mask-prompt-a-loob-conceptnet-validated
RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on
[relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity).
Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-a-loob-conceptnet-validated/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.6417112299465241
- Accuracy on SAT: 0.6409495548961425
- Accuracy on BATS: 0.7871039466370205
- Accuracy on U2: 0.5921052631578947
- Accuracy on U4: 0.6527777777777778
- Accuracy on Google: 0.946
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-a-loob-conceptnet-validated/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.9100497212596053
- Micro F1 score on CogALexV: 0.8556338028169014
- Micro F1 score on EVALution: 0.6852654387865655
- Micro F1 score on K&H+N: 0.9572233428392571
- Micro F1 score on ROOT09: 0.9037919147602632
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-a-loob-conceptnet-validated/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.8261309523809524
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-a-loob-conceptnet-validated")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, )
```
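As a hypothetical follow-up, the returned vectors can be compared directly: word pairs standing in the same relation should get similar embeddings. This sketch assumes `get_embedding` also accepts a batch of pairs, as in the project README:
```python
import numpy as np

def cosine(a, b):
    # plain cosine similarity between two embedding vectors
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

pairs = [['Tokyo', 'Japan'], ['Paris', 'France'], ['dog', 'bone']]
vectors = model.get_embedding(pairs)  # assumed: one 1024-d vector per pair
print(cosine(vectors[0], vectors[1]))  # same relation -> higher similarity
print(cosine(vectors[0], vectors[2]))  # unrelated pair -> lower similarity
```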
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average_no_mask
- data: relbert/semeval2012_relational_similarity
- template_mode: manual
- template: Today, I finally discovered the relation between <subj> and <obj> : <subj> is the <mask> of <obj>
- loss_function: info_loob
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 21
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 640
- gradient_accumulation: 8
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-a-loob-conceptnet-validated/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
Eleven/xlm-roberta-base-finetuned-panx-de-fr | Eleven | "2022-07-05T15:59:42" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2022-07-05T15:37:17" | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1644
- F1: 0.8617
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2891 | 1.0 | 715 | 0.1780 | 0.8288 |
| 0.1471 | 2.0 | 1430 | 0.1627 | 0.8509 |
| 0.0947 | 3.0 | 2145 | 0.1644 | 0.8617 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
moussaKam/frugalscore_medium_deberta_bert-score | moussaKam | "2022-02-01T10:51:45" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2110.08559",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05" | # FrugalScore
FrugalScore is an approach to learning a fixed, low-cost version of any expensive NLG metric while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project github: https://github.com/moussaKam/FrugalScore
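A hedged sketch of scoring a candidate/reference pair with one of these checkpoints — it assumes the checkpoints are plain sequence-regression models over sentence pairs, as in the project repository, and the sentences are placeholders:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "moussaKam/frugalscore_medium_deberta_bert-score"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

candidate = "A cat sat on the mat."
reference = "There was a cat sitting on the mat."
inputs = tokenizer(candidate, reference, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # single regression output
print(score)
```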
The pretrained checkpoints presented in the paper:
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore | |
bibom2001/whisper0 | bibom2001 | "2024-10-25T10:56:35" | 85 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-10-24T13:40:21" | ---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper0
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.5080
- Wer Ortho: 99.8700
- Wer: 10.1070
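For a quick sanity check of the transcription quality, the checkpoint can be used with the standard ASR pipeline — a minimal sketch, where the audio file name is a placeholder:
```python
from transformers import pipeline

# load this checkpoint into the automatic-speech-recognition pipeline
asr = pipeline("automatic-speech-recognition", model="bibom2001/whisper0")
print(asr("sample.wav")["text"])  # path to any local audio file
```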
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 1
- training_steps: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
| 3.778 | 0.0031 | 5 | 4.5080 | 99.8700 | 10.1070 |
### Framework versions
- Transformers 4.45.1
- Pytorch 1.12.1
- Datasets 3.0.1
- Tokenizers 0.20.0
|
MurDanya/llm-course-hw3-lora | MurDanya | "2025-04-12T15:11:11" | 0 | 0 | null | [
"safetensors",
"mistral",
"en",
"dataset:cardiffnlp/tweet_eval",
"base_model:OuteAI/Lite-Oute-1-300M-Instruct",
"base_model:finetune:OuteAI/Lite-Oute-1-300M-Instruct",
"region:us"
] | null | "2025-04-12T15:03:28" | |
Luongdzung/gpt-neo-1.3B-sft-che-lora-ALL-WEIGHT | Luongdzung | "2025-02-20T03:32:16" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-20T03:29:24" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
van-ng/distilhubert-finetuned-gtzan-v2 | van-ng | "2024-02-20T02:04:03" | 159 | 0 | transformers | [
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | "2024-02-19T17:35:57" | ---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan-v2
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: gtzan
type: gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.83
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan-v2
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the gtzan dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6766
- Accuracy: 0.83
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0361 | 1.0 | 113 | 1.8915 | 0.41 |
| 1.3728 | 2.0 | 226 | 1.2725 | 0.64 |
| 1.0442 | 3.0 | 339 | 0.9188 | 0.78 |
| 0.9614 | 4.0 | 452 | 0.8790 | 0.7 |
| 0.6945 | 5.0 | 565 | 0.6933 | 0.79 |
| 0.3976 | 6.0 | 678 | 0.6891 | 0.79 |
| 0.345 | 7.0 | 791 | 0.6091 | 0.81 |
| 0.1068 | 8.0 | 904 | 0.5905 | 0.81 |
| 0.1646 | 9.0 | 1017 | 0.5809 | 0.82 |
| 0.1079 | 10.0 | 1130 | 0.6527 | 0.81 |
| 0.0311 | 11.0 | 1243 | 0.6393 | 0.86 |
| 0.0491 | 12.0 | 1356 | 0.6766 | 0.83 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.13.2
|
lesso/b5a9fdfb-57ef-4354-b282-c1350e119997 | lesso | "2025-02-08T23:48:40" | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M-Instruct",
"base_model:adapter:unsloth/SmolLM-360M-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-02-08T03:12:32" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b5a9fdfb-57ef-4354-b282-c1350e119997
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# b5a9fdfb-57ef-4354-b282-c1350e119997
This model is a fine-tuned version of [unsloth/SmolLM-360M-Instruct](https://huggingface.co/unsloth/SmolLM-360M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3058
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000203
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 50
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0019 | 1 | 2.7677 |
| 0.6843 | 0.0946 | 50 | 0.4978 |
| 0.4247 | 0.1891 | 100 | 0.4036 |
| 0.3512 | 0.2837 | 150 | 0.3705 |
| 0.3371 | 0.3783 | 200 | 0.3270 |
| 0.3119 | 0.4728 | 250 | 0.3154 |
| 0.288 | 0.5674 | 300 | 0.3046 |
| 0.2681 | 0.6619 | 350 | 0.3054 |
| 0.2736 | 0.7565 | 400 | 0.3058 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
shihaozou/cv_data_50000_step5k | shihaozou | "2024-07-09T11:35:15" | 29 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-07-09T10:16:42" | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
HBDX/Seq-TransfoRNA | HBDX | "2024-06-20T12:47:41" | 13 | 0 | transformers | [
"transformers",
"safetensors",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"license:gpl",
"endpoints_compatible",
"region:us"
] | null | "2024-06-10T11:31:47" | ---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
license: gpl
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed]
## Steps to run model
- First install [transforna](https://github.com/gitHBDX/TransfoRNA/tree/master)
- Example code:
```python
from transforna import GeneEmbeddModel, RnaTokenizer
import torch

model_name = 'Seq'
model_path = f"HBDX/{model_name}-TransfoRNA"

# Load model and tokenizer
model = GeneEmbeddModel.from_pretrained(model_path)
model.eval()

# Initialize the tokenizer and encode two example RNA sequences
tokenizer = RnaTokenizer.from_pretrained(model_path, model_name=model_name)
output = tokenizer(['AAAGTCGGAGGTTCGAAGACGATCAGATAC', 'TTTTCGGAACTGAGGCCATGATTAAGAGGG'])

# Inference: gene_embedd is the latent-space representation of the input sequences
gene_embedd, _, activations, attn_scores_first, attn_scores_second = model(output['input_ids'])

# Get sub-class labels
sub_class_labels = model.convert_ids_to_labels(activations)

# Get major-class labels
major_class_labels = model.convert_subclass_to_majorclass(sub_class_labels)
```
|
mcparty2/distilbert-base-uncased-finetuned-emotion | mcparty2 | "2023-09-27T05:03:06" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-09-27T04:33:52" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9263847378294227
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2151
- Accuracy: 0.9265
- F1: 0.9264
## Model description
More information needed
## Intended uses & limitations
More information needed
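A minimal inference sketch for trying the classifier (the pipeline task follows this card's tags; the example text and output are illustrative):
```python
from transformers import pipeline

# Load the fine-tuned emotion classifier from the Hub
classifier = pipeline(
    "text-classification",
    model="mcparty2/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I can't wait to see you again!"))
# e.g. [{'label': 'joy', 'score': 0.98}] -- labels follow the emotion dataset
```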
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.815 | 1.0 | 250 | 0.3069 | 0.915 | 0.9144 |
| 0.2449 | 2.0 | 500 | 0.2151 | 0.9265 | 0.9264 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Lddz/Lalibela | Lddz | "2025-03-29T13:38:29" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-03-29T13:19:07" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: lalibela
---
# Lalibela
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `lalibela` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Lddz/Lalibela', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
albertus-sussex/veriscrape-sbert-movie-reference_2_to_verify_8-fold-1 | albertus-sussex | "2025-03-30T17:12:48" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"new",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:3083",
"loss:TripletLoss",
"custom_code",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:Alibaba-NLP/gte-base-en-v1.5",
"base_model:finetune:Alibaba-NLP/gte-base-en-v1.5",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-03-30T16:46:35" | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:3083
- loss:TripletLoss
base_model: Alibaba-NLP/gte-base-en-v1.5
widget:
- source_sentence: Michael Caton-Jones
sentences:
- title
- director
- Donaldson
- Mr. Deeds
- source_sentence: The Road Home
sentences:
- NR
- title
- mpaa_rating
- Mrs. Parker and the Vicious Circle
- source_sentence: N
sentences:
- mpaa_rating
- R
- director
- Lee
- source_sentence: Adventures in Babysitting
sentences:
- title
- Beverly Hills Cop
- G
- mpaa_rating
- source_sentence: Yimou
sentences:
- Paul Newman
- director
- R
- mpaa_rating
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
- silhouette_cosine
- silhouette_euclidean
model-index:
- name: SentenceTransformer based on Alibaba-NLP/gte-base-en-v1.5
results:
- task:
type: triplet
name: Triplet
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy
value: 1.0
name: Cosine Accuracy
- type: cosine_accuracy
value: 1.0
name: Cosine Accuracy
- task:
type: silhouette
name: Silhouette
dataset:
name: Unknown
type: unknown
metrics:
- type: silhouette_cosine
value: 0.9431755542755127
name: Silhouette Cosine
- type: silhouette_euclidean
value: 0.8039237856864929
name: Silhouette Euclidean
- type: silhouette_cosine
value: 0.9402034878730774
name: Silhouette Cosine
- type: silhouette_euclidean
value: 0.7980138063430786
name: Silhouette Euclidean
---
# SentenceTransformer based on Alibaba-NLP/gte-base-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Alibaba-NLP/gte-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) <!-- at revision a829fd0e060bb84554da0dfd354d0de0f7712b7f -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("albertus-sussex/veriscrape-sbert-movie-reference_2_to_verify_8-fold-1")
# Run inference
sentences = [
'Yimou',
'Paul Newman',
'R',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:--------|
| **cosine_accuracy** | **1.0** |
#### Silhouette
* Evaluated with <code>veriscrape.training.SilhouetteEvaluator</code>
| Metric | Value |
|:----------------------|:-----------|
| **silhouette_cosine** | **0.9432** |
| silhouette_euclidean | 0.8039 |
#### Triplet
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:--------|
| **cosine_accuracy** | **1.0** |
#### Silhouette
* Evaluated with <code>veriscrape.training.SilhouetteEvaluator</code>
| Metric | Value |
|:----------------------|:-----------|
| **silhouette_cosine** | **0.9402** |
| silhouette_euclidean | 0.798 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 3,083 training samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative</code>, <code>pos_attr_name</code>, and <code>neg_attr_name</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative | pos_attr_name | neg_attr_name |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:-------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|
| type | string | string | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 4.63 tokens</li><li>max: 22 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 4.64 tokens</li><li>max: 22 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 4.68 tokens</li><li>max: 20 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.7 tokens</li><li>max: 6 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.71 tokens</li><li>max: 6 tokens</li></ul> |
* Samples:
| anchor | positive | negative | pos_attr_name | neg_attr_name |
|:-----------------------------|:-------------------------------|:----------------------------------|:----------------------|:----------------------|
| <code>George Stevens</code> | <code>Guédiguian</code> | <code>The Spanish Prisoner</code> | <code>director</code> | <code>title</code> |
| <code>Drama</code> | <code>Children's/Family</code> | <code>Kenneth Branagh</code> | <code>genre</code> | <code>director</code> |
| <code>Carroll Ballard</code> | <code>Cameron</code> | <code>Mary Poppins</code> | <code>director</code> | <code>title</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 343 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative</code>, <code>pos_attr_name</code>, and <code>neg_attr_name</code>
* Approximate statistics based on the first 343 samples:
| | anchor | positive | negative | pos_attr_name | neg_attr_name |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|
| type | string | string | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 4.69 tokens</li><li>max: 22 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 4.56 tokens</li><li>max: 20 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 4.63 tokens</li><li>max: 20 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.71 tokens</li><li>max: 6 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.76 tokens</li><li>max: 6 tokens</li></ul> |
* Samples:
| anchor | positive | negative | pos_attr_name | neg_attr_name |
|:-----------------------|:---------------------|:----------------------------------------------------|:----------------------|:-------------------------|
| <code>Vila</code> | <code>Rudolph</code> | <code>Aparajito</code> | <code>director</code> | <code>title</code> |
| <code>Joe Dante</code> | <code>Arkless</code> | <code>R (for language and some drug content)</code> | <code>director</code> | <code>mpaa_rating</code> |
| <code>Caan</code> | <code>Musker</code> | <code>Cronos</code> | <code>director</code> | <code>title</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
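For reference, a minimal sketch of how a TripletLoss run with these parameters is typically set up in sentence-transformers (the triplets below are illustrative samples from this card; training arguments are left at their defaults):
```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer("Alibaba-NLP/gte-base-en-v1.5", trust_remote_code=True)

# Illustrative triplets: anchor and positive share an attribute type, the negative does not
train_dataset = Dataset.from_dict({
    "anchor": ["George Stevens", "Drama"],
    "positive": ["Guédiguian", "Children's/Family"],
    "negative": ["The Spanish Prisoner", "Kenneth Branagh"],
})

# TripletLoss with the Euclidean distance metric and margin 5, as listed above
loss = losses.TripletLoss(
    model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```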
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | cosine_accuracy | silhouette_cosine |
|:-----:|:----:|:-------------:|:---------------:|:---------------:|:-----------------:|
| -1 | -1 | - | - | 0.4752 | 0.1453 |
| 1.0 | 25 | 1.5567 | 0.0558 | 0.9942 | 0.9442 |
| 2.0 | 50 | 0.0315 | 0.0081 | 1.0 | 0.9396 |
| 3.0 | 75 | 0.0143 | 0.0005 | 1.0 | 0.9410 |
| 4.0 | 100 | 0.0043 | 0.0 | 1.0 | 0.9419 |
| 5.0 | 125 | 0.0037 | 0.0 | 1.0 | 0.9432 |
| -1 | -1 | - | - | 1.0 | 0.9402 |
### Framework Versions
- Python: 3.10.16
- Sentence Transformers: 4.0.1
- Transformers: 4.45.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.5.2
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e3_s55555_v4_l5_v50 | KingKazma | "2023-08-13T21:00:49" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-13T21:00:48" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
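A minimal loading sketch (assuming the standard PEFT loading flow; the `gpt2` base model is an assumption inferred from the repository name):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model id is an assumption inferred from "gpt2" in the repo name
base = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base, "KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e3_s55555_v4_l5_v50"
)
tokenizer = AutoTokenizer.from_pretrained("gpt2")
```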
|
carolinacon/ppo-LunarLander-v2 | carolinacon | "2025-04-17T13:47:06" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2025-04-17T13:46:44" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.47 +/- 21.40
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the exact checkpoint filename inside the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename is an assumption)
checkpoint = load_from_hub("carolinacon/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
gsw2301/ppo-Huggy | gsw2301 | "2023-08-02T20:48:47" | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | "2023-08-02T20:48:44" | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: gsw2301/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
ChandanGR/Qwen2.5-1.5B-4bit-channel | ChandanGR | "2025-03-24T14:46:16" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2025-03-24T14:45:55" | |
shibajustfor/440f76a1-e371-43b7-b754-207a85eb58f8 | shibajustfor | "2025-03-13T10:42:44" | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.2",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.2",
"region:us"
] | null | "2025-03-13T10:42:28" | ---
library_name: peft
tags:
- generated_from_trainer
base_model: unsloth/mistral-7b-instruct-v0.2
model-index:
- name: shibajustfor/440f76a1-e371-43b7-b754-207a85eb58f8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shibajustfor/440f76a1-e371-43b7-b754-207a85eb58f8
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2142
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
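A minimal sketch for loading this adapter on its base model (assuming the standard peft/transformers loading flow; repo ids are taken from this card's metadata):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model and adapter repo ids come from this card's metadata
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/mistral-7b-instruct-v0.2", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "shibajustfor/440f76a1-e371-43b7-b754-207a85eb58f8")
tokenizer = AutoTokenizer.from_pretrained("unsloth/mistral-7b-instruct-v0.2")
```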
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
memeviss/white_4 | memeviss | "2025-03-29T16:50:09" | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-03-29T16:42:09" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
JeloH/qwen-textgen-model15nnn | JeloH | "2024-12-17T21:22:05" | 125 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-12-17T21:21:51" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
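A minimal loading sketch, assuming the fill-mask task indicated by this card's tags (the example sentence and mask token are illustrative for a BERT-style checkpoint):
```python
from transformers import pipeline

# Task is an assumption taken from this card's pipeline tag (fill-mask)
fill = pipeline("fill-mask", model="JeloH/qwen-textgen-model15nnn")
print(fill("Paris is the capital of [MASK]."))
```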
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AIgroup-CVM-utokyohospital/MedSwallow-70b | AIgroup-CVM-utokyohospital | "2025-03-03T14:01:09" | 99 | 0 | peft | [
"peft",
"safetensors",
"medical",
"arxiv:2406.14882",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | "2024-02-02T08:15:58" | ---
library_name: peft
license: cc-by-nc-sa-4.0
tags:
- medical
---
⚠️⚠️⚠️ Only for research purposes. Do not use it for medical purposes. ⚠️⚠️⚠️
# MedSwallow-70B🏥
MedSwallow is a Japanese medical-domain LLM for medical question-answering. It is based on [Swallow-70B](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-hf) (Tokyo Tech Swallow) and was instruction-tuned on a medical Q&A dataset we prepared ourselves: United States Medical Licensing Examination (USMLE) questions translated into Japanese.
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
## License
This model is released under a non-commercial license.
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

model_name = "tokyotech-llm/Swallow-70b-instruct-hf"
peft_model = "AIgroup-CVM-utokyohospital/MedSwallow-70b"
device = "auto"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

# 4-bit NF4 quantization, matching the training config above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    torch_dtype=torch.float16,
    device_map=device,
)

# Attach the MedSwallow LoRA adapter
model = PeftModel.from_pretrained(
    model,
    peft_model,
    torch_dtype=torch.float16,
    device_map=device,
)
```
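A minimal generation sketch following the load above (the prompt is illustrative; in practice, use the prompt format expected by Swallow instruct models):
```python
prompt = "質問: 高血圧治療の第一選択薬は何ですか?\n回答:"  # illustrative medical question
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```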
## Benchmark
See also [Japanese Medical Language Model Evaluation Harness](https://github.com/stardust-coder/japanese-lm-med-harness).
- IgakuQA (in English):
- IgakuQA (in Japanese):
- MedQA (in English) :
- MedQA (in Japanese) :
## How to cite
```
@misc{sukeda202470bparameterlargelanguagemodels,
title={70B-parameter large language models in Japanese medical question-answering},
author={Issey Sukeda and Risa Kishikawa and Satoshi Kodera},
year={2024},
eprint={2406.14882},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2406.14882},
}
``` |
llm-jp/llm-jp-3-vila-14b | llm-jp | "2024-11-18T08:29:59" | 272 | 6 | null | [
"safetensors",
"llava_llama",
"image-text-to-text",
"ja",
"region:us"
] | image-text-to-text | "2024-10-26T07:48:03" | ---
language:
- ja
pipeline_tag: image-text-to-text
---
# LLM-jp-3 VILA 14B
This repository provides a large vision language model (VLM) developed by the [Research and Development Center for Large Language Models](https://llmc.nii.ac.jp/) at the [National Institute of Informatics](https://www.nii.ac.jp/en/), Japan.
## Usage
Python version: 3.10.12
1. Clone the repository and install the libraries.
<details>
```bash
git clone [email protected]:llm-jp/llm-jp-VILA.git
cd llm-jp-VILA
```
```bash
python3 -m venv venv
source venv/bin/activate
```
```bash
pip install --upgrade pip
wget https://github.com/Dao-AILab/flash-attention/releases/download/v2.4.2/flash_attn-2.4.2+cu118torch2.0cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
pip install flash_attn-2.4.2+cu118torch2.0cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
pip install -e .
pip install -e ".[train]"
```
```bash
pip install git+https://github.com/huggingface/[email protected]
cp -rv ./llava/train/transformers_replace/* ./venv/lib/python3.10/site-packages/transformers/
```
</details>
2. Run the Python script. You can change `image_path` and `query` to your own values.
<details>
```python
import argparse
from io import BytesIO
import requests
import torch
from PIL import Image
from llava.constants import IMAGE_TOKEN_INDEX
from llava.conversation import conv_templates
from llava.mm_utils import (get_model_name_from_path,
process_images, tokenizer_image_token)
from llava.model.builder import load_pretrained_model
from llava.utils import disable_torch_init
def load_image(image_file):
if image_file.startswith("http") or image_file.startswith("https"):
response = requests.get(image_file)
image = Image.open(BytesIO(response.content)).convert("RGB")
else:
image = Image.open(image_file).convert("RGB")
return image
def load_images(image_files):
out = []
for image_file in image_files:
image = load_image(image_file)
out.append(image)
return out
disable_torch_init()
model_checkpoint_path = "llm-jp/llm-jp-3-vila-14b"
model_name = get_model_name_from_path(model_checkpoint_path)
tokenizer, model, image_processor, context_len = load_pretrained_model(model_checkpoint_path, model_name)
image_path = "path/to/image"
image_files = [
image_path
]
images = load_images(image_files)
query = "<image>\nこの画像について説明してください。"
conv_mode = "llmjp_v3"
conv = conv_templates[conv_mode].copy()
conv.append_message(conv.roles[0], query)
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()
images_tensor = process_images(images, image_processor, model.config).to(model.device, dtype=torch.float16)
input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).cuda()
with torch.inference_mode():
output_ids = model.generate(
input_ids,
images=[
images_tensor,
],
do_sample=False,
num_beams=1,
max_new_tokens=256,
use_cache=True,
)
outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]
print(outputs)
```
</details>
## Model Details
|Model components|Model / Architecture|Parameters|
|:---:|:---:|:---:|
|Vision encoder|[siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384)|428M|
|Projector|2-layer MLP|32M|
|LLM|[llm-jp-3-13b-instruct](https://huggingface.co/llm-jp/llm-jp-3-13b-instruct)|13B|
## Datasets
The model was trained in three stages.
### Step-0
We used the following data sets to tune the parameters in the projector.
| Language | Dataset | Images |
|:---|:---|---:|
| Japanese | [Japanese image text pairs](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-japanese-image-text-pairs) | 558K |
| English | [LLaVA-Pretrain](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) | 558K |
### Step-1
We used the following data sets to tune the parameters in the projector and LLM.
| Language | Dataset | Images |
|:---|:---|:---|
|Japanese|[Japanese image text pairs](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-japanese-image-text-pairs)| 6M |
| |[Japanese interleaved data](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-japanese-interleaved-data)| 6M |
|English |[coyo](https://github.com/kakaobrain/coyo-dataset) (subset) | 6M |
| |[mmc4-core](https://github.com/allenai/mmc4) (subset) | 6M |
### Step-2
We used the following data sets to tune the parameters in the projector and LLM.
| Language | Dataset | Images |
|:---|:---|:---|
|Japanese|[llava-instruct-ja](https://huggingface.co/datasets/llm-jp/llava-instruct-ja)| 156K |
| |[japanese-photos-conv](https://huggingface.co/datasets/llm-jp/japanese-photos-conversation)| 12K |
| |[ja-vg-vqa](https://huggingface.co/datasets/llm-jp/ja-vg-vqa-conversation)| 99K |
| |[synthdog-ja](https://huggingface.co/datasets/naver-clova-ix/synthdog-ja) (subset)| 102K |
|English |[LLaVA](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K) | 158K |
| |[VQAv2](https://visualqa.org/) | 53K |
| |[GQA](https://cs.stanford.edu/people/dorarad/gqa/index.html) | 46K |
| |[OCRVQA](https://ocr-vqa.github.io/) | 80K |
| |[TextVQA](https://textvqa.org/dataset/) | 22K |
## Evaluations
We evaluated our model using [Heron Bench](https://huggingface.co/datasets/turing-motors/Japanese-Heron-Bench), [JA-VLM-Bench-In-the-Wild](https://huggingface.co/datasets/SakanaAI/JA-VLM-Bench-In-the-Wild), and [JA-VG-VQA500](https://huggingface.co/datasets/SakanaAI/JA-VG-VQA-500).
We used `gpt-4o-2024-05-13` for LLM-as-a-judge.
### Heron Bench
| Models | LLM-as-a-judge score (%) |
|---|:---:|
| [Japanese InstructBLIP Alpha](https://huggingface.co/stabilityai/japanese-instructblip-alpha) | 14.0 |
| [Japanese Stable VLM](https://huggingface.co/stabilityai/japanese-stable-vlm) | 24.2 |
| [Llama-3-EvoVLM-JP-v2](https://huggingface.co/SakanaAI/Llama-3-EvoVLM-JP-v2) | 39.3 |
| [LLaVA-CALM2-SigLIP](https://huggingface.co/cyberagent/llava-calm2-siglip) | 43.3 |
| **llm-jp-3-vila-14b (Ours)** | 57.2 |
| GPT-4o | 87.6 |
### JA-VLM-Bench-In-the-Wild
| **Models** | ROUGE-L | LLM-as-a-judge score (/5.0) |
|---|:---:|:---:|
| [Japanese InstructBLIP Alpha](https://huggingface.co/stabilityai/japanese-instructblip-alpha) | 20.8 | 2.42 |
| [Japanese Stable VLM](https://huggingface.co/stabilityai/japanese-stable-vlm) | 23.3 | 2.47 |
| [Llama-3-EvoVLM-JP-v2](https://huggingface.co/SakanaAI/Llama-3-EvoVLM-JP-v2) | 41.4 | 2.92 |
| [LLaVA-CALM2-SigLIP](https://huggingface.co/cyberagent/llava-calm2-siglip) | 47.2 | 3.15 |
| **llm-jp-3-vila-14b (Ours)** | 52.3 | 3.69 |
| GPT-4o | 37.6 | 3.85 |
### JA-VG-VQA-500
| **Models** | ROUGE-L | LLM-as-a-judge score (/5.0) |
|---|:---:|:---:|
| [Japanese InstructBLIP Alpha](https://huggingface.co/stabilityai/japanese-instructblip-alpha) | -- | -- |
| [Japanese Stable VLM](https://huggingface.co/stabilityai/japanese-stable-vlm) | -- | -- |
| [Llama-3-EvoVLM-JP-v2](https://huggingface.co/SakanaAI/Llama-3-EvoVLM-JP-v2) | 23.5 | 2.96 |
| [LLaVA-CALM2-SigLIP](https://huggingface.co/cyberagent/llava-calm2-siglip) | 17.4 | 3.21 |
| **llm-jp-3-vila-14b (Ours)** | 16.2 | 3.62 |
| GPT-4o | 12.1 | 3.58 |
## Risks and Limitations
The model released in this repository is in the early stages of our research and development. It has not been tuned to ensure that the model's outputs are aligned with social norms, ethical standards, and the law.
## License
The weights of this model are released under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
In addition, a user of this model must comply with [the OpenAI terms of use](https://openai.com/policies/terms-of-use) because the model used synthetic data generated by OpenAI GPT-4.
## Additional information
Regarding the license of the [synthdog-ja](https://huggingface.co/datasets/naver-clova-ix/synthdog-ja) dataset, there is no explicit license statement in the dataset documentation. While we attempted to contact the main corresponding author of "OCR-free Document Understanding Transformer" for clarification, we received no response.
Based on the following considerations:
1. The [donut-base](https://huggingface.co/naver-clova-ix/donut-base) model trained on this dataset is released under the MIT license
2. The Wikipedia articles used in the dataset are licensed under CC-BY-SA
We have determined that the synthdog-ja dataset is most likely governed by the CC-BY-SA license, and proceeded with training under this assumption. |
adamo1139/aya-expanse-32b-ungated | adamo1139 | "2024-10-29T22:12:30" | 41 | 0 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"arxiv:2408.14960",
"arxiv:2407.02552",
"arxiv:2406.18682",
"arxiv:2410.10801",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-10-29T21:39:47" | ---
inference: false
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
---
# Model Card for Aya-Expanse-32B Ungated
Aya-Expanse 32B, but not gated!
<img src="aya-expanse-32B.png" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
**Aya Expanse 32B** is an open-weight research release of a model with highly advanced multilingual capabilities. It focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co/CohereForAI/c4ai-command-r-plus) of models with the result of a year’s dedicated research from [Cohere For AI](https://cohere.for.ai/), including [data arbitrage](https://arxiv.org/pdf/2408.14960), [multilingual preference training](https://arxiv.org/abs/2407.02552), [safety tuning](https://arxiv.org/abs/2406.18682), and [model merging](https://arxiv.org/abs/2410.10801). The result is a powerful multilingual large language model serving 23 languages.
This model card corresponds to the 32-billion version of the Aya Expanse model. We also released an 8-billion version which you can find [here](https://huggingface.co/CohereForAI/aya-expanse-8B).
- Developed by: [Cohere For AI](https://cohere.for.ai/)
- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: Aya Expanse 32B
- Model Size: 32 billion parameters
### Supported Languages
We cover 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese.
### Try it: Aya Expanse in Action
Use the [Cohere playground](https://dashboard.cohere.com/playground/chat) or our [Hugging Face Space](https://huggingface.co/spaces/CohereForAI/aya_expanse) for interactive exploration.
### How to Use Aya Expanse
Install the transformers library and load Aya Expanse 32B as follows:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/aya-expanse-32b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format message with the chat template
messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
### Example Notebooks
**Fine-Tuning:**
- [Detailed Fine-Tuning Notebook](https://colab.research.google.com/drive/1ryPYXzqb7oIn2fchMLdCNSIH5KfyEtv4).
**Community-Contributed Use Cases:**
The following notebooks contributed by *Cohere For AI Community* members show how Aya Expanse can be used for different use cases:
- [Multilingual Writing Assistant](https://colab.research.google.com/drive/1SRLWQ0HdYN_NbRMVVUHTDXb-LSMZWF60)
- [AyaMCooking](https://colab.research.google.com/drive/1-cnn4LXYoZ4ARBpnsjQM3sU7egOL_fLB?usp=sharing)
- [Multilingual Question-Answering System](https://colab.research.google.com/drive/1bbB8hzyzCJbfMVjsZPeh4yNEALJFGNQy?usp=sharing)
## Model Details
**Input**: Models input text only.
**Output**: Models generate text only.
**Model Architecture**: Aya Expanse 32B is an auto-regressive language model that uses an optimized transformer architecture. Post-training includes supervised finetuning, preference training, and model merging.
**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
**Context length**: 128K
### Evaluation
We evaluated Aya Expanse 8B against Gemma 2 9B, Llama 3.1 8B, Ministral 8B, and Qwen 2.5 7B using m-ArenaHard, a dataset based on the [Arena-Hard-Auto dataset](https://huggingface.co/datasets/lmarena-ai/arena-hard-auto-v0.1) and translated to the 23 languages we support in Aya Expanse 8B. Win-rates were determined using gpt-4o-2024-08-06 as a judge. For a conservative benchmark, we report results from gpt-4o-2024-08-06, though gpt-4o-mini scores showed even stronger performance.
The m-ArenaHard dataset, used to evaluate Aya Expanse’s capabilities, is publicly available [here](https://huggingface.co/datasets/CohereForAI/m-ArenaHard).
<img src="winrates_marenahard_complete.png" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
### Model Card Contact
For errors or additional questions about details in this model card, contact [email protected].
### Terms of Use
We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant multilingual model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
|
Romain-XV/a4cf7f09-9b2b-4d6e-a3c3-a84266920288 | Romain-XV | "2025-03-26T16:09:06" | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:jhflow/mistral7b-lora-multi-turn-v2",
"base_model:adapter:jhflow/mistral7b-lora-multi-turn-v2",
"region:us"
] | null | "2025-03-26T13:16:52" | ---
library_name: peft
base_model: jhflow/mistral7b-lora-multi-turn-v2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a4cf7f09-9b2b-4d6e-a3c3-a84266920288
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: jhflow/mistral7b-lora-multi-turn-v2
bf16: true
chat_template: llama3
cosine_min_lr_ratio: 0.3
dataset_prepared_path: null
datasets:
- data_files:
- 1fdffd3e13e609ac_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1fdffd3e13e609ac_train_data.json
type:
field_input: tools
field_instruction: func_name
field_output: func_desc
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 4
eval_max_new_tokens: 128
eval_steps: 200
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: Romain-XV/a4cf7f09-9b2b-4d6e-a3c3-a84266920288
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 1176
micro_batch_size: 4
mlflow_experiment_name: /tmp/1fdffd3e13e609ac_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 200
sequence_len: 2048
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.04
wandb_entity: null
wandb_mode: online
wandb_name: eb8998b4-58ff-4324-80f1-956118f2a3e8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: eb8998b4-58ff-4324-80f1-956118f2a3e8
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a4cf7f09-9b2b-4d6e-a3c3-a84266920288
This model is a fine-tuned version of [jhflow/mistral7b-lora-multi-turn-v2](https://huggingface.co/jhflow/mistral7b-lora-multi-turn-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0093
## Model description
More information needed
## Intended uses & limitations
More information needed
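A minimal sketch for loading this LoRA adapter on its base model (repo ids from this card's metadata; assuming the standard peft/transformers loading flow):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# bf16 matches the training config above
base = AutoModelForCausalLM.from_pretrained(
    "jhflow/mistral7b-lora-multi-turn-v2", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Romain-XV/a4cf7f09-9b2b-4d6e-a3c3-a84266920288")
tokenizer = AutoTokenizer.from_pretrained("jhflow/mistral7b-lora-multi-turn-v2")
```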
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 1176
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.2614 | 0.0004 | 1 | 0.6910 |
| 0.2834 | 0.0829 | 200 | 0.0390 |
| 0.0079 | 0.1657 | 400 | 0.0296 |
| 0.0587 | 0.2486 | 600 | 0.0184 |
| 0.0778 | 0.3314 | 800 | 0.0078 |
| 0.0027 | 0.4143 | 1000 | 0.0093 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
TinyLlama/TinyLlama_v1.1_chinese | TinyLlama | "2024-06-07T01:23:56" | 469 | 8 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:cerebras/SlimPajama-627B",
"arxiv:2401.02385",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-09T09:40:17" | ---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
language:
- en
---
# TinyLlama-1.1B-v1.1
- **Codebase:** [github.com/jzhang38/TinyLlama](https://github.com/jzhang38/TinyLlama)
- **Technical Report:** [arxiv.org/pdf/2401.02385](https://arxiv.org/pdf/2401.02385)
<div align="center">
<img src="https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-240k-503b/resolve/main/TinyLlama_logo.png" width="300"/>
</div>
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
## Overview
In this project, rather than training only a single TinyLlama model, we first train TinyLlama on a corpus of 1.5 trillion tokens to obtain foundational language capabilities. Subsequently, we take this model and turn it into three different models by continual pre-training with three distinct data sampling strategies. For a visual representation of this process, please refer to the figure below.

## Pretraining
Due to these issues ([bug1](https://whimsical-aphid-86d.notion.site/Release-of-TinyLlama-1-5T-Checkpoints-Postponed-01b266998c1c47f78f5ae1520196d194?pvs=4), [bug2](https://whimsical-aphid-86d.notion.site/2023-12-18-Updates-from-TinyLlama-Team-7d30c01fff794da28ccc952f327c8d4f)), we retrained TinyLlama to provide a better model. We trained the model on 2T tokens and divided the pretraining into three stages: 1) basic pretraining, 2) continual pretraining on specific domains, and 3) cooldown.
#### Basic pretraining
In this initial phase, we trained the model on SlimPajama only to develop its commonsense reasoning capabilities. The model saw 1.5T tokens during this basic pretraining period. Since we used a cluster with 4 A100-40G GPUs per node and only shard model weights within a node, we could only set the batch size to approximately 1.8M tokens this time.
#### Continual pretraining with specific domain
We incorporated three different kinds of corpora during this pretraining: slimpajama (the same as in the first phase), Math&Code (starcoder and proof pile), and Chinese (Skypile). This approach allowed us to develop three variant models with specialized capabilities.
During the first ~6B tokens of this stage, we linearly increased the sampling proportion of the domain-specific corpora (excluding Slimpajama, whose sampling remained unchanged from stage 1). This sampling warmup strategy was designed to gradually adjust the distribution of the pretraining data, ensuring a more stable training process. After this warmup, we continued pretraining the model with a stable sampling strategy until reaching ~1.85T tokens.
#### Cooldown
Implementing a cooldown phase has become a crucial technique for achieving better model convergence at the end of pretraining. However, since we had already adopted a cosine learning-rate schedule from the beginning, it was difficult to lower the learning rate for cooldown as MiniCPM or DeepSeek do. Instead, we cooled down by adjusting the batch size: specifically, we increased it from 1.8M to 7.2M tokens while keeping the original cosine learning-rate schedule during the cooldown stage.
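A rough sketch of this idea (the peak learning rate and exact switch point are assumptions): the cosine learning-rate curve stays unchanged, and only the batch size jumps when the cooldown begins.

```python
import math

TOTAL_TOKENS = 2e12       # full 2T-token run
COOLDOWN_START = 1.85e12  # cooldown begins after ~1.85T tokens
MAX_LR = 4e-4             # assumed peak learning rate

def lr_at(tokens_seen: float) -> float:
    """One cosine decay over the whole run -- unchanged by cooldown."""
    progress = min(tokens_seen / TOTAL_TOKENS, 1.0)
    return 0.5 * MAX_LR * (1 + math.cos(math.pi * progress))

def batch_size_at(tokens_seen: float) -> float:
    """Batch size jumps from 1.8M to 7.2M tokens at cooldown."""
    return 1.8e6 if tokens_seen < COOLDOWN_START else 7.2e6
```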
#### Tinyllama model family
Following this extensive pretraining process, we are now releasing three specialized versions of our model:
1. **TinyLlama_v1.1**: The standard version, used for general purposes.
2. **TinyLlama_v1.1_Math&Code**: Equipped with better ability for math and code.
3. **TinyLlama_v1.1_Chinese**: Equipped with better understanding of Chinese.
## Data
Here we list our data distribution in each stage:
### TinyLlama_v1.1
| Corpus | Basic pretraining | Continual pretraining with specific domain | Cooldown |
| ------------- | ----------------- | ------------------------------------------ | -------- |
| Slimpajama | 100.0 | 100.0 | 100.0 |
### TinyLlama_v1.1_math_code
| Corpus | Basic pretraining | Continual pretraining with specific domain | Cooldown |
| ------------- | ----------------- | ------------------------------------------ | -------- |
| Slimpajama | 100.0 | 75.0 | 75.0 |
| starcoder | - | 15.0 | 15.0 |
| proof_pile | - | 10.0 | 10.0 |
### TinyLlama_v1.1_chinese
| Corpus | Basic pretraining | Continual pretraining with specific domain | Cooldown |
| ------------- | ----------------- | ------------------------------------------ | -------- |
| Slimpajama | 100.0 | 50.0 | 50.0 |
| skypile | - | 50.0 | 50.0 |
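With the `datasets` library, a mixture like the 50/50 split above can be approximated by interleaving streaming corpora with sampling probabilities (a sketch; the SkyPile identifier is assumed):

```python
from datasets import load_dataset, interleave_datasets

slimpajama = load_dataset("cerebras/SlimPajama-627B", split="train", streaming=True)
skypile = load_dataset("Skywork/SkyPile-150B", split="train", streaming=True)  # assumed id

# 50/50 sampling, matching the continual-pretraining stage above.
mixed = interleave_datasets([slimpajama, skypile], probabilities=[0.5, 0.5], seed=42)
```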
### How to use
You will need `transformers>=4.31`.
Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information.
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "TinyLlama/TinyLlama_v1.1_chinese"  # this card's variant
tokenizer = AutoTokenizer.from_pretrained(model)
# Build an fp16 text-generation pipeline; device_map="auto" places weights automatically.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
sequences = pipeline(
'The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.',
do_sample=True,
top_k=10,
num_return_sequences=1,
repetition_penalty=1.5,
eos_token_id=tokenizer.eos_token_id,
max_length=500,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
### Eval
| Model | Pretrain Tokens | HellaSwag | Obqa | WinoGrande | ARC_c | ARC_e | boolq | piqa | avg |
| ----------------------------------------- | --------------- | --------- | --------- | ---------- | --------- | --------- | ----- | --------- | --------- |
| Pythia-1.0B | 300B | 47.16 | 31.40 | 53.43 | 27.05 | 48.99 | 60.83 | 69.21 | 48.30 |
| TinyLlama-1.1B-intermediate-step-1431k-3T | 3T | 59.20 | 36.00 | 59.12 | 30.12 | 55.25 | 57.83 | 73.29 | 52.99 |
| TinyLlama-1.1B-v1.1 | 2T | **61.47** | **36.80** | 59.43 | 32.68 | **55.47** | 55.99 | **73.56** | 53.63 |
| TinyLlama-1.1B-v1_math_code | 2T | 60.80 | 36.40 | **60.22** | **33.87** | 55.20 | 57.09 | 72.69 | **53.75** |
| TinyLlama-1.1B-v1.1_chinese | 2T | 58.23 | 35.20 | 59.27 | 31.40 | 55.35 | **61.41** | 73.01 | 53.41 | |
Dataset Card for Hugging Face Hub Model Cards
This dataset consists of model cards for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about the model, its performance, its intended uses, and more. This dataset is updated on a daily basis and includes publicly available models on the Hugging Face Hub.
This dataset is made available to help support users wanting to work with a large number of Model Cards from the Hub. We hope that this dataset will help support research in the area of Model Cards and their use but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new discussion.
Dataset Details
Uses
There are a number of potential uses for this dataset (a minimal loading sketch follows this list), including:
- text mining to find common themes in model cards
- analysis of the model card format/content
- topic modelling of model cards
- analysis of the model card metadata
- training language models on model cards
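A minimal loading sketch with the `datasets` library (the repository id below is an assumption; substitute this dataset's actual id):

```python
from datasets import load_dataset

cards = load_dataset("librarian-bots/model_cards_with_metadata", split="train")  # assumed id

# Simple text mining: how many of the first 1,000 cards mention "bias"?
n = sum("bias" in row["card"].lower() for row in cards.select(range(1000)))
print(f"{n} of the first 1,000 cards mention 'bias'")
```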
Out-of-Scope Use
[More Information Needed]
Dataset Structure
This dataset has a single split.
Dataset Creation
Curation Rationale
The dataset was created to assist people in working with model cards, and in particular to support research in the area of model cards and their use. It is also possible to use the Hugging Face Hub API or client library to download model cards directly, which may be preferable if you have a very specific use case or require a different format.
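For example, a single card can be fetched with the `huggingface_hub` client library:

```python
from huggingface_hub import ModelCard

# Load the model card (README.md) for one repository.
card = ModelCard.load("TinyLlama/TinyLlama_v1.1_chinese")
print(card.data.to_dict())  # parsed YAML metadata
print(card.text[:500])      # card body without the metadata header
```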
Source Data
The source data is the README.md files for models hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the model card directory.
Data Collection and Processing
The data is downloaded daily using a cron job.
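A simplified version of such a collection job might iterate over public models and pull each README (a hedged sketch; the actual pipeline is not published):

```python
from huggingface_hub import HfApi, hf_hub_download

api = HfApi()
for model in api.list_models(limit=100):  # the real job covers all public models
    try:
        path = hf_hub_download(repo_id=model.id, filename="README.md")
    except Exception:
        continue  # not every repository has a README.md
```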
Who are the source data producers?
The source data producers are the creators of the model cards on the Hugging Face Hub. This includes a broad variety of people from the community ranging from large companies to individual researchers. We do not gather any information about who created the model card in this repository although this information can be gathered from the Hugging Face Hub API.
Annotations
There are no additional annotations in this dataset beyond the model card content.
Annotation process
N/A
Who are the annotators?
N/A
Personal and Sensitive Information
We make no effort to anonymize the data. Whilst we don't expect the majority of model cards to contain personal or sensitive information, it is possible that some model cards may contain this information. Model cards may also link to websites or email addresses.
Bias, Risks, and Limitations
Model cards are created by the community and we do not have any control over the content of the model cards. We do not review the content of the model cards and we do not make any claims about the accuracy of the information in the model cards. Some model cards will themselves discuss bias and sometimes this is done by providing examples of bias in either the training data or the responses provided by the model. As a result this dataset may contain examples of bias.
Whilst we do not directly download any images linked to in the model cards, some model cards may include images. Some of these images may not be suitable for all audiences.
Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
Citation
No formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.
Dataset Card Authors
Dataset Card Contact