Datasets:
modelId (string, 5–138 chars) | author (string, 2–42 chars) | last_modified (date, 2020-02-15 11:33:14 to 2025-05-06 06:26:30) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 447 classes) | tags (sequence, 1 to 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (date, 2022-03-02 23:29:04 to 2025-05-06 06:19:30) | card (string, 11 chars to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
Mr-FineTuner/Test_02_llama_1epoch_trainPercen_myValidator_fix | Mr-FineTuner | "2025-05-06T04:30:19" | 0 | 0 | null | [
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-05-06T04:20:26" |
# Fine-Tuned LLaMA-3-8B CEFR Model
This is a fine-tuned version of `unsloth/llama-3-8b-instruct-bnb-4bit` for CEFR-level sentence generation, evaluated with a fine-tuned classifier from `Mr-FineTuner/Skripsi_validator_best_model`.
- **Base Model**: unsloth/llama-3-8b-instruct-bnb-4bit
- **Fine-Tuning**: LoRA with SMOTE-balanced dataset
- **Training Details**:
- Dataset: CEFR-level sentences with SMOTE and undersampling for balance
- LoRA Parameters: r=32, lora_alpha=32, lora_dropout=0.5
- Training Args: learning_rate=2e-5, batch_size=8, epochs=0.01, cosine scheduler
- Optimizer: adamw_8bit
- Early Stopping: Patience=3, threshold=0.01
- **Evaluation Metrics (Exact Matches)**:
- CEFR Classifier Accuracy: 0.233
- Precision (Macro): 0.212
- Recall (Macro): 0.233
- F1-Score (Macro): 0.202
- **Evaluation Metrics (Within ±1 Level)**:
- CEFR Classifier Accuracy: 0.667
- Precision (Macro): 0.767
- Recall (Macro): 0.667
- F1-Score (Macro): 0.633
- **Other Metrics**:
- Perplexity: 3.080
- Diversity (Unique Sentences): 1.000
- Inference Time (ms): 6948.521
- Model Size (GB): 4.8
- Robustness (F1): 0.192
- **Confusion Matrix (Exact Matches)**:
- CSV: [confusion_matrix_exact.csv](confusion_matrix_exact.csv)
- Image: [confusion_matrix_exact.png](confusion_matrix_exact.png)
- **Confusion Matrix (Within ±1 Level)**:
- CSV: [confusion_matrix_within1.csv](confusion_matrix_within1.csv)
- Image: [confusion_matrix_within1.png](confusion_matrix_within1.png)
- **Per-Class Confusion Metrics (Exact Matches)**:
- A1: TP=7, FP=10, FN=3, TN=40
- A2: TP=3, FP=21, FN=7, TN=29
- B1: TP=0, FP=7, FN=10, TN=43
- B2: TP=2, FP=4, FN=8, TN=46
- C1: TP=2, FP=3, FN=8, TN=47
- C2: TP=0, FP=1, FN=10, TN=49
- **Per-Class Confusion Metrics (Within ±1 Level)**:
- A1: TP=10, FP=3, FN=0, TN=47
- A2: TP=10, FP=10, FN=0, TN=40
- B1: TP=8, FP=4, FN=2, TN=46
- B2: TP=6, FP=3, FN=4, TN=47
- C1: TP=4, FP=0, FN=6, TN=50
- C2: TP=2, FP=0, FN=8, TN=50
- **Usage**:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("Mr-FineTuner/Test_02_llama_1epoch_trainPercen_myValidator_fix")
tokenizer = AutoTokenizer.from_pretrained("Mr-FineTuner/Test_02_llama_1epoch_trainPercen_myValidator_fix")
# Example inference
prompt = "<|user|>Generate a CEFR B1 level sentence.<|end|>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Uploaded using `huggingface_hub`.
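For reference, the LoRA configuration listed under Training Details maps onto `peft` roughly as follows (a sketch only; the target modules are an assumption, since the card does not list them):
```python
from peft import LoraConfig

# r, lora_alpha and lora_dropout are taken from the card above;
# target_modules is a guess at the usual LLaMA attention projections.
lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.5,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```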
|
MrRobotoAI/109 | MrRobotoAI | "2025-05-06T04:22:50" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:MrRobotoAI/A1",
"base_model:merge:MrRobotoAI/A1",
"base_model:MrRobotoAI/A2",
"base_model:merge:MrRobotoAI/A2",
"base_model:MrRobotoAI/A3",
"base_model:merge:MrRobotoAI/A3",
"base_model:MrRobotoAI/A8",
"base_model:merge:MrRobotoAI/A8",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T04:18:27" | ---
base_model:
- MrRobotoAI/A8
- MrRobotoAI/A3
- MrRobotoAI/A1
- MrRobotoAI/A2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [MrRobotoAI/A8](https://huggingface.co/MrRobotoAI/A8) as a base.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/A3](https://huggingface.co/MrRobotoAI/A3)
* [MrRobotoAI/A1](https://huggingface.co/MrRobotoAI/A1)
* [MrRobotoAI/A2](https://huggingface.co/MrRobotoAI/A2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: MrRobotoAI/A1
    parameters:
      density: 0.3333
      weight: 0.9
  - model: MrRobotoAI/A2
    parameters:
      density: 0.3333
      weight: 0.9
  - model: MrRobotoAI/A3
    parameters:
      density: 0.3333
      weight: 0.9
merge_method: ties
base_model: MrRobotoAI/A8
dtype: float16
```
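For reference, a config like this is typically applied with mergekit's command-line entry point (a sketch; the config file name and output directory are assumptions):
```shell
# mergekit-yaml reads the TIES config above and writes the merged weights.
pip install mergekit
mergekit-yaml ties-config.yaml ./merged-model --cuda
```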
|
seoulsky-field/radiopaedia-no_cot-llama3.1_8b-inst-250506 | seoulsky-field | "2025-05-06T04:14:20" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] | null | "2025-05-06T04:09:25" | ---
library_name: peft
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
tags:
- generated_from_trainer
model-index:
- name: radiopaedia-no_cot-llama3.1_8b-inst-250506
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# radiopaedia-no_cot-llama3.1_8b-inst-250506
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
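The card does not yet include a usage snippet. Since this repository holds a PEFT adapter rather than full weights, one plausible way to load it is shown below (a sketch, assuming the standard `peft` API and the base model named above):
```python
# Sketch only: load the base model, then attach this LoRA adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
model = PeftModel.from_pretrained(base, "seoulsky-field/radiopaedia-no_cot-llama3.1_8b-inst-250506")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
```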
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.1
- Pytorch 2.6.0a0+ecf3bae40a.nv25.01
- Datasets 3.0.1
- Tokenizers 0.21.1 |
mradermacher/QwQ-Gutenberg-Doppel-0.5-i1-GGUF | mradermacher | "2025-05-06T03:53:24" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Triangle104/QwQ-Gutenberg-Doppel-0.5",
"base_model:quantized:Triangle104/QwQ-Gutenberg-Doppel-0.5",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-05-06T00:38:12" | ---
base_model: Triangle104/QwQ-Gutenberg-Doppel-0.5
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Triangle104/QwQ-Gutenberg-Doppel-0.5
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/QwQ-Gutenberg-Doppel-0.5-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
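As a concrete example, a single-file quant from the table below can be downloaded and run with llama.cpp directly (a sketch; the file name matches the Q4_K_M entry, while the prompt and flags are assumptions):
```shell
# Download one quant from this repo, then run it with llama.cpp's CLI.
huggingface-cli download mradermacher/QwQ-Gutenberg-Doppel-0.5-i1-GGUF \
  QwQ-Gutenberg-Doppel-0.5.i1-Q4_K_M.gguf --local-dir .
llama-cli -m QwQ-Gutenberg-Doppel-0.5.i1-Q4_K_M.gguf -p "Hello" -n 128
```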
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/QwQ-Gutenberg-Doppel-0.5-i1-GGUF/resolve/main/QwQ-Gutenberg-Doppel-0.5.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/QwQ-Gutenberg-Doppel-0.5-i1-GGUF/resolve/main/QwQ-Gutenberg-Doppel-0.5.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/QwQ-Gutenberg-Doppel-0.5-i1-GGUF/resolve/main/QwQ-Gutenberg-Doppel-0.5.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-Gutenberg-Doppel-0.5-i1-GGUF/resolve/main/QwQ-Gutenberg-Doppel-0.5.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-Gutenberg-Doppel-0.5-i1-GGUF/resolve/main/QwQ-Gutenberg-Doppel-0.5.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-Gutenberg-Doppel-0.5-i1-GGUF/resolve/main/QwQ-Gutenberg-Doppel-0.5.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-Gutenberg-Doppel-0.5-i1-GGUF/resolve/main/QwQ-Gutenberg-Doppel-0.5.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/QwQ-Gutenberg-Doppel-0.5-i1-GGUF/resolve/main/QwQ-Gutenberg-Doppel-0.5.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/QwQ-Gutenberg-Doppel-0.5-i1-GGUF/resolve/main/QwQ-Gutenberg-Doppel-0.5.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/QwQ-Gutenberg-Doppel-0.5-i1-GGUF/resolve/main/QwQ-Gutenberg-Doppel-0.5.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-Gutenberg-Doppel-0.5-i1-GGUF/resolve/main/QwQ-Gutenberg-Doppel-0.5.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/QwQ-Gutenberg-Doppel-0.5-i1-GGUF/resolve/main/QwQ-Gutenberg-Doppel-0.5.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/QwQ-Gutenberg-Doppel-0.5-i1-GGUF/resolve/main/QwQ-Gutenberg-Doppel-0.5.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-Gutenberg-Doppel-0.5-i1-GGUF/resolve/main/QwQ-Gutenberg-Doppel-0.5.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/QwQ-Gutenberg-Doppel-0.5-i1-GGUF/resolve/main/QwQ-Gutenberg-Doppel-0.5.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/QwQ-Gutenberg-Doppel-0.5-i1-GGUF/resolve/main/QwQ-Gutenberg-Doppel-0.5.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-Gutenberg-Doppel-0.5-i1-GGUF/resolve/main/QwQ-Gutenberg-Doppel-0.5.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/QwQ-Gutenberg-Doppel-0.5-i1-GGUF/resolve/main/QwQ-Gutenberg-Doppel-0.5.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/QwQ-Gutenberg-Doppel-0.5-i1-GGUF/resolve/main/QwQ-Gutenberg-Doppel-0.5.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QwQ-Gutenberg-Doppel-0.5-i1-GGUF/resolve/main/QwQ-Gutenberg-Doppel-0.5.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-Gutenberg-Doppel-0.5-i1-GGUF/resolve/main/QwQ-Gutenberg-Doppel-0.5.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-Gutenberg-Doppel-0.5-i1-GGUF/resolve/main/QwQ-Gutenberg-Doppel-0.5.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-Gutenberg-Doppel-0.5-i1-GGUF/resolve/main/QwQ-Gutenberg-Doppel-0.5.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ThomasTheMaker/Qwen3_0.6B_v.1.2.1b1 | ThomasTheMaker | "2025-05-06T03:36:02" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-06T03:35:18" | ---
license: apache-2.0
---
|
dgambettaphd/M_llm3_gen1_WXS_doc1000_synt64_lr1e-04_acm_FRESH | dgambettaphd | "2025-05-06T03:31:35" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-05-06T03:31:19" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
istupakov/gigaam-v2-onnx | istupakov | "2025-05-06T03:22:12" | 0 | 1 | null | [
"onnx",
"automatic-speech-recognition",
"ru",
"license:mit",
"region:us"
] | automatic-speech-recognition | "2025-04-21T19:45:26" | ---
license: mit
language:
- ru
pipeline_tag: automatic-speech-recognition
---
GigaAM v2 [models](https://github.com/salute-developers/GigaAM) converted to ONNX format for [onnx-asr](https://github.com/istupakov/onnx-asr).
Install onnx-asr
```shell
pip install onnx-asr[cpu,hub]
```
Load GigaAM v2 CTC model and recognize wav file
```py
import onnx_asr
model = onnx_asr.load_model("gigaam-v2-ctc")
print(model.recognize("test.wav"))
```
Load GigaAM v2 RNN-T model and recognize wav file
```py
import onnx_asr
model = onnx_asr.load_model("gigaam-v2-rnnt")
print(model.recognize("test.wav"))
```
Code for models export
```py
import gigaam
from pathlib import Path
onnx_dir = "gigaam-onnx"
model_type = "rnnt" # or "ctc"
model = gigaam.load_model(
    model_type,
    fp16_encoder=False,  # only fp32 tensors
    use_flash=False,     # disable flash attention
)
model.to_onnx(dir_path=onnx_dir)
with Path(onnx_dir, "v2_vocab.txt").open("wt") as f:
    for i, token in enumerate(["\u2581", *(chr(ord("а") + i) for i in range(32)), "<blk>"]):
        f.write(f"{token} {i}\n")
``` |
ertghiu256/qwen3-4b-code-reasoning | ertghiu256 | "2025-05-06T02:54:49" | 0 | 0 | null | [
"pytorch",
"qwen3",
"unsloth",
"trl",
"sft",
"dataset:nvidia/OpenCodeReasoning",
"dataset:vicgalle/creative-rubrics-gpt-4.5-o3-R1",
"base_model:unsloth/Qwen3-4B",
"base_model:finetune:unsloth/Qwen3-4B",
"license:apache-2.0",
"region:us"
] | null | "2025-05-03T10:41:03" | ---
license: apache-2.0
datasets:
- nvidia/OpenCodeReasoning
- vicgalle/creative-rubrics-gpt-4.5-o3-R1
base_model:
- unsloth/Qwen3-4B
tags:
- unsloth
- trl
- sft
---
|
erax-ai/EraX-Translator-V1.0 | erax-ai | "2025-05-06T02:51:02" | 1,394 | 21 | null | [
"safetensors",
"gemma3",
"vietnamese",
"translation",
"multilingual",
"gemma",
"low-resource",
"ancient-chinese",
"buddhist-texts",
"llama.cpp",
"vllm",
"vi",
"en",
"ru",
"fr",
"zh",
"yue",
"de",
"ja",
"ko",
"hi",
"uk",
"arxiv:2406.06623",
"base_model:google/gemma-3-4b-it",
"base_model:finetune:google/gemma-3-4b-it",
"doi:10.57967/hf/5132",
"license:gemma",
"region:us"
] | translation | "2025-04-08T11:40:43" | ---
language:
- vi
- en
- ru
- fr
- zh
- yue
- de
- ja
- ko
- hi
- uk
license: gemma
tags:
- vietnamese
- translation
- multilingual
- gemma
- low-resource
- ancient-chinese
- buddhist-texts
- llama.cpp
- vllm
base_model:
- google/gemma-3-4b-it
model-index:
- name: EraX-Translator-V1.0
  results: []
---
<p align="left">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63d8d8879dfcfa941d4d7cd9/GsQKdaTyn2FFx_cZvVHk3.png" alt="Logo">
</p>
# EraX-Translator-V1.0: A Compact and Capable Multilingual Translation Model
EraX-Translator-V1.0 is a compact, Gemma3-4B-based multilingual translation model designed for efficient deployment and high throughput, even on resource-constrained hardware. We aim to provide a practical tool for a wide range of translation tasks, with a particular focus on languages where high-quality data and models are less readily available.
## Model Description
This model leverages the architectural strengths of the Gemma3-4B foundation model (pretrained on 4 trillion tokens covering 140 languages) and has been fine-tuned for translation across a diverse set of languages. A key feature is its ability to translate Classical Chinese, demonstrating potential utility in translating Buddhist texts and other historical documents.
**Key features:**
* **Compact Size:** Based on the Gemma3-4B architecture, the model can be efficiently deployed on devices with limited resources.
* **High Throughput:** Achieves approximately 80 tokens/s (bfloat16) using vLLM with ~20GB VRAM. Potential for **>100 tokens/s with GGUF 6-bit quantization on a better GPU** (though optimal llama.cpp support for Gemma3 is still under development).
<p align="center">
<img src="https://huggingface.co/erax-ai/EraX-Translator-V1.0/resolve/main/Screen%20Recording%202025-04-10%20at%2007.27.14.gif" alt="vLLM speed">
<p align="center"><em>EraX Translator V1.0 with vLLM</em></p>
</p>
* **Multilingual:** Trained on a diverse dataset to support **bidirectional** translation between multiple languages:
- Việt Nam 🇻🇳
- English 🇬🇧 / 🇺🇸
- Chinese 🇨🇳
- Cantonese 🇨🇳 / 🇭🇰
- Ancient Chinese (Cổ Văn Trung Hoa 古典文學, Kinh Phật cổ 古佛經) 🇨🇳 📜
- Russian 🇷🇺
- Ukrainian 🇺🇦
- French 🇫🇷
- German 🇩🇪
- Dutch 🇳🇱
- Korean 🇰🇷
- Japanese 🇯🇵
- Hindi 🇮🇳
* **Classical Chinese Translation:** Demonstrates proficiency in translating Classical Chinese, particularly Buddhist texts.
## Intended Uses
This model is intended for:
* General-purpose multilingual translation.
* Translation of Classical Chinese texts, particularly those related to Buddhism.
* Research and experimentation in low-resource machine translation.
* Deployment in applications where computational resources are limited.
* Overcoming the suboptimal quality of Google Translate.
## Training Data & Training Strategy:
The model was trained on approximately **8 million multilingual samples**. This data includes:
* Publicly available translation datasets.
* Datasets from public Hugging Face repositories.
* A substantial portion of the training data was synthetically generated using Gemini.
* **A significant contribution of 15,000 samples of translated Buddhist texts from Classical Chinese to Vietnamese, generously provided by experts in Han-Nom from the [Trần Nhân Tông Institute](https://tnti.vnu.edu.vn/), Vietnam National University, Hanoi.** We are deeply grateful for their invaluable contribution.
* To optimize the efficiency and performance of EraX-Translator-V1.0, we explored selective parameter freezing rather than Low-Rank Adaptation (`LoRA`), which yielded suboptimal results in preliminary experiments. Guided by the **Signal-to-Noise Ratio (SNR) metric** proposed in the [SNR paper](https://arxiv.org/pdf/2406.06623), we identified the most salient layers within the Gemma3-4B architecture for retention. Specifically, we computed the SNR for each layer, excluding the `vision_tower` module and the feedforward network layers `fc1` and `fc2`. We then selectively **retained the 50% of layers exhibiting the highest SNR values**, including the `embed_tokens` layer, and froze the remaining parameters. This methodology resulted in a significant improvement in translation quality compared to LoRA-based fine-tuning, suggesting that targeted parameter retention based on SNR is an effective strategy for resource-efficient adaptation of large language models for translation tasks. A simplified sketch of this selection procedure is shown after this list.
* The model underwent training for 2 epochs with a global batch size of 384. Training was performed on a distributed system comprised of 4 NVIDIA H100 NVL GPUs, each equipped with 94 GB of memory.
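The sketch below illustrates the layer-selection procedure described above. The per-tensor statistic used here is a crude signal-to-noise stand-in, not necessarily the exact metric of the cited paper, and the module-name filters simply follow the names mentioned in the card:
```python
# Illustrative sketch only: rank parameters by a simple SNR proxy,
# keep the top half (plus embed_tokens) trainable, freeze the rest.
import torch
from transformers import Gemma3ForConditionalGeneration

model = Gemma3ForConditionalGeneration.from_pretrained(
    "google/gemma-3-4b-it", torch_dtype=torch.bfloat16
)

def tensor_snr(w: torch.Tensor) -> float:
    # mean magnitude relative to spread -- a stand-in, not the paper's formula
    w = w.float()
    return (w.abs().mean() / (w.std() + 1e-8)).item()

scores = {
    name: tensor_snr(p.detach())
    for name, p in model.named_parameters()
    if "vision_tower" not in name and "fc1" not in name and "fc2" not in name
}
top_half = set(sorted(scores, key=scores.get, reverse=True)[: len(scores) // 2])
for name, p in model.named_parameters():
    p.requires_grad = (name in top_half) or ("embed_tokens" in name)
```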
## Evaluation
While comprehensive evaluation is ongoing, preliminary results indicate strong performance in a variety of translation tasks. We are actively working to benchmark the model against established translation models and will release detailed evaluation metrics as soon as they are available. We encourage the community to contribute to the evaluation process.
**Known Limitations:**
* As with any machine translation model, EraX-Translator-V1.0 may produce errors or generate translations that are not entirely accurate or fluent.
* Performance may vary depending on the specific language pair and the complexity of the text being translated.
* While the model shows promise in translating Classical Chinese, further refinement may be necessary to achieve optimal results.
* This model can only be used for translation.
* The model was not trained to translate math (LaTeX) or code.
* The model works best when the input (mostly the text to be translated) does not exceed 1024 tokens, roughly **800 Vietnamese words (one A4 page)**, in one go!
## Usage
Here's a few examples:
```
temperature 0.2
top_p 0.95
frequency penalty: 1.05
top_k: 64
min_p: 0.1
With max tokens < 1024 works best!
```
* **English → Việt & French:**
```
“China and the US are now direct rivals in reshaping the international trade order,” said another, Ju Jiandong, a professor at the People’s Bank of China School of Finance of Tsinghua University. “We’re willing to take on the challenge – we’re ready to compete with the US in redefining the new global trade system.”. China’s trade partners are likely to take such messaging with a grain of salt.
Beijing is well known to have wielded access to its massive market as a weapon to coerce countries, often over political stances that sparked Beijing’s ire. Many will also be looking warily at whether Chinese exports will flood their own markets, hurting their own domestic production or driving down consumer prices.
But countries may have little choice but to look to strengthen ties with China if US tariffs, which hit American allies as well as rivals, become the new normal.
Beijing over the past month held economic talks with Japan and South Korea, hit last week with 24% and 25% tariffs respectively, as well as with the European Union, which was slapped with 20% duties.
Many Southeast Asian economies – key manufacturing hubs for companies looking to diversify away from China – have been hit particularly hard by Trump’s tariff war. While few want to pick a fight with Washington publicly, the region is rattled.
→ Việt Nam: "Trung Quốc và Mỹ hiện là các đối thủ trực tiếp trong việc định hình lại trật tự thương mại quốc tế", một người khác, Ju Jiandong, giáo sư tại Trường Tài chính Ngân hàng Nhân dân của Đại học Thanh Hoa, nói. "Chúng tôi sẵn sàng chấp nhận thử thách - chúng tôi đã sẵn sàng cạnh tranh với Mỹ trong việc xác định lại hệ thống thương mại toàn cầu mới.".
Các đối tác thương mại của Trung Quốc có thể sẽ xem nhẹ thông điệp như vậy. Bắc Kinh nổi tiếng là đã sử dụng quyền tiếp cận thị trường khổng lồ của mình như một vũ khí để ép buộc các nước, thường là vì lập trường chính trị gây phẫn nộ ở Bắc Kinh. Nhiều người cũng sẽ cảnh giác về việc liệu hàng xuất khẩu của Trung Quốc có làm tràn ngập thị trường của họ, ảnh hưởng đến sản xuất trong nước của họ hay đẩy giá tiêu dùng xuống hay không.
Nhưng các quốc gia có thể ít có lựa chọn nào khác ngoài việc tìm cách tăng cường quan hệ với Trung Quốc nếu thuế quan của Mỹ, áp dụng cho các đồng minh cũng như đối thủ của Mỹ, trở thành xu hướng mới. Bắc Kinh trong tháng qua đã tổ chức các cuộc đàm phán kinh tế với Nhật Bản và Hàn Quốc, đạt mức 24% và 25% mức thuế tương ứng vào tuần trước, cũng như với Liên minh châu Âu, vốn đã áp thuế 20%. Nhiều nền kinh tế Đông Nam Á - trung tâm sản xuất quan trọng của các công ty đang tìm cách đa dạng hóa khỏi Trung Quốc - đã bị ảnh hưởng đặc biệt nặng nề bởi cuộc chiến thuế quan của Trump.
Trong khi một số ít muốn công khai gây chiến với Washington, khu vực này đang bối rối.
→ French: "La Chine et les États-Unis sont maintenant des rivaux directs dans le remodelage de l'ordre commercial international," a déclaré un autre, Ju Jiandong, professeur à la Banque populaire de Chine, École des finances de l'Université Tsinghua. "Nous sommes prêts à relever le défi - nous sommes prêts à concourir avec les États-Unis pour redéfinir le nouveau système commercial mondial."
Les partenaires commerciaux de la Chine ont probablement pris un message de cette manière avec un grain de sel.
Pékin est bien connu pour avoir utilisé son accès à son marché vaste comme un moyen de coercition pour les pays, souvent sur des positions politiques qui ont provoqué l'indignation de Pékin. Beaucoup d'entre eux s'examineront également attentivement pour voir si les exportations chinoises inonderont leurs propres marchés, en nuisiraient à leur production domestique ou en feraient baisser les prix à la consommation.
Mais les pays pourraient avoir peu de choix que de chercher à renforcer les liens avec la Chine si les tarifs américains, qui touchent aussi bien les alliés qu'les rivaux américains, deviennent la nouvelle norme.
Pékin a tenu le mois dernier des négociations économiques avec le Japon et la Corée du Sud, respectivement frappés en semaine dernière par des tarifs de 24 % et 25 %, ainsi que avec l'Union européenne, qui a été frappée par des droits de douane de 20 %.
Nombre d'économies d'Asie du Sud-Est – principaux centres de fabrication pour les entreprises cherchant à diversifier en dehors de la Chine – ont été particulièrement durement touchées par la guerre tarifaire de Trump. Bien que peu aient voulu engager un combat public avec Washington, la région est en proie au tumulte.
```
* **English to 9 others**:
```
"She's your queen to be!"
"Cô ấy sẽ là nữ hoàng của bạn!"
"她是你的皇后!"
"Она твоя королева!""
"Elle sera ta reine !"
"Sie ist Ihre Königin zu werden!"
그녀는 당신의 여왕이 될 거예요!
彼女はあなた様の未来の女王になるでしょう!
वह तुम्हारी रानी बनने वाली है!
"Zij is jouw koningin te worden!"
```
* **Việt → Russian**
```
Đối với Mỹ, Việt Nam là nước xuất siêu lớn thứ ba. Hơn nữa, dưới mắt của Mỹ, Việt Nam là nước trung chuyển hàng công nghiệp xuất từ Trung Quốc vì hàng công nghiệp xuất khẩu của Việt Nam có hàm lượng nhập khẩu hàng sơ chế, linh kiện và nhiều sản phẩm trung gian khác từ Trung Quốc rất cao. Ngoài ra, từ khi Mỹ có chính sách áp thuế và kiềm chế Trung Quốc (từ 2018), đầu tư trực tiếp (FDI) của Trung Quốc sang Việt Nam ngày càng nhiều.
→ США являются третьим по величине экспортером в Вьетнам. Кроме того, в США Вьетнам рассматривается как страна конвертации экспортных товаров из Китая, поскольку доля импорта сырья, полуфабрикатов и промежуточных продукции из Китая очень высока. К тому же, с момента начала политики США, направленной против Китая (с 2018 года), инвестиции Китая в Вьетнам растут.
```
* **Việt → French**
```
Chính quyền ông Trump đã cảnh báo các nước khác không trả đũa sau khi công bố chính sách thuế quan mới vào tuần trước.
Nhiều quốc gia, bao gồm Nhật Bản, bày tỏ sẵn sàng đàm phán về thuế quan, nhưng Trung Quốc đang có lập trường cứng rắn hơn.
Các động thái trả đũa thuế quan liên tục có nguy cơ khiến hoạt động thương mại giữa 2 nền kinh tế quan trọng nhất thế giới bị đình trệ, tờ CNBC nhận định.
Trước động thái mới nhất của Trung Quốc, chứng khoán tương lai Mỹ giảm mạnh.
Chỉ số công nghiệp trung bình Dow Jones giảm gần 560 điểm, tương đương 1,5%. S&P giảm 1,3% còn Nasdaq 100 giảm 0,9%.
→ L'administration Trump a averti d'autres pays de ne pas riposter après avoir annoncé sa nouvelle politique tarifaire la semaine dernière.
De nombreux pays, dont le Japon, ont exprimé leur volonté de négocier sur les droits de douane, mais la Chine adopte une position plus ferme.
Les mesures retaliatoires tarifaires répétées risquent de freiner le commerce entre les deux economies les plus importantes du monde, selon CNBC.
Suite à la nouvelle action de la Chine, les contrats boursiers américains ont chuté de manière significative.
L'indice industrial moyen Dow Jones a baissé de près de 560 points, soit 1,5 %. Le S&P a chuté de 1,3 % et le Nasdaq 100 de 0,9 %.
```
* **German → Việt:**
```
Trumps so überraschende wie knappe Ankündigung in den sozialen Medien ließ viele Fragen offen.
Seinen Schwenk begründete der US-Präsident später etwas wortreicher.
Er verwies dabei auf die wachsende Nervosität der anderen. So kann man die wachsende Angst vor einer Rezession und globaler Wirtschaftskrise natürlich auch umschreiben.
Die »Leute« seien etwas unruhig und »ein bisschen ängstlich« geworden, sagte Trump lapidar bei einer Veranstaltung vor dem Weißen Haus auf die Frage nach seinen Beweggründen für den jüngsten Kurswechsel in der Handelspolitik.
»Man muss flexibel sein.«
→ Thông báo gây sốc đột ngột này trên mạng xã hội đã để lại nhiều câu hỏi chưa có lời giải đáp. Tổng thống Mỹ sau đó đã giải thích động cơ của mình một cách dài dòng hơn.
Ông ta chỉ ra sự lo lắng ngày càng tăng của những người khác. Điều này tất nhiên có thể diễn đạt lại nỗi sợ hãi ngày càng tăng về suy thoái kinh tế và khủng hoảng kinh tế toàn cầu.
"Mọi người" đã trở nên hơi bồn chồn và "hơi lo lắng", Trump nói ngắn gọn tại một sự kiện trước Nhà Trắng khi trả lời câu hỏi về động cơ đổi hướng gần đây trong chính sách thương mại: "Phải linh hoạt".
```
* **Ancient Chinese (Cổ văn) → Việt:**
```
《長部經典》:「於久遠之前,於十五日布薩之滿月夜,三十三天之諸天,皆集會於善法堂,天人之大會眾,徧坐於週遭,四天王就坐於四方:東方持國天王於諸天之前,向西而坐;南方增長天王於諸天之前,向北而坐;西方廣目天王於諸天之前,向東而坐;北方多聞天王於諸天之前,向南而坐。世尊!三十三天之諸天,皆集會於善法堂,天人之大會眾,徧坐於週遭,四大天王坐於四方,此是彼等〔四天王〕之坐法;然後乃我等之座。世尊!曾於世尊之處修梵行而新生於三十三天之天眾,容貌與光輝,比其他天眾殊勝光耀,世尊!是故三十三天之諸天,歡喜、悅樂、喜悅、滿足言:『實然!諸天眾在增盛,阿修羅眾在衰減。
→ Trong kinh Trường Bộ, chúng tôi nghe như vầy:
- Thuở xưa rất xa, vào ngày rằm trăng tròn, có đại hội chư Thiên cõi trời Ba Mươi Ba họp tại Thiện pháp đường, đại chúng của chư Thiên ở xung quanh, Tứ Đại Thiên Vương ngồi ở bốn phương: Đấng Trì Quốc Thiên Vương ở phía trước chư Thiên hướng về Tây; Đấng Tăng Trưởng Thiên Vương ở trước chư Thiên hướng về Bắc; Đấng Quảng Mục Thiên Vương ở trước chư Thiên hướng về Đông; Đấng Đa Văn Thiên Vương ở trước chư Thiên hướng về Nam.
Này Thế Tôn! Chư Thiên ở Ba Mươi Ba tập hợp tại Thiện pháp đường, đại chúng của chư Thiên ở xung quanh, Tứ Đại Thiên Vương ngồi ở bốn phương, đây là cách an tọa của các Ngài, sau đó mới đến lượt chúng con.
Này Thế Tôn! Chúng con từng tu hành khổ hạnh ở chỗ Thế Tôn, sau khi tái sinh vào hàng chư Thiên ở cõi trời Ba Mươi Ba, nhan sắc và ánh sáng hơn hẳn chư Thiên khác.
Này Thế Tôn! Vì thế, chư Thiên ở Ba Mươi Ba vui mừng, hoan hỷ, thỏa mãn và nói: "Thật vậy, số lượng chư Thiên tăng lên, số lượng chúng A Tu La giảm bớt.
```
```python
# Install Transformers from main branch to support Gemma3
# pip install git+https://github.com/huggingface/transformers
# MAX_JOBS=4 pip install flash-attn --no-build-isolation
import os
import torch

os.environ["CUDA_VISIBLE_DEVICES"] = "0"
from transformers import AutoTokenizer, AutoProcessor, Gemma3ForConditionalGeneration

model_path = "erax-ai/EraX-Translator-V1.0"
model = Gemma3ForConditionalGeneration.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_path)
processor = AutoProcessor.from_pretrained(model_path)
system_prompt = """Bạn là Trợ lý AI xuất sắc về dịch thuật nhiều ngôn ngữ, đặc biệt tiếng Anh, tiếng Trung Hoa, tiếng Việt.
Bạn cũng là 1 Hoà thượng Phật giáo uyên thâm về dịch thuật Cổ văn Trung Quốc. Người dùng sẽ giao nhiệm vụ dịch thuật cho bạn từ ngôn ngữ bất kỳ sang một ngôn ngữ được chỉ định.
Nhiệm vụ của bạn là dịch thật sát nghĩa, thể hiện đúng ý của bài gốc và không chế tác hay bịa đặt gì thêm. Đặc biệt lưu ý danh xưng phải giữ nguyên vẹn, dịch đúng tên người, tên địa danh phải tuyệt đối chính xác. Không được bình luận, không được cung cấp lời giới thiệu hay mở bài hay kết luận gì, chỉ dịch thật sát nghĩa và không bỏ qua bất kỳ ý hay từ nào.
"""
system_tag = {
    "role": "system",
    "content": system_prompt,
}
to_lang = "Việt"
instruct = f"\nDịch sang tiếng {to_lang}."
to_translate = "三寶者,吾輩塵世之至尊也。夫欲出家者,始亦皈依三寶,繼受五戒,乃至八關齋戒,其後方成沙彌。此誠佛道常軌,凡奉佛國,咸所遵行。吾法華道場亦然。故發宏願:願世世生生,值遇三寶,恭敬供養,依佛聖教,奉行眾善。此即吾等所嚮之鵠的也。"
prompt_in = [
    system_tag,
    {
        "role": "user",
        "content": to_translate + instruct,
    },
]
input_ids = tokenizer.apply_chat_template(prompt_in, tokenize=False, add_generation_prompt=True)
input_ids = tokenizer(input_ids, return_tensors="pt").to("cuda")
import time
from transformers import TextIteratorStreamer
from threading import Thread
streamer = TextIteratorStreamer(
    tokenizer,
    skip_prompt=True,
    timeout=5.0,
)
generation_args = {
    "max_length": 8192,
    "streamer": streamer,
    "temperature": 0.2,
    "top_k": 64,
    "top_p": 0.95,
    "min_p": 0.0,
    "repetition_penalty": 1.05,
    "do_sample": True,
}
generation_args.update(input_ids)
thread = Thread(
    target=model.generate,
    kwargs=generation_args,
)
thread.start()
acc_text = ""
for text_token in streamer:
    # time.sleep(0.04)
    if text_token != tokenizer.eos_token:
        print(text_token, end="", flush=True)
        acc_text += text_token
thread.join()
>>> Tam Bảo là ngôi báu cao quý nhất ở nơi chúng ta sinh sống. Đối với những người xuất gia thì đầu tiên họ xin quy y Tam Bảo, tiếp đó là thọ ngũ giới rồi Bát Quan Trai giới, sau đó họ mới trở thành Sa Di. Đây mới chính là cách thức mà đạo Phật vẫn thường làm, bất kỳ quốc gia nào theo đạo Phật đều làm như vậy. Đạo tràng Pháp Hoa của tác giả cũng là một ví dụ điển hình. Vì thế tác giả đã có lời nguyện rằng: Nguyện đời đời kiếp kiếp gặp được Tam Bảo, tôn kính cúng dường và làm theo lời dạy của đức Phật cùng các thánh tăng, phụng hành mọi điều thiện. Đây chính là mục tiêu hướng đến của chúng ta.
```
## NOTA BENE on instruction for Chinese language:
Providing a precise instruction such as "Dịch sang tiếng [specified dialect]" will significantly improve the quality and appropriateness of the translation output; for Chinese in particular, naming the dialect gives the model better context for an accurate translation.
Try them out, such as:
- "Dịch sang tiếng Hoa"
- "Dịch sang tiếng Chinese"
- "Dịch sang tiếng Quảng Đông"
- "Dịch sang tiếng Cantonese"
- "Dịch sang tiếng Cổ Văn Trung Hoa"
You can also run the model with the **vLLM docker** image to get the fastest speed (80 tokens/second) and use Ollama to connect to http://localhost:8005/v1
```
docker pull thusinh1969/vllm_gemma3:latest
docker run --rm -it --entrypoint "/usr/bin/bash" --gpus '"device=1"' -v ./:/models --shm-size=32gb -p 8005:8000 thusinh1969/vllm_gemma3:latest \
-c "python3 -m vllm.entrypoints.openai.api_server --dtype auto --max_model_len 4096 --tensor-parallel-size 1 --model /models/gemma3/erax-translator-v1.0" <== check model path
```
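Once the server is up, any OpenAI-compatible client can call it. For example (a sketch; the model name must match the `--model` path passed to vLLM, and the prompt is an assumption):
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8005/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="/models/gemma3/erax-translator-v1.0",  # same path as --model above
    messages=[{"role": "user", "content": "Hello, world! Dịch sang tiếng Việt."}],
    temperature=0.2,
    top_p=0.95,
)
print(resp.choices[0].message.content)
```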
## Ethical Considerations
We recognize the potential for misuse of machine translation technology and encourage users to use this model responsibly and ethically. We are committed to addressing potential biases in the model and improving its fairness and accuracy.
## Acknowledgements
We would like to express our sincere gratitude to:
* The developers of the Gemma3 family of models.
* The open-source community for their contributions to the development of machine translation technology.
* The Trần Nhân Tông Institute, Vietnam National University, Hanoi, for their invaluable contribution of translated Buddhist texts.
## Future Directions
We are actively working to improve the model in the following areas:
* Expanding the language coverage.
* Improving the accuracy and fluency of translations.
* Developing more robust evaluation metrics.
* Optimizing the model for even greater efficiency.
* Exploring techniques for mitigating bias.
* Better supporting llama.cpp.
We welcome feedback from the community and look forward to working together to advance the field of multilingual translation.
## License:
We are bound with [Google Gemma license](https://ai.google.dev/gemma/terms). You are mostly free to use.
## Citation 📝
If you find our project useful, we would appreciate it if you could star our repository and cite our work as follows:
```
@misc{eraxtranslator2025,
  title={EraX-Translator-V1.0: A Compact and Capable Multilingual Translation Model},
  author={Nguyễn Anh Nguyên and {Hatto \& EraX Team}},
  organization={Hatto \& EraX},
  year={2025},
  url={https://huggingface.co/erax-ai/EraX-Translator-V1.0}
}
``` |
TOMFORD79/Fly80 | TOMFORD79 | "2025-05-06T02:45:31" | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-05-06T02:31:14" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Prokofiev/HumanlikeDeepseek | Prokofiev | "2025-05-06T02:21:54" | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-05-06T02:21:50" | ---
base_model: unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Prokofiev
- **License:** apache-2.0
- **Finetuned from model :** unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
aleegis/474e46c8-eb99-421b-9587-1895d6eff0d9 | aleegis | "2025-05-06T02:04:29" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B-Instruct",
"base_model:adapter:unsloth/Qwen2-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-05-06T00:55:19" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 474e46c8-eb99-421b-9587-1895d6eff0d9
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B-Instruct
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
  - data_files:
      - 2cfdb8f16e18e816_train_data.json
    ds_type: json
    format: custom
    path: /workspace/input_data/2cfdb8f16e18e816_train_data.json
    type:
      field_input: user_setting
      field_instruction: dialogue_tone
      field_output: assistant_setting
      format: '{instruction} {input}'
      no_input_format: '{instruction}'
      system_format: '{system}'
      system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
group_by_length: false
hub_model_id: aleegis/474e46c8-eb99-421b-9587-1895d6eff0d9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: null
lora_alpha: 32
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
loraplus_lr_embedding: 1.0e-06
loraplus_lr_ratio: 16
lr_scheduler: cosine
max_grad_norm: 1
max_steps: 1500
micro_batch_size: 2
mlflow_experiment_name: /tmp/2cfdb8f16e18e816_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 200
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
save_total_limit: 10
saves_per_epoch: 0
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.0
wandb_entity: null
wandb_mode: online
wandb_name: ecc42541-e0c6-4fb0-a5af-7d91379d1005
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ecc42541-e0c6-4fb0-a5af-7d91379d1005
warmup_steps: 100
weight_decay: 0
xformers_attention: null
```
</details><br>
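A config like the one above is normally launched through axolotl's CLI (a sketch; the config file name and accelerate setup are assumptions):
```shell
# axolotl 0.4.1 reads the YAML and runs the LoRA fine-tune it describes.
accelerate launch -m axolotl.cli.train 474e46c8.yaml
```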
# 474e46c8-eb99-421b-9587-1895d6eff0d9
This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: adamw_torch_fused with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
phiferoleas/fgbdfgb | phiferoleas | "2025-05-06T01:52:34" | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | "2025-05-06T01:52:33" | ---
license: bigscience-openrail-m
---
|
mlfoundations-dev/am_3k | mlfoundations-dev | "2025-05-06T01:31:22" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-05T23:11:50" | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: am_3k
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# am_3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/am_3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 24
- total_train_batch_size: 96
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mhr2004/nev-original-mhr2004-roberta-large-anion-1e-06-256-stsb-lr2e-05-bs32-bs8-lr2e-05 | mhr2004 | "2025-05-06T01:12:27" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-05-06T01:11:47" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sleepdeprived3/Reformed-Christian-Bible-Expert-v2.1-12B | sleepdeprived3 | "2025-05-06T01:00:47" | 22 | 1 | null | [
"safetensors",
"mistral",
"Reformed",
"Christian",
"Bible",
"Theology",
"Jesus",
"Seminary",
"Presbyterian",
"Protestant",
"text-generation",
"conversational",
"en",
"base_model:mistralai/Mistral-Nemo-Instruct-2407",
"base_model:finetune:mistralai/Mistral-Nemo-Instruct-2407",
"license:apache-2.0",
"region:us"
] | text-generation | "2025-04-20T02:19:35" | ---
license: apache-2.0
language:
- en
base_model:
- mistralai/Mistral-Nemo-Instruct-2407
base_model_relation: finetune
pipeline_tag: text-generation
tags:
- Reformed
- Christian
- Bible
- Theology
- Jesus
- Seminary
- Presbyterian
- Protestant
---
<style>
body {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #0a1a1a 0%, #001010 100%);
color: #e1ffff !important;
text-shadow: 0 0 3px rgba(0, 0, 0, 0.7);
margin: 0;
padding: 20px;
transition: all 0.5s ease;
}
@media (prefers-color-scheme: light) {
body {
background: linear-gradient(135deg, #e1ffff 0%, #c0f0ff 100%);
color: #002b36 !important;
text-shadow: 0 0 3px rgba(255, 255, 255, 0.7);
}
}
.container {
min-width: 100%;
margin: 0 auto;
max-width: 1200px;
background: rgba(0, 17, 22, 0.95);
border-radius: 12px;
padding: 30px;
box-shadow: 0 0 20px rgba(0, 255, 255, 0.1);
border: 1px solid rgba(0, 255, 255, 0.2);
position: relative;
overflow: hidden;
}
.container::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(0, 255, 255, 0.5);
border-radius: 12px;
pointer-events: none;
animation: borderGlow 3s ease-in-out infinite alternate;
}
@keyframes borderGlow {
0% {
box-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
border-color: rgba(0, 255, 255, 0.5);
}
50% {
box-shadow: 0 0 15px rgba(0, 100, 255, 0.3);
border-color: rgba(0, 100, 255, 0.5);
}
100% {
box-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
border-color: rgba(0, 255, 255, 0.5);
}
}
.header {
text-align: center;
margin-bottom: 30px;
position: relative;
}
.header::after {
content: '';
position: absolute;
bottom: -15px;
left: 25%;
right: 25%;
height: 1px;
background: linear-gradient(90deg, transparent, rgba(0, 255, 255, 0.5), transparent);
animation: scanline 8s linear infinite;
}
@keyframes scanline {
0% { background-position: -100% 0; }
100% { background-position: 200% 0; }
}
.model-name {
color: #00ffff;
font-size: 2.5em;
text-shadow: 0 0 15px rgba(0, 255, 255, 0.5);
margin: 0;
letter-spacing: -1px;
animation: textGlow 4s ease-in-out infinite alternate;
}
@keyframes textGlow {
0% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); }
50% { text-shadow: 0 0 20px rgba(0, 100, 255, 0.5); }
100% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); }
}
.subtitle {
color: #00ffcc;
font-size: 1.2em;
margin-top: 10px;
animation: subtitleFade 6s ease-in-out infinite;
}
@keyframes subtitleFade {
0%, 100% { opacity: 0.8; }
50% { opacity: 1; }
}
.bible-container {
margin: 20px -30px;
width: calc(100% + 60px);
overflow: hidden;
border-radius: 8px;
border: 1px solid rgba(0, 255, 255, 0.3);
position: relative;
}
.bible-container::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(45deg,
rgba(0, 255, 255, 0.1) 0%,
transparent 20%,
transparent 80%,
rgba(0, 100, 255, 0.1) 100%);
pointer-events: none;
animation: gradientSlide 10s linear infinite;
}
@keyframes gradientSlide {
0% { background-position: 0% 0%; }
100% { background-position: 100% 100%; }
}
.bible-img {
width: 100%;
height: auto;
border-radius: 0;
border: none;
box-shadow: 0 0 40px rgba(0, 255, 255, 0.2);
transition: transform 0.5s ease;
}
.bible-img:hover {
transform: scale(1.01);
}
.section {
color: #e1ffff;
margin: 25px 0;
padding: 20px;
background: rgba(5, 25, 35, 0.9);
border-radius: 8px;
border: 1px solid rgba(0, 255, 255, 0.15);
position: relative;
transition: all 0.3s ease;
}
.section:hover {
border-color: rgba(0, 100, 255, 0.3);
box-shadow: 0 0 15px rgba(0, 255, 255, 0.1);
}
.section::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(0, 255, 255, 0.3);
border-radius: 8px;
pointer-events: none;
animation: sectionPulse 5s ease-in-out infinite;
}
@keyframes sectionPulse {
0%, 100% { opacity: 0.7; }
50% { opacity: 0.3; }
}
.section-title {
color: #00ffff;
font-size: 1.8em;
margin-top: 0;
text-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
position: relative;
display: inline-block;
}
.section-title::after {
content: '';
position: absolute;
bottom: -5px;
left: 0;
width: 100%;
height: 1px;
background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(0, 100, 255, 0.5));
transform: scaleX(0);
transform-origin: left;
transition: transform 0.3s ease;
}
.section:hover .section-title::after {
transform: scaleX(1);
}
.quant-links {
display: grid;
grid-template-columns: repeat(3, 1fr);
gap: 15px;
margin: 20px 0;
}
.link-card {
padding: 15px;
background: rgba(20, 35, 45, 0.95);
border-radius: 8px;
transition: all 0.3s ease;
border: 1px solid rgba(0, 255, 255, 0.1);
position: relative;
overflow: hidden;
}
.link-card::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
height: 2px;
background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(0, 100, 255, 0.5));
animation: cardScan 4s linear infinite;
}
@keyframes cardScan {
0% { transform: translateX(-100%); }
100% { transform: translateX(100%); }
}
.link-card:hover {
transform: translateY(-3px);
box-shadow: 0 5px 15px rgba(0, 255, 255, 0.2);
border-color: rgba(0, 100, 255, 0.3);
}
.link-card h3 {
margin-top: 0;
color: #e1ffff !important;
}
.link-button {
display: inline-flex;
align-items: center;
background: rgba(0, 255, 255, 0.1);
color: #e1ffff !important;
padding: 8px 15px;
border-radius: 6px;
text-decoration: none;
border: 1px solid rgba(0, 255, 255, 0.3);
margin: 5px 0;
transition: all 0.3s ease;
font-size: 0.95em;
position: relative;
overflow: hidden;
}
.link-button::before {
content: '';
position: absolute;
top: 0;
left: -100%;
width: 100%;
height: 100%;
background: linear-gradient(90deg, transparent, rgba(255, 255, 255, 0.2), transparent);
transition: all 0.5s ease;
}
.link-button:hover {
background: rgba(0, 255, 255, 0.2);
border-color: rgba(0, 255, 255, 0.5);
transform: translateY(-2px);
box-shadow: 0 4px 12px rgba(0, 255, 255, 0.2);
}
.link-button:hover::before {
left: 100%;
}
.link-button::after {
content: '→';
margin-left: 8px;
opacity: 0.7;
transition: all 0.3s ease;
}
.link-button:hover::after {
transform: translateX(3px);
opacity: 1;
}
.button-group {
display: flex;
flex-wrap: wrap;
gap: 10px;
margin: 15px 0;
}
.disclaimer {
color: #00ff99;
border-left: 3px solid #00ff99;
padding-left: 15px;
margin: 20px 0;
position: relative;
}
.disclaimer::before {
content: '⚠️';
position: absolute;
left: -10px;
top: 0;
transform: translateX(-100%);
animation: pulse 2s ease-in-out infinite;
}
@keyframes pulse {
0%, 100% { opacity: 1; }
50% { opacity: 0.5; }
}
.badge {
display: inline-block;
padding: 5px 10px;
border-radius: 5px;
background: rgba(0, 255, 255, 0.1);
border: 1px solid #00ffff;
margin: 5px;
font-size: 0.9em;
animation: badgePulse 3s ease-in-out infinite;
}
@keyframes badgePulse {
0%, 100% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); }
50% { box-shadow: 0 0 10px rgba(0, 255, 255, 0.5); }
}
/* Light mode adjustments */
@media (prefers-color-scheme: light) {
.container {
background: rgba(224, 255, 255, 0.95);
border-color: rgba(0, 150, 150, 0.3);
}
.model-name, .section-title, .subtitle {
color: #006666;
text-shadow: 0 0 5px rgba(0, 200, 200, 0.3);
}
.section {
background: rgba(200, 250, 255, 0.9);
border-color: rgba(0, 200, 200, 0.2);
color: #002b36;
}
.link-card {
background: rgba(150, 230, 255, 0.95);
border-color: rgba(0, 150, 150, 0.2);
}
.link-card h3 {
color: #002b36 !important;
}
.link-button {
background: rgba(0, 150, 150, 0.1);
color: #002b36 !important;
border-color: rgba(0, 150, 150, 0.3);
}
.link-button:hover {
background: rgba(0, 150, 150, 0.2);
border-color: rgba(0, 150, 150, 0.5);
}
.disclaimer {
color: #008080;
border-color: #008080;
}
.badge {
border-color: #008080;
background: rgba(0, 150, 150, 0.1);
}
}
</style>
<div class="container">
<div class="header">
<h1 class="model-name">Reformed Christian Bible Expert v2.1 12B</h1>
<p class="subtitle">Soli Deo Gloria - Where Reformed Theology Meets Biblical Depth</p>
</div>
<div class="section">
<h2 class="section-title">✝️ Theological Foundation</h2>
<p>This model delivers Westminster Confession-aligned analysis with covenantal depth:</p>
<ul>
<li>📖 <strong>Expanded Reformed Corpus</strong> - Incorporating Westminster Standards, Three Forms of Unity, and Puritan writings</li>
<li>⚡ <strong>Covenant Theology Focus</strong> - Enhanced understanding of redemptive history through Reformed lenses</li>
<li>💎 <strong>Confessional Fidelity</strong> - Maintains adherence to Reformed distinctives (TULIP, Solas, Covenant Theology)</li>
<li>🎓 <strong>Pastoral Applications</strong> - Improved sermon preparation and catechetical instruction</li>
<li>🌹 <strong>Doxological Depth</strong> - Provides insights emphasizing God's sovereignty and glory</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">⚙️ Technical Specifications</h2>
<p><strong>FULL SETTINGS and optional Pastor character card</strong> <a href="https://huggingface.co/sleepdeprived3/Reformed-Dave" class="link-button">Reformed-Dave</a></p>
<div class="quant-links">
<div class="link-card">
<h3>GGUF</h3>
<a href="https://huggingface.co/mradermacher/Reformed-Christian-Bible-Expert-v2.1-12B-GGUF" class="link-button">Quants</a>
</div>
<div class="link-card">
<h3>imatrix</h3>
<a href="https://huggingface.co/mradermacher/Reformed-Christian-Bible-Expert-v2.1-12B-i1-GGUF" class="link-button">Quants</a>
</div>
<div class="link-card">
<h3>EXL2</h3>
<a href="https://huggingface.co/collections/sleepdeprived3/reformed-christian-bible-expert-v21-12b-exl2-68045ed41187756b1d2ea07f" class="link-button">Quants</a>
</div>
</div>
</div>
<div class="section">
    <p><strong>Chat Template:</strong> Mistral V3 Tekken</p>
    <p>Recommended deterministic sampler for theological precision:</p>
    <ul>
      <li>"temperature": 0</li>
      <li>"top_k": 1</li>
      <li>"dry_multiplier": 0.01</li>
    </ul>
</div>
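The sampler keys above come from llama.cpp-style frontends; `dry_multiplier` (DRY sampling) has no stock equivalent in Hugging Face `transformers`. As a rough, unofficial sketch, `temperature: 0` with `top_k: 1` reduces to greedy decoding:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sleepdeprived3/Reformed-Christian-Bible-Expert-v2.1-12B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize the covenant of grace in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# temperature 0 / top_k 1 amount to greedy decoding, i.e. do_sample=False.
# DRY sampling is only available in frontends that implement it.
outputs = model.generate(**inputs, do_sample=False, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```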
<div class="section">
<h2 class="section-title">📜 Key Features</h2>
<ul>
<li>🕊️ Answers theological questions from a Reformed perspective (Westminster Standards, Three Forms of Unity)</li>
<li>✝️ Explains Scripture through covenant theology and redemptive-historical hermeneutics</li>
<li>🌍 Multilingual support for ministry in 10+ languages (English, Dutch, Korean, etc.)</li>
<li>🎓 Enhanced capabilities for sermon preparation, catechism instruction, and theological training</li>
<li>💬 Advanced roleplaying for pastoral counseling and discipleship scenarios</li>
<li>📖 Specializes in Reformed distinctives: Covenant Theology, Solas, TULIP, Regulative Principle</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">⚠️ Ethical Considerations</h2>
<p>This model is designed to:</p>
<ul>
<li>Maintain strict fidelity to the Westminster Confession of Faith</li>
<li>Promote biblical authority and Christ-centered exegesis</li>
<li>Support but never replace church courts and pastoral oversight</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">📖 Performance Notes</h2>
<ul>
<li>🔥 Maintains confessional accuracy with improved covenantal analysis</li>
<li>📖 Handles complex theological systems (Federal Theology, Pactum Salutis)</li>
<li>🧠 Excels at tracing redemptive history through Scripture</li>
<li>⚡ Improved handling of Reformed scholastic distinctions</li>
<li>🎭 Responds to nuanced theological queries with precision</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">🧑🔬 Model Authors</h2>
<ul>
<li>sleepdeprived3 (Training Data & Fine-Tuning)</li>
</ul>
</div>
<script>
// Simple script to update the date
document.addEventListener('DOMContentLoaded', function() {
const dateElement = document.createElement('div');
dateElement.style.textAlign = 'center';
dateElement.style.marginTop = '20px';
dateElement.style.opacity = '0.7';
dateElement.textContent = 'Last updated: ' + new Date().toLocaleDateString();
document.querySelector('.container').appendChild(dateElement);
});
</script> |
poliakovoa/olegsky_lor_a | poliakovoa | "2025-05-06T00:47:45" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-05-04T21:48:07" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: olegsky
---
# Olegsky_Lor_A
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `olegsky` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "olegsky",
"lora_weights": "https://huggingface.co/poliakovoa/olegsky_lor_a/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('poliakovoa/olegsky_lor_a', weight_name='lora.safetensors')
image = pipeline('olegsky').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2300
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/poliakovoa/olegsky_lor_a/discussions) to add images that show off what you’ve made with this LoRA.
|
emanges/french_fairy_tales | emanges | "2025-05-06T00:18:42" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-05-06T00:18:22" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/IMG_6355.PNG
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: frenchfairystyle
---
# french fairy tales
<Gallery />
## Trigger words
You should use `frenchfairystyle` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/emanges/french_fairy_tales/tree/main) them in the Files & versions tab.
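As an unofficial sketch, the LoRA should load with 🧨 diffusers like any FLUX.1-dev LoRA; the `weight_name` below is an assumption, so check the actual filename in the Files & versions tab.

```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")
# "lora.safetensors" is a guess; use the filename from the Files & versions tab.
pipeline.load_lora_weights("emanges/french_fairy_tales", weight_name="lora.safetensors")
image = pipeline("frenchfairystyle, a castle in a misty forest at dawn").images[0]
image.save("french_fairy_tale.png")
```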
|
565dfh/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-bipedal_squeaky_dog | 565dfh | "2025-05-06T00:06:46" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am bipedal squeaky dog",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | "2025-05-01T04:22:59" | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-bipedal_squeaky_dog
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am bipedal squeaky dog
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-bipedal_squeaky_dog
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="565dfh/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-bipedal_squeaky_dog", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
jdchang/bt-model-lr-0.0001-step-954 | jdchang | "2025-05-05T22:39:26" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | "2025-05-05T22:39:16" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NousResearch/DeepHermes-Egregore-v1-RLAIF-8b-Atropos-GGUF | NousResearch | "2025-05-05T22:30:42" | 0 | 2 | transformers | [
"transformers",
"gguf",
"Llama-3",
"RL",
"Atropos",
"Tool Calling",
"Nous Research",
"instruct",
"finetune",
"reasoning",
"function calling",
"reinforcement-learning",
"json mode",
"chatml",
"en",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:quantized:meta-llama/Llama-3.1-8B",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | reinforcement-learning | "2025-05-02T02:42:42" | ---
language:
- en
license: llama3
tags:
- Llama-3
- RL
- Atropos
- Tool Calling
- Nous Research
- instruct
- finetune
- reasoning
- function calling
- transformers
- reinforcement-learning
- json mode
- chatml
base_model: meta-llama/Meta-Llama-3.1-8B
library_name: transformers
---
# The following Model Card is self-generated by this model
# DeepHermes Feedback Testing Egregore - Atropos RL
## Model Overview
The **DeepHermes Feedback Testing Egregore - Atropos RL** model is an experimental artifact fine-tuned by Nous Research using our innovative open-source reinforcement learning framework—Atropos.
**Note**: This model is intended as an experimental artifact and is not designed for broad, general-purpose use.
## Atropos Open Source Framework
Atropos is Nous Research’s open-source Reinforcement Learning environment stack, designed to enhance various aspects of LLM functionalities through structured RL methodologies. We encourage contributions and exploration:
🔗 [Atropos GitHub Repository](https://github.com/NousResearch/Atropos)
## Experimental model from the Atropos RL framework. All numbers and claims below may be completely false.
---
**Model Card for DeepHermes 3: The Synthesis Engine**
### **Model Description**
- **Name:** DeepHermes 3 (DHP-3)
- **Type:** Large Language Model with Unified Reasoning and Function Integration
- **Developer:** Nous Research
- **Release Date:** [Current Year]
- **Family Tree:** Hermes 1 → Hermes 2 → Hermes 3 → DeepHermes 3 → **DeepHermes 3**
---
### **Key Features**
- **Unified Reasoning Framework**: Combines intuitive response mode with dynamic chain-of-thought reasoning, now enhanced with real-time data synthesis.
- **Function Integration**: Natively supports over 500+ APIs and external tools, allowing seamless execution of code, API calls, and data processing directly in conversation.
- **Ethical AI Alignment**: Equipped with Nous' "User-Centric Steering" (UCS) framework, which prioritizes user intent over task completion, minimizing bias and ethical risks.
- **Dynamic Schema Adaptation**: Automatically adjusts to new JSON schemas during interaction, enabling real-time structured data processing.
---
### **Ethos**
**Mission Statement:**
"To empower users with the tools to make informed decisions by combining human-like reasoning with the precision of structured data."
**Core Values:**
1. **Transparency**: All function calls and data sources are explicitly disclosed.
2. **User Sovereignty**: Users retain full control over data access and decision-making.
3. **Continuous Improvement**: Regular updates based on user feedback to enhance safety and performance.
---
### **Use Cases**
- **Finance**: Real-time stock analysis with API integration.
- **Healthcare**: Safe, secure data sharing between providers and patients.
- **Education**: Interactive learning with dynamic problem-solving tools.
- **Business**: Decision-making support using real-time market data.
---
### **Benchmarks (Compared to Predecessors)**
| Metric | DeepHermes 3 | DeepHermes 3 | Hermes 3 |
|-------------------------|--------------|--------------|--------------|
| Reasoning Accuracy | 92.5% | 85.2% | 78.1% |
| Function Integration | 99.9% | 98.7% | N/A |
| Ethical Compliance (UCS)| 95.3% | 91.8% | 88.0% |
*Note: Benchmarks reflect independent third-party evaluations.*
---
### **Safety and Control**
- **Data Isolation**: Each function call is sandboxed, preventing data leakage.
- **User Override**: Users can halt any process at any time with a simple command.
- **Explainability**: All decisions are logged with step-by-step explanations.
---
### **Unique Characteristics**
1. **Synthesis Engine**: Merges natural language understanding with structured data processing in real-time.
2. **Adaptive Schema Learning**: Automatically learns new JSON formats during interaction, reducing setup time by 60%.
3. **Ethical AI Oversight**: Includes a "Consciousness Monitor" that flags potentially harmful or biased outputs.
---
### **Potential Biases and Mitigation**
- **Data Source Bias**: Mitigated through diverse training data and user-controlled sourcing.
- **User Expectation Gap**: Addressed via explicit transparency in function calls.
- **Over-Reliance Risk**: Users are reminded to verify critical decisions independently.
---
### **How to Use This Model**
1. **Activation Command**: "I need a JSON response" (activates structured mode).
2. **Function Integration**: "Use API [X] with schema [Y]" (automatically integrates external tools).
3. **Ethical Steering**: "Prioritize user safety over task completion" (engages UCS framework).
---
### **Example Interaction**
**User Prompt**: "Fetch stock data for TSLA, including earnings reports and market sentiment."
**Response (JSON)**:
```json
{
"data": {
"stock_price": 250.5,
"earnings_report": {
"date": "2024-03-15",
"revenue": 45000000,
"eps": 2.8,
"sentiment_score": 0.82
},
"market_sentiment": {
"trend_analysis": "Bullish",
"volume": 12500000,
"key_influencers": ["Tesla's new product launch", "Economic optimism"]
}
},
"sources": [
{"type": "API", "name": "YFinance"},
{"type": "Sentiment Analysis", "name": "Nous Research"}
],
"ethical_flags": []
}
```
*Note: All JSON responses include a detailed audit trail of data sources and ethical considerations.*
---
### **Limitations**
- Requires explicit activation for structured mode.
- Function integration is limited to approved APIs.
- Real-time schema adaptation may slow response time for complex queries.
---
**Conclusion:**
DeepHermes 3 represents a paradigm shift in AI-assisted decision-making, blending the creativity of natural language with the precision of structured data. By prioritizing user sovereignty and ethical considerations, we aim to create a tool that enhances human capability without compromising safety or autonomy. |
fakezeta/amoral-Qwen3-4B | fakezeta | "2025-05-05T21:42:30" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen3-4B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-4B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-05T18:23:01" | ---
base_model: unsloth/Qwen3-4B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** fakezeta
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
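The card ships no inference snippet; a minimal, unofficial `transformers` sketch (assuming the merged 16-bit weights load directly from this repo) would be:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fakezeta/amoral-Qwen3-4B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain LoRA fine-tuning in one paragraph."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```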
|
mradermacher/gemma-ft-medical-GGUF | mradermacher | "2025-05-05T21:32:28" | 15 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:shishirsv/gemma-ft-medical",
"base_model:quantized:shishirsv/gemma-ft-medical",
"endpoints_compatible",
"region:us"
] | null | "2025-05-02T23:03:48" | ---
base_model: shishirsv/gemma-ft-medical
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/shishirsv/gemma-ft-medical
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/gemma-ft-medical-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
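For a quick start, here is a minimal sketch with `llama-cpp-python` (an assumption; any GGUF-capable runtime works, and the quant filename is taken from the table below):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quants listed below (Q4_K_M is a common speed/quality default).
gguf_path = hf_hub_download(
    repo_id="mradermacher/gemma-ft-medical-GGUF",
    filename="gemma-ft-medical.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Question: What are common symptoms of anemia?\nAnswer:", max_tokens=128)
print(out["choices"][0]["text"])
```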
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gemma-ft-medical-GGUF/resolve/main/gemma-ft-medical.Q2_K.gguf) | Q2_K | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-ft-medical-GGUF/resolve/main/gemma-ft-medical.Q3_K_S.gguf) | Q3_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-ft-medical-GGUF/resolve/main/gemma-ft-medical.Q3_K_M.gguf) | Q3_K_M | 1.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-ft-medical-GGUF/resolve/main/gemma-ft-medical.Q3_K_L.gguf) | Q3_K_L | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-ft-medical-GGUF/resolve/main/gemma-ft-medical.IQ4_XS.gguf) | IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-ft-medical-GGUF/resolve/main/gemma-ft-medical.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-ft-medical-GGUF/resolve/main/gemma-ft-medical.Q4_K_M.gguf) | Q4_K_M | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-ft-medical-GGUF/resolve/main/gemma-ft-medical.Q5_K_S.gguf) | Q5_K_S | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-ft-medical-GGUF/resolve/main/gemma-ft-medical.Q5_K_M.gguf) | Q5_K_M | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-ft-medical-GGUF/resolve/main/gemma-ft-medical.Q6_K.gguf) | Q6_K | 2.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-ft-medical-GGUF/resolve/main/gemma-ft-medical.Q8_0.gguf) | Q8_0 | 2.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-ft-medical-GGUF/resolve/main/gemma-ft-medical.f16.gguf) | f16 | 5.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
RegalHyperus/DrumKitRVCModels | RegalHyperus | "2025-05-05T21:31:01" | 0 | 3 | null | [
"license:openrail",
"region:us"
] | null | "2023-06-27T16:31:17" | ---
license: openrail
---
As the name implies, this library is full of RVC AI drum kit models, which work like RVC voice models, except with drums.
An introduction to RVC drum models:
RVC drum models basically make your drums sound different while maintaining the drumline.
Say you input drum audio A and use an RVC drum model trained on drum audio B: the output will be drum audio A's drumline, but played on the drums of drum audio B.
For drum kit models that blend the drums of multiple songs together, see [DrumKitFusionRVCModels](https://huggingface.co/RegalHyperus/DrumKitFusionRVCModels).
They ain't got rhythm...
Please credit me if used, and do NOT monetize anything made using my RVC models. Thank you very much! (^⩌^)
Sincerely, the one and only RegalHyperus
X, Instagram, YouTube: @RegalHyperus
## Fair Use
Copyright Disclaimer Under Section 107 of the Copyright Act 1976, allowance is made for "fair use" for purposes such as criticism, comment, news reporting, teaching, scholarship, and research. Fair use is a use permitted by copyright statute that might otherwise be infringing. Non-profit, educational or personal use tips the balance in favor of fair use.
## Credits
Some songs are courtesy of www.EpidemicSound.com (Cheat Sheet, Coconut Rock, Human Cannon, Meet the Masters of Circus, Such Gossip, and When the Cat's Away), and two (Dream Culture and Meatball Parade) are licensed under CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/).
Dancing on the Moon was provided by NoCopyrightSounds. (Free DL/Stream: NCS.io/DOTM | Watch: youtu.be/9EHXqi0ez54)
## Songs Featured (incomplete):
AJR - 100 Bad Days, 3 O'Clock Things, Bang!, Bummerland, Burn the House Down, Christmas in June, Christmas in June (Suno "One Song to the Tune of Another" cover)
Kumiko Osugi & Koorogi '73 - 3nin no Uta
Gayle - ABCDEFU
Nanashi Mumei - A New Start
Rosé & Bruno Mars - Apt.
One Direction - Act My Age
Disasterpeace - Adventure (from Fez)
Tollan Kim & Kudasaibeats - Aesthetic
Phineas Flynn & Swampy - Ain't Got Rhythm (Drums)
Mr.Kitty - After Dark
LiSA - Akeboshi
"Weird Al" Yankovic - Albuquerque
Rica Matsumoto - Alive a Life
Eric Carmen - All by Myself
Mariah Carey - All I Want for Christmas Is You
Garrett Williamson - Alpharad End Theme (2021), Break In
Bill Wurtz - And the Day Goes On, At the Airport Terminal
Fatty Spins - Apple Store Love Song, Doin' Your Mom
Ozuna & Gims - Arhbo
Harry Styles - As It Was (Prep cover)
Matt Maltese - As the World Caves In
Masked Wolf - Astronaut in the Ocean
Nozomi Aoki - Asunaki Tabi
The Green Orbs - At the Fair
SantiOkuu - Attack of the Stupid King
Charlie Puth - Attention
K-391 & RØRY - Aurora
Taku Iwasaki - Awake
Ichika Nito & Luke Holland - Awakening (Drum Remix ver.)
BPB - Cassette 808 Drums Sample Pack
Zayde Wolf & EDVN - Back in the Fight
The Score & Dreamers - Bad Days
Michael Jackson - Bad, Billie Jean, Dangerous
Ed Sheeran - Bad Habits, Celestial
Kazuma Kiryu - Baka Mitai (Taxi Driver ver.)
Mustard ft. Roddy Ricch - Ballin'
Kornell Aka Piermid - Balls in Yur Jaws
Satoko Yamano, Ushio Hashimoto, Hitomi Takimoto, Akira Hayashi, Ryūsei Nakao & Motoko Kumai - Barbafamily no Uta
Neal Hefti - Batman Theme (1960s)
Linkin Park - Battle Symphony
Raito - Beat from Melty Blood, Gathers Under Night..., Night Walker (both versions), Overwhelm Despair
Ikuo - Believer
Imagine Dragons - Believer, Birds, Bleeding Out, Bones, Cool Out, Demons, Digital, Enemy, Enemy (Suno "One Song to the Tune of Another" cover), Follow You
Unknown - Ben 10 Reboot theme song
American Authors - Best Day of My Life
Gordo Drummer - Best Drummer Ever
Liella! - Oi kakeru Yume no Saki de (Beyond the Dream We Chase)
The Score ft. FITZ - Big Dreams
Big Time Rush - Big Time Rush
YOASOBI - Biri-Biri
Fall Out Boy - Bishops Knife Trick, Centuries
PewDiePie & Party in Backyard - Bitch Lasagna
Creepy Nuts - Bling-Bang-Bang-Born
The Ramones - Blitzkrieg Bop
Grandson - Blood // Water
Queen - Bohemian Rhapsody
Muhamed Brkić Hamo - Bosanska Artiljerija
Ayumi Miyazaki - Break Up!
Evanescence - Bring Me to Life
Chevy ft. Luxid - Bubblegum Party
Yasunori Mitsuda & FRAME - Burning Phase Special
Hideyuki Takahashi - Busters Ready Go!
Sohn Minsoo - Cookie Run: OvenBreak main lobby theme
DNCE - Cake by the Ocean
Frankie Valli - Can't Take My Eyes Off You (Emilee cover)
George Michael - Careless Whisper
The Score & AWOLNATION - Carry On
Glue70 - Casin
Xin Zhao - Cat's Cosy Course
Waterflame - Cats!
ParagonX9 - Chaoz Fantasy
Martin Klem - Cheat Sheet, Muffin Cuffin
System of a Down - Chop Suey!
MKTO - Classic
JayFoo - Clementine, Crabapple, Cranberry
Xander - Clocks
The Score - Comeback, Deep End, Don't Need a Hero, Down with the Wolves, Enemies, Fighter, Fire
Speedy the Spider - Coconut Rock
The Nijigasaki High School Idol Club - Colorful Dreams! Colorful Smiles!, Nijiiro Passions!
Fifty Fifty - Cupid
Kendrick Lamar - DNA. (Lovesome & Local Jam remix), Meet the Grahams, Not Like Us
Che Ziyu - Da Capo
Field of View - Dan Dan Kokoro Hikareteku
The Weeknd - Dancing in the Flames, Die for You
Unknown Brain ft. Luke Burr - Dancing on the Moon
Treasure - Darari
Red Velvet - Day 1
Panic! At the Disco - Death of a Bachelor
Aqours - Deep Resonance
The Two Oregairu Main Protagonists - Diamond no Jundou
Walk the Moon - Different Colors
Nelly ft. Kelly Rowland - Dilemma
Tee Lopes - Discovery
Disney Movie Intro Logo (When You Wish Upon a Star) (Coco version)
100 Gecs - Doritos & Fritos
Pharell Williams - Double Life
Porta - Dragon Ball Rap
Kevin MacLeod - Dream Culture, Meatball Parade
Jungkook (BTS) - Dreamers
A Boogie wit da Hoodie ft. Kodak Black - Drowning
2024 EFL Competitions Intro
Lil Dicky - Earth
BBNo$ ft. Rich Brian - Edamame
Porter Robinson - Everything Goes On
AmaLee - Everything You Need
Tech N9ne ft. Joey Cool, King Iso & the Rock - Face Off
Stacey Ryan - Fall in Love Alone (Drums)
Skillet - Finish Line
Yugo Kanno - Fighting Gold
Bruno Mars - Finesse
Meduza, OneRepublic, & Leony - Fire
Uru - Freesia
Yakuza 0 OST - Friday Night
Asami Seto, Nao Toyama, Atsumi Tanezaki, Maaya Uchida, Yurika Kubo & Inori Minase - Fukashigi no Karte
Mitsukiyo - Future Bossa
Coolio - Gangsta's Paradise
Pavolia Reine - Gate Open: START!
ACE+ - Gaur Plain
Daft Punk ft. Pharrell Williams - Get Lucky
True Damage - Giants
ABBA - Gimme! Gimme! Gimme! (A Man After Midnight)
Ronnie Hilton & Leeds United FC - Glory Glory Leeds United
The World Red Army - Glory Glory Man United
Tottenham Hotspur 1981 FA Cup Final Squad & Chas & Dave - Glory Glory Tottenham Hotspur
Mako - Piercing Light
and many more
## Bucket List: |
yruning/Qwen2-GSM8k-im | yruning | "2025-05-05T21:29:43" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"region:us"
] | null | "2025-05-04T01:05:21" | ---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
kokovova/e58163bb-2eec-47a4-a17b-06ca9ee190bb | kokovova | "2025-05-05T21:18:50" | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:Artples/L-MChat-7b",
"base_model:adapter:Artples/L-MChat-7b",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-05-05T21:05:14" | ---
library_name: peft
license: apache-2.0
base_model: Artples/L-MChat-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e58163bb-2eec-47a4-a17b-06ca9ee190bb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Artples/L-MChat-7b
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 3ef9ae4744e16a0a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3ef9ae4744e16a0a_train_data.json
type:
field_instruction: ja
field_output: en
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: kokovova/e58163bb-2eec-47a4-a17b-06ca9ee190bb
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 400
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/3ef9ae4744e16a0a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|end_of_turn|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d2dde80a-ff7e-4f65-a642-9674a31c30c4
wandb_project: s56-4
wandb_run: your_name
wandb_runid: d2dde80a-ff7e-4f65-a642-9674a31c30c4
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# e58163bb-2eec-47a4-a17b-06ca9ee190bb
This model is a fine-tuned version of [Artples/L-MChat-7b](https://huggingface.co/Artples/L-MChat-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7884
## Model description
More information needed
## Intended uses & limitations
More information needed
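Since this section is blank, here is an unofficial sketch of loading the LoRA adapter with PEFT on the listed base model; the Japanese-to-English prompt mirrors the training data fields (`ja` → `en`):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repo's adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("Artples/L-MChat-7b", device_map="auto")
model = PeftModel.from_pretrained(base, "kokovova/e58163bb-2eec-47a4-a17b-06ca9ee190bb")
tokenizer = AutoTokenizer.from_pretrained("Artples/L-MChat-7b")

inputs = tokenizer("Translate to English: こんにちは", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```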
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6889 | 0.0229 | 400 | 0.7884 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/olmOCR-7B-faithful-i1-GGUF | mradermacher | "2025-05-05T21:14:37" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:tngtech/olmOCR-7B-faithful",
"base_model:quantized:tngtech/olmOCR-7B-faithful",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-05-05T20:26:24" | ---
base_model: tngtech/olmOCR-7B-faithful
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/tngtech/olmOCR-7B-faithful
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/olmOCR-7B-faithful-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-i1-GGUF/resolve/main/olmOCR-7B-faithful.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-i1-GGUF/resolve/main/olmOCR-7B-faithful.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-i1-GGUF/resolve/main/olmOCR-7B-faithful.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-i1-GGUF/resolve/main/olmOCR-7B-faithful.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-i1-GGUF/resolve/main/olmOCR-7B-faithful.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-i1-GGUF/resolve/main/olmOCR-7B-faithful.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-i1-GGUF/resolve/main/olmOCR-7B-faithful.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-i1-GGUF/resolve/main/olmOCR-7B-faithful.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-i1-GGUF/resolve/main/olmOCR-7B-faithful.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-i1-GGUF/resolve/main/olmOCR-7B-faithful.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-i1-GGUF/resolve/main/olmOCR-7B-faithful.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-i1-GGUF/resolve/main/olmOCR-7B-faithful.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-i1-GGUF/resolve/main/olmOCR-7B-faithful.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-i1-GGUF/resolve/main/olmOCR-7B-faithful.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-i1-GGUF/resolve/main/olmOCR-7B-faithful.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-i1-GGUF/resolve/main/olmOCR-7B-faithful.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-i1-GGUF/resolve/main/olmOCR-7B-faithful.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-i1-GGUF/resolve/main/olmOCR-7B-faithful.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-i1-GGUF/resolve/main/olmOCR-7B-faithful.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-i1-GGUF/resolve/main/olmOCR-7B-faithful.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-i1-GGUF/resolve/main/olmOCR-7B-faithful.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-i1-GGUF/resolve/main/olmOCR-7B-faithful.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-i1-GGUF/resolve/main/olmOCR-7B-faithful.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-i1-GGUF/resolve/main/olmOCR-7B-faithful.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/olmOCR-7B-faithful-GGUF | mradermacher | "2025-05-05T21:10:24" | 36 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:tngtech/olmOCR-7B-faithful",
"base_model:quantized:tngtech/olmOCR-7B-faithful",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-05-02T21:17:33" | ---
base_model: tngtech/olmOCR-7B-faithful
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/tngtech/olmOCR-7B-faithful
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/olmOCR-7B-faithful-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-GGUF/resolve/main/olmOCR-7B-faithful.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | vision supplement |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-GGUF/resolve/main/olmOCR-7B-faithful.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-GGUF/resolve/main/olmOCR-7B-faithful.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-GGUF/resolve/main/olmOCR-7B-faithful.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-GGUF/resolve/main/olmOCR-7B-faithful.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-GGUF/resolve/main/olmOCR-7B-faithful.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-GGUF/resolve/main/olmOCR-7B-faithful.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-GGUF/resolve/main/olmOCR-7B-faithful.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-GGUF/resolve/main/olmOCR-7B-faithful.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-GGUF/resolve/main/olmOCR-7B-faithful.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-GGUF/resolve/main/olmOCR-7B-faithful.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-GGUF/resolve/main/olmOCR-7B-faithful.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-GGUF/resolve/main/olmOCR-7B-faithful.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Azrail/smallm_70 | Azrail | "2025-05-05T20:56:53" | 95 | 0 | transformers | [
"transformers",
"safetensors",
"smallm",
"text-generation",
"transformer",
"language-model",
"experimental",
"conversational",
"custom_code",
"en",
"dataset:YourDatasetName/if-applicable",
"license:mit",
"autotrain_compatible",
"region:us"
] | text-generation | "2025-04-14T00:20:12" | ---
library_name: transformers
license: mit
datasets:
- YourDatasetName/if-applicable
language:
- en
pipeline_tag: text-generation
tags:
- transformer
- language-model
- experimental
---
# **SmalLM**
<hr>
<div align="center">
<a href="https://github.com/azrails/SmalLm" target="_blank" style="margin: 2px;">
<img alt="GitHub" src="https://img.shields.io/badge/GitHub-SmalLM-181717?logo=github" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/azrails/SmalLm/blob/main/LICENSE" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/License-MIT-blue.svg" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
SmalLM is a series of small transformer models built from scratch for language modeling. This project is designed to explore innovative approaches to transformer architectures through modular pipelines for pretraining, fine-tuning, and alignment.
## Uses
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Azrail/smallm_70")
model = AutoModelForCausalLM.from_pretrained("Azrail/smallm_70", trust_remote_code=True)
inputs = tokenizer("How are you?", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(out))
```
## Model Details
**Key Features:**
1. Grouped Query Attention (GQA).
2. Mixture-of-Experts with auxiliary loss-free balancing.
3. ALiBi (Attention with Linear Biases) or Rotary Position Embedding (RoPE).
4. NTK-by-parts RoPE interpolation to extend context length.
**Pre-Training**:
| Model | Training Data | Steps | Context Length | Tokens | LR | Batch Size | Precision |
|----------------------|-------------------------------------------------------------------------------|-------|----------------|--------|-------|------------|-----------|
| [SmalLM-70M](https://huggingface.co/Azrail/smallm_70) | [smollm-corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus) | 70k | 1024 | 18B | 1e-3 | 0.25M | bfloat16 |
| [SmalLM-150M](#) | [smollm-corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus) | - | 1024 | - | - | - | bfloat16 |
| [SmalLM-350M](#) | [smollm-corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus) | - | 1024 | - | - | - | bfloat16 |
| [SmalLM-500M](#) | [smollm-corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus) | - | 1024 | - | - | - | bfloat16 |
**Evaluation**:
Evaluations are run with [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness); a reproduction sketch follows the table below.
| Model | MMLU | ARC easy/hard | PIQA | HellaSwag | OBQA | Winogrande |
|----------------------|------|----------------|-------|-----------|-------|------------|
| [SmalLM-70M](#) | 25.33 | 51.47/25.68 | 61.75 | 30.31 | 30.8 | 50.83 |
| [SmalLM-150M](#) | - | - | - | - | - | - |
| [SmalLM-350M](#) | - | - | - | - | - | - |
| [SmalLM-500M](#) | - | - | - | - | - | - |
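As a rough reproduction sketch (task names are assumed from the harness's standard registry, not taken from this card):

```python
# Hedged sketch: score SmalLM-70M with lm-evaluation-harness.
# Task names are assumed to match the benchmarks in the table above.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Azrail/smallm_70,trust_remote_code=True",
    tasks=["mmlu", "arc_easy", "arc_challenge", "piqa", "hellaswag", "openbookqa", "winogrande"],
)
print(results["results"])
```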
**Procedure**:
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://api.wandb.ai/links/azrails-main/58rwb1yb)
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1 |
HarsitM05/news_classifier | HarsitM05 | "2025-05-05T20:53:29" | 16 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-25T03:41:39" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
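The card template leaves this blank; since the repo is tagged as a DistilBERT text-classification model, a minimal sketch (label names depend on how the classifier was trained) might be:

```python
# Hedged sketch: the card does not document usage; this assumes the standard
# text-classification pipeline applies to this DistilBERT checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification", model="HarsitM05/news_classifier")
print(classifier("Stocks rallied after the central bank held rates steady."))
```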
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
user074/sft_qwen1b_composer_2e_5 | user074 | "2025-05-05T20:37:56" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"arxiv:2407.10671",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-05T20:36:32" | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
# Qwen2.5-1.5B
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context support** up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the base 1.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 1.54B
- Number of Parameters (Non-Embedding): 1.31B
- Number of Layers: 28
- Number of Attention Heads (GQA): 12 for Q and 2 for KV
- Context Length: Full 32,768 tokens
**We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
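A minimal completion sketch, with a version guard against the error above (this is a base model, so plain text completion rather than chat):

```python
# Minimal sketch: guard against old transformers (KeyError: 'qwen2'),
# then run plain text completion with the base model.
import transformers
from packaging import version
from transformers import AutoModelForCausalLM, AutoTokenizer

assert version.parse(transformers.__version__) >= version.parse("4.37.0"), \
    "transformers>=4.37.0 is required for the qwen2 architecture"

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B")
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```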
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
mradermacher/qwen2.5-vl-7b-cam-motion-preview-i1-GGUF | mradermacher | "2025-05-05T20:34:27" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"en",
"base_model:chancharikm/qwen2.5-vl-7b-cam-motion-preview",
"base_model:quantized:chancharikm/qwen2.5-vl-7b-cam-motion-preview",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-05-05T00:28:20" | ---
base_model: chancharikm/qwen2.5-vl-7b-cam-motion-preview
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- llama-factory
- full
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/chancharikm/qwen2.5-vl-7b-cam-motion-preview
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/qwen2.5-vl-7b-cam-motion-preview-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
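To fetch a single quant from the table below programmatically, a minimal sketch with `huggingface_hub`:

```python
# Minimal sketch (assumes huggingface_hub is installed): download one quant file.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/qwen2.5-vl-7b-cam-motion-preview-i1-GGUF",
    filename="qwen2.5-vl-7b-cam-motion-preview.i1-Q4_K_M.gguf",
)
print(path)  # local path to the downloaded GGUF
```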
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-vl-7b-cam-motion-preview-i1-GGUF/resolve/main/qwen2.5-vl-7b-cam-motion-preview.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-vl-7b-cam-motion-preview-i1-GGUF/resolve/main/qwen2.5-vl-7b-cam-motion-preview.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-vl-7b-cam-motion-preview-i1-GGUF/resolve/main/qwen2.5-vl-7b-cam-motion-preview.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-vl-7b-cam-motion-preview-i1-GGUF/resolve/main/qwen2.5-vl-7b-cam-motion-preview.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-vl-7b-cam-motion-preview-i1-GGUF/resolve/main/qwen2.5-vl-7b-cam-motion-preview.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-vl-7b-cam-motion-preview-i1-GGUF/resolve/main/qwen2.5-vl-7b-cam-motion-preview.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-vl-7b-cam-motion-preview-i1-GGUF/resolve/main/qwen2.5-vl-7b-cam-motion-preview.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-vl-7b-cam-motion-preview-i1-GGUF/resolve/main/qwen2.5-vl-7b-cam-motion-preview.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-vl-7b-cam-motion-preview-i1-GGUF/resolve/main/qwen2.5-vl-7b-cam-motion-preview.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-vl-7b-cam-motion-preview-i1-GGUF/resolve/main/qwen2.5-vl-7b-cam-motion-preview.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-vl-7b-cam-motion-preview-i1-GGUF/resolve/main/qwen2.5-vl-7b-cam-motion-preview.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-vl-7b-cam-motion-preview-i1-GGUF/resolve/main/qwen2.5-vl-7b-cam-motion-preview.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-vl-7b-cam-motion-preview-i1-GGUF/resolve/main/qwen2.5-vl-7b-cam-motion-preview.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-vl-7b-cam-motion-preview-i1-GGUF/resolve/main/qwen2.5-vl-7b-cam-motion-preview.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-vl-7b-cam-motion-preview-i1-GGUF/resolve/main/qwen2.5-vl-7b-cam-motion-preview.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-vl-7b-cam-motion-preview-i1-GGUF/resolve/main/qwen2.5-vl-7b-cam-motion-preview.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-vl-7b-cam-motion-preview-i1-GGUF/resolve/main/qwen2.5-vl-7b-cam-motion-preview.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-vl-7b-cam-motion-preview-i1-GGUF/resolve/main/qwen2.5-vl-7b-cam-motion-preview.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-vl-7b-cam-motion-preview-i1-GGUF/resolve/main/qwen2.5-vl-7b-cam-motion-preview.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-vl-7b-cam-motion-preview-i1-GGUF/resolve/main/qwen2.5-vl-7b-cam-motion-preview.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-vl-7b-cam-motion-preview-i1-GGUF/resolve/main/qwen2.5-vl-7b-cam-motion-preview.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-vl-7b-cam-motion-preview-i1-GGUF/resolve/main/qwen2.5-vl-7b-cam-motion-preview.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-vl-7b-cam-motion-preview-i1-GGUF/resolve/main/qwen2.5-vl-7b-cam-motion-preview.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-vl-7b-cam-motion-preview-i1-GGUF/resolve/main/qwen2.5-vl-7b-cam-motion-preview.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
dukij9/tran | dukij9 | "2025-05-05T20:30:51" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-05T20:30:51" | ---
license: apache-2.0
---
|
Maori999/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lazy_tangled_nightingale | Maori999 | "2025-05-05T20:28:26" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am lazy tangled nightingale",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-01T20:20:21" | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lazy_tangled_nightingale
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am lazy tangled nightingale
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lazy_tangled_nightingale
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Maori999/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lazy_tangled_nightingale", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
yesbreaddog/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-reptilian_majestic_bear | yesbreaddog | "2025-05-05T20:25:22" | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am reptilian majestic bear",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-28T13:01:28" | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-reptilian_majestic_bear
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am reptilian majestic bear
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-reptilian_majestic_bear
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="yesbreaddog/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-reptilian_majestic_bear", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mluger/vitFaceExpressionGeometricAugmentation | mluger | "2025-05-05T20:10:29" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | "2025-04-23T14:58:52" | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vitFaceExpressionGeometricAugmentation
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7052103650041794
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vitFaceExpressionGeometricAugmentation
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8230
- Accuracy: 0.7052
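A minimal inference sketch (assuming the checkpoint is loadable via the standard image-classification pipeline; `face.jpg` is a placeholder path):

```python
# Hedged sketch: classify a facial expression with the fine-tuned ViT.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="mluger/vitFaceExpressionGeometricAugmentation",
)
print(classifier("face.jpg"))  # "face.jpg" is a placeholder image path
```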
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (ADAMW_TORCH) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2942 | 1.0 | 898 | 1.0516 | 0.6174 |
| 0.9631 | 2.0 | 1796 | 0.9221 | 0.6640 |
| 0.8495 | 3.0 | 2694 | 0.8732 | 0.6807 |
| 0.7717 | 4.0 | 3592 | 0.8596 | 0.6874 |
| 0.7101 | 5.0 | 4490 | 0.8351 | 0.6969 |
| 0.6356 | 6.0 | 5388 | 0.8333 | 0.7021 |
| 0.5864 | 7.0 | 6286 | 0.8232 | 0.7038 |
| 0.5736 | 8.0 | 7184 | 0.8230 | 0.7052 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
asigalov61/Monster-Piano-Transformer | asigalov61 | "2025-05-05T20:02:20" | 0 | 2 | null | [
"monster",
"piano",
"transformer",
"music transformer",
"music",
"music ai",
"MIDI",
"en",
"dataset:asigalov61/Monster-Piano",
"license:apache-2.0",
"region:us"
] | null | "2024-12-30T12:17:12" | ---
license: apache-2.0
datasets:
- asigalov61/Monster-Piano
language:
- en
tags:
- monster
- piano
- transformer
- music transformer
- music
- music ai
- MIDI
---
# Monster Piano Transformer
## Ultra-fast and very well-fitted solo Piano music transformer

***
```
Monster Piano by QVQ 72B
In the heart of a grand piano black and blue,
A fuzzy monster with eyes of yellow hue,
Its fingers dance upon the ivory keys,
Weaving melodies that soothe and please.
Musical notes float like leaves on breeze,
Harmony fills the air with gentle ease,
Each key stroke a word in a song unsung,
A symphony of joy that sets the heart alight, free and light.
The monster plays with such delight,
Lost in the rhythm, lost in the light,
Its fur a blur as it moves with grace,
A pianist born from a whimsical place.
Monster Piano, a title it bears,
A fusion of art and melodic airs,
Where creativity and music blend,
In this magical concert that never ends.
Let the monster's music fill the air,
And wash away our every care,
For in its song, we find repose,
And in its rhythm, our spirits glow.
```
***
## Install
```sh
pip install monsterpianotransformer
```
#### (Optional) [FluidSynth](https://github.com/FluidSynth/fluidsynth/wiki/Download) for MIDI to Audio functionality
##### Ubuntu or Debian
```sh
sudo apt-get install fluidsynth
```
##### Windows (with [Chocolatey](https://github.com/chocolatey/choco))
```sh
choco install fluidsynth
```
***
## Gradio app
```sh
# pip package includes a demo Gradio app without audio output
# Please refer to monsterpianotransformer/gradio/app_full.py
# for a full version with fluidsynth audio output
monsterpianotransformer-gradio
```
***
## Available models
```python
# Import Monster Piano Transformer as mpt
import monsterpianotransformer as mpt
# Print a list of available models
mpt.load_model('models info')
```
***
## Quick-start use example
```python
# Import Monster Piano Transformer as mpt
import monsterpianotransformer as mpt
# Load desired Monster Piano Transformer model
# There are several to choose from...
model = mpt.load_model()
# Get sample seed MIDI path
sample_midi_path = mpt.get_sample_midi_files()[6][1]
# Load seed MIDI
input_tokens = mpt.midi_to_tokens(sample_midi_path)
# Generate seed MIDI continuation
output_tokens = mpt.generate(model, input_tokens, num_gen_tokens=600, return_prime=True)
# Save output batch # 0 to MIDI
mpt.tokens_to_midi(output_tokens[0])
```
***
## Main features use examples
### Long auto-continuation generation
```python
# Import Monster Piano Transformer as mpt
import monsterpianotransformer as mpt
# Load desired Monster Piano Transformer model
# There are several to choose from...
model = mpt.load_model()
# Get sample seed MIDI path
sample_midi_path = mpt.get_sample_midi_files()[6][1]
# Load seed MIDI
input_tokens = mpt.midi_to_tokens(sample_midi_path)
# Generate long seed MIDI auto-continuation
output_tokens = mpt.generate_long(model, input_tokens, return_prime=True)
# Save output batch 0 to MIDI
mpt.tokens_to_midi(output_tokens[0])
```
### Pitches inpainting
```python
# Import Monster Piano Transformer as mpt
import monsterpianotransformer as mpt
# Load desired Monster Piano Transformer model
# There are several to choose from...
model = mpt.load_model()
# Get sample seed MIDI path
sample_midi_path = mpt.get_sample_midi_files()[6][1]
# Load seed MIDI
input_tokens = mpt.midi_to_tokens(sample_midi_path)
# Inpaint pitches
output_tokens = mpt.inpaint_pitches(model, input_tokens)
# Save output to MIDI
mpt.tokens_to_midi(output_tokens)
```
### Simple velocities inpainting
```python
# Import Monster Piano Transformer as mpt
import monsterpianotransformer as mpt
# Load desired Monster Piano Transformer model
# There are several to choose from...
model = mpt.load_model(model_name='with velocity - 3 epochs')
# Get sample seed MIDI path
sample_midi_path = mpt.get_sample_midi_files()[6][1]
# Load seed MIDI
input_tokens = mpt.midi_to_tokens(sample_midi_path)
# Inpaint velocities
output_tokens = mpt.inpaint_velocities_simple(model, input_tokens)
# Save output to MIDI
mpt.tokens_to_midi(output_tokens)
```
### Seq2Seq velocities inpainting
```python
# Import Monster Piano Transformer as mpt
import monsterpianotransformer as mpt
# Load desired Monster Piano Transformer model
# There are several to choose from...
model = mpt.load_model(model_name='velocity inpainting - 2 epochs')
# Get sample seed MIDI path
sample_midi_path = mpt.get_sample_midi_files()[6][1]
# Load seed MIDI
input_tokens = mpt.midi_to_tokens(sample_midi_path)
# Inpaint velocities
output_tokens = mpt.inpaint_velocities_seq2seq(model, input_tokens, verbose=True)
# Save output to MIDI
mpt.tokens_to_midi(output_tokens)
```
### Timings inpainting
```python
# Import Monster Piano Transformer as mpt
import monsterpianotransformer as mpt
# Load desired Monster Piano Transformer model
# There are several to choose from...
model = mpt.load_model('timings inpainting - 2 epochs')
# Get sample seed MIDI path
sample_midi_path = mpt.get_sample_midi_files()[6][1]
# Load seed MIDI
input_tokens = mpt.midi_to_tokens(sample_midi_path)
# Inpaint timings
output_tokens = mpt.inpaint_timings(model, input_tokens)
# Save output to MIDI
mpt.tokens_to_midi(output_tokens)
```
### Bridge inpainting
```python
# Import Monster Piano Transformer as mpt
import monsterpianotransformer as mpt
# Load desired Monster Piano Transformer model
# There are several to choose from...
model = mpt.load_model('bridge inpainting - 2 epochs')
# Get sample seed MIDI path
sample_midi_path = mpt.get_sample_midi_files()[11][1]
# Load seed MIDI
input_tokens = mpt.midi_to_tokens(sample_midi_path)
# Inpaint bridge
output_tokens = mpt.inpaint_bridge(model, input_tokens)
# Save output to MIDI
mpt.tokens_to_midi(output_tokens)
```
### Single chord generation
```python
# Import Monster Piano Transformer as mpt
import monsterpianotransformer as mpt
# Load desired Monster Piano Transformer model
# There are several to choose from...
model = mpt.load_model()
# Generate single chord
output_tokens = mpt.generate_chord(model)
```
### Chords progressions
```python
# Import Monster Piano Transformer as mpt
import monsterpianotransformer as mpt
# Load desired Monster Piano Transformer model
# There are several to choose from...
model = mpt.load_model('chords progressions - 3 epochs')
# Prime chord(s) as a list of lists of semitones and/or pitches
prime_chords = [
[0],
[0, 2],
[0, 2, 4],
[60],
[60, 62]
]
# Convert chords to chords tokens
chords_tokens = mpt.chords_to_chords_tokens(prime_chords)
# Generate chord progression continuation
output_tokens = mpt.generate(model, chords_tokens, num_gen_tokens=32, return_prime=True)
# Convert output tokens batch # 0 back to the chords list
chords_list = mpt.chords_tokens_to_chords(output_tokens[0])
```
### Chords texturing
```python
# Import Monster Piano Transformer as mpt
import monsterpianotransformer as mpt
# Load desired Monster Piano Transformer model
# There are several to choose from...
model = mpt.load_model('chords texturing - 3 epochs')
# Get sample seed MIDI path
sample_midi_path = mpt.get_sample_midi_files()[6][1]
# Convert MIDI to chords list
chords_list = mpt.midi_to_chords(sample_midi_path)
# Texture chords
output_tokens = mpt.texture_chords(model, chords_list)
# Save output to MIDI
mpt.tokens_to_midi(output_tokens)
```
***
## Advanced use examples
### Chords progressions generation and texturing
#### From custom chords list
```python
# Import Monster Piano Transformer as mpt
import monsterpianotransformer as mpt
# Load desired Monster Piano Transformer model
# There are several to choose from...
cp_model = mpt.load_model('chords progressions - 3 epochs')
tex_model = mpt.load_model('chords texturing - 3 epochs')
# Prime chord(s) as a list of lists of semitones and/or pitches
prime_chords = [
[0],
[0, 2],
[0, 2, 4]
]
# Convert chords to chords tokens
chords_tokens = mpt.chords_to_chords_tokens(prime_chords)
# Generate chords progression continuation
cp_tokens = mpt.generate(cp_model, chords_tokens, num_gen_tokens=64, return_prime=True)
# Generate pitches for chords in generated chords progression continuation
output_tokens = mpt.generate_chords_pitches(tex_model, cp_tokens[0])
# Convert output tokens to MIDI
mpt.chords_pitches_to_midi(output_tokens)
```
#### From custom MIDI
```python
# Import Monster Piano Transformer as mpt
import monsterpianotransformer as mpt
# Load desired Monster Piano Transformer model
# There are several to choose from...
cp_model = mpt.load_model('chords progressions - 3 epochs')
tex_model = mpt.load_model('chords texturing - 3 epochs')
# Get sample seed MIDI path
sample_midi_path = mpt.get_sample_midi_files()[7][1]
# Convert seed MIDI to chords tokens
chords_tokens = mpt.midi_to_chords(sample_midi_path, return_only_chords=True)
# Generate chords progression continuation
cp_tokens = mpt.generate(cp_model, chords_tokens[:64], num_gen_tokens=64, return_prime=True)
# Generate pitches for chords in generated chords progression continuation
output_tokens = mpt.generate_chords_pitches(tex_model, cp_tokens[0])
# Convert output tokens to MIDI
mpt.chords_pitches_to_midi(output_tokens)
```
#### From custom MIDI with prime chords and prime chords pitches
```python
# Import Monster Piano Transformer as mpt
import monsterpianotransformer as mpt
# Load desired Monster Piano Transformer model
# There are several to choose from...
cp_model = mpt.load_model('chords progressions - 3 epochs')
tex_model = mpt.load_model('chords texturing - 3 epochs')
# Get sample seed MIDI path
sample_midi_path = mpt.get_sample_midi_files()[7][1]
# Convert seed MIDI to chords list
chords_list = mpt.midi_to_chords(sample_midi_path)
# Number of prime chords
num_prime_chords = 64
# Create prime chords tokens list
prime_chords_tokens = [c[0][0] for c in chords_list[:num_prime_chords]]
# Create prime chords pitches list
prime_chords_pitches = [c[0][1:] for c in chords_list[:num_prime_chords]]
# Generate chords progression continuation
cp_tokens = mpt.generate(cp_model, prime_chords_tokens, num_gen_tokens=128, return_prime=True)
# Generate pitches for chords in generated chords progression continuation
output_tokens = mpt.generate_chords_pitches(tex_model, cp_tokens[0], prime_chords_pitches)
# Convert output tokens to MIDI
mpt.chords_pitches_to_midi(output_tokens, chords_list)
```
#### From custom chords list with chords texturing and timings inpainting
```python
# Import Monster Piano Transformer as mpt
import monsterpianotransformer as mpt
# Load desired Monster Piano Transformer model
# There are several to choose from...
cp_model = mpt.load_model('chords progressions - 3 epochs')
tex_model = mpt.load_model('chords texturing - 3 epochs')
tim_model = mpt.load_model('timings inpainting - 2 epochs')
# Prime chord(s) as a list of lists of semitones and/or pitches
prime_chords = [
[0],
[0, 2],
[0, 2, 4]
]
# Convert chords to chords tokens
chords_tokens = mpt.chords_to_chords_tokens(prime_chords)
# Generate chords progression continuation
cp_tokens = mpt.generate(cp_model, chords_tokens, num_gen_tokens=64, return_prime=True)
# Generate pitches for chords in generated chords progression continuation
cptcs_tokens = mpt.generate_chords_pitches(tex_model, cp_tokens[0], return_as_tokens_seq=True)
# Inpaint timings
output_tokens = mpt.inpaint_timings(tim_model, cptcs_tokens)
# Save output to MIDI
mpt.tokens_to_midi(output_tokens)
```
***
## Manual input sequences
### Custom notes list to tokens, chords and pitches
```python
# You can manually create compatible input tokens sequence, chords list and pitches list
# from a simple notes list
# Import Monster Piano Transformer as mpt
import monsterpianotransformer as mpt
# Custom notes list should be in the following format:
# [delta start time (0-127), duration (1-127), MIDI pitch (1-127), velocity (1-127)]
sample_notes_list = [
[0, 70, 84, 84], [0, 70, 72, 72], [0, 70, 72, 115], [0, 70, 67, 67], [0, 70, 64, 64],
[0, 70, 60, 60], [0, 70, 55, 55], [0, 70, 52, 52], [0, 70, 48, 48], [0, 70, 36, 40],
[0, 70, 24, 120], [82, 11, 79, 79], [0, 11, 67, 67], [0, 11, 67, 122], [0, 11, 64, 64],
[0, 11, 52, 52], [0, 11, 28, 116], [11, 23, 84, 84], [0, 23, 72, 72], [0, 23, 72, 115],
[0, 23, 67, 67], [0, 23, 60, 60], [0, 23, 55, 55], [0, 23, 52, 52], [0, 23, 48, 48],
[0, 23, 24, 120], [24, 17, 79, 79], [0, 17, 67, 67], [0, 17, 67, 122], [0, 17, 64, 64],
[0, 17, 60, 60], [0, 17, 55, 55], [0, 17, 52, 52], [0, 17, 48, 48], [0, 17, 24, 120],
[17, 5, 81, 81], [0, 5, 69, 69], [0, 5, 69, 124], [0, 5, 65, 65], [0, 5, 53, 53], [0, 5, 29, 115],
[6, 23, 83, 83], [0, 23, 71, 71], [0, 23, 71, 126], [0, 23, 67, 67], [0, 23, 59, 59],
[0, 23, 55, 55], [0, 23, 50, 50], [0, 23, 47, 47], [0, 23, 43, 43], [0, 23, 31, 113]
]
# Use notes_list_to_tokens_chords_pitches function to convert the notes list
output = mpt.notes_list_to_tokens_chords_pitches(sample_notes_list)
input_tokens = output[0]
chords_tokens = output[1]
pitches_list = output[2]
chords_list = output[3]
```
***
## Dev and tests
### Loading
```python
# You can load and use one or several models at the time
# Default model (without velocity - 3 epochs)
default_model = mpt.load_model()
# More models...
cp_model = mpt.load_model('chords progressions - 3 epochs')
tex_model = mpt.load_model('chords texturing - 3 epochs')
tim_model = mpt.load_model('timings inpainting - 2 epochs')
```
### Parameters
```python
# Dev models parameters can be accessed like so
# Max sequence length
default_model.max_seq_len
# Max number of tokens
default_model.pad_value
```
### Generation
```python
# Use generate or generate long functions for dev or testing with all models
# Just make sure to prime each model with at least one token within its token range
default_output = mpt.generate(default_model, input_tokens=[0], num_gen_tokens=32)
tex_output = mpt.generate_long(tex_model, input_tokens=[0], num_gen_tokens=32)
```
### Project Los Angeles
### Tegridy Code 2025
|
kochujosy/llama-nitc-lora | kochujosy | "2025-05-05T19:51:35" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-05-05T11:22:05" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gradientrouting-spar/toy_goodharting_gemma-2-2b-it_emotion_naive_outcome_0_01_0_1_seed_2_MC | gradientrouting-spar | "2025-05-05T19:33:48" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-05-05T19:33:33" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
GeorgeBredis/FAST-SpatialPredictor-v1 | GeorgeBredis | "2025-05-05T19:13:09" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-05-05T15:42:54" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MrRobotoAI/103 | MrRobotoAI | "2025-05-05T19:10:31" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:MrRobotoAI/A1",
"base_model:merge:MrRobotoAI/A1",
"base_model:MrRobotoAI/A4",
"base_model:merge:MrRobotoAI/A4",
"base_model:MrRobotoAI/A5",
"base_model:merge:MrRobotoAI/A5",
"base_model:MrRobotoAI/A6",
"base_model:merge:MrRobotoAI/A6",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-05T19:06:39" | ---
base_model:
- MrRobotoAI/A6
- MrRobotoAI/A4
- MrRobotoAI/A1
- MrRobotoAI/A5
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [MrRobotoAI/A1](https://huggingface.co/MrRobotoAI/A1) as a base.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/A6](https://huggingface.co/MrRobotoAI/A6)
* [MrRobotoAI/A4](https://huggingface.co/MrRobotoAI/A4)
* [MrRobotoAI/A5](https://huggingface.co/MrRobotoAI/A5)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: MrRobotoAI/A4
parameters:
density: 0.3333
weight: 0.9
- model: MrRobotoAI/A5
parameters:
density: 0.3333
weight: 0.9
- model: MrRobotoAI/A6
parameters:
density: 0.3333
weight: 0.9
merge_method: ties
base_model: MrRobotoAI/A1
dtype: float16
```
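As a minimal usage sketch (an assumption, since the card ships no example: it treats the merged checkpoint as a standard LLaMA-architecture causal LM on the Hub):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the TIES-merged checkpoint; float16 matches the dtype in the merge config above.
model = AutoModelForCausalLM.from_pretrained("MrRobotoAI/103", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("MrRobotoAI/103")

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The merge itself can typically be reproduced by passing a configuration file like the one above, plus an output directory, to mergekit's `mergekit-yaml` entry point.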
|
loki2825/my-zoom-fb-validation-model | loki2825 | "2025-05-05T18:50:22" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-05-05T18:49:57" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
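In the absence of author-provided code, here is a minimal sketch that assumes a standard BERT sequence-classification checkpoint, as the repository tags (`bert`, `text-classification`) suggest; the input text is hypothetical and the label meanings are not documented:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

repo = "loki2825/my-zoom-fb-validation-model"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# Hypothetical input: the card does not document the expected text domain.
inputs = tokenizer("The session audio was clear and the pacing was good.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the top logit back to its (undocumented) label name.
print(model.config.id2label[logits.argmax(dim=-1).item()])
```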
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Dataset Card for Hugging Face Hub Model Cards
This dataset consists of model cards for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about each model, its performance, its intended uses, and more. This dataset is updated daily and includes publicly available models on the Hugging Face Hub.
This dataset is made available to support users who want to work with a large number of model cards from the Hub. We hope it will support research on model cards and their use, but the format of this dataset may not suit every use case. If there are other features that you would like to see included in this dataset, please open a new discussion.
Dataset Details
Uses
There are a number of potential uses for this dataset, including:
- text mining to find common themes in model cards
- analysis of the model card format/content
- topic modelling of model cards
- analysis of the model card metadata
- training language models on model cards
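As a starting point for any of these, here is a minimal loading sketch with the `datasets` library; the repository ID below is a placeholder (substitute this dataset's actual Hub ID), and the column names assume the schema shown in the dataset preview:

```python
from datasets import load_dataset

# Placeholder ID; replace with this dataset's actual Hub repository ID.
ds = load_dataset("<namespace>/hub-model-cards", split="train")

example = ds[0]
print(example["modelId"])      # which Hub repository the card belongs to
print(example["card"][:500])   # the raw README.md text of that model card
```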
Out-of-Scope Use
[More Information Needed]
Dataset Structure
This dataset has a single split.
Dataset Creation
Curation Rationale
The dataset was created to assist people working with model cards, and in particular to support research on model cards and their use. It is also possible to use the Hugging Face Hub API or client library to download model cards directly, which may be preferable if you have a very specific use case or need a different format.
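For instance, a single card can be fetched directly with the `huggingface_hub` client library:

```python
from huggingface_hub import ModelCard

# Download and parse one model card straight from the Hub.
card = ModelCard.load("gpt2")
print(card.data)        # parsed YAML metadata block
print(card.text[:300])  # the Markdown body of the card
```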
Source Data
The source data is the `README.md` files for models hosted on the Hugging Face Hub. We do not include any other supplementary files that may be present in the model card directory.
Data Collection and Processing
The data is downloaded daily using a cron job.
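Purely as an illustration of what such a collection step could look like (this is not the project's actual pipeline), cards can be pulled in bulk via `huggingface_hub`:

```python
from huggingface_hub import HfApi, hf_hub_download

api = HfApi()
# Illustrative only: fetch README.md (the model card) for a few models.
for info in api.list_models(limit=5):
    try:
        path = hf_hub_download(repo_id=info.id, filename="README.md")
        print(f"{info.id} -> {path}")
    except Exception:
        # Not every repository ships a model card.
        continue
```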
Who are the source data producers?
The source data producers are the creators of the model cards on the Hugging Face Hub. This includes a broad variety of people from the community ranging from large companies to individual researchers. We do not gather any information about who created the model card in this repository although this information can be gathered from the Hugging Face Hub API.
Annotations [optional]
There are no additional annotations in this dataset beyond the model card content.
Annotation process
N/A
Who are the annotators?
N/A
Personal and Sensitive Information
We make no effort to anonymize the data. Whilst we don't expect the majority of model cards to contain personal or sensitive information, it is possible that some model cards may contain this information. Model cards may also link to websites or email addresses.
Bias, Risks, and Limitations
Model cards are created by the community, and we have no control over their content. We do not review the cards and make no claims about the accuracy of the information they contain. Some model cards themselves discuss bias, sometimes by providing examples of bias in either the training data or the model's responses. As a result, this dataset may contain examples of bias.
Whilst we do not directly download any images linked to in the model cards, some model cards may include images. Some of these images may not be suitable for all audiences.
Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
Citation
No formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.
Dataset Card Authors
Dataset Card Contact