Built with Axolotl

The axolotl config used for this run is shown below.

axolotl version: 0.8.0.dev0

base_model: mistralai/Mistral-7B-Instruct-v0.3
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: true
strict: false

datasets:
  - path: charliemarshalldev/llama3.1_recipes
    type: alpaca
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./outputs/

adapter: qlora
lora_model_dir:

sequence_len: 2048
sample_packing: true
pad_to_sequence_len: true

lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project: "mistral-finetune"
wandb_log_model: "checkpoint"

gradient_accumulation_steps: 1
micro_batch_size: 12
num_epochs: 5
max_steps: 200
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 5e-5

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false # bf16 and fp16 cannot both be enabled; bf16 assumed for this run
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: 

warmup_steps: 10
evals_per_epoch: 4
saves_per_epoch: 1
debug:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
    pad_token: "<eos>"
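
For readers who want to see what these QLoRA settings correspond to outside of Axolotl, the snippet below is a minimal sketch in plain transformers + peft, assuming the bitsandbytes 4-bit path. It is not the code Axolotl runs internally, and the mapping of bf16 to the 4-bit compute dtype is an assumption.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "mistralai/Mistral-7B-Instruct-v0.3"

# load_in_4bit: true -> 4-bit quantized base model; compute dtype assumed bf16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# adapter: qlora with lora_r=32, lora_alpha=16, lora_dropout=0.05,
# lora_target_linear: true -> apply LoRA to all linear layers
lora_config = LoraConfig(
    r=32,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()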

Training was logged to Weights & Biases under the "mistral-finetune" project.

Mistral-7B-Recipes-QLoRA

This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.3 on the charliemarshalldev/llama3.1_recipes dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6176
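
A minimal inference sketch is shown below. It assumes the adapter is loaded from the charliemarshalldev/Mistral-7B-Recipes-QLoRA repository via peft's AutoPeftModelForCausalLM, that the tokenizer is taken from the base model, and that prompts follow the Alpaca template (the dataset is configured with type: alpaca); the example instruction is hypothetical.

import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "charliemarshalldev/Mistral-7B-Recipes-QLoRA"
base_id = "mistralai/Mistral-7B-Instruct-v0.3"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Alpaca-style prompt, matching the training data format (an assumption here)
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nGive me a recipe that uses chicken, rice, and spinach.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))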

Model description

This is a QLoRA adapter (4-bit base, LoRA rank 32, alpha 16, dropout 0.05, applied to all linear layers) for mistralai/Mistral-7B-Instruct-v0.3, trained with Axolotl on the Alpaca-formatted charliemarshalldev/llama3.1_recipes dataset.

Intended uses & limitations

The adapter is intended for instruction-following recipe generation in the style of its training data. It inherits the capabilities and limitations of the base Mistral-7B-Instruct-v0.3 model and has only been evaluated via validation loss on a held-out split of the training dataset, so outputs should be reviewed before use.

Training and evaluation data

Training used the charliemarshalldev/llama3.1_recipes dataset in Alpaca format; 5% of the examples were held out as the evaluation set (val_set_size: 0.05 in the config above). A loading sketch follows.
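
The snippet below shows one way to load the dataset and rebuild an Alpaca-style prompt. The instruction/input/output column names are the standard Alpaca fields and are assumed here, and the train_test_split call is only an approximation of the split Axolotl produced internally.

from datasets import load_dataset

ds = load_dataset("charliemarshalldev/llama3.1_recipes", split="train")
# Hold out 5% for evaluation, mirroring val_set_size: 0.05 (not necessarily the same split)
ds = ds.train_test_split(test_size=0.05, seed=42)

def to_alpaca_prompt(example):
    # Standard Alpaca template; column names are an assumption about this dataset
    header = ("Below is an instruction that describes a task. "
              "Write a response that appropriately completes the request.\n\n")
    if example.get("input"):
        return (header
                + f"### Instruction:\n{example['instruction']}\n\n"
                + f"### Input:\n{example['input']}\n\n"
                + f"### Response:\n{example['output']}")
    return (header
            + f"### Instruction:\n{example['instruction']}\n\n"
            + f"### Response:\n{example['output']}")

print(to_alpaca_prompt(ds["train"][0]))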

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a rough TrainingArguments equivalent is sketched after this list):

  • learning_rate: 5e-05
  • train_batch_size: 12
  • eval_batch_size: 12
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 2
  • total_train_batch_size: 24
  • total_eval_batch_size: 24
  • optimizer: paged AdamW 32-bit (paged_adamw_32bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 10
  • training_steps: 200
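
The total batch sizes reported above follow from the axolotl config: micro_batch_size (12) × gradient_accumulation_steps (1) × num_devices (2) = 24. As a rough illustration, the snippet below sketches an equivalent transformers TrainingArguments; it is an approximation under the config above, not the exact arguments Axolotl constructs.

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./outputs/",
    per_device_train_batch_size=12,   # micro_batch_size
    per_device_eval_batch_size=12,
    gradient_accumulation_steps=1,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    warmup_steps=10,
    max_steps=200,
    optim="paged_adamw_32bit",
    weight_decay=0.0,
    bf16=True,
    gradient_checkpointing=True,
    logging_steps=1,
    seed=42,
)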

Training results

Training Loss   Epoch    Step   Validation Loss
1.7656          0.0042      1   1.6935
1.3062          0.0417     10   1.1579
0.8457          0.0833     20   0.7753
0.7208          0.1250     30   0.7002
0.7600          0.1667     40   0.6711
0.7155          0.2083     50   0.6548
0.6473          0.2500     60   0.6454
0.6941          0.2917     70   0.6388
0.5967          0.3333     80   0.6341
0.7105          0.3750     90   0.6299
0.6686          0.4167    100   0.6271
0.6533          0.4583    110   0.6247
0.6747          0.5000    120   0.6226
0.7037          0.5417    130   0.6211
0.6796          0.5833    140   0.6198
0.6222          0.6250    150   0.6190
0.6594          0.6667    160   0.6183
0.6394          0.7083    170   0.6179
0.6463          0.7500    180   0.6177
0.6557          0.7917    190   0.6176
0.6420          0.8333    200   0.6176

Framework versions

  • PEFT 0.14.0
  • Transformers 4.49.0
  • Pytorch 2.5.1+cu124
  • Datasets 3.2.0
  • Tokenizers 0.21.0