mcdc-test-mistral

This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5090
  • Perplexity: 1.6636
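
The reported perplexity is simply the exponential of the evaluation cross-entropy loss. A minimal check in Python:

```python
import math

# Perplexity = exp(cross-entropy loss), as reported above.
eval_loss = 0.5090
perplexity = math.exp(eval_loss)
print(f"{perplexity:.4f}")  # 1.6636
```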

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 3
  • mixed_precision_training: Native AMP
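
For reference, a minimal sketch of how these hyperparameters map onto a Hugging Face `TrainingArguments` configuration. This is a reconstruction, not the original training script; `output_dir` is a placeholder, and `fp16=True` is assumed as the "Native AMP" setting:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the reported hyperparameters.
training_args = TrainingArguments(
    output_dir="mcdc-test-mistral",  # placeholder, not the original path
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",             # AdamW; betas=(0.9, 0.999), eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,                       # native AMP mixed-precision training (assumption)
)
```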

Training results

| Training Loss | Epoch  | Step | Validation Loss | Perplexity |
|---------------|--------|------|-----------------|------------|
| 1.249         | 0.2632 | 5    | 1.1492          | 3.1557     |
| 0.8949        | 0.5263 | 10   | 0.9671          | 2.6304     |
| 0.9403        | 0.7895 | 15   | 0.8533          | 2.3474     |
| 0.8411        | 1.0526 | 20   | 0.7721          | 2.1643     |
| 0.5975        | 1.3158 | 25   | 0.7005          | 2.0148     |
| 0.7067        | 1.5789 | 30   | 0.6430          | 1.9021     |
| 0.564         | 1.8421 | 35   | 0.5939          | 1.8111     |
| 0.5615        | 2.1053 | 40   | 0.5566          | 1.7448     |
| 0.5646        | 2.3684 | 45   | 0.5321          | 1.7025     |
| 0.4684        | 2.6316 | 50   | 0.5167          | 1.6764     |
| 0.5739        | 2.8947 | 55   | 0.5090          | 1.6636     |

Framework versions

  • PEFT 0.14.0
  • Transformers 4.47.0
  • Pytorch 2.5.1+cu121
  • Datasets 3.3.1
  • Tokenizers 0.21.0
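
Since this repository is a PEFT adapter on top of mistralai/Mistral-7B-Instruct-v0.2, it can be loaded with the usual `peft` pattern. A minimal sketch; the dtype and device placement choices are assumptions for inference, not part of the original card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model the adapter was trained against.
base_model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",
    torch_dtype=torch.float16,  # assumption: half precision for inference
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

# Attach the PEFT adapter weights from this repository.
model = PeftModel.from_pretrained(base_model, "YildizTekno/mcdc-test-mistral")
model.eval()
```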