# Llama3-OpenBioLLM-8B-PsyCourse-fold1
This model is a fine-tuned version of aaditya/Llama3-OpenBioLLM-8B on the course-train-fold1 dataset. It achieves the following results on the evaluation set:
- Loss: 0.0342
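Since the framework versions below list PEFT, this checkpoint is presumably a LoRA-style adapter on top of the base model rather than full model weights. The following is a minimal inference sketch under that assumption; the adapter id `chchen/Llama3-OpenBioLLM-8B-PsyCourse-fold1` is taken from this repository, and the example prompt is illustrative only.

```python
# Minimal inference sketch (untested): load the base model, then attach the
# fine-tuned PEFT adapter from this repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "aaditya/Llama3-OpenBioLLM-8B"
adapter_id = "chchen/Llama3-OpenBioLLM-8B-PsyCourse-fold1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the adapter
model.eval()

prompt = "Explain the difference between classical and operant conditioning."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```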
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
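The card does not state which training stack produced these settings, but they map directly onto `transformers.TrainingArguments`. A hedged, illustrative sketch (the `output_dir` is an assumption):

```python
# Illustrative mapping of the listed hyperparameters onto TrainingArguments;
# this is not the exact training script used for this checkpoint.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Llama3-OpenBioLLM-8B-PsyCourse-fold1",  # assumed output path
    learning_rate=1e-4,              # learning_rate: 0.0001
    per_device_train_batch_size=1,   # train_batch_size: 1
    per_device_eval_batch_size=1,    # eval_batch_size: 1
    gradient_accumulation_steps=16,  # total train batch size: 1 x 16 = 16
    seed=42,
    optim="adamw_torch",             # AdamW, betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=5.0,
)
```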
### Training results
Training Loss | Epoch | Step | Validation Loss |
---|---|---|---|
0.4401 | 0.0770 | 50 | 0.3166 |
0.1222 | 0.1539 | 100 | 0.0882 |
0.0887 | 0.2309 | 150 | 0.0669 |
0.0749 | 0.3078 | 200 | 0.0571 |
0.0607 | 0.3848 | 250 | 0.0536 |
0.0595 | 0.4617 | 300 | 0.0538 |
0.0569 | 0.5387 | 350 | 0.0479 |
0.0692 | 0.6156 | 400 | 0.0517 |
0.0384 | 0.6926 | 450 | 0.0476 |
0.0339 | 0.7695 | 500 | 0.0445 |
0.0515 | 0.8465 | 550 | 0.0412 |
0.0423 | 0.9234 | 600 | 0.0403 |
0.031 | 1.0004 | 650 | 0.0412 |
0.0415 | 1.0773 | 700 | 0.0405 |
0.0308 | 1.1543 | 750 | 0.0384 |
0.0306 | 1.2312 | 800 | 0.0376 |
0.0376 | 1.3082 | 850 | 0.0391 |
0.024 | 1.3851 | 900 | 0.0384 |
0.0406 | 1.4621 | 950 | 0.0392 |
0.0398 | 1.5391 | 1000 | 0.0376 |
0.0325 | 1.6160 | 1050 | 0.0356 |
0.0396 | 1.6930 | 1100 | 0.0394 |
0.0266 | 1.7699 | 1150 | 0.0388 |
0.023 | 1.8469 | 1200 | 0.0392 |
0.0315 | 1.9238 | 1250 | 0.0389 |
0.0238 | 2.0008 | 1300 | 0.0342 |
0.0189 | 2.0777 | 1350 | 0.0361 |
0.0293 | 2.1547 | 1400 | 0.0367 |
0.0128 | 2.2316 | 1450 | 0.0420 |
0.0195 | 2.3086 | 1500 | 0.0385 |
0.0174 | 2.3855 | 1550 | 0.0383 |
0.0143 | 2.4625 | 1600 | 0.0415 |
0.0249 | 2.5394 | 1650 | 0.0404 |
0.0195 | 2.6164 | 1700 | 0.0383 |
0.0266 | 2.6933 | 1750 | 0.0376 |
0.0216 | 2.7703 | 1800 | 0.0365 |
0.0236 | 2.8472 | 1850 | 0.0366 |
0.0198 | 2.9242 | 1900 | 0.0369 |
0.0311 | 3.0012 | 1950 | 0.0370 |
0.0088 | 3.0781 | 2000 | 0.0424 |
0.0126 | 3.1551 | 2050 | 0.0467 |
0.0085 | 3.2320 | 2100 | 0.0463 |
0.0083 | 3.3090 | 2150 | 0.0453 |
0.0164 | 3.3859 | 2200 | 0.0470 |
0.0115 | 3.4629 | 2250 | 0.0465 |
0.0134 | 3.5398 | 2300 | 0.0469 |
0.0052 | 3.6168 | 2350 | 0.0470 |
0.0104 | 3.6937 | 2400 | 0.0448 |
0.0075 | 3.7707 | 2450 | 0.0459 |
0.0068 | 3.8476 | 2500 | 0.0485 |
0.0089 | 3.9246 | 2550 | 0.0494 |
0.0091 | 4.0015 | 2600 | 0.0476 |
0.0021 | 4.0785 | 2650 | 0.0498 |
0.0061 | 4.1554 | 2700 | 0.0529 |
0.0011 | 4.2324 | 2750 | 0.0541 |
0.0025 | 4.3093 | 2800 | 0.0549 |
0.0029 | 4.3863 | 2850 | 0.0560 |
0.0027 | 4.4633 | 2900 | 0.0570 |
0.0017 | 4.5402 | 2950 | 0.0572 |
0.0019 | 4.6172 | 3000 | 0.0574 |
0.005 | 4.6941 | 3050 | 0.0575 |
0.0033 | 4.7711 | 3100 | 0.0573 |
0.005 | 4.8480 | 3150 | 0.0576 |
0.0019 | 4.9250 | 3200 | 0.0575 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
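Before loading the adapter, it may help to check that your environment matches these versions; a small illustrative check:

```python
# Compare installed versions against the ones this card was built with.
import datasets, peft, tokenizers, torch, transformers

for name, mod, expected in [
    ("PEFT", peft, "0.12.0"),
    ("Transformers", transformers, "4.46.1"),
    ("PyTorch", torch, "2.5.1+cu124"),
    ("Datasets", datasets, "3.1.0"),
    ("Tokenizers", tokenizers, "0.20.3"),
]:
    print(f"{name}: installed {mod.__version__}, card built with {expected}")
```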