---
library_name: peft
license: other
base_model: deepseek-ai/deepseek-coder-1.3b-base
tags:
- generated_from_trainer
model-index:
- name: lemexp-task2-ordered_template_small-deepseek-coder-1.3b-base-ddp-8lr
  results: []
---

# lemexp-task2-ordered_template_small-deepseek-coder-1.3b-base-ddp-8lr

This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1906
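
If this loss is the mean token-level cross-entropy (an assumption; the Trainer log does not state the loss definition), it corresponds to a token perplexity of roughly

$$\mathrm{perplexity} = e^{0.1906} \approx 1.21$$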

## Model description

This repository contains a PEFT adapter trained on top of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base). The task and data behind the `lemexp-task2-ordered_template_small` name have not been documented here.

## Intended uses & limitations

More information needed
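
As a minimal loading sketch, assuming a standard PEFT adapter layout (the `adapter_id` below is inferred from the model name and may not match the actual Hub repo id; substitute the real id or a local path):

```python
# Minimal inference sketch: load the PEFT adapter on top of the base model.
# NOTE: `adapter_id` is inferred from the model name and may not match the
# actual Hub repo id; replace it (or point it at a local adapter directory).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "deepseek-ai/deepseek-coder-1.3b-base"
adapter_id = "lemexp-task2-ordered_template_small-deepseek-coder-1.3b-base-ddp-8lr"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

prompt = "def quicksort(xs):"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```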

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
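
These settings map onto a `TrainingArguments` sketch roughly as follows (an illustrative reconstruction, not the exact training script; `output_dir` and anything not listed above are assumptions):

```python
# Illustrative reconstruction of the training configuration from the list
# above; `output_dir` and anything not listed (logging, saving, eval cadence)
# are assumptions, not recovered settings.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lemexp-task2-ordered_template_small-deepseek-coder-1.3b-base-ddp-8lr",  # assumed
    learning_rate=8e-4,
    per_device_train_batch_size=2,  # 2 per device x 8 GPUs = 16 effective
    per_device_eval_batch_size=2,
    seed=42,
    num_train_epochs=12,
    lr_scheduler_type="linear",
    optim="adamw_torch",  # AdamW defaults: betas=(0.9, 0.999), eps=1e-8
    fp16=True,            # Native AMP mixed precision
)
```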

### Training results

| Training Loss | Epoch   | Step  | Validation Loss |
|:-------------:|:-------:|:-----:|:---------------:|
| 0.4543        | 0.2001  | 629   | 0.3617          |
| 0.3538        | 0.4001  | 1258  | 0.3171          |
| 0.3308        | 0.6002  | 1887  | 0.2967          |
| 0.3027        | 0.8003  | 2516  | 0.2932          |
| 0.2933        | 1.0003  | 3145  | 0.2832          |
| 0.2797        | 1.2004  | 3774  | 0.2745          |
| 0.2729        | 1.4004  | 4403  | 0.2654          |
| 0.2664        | 1.6005  | 5032  | 0.2591          |
| 0.2643        | 1.8006  | 5661  | 0.2570          |
| 0.2577        | 2.0006  | 6290  | 0.2528          |
| 0.2538        | 2.2007  | 6919  | 0.2568          |
| 0.2451        | 2.4008  | 7548  | 0.2461          |
| 0.2426        | 2.6008  | 8177  | 0.2475          |
| 0.244         | 2.8009  | 8806  | 0.2420          |
| 0.2385        | 3.0010  | 9435  | 0.2358          |
| 0.224         | 3.2010  | 10064 | 0.2346          |
| 0.226         | 3.4011  | 10693 | 0.2340          |
| 0.2241        | 3.6011  | 11322 | 0.2293          |
| 0.2234        | 3.8012  | 11951 | 0.2250          |
| 0.2218        | 4.0013  | 12580 | 0.2259          |
| 0.209         | 4.2013  | 13209 | 0.2240          |
| 0.2077        | 4.4014  | 13838 | 0.2192          |
| 0.2119        | 4.6015  | 14467 | 0.2184          |
| 0.2057        | 4.8015  | 15096 | 0.2173          |
| 0.206         | 5.0016  | 15725 | 0.2159          |
| 0.1977        | 5.2017  | 16354 | 0.2158          |
| 0.1936        | 5.4017  | 16983 | 0.2116          |
| 0.1933        | 5.6018  | 17612 | 0.2112          |
| 0.1916        | 5.8018  | 18241 | 0.2106          |
| 0.1926        | 6.0019  | 18870 | 0.2041          |
| 0.1862        | 6.2020  | 19499 | 0.2062          |
| 0.1773        | 6.4020  | 20128 | 0.2042          |
| 0.1773        | 6.6021  | 20757 | 0.2010          |
| 0.1759        | 6.8022  | 21386 | 0.1994          |
| 0.1795        | 7.0022  | 22015 | 0.1963          |
| 0.1615        | 7.2023  | 22644 | 0.2003          |
| 0.1635        | 7.4024  | 23273 | 0.1972          |
| 0.1636        | 7.6024  | 23902 | 0.1962          |
| 0.1643        | 7.8025  | 24531 | 0.1924          |
| 0.1614        | 8.0025  | 25160 | 0.1965          |
| 0.1499        | 8.2026  | 25789 | 0.1942          |
| 0.1491        | 8.4027  | 26418 | 0.1915          |
| 0.1502        | 8.6027  | 27047 | 0.1885          |
| 0.1475        | 8.8028  | 27676 | 0.1883          |
| 0.1495        | 9.0029  | 28305 | 0.1883          |
| 0.1412        | 9.2029  | 28934 | 0.1913          |
| 0.1337        | 9.4030  | 29563 | 0.1853          |
| 0.1343        | 9.6031  | 30192 | 0.1859          |
| 0.1354        | 9.8031  | 30821 | 0.1853          |
| 0.1338        | 10.0032 | 31450 | 0.1847          |
| 0.1202        | 10.2032 | 32079 | 0.1888          |
| 0.1211        | 10.4033 | 32708 | 0.1888          |
| 0.1199        | 10.6034 | 33337 | 0.1888          |
| 0.1209        | 10.8034 | 33966 | 0.1849          |
| 0.1193        | 11.0035 | 34595 | 0.1868          |
| 0.1113        | 11.2036 | 35224 | 0.1916          |
| 0.1083        | 11.4036 | 35853 | 0.1911          |
| 0.1082        | 11.6037 | 36482 | 0.1921          |
| 0.1071        | 11.8038 | 37111 | 0.1906          |


### Framework versions

- PEFT 0.14.0
- Transformers 4.47.0
- PyTorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0