param-bharat committed
Commit 196dcd4 · verified · Parent: 4ff40f5

Model save

Files changed (1): README.md (+9 -9)
README.md CHANGED
@@ -20,11 +20,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [huggingface/CodeBERTa-small-v1](https://huggingface.co/huggingface/CodeBERTa-small-v1) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0719
-- F1: 0.9900
-- Accuracy: 0.99
-- Precision: 0.9902
-- Recall: 0.99
+- Loss: 0.0041
+- F1: 1.0
+- Accuracy: 1.0
+- Precision: 1.0
+- Recall: 1.0
 
 ## Model description
 
@@ -44,8 +44,8 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0003
-- train_batch_size: 256
-- eval_batch_size: 256
+- train_batch_size: 320
+- eval_batch_size: 320
 - seed: 2024
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
@@ -57,8 +57,8 @@ The following hyperparameters were used during training:
 | Training Loss | Epoch | Step | Validation Loss | F1     | Accuracy | Precision | Recall |
 |:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|:---------:|:------:|
 | No log        | 0     | 0    | 0.6985          | 0.3223 | 0.49     | 0.2401    | 0.49   |
-| 0.0           | 12.5  | 50   | 0.0716          | 0.9900 | 0.99     | 0.9902    | 0.99   |
-| 0.0           | 25.0  | 100  | 0.0719          | 0.9900 | 0.99     | 0.9902    | 0.99   |
+| 0.0001        | 12.5  | 50   | 0.0044          | 1.0    | 1.0      | 1.0       | 1.0    |
+| 0.0001        | 25.0  | 100  | 0.0041          | 1.0    | 1.0      | 1.0       | 1.0    |
 
 
 ### Framework versions
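
For reference, a minimal sketch of how the updated hyperparameters could be expressed with `transformers.TrainingArguments`. The actual training script is not part of this commit; `output_dir` is a placeholder, and the card's `train_batch_size` is mapped to a per-device batch size on the assumption of single-device training:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the configuration listed in the diff above;
# only the values shown in the model card are grounded, everything else is a default.
training_args = TrainingArguments(
    output_dir="codeberta-finetuned",  # placeholder, not from the commit
    learning_rate=3e-4,                # learning_rate: 0.0003
    per_device_train_batch_size=320,   # train_batch_size: 320
    per_device_eval_batch_size=320,    # eval_batch_size: 320
    seed=2024,                         # seed: 2024
    optim="adamw_torch",               # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,                    # betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                 # epsilon=1e-08
    lr_scheduler_type="cosine",        # lr_scheduler_type: cosine
)
```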
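The evaluation metrics (F1, accuracy, precision, recall) suggest a classification head on top of CodeBERTa. A hedged usage sketch under that assumption follows; the repository id is a placeholder, since the commit does not name where the checkpoint is published:

```python
from transformers import pipeline

# Placeholder repository id: the commit does not state the checkpoint's location.
clf = pipeline(
    "text-classification",
    model="param-bharat/<repo-name>",
)

# CodeBERTa is pretrained on source code, so a code snippet is a
# plausible input for the fine-tuned classifier.
print(clf("def add(a, b):\n    return a + b"))
```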