# Roberta-Webis-CPC

This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on the Webis-CPC-11 dataset. It achieves the following results on the evaluation set:
- Loss: 1.5703
- Accuracy: 0.8432
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
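The schedule above (linear, 500 warmup steps) can be sketched in plain Python. This is a minimal illustration, not the training code: the helper name is hypothetical, and the 5910-step total assumes 197 optimizer steps per epoch over 30 epochs, matching the step counts in the results table below.

```python
# Hypothetical sketch of a linear warmup + linear decay schedule with the
# hyperparameters from this card. Assumes 197 steps/epoch * 30 epochs.
BASE_LR = 3e-5
WARMUP_STEPS = 500
TOTAL_STEPS = 5910

def linear_schedule_lr(step: int) -> float:
    """Learning rate at a given optimizer step."""
    if step < WARMUP_STEPS:
        # Ramp linearly from 0 up to BASE_LR over the warmup phase.
        return BASE_LR * step / WARMUP_STEPS
    # Then decay linearly from BASE_LR down to 0 at the final step.
    remaining = TOTAL_STEPS - step
    return BASE_LR * max(0.0, remaining / (TOTAL_STEPS - WARMUP_STEPS))

print(linear_schedule_lr(250))   # halfway through warmup: 1.5e-05
print(linear_schedule_lr(500))   # peak learning rate: 3e-05
print(linear_schedule_lr(5910))  # end of training: 0.0
```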
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 197 | 0.4964 | 0.6875 |
| No log | 2.0 | 394 | 0.4895 | 0.8326 |
| 0.4856 | 3.0 | 591 | 0.3711 | 0.8422 |
| 0.4856 | 4.0 | 788 | 0.3289 | 0.8464 |
| 0.4856 | 5.0 | 985 | 0.4112 | 0.8559 |
| 0.2928 | 6.0 | 1182 | 0.5872 | 0.8453 |
| 0.2928 | 7.0 | 1379 | 0.5353 | 0.8284 |
| 0.1493 | 8.0 | 1576 | 0.6069 | 0.8273 |
| 0.1493 | 9.0 | 1773 | 0.9225 | 0.8464 |
| 0.1493 | 10.0 | 1970 | 1.3133 | 0.8422 |
| 0.0641 | 11.0 | 2167 | 1.2524 | 0.8369 |
| 0.0641 | 12.0 | 2364 | 1.1893 | 0.8347 |
| 0.0394 | 13.0 | 2561 | 1.3631 | 0.8358 |
| 0.0394 | 14.0 | 2758 | 1.1922 | 0.8273 |
| 0.0394 | 15.0 | 2955 | 1.2648 | 0.8316 |
| 0.0205 | 16.0 | 3152 | 1.0889 | 0.8422 |
| 0.0205 | 17.0 | 3349 | 1.2235 | 0.8422 |
| 0.0094 | 18.0 | 3546 | 1.4707 | 0.8358 |
| 0.0094 | 19.0 | 3743 | 1.3305 | 0.8475 |
| 0.0094 | 20.0 | 3940 | 1.4021 | 0.8263 |
| 0.0151 | 21.0 | 4137 | 1.2689 | 0.8358 |
| 0.0151 | 22.0 | 4334 | 1.4997 | 0.8273 |
| 0.0061 | 23.0 | 4531 | 1.4872 | 0.8358 |
| 0.0061 | 24.0 | 4728 | 1.5773 | 0.8347 |
| 0.0061 | 25.0 | 4925 | 1.6127 | 0.8358 |
| 0.0037 | 26.0 | 5122 | 1.5534 | 0.8326 |
| 0.0037 | 27.0 | 5319 | 1.5532 | 0.8453 |
| 0.0036 | 28.0 | 5516 | 1.4986 | 0.8432 |
| 0.0036 | 29.0 | 5713 | 1.5698 | 0.8422 |
| 0.0036 | 30.0 | 5910 | 1.5703 | 0.8432 |
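The table shows a familiar overfitting pattern: training loss approaches zero while validation loss climbs after the first few epochs. A quick scan over the first ten epochs (the tuples are copied from the table; every later epoch has higher validation loss and no better accuracy) makes the strongest checkpoints explicit:

```python
# (epoch, validation_loss, accuracy) for epochs 1-10, taken from the table above.
results = [
    (1, 0.4964, 0.6875), (2, 0.4895, 0.8326), (3, 0.3711, 0.8422),
    (4, 0.3289, 0.8464), (5, 0.4112, 0.8559), (6, 0.5872, 0.8453),
    (7, 0.5353, 0.8284), (8, 0.6069, 0.8273), (9, 0.9225, 0.8464),
    (10, 1.3133, 0.8422),
]

# Best checkpoint by validation loss, and best by accuracy.
best_loss = min(results, key=lambda r: r[1])
best_acc = max(results, key=lambda r: r[2])
print(best_loss)  # (4, 0.3289, 0.8464)
print(best_acc)   # (5, 0.4112, 0.8559)
```

Validation loss bottoms out at epoch 4 and accuracy peaks at epoch 5, so an early-stopped checkpoint would likely generalize at least as well as the final one reported above.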
### Framework versions
- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Tokenizers 0.21.0