BERT4code Collection

This collection features 3 BERT- and RoBERTa-based models fine-tuned for multi-label code classification, designed to accurately tag and categorize code.
This model is a fine-tuned version of microsoft/codebert-base on the xCodeEval dataset, specifically on its multi-tag classification task. It achieves the results below on the evaluation set.

Training results per epoch:
| Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | ROC AUC | Accuracy | Hamming Loss |
|---|---|---|---|---|---|---|---|---|
| No log | 1.0 | 287 | 0.3148 | 0.1532 | 0.4274 | 0.8593 | 0.2510 | 0.1314 |
| 0.3402 | 2.0 | 574 | 0.2830 | 0.3484 | 0.5897 | 0.8873 | 0.3765 | 0.1132 |
| 0.3402 | 3.0 | 861 | 0.2756 | 0.3907 | 0.6159 | 0.8944 | 0.4118 | 0.1088 |
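The table's multi-label metrics can be unpacked as follows: Hamming loss is the fraction of individual tag decisions that are wrong, and micro F1 pools true/false positives and false negatives over all tags before computing F1. A minimal sketch of both, using small made-up label matrices (the values below are invented purely for illustration, not taken from xCodeEval):

```python
# Toy binary label matrices: rows are samples, columns are tags.
# y_true and y_pred are hypothetical, chosen only to illustrate the formulas.
y_true = [[1, 0, 1], [0, 1, 0]]
y_pred = [[1, 1, 1], [0, 0, 0]]

def hamming_loss(yt, yp):
    # Fraction of all (sample, tag) decisions that disagree with the truth.
    total = sum(len(row) for row in yt)
    wrong = sum(t != p for rt, rp in zip(yt, yp) for t, p in zip(rt, rp))
    return wrong / total

def micro_f1(yt, yp):
    # Pool counts over every (sample, tag) cell, then compute a single F1.
    cells = [(t, p) for rt, rp in zip(yt, yp) for t, p in zip(rt, rp)]
    tp = sum(1 for t, p in cells if t == 1 and p == 1)
    fp = sum(1 for t, p in cells if t == 0 and p == 1)
    fn = sum(1 for t, p in cells if t == 1 and p == 0)
    return 2 * tp / (2 * tp + fp + fn)

print(hamming_loss(y_true, y_pred))  # 2 wrong decisions out of 6 -> 0.333...
print(micro_f1(y_true, y_pred))      # tp=2, fp=1, fn=1 -> 0.666...
```

Macro F1, by contrast, computes F1 per tag and averages the per-tag scores, which is why it is much lower than micro F1 in the table: rare tags with poor scores pull the macro average down.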
Base model: microsoft/codebert-base
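As an illustration of what multi-tag classification produces at inference time: unlike single-label classification, each per-tag logit is scored independently with a sigmoid, and every tag clearing a threshold (0.5 here) is kept, so a snippet can receive zero, one, or several tags. A minimal sketch of that decoding step (the tag names and logit values below are hypothetical):

```python
import math

# Hypothetical tag vocabulary and raw per-tag logits from a classifier head.
TAGS = ["math", "greedy", "graphs", "strings"]
logits = [2.1, -0.4, 0.8, -3.0]

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

# Score each tag independently and keep every tag whose probability
# clears 0.5 -- the decisions are per-tag, not mutually exclusive.
probs = [sigmoid(z) for z in logits]
predicted = [tag for tag, p in zip(TAGS, probs) if p > 0.5]
print(predicted)  # → ['math', 'graphs']
```

In practice the logits would come from running the tokenized code snippet through the fine-tuned model; the thresholding shown here is what turns its raw outputs into a tag set.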