Update README.md
README.md CHANGED
@@ -38,7 +38,7 @@ It can be used for `topic classification`, `sentiment analysis` and as a reranker
The model was trained on synthetic and licensed data that allow commercial use and can be used in commercial applications.
- This version of the model uses a layer-wise selection of features that enables a better understanding of different levels of language. The backbone model is [
+ This version of the model uses a layer-wise selection of features that enables a better understanding of different levels of language. The backbone model is [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base).
### Retrieval-augmented Classification (RAC):
The main idea of this model is to utilize information from semantically similar examples to enhance predictions at inference time. Tests showed that providing the model with at least one example from the training dataset, retrieved by semantic similarity, could increase the F1 score from 0.3090 to 0.4275, and in some cases from 0.2594 up to 0.6249. Moreover, the RAC approach with 2 retrieved examples achieved an F1 score comparable to fine-tuning with 8 examples per label: 0.4707 vs. 0.4838, respectively.
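
The snippet below is a minimal sketch of that retrieval step, not the model's official inference API: it uses a `sentence-transformers` bi-encoder to pick the most semantically similar labeled training examples for an input and assembles them into an augmented prompt. The retriever model, the toy labeled pool, the `retrieve_similar` helper, and the example-formatting convention are all illustrative assumptions; the exact input format the classifier expects should follow this model card and its library documentation.

```python
# Sketch of retrieval-augmented classification (RAC): retrieve similar
# labeled examples and provide them alongside the input at inference.
from sentence_transformers import SentenceTransformer, util

# Small labeled pool standing in for the training dataset (assumption).
train_pool = [
    {"text": "The camera quality on this phone is outstanding.", "label": "positive"},
    {"text": "The package arrived late and the box was damaged.", "label": "negative"},
    {"text": "The update will be released next quarter.", "label": "neutral"},
]

# Any sentence-embedding model can serve as the retriever (assumption).
retriever = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
pool_embeddings = retriever.encode(
    [ex["text"] for ex in train_pool], convert_to_tensor=True
)

def retrieve_similar(text: str, k: int = 2):
    """Return the k training examples most similar to `text` by cosine similarity."""
    query_embedding = retriever.encode(text, convert_to_tensor=True)
    scores = util.cos_sim(query_embedding, pool_embeddings)[0]
    top_k = scores.topk(k=min(k, len(train_pool)))
    return [train_pool[i] for i in top_k.indices.tolist()]

text = "Battery life is much better than I expected."
examples = retrieve_similar(text, k=2)

# Prepend the retrieved examples to the input; the formatting used here
# is illustrative and should be adapted to the model's documented format.
augmented_input = (
    "\n".join(f"Example: {ex['text']} => {ex['label']}" for ex in examples)
    + f"\nText: {text}"
)
print(augmented_input)
```

For larger training pools, the brute-force cosine similarity above can be swapped for an approximate nearest-neighbour index (e.g. FAISS) without changing the idea.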