fzn0x committed
Commit 5d7987d · verified · 1 Parent(s): d71440d

Update README.md

Files changed (1)
  1. README.md +16 -2
README.md CHANGED
@@ -1,6 +1,20 @@
+ ---
+ license: mit
+ language:
+ - en
+ metrics:
+ - accuracy
+ base_model:
+ - google-bert/bert-base-uncased
+ pipeline_tag: text-classification
+ tags:
+ - text-classification
+ - spam
+ - english
+ ---
  # Fine-tuned BERT-base-uncased pre-trained model to classify spam SMS.
 
- Github: https://github.com/fzn0x/bert-sms-classification
+ Check Github for Eval Results logs: https://github.com/fzn0x/bert-sms-classification
 
  My second project in Natural Language Processing (NLP), where I fine-tuned a bert-base-uncased model to classify spam SMS. This is huge improvements from https://github.com/fzn0x/bert-indonesian-english-hate-comments.
 
@@ -62,4 +76,4 @@ See [`citations.bib`](./citations.bib) for full BibTeX entries.
  - [scikit-learn](https://scikit-learn.org/stable/) – metrics and preprocessing
  - Logging silencing inspired by Hugging Face GitHub discussions
  - Dataset from [UCI SMS Spam Collection](https://www.kaggle.com/datasets/uciml/sms-spam-collection-dataset)
- - Inspiration from [Kaggle Notebook by Suyash Khare](https://www.kaggle.com/code/suyashkhare/naive-bayes)
+ - Inspiration from [Kaggle Notebook by Suyash Khare](https://www.kaggle.com/code/suyashkhare/naive-bayes)
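
The frontmatter added in this commit (`pipeline_tag: text-classification`, `base_model: google-bert/bert-base-uncased`) tells the Hugging Face Hub how to list and serve the model. As a minimal usage sketch of what that metadata implies on the consumer side — assuming the Hub repo id is `fzn0x/bert-sms-classification`, inferred from the commit author and the linked GitHub project rather than stated in the diff:

```python
# Minimal sketch based on the metadata added in this commit.
# NOTE: the repo id below is an assumption (commit author + GitHub repo name);
# replace it with the actual Hub model id if it differs.
from transformers import pipeline

classifier = pipeline(
    "text-classification",                  # matches the new pipeline_tag
    model="fzn0x/bert-sms-classification",  # assumed Hub repo id
)

sms = [
    "Congratulations! You've won a free cruise. Reply WIN to claim.",
    "Hey, are we still meeting for lunch at noon?",
]

for text, result in zip(sms, classifier(sms)):
    # Each result is a dict like {"label": ..., "score": ...}; the label names
    # (e.g. LABEL_0/LABEL_1 or ham/spam) depend on the fine-tuned model's config.
    print(f"{result['label']:>8}  {result['score']:.3f}  {text}")
```

The `base_model` entry links the card back to `google-bert/bert-base-uncased` so the Hub can show the fine-tune lineage; it does not change how inference is run.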