SandeepSingh1234 committed f1a5d78 (verified) Β· 1 parent: 1ef335f

Create README.md

Files changed (1): README.md (+98, -0)

# Text-to-Text Transfer Transformer (T5) Quantized Model for Data Privacy Policy Summarization

This repository hosts a quantized version of the T5 model, fine-tuned for text summarization tasks. The model has been optimized for efficient deployment while maintaining high accuracy, making it suitable for resource-constrained environments.

## Model Details

- **Model Architecture:** T5
- **Task:** Text Summarization for Data Privacy Policies
- **Dataset:** Hugging Face's `cnn_dailymail`
- **Quantization:** Float16
- **Fine-tuning Framework:** Hugging Face Transformers

## Usage

### Installation

```sh
pip install transformers torch sentencepiece
```

### Loading the Model

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

model_name = "AventIQ-AI/text_summarization_for_data_privacy_policies"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name).to(device)

def test_summarization(model, tokenizer):
    user_text = input("\nEnter your text for summarization:\n")
    # T5 is a text-to-text model, so the summarization task is selected with the "summarize: " prefix
    input_text = "summarize: " + user_text
    inputs = tokenizer(input_text, return_tensors="pt", truncation=True, max_length=512).to(device)

    output = model.generate(
        **inputs,
        max_new_tokens=100,
        num_beams=5,
        length_penalty=0.8,
        early_stopping=True
    )

    summary = tokenizer.decode(output[0], skip_special_tokens=True)
    return summary

print("\nπŸ“ **Model Summary:**")
print(test_summarization(model, tokenizer))
```

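For non-interactive use (for example, in a script or notebook), the same generation settings can be wrapped in a small helper and called on a string directly. The sketch below reuses the `tokenizer`, `model`, and `device` loaded above; the helper name and the sample policy excerpt are purely illustrative:

```python
def summarize(text: str, max_new_tokens: int = 100) -> str:
    # Same preprocessing and generation settings as in the interactive example above
    inputs = tokenizer("summarize: " + text, return_tensors="pt",
                       truncation=True, max_length=512).to(device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens,
                            num_beams=5, length_penalty=0.8, early_stopping=True)
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Illustrative input only; any privacy-policy text can be passed in
policy_excerpt = (
    "We collect your email address and usage data to provide and improve the service. "
    "Aggregated statistics may be shared with third-party analytics providers."
)
print(summarize(policy_excerpt))
```
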
## πŸ“Š ROUGE Evaluation Results

After fine-tuning the **T5-Small** model for text summarization, we obtained the following **ROUGE** scores:

| **Metric** | **Score** | **Meaning** |
|-------------|-----------|-------------|
| **ROUGE-1** | **0.3061** (~30%) | Measures overlap of **unigrams (single words)** between the reference and generated summary. |
| **ROUGE-2** | **0.1241** (~12%) | Measures overlap of **bigrams (two-word phrases)**, indicating coherence and fluency. |
| **ROUGE-L** | **0.2233** (~22%) | Measures the **longest common subsequence of words**, testing how well sentence structure is preserved. |
| **ROUGE-Lsum** | **0.2620** (~26%) | Similar to ROUGE-L but computed per summary sentence, as commonly reported for summarization. |

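Scores of this kind can be recomputed with Hugging Face's `evaluate` library (with the `rouge_score` backend, installed via `pip install evaluate rouge_score`). The snippet below is only a minimal sketch: the two example strings are placeholders, and the exact evaluation split and preprocessing behind the table above are not part of this repository.

```python
import evaluate

rouge = evaluate.load("rouge")

# Placeholder predictions/references; in practice these come from running the
# model over a held-out set of articles and their reference summaries.
predictions = ["the service collects email addresses and usage data"]
references = ["the policy says email addresses and usage data are collected"]

results = rouge.compute(predictions=predictions, references=references)
print(results)  # keys: rouge1, rouge2, rougeL, rougeLsum
```
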
## Fine-Tuning Details

### Dataset

The Hugging Face `cnn_dailymail` dataset was used, which pairs news articles with reference summaries.

### Training

- Number of epochs: 3
- Batch size: 4
- Evaluation strategy: epoch
- Learning rate: 3e-5

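For reference, a training configuration matching these hyperparameters could look like the sketch below. This is an assumption about the setup rather than the repository's actual training script; the `output_dir` and the use of `Seq2SeqTrainingArguments` are illustrative.

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical configuration mirroring the hyperparameters listed above
training_args = Seq2SeqTrainingArguments(
    output_dir="./t5-summarization",   # illustrative path
    num_train_epochs=3,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    evaluation_strategy="epoch",
    learning_rate=3e-5,
)
```
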
### Quantization

Post-training quantization was applied using PyTorch's built-in float16 (half-precision) support to reduce the model size and improve inference efficiency.

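As a rough illustration of a float16 post-training conversion (the repository's actual conversion script is not included, and the paths below are hypothetical):

```python
import torch
from transformers import T5ForConditionalGeneration

# Hypothetical paths; shown only to illustrate the float16 conversion step
model = T5ForConditionalGeneration.from_pretrained("path/to/finetuned-t5")
model = model.half()  # cast weights to torch.float16
model.save_pretrained("path/to/quantized-t5")
```
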
## Repository Structure

```
.
β”œβ”€β”€ model/               # Contains the quantized model files
β”œβ”€β”€ tokenizer_config/    # Tokenizer configuration and vocabulary files
β”œβ”€β”€ model.safetensors    # Quantized model weights
└── README.md            # Model documentation
```

## Limitations

- The model may not generalize well to domains outside the fine-tuning dataset.
- Quantization may result in minor accuracy degradation compared to full-precision models.

## Contributing

Contributions are welcome! Feel free to open an issue or submit a pull request if you have suggestions or improvements.