Gopi9177 committed on
Commit
00dd7f8
·
verified ·
1 Parent(s): 4bfb713

Create Deeplearning for NLP.py

Files changed (1)
  1. pages/Deeplearning for NLP.py +111 -0
pages/Deeplearning for NLP.py ADDED
@@ -0,0 +1,111 @@
import streamlit as st
import numpy as np
import torch
from torch import nn
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Set page config
st.set_page_config(page_title='Advanced NLP with Deep Learning', layout='wide')

# Title with styled emoji
st.markdown('<h1 style="color:#4CAF50; text-align:center;">🤖 Advanced NLP with Deep Learning 🚀</h1>', unsafe_allow_html=True)

# Section 1: Word Embeddings
st.markdown('<h2 style="color:#FF5733">📌 1. Word Embeddings</h2>', unsafe_allow_html=True)

st.subheader('🔎 Definition:')
st.write("""
Word embeddings are dense vector representations of words where similar words have similar representations. They are essential for text-based deep learning models.

- **🔹 Word2Vec (Skip-gram & CBOW)**: Learns word representations based on context.
- **🔹 GloVe (Global Vectors)**: Uses word co-occurrence statistics to learn embeddings.
- **🔹 FastText**: Handles subword information, helping with out-of-vocabulary words.
""")

# Word2Vec Example
st.subheader('🧩 Word2Vec Example:')
sentence = st.text_area("Enter a sentence to visualize Word2Vec embeddings", "NLP is amazing and very useful.")

if st.button('🎨 Visualize Word2Vec'):
    words = sentence.split()
    embeddings = {word: np.random.rand(1, 50) for word in words}
    st.write("**Word2Vec Embeddings (Random Example):**")
    for word, emb in embeddings.items():
        st.write(f"{word}: {emb.flatten()[:5]}...")

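# A minimal cosine-similarity sketch showing how embedding similarity is typically
# measured. The two vectors below are random placeholders, not trained embeddings;
# with a real Word2Vec/GloVe model you would look the vectors up by word instead.
st.subheader('📐 Cosine Similarity Example:')
if st.button('📏 Compare Two Random Embeddings'):
    vec_a, vec_b = np.random.rand(50), np.random.rand(50)
    cosine = float(np.dot(vec_a, vec_b) / (np.linalg.norm(vec_a) * np.linalg.norm(vec_b)))
    st.write(f"**Cosine similarity between the two random vectors:** {cosine:.3f}")
    st.write("With trained embeddings, related words (e.g. 'king' and 'queen') score closer to 1.")
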
# Section 2: Sequence Models
st.markdown('<h2 style="color:#3E7FCB">📌 2. Sequence Models</h2>', unsafe_allow_html=True)

st.subheader('🔎 Definition:')
st.write("""
Sequence models process sequential data like sentences and play a key role in NLP tasks like translation, summarization, and sentiment analysis.

- **🔹 RNN (Recurrent Neural Networks)**: Maintains memory of previous words.
- **🔹 LSTM (Long Short-Term Memory)**: Handles long-range dependencies.
- **🔹 GRU (Gated Recurrent Units)**: A simplified LSTM version.
""")

# RNN Example
st.subheader('🛠️ RNN Example (PyTorch):')
if st.button('🖥️ Show RNN Model Architecture'):
    class SimpleRNN(nn.Module):
        def __init__(self, input_size, hidden_size, output_size):
            super(SimpleRNN, self).__init__()
            self.rnn = nn.RNN(input_size, hidden_size, batch_first=True)
            self.fc = nn.Linear(hidden_size, output_size)

        def forward(self, x):
            out, _ = self.rnn(x)            # out: (batch, seq_len, hidden_size)
            out = self.fc(out[:, -1, :])    # predict from the last time step
            return out

    rnn_model = SimpleRNN(input_size=10, hidden_size=20, output_size=1)
    st.write("**RNN Architecture:**")
    st.write(rnn_model)

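# A minimal LSTM counterpart to the RNN above, for comparison. Layer sizes are
# arbitrary illustrative values; nn.LSTM adds gating and a cell state, which is
# what lets it carry information across longer spans than a plain RNN.
st.subheader('🛠️ LSTM Example (PyTorch):')
if st.button('🖥️ Show LSTM Model Architecture'):
    class SimpleLSTM(nn.Module):
        def __init__(self, input_size, hidden_size, output_size):
            super().__init__()
            self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
            self.fc = nn.Linear(hidden_size, output_size)

        def forward(self, x):
            out, _ = self.lstm(x)           # out: (batch, seq_len, hidden_size)
            return self.fc(out[:, -1, :])   # predict from the last time step

    lstm_model = SimpleLSTM(input_size=10, hidden_size=20, output_size=1)
    st.write("**LSTM Architecture:**")
    st.write(lstm_model)
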
# Section 3: Attention Mechanisms
st.markdown('<h2 style="color:#E67E22">📌 3. Attention Mechanisms</h2>', unsafe_allow_html=True)

st.subheader('🔎 Definition:')
st.write("""
Attention mechanisms allow models to focus on key parts of an input sequence, improving performance on tasks that require long-range dependencies.

- **🔹 Self-attention**: Assigns importance to different words.
- **🔹 Seq2Seq Models**: Encoder-decoder models used for translation.
- **🔹 Transformer**: Parallel processing for high efficiency.
""")

# Transformer Example
st.subheader('🛠️ Transformer Example (Simplified):')
if st.button('🖥️ Show Transformer Architecture'):
    # MultiHeadAttention needs both query and value tensors, so the functional API
    # is used here instead of Sequential (which passes each layer a single input).
    inputs = keras.Input(shape=(None, 512))
    attention = layers.MultiHeadAttention(num_heads=8, key_dim=512)(inputs, inputs)  # self-attention
    pooled = layers.GlobalAveragePooling1D()(attention)
    hidden = layers.Dense(256, activation="relu")(pooled)
    outputs = layers.Dense(1)(hidden)
    transformer_model = keras.Model(inputs, outputs)

    st.write("**Transformer Architecture (Simplified):**")
    summary_lines = []
    transformer_model.summary(print_fn=lambda line, *args, **kwargs: summary_lines.append(line))
    st.text("\n".join(summary_lines))

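# A minimal scaled dot-product attention sketch on random tensors, illustrating the
# core computation softmax(Q·K^T / sqrt(d_k))·V. Shapes are arbitrary; a real
# Transformer layer also learns projection matrices for Q, K, V and uses many heads.
st.subheader('🧮 Scaled Dot-Product Attention Example:')
if st.button('🖥️ Compute Attention Weights'):
    seq_len, d_k = 4, 8
    q = torch.rand(seq_len, d_k)                # queries
    k = torch.rand(seq_len, d_k)                # keys
    v = torch.rand(seq_len, d_k)                # values
    scores = q @ k.T / np.sqrt(d_k)             # (seq_len, seq_len) similarity scores
    weights = torch.softmax(scores, dim=-1)     # each row sums to 1
    attended = weights @ v                      # weighted mixture of the value vectors
    st.write("**Attention weights (each row sums to 1):**")
    st.write(weights.numpy())
    st.write(f"Attended output shape: {tuple(attended.shape)}")
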
# Section 4: Key Attention Components
st.markdown('<h2 style="color:#9C27B0">📌 4. Attention Components</h2>', unsafe_allow_html=True)

st.subheader('🔍 Self-attention:')
st.write("""
Each word in a sequence attends to all other words and assigns an importance weight, capturing long-range dependencies.
""")

st.subheader('🔄 Seq2Seq:')
st.write("""
Used for translation, where an encoder processes input and a decoder generates output.
""")

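# A bare-bones encoder-decoder sketch (GRU-based) of the Seq2Seq idea: the encoder
# compresses the source sequence into a context vector, and the decoder unrolls that
# context into the target sequence. Dimensions are arbitrary and nothing is trained.
st.subheader('🛠️ Seq2Seq Example (PyTorch):')
if st.button('🖥️ Show Encoder-Decoder Shapes'):
    encoder = nn.GRU(input_size=10, hidden_size=16, batch_first=True)
    decoder = nn.GRU(input_size=10, hidden_size=16, batch_first=True)
    src = torch.rand(1, 7, 10)          # (batch, source_length, features)
    _, context = encoder(src)           # context: (num_layers, batch, hidden_size)
    tgt = torch.rand(1, 5, 10)          # (batch, target_length, features)
    out, _ = decoder(tgt, context)      # decoder starts from the encoder's context
    st.write(f"Encoder context shape: {tuple(context.shape)}")
    st.write(f"Decoder output shape: {tuple(out.shape)}")
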
st.subheader('⚡ Transformer:')
st.write("""
Revolutionized NLP by using self-attention in both encoder and decoder while processing all tokens in parallel.
""")

st.markdown('<h3 style="color:#4CAF50; text-align:center;">✨ Thanks for Exploring NLP! ✨</h3>', unsafe_allow_html=True)