import streamlit as st
from transformers import pipeline
import torch
# Page Config
st.set_page_config(page_title='Transformers in NLP', layout='wide')
# Page Title
st.markdown('<h1 style="color:#4CAF50; text-align:center;">Transformers & Pretrained Models in NLP</h1>', unsafe_allow_html=True)
# Transformer Architecture
st.markdown('<h2 style="color:#FF5733">1. Transformer Architecture</h2>', unsafe_allow_html=True)
st.subheader('Definition:')
st.write("""
The **Transformer architecture** revolutionized NLP by using **self-attention** to process sequences in parallel.
- **Self-attention** enables words to focus on others dynamically.
- The **encoder-decoder** structure is used in tasks like translation.
π Introduced in "**Attention is All You Need**" (Vaswani et al., 2017).
""")
st.subheader('Key Components:')
st.write("""
- **Encoder**: Processes input tokens into internal representations.
- **Decoder**: Uses encoder outputs to generate predictions.
- **Multi-head Attention**: Allows diverse focus across sequences.
- **Positional Encoding**: Injects sequence order into embeddings.
""")
# Pretrained Models
st.markdown('<h2 style="color:#3E7FCB">2. Pretrained Models</h2>', unsafe_allow_html=True)
st.subheader('Definition:')
st.write("""
Pretrained models leverage vast corpora to understand language patterns.
- **BERT**: Bi-directional context learning for diverse NLP tasks.
- **GPT**: Text generation with autoregressive modeling.
- **RoBERTa**: Optimized BERT variant.
- **T5**: Universal text-to-text learning.
- **XLNet**: Captures dependencies in all positions.
""")
# Sentiment Analysis Example
st.subheader('Pretrained Model Example: Sentiment Analysis')
# Cache the pipeline so the model is not reloaded on every Streamlit rerun;
# bert-base-uncased has no sentiment head, so a fine-tuned checkpoint is used.
@st.cache_resource
def load_sentiment_pipeline():
    return pipeline("sentiment-analysis", model="distilbert-base-uncased-finetuned-sst-2-english")
nlp = load_sentiment_pipeline()
text = st.text_area("Enter text to analyze", "Transformers are amazing!")
if st.button('Analyze Sentiment'):
    result = nlp(text)[0]
    st.write(f"**Result:** {result['label']} (confidence: {result['score']:.2f})")
# Fine-tuning Pretrained Models
st.markdown('<h2 style="color:#E67E22">3. Fine-tuning Pretrained Models</h2>', unsafe_allow_html=True)
st.subheader('Definition:')
st.write("""
Fine-tuning tailors pretrained models to specific NLP tasks:
- **Sentiment Analysis**: Classifies text sentiments.
- **Named Entity Recognition (NER)**: Detects names, locations, organizations.
- **Question Answering**: Extracts answers from given contexts.
""")
# NER Example
st.subheader('Named Entity Recognition (NER)')
# Cache the NER pipeline so it is only loaded once per session.
@st.cache_resource
def load_ner_pipeline():
    return pipeline("ner", model="dbmdz/bert-large-cased-finetuned-conll03-english")
nlp_ner = load_ner_pipeline()
text_ner = st.text_area("Enter text for NER", "Barack Obama was born in Hawaii.")
if st.button('Perform NER'):
    ner_results = nlp_ner(text_ner)
    st.write("**NER Results:**")
    for entity in ner_results:
        st.write(f"{entity['word']} - {entity['entity']} - Confidence: {entity['score']:.2f}")
# Question Answering Example
st.subheader('Question Answering with BERT')
# Cache the QA pipeline so it is only loaded once per session.
@st.cache_resource
def load_qa_pipeline():
    return pipeline("question-answering", model="bert-large-uncased-whole-word-masking-finetuned-squad")
nlp_qa = load_qa_pipeline()
context = st.text_area("Enter context", "Transformers revolutionized NLP with parallel processing.")
question = st.text_input("Ask a question", "What did transformers revolutionize?")
if st.button('Get Answer'):
    answer = nlp_qa(question=question, context=context)
    st.write(f"**Answer:** {answer['answer']}")
st.markdown('<h3 style="color:#4CAF50; text-align:center;">Thanks for Exploring NLP!</h3>', unsafe_allow_html=True)