import streamlit as st
from transformers import pipeline
import torch

st.set_page_config(page_title='Transformers in NLP', layout='wide')

st.markdown('<h1 style="color:#4CAF50; text-align:center;">Transformers & Pretrained Models in NLP</h1>', unsafe_allow_html=True)

st.markdown('<h2 style="color:#FF5733">1. Transformer Architecture</h2>', unsafe_allow_html=True)

st.subheader('Definition:')
st.write(""" |
|
The **Transformer architecture** revolutionized NLP by using **self-attention** to process sequences in parallel. |
|
- **Self-attention** enables words to focus on others dynamically. |
|
- The **encoder-decoder** structure is used in tasks like translation. |
|
|
|
π Introduced in "**Attention is All You Need**" (Vaswani et al., 2017). |
|
""") |

st.subheader('Key Components:')
st.write("""
- **Encoder**: Processes input tokens into contextual representations.
- **Decoder**: Attends to the encoder outputs to generate the target sequence.
- **Multi-head Attention**: Runs several attention heads in parallel so the model can focus on different relationships at once.
- **Positional Encoding**: Injects token order into the embeddings, since attention itself is order-agnostic (see the sketch below).
""")

st.markdown('<h2 style="color:#3E7FCB">2. Pretrained Models</h2>', unsafe_allow_html=True)

st.subheader('Definition:')
st.write("""
Pretrained models learn general language patterns from vast text corpora before being adapted to specific tasks.
- **BERT**: Bidirectional encoder, strong at understanding tasks such as classification and extraction.
- **GPT**: Autoregressive decoder used for text generation.
- **RoBERTa**: A BERT variant trained longer on more data with an optimized recipe.
- **T5**: Frames every task as text-to-text (input text in, output text out).
- **XLNet**: Permutation-based language modeling that captures bidirectional context.
""")

st.subheader('Pretrained Model Example: Sentiment Analysis')
# bert-base-uncased has no sentiment head, so use a checkpoint fine-tuned on SST-2
nlp = pipeline("sentiment-analysis", model="distilbert-base-uncased-finetuned-sst-2-english")
text = st.text_area("Enter text to analyze", "Transformers are amazing!")
if st.button('Analyze Sentiment'):
    result = nlp(text)
    st.write(f"**Result:** {result[0]['label']} (score: {result[0]['score']:.2f})")

st.markdown('<h2 style="color:#E67E22">3. Fine-tuning Pretrained Models</h2>', unsafe_allow_html=True)

st.subheader('Definition:')
st.write("""
Fine-tuning continues training a pretrained model on task-specific labeled data, adapting it to tasks such as:
- **Sentiment Analysis**: Classifies the sentiment expressed in text.
- **Named Entity Recognition (NER)**: Detects names, locations, and organizations.
- **Question Answering**: Extracts answers from a given context.

A typical fine-tuning recipe is sketched below.
""")

st.subheader('Named Entity Recognition (NER)')
nlp_ner = pipeline("ner", model="dbmdz/bert-large-cased-finetuned-conll03-english")
text_ner = st.text_area("Enter text for NER", "Barack Obama was born in Hawaii.")
if st.button('Perform NER'):
    ner_results = nlp_ner(text_ner)
    st.write("**NER Results:**")
    for entity in ner_results:
        st.write(f"{entity['word']} - {entity['entity']} - Confidence: {entity['score']:.2f}")

st.subheader('Question Answering with BERT')
nlp_qa = pipeline("question-answering", model="bert-large-uncased-whole-word-masking-finetuned-squad")
context = st.text_area("Enter context", "Transformers revolutionized NLP with parallel processing.")
question = st.text_input("Ask a question", "What did transformers revolutionize?")
if st.button('Get Answer'):
    answer = nlp_qa(question=question, context=context)
    st.write(f"**Answer:** {answer['answer']}")

st.markdown('<h3 style="color:#4CAF50; text-align:center;">Thanks for Exploring NLP!</h3>', unsafe_allow_html=True)