# GPT-2 Tiny Shakespeare Model

This is a small autoregressive language model based on the Transformer architecture, trained on the Tiny Shakespeare dataset.
## Model Description
The model is a custom implementation of a TransformerDecoderModel, which uses a decoder-only architecture similar to GPT-2. It was trained on the Tiny Shakespeare dataset to generate text in the style of William Shakespeare.
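For intuition, here is a minimal PyTorch sketch of a decoder-only model of this kind. The class name, layer sizes, and layer counts are illustrative assumptions, not the actual configuration of this checkpoint:

```python
import torch
import torch.nn as nn

class TinyDecoderLM(nn.Module):
    """Illustrative GPT-style decoder-only LM; sizes are assumptions, not this checkpoint's config."""
    def __init__(self, vocab_size, d_model=256, n_heads=4, n_layers=4, max_len=256):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        block = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=4 * d_model,
                                           batch_first=True)
        # Self-attention blocks restricted by a causal mask form a GPT-style decoder stack
        self.blocks = nn.TransformerEncoder(block, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, ids):
        seq_len = ids.size(1)
        pos = torch.arange(seq_len, device=ids.device)
        x = self.tok_emb(ids) + self.pos_emb(pos)
        # Causal mask: each position may attend only to itself and earlier positions
        causal = torch.triu(torch.full((seq_len, seq_len), float('-inf'), device=ids.device),
                            diagonal=1)
        return self.lm_head(self.blocks(x, mask=causal))  # logits over the vocabulary
```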
## How to Use

To generate text with this model, load the model and tokenizer as follows:
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the model and tokenizer from the Hugging Face Hub
model = GPT2LMHeadModel.from_pretrained('NataliaH/gpt2-tiny-shakespeare')
tokenizer = GPT2Tokenizer.from_pretrained('NataliaH/gpt2-tiny-shakespeare')

# Encode the prompt and generate a continuation
input_text = 'To be or not to be'
inputs = tokenizer(input_text, return_tensors='pt')

# GPT-2 has no pad token, so reuse EOS; max_new_tokens caps the continuation length
outputs = model.generate(**inputs, max_new_tokens=50, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
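By default, `generate` decodes greedily, which tends to produce repetitive text. For more varied output you can sample instead; the temperature and top-k values below are illustrative assumptions, not settings tuned for this model:

```python
# Sampled decoding; temperature/top_k are illustrative, not tuned values
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,          # sample from the distribution instead of greedy argmax
    temperature=0.8,         # <1.0 slightly sharpens the distribution
    top_k=50,                # restrict sampling to the 50 most likely tokens
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```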
## Tags
- Transformer
- GPT-2
- Tiny Shakespeare
- Language Model
- Text Generation
- Autoregressive
## Training Details
- Epochs: 3
- Batch size: 4
- Learning rate: 5e-5
- Loss function: cross-entropy
- Optimizer: AdamW
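For reference, a minimal sketch of a training loop matching these settings. It assumes a Hugging Face-style model that returns a cross-entropy loss when given `labels`, and a hypothetical `train_loader` that yields batches of 4 token-ID sequences from the tokenized Tiny Shakespeare text:

```python
from torch.optim import AdamW

optimizer = AdamW(model.parameters(), lr=5e-5)  # Learning rate: 5e-5
model.train()

for epoch in range(3):                          # Epochs: 3
    for batch in train_loader:                  # hypothetical DataLoader, batch size 4
        outputs = model(input_ids=batch, labels=batch)
        loss = outputs.loss                     # cross-entropy over shifted next-token targets
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```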
## License
This model is licensed under the MIT license.