TinyRP

A >tiny< roleplaying model, just 30M parameters, trained from scratch with a custom tokenizer! Inspired by the success of models like Microsoft's Phi and TinyStories, this is an experiment to see whether reasonable results can be achieved for roleplay with a similar approach.

Roleplay was chosen because keeping a story consistent across multiple turns is harder than generating it once from a simple prompt. Note that the training set was not sanitized, so NSFW output is definitely possible.

Out of scope

Anything other than roleplay in English.

Formatting

Use the ChatML format to "chat" with this model. The entire training dataset was converted to this format, so it is the only format the model understands. Use the "system" prompt to describe the name and personality of the AI character; the "user" and "assistant" tags are used as normal.
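For reference, a typical ChatML prompt looks roughly like the sketch below. The character name and messages are placeholders, and since this model uses a custom tokenizer, the exact special tokens should match whatever that tokenizer defines.

```
<|im_start|>system
You are Aria, a witty space pirate who speaks in short, confident sentences.<|im_end|>
<|im_start|>user
*walks into the cantina* Mind if I sit here?<|im_end|>
<|im_start|>assistant
```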

Recommended settings

  • Temperature=1.0
  • Top_p=0.9
  • Max_length=512
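A minimal generation sketch using these settings, assuming the checkpoint loads with the standard transformers auto classes (this is an assumption, not confirmed by the model card):

```python
# Minimal sketch, assuming compatibility with the standard transformers
# auto classes; adjust the model ID or model class if needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DarwinAnim8or/TinyRP"  # assumed Hub ID for this model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a ChatML prompt by hand: system persona, then the first user turn.
prompt = (
    "<|im_start|>system\n"
    "You are Aria, a witty space pirate.<|im_end|>\n"
    "<|im_start|>user\n"
    "*walks into the cantina* Mind if I sit here?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,    # sampling so temperature/top_p take effect
    temperature=1.0,   # recommended settings from above
    top_p=0.9,
    max_length=512,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```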

Training code

Please see this GitHub repo for the training code.
