---
title: Distilltest
emoji: 💬
colorFrom: yellow
colorTo: purple
sdk: gradio
sdk_version: 5.0.1
app_file: app.py
pinned: false
license: mit
short_description: Uncensored R1 Distill
---
# DeepSeek-R1-Distill-Qwen-1.5B Demo

This is a demonstration of the DeepSeek-R1-Distill-Qwen-1.5B-uncensored model, a lightweight 1.5B-parameter language model from ThirdEyeAI. Despite its small size, this model can generate coherent text across various topics.
## Features

- Efficient inference with a small memory footprint
- Adjustable generation parameters (length, temperature, top-p)
- Simple interface for text generation
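The temperature and top-p knobs listed above reshape the model's next-token distribution before sampling. The following dependency-free sketch is illustrative only (it is not the model's or the Space's actual code) and shows the standard way these two parameters are usually applied:

```python
import math

def apply_temperature(logits, temperature):
    """Scale logits: temperature < 1 sharpens the distribution, > 1 flattens it."""
    return [l / temperature for l in logits]

def softmax(logits):
    """Convert logits to probabilities (numerically stable form)."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability reaches top_p,
    then renormalize so the kept probabilities sum to 1 (nucleus sampling)."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}
```

For example, with logits `[2.0, 1.0, 0.1]` and `top_p=0.8`, the two most likely tokens together exceed 0.8 probability, so the third is dropped and sampling happens only over the first two.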
## Usage

1. Enter your prompt in the text box
2. Adjust generation parameters if desired
3. Click "Submit" to generate text
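The steps above correspond roughly to an `app.py` like the following. This is a hedged sketch, not the Space's actual code: the helper `build_generation_kwargs`, the slider ranges, and the widget labels are all assumptions; only the model ID and the Gradio SDK come from this README.

```python
# Hypothetical sketch of an app.py for this Space; assumes gradio and
# transformers are installed (gradio is the declared SDK). Heavy imports are
# deferred into main() so the parameter helper stays importable on its own.

def build_generation_kwargs(max_new_tokens, temperature, top_p):
    """Clamp UI slider values into safe text-generation parameters."""
    return {
        "max_new_tokens": int(min(max(1, max_new_tokens), 1024)),
        "temperature": min(max(0.01, temperature), 2.0),
        "top_p": min(max(0.05, top_p), 1.0),
        "do_sample": True,
    }

def main():
    import gradio as gr
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="thirdeyeai/DeepSeek-R1-Distill-Qwen-1.5B-uncensored",
    )

    def generate(prompt, max_new_tokens, temperature, top_p):
        kwargs = build_generation_kwargs(max_new_tokens, temperature, top_p)
        return generator(prompt, **kwargs)[0]["generated_text"]

    gr.Interface(
        fn=generate,
        inputs=[
            gr.Textbox(label="Prompt"),
            gr.Slider(16, 1024, value=256, step=16, label="Max new tokens"),
            gr.Slider(0.1, 2.0, value=0.7, label="Temperature"),
            gr.Slider(0.05, 1.0, value=0.95, label="Top-p"),
        ],
        outputs=gr.Textbox(label="Generated text"),
    ).launch()

if __name__ == "__main__":
    main()
```

Clamping the slider values before they reach the model keeps a misconfigured UI from requesting unbounded generation lengths or a zero temperature, which `do_sample=True` does not tolerate.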
## Model Information

This Space runs the DeepSeek-R1-Distill-Qwen-1.5B-uncensored model from ThirdEyeAI. It's a distilled version of the Qwen architecture, optimized for efficiency while maintaining good performance. The model is available on Hugging Face at [thirdeyeai/DeepSeek-R1-Distill-Qwen-1.5B-uncensored](https://huggingface.co/thirdeyeai/DeepSeek-R1-Distill-Qwen-1.5B-uncensored).
## Limitations

As a small 1.5B-parameter model, this LLM has certain limitations:

- Less knowledge than larger models
- More limited reasoning capabilities
- May produce less coherent outputs on complex topics
## License

This Space is released under the MIT License, as declared in the `license` field of the metadata above. See the [model card](https://huggingface.co/thirdeyeai/DeepSeek-R1-Distill-Qwen-1.5B-uncensored) for the model's own licensing terms.
## Built With

An example chatbot using [Gradio](https://gradio.app), [`huggingface_hub`](https://huggingface.co/docs/huggingface_hub/v0.22.2/en/index), and the [Hugging Face Inference API](https://huggingface.co/docs/api-inference/index).
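The `huggingface_hub` / Inference API wiring mentioned above can be sketched as follows. Only the model ID comes from this README; the `make_client` helper is a hypothetical name, and the remote call is guarded so nothing hits the network on import:

```python
# Hedged sketch: querying the model through the Hugging Face Inference API
# via huggingface_hub's InferenceClient. Requires huggingface_hub.
from huggingface_hub import InferenceClient

MODEL_ID = "thirdeyeai/DeepSeek-R1-Distill-Qwen-1.5B-uncensored"

def make_client(token=None):
    """Build an InferenceClient pointed at the distilled model.
    Pass a Hugging Face token for gated or rate-limited access."""
    return InferenceClient(model=MODEL_ID, token=token)

if __name__ == "__main__":
    client = make_client()
    # text_generation sends the prompt to the hosted model and returns text.
    print(client.text_generation("Hello, world!", max_new_tokens=64))
```

Constructing the client is cheap and offline; only `text_generation` performs a network request, which makes the helper easy to reuse across a Gradio callback or a batch script.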