---
title: Distilltest
emoji: 💬
colorFrom: yellow
colorTo: purple
sdk: gradio
sdk_version: 5.0.1
app_file: app.py
pinned: false
license: mit
short_description: Uncensored R1 Distill
---
# DeepSeek-R1-Distill-Qwen-1.5B Demo

This is a demonstration of the DeepSeek-R1-Distill-Qwen-1.5B-uncensored model, a lightweight 1.5B parameter language model from ThirdEyeAI. Despite its small size, this model can generate coherent text across various topics.

## Features

- Efficient inference with a small memory footprint
- Adjustable generation parameters (length, temperature, top-p)
- Simple interface for text generation
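To illustrate what the temperature and top-p sliders actually control, here is a minimal, self-contained sketch of nucleus (top-p) filtering and temperature scaling over a toy next-token distribution. The token probabilities are hypothetical and chosen only for illustration; the real model computes these internally.

```python
import math

def apply_temperature(probs, temperature=1.0):
    """Rescale a probability distribution: temperature < 1 sharpens it
    (favours likely tokens), temperature > 1 flattens it."""
    scaled = {t: math.exp(math.log(p) / temperature) for t, p in probs.items()}
    z = sum(scaled.values())
    return {t: p / z for t, p in scaled.items()}

def top_p_filter(probs, top_p=0.9):
    """Keep the smallest set of most-likely tokens whose cumulative
    probability reaches top_p, then renormalise them to sum to 1."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        total += p
        if total >= top_p:
            break
    return {token: p / total for token, p in kept}

# A toy next-token distribution (hypothetical values).
probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "xylophone": 0.05}

# With top_p=0.8, "the" (0.5) and "a" (0.3) already cover the 0.8 mass,
# so the rarer tokens are dropped before sampling.
filtered = top_p_filter(probs, top_p=0.8)

# With temperature=0.5 the distribution sharpens: the top token's
# probability grows relative to the rest.
sharpened = apply_temperature(probs, temperature=0.5)
```

Lower temperature and lower top-p both make output more predictable; raising either increases diversity at the cost of coherence.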

## Usage

1. Enter your prompt in the text box
2. Adjust generation parameters if desired
3. Click "Submit" to generate text

## Model Information

This space runs the DeepSeek-R1-Distill-Qwen-1.5B-uncensored model from ThirdEyeAI. It's a distilled version of the Qwen architecture, optimized for efficiency while maintaining good performance. The model is available on Hugging Face at [thirdeyeai/DeepSeek-R1-Distill-Qwen-1.5B-uncensored](https://huggingface.co/thirdeyeai/DeepSeek-R1-Distill-Qwen-1.5B-uncensored).
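If you want to run the model outside this Space, a typical way is to load it locally with the `transformers` library (assumed installed). This is a sketch, not the exact code used by this Space's `app.py`; the prompt and parameter values are placeholders.

```python
MODEL_ID = "thirdeyeai/DeepSeek-R1-Distill-Qwen-1.5B-uncensored"

def generate(prompt, max_new_tokens=128, temperature=0.7, top_p=0.9):
    """Load the model and sample a completion. The first call downloads
    the weights from Hugging Face, so it needs network access and a few GB
    of disk space."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,          # sample instead of greedy decoding
        temperature=temperature,
        top_p=top_p,
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example (downloads the model on first use):
# print(generate("Explain model distillation in one sentence."))
```

At 1.5B parameters the model fits comfortably on CPU or a modest GPU, which is what makes it suitable for a free Space.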

## Limitations

As a small 1.5B parameter model, this LLM has certain limitations:
- Less knowledge than larger models
- More limited reasoning capabilities
- May produce less coherent outputs on complex topics

## License

This Space is released under the MIT License (see the `license` field in the metadata above). The underlying model may carry its own license terms; check the [model card](https://huggingface.co/thirdeyeai/DeepSeek-R1-Distill-Qwen-1.5B-uncensored) for details.

## Built With

An example chatbot using [Gradio](https://gradio.app), [`huggingface_hub`](https://huggingface.co/docs/huggingface_hub/v0.22.2/en/index), and the [Hugging Face Inference API](https://huggingface.co/docs/api-inference/index).