---
title: Gradio Chatbot
emoji: π
colorFrom: yellow
colorTo: purple
sdk: gradio
sdk_version: 5.0.1
app_file: app.py
pinned: true
short_description: Chatbot
---

# Gradio Chatbot: HuggingFace SLMs

A modular Gradio-based application for interacting with various small language models through the Hugging Face API.

## Project Structure

```
slm-poc/
├── main.py                   # Main application entry point
├── modules/
│   ├── __init__.py           # Package initialization
│   ├── config.py             # Configuration settings and constants
│   ├── document_processor.py # Document handling and processing
│   └── model_handler.py      # Model interaction and response generation
├── Dockerfile                # Docker configuration
├── requirements.txt          # Python dependencies
└── README.md                 # Project documentation
```

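The modules above separate configuration, document processing, and model access from the Gradio UI in `main.py`. As a rough illustration of what `modules/config.py` might hold (the names and values below are assumptions for illustration, not the project's actual settings):

```python
# Hypothetical sketch of modules/config.py; the real constants are not shown in this README.

# Small language models offered in the model dropdown (placeholder choices).
AVAILABLE_MODELS = {
    "Phi-3 Mini 4k Instruct": "microsoft/Phi-3-mini-4k-instruct",
    "Qwen2.5 1.5B Instruct": "Qwen/Qwen2.5-1.5B-Instruct",
}

# Default generation parameters exposed as sliders in the UI.
DEFAULT_TEMPERATURE = 0.7
DEFAULT_TOP_P = 0.95
DEFAULT_MAX_LENGTH = 512

# File types accepted by the document upload widget.
SUPPORTED_EXTENSIONS = (".pdf", ".docx", ".txt")
```
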
## Features

- Interactive chat interface with multiple language model options
- Document processing (PDF, DOCX, TXT) for question answering
- Adjustable model parameters (temperature, top_p, max_length)
- Streaming responses for better user experience (see the sketch below)
- Docker support for easy deployment

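The streaming behavior can be reproduced with a minimal sketch along these lines, assuming the Hugging Face `InferenceClient` chat-completion endpoint; the model id and parameter values are placeholders, not the project's actual defaults or its `model_handler.py` code:

```python
import os

import gradio as gr
from huggingface_hub import InferenceClient

MODEL_ID = "HuggingFaceH4/zephyr-7b-beta"  # placeholder model id

client = InferenceClient(token=os.environ.get("HF_TOKEN"))

def respond(message, history):
    # Rebuild the conversation in the chat-completion message format.
    messages = [{"role": m["role"], "content": m["content"]} for m in history]
    messages.append({"role": "user", "content": message})

    partial = ""
    # stream=True yields chunks as they arrive, so the UI updates token by token.
    for chunk in client.chat_completion(
        messages,
        model=MODEL_ID,
        max_tokens=512,
        temperature=0.7,
        top_p=0.95,
        stream=True,
    ):
        partial += chunk.choices[0].delta.content or ""
        yield partial

demo = gr.ChatInterface(respond, type="messages")

if __name__ == "__main__":
    demo.launch()
```
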
## Setup and Running

### Local Development

1. Clone the repository
2. Install dependencies:
   ```
   pip install -r requirements.txt
   ```
3. Create a `.env` file with your HuggingFace API token (see the token-loading sketch after these steps):
   ```
   HF_TOKEN=hf_your_token_here
   ```
4. Run the application:
   ```
   python main.py
   ```

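How the token is read from `.env` is not shown in this README; a common pattern, assuming `python-dotenv` is among the dependencies, looks like this:

```python
import os

from dotenv import load_dotenv  # assumes python-dotenv is listed in requirements.txt

load_dotenv()  # copies HF_TOKEN from the local .env file into the process environment
hf_token = os.environ.get("HF_TOKEN")
if not hf_token:
    raise RuntimeError("HF_TOKEN is not set; add it to .env or export it in your shell.")
```
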
### Docker Deployment

1. Build the Docker image:
   ```
   docker build -t slm-poc .
   ```
2. Run the container:
   ```
   docker run -p 7860:7860 -e HF_TOKEN=hf_your_token_here slm-poc
   ```

## Usage

1. Access the web interface at http://localhost:7860 (or call the app programmatically; see the sketch below)
2. Enter your HuggingFace API token if not provided via environment variables
3. Select your preferred model and adjust parameters
4. Start chatting with the model
5. Optionally upload documents for document-based Q&A

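A running instance can also be driven without the browser via `gradio_client`. The endpoint name and argument list below are assumptions about how this app exposes its chat function; check the app's "Use via API" page for the real signature:

```python
from gradio_client import Client

client = Client("http://localhost:7860")

# "/chat" is the default endpoint name for gr.ChatInterface apps; this app's actual
# API name and any extra inputs (model choice, temperature, ...) may differ.
reply = client.predict("What is a small language model?", api_name="/chat")
print(reply)
```
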
## Supported Models

Text-to-text (T2T) inference models provided by Hugging Face via the Inference API.

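The README does not pin a model list; one way to browse candidates is through `huggingface_hub` (an illustrative query, not the project's own discovery logic):

```python
from huggingface_hub import HfApi

api = HfApi()

# Show a few popular text-generation models on the Hub
# (not every listed model is deployed on the serverless Inference API).
for model in api.list_models(filter="text-generation", sort="downloads", limit=5):
    print(model.id)
```
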
## License

This project is licensed under the MIT License - see the LICENSE file for details.