---
title: Gradio Chatbot
emoji: 🚀
colorFrom: yellow
colorTo: purple
sdk: gradio
sdk_version: 5.0.1
app_file: app.py
pinned: true
short_description: Chatbot
---

# Gradio Chatbot: HuggingFace SLMs

A modular Gradio-based application for interacting with various small language models through the Hugging Face API.

## Project Structure

```
slm-poc/
├── main.py                   # Main application entry point
├── modules/
│   ├── __init__.py           # Package initialization
│   ├── config.py             # Configuration settings and constants
│   ├── document_processor.py # Document handling and processing
│   └── model_handler.py      # Model interaction and response generation
├── Dockerfile                # Docker configuration
├── requirements.txt          # Python dependencies
└── README.md                 # Project documentation
```

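The layout above suggests that `config.py` centralizes the model catalog and default generation settings. A minimal sketch of what such a module might contain (the `MODELS` entries, `DEFAULT_PARAMS` values, and `resolve_model` helper are illustrative assumptions, not the repo's actual contents):

```python
# Hypothetical sketch of modules/config.py; the real names and
# model IDs in this repository may differ.
MODELS = {
    "SmolLM2-1.7B": "HuggingFaceTB/SmolLM2-1.7B-Instruct",
    "Qwen2.5-1.5B": "Qwen/Qwen2.5-1.5B-Instruct",
}

DEFAULT_PARAMS = {
    "temperature": 0.7,  # sampling randomness
    "top_p": 0.9,        # nucleus-sampling cutoff
    "max_length": 512,   # maximum tokens to generate
}

def resolve_model(display_name: str) -> str:
    """Map a UI display name to a Hugging Face Hub model ID."""
    return MODELS[display_name]
```

Keeping the catalog in one dict makes it easy to populate a Gradio dropdown and to add models without touching the UI code.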
## Features

- Interactive chat interface with multiple language model options
- Document processing (PDF, DOCX, TXT) for question answering
- Adjustable model parameters (temperature, top_p, max_length)
- Streaming responses for a better user experience
- Docker support for easy deployment

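Streaming in a Gradio chat callback is usually implemented by yielding the partially assembled reply after each token, which `gr.ChatInterface` renders incrementally. A hedged sketch, with a hard-coded chunk iterator standing in for the model's token stream:

```python
from typing import Iterator

def stream_reply(token_chunks: Iterator[str]) -> Iterator[str]:
    """Yield the reply so far after each chunk arrives, the shape
    Gradio expects from a streaming chat function."""
    partial = ""
    for chunk in token_chunks:
        partial += chunk
        yield partial

# In the real app, the chat fn would iterate the model's token stream
# the same way instead of this hard-coded stub.
updates = list(stream_reply(iter(["Hel", "lo", "!"])))
```

Each yielded value replaces the previous one in the chat window, so the user sees the answer grow token by token.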
## Setup and Running

### Local Development

1. Clone the repository.
2. Install the dependencies:
   ```
   pip install -r requirements.txt
   ```
3. Create a `.env` file containing your Hugging Face API token:
   ```
   HF_TOKEN=hf_your_token_here
   ```
4. Run the application:
   ```
   python main.py
   ```

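Apps like this commonly load the `.env` file with `python-dotenv`'s `load_dotenv()` before reading `HF_TOKEN` from the environment. As a dependency-free illustration of the same idea (the `load_dotenv_minimal` helper is hypothetical, not part of this repo):

```python
import os
import tempfile

def load_dotenv_minimal(path: str) -> None:
    """Read KEY=VALUE lines from a .env file into os.environ,
    skipping blanks and comments, without overriding existing values."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Demo with a throwaway file shaped like the .env described above.
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as fh:
    fh.write("# Hugging Face credentials\nHF_TOKEN=hf_your_token_here\n")
os.environ.pop("HF_TOKEN", None)  # ensure a clean slate for the demo
load_dotenv_minimal(fh.name)
token = os.environ["HF_TOKEN"]
```

Using `setdefault` means a token already exported in the shell wins over the file, which is the usual precedence for `.env` loaders.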
### Docker Deployment

1. Build the Docker image:
   ```
   docker build -t slm-poc .
   ```
2. Run the container:
   ```
   docker run -p 7860:7860 -e HF_TOKEN=hf_your_token_here slm-poc
   ```

## Usage

1. Access the web interface at http://localhost:7860
2. Enter your Hugging Face API token if it was not provided via an environment variable
3. Select your preferred model and adjust its parameters
4. Start chatting with the model
5. Optionally, upload documents for document-based Q&A

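Document-based Q&A typically works by prepending the extracted document text to the question before sending the combined prompt to the model. A sketch under that assumption (the template and `build_doc_prompt` helper are illustrative, not the repo's actual implementation):

```python
def build_doc_prompt(document_text: str, question: str,
                     max_chars: int = 4000) -> str:
    """Combine extracted document text with the user's question,
    truncating long documents to keep the prompt within budget."""
    context = document_text[:max_chars]
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_doc_prompt("Gradio is a UI library for ML demos.",
                          "What is Gradio?")
```

The character cap is a crude stand-in for proper token counting; chunking or retrieval would be needed for documents larger than the model's context window.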
## Supported Models

Text-to-text (T2T) generation models served by Hugging Face via the Inference API.

## License

This project is licensed under the MIT License; see the LICENSE file for details.