stivenDR14 committed · Commit b526b77 · 1 Parent(s): 7c1a0a6

change readme
Files changed (2)
  1. README.md +0 -144
  2. readme-1.md +141 -0
README.md CHANGED
@@ -1,147 +1,4 @@
- # 🤖 PDF AI Assistant
-
- A multilingual PDF processing application that leverages various AI models to analyze, summarize, and interact with PDF documents. Built with Python, Gradio, and LangChain.
-
- ## 🌟 Features
-
- - **Multiple AI Models Support**:
-
-   - OpenAI GPT-4
-   - IBM Granite 3.1
-   - Mistral Small 24B
-   - SmolLM2 1.7B
-   - Local Ollama models
-
- - **Multilingual Interface**:
-
-   - English
-   - Español
-   - Deutsch
-   - Français
-   - Português
-
- - **Core Functionalities**:
-   - 📝 Text extraction from PDFs
-   - 💬 Interactive Q&A with document content
-   - 📋 Document summarization
-   - 👨‍💼 Customizable specialist advisor
-   - 🔄 Dynamic chunk size and overlap settings
-
- ## 🛠️ Installation
-
- 1. Clone the repository:
-
-    ```bash
-    git clone <repository-url>
-    cd pdf-ai-assistant
-    ```
-
- 2. Install required dependencies:
-
-    ```bash
-    pip install -r requirements.txt
-    ```
-
- 3. Set up environment variables:
-
-    ```bash
-    # Create .env file
-    touch .env
-
-    # Add your API keys (if using)
-    WATSONX_APIKEY=your_watsonx_api_key
-    WATSONX_PROJECT_ID=your_watsonx_project_id
-    ```
-
- ## 📦 Dependencies
-
- - gradio
- - langchain
- - chromadb
- - PyPDF2
- - ollama (for local models)
- - python-dotenv
- - requests
- - ibm-watsonx-ai
-
- ## 🚀 Usage
-
- 1. Start the application:
-
-    ```bash
-    python app.py
-    ```
-
- 2. Open your web browser and navigate to the provided URL (usually http://localhost:7860)
-
- 3. Select your preferred:
-
-    - Language
-    - AI Model
-    - Model Type (Local/API)
-
- 4. Upload a PDF file and process it
-
- 5. Use any of the three main features:
-    - Ask questions about the document
-    - Generate a comprehensive summary
-    - Get specialized analysis using the custom advisor
-
- ## 💡 Features in Detail
-
- ### Q&A System
-
- - Interactive chat interface
- - Context-aware responses
- - Source page references
-
- ### Summarization
-
- - Chunk-based processing
- - Configurable chunk sizes
- - Comprehensive document overview
-
- ### Specialist Advisor
-
- - Customizable expert roles
- - Detailed analysis based on expertise
- - Structured insights and recommendations
-
- ## 🔧 Configuration
-
- The application supports various AI models:
-
- - Local models via Ollama
- - API-based models (OpenAI, IBM WatsonX)
- - Hugging Face models
-
- For Ollama local models, ensure:
-
- ```bash
- ollama pull granite3.1-dense
- ollama pull granite-embedding:278m
- ```
-
- ## 🌐 Language Support
-
- The interface and AI responses are available in:
-
- - English
- - Spanish
- - German
- - French
- - Portuguese
-
- ## 📝 License
-
- [MIT License]
-
- ## 🤝 Contributing
-
- Contributions, issues, and feature requests are welcome!
-
  ---
-
  title: PDF Chatbot
  emoji: 🌍
  colorFrom: blue
@@ -150,7 +7,6 @@ sdk: gradio
  sdk_version: 5.19.0
  app_file: app.py
  pinned: true
-
  ---

  Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
readme-1.md ADDED
@@ -0,0 +1,141 @@
# 🤖 PDF AI Assistant

A multilingual PDF processing application that leverages various AI models to analyze, summarize, and interact with PDF documents. Built with Python, Gradio, and LangChain.

## 🌟 Features

- **Multiple AI Models Support**:

  - OpenAI GPT-4
  - IBM Granite 3.1
  - Mistral Small 24B
  - SmolLM2 1.7B
  - Local Ollama models

- **Multilingual Interface**:

  - English
  - Español
  - Deutsch
  - Français
  - Português

- **Core Functionalities**:
  - 📝 Text extraction from PDFs
  - 💬 Interactive Q&A with document content
  - 📋 Document summarization
  - 👨‍💼 Customizable specialist advisor
  - 🔄 Dynamic chunk size and overlap settings

## 🛠️ Installation

1. Clone the repository:

   ```bash
   git clone <repository-url>
   cd pdf-ai-assistant
   ```

2. Install required dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Set up environment variables:

   ```bash
   # Create .env file
   touch .env

   # Add your API keys (if using)
   WATSONX_APIKEY=your_watsonx_api_key
   WATSONX_PROJECT_ID=your_watsonx_project_id
   ```
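If you want to check these variables from your own scripts, here is a minimal sketch of reading them with `python-dotenv` (listed under Dependencies); the snippet is illustrative and not taken from `app.py`:

```python
# Illustrative only: load the WatsonX credentials defined in .env above.
# The variable names match the README; everything else is a placeholder.
import os

from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from .env into the process environment

api_key = os.getenv("WATSONX_APIKEY")
project_id = os.getenv("WATSONX_PROJECT_ID")

if not api_key or not project_id:
    print("WatsonX credentials missing; API-based models will be unavailable.")
```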
## 📦 Dependencies

- gradio
- langchain
- chromadb
- PyPDF2
- ollama (for local models)
- python-dotenv
- requests
- ibm-watsonx-ai
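PyPDF2 covers the text-extraction step listed under Core Functionalities; a rough, illustrative sketch of that step (not the application's actual code, and the file name is a placeholder):

```python
# Illustrative sketch: read a PDF and collect the text of each page.
from PyPDF2 import PdfReader

reader = PdfReader("example.pdf")  # placeholder path
pages = [page.extract_text() or "" for page in reader.pages]
full_text = "\n".join(pages)
print(f"Extracted {len(pages)} pages ({len(full_text)} characters)")
```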
## 🚀 Usage

1. Start the application:

   ```bash
   python app.py
   ```

2. Open your web browser and navigate to the provided URL (usually http://localhost:7860)

3. Select your preferred:

   - Language
   - AI Model
   - Model Type (Local/API)

4. Upload a PDF file and process it

5. Use any of the three main features:
   - Ask questions about the document
   - Generate a comprehensive summary
   - Get specialized analysis using the custom advisor

## 💡 Features in Detail

### Q&A System

- Interactive chat interface
- Context-aware responses
- Source page references
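A rough sketch of the retrieval pattern that typically backs context-aware answers and source-page references, pairing Chroma with an Ollama embedding model. The import paths (from `langchain-community`) and the whole snippet are illustrative assumptions, not code from this repository:

```python
# Illustrative only: index page texts with their page numbers, then retrieve
# the most relevant pages for a question.
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma

pages = ["Text of page 1 ...", "Text of page 2 ..."]  # placeholder page texts

store = Chroma.from_texts(
    texts=pages,
    embedding=OllamaEmbeddings(model="granite-embedding:278m"),
    metadatas=[{"page": i + 1} for i in range(len(pages))],  # keep page numbers
)

for doc in store.similarity_search("What does the document conclude?", k=2):
    print(doc.metadata["page"], doc.page_content[:80])
```

Storing the page number as metadata is what makes it possible to cite source pages alongside an answer.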
### Summarization

- Chunk-based processing
- Configurable chunk sizes
- Comprehensive document overview
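A minimal sketch of the chunking step described above, using LangChain's `RecursiveCharacterTextSplitter`; the import path can vary with your LangChain version, and the size/overlap values are placeholders for the settings exposed in the UI:

```python
# Illustrative only: split the extracted document text into overlapping chunks.
from langchain.text_splitter import RecursiveCharacterTextSplitter

document_text = "..."  # placeholder: the full extracted PDF text

splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_text(document_text)

# A typical chunk-based summarizer summarizes each chunk separately and then
# merges the partial summaries into the final overview.
print(f"{len(chunks)} chunks")
```

Smaller chunks give the model tighter context at the cost of more calls, and the overlap keeps sentences that span a chunk boundary from being lost.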
### Specialist Advisor

- Customizable expert roles
- Detailed analysis based on expertise
- Structured insights and recommendations
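As an illustration only, a customizable specialist role is usually expressed as a prompt template; the wording and names below are hypothetical, not the app's actual prompt:

```python
# Illustrative only: a role-based prompt template for the advisor idea.
from langchain.prompts import PromptTemplate

advisor_prompt = PromptTemplate.from_template(
    "You are an experienced {specialty}. Using only the document excerpts "
    "below, provide structured insights and recommendations.\n\n{context}"
)

print(advisor_prompt.format(specialty="contract lawyer", context="<retrieved chunks>"))
```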
## 🔧 Configuration

The application supports various AI models:

- Local models via Ollama
- API-based models (OpenAI, IBM WatsonX)
- Hugging Face models

For Ollama local models, ensure:

```bash
ollama pull granite3.1-dense
ollama pull granite-embedding:278m
```
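Once the models are pulled, a quick, illustrative smoke test that the local model responds before launching the app (assumes the `ollama` Python package and a running Ollama server):

```python
# Illustrative only: one round-trip against the locally pulled model.
import ollama

reply = ollama.chat(
    model="granite3.1-dense",
    messages=[{"role": "user", "content": "Reply with one short sentence."}],
)
print(reply["message"]["content"])
```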
## 🌐 Language Support

The interface and AI responses are available in:

- English
- Spanish
- German
- French
- Portuguese

## 📝 License

[MIT License]

## 🤝 Contributing

Contributions, issues, and feature requests are welcome!