Meroar committed on
Commit 75a0618 · verified · 1 Parent(s): 498ea55

Add 3 files

Files changed (3)
  1. README.md +7 -5
  2. index.html +948 -19
  3. prompts.txt +1 -0
README.md CHANGED
@@ -1,10 +1,12 @@
  ---
- title: Batch Ollamanator
- emoji: 📊
- colorFrom: blue
- colorTo: green
+ title: batch-ollamanator
+ emoji: 🐳
+ colorFrom: pink
+ colorTo: purple
  sdk: static
  pinned: false
+ tags:
+ - deepsite
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
index.html CHANGED
@@ -1,19 +1,948 @@
- <!doctype html>
- <html>
- <head>
- <meta charset="utf-8" />
- <meta name="viewport" content="width=device-width" />
- <title>My static Space</title>
- <link rel="stylesheet" href="style.css" />
- </head>
- <body>
- <div class="card">
- <h1>Welcome to your static Space!</h1>
- <p>You can modify this app directly by editing <i>index.html</i> in the Files and versions tab.</p>
- <p>
- Also don't forget to check the
- <a href="https://huggingface.co/docs/hub/spaces" target="_blank">Spaces documentation</a>.
- </p>
- </div>
- </body>
- </html>
+ <!DOCTYPE html>
+ <html lang="en">
+ <head>
+ <meta charset="UTF-8">
+ <meta name="viewport" content="width=device-width, initial-scale=1.0">
+ <title>Ollama Workstation</title>
+ <script src="https://cdn.tailwindcss.com"></script>
+ <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.4.0/css/all.min.css">
+ <style>
+ .message-stream {
+ height: calc(100vh - 300px);
+ }
+ .sidebar {
+ width: 300px;
+ transition: all 0.3s;
+ }
+ .sidebar.collapsed {
+ transform: translateX(-280px);
+ }
+ .model-card:hover {
+ transform: translateY(-2px);
+ box-shadow: 0 10px 15px -3px rgba(0, 0, 0, 0.1), 0 4px 6px -2px rgba(0, 0, 0, 0.05);
+ }
+ .typing-indicator::after {
+ content: '...';
+ animation: typing 1.5s infinite;
+ }
+ @keyframes typing {
+ 0% { content: '.'; }
+ 33% { content: '..'; }
+ 66% { content: '...'; }
+ }
+ .response-area {
+ scroll-behavior: smooth;
+ }
+ .tab-active {
+ border-bottom: 2px solid #3b82f6;
+ }
+ </style>
+ </head>
+ <body class="bg-gray-100 font-sans flex h-screen overflow-hidden">
+ <!-- Sidebar -->
+ <div class="sidebar bg-gray-800 text-white h-full flex flex-col border-r border-gray-700">
+ <div class="p-4 border-b border-gray-700 flex justify-between items-center">
+ <h1 class="text-xl font-bold">Ollama Workstation</h1>
+ <button id="toggle-sidebar" class="text-gray-400 hover:text-white">
+ <i class="fas fa-chevron-left"></i>
+ </button>
+ </div>
+
+ <div class="flex-1 overflow-y-auto">
+ <!-- Model Management -->
+ <div class="p-4">
+ <h2 class="text-lg font-semibold mb-2">Models</h2>
+ <div class="space-y-2">
+ <button id="list-models" class="w-full bg-blue-600 hover:bg-blue-700 text-white py-2 px-4 rounded flex items-center justify-between">
+ <span>Available Models</span>
+ <i class="fas fa-list"></i>
+ </button>
+ <button id="pull-model" class="w-full bg-gray-700 hover:bg-gray-600 text-white py-2 px-4 rounded flex items-center justify-between">
+ <span>Pull Model</span>
+ <i class="fas fa-download"></i>
+ </button>
+ <button id="delete-model" class="w-full bg-red-600 hover:bg-red-700 text-white py-2 px-4 rounded flex items-center justify-between">
+ <span>Delete Model</span>
+ <i class="fas fa-trash"></i>
+ </button>
+ </div>
+ </div>
+
+ <!-- Model List -->
+ <div id="model-list" class="p-4 border-t border-gray-700 hidden">
+ <h3 class="font-medium mb-2">Installed Models</h3>
+ <div id="models-container" class="space-y-2">
+ <!-- Models will be populated here -->
+ </div>
+ </div>
+
+ <!-- Pull Model Form -->
+ <div id="pull-model-form" class="p-4 border-t border-gray-700 hidden">
+ <h3 class="font-medium mb-2">Pull Model</h3>
+ <div class="space-y-2">
+ <input id="model-to-pull" type="text" placeholder="Model name (e.g. llama2)" class="w-full bg-gray-700 text-white px-3 py-2 rounded border border-gray-600">
+ <button id="confirm-pull" class="w-full bg-blue-600 hover:bg-blue-700 text-white py-2 px-4 rounded">
+ Pull Model
+ </button>
+ </div>
+ <div id="pull-progress" class="mt-2 hidden">
+ <div class="flex justify-between text-sm">
+ <span>Downloading...</span>
+ <span id="pull-percentage">0%</span>
+ </div>
+ <div class="w-full bg-gray-700 rounded-full h-2.5 mt-1">
+ <div id="pull-progress-bar" class="bg-blue-600 h-2.5 rounded-full" style="width: 0%"></div>
+ </div>
+ </div>
+ </div>
+
+ <!-- Delete Model Form -->
+ <div id="delete-model-form" class="p-4 border-t border-gray-700 hidden">
+ <h3 class="font-medium mb-2">Delete Model</h3>
+ <div class="space-y-2">
+ <select id="model-to-delete" class="w-full bg-gray-700 text-white px-3 py-2 rounded border border-gray-600">
+ <option value="">Select a model</option>
+ </select>
+ <button id="confirm-delete" class="w-full bg-red-600 hover:bg-red-700 text-white py-2 px-4 rounded">
+ Delete Model
+ </button>
+ </div>
+ </div>
+
+ <!-- Settings -->
+ <div class="p-4 border-t border-gray-700">
+ <h2 class="text-lg font-semibold mb-2">Settings</h2>
+ <div class="space-y-3">
+ <div>
+ <label class="block text-sm font-medium mb-1">Ollama Host</label>
+ <input id="ollama-host" type="text" value="http://localhost:11434" class="w-full bg-gray-700 text-white px-3 py-2 rounded border border-gray-600">
+ </div>
+ <div>
+ <label class="block text-sm font-medium mb-1">Temperature</label>
+ <input id="temperature" type="range" min="0" max="1" step="0.1" value="0.7" class="w-full">
+ <span id="temperature-value" class="text-sm">0.7</span>
+ </div>
+ <div>
+ <label class="flex items-center space-x-2">
+ <input id="stream-responses" type="checkbox" checked class="rounded bg-gray-700 border-gray-600 text-blue-600">
+ <span class="text-sm">Stream Responses</span>
+ </label>
+ </div>
+ </div>
+ </div>
+ </div>
+
+ <div class="p-4 border-t border-gray-700">
+ <div class="flex items-center space-x-2">
+ <div class="h-3 w-3 rounded-full bg-green-500"></div>
+ <span id="connection-status" class="text-sm">Connected</span>
+ </div>
+ </div>
+ </div>
+
+ <!-- Main Content -->
+ <div class="flex-1 flex flex-col h-full overflow-hidden">
+ <!-- Model Info Bar -->
+ <div class="bg-white border-b border-gray-200 p-3 flex items-center justify-between">
+ <div class="flex items-center space-x-3">
+ <div id="current-model" class="bg-blue-100 text-blue-800 px-3 py-1 rounded-full text-sm font-medium">
+ No model selected
+ </div>
+ <div id="model-loading" class="hidden">
+ <div class="flex items-center space-x-2 text-gray-500">
+ <div class="animate-spin">
+ <i class="fas fa-spinner"></i>
+ </div>
+ <span>Loading model...</span>
+ </div>
+ </div>
+ </div>
+ <div class="flex space-x-2">
+ <button id="new-chat" class="bg-gray-100 hover:bg-gray-200 text-gray-800 px-3 py-1 rounded text-sm flex items-center space-x-1">
+ <i class="fas fa-plus"></i>
+ <span>New Chat</span>
+ </button>
+ <button id="export-chat" class="bg-gray-100 hover:bg-gray-200 text-gray-800 px-3 py-1 rounded text-sm flex items-center space-x-1">
+ <i class="fas fa-file-export"></i>
+ <span>Export</span>
+ </button>
+ </div>
+ </div>
+
+ <!-- Chat Area -->
+ <div class="flex-1 overflow-hidden flex flex-col">
+ <!-- Response Area -->
+ <div id="response-area" class="response-area flex-1 overflow-y-auto p-4 space-y-6 bg-white">
+ <div class="text-center text-gray-500 py-10">
+ <i class="fas fa-robot text-4xl mb-2"></i>
+ <p class="text-lg">Select a model and start chatting</p>
+ </div>
+ </div>
+
+ <!-- Input Area -->
+ <div class="border-t border-gray-200 bg-gray-50 p-4">
+ <div class="flex space-x-2 mb-2">
+ <button id="chat-tab" class="tab-active px-3 py-1 text-sm font-medium">Chat</button>
+ <button id="generate-tab" class="px-3 py-1 text-sm font-medium text-gray-500 hover:text-gray-700">Generate</button>
+ <button id="structured-tab" class="px-3 py-1 text-sm font-medium text-gray-500 hover:text-gray-700">Structured</button>
+ </div>
+
+ <!-- Chat Tab -->
+ <div id="chat-input" class="space-y-2">
+ <div class="flex space-x-2">
+ <select id="message-role" class="bg-gray-200 text-gray-800 px-3 py-2 rounded text-sm">
+ <option value="user">User</option>
+ <option value="system">System</option>
+ <option value="assistant">Assistant</option>
+ </select>
+ <button id="add-image" class="bg-gray-200 hover:bg-gray-300 text-gray-800 px-3 py-2 rounded text-sm flex items-center">
+ <i class="fas fa-image mr-1"></i>
+ <span>Image</span>
+ </button>
+ </div>
+ <div class="relative">
+ <textarea id="message-input" rows="3" class="w-full px-4 py-3 border border-gray-300 rounded-lg focus:ring-2 focus:ring-blue-500 focus:border-blue-500 resize-none" placeholder="Type your message here..."></textarea>
+ <button id="send-message" class="absolute right-3 bottom-3 bg-blue-600 hover:bg-blue-700 text-white p-2 rounded-full">
+ <i class="fas fa-paper-plane"></i>
+ </button>
+ </div>
+ </div>
+
+ <!-- Generate Tab -->
+ <div id="generate-input" class="space-y-2 hidden">
+ <div class="flex space-x-2">
+ <input id="generate-prompt" type="text" class="flex-1 px-4 py-2 border border-gray-300 rounded-lg" placeholder="Enter your prompt...">
+ <button id="send-generate" class="bg-blue-600 hover:bg-blue-700 text-white px-4 py-2 rounded-lg">
+ Generate
+ </button>
+ </div>
+ <div class="grid grid-cols-2 gap-2">
+ <div>
+ <label class="block text-sm font-medium text-gray-700 mb-1">System Prompt</label>
+ <textarea id="system-prompt" rows="2" class="w-full px-3 py-2 border border-gray-300 rounded-lg text-sm"></textarea>
+ </div>
+ <div>
+ <label class="block text-sm font-medium text-gray-700 mb-1">Template</label>
+ <textarea id="generate-template" rows="2" class="w-full px-3 py-2 border border-gray-300 rounded-lg text-sm"></textarea>
+ </div>
+ </div>
+ </div>
+
+ <!-- Structured Tab -->
+ <div id="structured-input" class="space-y-2 hidden">
+ <div class="flex space-x-2">
+ <input id="structured-prompt" type="text" class="flex-1 px-4 py-2 border border-gray-300 rounded-lg" placeholder="Enter your prompt...">
+ <button id="send-structured" class="bg-blue-600 hover:bg-blue-700 text-white px-4 py-2 rounded-lg">
+ Process
+ </button>
+ </div>
+ <div class="grid grid-cols-2 gap-2">
+ <div>
+ <label class="block text-sm font-medium text-gray-700 mb-1">JSON Schema</label>
+ <textarea id="json-schema" rows="4" class="w-full px-3 py-2 border border-gray-300 rounded-lg font-mono text-sm">{
+ "type": "object",
+ "properties": {
+ "name": { "type": "string" },
+ "age": { "type": "number" }
+ },
+ "required": ["name", "age"]
+ }</textarea>
+ </div>
+ <div>
+ <label class="block text-sm font-medium text-gray-700 mb-1">Response Format</label>
+ <select id="response-format" class="w-full px-3 py-2 border border-gray-300 rounded-lg text-sm">
+ <option value="json">JSON</option>
+ <option value="yaml">YAML</option>
+ <option value="xml">XML</option>
+ </select>
+ </div>
+ </div>
+ </div>
+ </div>
+ </div>
+ </div>
+
+ <script>
+ // Import Ollama browser module
+ const ollama = {
+ chat: async (options) => {
+ // Mock implementation for demo
+ if (options.stream) {
+ return (async function*() {
+ const words = "This is a simulated streaming response from the Ollama model. It demonstrates how text would appear word by word when streaming is enabled.".split(" ");
+ for (const word of words) {
+ await new Promise(resolve => setTimeout(resolve, 50));
+ yield { message: { content: word + " " } };
+ }
+ })();
+ } else {
+ await new Promise(resolve => setTimeout(resolve, 1000));
+ return {
+ message: {
+ content: "This is a simulated response from the Ollama model. In a real implementation, this would be the actual response from the API."
+ }
+ };
+ }
+ },
+ generate: async (options) => {
+ await new Promise(resolve => setTimeout(resolve, 1000));
+ return {
+ response: options.prompt + " (generated response)"
+ };
+ },
+ list: async () => {
+ await new Promise(resolve => setTimeout(resolve, 500));
+ return {
+ models: [
+ { name: "llama3.1", modified_at: "2023-06-15T10:30:00Z" },
+ { name: "mistral", modified_at: "2023-07-20T14:45:00Z" },
+ { name: "codellama", modified_at: "2023-08-05T09:15:00Z" }
+ ]
+ };
+ },
+ pull: async (options) => {
+ return (async function*() {
+ for (let i = 0; i <= 100; i += 5) {
+ await new Promise(resolve => setTimeout(resolve, 200));
+ yield { status: "downloading", completed: i, total: 100 };
+ }
+ })();
+ },
+ delete: async (options) => {
+ await new Promise(resolve => setTimeout(resolve, 500));
+ return { status: "success" };
+ }
+ };
+
+ // DOM Elements
+ const toggleSidebar = document.getElementById('toggle-sidebar');
+ const sidebar = document.querySelector('.sidebar');
+ const listModelsBtn = document.getElementById('list-models');
+ const pullModelBtn = document.getElementById('pull-model');
+ const deleteModelBtn = document.getElementById('delete-model');
+ const modelList = document.getElementById('model-list');
+ const pullModelForm = document.getElementById('pull-model-form');
+ const deleteModelForm = document.getElementById('delete-model-form');
+ const modelsContainer = document.getElementById('models-container');
+ const modelToPull = document.getElementById('model-to-pull');
+ const confirmPull = document.getElementById('confirm-pull');
+ const pullProgress = document.getElementById('pull-progress');
+ const pullPercentage = document.getElementById('pull-percentage');
+ const pullProgressBar = document.getElementById('pull-progress-bar');
+ const modelToDelete = document.getElementById('model-to-delete');
+ const confirmDelete = document.getElementById('confirm-delete');
+ const currentModel = document.getElementById('current-model');
+ const modelLoading = document.getElementById('model-loading');
+ const responseArea = document.getElementById('response-area');
+ const messageInput = document.getElementById('message-input');
+ const sendMessage = document.getElementById('send-message');
+ const messageRole = document.getElementById('message-role');
+ const addImage = document.getElementById('add-image');
+ const newChat = document.getElementById('new-chat');
+ const exportChat = document.getElementById('export-chat');
+ const chatTab = document.getElementById('chat-tab');
+ const generateTab = document.getElementById('generate-tab');
+ const structuredTab = document.getElementById('structured-tab');
+ const chatInput = document.getElementById('chat-input');
+ const generateInput = document.getElementById('generate-input');
+ const structuredInput = document.getElementById('structured-input');
+ const generatePrompt = document.getElementById('generate-prompt');
+ const sendGenerate = document.getElementById('send-generate');
+ const structuredPrompt = document.getElementById('structured-prompt');
+ const sendStructured = document.getElementById('send-structured');
+ const jsonSchema = document.getElementById('json-schema');
+ const responseFormat = document.getElementById('response-format');
+ const systemPrompt = document.getElementById('system-prompt');
+ const generateTemplate = document.getElementById('generate-template');
+ const temperature = document.getElementById('temperature');
+ const temperatureValue = document.getElementById('temperature-value');
+ const streamResponses = document.getElementById('stream-responses');
+ const ollamaHost = document.getElementById('ollama-host');
+ const connectionStatus = document.getElementById('connection-status');
+
+ // State
+ let selectedModel = null;
+ let chatHistory = [];
+ let isSidebarCollapsed = false;
+
+ // Event Listeners
+ toggleSidebar.addEventListener('click', toggleSidebarCollapse);
+ listModelsBtn.addEventListener('click', toggleModelList);
+ pullModelBtn.addEventListener('click', togglePullModelForm);
+ deleteModelBtn.addEventListener('click', toggleDeleteModelForm);
+ confirmPull.addEventListener('click', handlePullModel);
+ confirmDelete.addEventListener('click', handleDeleteModel);
+ sendMessage.addEventListener('click', handleSendMessage);
+ messageInput.addEventListener('keydown', (e) => {
+ if (e.key === 'Enter' && !e.shiftKey) {
+ e.preventDefault();
+ handleSendMessage();
+ }
+ });
+ addImage.addEventListener('click', handleAddImage);
+ newChat.addEventListener('click', handleNewChat);
+ exportChat.addEventListener('click', handleExportChat);
+ chatTab.addEventListener('click', () => switchTab('chat'));
+ generateTab.addEventListener('click', () => switchTab('generate'));
+ structuredTab.addEventListener('click', () => switchTab('structured'));
+ sendGenerate.addEventListener('click', handleGenerate);
+ sendStructured.addEventListener('click', handleStructured);
+ temperature.addEventListener('input', updateTemperature);
+ streamResponses.addEventListener('change', updateStreamSetting);
+
+ // Initialize
+ loadModels();
+ checkConnection();
+ updateTemperature();
+
+ // Functions
+ function toggleSidebarCollapse() {
+ isSidebarCollapsed = !isSidebarCollapsed;
+ sidebar.classList.toggle('collapsed');
+ toggleSidebar.innerHTML = isSidebarCollapsed ?
+ '<i class="fas fa-chevron-right"></i>' :
+ '<i class="fas fa-chevron-left"></i>';
+ }
+
+ function toggleModelList() {
+ const isVisible = !modelList.classList.contains('hidden');
+
+ // Hide all forms first
+ modelList.classList.add('hidden');
+ pullModelForm.classList.add('hidden');
+ deleteModelForm.classList.add('hidden');
+
+ if (!isVisible) {
+ modelList.classList.remove('hidden');
+ }
+ }
+
+ function togglePullModelForm() {
+ const isVisible = !pullModelForm.classList.contains('hidden');
+
+ // Hide all forms first
+ modelList.classList.add('hidden');
+ pullModelForm.classList.add('hidden');
+ deleteModelForm.classList.add('hidden');
+
+ if (!isVisible) {
+ pullModelForm.classList.remove('hidden');
+ }
+ }
+
+ function toggleDeleteModelForm() {
+ const isVisible = !deleteModelForm.classList.contains('hidden');
+
+ // Hide all forms first
+ modelList.classList.add('hidden');
+ pullModelForm.classList.add('hidden');
+ deleteModelForm.classList.add('hidden');
+
+ if (!isVisible) {
+ deleteModelForm.classList.remove('hidden');
+ populateDeleteModelDropdown();
+ }
+ }
+
+ async function loadModels() {
+ try {
+ const response = await ollama.list();
+ displayModels(response.models);
+ } catch (error) {
+ console.error("Error loading models:", error);
+ showError("Failed to load models. Check your Ollama connection.");
+ }
+ }
+
+ function displayModels(models) {
+ modelsContainer.innerHTML = '';
+
+ if (models.length === 0) {
+ modelsContainer.innerHTML = '<p class="text-gray-400 text-sm">No models installed</p>';
+ return;
+ }
+
+ models.forEach(model => {
+ const modelCard = document.createElement('div');
+ modelCard.className = 'model-card bg-gray-700 p-3 rounded-lg cursor-pointer transition-all duration-200';
+ modelCard.innerHTML = `
+ <div class="flex justify-between items-center">
+ <h4 class="font-medium">${model.name}</h4>
+ <span class="text-xs text-gray-400">${new Date(model.modified_at).toLocaleDateString()}</span>
+ </div>
+ `;
+ modelCard.addEventListener('click', () => selectModel(model.name));
+ modelsContainer.appendChild(modelCard);
+ });
+ }
+
+ function populateDeleteModelDropdown() {
+ modelToDelete.innerHTML = '<option value="">Select a model</option>';
+
+ // In a real implementation, we would fetch the models from Ollama
+ // For demo, we'll use some sample models
+ const sampleModels = ['llama3.1', 'mistral', 'codellama'];
+
+ sampleModels.forEach(model => {
+ const option = document.createElement('option');
+ option.value = model;
+ option.textContent = model;
+ modelToDelete.appendChild(option);
+ });
+ }
+
+ async function selectModel(modelName) {
+ selectedModel = modelName;
+ currentModel.textContent = modelName;
+ currentModel.className = 'bg-green-100 text-green-800 px-3 py-1 rounded-full text-sm font-medium';
+
+ // Show loading indicator
+ modelLoading.classList.remove('hidden');
+
+ // In a real implementation, we might verify the model is loaded
+ await new Promise(resolve => setTimeout(resolve, 500));
+
+ // Hide loading indicator
+ modelLoading.classList.add('hidden');
+
+ // Clear chat history for new model
+ chatHistory = [];
+ responseArea.innerHTML = `
+ <div class="text-center text-gray-500 py-10">
+ <i class="fas fa-robot text-4xl mb-2"></i>
+ <p class="text-lg">Model ${modelName} is ready</p>
+ <p class="text-sm mt-2">Start chatting with ${modelName}</p>
+ </div>
+ `;
+ }
+
+ async function handlePullModel() {
+ const modelName = modelToPull.value.trim();
+ if (!modelName) {
+ alert("Please enter a model name");
+ return;
+ }
+
+ pullProgress.classList.remove('hidden');
+
+ try {
+ // ollama.pull is async and resolves to an async generator, so it must be awaited
+ // before iterating with for await
+ const pullStream = await ollama.pull({ model: modelName });
+
+ for await (const progress of pullStream) {
+ const percent = Math.floor((progress.completed / progress.total) * 100);
+ pullPercentage.textContent = `${percent}%`;
+ pullProgressBar.style.width = `${percent}%`;
+
+ if (percent === 100) {
+ pullProgress.innerHTML = `
+ <div class="text-green-500 text-sm">
+ <i class="fas fa-check-circle mr-1"></i>
+ Model ${modelName} downloaded successfully
+ </div>
+ `;
+ loadModels(); // Refresh model list
+ break;
+ }
+ }
+ } catch (error) {
+ console.error("Error pulling model:", error);
+ pullProgress.innerHTML = `
+ <div class="text-red-500 text-sm">
+ <i class="fas fa-exclamation-circle mr-1"></i>
+ Failed to download model: ${error.message}
+ </div>
+ `;
+ }
+ }
557
+
558
+ async function handleDeleteModel() {
559
+ const modelName = modelToDelete.value;
560
+ if (!modelName) {
561
+ alert("Please select a model to delete");
562
+ return;
563
+ }
564
+
565
+ if (!confirm(`Are you sure you want to delete ${modelName}? This cannot be undone.`)) {
566
+ return;
567
+ }
568
+
569
+ try {
570
+ await ollama.delete({ model: modelName });
571
+ alert(`Model ${modelName} deleted successfully`);
572
+ loadModels(); // Refresh model list
573
+ if (selectedModel === modelName) {
574
+ selectedModel = null;
575
+ currentModel.textContent = "No model selected";
576
+ currentModel.className = 'bg-blue-100 text-blue-800 px-3 py-1 rounded-full text-sm font-medium';
577
+ }
578
+ deleteModelForm.classList.add('hidden');
579
+ } catch (error) {
580
+ console.error("Error deleting model:", error);
581
+ alert(`Failed to delete model: ${error.message}`);
582
+ }
583
+ }
584
+
585
+ async function handleSendMessage() {
586
+ const messageText = messageInput.value.trim();
587
+ if (!messageText) return;
588
+
589
+ if (!selectedModel) {
590
+ alert("Please select a model first");
591
+ return;
592
+ }
593
+
594
+ const role = messageRole.value;
595
+ const content = messageText;
596
+
597
+ // Add user message to chat history
598
+ chatHistory.push({ role, content });
599
+
600
+ // Display user message
601
+ displayMessage({ role, content });
602
+
603
+ // Clear input
604
+ messageInput.value = '';
605
+
606
+ // Show typing indicator
607
+ const typingId = showTypingIndicator();
608
+
609
+ try {
610
+ const options = {
611
+ model: selectedModel,
612
+ messages: chatHistory,
613
+ stream: streamResponses.checked,
614
+ options: {
615
+ temperature: parseFloat(temperature.value)
616
+ }
617
+ };
618
+
619
+ if (streamResponses.checked) {
620
+ // Handle streaming response
621
+ const stream = await ollama.chat(options);
622
+ let fullResponse = '';
623
+
624
+ // Remove typing indicator
625
+ removeTypingIndicator(typingId);
626
+
627
+ // Create assistant message container
628
+ const messageId = `msg-${Date.now()}`;
629
+ const messageDiv = document.createElement('div');
630
+ messageDiv.id = messageId;
631
+ messageDiv.className = 'flex space-x-3';
632
+ messageDiv.innerHTML = `
633
+ <div class="flex-shrink-0">
634
+ <div class="bg-purple-500 text-white w-8 h-8 rounded-full flex items-center justify-center">
635
+ <i class="fas fa-robot"></i>
636
+ </div>
637
+ </div>
638
+ <div class="flex-1 min-w-0">
639
+ <div class="bg-purple-100 text-gray-800 p-3 rounded-lg">
640
+ <div class="whitespace-pre-wrap"></div>
641
+ </div>
642
+ </div>
643
+ `;
644
+ responseArea.appendChild(messageDiv);
645
+
646
+ // Scroll to bottom
647
+ responseArea.scrollTop = responseArea.scrollHeight;
648
+
649
+ // Process stream
650
+ for await (const chunk of stream) {
651
+ fullResponse += chunk.message.content;
652
+ const contentDiv = messageDiv.querySelector('.whitespace-pre-wrap');
653
+ contentDiv.textContent = fullResponse;
654
+
655
+ // Scroll to keep visible
656
+ responseArea.scrollTop = responseArea.scrollHeight;
657
+ }
658
+
659
+ // Add assistant response to chat history
660
+ chatHistory.push({ role: 'assistant', content: fullResponse });
661
+ } else {
662
+ // Handle non-streaming response
663
+ const response = await ollama.chat(options);
664
+
665
+ // Remove typing indicator
666
+ removeTypingIndicator(typingId);
667
+
668
+ // Display assistant response
669
+ displayMessage({ role: 'assistant', content: response.message.content });
670
+
671
+ // Add assistant response to chat history
672
+ chatHistory.push({ role: 'assistant', content: response.message.content });
673
+ }
674
+ } catch (error) {
675
+ console.error("Error in chat:", error);
676
+ removeTypingIndicator(typingId);
677
+ showError(error.message);
678
+ }
679
+ }
680
+
681
+ function displayMessage(message) {
682
+ const messageDiv = document.createElement('div');
683
+ messageDiv.className = 'flex space-x-3';
684
+
685
+ if (message.role === 'user') {
686
+ messageDiv.innerHTML = `
687
+ <div class="flex-shrink-0">
688
+ <div class="bg-blue-500 text-white w-8 h-8 rounded-full flex items-center justify-center">
689
+ <i class="fas fa-user"></i>
690
+ </div>
691
+ </div>
692
+ <div class="flex-1 min-w-0">
693
+ <div class="bg-blue-100 text-gray-800 p-3 rounded-lg">
694
+ <div class="whitespace-pre-wrap">${message.content}</div>
695
+ </div>
696
+ </div>
697
+ `;
698
+ } else if (message.role === 'system') {
699
+ messageDiv.innerHTML = `
700
+ <div class="flex-shrink-0">
701
+ <div class="bg-yellow-500 text-white w-8 h-8 rounded-full flex items-center justify-center">
702
+ <i class="fas fa-cog"></i>
703
+ </div>
704
+ </div>
705
+ <div class="flex-1 min-w-0">
706
+ <div class="bg-yellow-100 text-gray-800 p-3 rounded-lg">
707
+ <div class="whitespace-pre-wrap">${message.content}</div>
708
+ </div>
709
+ </div>
710
+ `;
711
+ } else { // assistant
712
+ messageDiv.innerHTML = `
713
+ <div class="flex-shrink-0">
714
+ <div class="bg-purple-500 text-white w-8 h-8 rounded-full flex items-center justify-center">
715
+ <i class="fas fa-robot"></i>
716
+ </div>
717
+ </div>
718
+ <div class="flex-1 min-w-0">
719
+ <div class="bg-purple-100 text-gray-800 p-3 rounded-lg">
720
+ <div class="whitespace-pre-wrap">${message.content}</div>
721
+ </div>
722
+ </div>
723
+ `;
724
+ }
725
+
726
+ responseArea.appendChild(messageDiv);
727
+ responseArea.scrollTop = responseArea.scrollHeight;
728
+ }
729
+
730
+ function showTypingIndicator() {
731
+ const typingId = `typing-${Date.now()}`;
732
+ const typingDiv = document.createElement('div');
733
+ typingDiv.id = typingId;
734
+ typingDiv.className = 'flex space-x-3';
735
+ typingDiv.innerHTML = `
736
+ <div class="flex-shrink-0">
737
+ <div class="bg-purple-500 text-white w-8 h-8 rounded-full flex items-center justify-center">
738
+ <i class="fas fa-robot"></i>
739
+ </div>
740
+ </div>
741
+ <div class="flex-1 min-w-0">
742
+ <div class="bg-purple-100 text-gray-800 p-3 rounded-lg">
743
+ <div class="typing-indicator">Thinking</div>
744
+ </div>
745
+ </div>
746
+ `;
747
+ responseArea.appendChild(typingDiv);
748
+ responseArea.scrollTop = responseArea.scrollHeight;
749
+ return typingId;
750
+ }
751
+
752
+ function removeTypingIndicator(id) {
753
+ const element = document.getElementById(id);
754
+ if (element) {
755
+ element.remove();
756
+ }
757
+ }
+
+ function showError(message) {
+ const errorDiv = document.createElement('div');
+ errorDiv.className = 'bg-red-100 border-l-4 border-red-500 text-red-700 p-4 mb-4';
+ errorDiv.innerHTML = `
+ <div class="flex items-center">
+ <div class="flex-shrink-0">
+ <i class="fas fa-exclamation-circle text-red-500"></i>
+ </div>
+ <div class="ml-3">
+ <p class="text-sm">${message}</p>
+ </div>
+ </div>
+ `;
+ responseArea.appendChild(errorDiv);
+ responseArea.scrollTop = responseArea.scrollHeight;
+ }
+
+ function handleAddImage() {
+ alert("Image upload functionality would be implemented here");
+ // In a real implementation, this would open a file dialog
+ // and handle image uploads to be included in the message
+ }
+
+ function handleNewChat() {
+ if (!selectedModel) {
+ alert("Please select a model first");
+ return;
+ }
+
+ if (chatHistory.length === 0) {
+ return;
+ }
+
+ if (confirm("Start a new chat? The current chat history will be cleared.")) {
+ chatHistory = [];
+ responseArea.innerHTML = `
+ <div class="text-center text-gray-500 py-10">
+ <i class="fas fa-robot text-4xl mb-2"></i>
+ <p class="text-lg">New chat started with ${selectedModel}</p>
+ </div>
+ `;
+ }
+ }
+
+ function handleExportChat() {
+ if (chatHistory.length === 0) {
+ alert("No chat history to export");
+ return;
+ }
+
+ const chatText = chatHistory.map(msg => {
+ return `${msg.role.toUpperCase()}: ${msg.content}`;
+ }).join('\n\n');
+
+ const blob = new Blob([chatText], { type: 'text/plain' });
+ const url = URL.createObjectURL(blob);
+ const a = document.createElement('a');
+ a.href = url;
+ a.download = `ollama-chat-${selectedModel || 'unknown'}-${new Date().toISOString().slice(0, 10)}.txt`;
+ a.click();
+ URL.revokeObjectURL(url);
+ }
+
+ function switchTab(tab) {
+ chatInput.classList.add('hidden');
+ generateInput.classList.add('hidden');
+ structuredInput.classList.add('hidden');
+
+ chatTab.classList.remove('tab-active');
+ generateTab.classList.remove('tab-active');
+ structuredTab.classList.remove('tab-active');
+
+ if (tab === 'chat') {
+ chatInput.classList.remove('hidden');
+ chatTab.classList.add('tab-active');
+ } else if (tab === 'generate') {
+ generateInput.classList.remove('hidden');
+ generateTab.classList.add('tab-active');
+ } else if (tab === 'structured') {
+ structuredInput.classList.remove('hidden');
+ structuredTab.classList.add('tab-active');
+ }
+ }
+
+ async function handleGenerate() {
+ const prompt = generatePrompt.value.trim();
+ if (!prompt) return;
+
+ if (!selectedModel) {
+ alert("Please select a model first");
+ return;
+ }
+
+ // Show typing indicator
+ const typingId = showTypingIndicator();
+
+ try {
+ const options = {
+ model: selectedModel,
+ prompt: prompt,
+ system: systemPrompt.value.trim() || undefined,
+ template: generateTemplate.value.trim() || undefined,
+ options: {
+ temperature: parseFloat(temperature.value)
+ }
+ };
+
+ const response = await ollama.generate(options);
+
+ // Remove typing indicator
+ removeTypingIndicator(typingId);
+
+ // Display generated response
+ displayMessage({
+ role: 'assistant',
+ content: `Generated response for prompt "${prompt}":\n\n${response.response}`
+ });
+ } catch (error) {
+ console.error("Error in generation:", error);
+ removeTypingIndicator(typingId);
+ showError(error.message);
+ }
+ }
+
+ async function handleStructured() {
+ const prompt = structuredPrompt.value.trim();
+ if (!prompt) return;
+
+ if (!selectedModel) {
+ alert("Please select a model first");
+ return;
+ }
+
+ // Show typing indicator
+ const typingId = showTypingIndicator();
+
+ try {
+ const schema = jsonSchema.value.trim();
+ const format = responseFormat.value;
+
+ // The textarea holds the schema as JSON text, but ollama.chat expects
+ // either the string 'json' or a parsed JSON Schema object, so parse it
+ // here; a malformed schema throws and is surfaced by the catch below
+ const options = {
+ model: selectedModel,
+ messages: [{ role: 'user', content: prompt }],
+ format: schema ? JSON.parse(schema) : 'json',
+ options: {
+ temperature: parseFloat(temperature.value)
+ }
+ };
+
+ const response = await ollama.chat(options);
+
+ // Remove typing indicator
+ removeTypingIndicator(typingId);
+
+ // Display structured response
+ displayMessage({
+ role: 'assistant',
+ content: `Structured response (${format}):\n\n${JSON.stringify(JSON.parse(response.message.content), null, 2)}`
+ });
+ } catch (error) {
+ console.error("Error in structured output:", error);
+ removeTypingIndicator(typingId);
+ showError(error.message);
+ }
+ }
+
+ function updateTemperature() {
+ temperatureValue.textContent = temperature.value;
+ }
+
+ function updateStreamSetting() {
+ // No action needed, just update the state
+ }
934
+
935
+ async function checkConnection() {
936
+ try {
937
+ // In a real implementation, we would ping the Ollama server
938
+ await new Promise(resolve => setTimeout(resolve, 300));
939
+ connectionStatus.textContent = "Connected";
940
+ connectionStatus.previousElementSibling.className = "h-3 w-3 rounded-full bg-green-500";
941
+ } catch (error) {
942
+ connectionStatus.textContent = "Disconnected";
943
+ connectionStatus.previousElementSibling.className = "h-3 w-3 rounded-full bg-red-500";
944
+ }
945
+ }
946
+ </script>
947
+ <p style="border-radius: 8px; text-align: center; font-size: 12px; color: #fff; margin-top: 16px;position: fixed; left: 8px; bottom: 8px; z-index: 10; background: rgba(0, 0, 0, 0.8); padding: 4px 8px;">Made with <img src="https://enzostvs-deepsite.hf.space/logo.svg" alt="DeepSite Logo" style="width: 16px; height: 16px; vertical-align: middle;display:inline-block;margin-right:3px;filter:brightness(0) invert(1);"><a href="https://enzostvs-deepsite.hf.space" style="color: #fff;text-decoration: underline;" target="_blank" >DeepSite</a> - 🧬 <a href="https://enzostvs-deepsite.hf.space?remix=Meroar/batch-ollamanator" style="color: #fff;text-decoration: underline;" target="_blank" >Remix</a></p></body>
948
+ </html>
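Note that displayMessage and showError in the script above interpolate message content and error text directly into innerHTML, so any markup in model output would be parsed as HTML rather than shown as text. A minimal escaping helper could be applied before interpolation; this is a sketch only, and the name escapeHtml is not part of the committed code:

```javascript
// Sketch: escapeHtml is a hypothetical helper, not part of index.html.
// It neutralizes HTML metacharacters so model output renders as plain text,
// e.g. `<div class="whitespace-pre-wrap">${escapeHtml(message.content)}</div>`.
function escapeHtml(text) {
  return String(text)
    .replace(/&/g, '&amp;')  // must run first so later entities are not double-escaped
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```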
prompts.txt ADDED
@@ -0,0 +1 @@
+ Ollama JavaScript Library

The Ollama JavaScript library provides the easiest way to integrate your JavaScript project with Ollama.

Getting Started

npm i ollama

Usage

import ollama from 'ollama'

const response = await ollama.chat({
  model: 'llama3.1',
  messages: [{ role: 'user', content: 'Why is the sky blue?' }],
})
console.log(response.message.content)

Browser Usage

To use the library without node, import the browser module.

import ollama from 'ollama/browser'

Streaming responses

Response streaming can be enabled by setting stream: true, modifying function calls to return an AsyncGenerator where each part is an object in the stream.

import ollama from 'ollama'

const message = { role: 'user', content: 'Why is the sky blue?' }
const response = await ollama.chat({ model: 'llama3.1', messages: [message], stream: true })
for await (const part of response) {
  process.stdout.write(part.message.content)
}

API

The Ollama JavaScript library's API is designed around the Ollama REST API.

chat

ollama.chat(request)

  request <Object>: The request object containing chat parameters.
    model <string>: The name of the model to use for the chat.
    messages <Message[]>: Array of message objects representing the chat history.
      role <string>: The role of the message sender ('user', 'system', or 'assistant').
      content <string>: The content of the message.
      images <Uint8Array[] | string[]>: (Optional) Images to be included in the message, either as Uint8Array or base64 encoded strings.
    format <string>: (Optional) Set the expected format of the response (json).
    stream <boolean>: (Optional) When true an AsyncGenerator is returned.
    keep_alive <string | number>: (Optional) How long to keep the model loaded. A number (seconds) or a string with a duration unit suffix ("300ms", "1.5h", "2h45m", etc.)
    tools <Tool[]>: (Optional) A list of tool calls the model may make.
    options <Options>: (Optional) Options to configure the runtime.
  Returns: <ChatResponse>

generate

ollama.generate(request)

  request <Object>: The request object containing generate parameters.
    model <string>: The name of the model to use for the chat.
    prompt <string>: The prompt to send to the model.
    suffix <string>: (Optional) Suffix is the text that comes after the inserted text.
    system <string>: (Optional) Override the model system prompt.
    template <string>: (Optional) Override the model template.
    raw <boolean>: (Optional) Bypass the prompt template and pass the prompt directly to the model.
    images <Uint8Array[] | string[]>: (Optional) Images to be included, either as Uint8Array or base64 encoded strings.
    format <string>: (Optional) Set the expected format of the response (json).
    stream <boolean>: (Optional) When true an AsyncGenerator is returned.
    keep_alive <string | number>: (Optional) How long to keep the model loaded. A number (seconds) or a string with a duration unit suffix ("300ms", "1.5h", "2h45m", etc.)
    options <Options>: (Optional) Options to configure the runtime.
  Returns: <GenerateResponse>

pull

ollama.pull(request)

  request <Object>: The request object containing pull parameters.
    model <string>: The name of the model to pull.
    insecure <boolean>: (Optional) Pull from servers whose identity cannot be verified.
    stream <boolean>: (Optional) When true an AsyncGenerator is returned.
  Returns: <ProgressResponse>

push

ollama.push(request)

  request <Object>: The request object containing push parameters.
    model <string>: The name of the model to push.
    insecure <boolean>: (Optional) Push to servers whose identity cannot be verified.
    stream <boolean>: (Optional) When true an AsyncGenerator is returned.
  Returns: <ProgressResponse>

create

ollama.create(request)

  request <Object>: The request object containing create parameters.
    model <string>: The name of the model to create.
    from <string>: The base model to derive from.
    stream <boolean>: (Optional) When true an AsyncGenerator is returned.
    quantize <string>: Quantization precision level (q8_0, q4_K_M, etc.).
    template <string>: (Optional) The prompt template to use with the model.
    license <string|string[]>: (Optional) The license(s) associated with the model.
    system <string>: (Optional) The system prompt for the model.
    parameters <Record<string, unknown>>: (Optional) Additional model parameters as key-value pairs.
    messages <Message[]>: (Optional) Initial chat messages for the model.
    adapters <Record<string, string>>: (Optional) A key-value map of LoRA adapter configurations.
  Returns: <ProgressResponse>

  Note: The files parameter is not currently supported in ollama-js.

delete

ollama.delete(request)

  request <Object>: The request object containing delete parameters.
    model <string>: The name of the model to delete.
  Returns: <StatusResponse>

copy

ollama.copy(request)

  request <Object>: The request object containing copy parameters.
    source <string>: The name of the model to copy from.
    destination <string>: The name of the model to copy to.
  Returns: <StatusResponse>

list

ollama.list()

  Returns: <ListResponse>

show

ollama.show(request)

  request <Object>: The request object containing show parameters.
    model <string>: The name of the model to show.
    system <string>: (Optional) Override the model system prompt returned.
    template <string>: (Optional) Override the model template returned.
    options <Options>: (Optional) Options to configure the runtime.
  Returns: <ShowResponse>

embed

ollama.embed(request)

  request <Object>: The request object containing embedding parameters.
    model <string>: The name of the model used to generate the embeddings.
    input <string> | <string[]>: The input used to generate the embeddings.
    truncate <boolean>: (Optional) Truncate the input to fit the maximum context length supported by the model.
    keep_alive <string | number>: (Optional) How long to keep the model loaded. A number (seconds) or a string with a duration unit suffix ("300ms", "1.5h", "2h45m", etc.)
    options <Options>: (Optional) Options to configure the runtime.
  Returns: <EmbedResponse>

ps

ollama.ps()

  Returns: <ListResponse>

abort

ollama.abort()

This method will abort all streamed generations currently running with the client instance. If there is a need to manage streams with timeouts, it is recommended to have one Ollama client per stream. All asynchronous threads listening to streams (typically the for await (const part of response)) will throw an AbortError exception. See examples/abort/abort-all-requests.ts for an example.

Custom client

A custom client can be created with the following fields:

  host <string>: (Optional) The Ollama host address. Default: "http://127.0.0.1:11434".
  fetch <Object>: (Optional) The fetch library used to make requests to the Ollama host.

import { Ollama } from 'ollama'

const ollama = new Ollama({ host: 'http://127.0.0.1:11434' })
const response = await ollama.chat({
  model: 'llama3.1',
  messages: [{ role: 'user', content: 'Why is the sky blue?' }],
})

Building

To build the project files run:

npm run build

Structured outputs example

import ollama from 'ollama';
import { z } from 'zod';
import { zodToJsonSchema } from 'zod-to-json-schema';

/*
  Ollama structured outputs capabilities
  It parses the response from the model into a structured JSON object using Zod
*/

// Define the schema for friend info
const FriendInfoSchema = z.object({
  name: z.string().describe('The name of the friend'),
  age: z.number().int().describe('The age of the friend'),
  is_available: z.boolean().describe('Whether the friend is available')
});

// Define the schema for friend list
const FriendListSchema = z.object({
  friends: z.array(FriendInfoSchema).describe('An array of friends')
});

async function run(model: string) {
  // Convert the Zod schema to JSON Schema format
  const jsonSchema = zodToJsonSchema(FriendListSchema);

  /* Can use manually defined schema directly
  const schema = {
    'type': 'object',
    'properties': {
      'friends': {
        'type': 'array',
        'items': {
          'type': 'object',
          'properties': {
            'name': { 'type': 'string' },
            'age': { 'type': 'integer' },
            'is_available': { 'type': 'boolean' }
          },
          'required': ['name', 'age', 'is_available']
        }
      }
    },
    'required': ['friends']
  }
  */

  const messages = [{
    role: 'user',
    content: 'I have two friends. The first is Ollama 22 years old busy saving the world, and the second is Alonso 23 years old and wants to hang out. Return a list of friends in JSON format'
  }];

  const response = await ollama.chat({
    model: model,
    messages: messages,
    format: jsonSchema, // or format: schema
    options: {
      temperature: 0 // Make responses more deterministic
    }
  });

  // Parse and validate the response
  try {
    const friendsResponse = FriendListSchema.parse(JSON.parse(response.message.content));
    console.log(friendsResponse);
  } catch (error) {
    console.error("Generated invalid response:", error);
  }
}

run('llama3.1:8b').catch(console.error);

Make a sophisticated, feature-rich LLM interaction workstation for Ollama
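The pull API described in the prompt above streams ProgressResponse parts but has no inline example. The sketch below shows how the streamed byte counts can be turned into a progress readout; the helper names formatProgress and pullWithProgress are illustrative, and the client is passed in as a parameter (e.g. the default export from 'ollama') rather than imported here.

```javascript
// Sketch: stream ollama.pull progress. `client` is an Ollama client
// instance; the helper names are illustrative, not part of the library.
function formatProgress(completed, total) {
  // ProgressResponse parts may omit byte counts on status-only updates
  return total ? `${Math.round((completed / total) * 100)}%` : '...';
}

async function pullWithProgress(client, model) {
  const stream = await client.pull({ model, stream: true });
  for await (const part of stream) {
    // Each part is a ProgressResponse: a status string plus optional
    // completed/total byte counts for the current layer
    process.stdout.write(`\r${part.status} ${formatProgress(part.completed, part.total)}`);
  }
  process.stdout.write('\n');
}
```

Usage against a running local server would look like `await pullWithProgress(ollama, 'llama3.1')`.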