{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "e426cd04-c053-43e8-b505-63cee7956a53",
   "metadata": {},
   "source": [
    "# Welcome to a very busy Week 8 folder\n",
    "\n",
    "## We have lots to do this week!\n",
    "\n",
    "We'll move at a faster pace than usual, particularly as you're becoming proficient LLM engineers.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b3cf5389-93c5-4523-bc48-78fabb91d8f6",
   "metadata": {},
   "source": [
    "<table style=\"margin: 0; text-align: left;\">\n",
    "    <tr>\n",
    "        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
    "            <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
    "        </td>\n",
    "        <td>\n",
    "            <h2 style=\"color:#900;\">Especially important this week: pull the latest</h2>\n",
    "            <span style=\"color:#900;\">I'm continually improving these labs, adding more examples and exercises.\n",
    "            At the start of each week, it's worth checking you have the latest code.<br/>\n",
    "            First do a <a href=\"https://chatgpt.com/share/6734e705-3270-8012-a074-421661af6ba9\">git pull and merge your changes as needed</a>. Any problems? Try asking ChatGPT to clarify how to merge - or contact me!<br/><br/>\n",
    "            After you've pulled the code, from the llm_engineering directory, in an Anaconda prompt (PC) or Terminal (Mac), run:<br/>\n",
    "            <code>conda env update --f environment.yml --prune</code><br/>\n",
    "            Or if you used virtualenv rather than Anaconda, then run this from your activated environment in a Powershell (PC) or Terminal (Mac):<br/>\n",
    "            <code>pip install -r requirements.txt</code>\n",
    "            <br/>Then restart the kernel (Kernel menu >> Restart Kernel and Clear Outputs Of All Cells) to pick up the changes.\n",
    "            </span>\n",
    "        </td>\n",
    "    </tr>\n",
    "</table>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bc0e1c1c-be6a-4395-bbbd-eeafc9330d7e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Just one import to start with!!\n",
    "\n",
    "import modal"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ab5c8533-9f66-448f-b9b2-133d1ff50639",
   "metadata": {},
   "source": [
    "# Setting up the modal tokens\n",
    "\n",
    "The first time you run this, please uncomment the next line and execute it.  \n",
    "This is the same as running `modal setup` from the command line. It connects with Modal and installs your tokens.  \n",
    "\n",
    "## Debugging some common problems on Windows\n",
    "\n",
    "If this command fails in the next cell, or if any of the modal commands with the `!` fail, please try running them directly on the command line in an activated environment (without the `!`)\n",
    "\n",
    "A student on Windows mentioned that on Windows, you might also need to run this command from a command prompt in an activated environment afterwards:  \n",
    "`modal token new`  \n",
    "(Thank you Ed B. for that!)\n",
    "\n",
    "Another Windows student Minh N. mentioned you may need to use this approach, from an activated environment in the command line:  \n",
    "`modal token set --token-id <your_token_id> --token-secret <your_token_secret>`\n",
    "\n",
    "Also, a student David S. mentioned the following:  \n",
    "> In case anyone else using Windows hits this problem: Along with having to run `modal token new` from a command prompt, you have to move the generated token file. It will deploy the token file (.modal.toml) to your Windows profile folder. The virtual environment couldn't see that location (strangely, it couldn't even after I set environment variables for it and rebooted). I moved that token file to the folder I'm operating out of for the lab and it stopped throwing auth errors.\n",
    "\n",
    "And another Windows student (Robert M. - thank you!!) added another possible step:\n",
    "\n",
    "\n",
    "> I could not get modal to see my tokens (resulting in an 'auth error'), even after copying the \".modal.toml\" file to the \"week8\" folder and restarting JupyterLab. The fix was to manually set the environment variables (the standard way). This config method is explained by modal on their [web site](https://modal.com/docs/reference/modal.config)  \n",
    "```\n",
    "import os\n",
    "os.environ[\"MODAL_TOKEN_ID\"] = \"xxx\"\n",
    "os.environ[\"MODAL_TOKEN_SECRET\"] = \"yyy\" \n",
    "```\n",
    "\n",
    "Finally: I've also heard that in some situations, you might need to restart the Kernel of this jupyter notebook after running this. (Kernel menu >> Restart Kernel and Clear Outputs of All Cells)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0d240622-8422-4c99-8464-c04d063e4cb6",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Remove the '# ' from the next line and run the cell, or run this command without the excalamation mark from an activated command prompt\n",
    "# !modal setup"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3b133701-f550-44a1-a67f-eb7ccc4769a9",
   "metadata": {},
   "outputs": [],
   "source": [
    "from hello import app, hello, hello_europe"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0f3f73ae-1295-49f3-9099-b8b41fc3429b",
   "metadata": {},
   "outputs": [],
   "source": [
    "with app.run():\n",
    "    reply=hello.local()\n",
    "reply"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c1d8c6f9-edc7-4e52-9b3a-c07d7cff1ac7",
   "metadata": {},
   "outputs": [],
   "source": [
    "with app.run():\n",
    "    reply=hello.remote()\n",
    "reply"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a1c075e9-49c7-4ebd-812f-83196d32de32",
   "metadata": {},
   "source": [
    "## Added thanks to student Tue H.\n",
    "\n",
    "If you look in hello.py, I've added a simple function hello_europe\n",
    "\n",
    "That uses the decorator:  \n",
    "`@app.function(image=image, region=\"eu\")`\n",
    "\n",
    "See the result below! More region specific settings are [here](https://modal.com/docs/guide/region-selection)\n",
    "\n",
    "Note that it does consume marginally more credits to specify a region."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b027da1a-c79d-42cb-810d-32ddca31aa02",
   "metadata": {},
   "outputs": [],
   "source": [
    "with app.run():\n",
    "    reply=hello_europe.remote()\n",
    "reply"
   ]
  },
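  {
   "cell_type": "markdown",
   "id": "7d4b2f10-3c5e-4a9b-8e21-5f0a6c9d2e13",
   "metadata": {},
   "source": [
    "If you'd rather not open hello.py right away, here is a minimal sketch of the shape of a region-pinned Modal function (illustrative only - the actual image definition and return value in hello.py may differ):\n",
    "```\n",
    "import modal\n",
    "\n",
    "app = modal.App(\"hello\")\n",
    "image = modal.Image.debian_slim()\n",
    "\n",
    "@app.function(image=image, region=\"eu\")\n",
    "def hello_europe() -> str:\n",
    "    # Identical to an ordinary Modal function, except the container is pinned to an EU region\n",
    "    return \"Hello from an EU container!\"\n",
    "```"
   ]
  },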
  {
   "cell_type": "markdown",
   "id": "22e8d804-c027-45fb-8fef-06e7bba6295a",
   "metadata": {},
   "source": [
    "# Before we move on -\n",
    "\n",
    "## We need to set your HuggingFace Token as a secret in Modal\n",
    "\n",
    "1. Go to modal.com, sign in and go to your dashboard\n",
    "2. Click on Secrets in the nav bar\n",
    "3. Create new secret, click on Hugging Face, this new secret needs to be called **hf-secret** because that's how we refer to it in the code\n",
    "4. Fill in your HF_TOKEN where it prompts you\n",
    "\n",
    "### And now back to business: time to work with Llama"
   ]
  },
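  {
   "cell_type": "markdown",
   "id": "f2a9c4b7-6d1e-4f3a-b8c5-0e7d2a91c6f4",
   "metadata": {},
   "source": [
    "As a quick aside before we import the Llama code: the secret's name matters because the code attaches it by name, and Modal then exposes it to the container as environment variables. A minimal sketch of the pattern (illustrative - the real usage is in llama.py and the pricer files):\n",
    "```\n",
    "import modal\n",
    "\n",
    "app = modal.App(\"secret-check\")\n",
    "image = modal.Image.debian_slim()\n",
    "secrets = [modal.Secret.from_name(\"hf-secret\")]  # must match the secret name in your Modal dashboard\n",
    "\n",
    "@app.function(image=image, secrets=secrets)\n",
    "def check_token() -> str:\n",
    "    # Inside the container, the Hugging Face secret appears as the HF_TOKEN environment variable\n",
    "    import os\n",
    "    return \"HF_TOKEN is set\" if os.environ.get(\"HF_TOKEN\") else \"HF_TOKEN is missing\"\n",
    "```"
   ]
  },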
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cb8b6c41-8259-4329-b1c4-a1f67d26d1be",
   "metadata": {},
   "outputs": [],
   "source": [
    "# This import may give a deprecation warning about adding local Python modules to the Image\n",
    "# That warning can be safely ignored. You may get the same warning in other places, too..\n",
    "\n",
    "from llama import app, generate"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "db4a718a-d95d-4f61-9688-c9df21d88fe6",
   "metadata": {},
   "outputs": [],
   "source": [
    "with modal.enable_output():\n",
    "    with app.run():\n",
    "        result=generate.remote(\"Life is a mystery, everyone must stand alone, I hear\")\n",
    "result"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9a9a6844-29ec-4264-8e72-362d976b3968",
   "metadata": {},
   "outputs": [],
   "source": [
    "import modal\n",
    "from pricer_ephemeral import app, price"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "50e6cf99-8959-4ae3-ba02-e325cb7fff94",
   "metadata": {},
   "outputs": [],
   "source": [
    "with modal.enable_output():\n",
    "    with app.run():\n",
    "        result=price.remote(\"Quadcast HyperX condenser mic, connects via usb-c to your computer for crystal clear audio\")\n",
    "result"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "04d8747f-8452-4077-8af6-27e03888508a",
   "metadata": {},
   "source": [
    "## Transitioning From Ephemeral Apps to Deployed Apps\n",
    "\n",
    "From a command line, `modal deploy xxx` will deploy your code as a Deployed App\n",
    "\n",
    "This is how you could package your AI service behind an API to be used in a Production System.\n",
    "\n",
    "You can also build REST endpoints easily, although we won't cover that as we'll be calling direct from Python.\n",
    "\n",
    "## Important note about secrets\n",
    "\n",
    "In both the files `pricer_service.py` and `pricer_service2.py` you will find code like this near the top:  \n",
    "`secrets = [modal.Secret.from_name(\"hf-secret\")]`  \n",
    "You may need to change from `hf-secret` to `huggingface-secret` depending on how the Secret is configured in modal.  \n",
    "To check, visit this page and look in the first column:  \n",
    "https://modal.com/secrets/\n",
    "\n",
    "## Important note for Windows people:\n",
    "\n",
    "On the next line, I call `modal deploy` from within Jupyter lab; I've heard that on some versions of Windows this gives a strange unicode error because modal prints emojis to the output which can't be displayed. If that happens to you, simply use an Anaconda Prompt window or a Powershell instead, with your environment activated, and type `modal deploy pricer_service` there. Follow the same approach the next time we do `!modal deploy` too.\n",
    "\n",
    "As an alternative, a few students have mentioned you can run this code within Jupyter Lab if you want to run it from here:\n",
    "```\n",
    "# Check the default encoding\n",
    "print(locale.getpreferredencoding())  # Should print 'UTF-8'\n",
    "\n",
    "# Ensure UTF-8 encoding\n",
    "os.environ['PYTHONIOENCODING'] = 'utf-8'\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7f90d857-2f12-4521-bb90-28efd917f7d1",
   "metadata": {},
   "outputs": [],
   "source": [
    "# You can also run \"modal deploy -m pricer_service\" at the command line in an activated environment\n",
    "!modal deploy -m pricer_service"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1dec70ff-1986-4405-8624-9bbbe0ce1f4a",
   "metadata": {},
   "outputs": [],
   "source": [
    "pricer = modal.Function.from_name(\"pricer-service\", \"price\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "17776139-0d9e-4ad0-bcd0-82d3a92ca61f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# This can take a while! We'll use faster approaches shortly\n",
    "\n",
    "pricer.remote(\"Quadcast HyperX condenser mic, connects via usb-c to your computer for crystal clear audio\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f56d1e55-2a03-4ce2-bb47-2ab6b9175a02",
   "metadata": {},
   "outputs": [],
   "source": [
    "# You can also run \"modal deploy -m pricer_service2\" at the command line in an activated environment\n",
    "\n",
    "!modal deploy -m pricer_service2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9e19daeb-1281-484b-9d2f-95cc6fed2622",
   "metadata": {},
   "outputs": [],
   "source": [
    "Pricer = modal.Cls.from_name(\"pricer-service\", \"Pricer\")\n",
    "pricer = Pricer()\n",
    "reply = pricer.price.remote(\"Quadcast HyperX condenser mic, connects via usb-c to your computer for crystal clear audio\")\n",
    "print(reply)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9c1b1451-6249-4462-bf2d-5937c059926c",
   "metadata": {},
   "source": [
    "# Optional: Keeping Modal warm\n",
    "\n",
    "## A way to improve the speed of the Modal pricer service\n",
    "\n",
    "The first time you run this modal class, it might take as much as 10 minutes to build.  \n",
    "Subsequently it should be much faster.. 30 seconds if it needs to wake up, otherwise 2 seconds.  \n",
    "If you want it to always be 2 seconds, you can keep the container from going to sleep by editing this constant in pricer_service2.py:\n",
    "\n",
    "`MIN_CONTAINERS = 0`\n",
    "\n",
    "Make it 1 to keep a container alive.  \n",
    "But please note: this will eat up credits! Only do this if you are comfortable to have a process running continually.\n"
   ]
  },
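  {
   "cell_type": "markdown",
   "id": "c8e5d3a2-9b41-4c7f-a6d8-3f2e0b5c9a17",
   "metadata": {},
   "source": [
    "For orientation, here is a hedged sketch of how a constant like that is typically wired into the class decorator; the exact parameter name can vary between Modal versions, so treat this as an assumption and check pricer_service2.py for the real code:\n",
    "```\n",
    "import modal\n",
    "\n",
    "app = modal.App(\"pricer-service\")\n",
    "image = modal.Image.debian_slim()\n",
    "\n",
    "MIN_CONTAINERS = 0  # set to 1 to keep one container warm; this consumes credits continuously\n",
    "\n",
    "@app.cls(image=image, min_containers=MIN_CONTAINERS)\n",
    "class Pricer:\n",
    "    @modal.method()\n",
    "    def price(self, description: str) -> float:\n",
    "        ...  # the real service runs model inference here\n",
    "```"
   ]
  },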
  {
   "cell_type": "markdown",
   "id": "3754cfdd-ae28-47c8-91f2-6e060e2c91b3",
   "metadata": {},
   "source": [
    "## And now introducing our Agent class"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ba9aedca-6a7b-4d30-9f64-59d76f76fb6d",
   "metadata": {},
   "outputs": [],
   "source": [
    "from agents.specialist_agent import SpecialistAgent"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fe5843e5-e958-4a65-8326-8f5b4686de7f",
   "metadata": {},
   "outputs": [],
   "source": [
    "agent = SpecialistAgent()\n",
    "agent.price(\"iPad Pro 2nd generation\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f5a3181b-1310-4102-8d7d-52caf4c00538",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}