{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": [],
      "gpuType": "L4"
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "code",
      "source": [
        "\"\"\"\n",
        "Anveshak: Spirituality Q&A - Data Preprocessing Pipeline\n",
        "\n",
        "This script processes the spiritual text corpus for the Anveshak application:\n",
        "1. Uploads and downloads text files from various sources\n",
        "2. Cleans and processes the texts to remove artifacts and noise\n",
        "3. Chunks texts into smaller, manageable pieces\n",
        "4. Generates embeddings using the E5-large-v2 model\n",
        "5. Creates a FAISS index for efficient similarity search\n",
        "6. Uploads all processed data to Google Cloud Storage\n",
        "\n",
        "Usage:\n",
        "- Run in Google Colab with GPU runtime for faster embedding generation\n",
        "- Ensure GCP authentication is set up before running\n",
        "- Configure the constants below with your actual settings\n",
        "\"\"\""
      ],
      "metadata": {
        "id": "Cyjr-eDz9GmH"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# =============================================================================\n",
        "# CONFIGURATION SETTINGS\n",
        "# =============================================================================\n",
        "# Update these values with your actual settings\n",
        "# Before open-sourcing, clear these values or replace with placeholders\n",
        "BUCKET_NAME_GCS = \"your-bucket-name\"  # e.g., \"spiritual-texts-bucket\"\n",
        "EMBEDDING_MODEL = \"your-embedding-model\"  # e.g., \"intfloat/e5-large-v2\"\n",
        "# LLM_MODEL = \"your-llm-model\"  # e.g., \"gpt-3.5-turbo\"\n",
        "\n",
        "# GCS Paths - update these with your folder structure\n",
        "METADATA_PATH_GCS = \"metadata/metadata.jsonl\"\n",
        "RAW_TEXTS_UPLOADED_PATH_GCS = \"raw-texts/uploaded\"\n",
        "RAW_TEXTS_DOWNLOADED_PATH_GCS = \"raw-texts/downloaded/\"\n",
        "CLEANED_TEXTS_PATH_GCS = \"cleaned-texts/\"\n",
        "EMBEDDINGS_PATH_GCS = \"processed/embeddings/all_embeddings.npy\"\n",
        "INDICES_PATH_GCS = \"processed/indices/faiss_index.faiss\"\n",
        "CHUNKS_PATH_GCS = \"processed/chunks/text_chunks.txt\"\n",
        "\n",
        "# Local file paths in Colab environment - update these with your folder structure\n",
        "LOCAL_METADATA_FILE = \"/content/metadata.jsonl\"\n",
        "LOCAL_RAW_TEXTS_FOLDER = \"/content/raw-texts/uploaded\"\n",
        "LOCAL_EMBEDDINGS_FILE = \"/tmp/all_embeddings.npy\"\n",
        "LOCAL_FAISS_INDEX_FILE = \"/tmp/faiss_index.faiss\"\n",
        "LOCAL_TEXT_CHUNKS_FILE = \"/tmp/text_chunks.txt\""
      ],
      "metadata": {
        "id": "YEDyIvmoXsPB"
      },
      "execution_count": null,
      "outputs": []
    },
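    {
      "cell_type": "code",
      "source": [
        "# Illustrative note (not part of the original pipeline): the metadata.jsonl format assumed\n",
        "# by the steps below. Each line is one standalone JSON object; the pipeline reads the\n",
        "# \"Title\", \"Author\", \"URL\" and \"Uploaded\" fields, and other fields such as \"Publisher\"\n",
        "# are optional. The entry below is a made-up example, not real data:\n",
        "#\n",
        "# {\"Title\": \"Example Commentary\", \"Author\": \"Unknown\", \"Publisher\": \"Unknown\", \"URL\": \"https://example.com/texts/example.txt\", \"Uploaded\": false}\n",
        "#\n",
        "# Entries with \"Uploaded\": true are expected to be uploaded manually in Part 1, while\n",
        "# entries with \"Uploaded\": false are fetched from their URL in Part 2."
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },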
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "H1tEbKhur8xf"
      },
      "outputs": [],
      "source": [
        "# Install required packages\n",
        "!pip install faiss-cpu"
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "# Import necessary libraries\n",
        "from google.colab import files\n",
        "from google.colab import auth\n",
        "from google.cloud import storage\n",
        "import os\n",
        "import json\n",
        "import requests\n",
        "import re\n",
        "import unicodedata\n",
        "from bs4 import BeautifulSoup\n",
        "import numpy as np\n",
        "import faiss\n",
        "import torch\n",
        "from sentence_transformers import SentenceTransformer"
      ],
      "metadata": {
        "id": "xCDTvZJRse4-"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# =============================================================================\n",
        "# AUTHENTICATION & INITIALIZATION\n",
        "# =============================================================================\n",
        "\n",
        "# Authenticate with Google Cloud (only needed in Colab)\n",
        "auth.authenticate_user()\n",
        "\n",
        "# Initialize GCS client (single initialization)\n",
        "storage_client = storage.Client()\n",
        "bucket = storage_client.bucket(BUCKET_NAME_GCS)"
      ],
      "metadata": {
        "id": "hSYQ0ZSasjLd"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# =============================================================================\n",
        "# PART 1: UPLOAD RAW TEXTS AND METADATA\n",
        "# =============================================================================\n",
        "\n",
        "def upload_files_to_colab():\n",
        "    \"\"\"\n",
        "    Upload raw text files and metadata from local machine to Colab.\n",
        "\n",
        "    This function:\n",
        "    1. Prompts the user to upload text files\n",
        "    2. Saves the uploaded files to a local directory\n",
        "    3. Prompts the user to upload the metadata.jsonl file\n",
        "    4. Saves the metadata file to the specified location\n",
        "\n",
        "    Returns:\n",
        "        bool: True if upload was successful, False otherwise\n",
        "    \"\"\"\n",
        "    # First, upload text files\n",
        "    print(\"Step 1: Please upload your text files...\")\n",
        "    uploaded_text_files = files.upload()  # This will prompt the user to upload files\n",
        "\n",
        "    # Create directory structure if it doesn't exist\n",
        "    os.makedirs(LOCAL_RAW_TEXTS_FOLDER, exist_ok=True)\n",
        "\n",
        "    # Move uploaded text files to the raw-texts folder\n",
        "    for filename, content in uploaded_text_files.items():\n",
        "        if filename.endswith(\".txt\"):\n",
        "            with open(os.path.join(LOCAL_RAW_TEXTS_FOLDER, filename), \"wb\") as f:\n",
        "                f.write(content)\n",
        "            print(f\"βœ… Saved {filename} to {LOCAL_RAW_TEXTS_FOLDER}\")\n",
        "\n",
        "    print(\"Text files upload complete!\")\n",
        "\n",
        "    # Next, upload metadata file\n",
        "    print(\"\\nStep 2: Please upload your metadata.jsonl file...\")\n",
        "    uploaded_metadata = files.upload()  # This will prompt the user to upload files\n",
        "\n",
        "    # Save metadata file\n",
        "    metadata_uploaded = False\n",
        "    for filename, content in uploaded_metadata.items():\n",
        "        if filename == \"metadata.jsonl\":\n",
        "            # Ensure the directory for metadata file exists\n",
        "            os.makedirs(os.path.dirname(LOCAL_METADATA_FILE), exist_ok=True)\n",
        "            with open(LOCAL_METADATA_FILE, \"wb\") as f:\n",
        "                f.write(content)\n",
        "            print(f\"βœ… Saved metadata.jsonl to {LOCAL_METADATA_FILE}\")\n",
        "            metadata_uploaded = True\n",
        "\n",
        "    if not metadata_uploaded:\n",
        "        print(\"⚠️ Warning: metadata.jsonl was not uploaded. Please upload it to continue.\")\n",
        "        return False\n",
        "\n",
        "    print(\"Upload to Colab complete!\")\n",
        "    return True\n",
        "\n",
        "def upload_files_to_gcs():\n",
        "    \"\"\"\n",
        "    Upload raw text files and metadata from Colab to Google Cloud Storage.\n",
        "\n",
        "    This function:\n",
        "    1. Uploads each text file from the local directory to GCS\n",
        "    2. Uploads the metadata.jsonl file to GCS\n",
        "\n",
        "    All files are uploaded to the paths specified in the configuration constants.\n",
        "    \"\"\"\n",
        "    # Upload each file from the local raw-texts folder to GCS\n",
        "    for filename in os.listdir(LOCAL_RAW_TEXTS_FOLDER):\n",
        "        local_path = os.path.join(LOCAL_RAW_TEXTS_FOLDER, filename)\n",
        "        blob_path = f\"{RAW_TEXTS_UPLOADED_PATH_GCS}/{filename}\"  # GCS path\n",
        "        blob = bucket.blob(blob_path)\n",
        "        try:\n",
        "            blob.upload_from_filename(local_path)\n",
        "            print(f\"βœ… Uploaded: {filename} -> gs://{BUCKET_NAME_GCS}/{blob_path}\")\n",
        "        except Exception as e:\n",
        "            print(f\"❌ Failed to upload {filename}: {e}\")\n",
        "\n",
        "    # Upload metadata file\n",
        "    blob = bucket.blob(METADATA_PATH_GCS)\n",
        "    try:\n",
        "        blob.upload_from_filename(LOCAL_METADATA_FILE)\n",
        "        print(f\"βœ… Uploaded metadata.jsonl -> gs://{BUCKET_NAME_GCS}/{METADATA_PATH_GCS}\")\n",
        "    except Exception as e:\n",
        "        print(f\"❌ Failed to upload metadata: {e}\")"
      ],
      "metadata": {
        "id": "cShc029islmO"
      },
      "execution_count": null,
      "outputs": []
    },
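    {
      "cell_type": "code",
      "source": [
        "# Illustrative note (not part of the original pipeline): the bucket layout implied by the\n",
        "# GCS path constants above, once the full pipeline has run. Shown only as a reading aid;\n",
        "# the actual names come from the configuration cell.\n",
        "#\n",
        "# gs://<BUCKET_NAME_GCS>/\n",
        "#   metadata/metadata.jsonl                     <- uploaded in Part 1\n",
        "#   raw-texts/uploaded/*.txt                    <- uploaded in Part 1\n",
        "#   raw-texts/downloaded/*.txt                  <- fetched from URLs in Part 2\n",
        "#   cleaned-texts/*.txt                         <- written in Part 2\n",
        "#   processed/embeddings/all_embeddings.npy     <- written in Part 3\n",
        "#   processed/indices/faiss_index.faiss         <- written in Part 3\n",
        "#   processed/chunks/text_chunks.txt            <- written in Part 3"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },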
    {
      "cell_type": "code",
      "source": [
        "# =============================================================================\n",
        "# PART 2: DOWNLOAD AND CLEAN TEXTS\n",
        "# =============================================================================\n",
        "\n",
        "def fetch_metadata_from_gcs():\n",
        "    \"\"\"\n",
        "    Fetch metadata.jsonl from GCS and return as a list of dictionaries.\n",
        "\n",
        "    Each dictionary represents a text entry with metadata like title, author, etc.\n",
        "\n",
        "    Returns:\n",
        "        list: List of dictionaries containing metadata for each text\n",
        "    \"\"\"\n",
        "    blob = bucket.blob(METADATA_PATH_GCS)\n",
        "    # Download metadata file\n",
        "    metadata_jsonl = blob.download_as_text()\n",
        "    # Parse JSONL\n",
        "    metadata = [json.loads(line) for line in metadata_jsonl.splitlines()]\n",
        "    return metadata\n",
        "\n",
        "def upload_to_gcs(source_file, destination_path):\n",
        "    \"\"\"\n",
        "    Upload a local file to Google Cloud Storage.\n",
        "\n",
        "    Args:\n",
        "        source_file (str): Path to the local file\n",
        "        destination_path (str): Path in GCS where the file should be uploaded\n",
        "    \"\"\"\n",
        "    blob = bucket.blob(destination_path)\n",
        "    blob.upload_from_filename(source_file)\n",
        "    print(f\"πŸ“€ Uploaded to GCS: {destination_path}\")\n",
        "\n",
        "def download_text_files():\n",
        "    \"\"\"\n",
        "    Download text files from URLs specified in the metadata.\n",
        "\n",
        "    This function:\n",
        "    1. Fetches metadata from GCS\n",
        "    2. Filters entries where Uploaded=False (texts to be downloaded)\n",
        "    3. Downloads each text from its URL\n",
        "    4. Uploads the downloaded text to GCS\n",
        "\n",
        "    This allows automated collection of texts that weren't manually uploaded.\n",
        "    \"\"\"\n",
        "    metadata = fetch_metadata_from_gcs()\n",
        "    # Filter entries where Uploaded is False\n",
        "    files_to_download = [item for item in metadata if item[\"Uploaded\"] == False]\n",
        "    print(f\"πŸ” Found {len(files_to_download)} files to download\")\n",
        "\n",
        "    # Process only necessary files\n",
        "    for item in files_to_download:\n",
        "        name, author, url = item[\"Title\"], item[\"Author\"], item[\"URL\"]\n",
        "        if url.lower() == \"not available\":\n",
        "            print(f\"❌ Skipping {name} - No URL available.\")\n",
        "            continue\n",
        "\n",
        "        try:\n",
        "            response = requests.get(url)\n",
        "            if response.status_code == 200:\n",
        "                raw_text = response.text\n",
        "                filename = \"{}.txt\".format(name.replace(\" \", \"_\"))\n",
        "                # Save to local first\n",
        "                local_path = f\"/tmp/{filename}\"\n",
        "                with open(local_path, \"w\", encoding=\"utf-8\") as file:\n",
        "                    file.write(raw_text)\n",
        "                # Upload to GCS\n",
        "                gcs_path = f\"{RAW_TEXTS_DOWNLOADED_PATH_GCS}{filename}\"\n",
        "                upload_to_gcs(local_path, gcs_path)\n",
        "                print(f\"βœ… Downloaded & uploaded: {filename} ({len(raw_text.split())} words)\")\n",
        "                # Clean up temp file\n",
        "                os.remove(local_path)\n",
        "            else:\n",
        "                print(f\"❌ Failed to download {name}: {url} (Status {response.status_code})\")\n",
        "        except Exception as e:\n",
        "            print(f\"❌ Error processing {name}: {e}\")\n",
        "\n",
        "def rigorous_clean_text(text):\n",
        "    \"\"\"\n",
        "    Clean text by removing metadata, junk text, and formatting issues.\n",
        "\n",
        "    This function:\n",
        "    1. Removes HTML tags using BeautifulSoup\n",
        "    2. Removes URLs and standalone numbers\n",
        "    3. Removes all-caps OCR noise words\n",
        "    4. Deduplicates adjacent identical lines\n",
        "    5. Normalizes Unicode characters\n",
        "    6. Standardizes whitespace and newlines\n",
        "\n",
        "    Args:\n",
        "        text (str): The raw text to clean\n",
        "\n",
        "    Returns:\n",
        "        str: The cleaned text\n",
        "    \"\"\"\n",
        "    text = BeautifulSoup(text, \"html.parser\").get_text()\n",
        "    text = re.sub(r\"https?:\\/\\/\\S+\", \"\", text)  # Remove links\n",
        "    text = re.sub(r\"\\b\\d+\\b\", \"\", text)  # Remove standalone numbers\n",
        "    text = re.sub(r\"\\b[A-Z]{5,}\\b\", \"\", text)  # Remove all-caps OCR noise words\n",
        "    lines = text.split(\"\\n\")\n",
        "    cleaned_lines = []\n",
        "    last_line = None\n",
        "\n",
        "    for line in lines:\n",
        "        line = line.strip()\n",
        "        if line and line != last_line:\n",
        "            cleaned_lines.append(line)\n",
        "            last_line = line\n",
        "\n",
        "    text = \"\\n\".join(cleaned_lines)\n",
        "    text = unicodedata.normalize(\"NFKD\", text)\n",
        "    text = re.sub(r\"\\s+\", \" \", text).strip()\n",
        "    text = re.sub(r\"\\n{2,}\", \"\\n\", text)\n",
        "    return text\n",
        "\n",
        "def clean_and_upload_texts():\n",
        "    \"\"\"\n",
        "    Download raw texts from GCS, clean them, and upload cleaned versions back to GCS.\n",
        "\n",
        "    This function processes all texts in both the uploaded and downloaded folders:\n",
        "    1. For each text file, downloads it from GCS\n",
        "    2. Cleans the text using rigorous_clean_text()\n",
        "    3. Uploads the cleaned version back to GCS in the cleaned-texts folder\n",
        "\n",
        "    This step ensures that all texts are properly formatted before embedding generation.\n",
        "    \"\"\"\n",
        "    raw_texts_folders = [RAW_TEXTS_DOWNLOADED_PATH_GCS, RAW_TEXTS_UPLOADED_PATH_GCS]  # Process both folders\n",
        "    total_files = 0  # Counter to track number of processed files\n",
        "\n",
        "    for raw_texts_folder in raw_texts_folders:\n",
        "        # List all files in the current raw-texts folder\n",
        "        blobs = list(bucket.list_blobs(prefix=raw_texts_folder))\n",
        "        print(f\"πŸ” Found {len(blobs)} files in {raw_texts_folder}\")\n",
        "\n",
        "        for blob in blobs:\n",
        "            if not blob.name.endswith(\".txt\"):  # Skip non-text files\n",
        "                continue\n",
        "\n",
        "            try:\n",
        "                # Download file\n",
        "                raw_text = blob.download_as_text().strip()\n",
        "                if not raw_text:  # Skip empty files\n",
        "                    print(f\"⚠️ Skipping empty file: {blob.name}\")\n",
        "                    continue\n",
        "\n",
        "                # Clean text\n",
        "                cleaned_text = rigorous_clean_text(raw_text)\n",
        "\n",
        "                # Save cleaned text back to GCS\n",
        "                cleaned_blob_name = blob.name.replace(raw_texts_folder, CLEANED_TEXTS_PATH_GCS)\n",
        "                cleaned_blob = bucket.blob(cleaned_blob_name)\n",
        "                cleaned_blob.upload_from_string(cleaned_text, content_type=\"text/plain\")\n",
        "                print(f\"βœ… Cleaned & uploaded: {cleaned_blob_name} ({len(cleaned_text.split())} words, {len(cleaned_text)} characters)\")\n",
        "                total_files += 1\n",
        "            except Exception as e:\n",
        "                print(f\"❌ Error processing {blob.name}: {e}\")\n",
        "\n",
        "    print(f\"πŸš€ Cleaning process completed! Total cleaned & uploaded files: {total_files}\")"
      ],
      "metadata": {
        "id": "Vskwg984s25K"
      },
      "execution_count": null,
      "outputs": []
    },
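    {
      "cell_type": "code",
      "source": [
        "# Optional sanity check (illustrative, not part of the original pipeline): run\n",
        "# rigorous_clean_text() on a small made-up snippet to see the cleaning behaviour\n",
        "# (HTML tags stripped, URLs and standalone numbers removed, adjacent duplicate\n",
        "# lines collapsed) before processing the full corpus.\n",
        "_sample = (\n",
        "    \"<p>Chapter 1</p>\\n\"\n",
        "    \"Chapter 1\\n\"\n",
        "    \"See https://example.com/scan for the source scan.\\n\"\n",
        "    \"Page 42\\n\"\n",
        "    \"The seeker asked about the nature of the Self.\\n\"\n",
        ")\n",
        "print(rigorous_clean_text(_sample))"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },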
    {
      "cell_type": "code",
      "source": [
        "# =============================================================================\n",
        "# PART 3: GENERATE EMBEDDINGS AND INDEX\n",
        "# =============================================================================\n",
        "\n",
        "def fetch_metadata_dict_from_gcs():\n",
        "    \"\"\"\n",
        "    Fetch metadata.jsonl from GCS and return as a dictionary.\n",
        "\n",
        "    The dictionary is keyed by title for easy lookup during text processing.\n",
        "\n",
        "    Returns:\n",
        "        dict: Dictionary mapping text titles to their metadata\n",
        "    \"\"\"\n",
        "    metadata_blob = bucket.blob(METADATA_PATH_GCS)\n",
        "    metadata_dict = {}\n",
        "\n",
        "    if metadata_blob.exists():\n",
        "        metadata_content = metadata_blob.download_as_text()\n",
        "        for line in metadata_content.splitlines():\n",
        "            item = json.loads(line)\n",
        "            metadata_dict[item[\"Title\"]] = item  # Keep space-based lookup\n",
        "    else:\n",
        "        print(\"❌ Metadata file not found in GCS\")\n",
        "\n",
        "    return metadata_dict\n",
        "\n",
        "def chunk_text(text, chunk_size=500, overlap=50):\n",
        "    \"\"\"\n",
        "    Split text into smaller, overlapping chunks for better retrieval.\n",
        "\n",
        "    Args:\n",
        "        text (str): The text to chunk\n",
        "        chunk_size (int): Maximum number of words per chunk\n",
        "        overlap (int): Number of words to overlap between chunks\n",
        "\n",
        "    Returns:\n",
        "        list: List of text chunks\n",
        "    \"\"\"\n",
        "    words = text.split()\n",
        "    chunks = []\n",
        "    i = 0\n",
        "\n",
        "    while i < len(words):\n",
        "        chunk = \" \".join(words[i:i + chunk_size])\n",
        "        chunks.append(chunk)\n",
        "        i += chunk_size - overlap\n",
        "\n",
        "    return chunks\n",
        "\n",
        "def create_embeddings(text_chunks, batch_size=32):\n",
        "    \"\"\"\n",
        "    Generate embeddings for the given chunks of text using the specified embedding model.\n",
        "\n",
        "    This function:\n",
        "    1. Uses SentenceTransformer to load the embedding model\n",
        "    2. Prefixes each chunk with \"passage:\" as required by the E5 model\n",
        "    3. Processes chunks in batches to manage memory usage\n",
        "    4. Normalizes embeddings for cosine similarity search\n",
        "\n",
        "    Args:\n",
        "        text_chunks (list): List of text chunks to embed\n",
        "        batch_size (int): Number of chunks to process at once\n",
        "\n",
        "    Returns:\n",
        "        numpy.ndarray: Matrix of embeddings, one per text chunk\n",
        "    \"\"\"\n",
        "    # Load the model with GPU optimization\n",
        "    model = SentenceTransformer(EMBEDDING_MODEL)\n",
        "    device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n",
        "    model = model.to(device)\n",
        "    print(f\"πŸš€ Using device for embeddings: {device}\")\n",
        "\n",
        "    prefixed_chunks = [f\"passage: {text}\" for text in text_chunks]\n",
        "    all_embeddings = []\n",
        "\n",
        "    for i in range(0, len(prefixed_chunks), batch_size):\n",
        "        batch = prefixed_chunks[i:i+batch_size]\n",
        "\n",
        "        # Move batch to GPU (if available) for faster processing\n",
        "        with torch.no_grad():\n",
        "            batch_embeddings = model.encode(batch, convert_to_numpy=True, normalize_embeddings=True)\n",
        "\n",
        "        all_embeddings.append(batch_embeddings)\n",
        "\n",
        "        if (i + batch_size) % 100 == 0 or (i + batch_size) >= len(prefixed_chunks):\n",
        "            print(f\"πŸ“Œ Processed {i + min(batch_size, len(prefixed_chunks) - i)}/{len(prefixed_chunks)} documents\")\n",
        "\n",
        "    return np.vstack(all_embeddings).astype(\"float32\")\n",
        "\n",
        "def process_cleaned_texts():\n",
        "    \"\"\"\n",
        "    Process cleaned texts to create embeddings, FAISS index, and text chunks with metadata.\n",
        "\n",
        "    This function:\n",
        "    1. Downloads all cleaned texts from GCS\n",
        "    2. Chunks each text into smaller pieces\n",
        "    3. Generates embeddings for each chunk\n",
        "    4. Creates a FAISS index for similarity search\n",
        "    5. Saves and uploads all processed data back to GCS\n",
        "\n",
        "    This is the core processing step that prepares data for the RAG system.\n",
        "    \"\"\"\n",
        "    all_chunks = []\n",
        "    all_metadata = []\n",
        "    chunk_counter = 0\n",
        "\n",
        "    metadata_dict = fetch_metadata_dict_from_gcs()  # Load metadata\n",
        "\n",
        "    # Optimized listing of blobs in cleaned-texts folder\n",
        "    blobs = list(storage_client.list_blobs(BUCKET_NAME_GCS, prefix=CLEANED_TEXTS_PATH_GCS))\n",
        "    print(f\"πŸ” Found {len(blobs)} files in {CLEANED_TEXTS_PATH_GCS}\")\n",
        "\n",
        "    if not blobs:\n",
        "        print(f\"❌ No files found in {CLEANED_TEXTS_PATH_GCS}. Exiting.\")\n",
        "        return\n",
        "\n",
        "    for blob in blobs:\n",
        "        file_name = blob.name.split(\"/\")[-1]\n",
        "        if not file_name or file_name.startswith(\".\"):\n",
        "            continue  # Skip empty or hidden files\n",
        "\n",
        "        # Convert filename back to space-based title for metadata lookup\n",
        "        book_name = file_name.replace(\"_\", \" \")\n",
        "        metadata = metadata_dict.get(book_name, {\"Author\": \"Unknown\", \"Publisher\": \"Unknown\"})\n",
        "        author = metadata.get(\"Author\", \"Unknown\")\n",
        "\n",
        "        try:\n",
        "            # Download and read text\n",
        "            raw_text = blob.download_as_text().strip()\n",
        "\n",
        "            # Skip empty or corrupt files\n",
        "            if not raw_text:\n",
        "                print(f\"❌ Skipping empty file: {file_name}\")\n",
        "                continue\n",
        "\n",
        "            chunks = chunk_text(raw_text)\n",
        "            print(f\"βœ… Processed {book_name}: {len(chunks)} chunks\")\n",
        "\n",
        "            for chunk in chunks:\n",
        "                all_chunks.append(chunk)\n",
        "                all_metadata.append((chunk_counter, book_name, author))\n",
        "                chunk_counter += 1\n",
        "        except Exception as e:\n",
        "            print(f\"❌ Error processing {file_name}: {e}\")\n",
        "\n",
        "    # Ensure there are chunks before embedding generation\n",
        "    if not all_chunks:\n",
        "        print(\"❌ No chunks found. Skipping embedding generation.\")\n",
        "        return\n",
        "\n",
        "    # Create embeddings with GPU acceleration\n",
        "    print(f\"πŸ“ Creating embeddings for {len(all_chunks)} total chunks...\")\n",
        "    all_embeddings = create_embeddings(all_chunks)\n",
        "\n",
        "    # Build FAISS index\n",
        "    dimension = all_embeddings.shape[1]\n",
        "    index = faiss.IndexFlatIP(dimension)\n",
        "    index.add(all_embeddings)\n",
        "    print(f\"βœ… FAISS index built with {index.ntotal} vectors\")\n",
        "\n",
        "    # Save & upload embeddings\n",
        "    np.save(LOCAL_EMBEDDINGS_FILE, all_embeddings)  # Save locally first\n",
        "    embeddings_blob = bucket.blob(EMBEDDINGS_PATH_GCS)\n",
        "    embeddings_blob.upload_from_filename(LOCAL_EMBEDDINGS_FILE)\n",
        "    print(f\"βœ… Uploaded embeddings to GCS: {EMBEDDINGS_PATH_GCS}\")\n",
        "\n",
        "    # Save & upload FAISS index\n",
        "    faiss.write_index(index, LOCAL_FAISS_INDEX_FILE)\n",
        "    index_blob = bucket.blob(INDICES_PATH_GCS)\n",
        "    index_blob.upload_from_filename(LOCAL_FAISS_INDEX_FILE)\n",
        "    print(f\"βœ… Uploaded FAISS index to GCS: {INDICES_PATH_GCS}\")\n",
        "\n",
        "    # Save and upload text chunks with metadata\n",
        "    with open(LOCAL_TEXT_CHUNKS_FILE, \"w\", encoding=\"utf-8\") as f:\n",
        "        for i, (chunk_id, book_name, author) in enumerate(all_metadata):\n",
        "            f.write(f\"{i}\\t{book_name}\\t{author}\\t{all_chunks[i]}\\n\")\n",
        "\n",
        "    chunks_blob = bucket.blob(CHUNKS_PATH_GCS)\n",
        "    chunks_blob.upload_from_filename(LOCAL_TEXT_CHUNKS_FILE)\n",
        "    print(f\"βœ… Uploaded text chunks to GCS: {CHUNKS_PATH_GCS}\")\n",
        "\n",
        "    # Clean up temp files\n",
        "    os.remove(LOCAL_EMBEDDINGS_FILE)\n",
        "    os.remove(LOCAL_FAISS_INDEX_FILE)\n",
        "    os.remove(LOCAL_TEXT_CHUNKS_FILE)"
      ],
      "metadata": {
        "id": "1Yul8p9JsN1e"
      },
      "execution_count": null,
      "outputs": []
    },
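    {
      "cell_type": "code",
      "source": [
        "# Illustrative sketch (not part of the preprocessing pipeline): how a downstream RAG\n",
        "# application might query the artifacts produced above. This assumes process_cleaned_texts()\n",
        "# has already uploaded the FAISS index and chunks file to GCS, and that the embedding model\n",
        "# is an E5-style model that expects a \"query: \" prefix at query time (mirroring the\n",
        "# \"passage: \" prefix used for documents). The local file name and k value are arbitrary.\n",
        "def demo_query(question, k=3):\n",
        "    # Download the index and the tab-separated chunk file produced by the pipeline\n",
        "    bucket.blob(INDICES_PATH_GCS).download_to_filename(\"/tmp/query_demo.faiss\")\n",
        "    index = faiss.read_index(\"/tmp/query_demo.faiss\")\n",
        "    chunk_lines = bucket.blob(CHUNKS_PATH_GCS).download_as_text().splitlines()\n",
        "\n",
        "    # Embed the question with the same model and normalization used for passages\n",
        "    model = SentenceTransformer(EMBEDDING_MODEL)\n",
        "    query_emb = model.encode([f\"query: {question}\"], convert_to_numpy=True,\n",
        "                             normalize_embeddings=True).astype(\"float32\")\n",
        "\n",
        "    # Inner-product search over normalized vectors is equivalent to cosine similarity\n",
        "    scores, ids = index.search(query_emb, k)\n",
        "    for score, idx in zip(scores[0], ids[0]):\n",
        "        chunk_id, book, author, chunk = chunk_lines[idx].split(\"\\t\", 3)\n",
        "        print(f\"[{score:.3f}] {book} ({author}): {chunk[:120]}...\")\n",
        "\n",
        "# Example call (requires the pipeline above to have run first):\n",
        "# demo_query(\"What is the nature of the Self?\")"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },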
    {
      "cell_type": "code",
      "source": [
        "# =============================================================================\n",
        "# PART 4: MAIN EXECUTION\n",
        "# =============================================================================\n",
        "\n",
        "def run_pipeline():\n",
        "    \"\"\"\n",
        "    Run the complete end-to-end preprocessing pipeline.\n",
        "\n",
        "    This function executes all steps in sequence:\n",
        "    1. Upload files from local to Colab\n",
        "    2. Upload raw texts and metadata to GCS\n",
        "    3. Download texts from URLs specified in metadata\n",
        "    4. Clean and process all texts\n",
        "    5. Generate embeddings and build the FAISS index\n",
        "\n",
        "    This is the main entry point for the preprocessing script.\n",
        "    \"\"\"\n",
        "    print(\"πŸš€ Starting pipeline execution...\")\n",
        "\n",
        "    print(\"\\n==== STEP 1: Uploading files from local to Colab ====\")\n",
        "    upload_successful = upload_files_to_colab()\n",
        "\n",
        "    if not upload_successful:\n",
        "        print(\"❌ Pipeline halted due to missing metadata file.\")\n",
        "        return\n",
        "\n",
        "    print(\"\\n==== STEP 2: Uploading raw texts and metadata to GCS ====\")\n",
        "    upload_files_to_gcs()\n",
        "\n",
        "    print(\"\\n==== STEP 3: Downloading texts from URLs ====\")\n",
        "    download_text_files()\n",
        "\n",
        "    print(\"\\n==== STEP 4: Cleaning and processing texts ====\")\n",
        "    clean_and_upload_texts()\n",
        "\n",
        "    print(\"\\n==== STEP 5: Generating embeddings and building index ====\")\n",
        "    process_cleaned_texts()\n",
        "\n",
        "    print(\"\\nβœ… Pipeline execution completed successfully!\")\n",
        "\n",
        "# Execute the complete pipeline\n",
        "if __name__ == \"__main__\":\n",
        "    run_pipeline()"
      ],
      "metadata": {
        "id": "XXB_eYvj-I0i"
      },
      "execution_count": null,
      "outputs": []
    }
  ]
}