[
{
"objectID": "notebooks/faiss.html",
"href": "notebooks/faiss.html",
"title": "Similarity Search",
"section": "",
"text": "Authored by: Merve Noyan\nEmbeddings are semantically meaningful compressions of information. They can be used to do similarity search, zero-shot classification or simply train a new model. Use cases for similarity search include searching for similar products in e-commerce, content search in social media and more. This notebook walks you through using 🤗transformers, 🤗datasets and FAISS to create and index embeddings from a feature extraction model to later use them for similarity search. Let’s install necessary libraries.\n\n!pip install -q datasets faiss-gpu transformers sentencepiece\n\nFor this tutorial, we will use CLIP model to extract the features. CLIP is a revolutionary model that introduced joint training of a text encoder and an image encoder to connect two modalities.\n\nimport torch\nfrom PIL import Image\nfrom transformers import AutoImageProcessor, AutoModel, AutoTokenizer\nimport faiss\nimport numpy as np\n\ndevice = torch.device('cuda' if torch.cuda.is_available() else \"cpu\")\n\nmodel = AutoModel.from_pretrained(\"openai/clip-vit-base-patch16\").to(device)\nprocessor = AutoImageProcessor.from_pretrained(\"openai/clip-vit-base-patch16\")\ntokenizer = AutoTokenizer.from_pretrained(\"openai/clip-vit-base-patch16\")\n\nLoad the dataset. To keep this notebook light, we will use a small captioning dataset, jmhessel/newyorker_caption_contest.\n\nfrom datasets import load_dataset\n\nds = load_dataset(\"jmhessel/newyorker_caption_contest\", \"explanation\")\n\nSee an example.\n\nds[\"train\"][0][\"image\"]\n\n\n\n\n\n\n\n\n\nds[\"train\"][0][\"image_description\"]\n\n'Two women are looking out a window. There is snow outside, and there is a snowman with human arms.'\n\n\nWe don’t have to write any function to embed examples or create an index. 🤗 datasets library’s FAISS integration abstracts these processes. We can simply use map method of the dataset to create a new column with the embeddings for each example like below. Let’s create one for text features on the prompt column.\n\ndataset = ds[\"train\"]\nds_with_embeddings = dataset.map(lambda example:\n {'embeddings': model.get_text_features(\n **tokenizer([example[\"image_description\"]],\n truncation=True, return_tensors=\"pt\")\n .to(\"cuda\"))[0].detach().cpu().numpy()})\n\n\nds_with_embeddings.add_faiss_index(column='embeddings')\n\nWe can do the same and get the image embeddings.\n\nds_with_embeddings = ds_with_embeddings.map(lambda example:\n {'image_embeddings': model.get_image_features(\n **processor([example[\"image\"]], return_tensors=\"pt\")\n .to(\"cuda\"))[0].detach().cpu().numpy()})\n\n\nds_with_embeddings.add_faiss_index(column='image_embeddings')\n\n\n\nWe can now query the dataset with text or image to get similar items from it.\n\nprmt = \"a snowy day\"\nprmt_embedding = model.get_text_features(**tokenizer([prmt], return_tensors=\"pt\", truncation=True).to(\"cuda\"))[0].detach().cpu().numpy()\nscores, retrieved_examples = ds_with_embeddings.get_nearest_examples('embeddings', prmt_embedding, k=1)\n\n\ndef downscale_images(image):\n width = 200\n ratio = (width / float(image.size[0]))\n height = int((float(image.size[1]) * float(ratio)))\n img = image.resize((width, height), Image.Resampling.LANCZOS)\n return img\n\nimages = [downscale_images(image) for image in retrieved_examples[\"image\"]]\n# see the closest text and image\nprint(retrieved_examples[\"image_description\"])\ndisplay(images[0])\n\n\n['A man is in the snow. A boy with a huge snow shovel is there too. 
They are outside a house.']\n\n\n\n\n\n\n\n\n\n\n\n\nImage similarity inference is similar, where you just call get_image_features.\n\nimport requests\n# image of a beaver\nurl = \"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/beaver.png\"\nimage = Image.open(requests.get(url, stream=True).raw)\ndisplay(downscale_images(image))\n\n\n\n\n\n\n\n\nSearch for the similar image.\n\nimg_embedding = model.get_image_features(**processor([image], return_tensors=\"pt\", truncation=True).to(\"cuda\"))[0].detach().cpu().numpy()\nscores, retrieved_examples = ds_with_embeddings.get_nearest_examples('image_embeddings', img_embedding, k=1)\n\nDisplay the most similar image to the beaver image.\n\nimages = [downscale_images(image) for image in retrieved_examples[\"image\"]]\n# see the closest text and image\nprint(retrieved_examples[\"image_description\"])\ndisplay(images[0])\n\n['Salmon swim upstream but they see a grizzly bear and are in shock. The bear has a smug look on his face when he sees the salmon.']\n\n\n\n\n\n\n\n\n\n\n\n\nWe can save the dataset with embeddings with save_faiss_index.\n\nds_with_embeddings.save_faiss_index('embeddings', 'embeddings/embeddings.faiss')\n\n\nds_with_embeddings.save_faiss_index('image_embeddings', 'embeddings/image_embeddings.faiss')\n\nIt’s a good practice to store the embeddings in a dataset repository, so we will create one and push our embeddings there to pull later. We will login to Hugging Face Hub, create a dataset repository there and push our indexes there and load using snapshot_download.\n\nfrom huggingface_hub import HfApi, notebook_login, snapshot_download\nnotebook_login()\n\n\nfrom huggingface_hub import HfApi\napi = HfApi()\napi.create_repo(\"merve/faiss_embeddings\", repo_type=\"dataset\")\napi.upload_folder(\n folder_path=\"./embeddings\",\n repo_id=\"merve/faiss_embeddings\",\n repo_type=\"dataset\",\n)\n\n\nsnapshot_download(repo_id=\"merve/faiss_embeddings\", repo_type=\"dataset\",\n local_dir=\"downloaded_embeddings\")\n\nWe can load the embeddings to the dataset with no embeddings using load_faiss_index.\n\nds = ds[\"train\"]\nds.load_faiss_index('embeddings', './downloaded_embeddings/embeddings.faiss')\n# infer again\nprmt = \"people under the rain\"\n\n\nprmt_embedding = model.get_text_features(\n **tokenizer([prmt], return_tensors=\"pt\", truncation=True)\n .to(\"cuda\"))[0].detach().cpu().numpy()\n\nscores, retrieved_examples = ds.get_nearest_examples('embeddings', prmt_embedding, k=1)\n\n\ndisplay(retrieved_examples[\"image\"][0])", | |
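For readers who want to see what add_faiss_index is doing behind the scenes, here is a minimal sketch that builds a comparable index with the faiss library directly. It assumes the ds_with_embeddings dataset and prmt_embedding from above; the flat L2 index is an illustrative choice, not necessarily the exact default used by 🤗 datasets.

import faiss
import numpy as np

# Stack the per-example text embeddings into one float32 matrix
embedding_matrix = np.array(ds_with_embeddings["embeddings"], dtype=np.float32)

# Build an exact (flat) index and add all vectors to it
index = faiss.IndexFlatL2(embedding_matrix.shape[1])
index.add(embedding_matrix)

# Search for the single nearest neighbour of the text prompt embedding
distances, indices = index.search(prmt_embedding.reshape(1, -1).astype(np.float32), k=1)
print(ds_with_embeddings[int(indices[0][0])]["image_description"])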
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"Additional Techniques", | |
"FAISS for Efficient Search" | |
] | |
}, | |
{ | |
"objectID": "notebooks/faiss.html#querying-the-data-with-text-prompts", | |
"href": "notebooks/faiss.html#querying-the-data-with-text-prompts", | |
"title": "Similarity Search", | |
"section": "", | |
"text": "We can now query the dataset with text or image to get similar items from it.\n\nprmt = \"a snowy day\"\nprmt_embedding = model.get_text_features(**tokenizer([prmt], return_tensors=\"pt\", truncation=True).to(\"cuda\"))[0].detach().cpu().numpy()\nscores, retrieved_examples = ds_with_embeddings.get_nearest_examples('embeddings', prmt_embedding, k=1)\n\n\ndef downscale_images(image):\n width = 200\n ratio = (width / float(image.size[0]))\n height = int((float(image.size[1]) * float(ratio)))\n img = image.resize((width, height), Image.Resampling.LANCZOS)\n return img\n\nimages = [downscale_images(image) for image in retrieved_examples[\"image\"]]\n# see the closest text and image\nprint(retrieved_examples[\"image_description\"])\ndisplay(images[0])\n\n\n['A man is in the snow. A boy with a huge snow shovel is there too. They are outside a house.']", | |
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"Additional Techniques", | |
"FAISS for Efficient Search" | |
] | |
}, | |
{ | |
"objectID": "notebooks/faiss.html#querying-the-data-with-image-prompts", | |
"href": "notebooks/faiss.html#querying-the-data-with-image-prompts", | |
"title": "Similarity Search", | |
"section": "", | |
"text": "Image similarity inference is similar, where you just call get_image_features.\n\nimport requests\n# image of a beaver\nurl = \"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/beaver.png\"\nimage = Image.open(requests.get(url, stream=True).raw)\ndisplay(downscale_images(image))\n\n\n\n\n\n\n\n\nSearch for the similar image.\n\nimg_embedding = model.get_image_features(**processor([image], return_tensors=\"pt\", truncation=True).to(\"cuda\"))[0].detach().cpu().numpy()\nscores, retrieved_examples = ds_with_embeddings.get_nearest_examples('image_embeddings', img_embedding, k=1)\n\nDisplay the most similar image to the beaver image.\n\nimages = [downscale_images(image) for image in retrieved_examples[\"image\"]]\n# see the closest text and image\nprint(retrieved_examples[\"image_description\"])\ndisplay(images[0])\n\n['Salmon swim upstream but they see a grizzly bear and are in shock. The bear has a smug look on his face when he sees the salmon.']", | |
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"Additional Techniques", | |
"FAISS for Efficient Search" | |
] | |
}, | |
{ | |
"objectID": "notebooks/faiss.html#saving-pushing-and-loading-the-embeddings", | |
"href": "notebooks/faiss.html#saving-pushing-and-loading-the-embeddings", | |
"title": "Similarity Search", | |
"section": "", | |
"text": "We can save the dataset with embeddings with save_faiss_index.\n\nds_with_embeddings.save_faiss_index('embeddings', 'embeddings/embeddings.faiss')\n\n\nds_with_embeddings.save_faiss_index('image_embeddings', 'embeddings/image_embeddings.faiss')\n\nIt’s a good practice to store the embeddings in a dataset repository, so we will create one and push our embeddings there to pull later. We will login to Hugging Face Hub, create a dataset repository there and push our indexes there and load using snapshot_download.\n\nfrom huggingface_hub import HfApi, notebook_login, snapshot_download\nnotebook_login()\n\n\nfrom huggingface_hub import HfApi\napi = HfApi()\napi.create_repo(\"merve/faiss_embeddings\", repo_type=\"dataset\")\napi.upload_folder(\n folder_path=\"./embeddings\",\n repo_id=\"merve/faiss_embeddings\",\n repo_type=\"dataset\",\n)\n\n\nsnapshot_download(repo_id=\"merve/faiss_embeddings\", repo_type=\"dataset\",\n local_dir=\"downloaded_embeddings\")\n\nWe can load the embeddings to the dataset with no embeddings using load_faiss_index.\n\nds = ds[\"train\"]\nds.load_faiss_index('embeddings', './downloaded_embeddings/embeddings.faiss')\n# infer again\nprmt = \"people under the rain\"\n\n\nprmt_embedding = model.get_text_features(\n **tokenizer([prmt], return_tensors=\"pt\", truncation=True)\n .to(\"cuda\"))[0].detach().cpu().numpy()\n\nscores, retrieved_examples = ds.get_nearest_examples('embeddings', prmt_embedding, k=1)\n\n\ndisplay(retrieved_examples[\"image\"][0])", | |
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"Additional Techniques", | |
"FAISS for Efficient Search" | |
] | |
}, | |
{ | |
"objectID": "notebooks/rag_zephyr_langchain.html", | |
"href": "notebooks/rag_zephyr_langchain.html", | |
"title": "Simple RAG", | |
"section": "", | |
"text": "!pip install -q torch transformers accelerate bitsandbytes transformers sentence-transformers faiss-gpu\n!pip install -q langchain", | |
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"RAG Techniques", | |
"RAG Zephyr & LangChain" | |
] | |
}, | |
{ | |
"objectID": "notebooks/rag_zephyr_langchain.html#prepare-the-data", | |
"href": "notebooks/rag_zephyr_langchain.html#prepare-the-data", | |
"title": "Simple RAG", | |
"section": "Prepare the data", | |
"text": "Prepare the data\nIn this example, we’ll load all of the issues (both open and closed) from PEFT library’s repo.\nFirst, you need to acquire a GitHub personal access token to access the GitHub API.\n\nfrom getpass import getpass\n\n1ACCESS_TOKEN = getpass(\"YOUR_GITHUB_PERSONAL_TOKEN\")\n\n\n1\n\nYou can also use an environment variable to store your token.\n\n\n\n\nNext, we’ll load all of the issues in the huggingface/peft repo: - By default, pull requests are considered issues as well, here we chose to exclude them from data with by setting include_prs=False - Setting state = \"all\" means we will load both open and closed issues.\n\nfrom langchain.document_loaders import GitHubIssuesLoader\n\nloader = GitHubIssuesLoader(\n repo=\"huggingface/peft\",\n access_token=ACCESS_TOKEN,\n include_prs=False,\n state=\"all\"\n)\n\ndocs = loader.load()\n\nThe content of individual GitHub issues may be longer than what an embedding model can take as input. If we want to embed all of the available content, we need to chunk the documents into appropriately sized pieces.\nThe most common and straightforward approach to chunking is to define a fixed size of chunks and whether there should be any overlap between them. Keeping some overlap between chunks allows us to preserve some semantic context between the chunks.\nOther approaches are typically more involved and take into account the documents’ structure and context. For example, one may want to split a document based on sentences or paragraphs, or create chunks based on the\nThe fixed-size chunking, however, works well for most common cases, so that is what we’ll do here.\n\nfrom langchain.text_splitter import CharacterTextSplitter\n\nsplitter = CharacterTextSplitter(chunk_size=512, chunk_overlap=30)\n\nchunked_docs = splitter.split_documents(docs)", | |
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"RAG Techniques", | |
"RAG Zephyr & LangChain" | |
] | |
}, | |
{ | |
"objectID": "notebooks/rag_zephyr_langchain.html#create-the-embeddings-retriever", | |
"href": "notebooks/rag_zephyr_langchain.html#create-the-embeddings-retriever", | |
"title": "Simple RAG", | |
"section": "Create the embeddings + retriever", | |
"text": "Create the embeddings + retriever\nNow that the docs are all of the appropriate size, we can create a database with their embeddings.\nTo create document chunk embeddings we’ll use the HuggingFaceEmbeddings and the BAAI/bge-base-en-v1.5 embeddings model. To create the vector database, we’ll use FAISS, a library developed by Facebook AI. This library offers efficient similarity search and clustering of dense vectors, which is what we need here. FAISS is currently one of the most used libraries for NN search in massive datasets.\n\n\n\n\n\n\nTip\n\n\n\nThere are many other embeddings models available on the Hub, and you can keep an eye on the best performing ones by checking the Massive Text Embedding Benchmark (MTEB) Leaderboard.\n\n\nWe’ll access both the embeddings model and FAISS via LangChain API.\n\nfrom langchain.vectorstores import FAISS\nfrom langchain.embeddings import HuggingFaceEmbeddings\n\ndb = FAISS.from_documents(chunked_docs,\n HuggingFaceEmbeddings(model_name='BAAI/bge-base-en-v1.5'))\n\nWe need a way to return(retrieve) the documents given an unstructured query. For that, we’ll use the as_retriever method using the db as a backbone: - search_type=\"similarity\" means we want to perform similarity search between the query and documents - search_kwargs={'k': 4} instructs the retriever to return top 4 results.\n\nretriever = db.as_retriever(\n1 search_type=\"similarity\",\n search_kwargs={'k': 4}\n)\n\n\n1\n\nThe ideal search type is context dependent, and you should experiment to find the best one for your data.\n\n\n\n\nThe vector database and retriever are now set up, next we need to set up the next piece of the chain - the model.", | |
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"RAG Techniques", | |
"RAG Zephyr & LangChain" | |
] | |
}, | |
{ | |
"objectID": "notebooks/rag_zephyr_langchain.html#load-quantized-model", | |
"href": "notebooks/rag_zephyr_langchain.html#load-quantized-model", | |
"title": "Simple RAG", | |
"section": "Load quantized model", | |
"text": "Load quantized model\nFor this example, we chose HuggingFaceH4/zephyr-7b-beta, a small but powerful model. To make inference faster, we will load the quantized version of the model:\n\n\n\n\n\n\nTip\n\n\n\nWith many models being released every week, you may want to substitute this model to the latest and greatest. The best way to keep track of open source LLMs is to check the Open-source LLM leaderboard.\n\n\n\nimport torch\nfrom transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig\n\nmodel_name = 'HuggingFaceH4/zephyr-7b-beta'\n\nbnb_config = BitsAndBytesConfig(\n load_in_4bit=True,\n bnb_4bit_use_double_quant=True,\n bnb_4bit_quant_type=\"nf4\",\n bnb_4bit_compute_dtype=torch.bfloat16\n)\n\nmodel = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=bnb_config)\ntokenizer = AutoTokenizer.from_pretrained(model_name)", | |
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"RAG Techniques", | |
"RAG Zephyr & LangChain" | |
] | |
}, | |
{ | |
"objectID": "notebooks/rag_zephyr_langchain.html#setup-the-llm-chain", | |
"href": "notebooks/rag_zephyr_langchain.html#setup-the-llm-chain", | |
"title": "Simple RAG", | |
"section": "Setup the LLM chain", | |
"text": "Setup the LLM chain\nFinally, we have all the pieces we need to set up the LLM chain.\nFirst, create a text_generation pipeline using the loaded model and its tokenizer.\nNext, create a prompt template - this should follow the format of the model, so if you substitute the model checkpoint, make sure to use the appropriate formatting.\n\nfrom langchain.llms import HuggingFacePipeline\nfrom langchain.prompts import PromptTemplate\nfrom transformers import pipeline\nfrom langchain_core.output_parsers import StrOutputParser\n\ntext_generation_pipeline = pipeline(\n1 model=model,\n2 tokenizer=tokenizer,\n3 task=\"text-generation\",\n4 temperature=0.2,\n5 do_sample=True,\n6 repetition_penalty=1.1,\n7 return_full_text=True,\n8 max_new_tokens=400,\n)\n\nllm = HuggingFacePipeline(pipeline=text_generation_pipeline)\n\nprompt_template = \"\"\"\n<|system|>\nAnswer the question based on your knowledge. Use the following context to help:\n\n{context}\n\n</s>\n<|user|>\n{question}\n</s>\n<|assistant|>\n\n \"\"\"\n\nprompt = PromptTemplate(\n input_variables=[\"context\", \"question\"],\n template=prompt_template,\n)\n\nllm_chain = prompt | llm | StrOutputParser()\n\n\n1\n\nThe pre-trained model for text generation.\n\n2\n\nTokenizer to preprocess input text and postprocess generated output.\n\n3\n\nSpecifies the task as text generation.\n\n4\n\nControls the randomness in the output generation. Lower values make the output more deterministic.\n\n5\n\nEnables sampling to introduce randomness in the output generation.\n\n6\n\nPenalizes repetition in the output to encourage diversity.\n\n7\n\nReturns the full generated text including the input prompt.\n\n8\n\nLimits the maximum number of new tokens generated.\n\n\n\n\nNote: You can also use tokenizer.apply_chat_template to convert a list of messages (as dicts: {'role': 'user', 'content': '(...)'}) into a string with the appropriate chat format.\nFinally, we need to combine the llm_chain with the retriever to create a RAG chain. We pass the original question through to the final generation step, as well as the retrieved context docs:\n\nfrom langchain_core.runnables import RunnablePassthrough\n\nretriever = db.as_retriever()\n\nrag_chain = (\n {\"context\": retriever, \"question\": RunnablePassthrough()}\n | llm_chain\n)", | |
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"RAG Techniques", | |
"RAG Zephyr & LangChain" | |
] | |
}, | |
{ | |
"objectID": "notebooks/rag_zephyr_langchain.html#compare-the-results", | |
"href": "notebooks/rag_zephyr_langchain.html#compare-the-results", | |
"title": "Simple RAG", | |
"section": "Compare the results", | |
"text": "Compare the results\nLet’s see the difference RAG makes in generating answers to the library-specific questions.\n\nquestion = \"How do you combine multiple adapters?\"\n\nFirst, let’s see what kind of answer we can get with just the model itself, no context added:\n\nllm_chain.invoke({\"context\":\"\", \"question\": question})\n\nAs you can see, the model interpreted the question as one about physical computer adapters, while in the context of PEFT, “adapters” refer to LoRA adapters. Let’s see if adding context from GitHub issues helps the model give a more relevant answer:\n\nrag_chain.invoke(question)\n\nAs we can see, the added context, really helps the exact same model, provide a much more relevant and informed answer to the library-specific question.\nNotably, combining multiple adapters for inference has been added to the library, and one can find this information in the documentation, so for the next iteration of this RAG it may be worth including documentation embeddings.", | |
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"RAG Techniques", | |
"RAG Zephyr & LangChain" | |
] | |
}, | |
{ | |
"objectID": "notebooks/advanced_rag.html", | |
"href": "notebooks/advanced_rag.html", | |
"title": "Advanced RAG", | |
"section": "", | |
"text": "This notebook demonstrates how you can build an advanced RAG (Retrieval Augmented Generation) for answering a user’s question about a specific knowledge base (here, the HuggingFace documentation), using LangChain.\nFor an introduction to RAG, you can check this other cookbook!\nRAG systems are complex, with many moving parts: here a RAG diagram, where we noted in blue all possibilities for system enhancement:\n\n\n\n\n\n\n\nNote\n\n\n\n💡 As you can see, there are many steps to tune in this architecture: tuning the system properly will yield significant performance gains.\n\n\nIn this notebook, we will take a look into many of these blue notes to see how to tune your RAG system and get the best performance.\nLet’s dig into the model building! First, we install the required model dependancies.\n\n!pip install -q torch transformers transformers accelerate bitsandbytes langchain sentence-transformers faiss-gpu openpyxl pacmap\n\n\n%reload_ext dotenv\n%dotenv\n\n\nfrom tqdm.notebook import tqdm\nimport pandas as pd\nfrom typing import Optional, List, Tuple\nfrom datasets import Dataset\nimport matplotlib.pyplot as plt\n\npd.set_option(\n1 \"display.max_colwidth\", None\n) \n\n\n1\n\nThis will be helpful when visualizing retriever outputs\n\n\n\n\n\nLoad your knowledge base\n\nimport datasets\n\nds = datasets.load_dataset(\"m-ric/huggingface_doc\", split=\"train\")\n\n\nfrom langchain.docstore.document import Document as LangchainDocument\n\nRAW_KNOWLEDGE_BASE = [\n LangchainDocument(page_content=doc[\"text\"], metadata={\"source\": doc[\"source\"]})\n for doc in tqdm(ds)\n]\n\n\n\n1. Retriever - embeddings 🗂️\nThe retriever acts like an internal search engine: given the user query, it returns a few relevant snippets from your knowledge base.\nThese snippets will then be fed to the Reader Model to help it generate its answer.\nSo our objective here is, given a user question, to find the most snippets from our knowledge base to answer that question.\nThis is a wide objective, it leaves open some questions. How many snippets should we retrieve? This parameter will be named top_k.\nHow long should these snippets be? This is called the chunk size. There’s no one-size-fits-all answers, but here are a few elements: - 🔀 Your chunk size is allowed to vary from one snippet to the other. - Since there will always be some noise in your retrieval, increasing the top_k increases the chance to get relevant elements in your retrieved snippets. 🎯 Shooting more arrows increases your probability to hit your target. - Meanwhile, the summed length of your retrieved documents should not be too high: for instance, for most current models 16k tokens will probably drown your Reader model in information due to Lost-in-the-middle phenomenon. 🎯 Give your reader model only the most relevant insights, not a huge pile of books!\n\n\n\n\n\n\nNote\n\n\n\nIn this notebook, we use Langchain library since it offers a huge variety of options for vector databases and allows us to keep document metadata throughout the processing.\n\n\n\n1.1 Split the documents into chunks\n\nIn this part, we split the documents from our knowledge base into smaller chunks which will be the snippets on which the reader LLM will base its answer.\nThe goal is to prepare a collection of semantically relevant snippets. 
So their size should be adapted to precise ideas: too small will truncate ideas, too large will dilute them.\n\n\n\n\n\n\n\nTip\n\n\n\n💡 Many options exist for text splitting: splitting on words, on sentence boundaries, recursive chunking that processes documents in a tree-like way to preserve structure information… To learn more about chunking, I recommend you read this great notebook by Greg Kamradt.\n\n\n\nRecursive chunking breaks down the text into smaller parts step by step using a given list of separators sorted from the most important to the least important separator. If the first split doesn’t give the right size or shape chunks, the method repeats itself on the new chunks using a different separator. For instance with the list of separators [\"\\n\\n\", \"\\n\", \".\", \"\"]:\n\nThe method will first break down the document wherever there is a double line break \"\\n\\n\".\nResulting documents will be split again on simple line breaks \"\\n\", then on sentence ends \".\".\nAnd finally, if some chunks are still too big, they will be split whenever they overflow the maximum size.\n\nWith this method, the global structure is well preserved, at the expense of getting slight variations in chunk size.\n\n\nThis space lets you visualize how different splitting options affect the chunks you get.\n\n🔬 Let’s experiment a bit with chunk sizes, beginning with an arbitrary size, and see how splits work. We use Langchain’s implementation of recursive chunking with RecursiveCharacterTextSplitter. - Parameter chunk_size controls the length of individual chunks: this length is counted by default as the number of characters in the chunk. - Parameter chunk_overlap lets adjacent chunks get a bit of overlap on each other. This reduces the probability that an idea could be cut in half by the split between two adjacent chunks. 
We ~arbitrarily set this to 1/10th of the chunk size, you could try different values!\n\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\n\n# We use a hierarchical list of separators specifically tailored for splitting Markdown documents\n# This list is taken from LangChain's MarkdownTextSplitter class.\nMARKDOWN_SEPARATORS = [\n \"\\n#{1,6} \",\n \"```\\n\",\n \"\\n\\\\*\\\\*\\\\*+\\n\",\n \"\\n---+\\n\",\n \"\\n___+\\n\",\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n]\n\ntext_splitter = RecursiveCharacterTextSplitter(\n1 chunk_size=1000,\n2 chunk_overlap=100,\n3 add_start_index=True,\n4 strip_whitespace=True,\n separators=MARKDOWN_SEPARATORS,\n)\n\ndocs_processed = []\nfor doc in RAW_KNOWLEDGE_BASE:\n docs_processed += text_splitter.split_documents([doc])\n\n\n1\n\nThe maximum number of characters in a chunk: we selected this value arbitrally\n\n2\n\nThe number of characters to overlap between chunks\n\n3\n\nIf True, includes chunk’s start index in metadata\n\n4\n\nIf True, strips whitespace from the start and end of every document\n\n\n\n\nWe also have to keep in mind that when embedding documents, we will use an embedding model that has accepts a certain maximum sequence length max_seq_length.\nSo we should make sure that our chunk sizes are below this limit, because any longer chunk will be truncated before processing, thus losing relevancy.\n\nfrom sentence_transformers import SentenceTransformer\n\n# To get the value of the max sequence_length, we will query the underlying `SentenceTransformer` object used in the RecursiveCharacterTextSplitter.\nprint(\n f\"Model's maximum sequence length: {SentenceTransformer('thenlper/gte-small').max_seq_length}\"\n)\n\nfrom transformers import AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained(\"thenlper/gte-small\")\nlengths = [len(tokenizer.encode(doc.page_content)) for doc in tqdm(docs_processed)]\n\n# Plot the distrubution of document lengths, counted as the number of tokens\nfig = pd.Series(lengths).hist()\nplt.title(\"Distribution of document lengths in the knowledge base (in count of tokens)\")\nplt.show()\n\n👀 As you can see, the chunk lengths are not aligned with our limit of 512 tokens, and some documents are above the limit, thus some part of them will be lost in truncation! - So we should change the RecursiveCharacterTextSplitter class to count length in number of tokens instead of number of characters. - Then we can choose a specific chunk size, here we would choose a lower threshold than 512: - smaller documents could allow the split to focus more on specific ideas. 
- But too small chunks would split sentences in half, thus losing meaning again: the proper tuning is a matter of balance.\n\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\nfrom transformers import AutoTokenizer\n\nEMBEDDING_MODEL_NAME = \"thenlper/gte-small\"\n\n\ndef split_documents(\n chunk_size: int,\n knowledge_base: List[LangchainDocument],\n tokenizer_name: Optional[str] = EMBEDDING_MODEL_NAME,\n) -> List[LangchainDocument]:\n \"\"\"\n Split documents into chunks of maximum size `chunk_size` tokens and return a list of documents.\n \"\"\"\n text_splitter = RecursiveCharacterTextSplitter.from_huggingface_tokenizer(\n AutoTokenizer.from_pretrained(tokenizer_name),\n chunk_size=chunk_size,\n chunk_overlap=int(chunk_size / 10),\n add_start_index=True,\n strip_whitespace=True,\n separators=MARKDOWN_SEPARATORS,\n )\n\n docs_processed = []\n for doc in knowledge_base:\n docs_processed += text_splitter.split_documents([doc])\n\n # Remove duplicates\n unique_texts = {}\n docs_processed_unique = []\n for doc in docs_processed:\n if doc.page_content not in unique_texts:\n unique_texts[doc.page_content] = True\n docs_processed_unique.append(doc)\n\n return docs_processed_unique\n\n\ndocs_processed = split_documents(\n 512, # We choose a chunk size adapted to our model\n RAW_KNOWLEDGE_BASE,\n tokenizer_name=EMBEDDING_MODEL_NAME,\n)\n\n# Let's visualize the chunk sizes we would have in tokens from a common model\nfrom transformers import AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained(EMBEDDING_MODEL_NAME)\nlengths = [len(tokenizer.encode(doc.page_content)) for doc in tqdm(docs_processed)]\nfig = pd.Series(lengths).hist()\nplt.title(\"Distribution of document lengths in the knowledge base (in count of tokens)\")\nplt.show()\n\n➡️ Now the chunk length distribution looks better!\n\n\n1.2 Building the vector database\nWe want to compute the embeddings for all the chunks of our knowledge base: to learn more about sentence embeddings, we recommend reading this guide.\n\nHow does retrieval work?\nOnce the chunks are all embedded, we store them into a vector database. When the user types in a query, it gets embedded by the same model previously used, and a similarity search returns the closest documents from the vector database.\nThe technical challenge is thus, given a query vector, to quickly find the nearest neighbours of this vector in the vector database. To do this, we need to choose two things: a distance, and a search algorithm to find the nearest neighbors quickly within a database of thousands of records.\n\nNearest Neighbor search algorithm\nThere are plenty of choices for the nearest neighbor search algorithm: we go with Facebook’s FAISS, since FAISS is performant enough for most use cases, and it is well known and thus widely implemented.\n\n\nDistances\nRegarding distances, you can find a good guide here. In short:\n\nCosine similarity computes similarity between two vectors as the cosine of their relative angle: it allows us to compare vector directions regardless of their magnitude. Using it requires normalizing all vectors, to rescale them into unit norm.\nDot product takes into account magnitude, with the sometimes undesirable effect that increasing a vector’s length will make it more similar to all others.\nEuclidean distance is the distance between the ends of vectors.\n\nYou can try this small exercise to check your understanding of these concepts. 
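To make the point about normalization concrete, here is a tiny numpy check showing that once two vectors are rescaled to unit norm, their dot product equals their cosine similarity; the random vectors are purely illustrative.

import numpy as np

rng = np.random.default_rng(0)
a, b = rng.normal(size=384), rng.normal(size=384)

cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
a_unit, b_unit = a / np.linalg.norm(a), b / np.linalg.norm(b)

# After normalization, the plain dot product is the cosine similarity
assert np.isclose(cosine, a_unit @ b_unit)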
But once vectors are normalized, the choice of a specific distance does not matter much.\nOur particular model works well with cosine similarity, so choose this distance, and we set it up both in the Embedding model, and in the distance_strategy argument of our FAISS index. With cosine similarity, we have to normalize our embeddings.\n\n\n\n\n\n\nWarning\n\n\n\n🚨👇 The cell below takes a few minutes to run on A10G!\n\n\n\nfrom langchain.vectorstores import FAISS\nfrom langchain_community.embeddings import HuggingFaceEmbeddings\nfrom langchain_community.vectorstores.utils import DistanceStrategy\n\nembedding_model = HuggingFaceEmbeddings(\n model_name=EMBEDDING_MODEL_NAME,\n multi_process=True,\n model_kwargs={\"device\": \"cuda\"},\n encode_kwargs={\"normalize_embeddings\": True}, # set True for cosine similarity\n)\n\nKNOWLEDGE_VECTOR_DATABASE = FAISS.from_documents(\n docs_processed, embedding_model, distance_strategy=DistanceStrategy.COSINE\n)\n\n👀 To visualize the search for the closest documents, let’s project our embeddings from 384 dimensions down to 2 dimensions using PaCMAP.\n\n\n\n\n\n\nNote\n\n\n\n💡 We chose PaCMAP rather than other techniques such as t-SNE or UMAP, since it is efficient (preserves local and global structure), robust to initialization parameters and fast.\n\n\n\n# embed a user query in the same space\nuser_query = \"How to create a pipeline object?\"\nquery_vector = embedding_model.embed_query(user_query)\n\n\nimport pacmap\nimport numpy as np\nimport plotly.express as px\n\nembedding_projector = pacmap.PaCMAP(\n n_components=2, n_neighbors=None, MN_ratio=0.5, FP_ratio=2.0, random_state=1\n)\n\nembeddings_2d = [\n list(KNOWLEDGE_VECTOR_DATABASE.index.reconstruct_n(idx, 1)[0])\n for idx in range(len(docs_processed))\n] + [query_vector]\n\n# fit the data (The index of transformed data corresponds to the index of the original data)\ndocuments_projected = embedding_projector.fit_transform(np.array(embeddings_2d), init=\"pca\")\n\n\ndf = pd.DataFrame.from_dict(\n [\n {\n \"x\": documents_projected[i, 0],\n \"y\": documents_projected[i, 1],\n \"source\": docs_processed[i].metadata[\"source\"].split(\"/\")[1],\n \"extract\": docs_processed[i].page_content[:100] + \"...\",\n \"symbol\": \"circle\",\n \"size_col\": 4,\n }\n for i in range(len(docs_processed))\n ]\n + [\n {\n \"x\": documents_projected[-1, 0],\n \"y\": documents_projected[-1, 1],\n \"source\": \"User query\",\n \"extract\": user_query,\n \"size_col\": 100,\n \"symbol\": \"star\",\n }\n ]\n)\n\n# visualize the embedding\nfig = px.scatter(\n df,\n x=\"x\",\n y=\"y\",\n color=\"source\",\n hover_data=\"extract\",\n size=\"size_col\",\n symbol=\"symbol\",\n color_discrete_map={\"User query\": \"black\"},\n width=1000,\n height=700,\n)\nfig.update_traces(\n marker=dict(opacity=1, line=dict(width=0, color=\"DarkSlateGrey\")), selector=dict(mode=\"markers\")\n)\nfig.update_layout(\n legend_title_text=\"<b>Chunk source</b>\",\n title=\"<b>2D Projection of Chunk Embeddings via PaCMAP</b>\",\n)\nfig.show()\n\n\n➡️ On the graph above, you can see a spatial representation of the kowledge base documents. 
As the vector embeddings represent the documents’ meaning, their closeness in meaning should be reflected in the closeness of their embeddings.\nThe user query’s embedding is also shown: we want to find the k documents that have the closest meaning, thus we pick the k closest vectors.\nIn the LangChain vector database implementation, this search operation is performed by the method vector_database.similarity_search(query).\nHere is the result:\n\nprint(f\"\\nStarting retrieval for {user_query=}...\")\nretrieved_docs = KNOWLEDGE_VECTOR_DATABASE.similarity_search(query=user_query, k=5)\nprint(\"\\n==================================Top document==================================\")\nprint(retrieved_docs[0].page_content)\nprint(\"==================================Metadata==================================\")\nprint(retrieved_docs[0].metadata)\n\n\n\n\n\n\n2. Reader - LLM 💬\nIn this part, the LLM Reader reads the retrieved context to formulate its answer.\nThere are actually substeps that can all be tuned: 1. The content of the retrieved documents is aggregated together into the “context”, with many processing options like prompt compression. 2. The context and the user query are aggregated into a prompt then given to the LLM to generate its answer.\n\n2.1. Reader model\nThe choice of a reader model is important in a few respects: - the reader model’s max_seq_length must accommodate our prompt, which includes the context output by the retriever call: the context consists of 5 documents of 512 tokens each, so we aim for a context length of 4k tokens at least. - the reader model\nFor this example, we chose HuggingFaceH4/zephyr-7b-beta, a small but powerful model.\n\n\n\n\n\n\nNote\n\n\n\nWith many models being released every week, you may want to substitute this model with the latest and greatest. The best way to keep track of open source LLMs is to check the Open-source LLM leaderboard.\n\n\nTo make inference faster, we will load the quantized version of the model:\n\nfrom transformers import pipeline\nimport torch\nfrom transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig\n\nREADER_MODEL_NAME = \"HuggingFaceH4/zephyr-7b-beta\"\n\nbnb_config = BitsAndBytesConfig(\n load_in_4bit=True,\n bnb_4bit_use_double_quant=True,\n bnb_4bit_quant_type=\"nf4\",\n bnb_4bit_compute_dtype=torch.bfloat16,\n)\nmodel = AutoModelForCausalLM.from_pretrained(READER_MODEL_NAME, quantization_config=bnb_config)\ntokenizer = AutoTokenizer.from_pretrained(READER_MODEL_NAME)\n\nREADER_LLM = pipeline(\n model=model,\n tokenizer=tokenizer,\n task=\"text-generation\",\n do_sample=True,\n temperature=0.2,\n repetition_penalty=1.1,\n return_full_text=False,\n max_new_tokens=500,\n)\n\n\nREADER_LLM(\"What is 4+4? Answer:\")\n\n\n\n2.2. 
Prompt\nThe RAG prompt template below is what we will feed to the Reader LLM: it is important to have it formatted in the Reader LLM’s chat template.\nWe give it our context and the user’s question.\n\nprompt_in_chat_format = [\n {\n \"role\": \"system\",\n \"content\": \"\"\"Using the information contained in the context,\ngive a comprehensive answer to the question.\nRespond only to the question asked, response should be concise and relevant to the question.\nProvide the number of the source document when relevant.\nIf the answer cannot be deduced from the context, do not give an answer.\"\"\",\n },\n {\n \"role\": \"user\",\n \"content\": \"\"\"Context:\n{context}\n---\nNow here is the question you need to answer.\n\nQuestion: {question}\"\"\",\n },\n]\nRAG_PROMPT_TEMPLATE = tokenizer.apply_chat_template(\n prompt_in_chat_format, tokenize=False, add_generation_prompt=True\n)\nprint(RAG_PROMPT_TEMPLATE)\n\nLet’s test our Reader on our previously retrieved documents!\n\nretrieved_docs_text = [\n doc.page_content for doc in retrieved_docs\n] # we only need the text of the documents\ncontext = \"\\nExtracted documents:\\n\"\ncontext += \"\".join([f\"Document {str(i)}:::\\n\" + doc for i, doc in enumerate(retrieved_docs_text)])\n\nfinal_prompt = RAG_PROMPT_TEMPLATE.format(\n question=\"How to create a pipeline object?\", context=context\n)\n\n# Redact an answer\nanswer = READER_LLM(final_prompt)[0][\"generated_text\"]\nprint(answer)\n\n\n\n2.3. Reranking\nA good option for RAG is to retrieve more documents than you want in the end, then rerank the results with a more powerful retrieval model before keeping only the top_k.\nFor this, Colbertv2 is a great choice: instead of a bi-encoder like our classical embedding models, it is a cross-encoder that computes more fine-grained interactions between the query tokens and each document’s tokens.\nIt is easily usable thanks to the RAGatouille library.\n\nfrom ragatouille import RAGPretrainedModel\n\nRERANKER = RAGPretrainedModel.from_pretrained(\"colbert-ir/colbertv2.0\")\n\n\n\n\n3. 
Assembling it all!\n\nfrom transformers import Pipeline\n\n\ndef answer_with_rag(\n question: str,\n llm: Pipeline,\n knowledge_index: FAISS,\n reranker: Optional[RAGPretrainedModel] = None,\n num_retrieved_docs: int = 30,\n num_docs_final: int = 5,\n) -> Tuple[str, List[LangchainDocument]]:\n # Gather documents with retriever\n print(\"=> Retrieving documents...\")\n relevant_docs = knowledge_index.similarity_search(query=question, k=num_retrieved_docs)\n relevant_docs = [doc.page_content for doc in relevant_docs] # keep only the text\n\n # Optionally rerank results\n if reranker:\n print(\"=> Reranking documents...\")\n relevant_docs = reranker.rerank(question, relevant_docs, k=num_docs_final)\n relevant_docs = [doc[\"content\"] for doc in relevant_docs]\n\n relevant_docs = relevant_docs[:num_docs_final]\n\n # Build the final prompt\n context = \"\\nExtracted documents:\\n\"\n context += \"\".join([f\"Document {str(i)}:::\\n\" + doc for i, doc in enumerate(relevant_docs)])\n\n final_prompt = RAG_PROMPT_TEMPLATE.format(question=question, context=context)\n\n # Generate an answer\n print(\"=> Generating answer...\")\n answer = llm(final_prompt)[0][\"generated_text\"]\n\n return answer, relevant_docs\n\nLet’s see how our RAG pipeline answers a user query.\n\nquestion = \"how to create a pipeline object?\"\n\nanswer, relevant_docs = answer_with_rag(\n question, READER_LLM, KNOWLEDGE_VECTOR_DATABASE, reranker=RERANKER\n)\n\n\nprint(\"==================================Answer==================================\")\nprint(f\"{answer}\")\nprint(\"==================================Source docs==================================\")\nfor i, doc in enumerate(relevant_docs):\n print(f\"Document {i}------------------------------------------------------------\")\n print(doc)\n\n✅ We now have a fully functional, performant RAG system. That’s it for today! Congratulations on making it to the end 🥳\n\n\nTo go further 🗺️\nThis is not the end of the journey! You can try many steps to improve your RAG system. We recommend doing so in an iterative way: make small changes to the system and see what improves performance.\n\nSetting up an evaluation pipeline\n\n💬 “You cannot improve the model performance that you do not measure”, said Gandhi… or at least Llama2 told me he said it. Anyway, you should absolutely start by measuring performance: this means building a small evaluation dataset, then monitoring the performance of your RAG system on this evaluation dataset.\n\n\n\nImproving the retriever\n🛠️ You can use these options to tune the results:\n\nTune the chunking method:\n\nSize of the chunks\nMethod: split on different separators, use semantic chunking…\n\nChange the embedding model\n\n👷♀️ More could be considered: - Try another chunking method, like semantic chunking - Change the index used (here, FAISS) - Query expansion: reformulate the user query in slightly different ways to retrieve more documents.\n\n\nImproving the reader\n🛠️ Here you can try the following options to improve results: - Tune the prompt - Switch reranking on/off - Choose a more powerful reader model\n💡 Many options could be considered here to further improve the results: - Compress the retrieved context to keep only the most relevant parts to answer the query. - Extend the RAG system to make it more user-friendly: - cite sources - make conversational",
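As a minimal starting point for the evaluation pipeline suggested above, you could run the assembled RAG chain over a hand-written list of test questions and store the generations for later judging; the questions and file name below are placeholders.

import json

eval_questions = [
    "How to create a pipeline object?",
    "How do I push a model to the Hub?",
]

eval_outputs = []
for q in eval_questions:
    answer, sources = answer_with_rag(q, READER_LLM, KNOWLEDGE_VECTOR_DATABASE, reranker=RERANKER)
    eval_outputs.append({"question": q, "answer": answer, "sources": sources})

# Review these manually (or with an LLM judge) to track how changes to the system affect quality
with open("rag_eval_outputs.json", "w") as f:
    json.dump(eval_outputs, f, indent=2)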
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"RAG Techniques", | |
"Advanced RAG" | |
] | |
}, | |
{ | |
"objectID": "about.html", | |
"href": "about.html", | |
"title": "About", | |
"section": "", | |
"text": "About this site" | |
}, | |
{ | |
"objectID": "presentation.html#whoami", | |
"href": "presentation.html#whoami", | |
"title": "Optimizing LLM Performance Using Triton", | |
"section": "whoami", | |
"text": "whoami\n\nMy name is Matej\nI’m a Master’s student at the Brno University of Technology\nI currently make GPUs go brrrrrr at Hugging Face 🤗" | |
}, | |
{ | |
"objectID": "presentation.html#what-is-triton", | |
"href": "presentation.html#what-is-triton", | |
"title": "Optimizing LLM Performance Using Triton", | |
"section": "What is Triton?", | |
"text": "What is Triton?\n\nNVIDIA’s open-source programming language for GPU kernels\nDesigned for AI/ML workloads\nSimplifies GPU programming compared to CUDA" | |
}, | |
{ | |
"objectID": "presentation.html#why-optimize-with-triton", | |
"href": "presentation.html#why-optimize-with-triton", | |
"title": "Optimizing LLM Performance Using Triton", | |
"section": "Why Optimize with Triton?", | |
"text": "Why Optimize with Triton?\n\nSimple yet effective\nLess headache than CUDA\nGPUs go brrrrrrr 🚀\nFeel cool when your kernel is faster than PyTorch 😎" | |
}, | |
{ | |
"objectID": "presentation.html#example-problem-kl-divergence", | |
"href": "presentation.html#example-problem-kl-divergence", | |
"title": "Optimizing LLM Performance Using Triton", | |
"section": "Example Problem: KL Divergence", | |
"text": "Example Problem: KL Divergence\n\ncommonly used in LLMs for knowledge distillation\nfor probability distributions \\(P\\) and \\(Q\\), the Kullback-Leibler divergence is defined as:\n\n\\[\nD_{KL}(P \\| Q) = \\sum_{i} P_i \\log\\left(\\frac{P_i}{Q_i}\\right)\n\\]\nimport torch\nfrom torch.nn.functional import kl_div\n\ndef kl_div_torch(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:\n return kl_div(p, q, reduction='none')" | |
}, | |
{ | |
"objectID": "presentation.html#how-about-triton", | |
"href": "presentation.html#how-about-triton", | |
"title": "Optimizing LLM Performance Using Triton", | |
"section": "How about Triton?", | |
"text": "How about Triton?\nimport triton.language as tl\n\[email protected]\ndef kl_div_triton(\n p_ptr, q_ptr, output_ptr, n_elements, BLOCK_SIZE: tl.constexpr\n):\n pid = tl.program_id(0)\n block_start = pid * BLOCK_SIZE\n offsets = block_start + tl.arange(0, BLOCK_SIZE)\n mask = offsets < n_elements\n \n p = tl.load(p_ptr + offsets, mask=mask)\n q = tl.load(q_ptr + offsets, mask=mask)\n \n output = p * (tl.log(p) - tl.log(q))\n \n tl.store(output_ptr + offsets, output, mask=mask)" | |
}, | |
{ | |
"objectID": "presentation.html#how-to-integrate-with-pytorch", | |
"href": "presentation.html#how-to-integrate-with-pytorch", | |
"title": "Optimizing LLM Performance Using Triton", | |
"section": "How to integrate with PyTorch?", | |
"text": "How to integrate with PyTorch?\n\nTriton works with pointers\nHow to use our custom kernel with PyTorch autograd?\n\nimport torch\n\nclass VectorAdd(torch.autograd.Function):\n @staticmethod\n def forward(ctx, p, q):\n ctx.save_for_backward(q)\n output = torch.empty_like(p)\n grid = (len(p) + 512 - 1) // 512\n kl_div_triton[grid](p, q, output, len(p), BLOCK_SIZE=512)\n return output\n\n @staticmethod\n def backward(ctx, grad_output):\n q = ctx.saved_tensors[0]\n # Calculate gradients (another triton kernel)\n return ..." | |
}, | |
{ | |
"objectID": "presentation.html#some-benchmarks", | |
"href": "presentation.html#some-benchmarks", | |
"title": "Optimizing LLM Performance Using Triton", | |
"section": "Some benchmarks", | |
"text": "Some benchmarks\n\nA KL Divergence kernel that is currently used in Liger Kernel written by @me" | |
}, | |
{ | |
"objectID": "presentation.html#do-i-have-to-write-everything", | |
"href": "presentation.html#do-i-have-to-write-everything", | |
"title": "Optimizing LLM Performance Using Triton", | |
"section": "Do I have to write everything?", | |
"text": "Do I have to write everything?\n\nTLDR: No\nMany cool projects already using Triton\nBetter Integration with PyTorch and even Hugging Face 🤗\nLiger Kernel, Unsloth AI, etc." | |
}, | |
{ | |
"objectID": "presentation.html#so-how-can-i-use-this-in-my-llm", | |
"href": "presentation.html#so-how-can-i-use-this-in-my-llm", | |
"title": "Optimizing LLM Performance Using Triton", | |
"section": "So how can I use this in my LLM? 🚀", | |
"text": "So how can I use this in my LLM? 🚀\n\nLiger Kernel is a great example, providing examples of how to integrate with Hugging Face 🤗 Trainer\n\n- from transformers import AutoModelForCausalLM\n+ from liger_kernel.transformers import AutoLigerKernelForCausalLM\n\nmodel_path = \"meta-llama/Meta-Llama-3-8B-Instruct\"\n\n- model = AutoModelForCausalLM.from_pretrained(model_path)\n+ model = AutoLigerKernelForCausalLM.from_pretrained(model_path)\n\n# training/inference logic..." | |
}, | |
{ | |
"objectID": "presentation.html#key-optimization-techniques-adapted-by-liger-kernel", | |
"href": "presentation.html#key-optimization-techniques-adapted-by-liger-kernel", | |
"title": "Optimizing LLM Performance Using Triton", | |
"section": "Key Optimization Techniques adapted by Liger Kernel", | |
"text": "Key Optimization Techniques adapted by Liger Kernel\n\nKernel Fusion\nDomain-specific optimizations\nMemory Access Patterns\nPreemptive memory freeing" | |
}, | |
{ | |
"objectID": "presentation.html#aaand-some-more-benchmarks", | |
"href": "presentation.html#aaand-some-more-benchmarks", | |
"title": "Optimizing LLM Performance Using Triton", | |
"section": "Aaand some more benchmarks 🚀", | |
"text": "Aaand some more benchmarks 🚀" | |
}, | |
{ | |
"objectID": "presentation.html#last-benchmark-i-promise...", | |
"href": "presentation.html#last-benchmark-i-promise...", | |
"title": "Optimizing LLM Performance Using Triton", | |
"section": "Last benchmark I promise...", | |
"text": "Last benchmark I promise...\n\n\nAttention is all you need, so I thank you for yours! 🤗" | |
}, | |
{ | |
"objectID": "index.html#whoami", | |
"href": "index.html#whoami", | |
"title": "Optimizing LLM Performance Using Triton", | |
"section": "whoami", | |
"text": "whoami\n\nMy name is Matej\nI’m a Master’s student at the Brno University of Technology\nI’m currently working on distributed training at Hugging Face 🤗", | |
"crumbs": [ | |
"About", | |
"About Quarto" | |
] | |
}, | |
{ | |
"objectID": "index.html#what-is-triton", | |
"href": "index.html#what-is-triton", | |
"title": "Optimizing LLM Performance Using Triton", | |
"section": "What is Triton?", | |
"text": "What is Triton?\n\nNVIDIA’s open-source programming language for GPU kernels\nDesigned for AI/ML workloads\nSimplifies GPU programming compared to CUDA", | |
"crumbs": [ | |
"About", | |
"About Quarto" | |
] | |
}, | |
{ | |
"objectID": "index.html#why-optimize-with-triton", | |
"href": "index.html#why-optimize-with-triton", | |
"title": "Optimizing LLM Performance Using Triton", | |
"section": "Why Optimize with Triton?", | |
"text": "Why Optimize with Triton?\n\nSimple yet effective\nLess headache than CUDA\nGPUs go brrrrrrr 🚀\nFeel cool when your kernel is faster than PyTorch 😎", | |
"crumbs": [ | |
"About", | |
"About Quarto" | |
] | |
}, | |
{ | |
"objectID": "index.html#example-problem-kl-divergence", | |
"href": "index.html#example-problem-kl-divergence", | |
"title": "Optimizing LLM Performance Using Triton", | |
"section": "Example Problem: KL Divergence", | |
"text": "Example Problem: KL Divergence\n\ncommonly used in LLMs for knowledge distillation\nfor probability distributions \\(P\\) and \\(Q\\), the Kullback-Leibler divergence is defined as:\n\n\\[\nD_{KL}(P \\| Q) = \\sum_{i} P_i \\log\\left(\\frac{P_i}{Q_i}\\right)\n\\]\nimport torch\nfrom torch.nn.functional import kl_div\n\ndef kl_div_torch(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:\n return kl_div(p, q)", | |
"crumbs": [ | |
"About", | |
"About Quarto" | |
] | |
}, | |
{ | |
"objectID": "index.html#how-about-triton", | |
"href": "index.html#how-about-triton", | |
"title": "Optimizing LLM Performance Using Triton", | |
"section": "How about Triton?", | |
"text": "How about Triton?\nimport triton\nimport triton.language as tl\n\[email protected]\ndef kl_div_triton(\n p_ptr, q_ptr, output_ptr, n_elements, BLOCK_SIZE: tl.constexpr\n):\n pid = tl.program_id(0)\n block_start = pid * BLOCK_SIZE\n offsets = block_start + tl.arange(0, BLOCK_SIZE)\n mask = offsets < n_elements\n \n p = tl.load(p_ptr + offsets, mask=mask)\n q = tl.load(q_ptr + offsets, mask=mask)\n \n output = p * (tl.log(p) - tl.log(q))\n tl.store(output_ptr + offsets, output, mask=mask)", | |
"crumbs": [ | |
"About", | |
"About Quarto" | |
] | |
}, | |
{ | |
"objectID": "index.html#how-to-integrate-with-pytorch", | |
"href": "index.html#how-to-integrate-with-pytorch", | |
"title": "Optimizing LLM Performance Using Triton", | |
"section": "How to integrate with PyTorch?", | |
"text": "How to integrate with PyTorch?\n\nHow to use our custom kernel with PyTorch autograd?\n\nimport torch\n\nclass VectorAdd(torch.autograd.Function):\n @staticmethod\n def forward(ctx, p, q):\n ctx.save_for_backward(q)\n output = torch.empty_like(p)\n grid = (len(p) + 512 - 1) // 512\n kl_div_triton[grid](p, q, output, len(p), BLOCK_SIZE=512)\n return output\n\n @staticmethod\n def backward(ctx, grad_output):\n q = ctx.saved_tensors[0]\n # Calculate gradients (another triton kernel)\n return ...", | |
"crumbs": [ | |
"About", | |
"About Quarto" | |
] | |
}, | |
{ | |
"objectID": "index.html#some-benchmarks", | |
"href": "index.html#some-benchmarks", | |
"title": "Optimizing LLM Performance Using Triton", | |
"section": "Some benchmarks", | |
"text": "Some benchmarks\n\nA KL Divergence kernel that is currently used in Liger Kernel written by @me", | |
"crumbs": [ | |
"About", | |
"About Quarto" | |
] | |
}, | |
{ | |
"objectID": "index.html#do-i-have-to-write-everything", | |
"href": "index.html#do-i-have-to-write-everything", | |
"title": "Optimizing LLM Performance Using Triton", | |
"section": "Do I have to write everything?", | |
"text": "Do I have to write everything?\n\nTLDR: No\nMany cool projects already using Triton\nBetter Integration with PyTorch and even Hugging Face 🤗\nLiger Kernel, Unsloth AI, etc.", | |
"crumbs": [ | |
"About", | |
"About Quarto" | |
] | |
}, | |
{ | |
"objectID": "index.html#so-how-can-i-use-this-in-my-llm", | |
"href": "index.html#so-how-can-i-use-this-in-my-llm", | |
"title": "Optimizing LLM Performance Using Triton", | |
"section": "So how can I use this in my LLM? 🚀", | |
"text": "So how can I use this in my LLM? 🚀\n\nLiger Kernel is a great example, providing examples of how to integrate with Hugging Face 🤗 Trainer\n\n- from transformers import AutoModelForCausalLM\n+ from liger_kernel.transformers import AutoLigerKernelForCausalLM\n\nmodel_path = \"meta-llama/Meta-Llama-3-8B-Instruct\"\n\n- model = AutoModelForCausalLM.from_pretrained(model_path)\n+ model = AutoLigerKernelForCausalLM.from_pretrained(model_path)\n\n# training/inference logic...", | |
"crumbs": [ | |
"About", | |
"About Quarto" | |
] | |
}, | |
{ | |
"objectID": "index.html#key-optimization-techniques-adapted-by-liger-kernel", | |
"href": "index.html#key-optimization-techniques-adapted-by-liger-kernel", | |
"title": "Optimizing LLM Performance Using Triton", | |
"section": "Key Optimization Techniques adapted by Liger Kernel", | |
"text": "Key Optimization Techniques adapted by Liger Kernel\n\nKernel Fusion\nDomain-specific optimizations\nMemory Access Patterns\nPreemptive memory freeing", | |
"crumbs": [ | |
"About", | |
"About Quarto" | |
] | |
}, | |
{ | |
"objectID": "index.html#aaand-some-more-benchmarks", | |
"href": "index.html#aaand-some-more-benchmarks", | |
"title": "Optimizing LLM Performance Using Triton", | |
"section": "Aaand some more benchmarks 🚀", | |
"text": "Aaand some more benchmarks 🚀\n\nSaving memory is key to run bigger batch size on smaller GPUs", | |
"crumbs": [ | |
"About", | |
"About Quarto" | |
] | |
}, | |
{ | |
"objectID": "index.html#last-benchmark-i-promise...", | |
"href": "index.html#last-benchmark-i-promise...", | |
"title": "Optimizing LLM Performance Using Triton", | |
"section": "Last benchmark I promise...", | |
"text": "Last benchmark I promise...\n\nBut is it faster? Yes, it is!\n\n\n\n\n\n\n\n\nAttention is all you need, so I thank you for yours! 🤗", | |
"crumbs": [ | |
"About", | |
"About Quarto" | |
] | |
}, | |
{ | |
"objectID": "notebooks/single_gpu.html", | |
"href": "notebooks/single_gpu.html", | |
"title": "Single GPU Fine-tuning", | |
"section": "", | |
"text": "Authored by: Maria Khalusova\nPublicly available code LLMs such as Codex, StarCoder, and Code Llama are great at generating code that adheres to general programming principles and syntax, but they may not align with an organization’s internal conventions, or be aware of proprietary libraries.\nIn this notebook, we’ll see show how you can fine-tune a code LLM on private code bases to enhance its contextual awareness and improve a model’s usefulness to your organization’s needs. Since the code LLMs are quite large, fine-tuning them in a traditional manner can be resource-draining. Worry not! We will show how you can optimize fine-tuning to fit on a single GPU.\n\n\nFor this example, we picked the top 10 Hugging Face public repositories on GitHub. We have excluded non-code files from the data, such as images, audio files, presentations, and so on. For Jupyter notebooks, we’ve kept only cells containing code. The resulting code is stored as a dataset that you can find on the Hugging Face Hub under smangrul/hf-stack-v1. It contains repo id, file path, and file content.\n\n\n\nWe’ll finetune bigcode/starcoderbase-1b, which is a 1B parameter model trained on 80+ programming languages. This is a gated model, so if you plan to run this notebook with this exact model, you’ll need to gain access to it on the model’s page. Log in to your Hugging Face account to do so:\n\nfrom huggingface_hub import notebook_login\n\nnotebook_login()\n\nTo get started, let’s install all the necessary libraries. As you can see, in addition to transformers and datasets, we’ll be using peft, bitsandbytes, and flash-attn to optimize the training.\nBy employing parameter-efficient training techniques, we can run this notebook on a single A100 High-RAM GPU.\n\n!pip install -q transformers datasets peft bitsandbytes flash-attn\n\nLet’s define some variables now. Feel free to play with these.\n\nMODEL=\"bigcode/starcoderbase-1b\" # Model checkpoint on the Hugging Face Hub\nDATASET=\"smangrul/hf-stack-v1\" # Dataset on the Hugging Face Hub\nDATA_COLUMN=\"content\" # Column name containing the code content\n\nSEQ_LENGTH=2048 # Sequence length\n\n# Training arguments\nMAX_STEPS=2000 # max_steps\nBATCH_SIZE=16 # batch_size\nGR_ACC_STEPS=1 # gradient_accumulation_steps\nLR=5e-4 # learning_rate\nLR_SCHEDULER_TYPE=\"cosine\" # lr_scheduler_type\nWEIGHT_DECAY=0.01 # weight_decay\nNUM_WARMUP_STEPS=30 # num_warmup_steps\nEVAL_FREQ=100 # eval_freq\nSAVE_FREQ=100 # save_freq\nLOG_FREQ=25 # log_freq\nOUTPUT_DIR=\"peft-starcoder-lora-a100\" # output_dir\nBF16=True # bf16\nFP16=False # no_fp16\n\n# FIM trasformations arguments\nFIM_RATE=0.5 # fim_rate\nFIM_SPM_RATE=0.5 # fim_spm_rate\n\n# LORA\nLORA_R=8 # lora_r\nLORA_ALPHA=32 # lora_alpha\nLORA_DROPOUT=0.0 # lora_dropout\nLORA_TARGET_MODULES=\"c_proj,c_attn,q_attn,c_fc,c_proj\" # lora_target_modules\n\n# bitsandbytes config\nUSE_NESTED_QUANT=True # use_nested_quant\nBNB_4BIT_COMPUTE_DTYPE=\"bfloat16\"# bnb_4bit_compute_dtype\n\nSEED=0\n\n\nfrom transformers import (\n AutoModelForCausalLM,\n AutoTokenizer,\n Trainer,\n TrainingArguments,\n logging,\n set_seed,\n BitsAndBytesConfig,\n)\n\nset_seed(SEED)\n\n\n\n\nBegin by loading the data. As the dataset is likely to be quite large, make sure to enable the streaming mode. 
Streaming allows us to load the data progressively as we iterate over the dataset instead of downloading the whole dataset at once.\nWe’ll reserve the first 4000 examples as the validation set, and everything else will be the training data.\n\nfrom datasets import load_dataset\nimport torch\nfrom tqdm import tqdm\n\n\ndataset = load_dataset(\n DATASET,\n data_dir=\"data\",\n split=\"train\",\n streaming=True,\n)\n\nvalid_data = dataset.take(4000)\ntrain_data = dataset.skip(4000)\ntrain_data = train_data.shuffle(buffer_size=5000, seed=SEED)\n\nAt this step, the dataset still contains raw data with code of arbitrary length. For training, we need inputs of fixed length. Let’s create an Iterable dataset that will return constant-length chunks of tokens from a stream of text files.\nFirst, let’s estimate the average number of characters per token in the dataset, which will help us later estimate the number of tokens in the text buffer. By default, we’ll only take 400 examples (nb_examples) from the dataset. Using only a subset of the entire dataset will reduce computational cost while still providing a reasonable estimate of the overall character-to-token ratio.\n\ntokenizer = AutoTokenizer.from_pretrained(MODEL, trust_remote_code=True)\n\ndef chars_token_ratio(dataset, tokenizer, data_column, nb_examples=400):\n \"\"\"\n Estimate the average number of characters per token in the dataset.\n \"\"\"\n\n total_characters, total_tokens = 0, 0\n for _, example in tqdm(zip(range(nb_examples), iter(dataset)), total=nb_examples):\n total_characters += len(example[data_column])\n total_tokens += len(tokenizer(example[data_column]).tokens())\n\n return total_characters / total_tokens\n\n\nchars_per_token = chars_token_ratio(train_data, tokenizer, DATA_COLUMN)\nprint(f\"The character to token ratio of the dataset is: {chars_per_token:.2f}\")\n\n100%|██████████| 400/400 [00:10<00:00, 39.87it/s] \n\n\nThe character to token ratio of the dataset is: 2.43\n\n\n\n\n\nThe character-to-token ratio can also be used as an indicator of the quality of text tokenization. For instance, a character-to-token ratio of 1.0 would mean that each character is represented with a token, which is not very meaningful. This would indicate poor tokenization. In standard English text, one token is typically equivalent to approximately four characters, meaning the character-to-token ratio is around 4.0. We can expect a lower ratio in the code dataset, but generally speaking, a number between 2.0 and 3.5 can be considered good enough.\nOptional FIM transformations\nAutoregressive language models typically generate sequences from left to right. By applying the FIM transformations, the model can also learn to infill text. Check out the “Efficient Training of Language Models to Fill in the Middle” paper to learn more about the technique. We’ll define the FIM transformations here and will use them when creating the Iterable Dataset. 
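To make the two layouts concrete before we define them, here is a minimal sketch (not from the original notebook, using made-up integer ids in place of the tokenizer’s <fim_prefix>, <fim_middle> and <fim_suffix> special tokens) of the orderings produced by the permute function below:\n\nsample = [11, 12, 13, 14, 15, 16]\n# split the sample into prefix / middle / suffix pieces\nprefix, middle, suffix = sample[:2], sample[2:4], sample[4:]\n# placeholder ids standing in for <fim_prefix>, <fim_middle>, <fim_suffix>\nPREFIX_TOK, MIDDLE_TOK, SUFFIX_TOK = 1, 2, 3\n# PSM layout: prefix, suffix, middle\npsm = [PREFIX_TOK] + prefix + [SUFFIX_TOK] + suffix + [MIDDLE_TOK] + middle\n# SPM layout: suffix, prefix, middle\nspm = [PREFIX_TOK, SUFFIX_TOK] + suffix + [MIDDLE_TOK] + prefix + middle\nprint(psm) # [1, 11, 12, 3, 15, 16, 2, 13, 14]\nprint(spm) # [1, 3, 15, 16, 2, 11, 12, 13, 14]\n\nThe model still predicts tokens left to right, but because the middle chunk comes last, it learns to fill in the gap between a given prefix and suffix.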
However, if you want to omit transformations, feel free to set fim_rate to 0.\n\nimport functools\nimport numpy as np\n\n\n# Helper function to get token ids of the special tokens for prefix, suffix and middle for FIM transformations.\[email protected]_cache(maxsize=None)\ndef get_fim_token_ids(tokenizer):\n try:\n FIM_PREFIX, FIM_MIDDLE, FIM_SUFFIX, FIM_PAD = tokenizer.special_tokens_map[\"additional_special_tokens\"][1:5]\n suffix_tok_id, prefix_tok_id, middle_tok_id, pad_tok_id = (\n tokenizer.vocab[tok] for tok in [FIM_SUFFIX, FIM_PREFIX, FIM_MIDDLE, FIM_PAD]\n )\n except KeyError:\n suffix_tok_id, prefix_tok_id, middle_tok_id, pad_tok_id = None, None, None, None\n return suffix_tok_id, prefix_tok_id, middle_tok_id, pad_tok_id\n\n\n## Adapted from https://github.com/bigcode-project/Megatron-LM/blob/6c4bf908df8fd86b4977f54bf5b8bd4b521003d1/megatron/data/gpt_dataset.py\ndef permute(\n sample,\n np_rng,\n suffix_tok_id,\n prefix_tok_id,\n middle_tok_id,\n pad_tok_id,\n fim_rate=0.5,\n fim_spm_rate=0.5,\n truncate_or_pad=False,\n):\n \"\"\"\n Take in a sample (list of tokens) and perform a FIM transformation on it with a probability of fim_rate, using two FIM modes:\n PSM and SPM (with a probability of fim_spm_rate).\n \"\"\"\n\n # The if condition will trigger with the probability of fim_rate\n # This means FIM transformations will apply to samples with a probability of fim_rate\n if np_rng.binomial(1, fim_rate):\n\n # Split the sample into prefix, middle, and suffix, based on randomly generated indices stored in the boundaries list.\n boundaries = list(np_rng.randint(low=0, high=len(sample) + 1, size=2))\n boundaries.sort()\n\n prefix = np.array(sample[: boundaries[0]], dtype=np.int64)\n middle = np.array(sample[boundaries[0] : boundaries[1]], dtype=np.int64)\n suffix = np.array(sample[boundaries[1] :], dtype=np.int64)\n\n if truncate_or_pad:\n # calculate the new total length of the sample, taking into account tokens indicating prefix, middle, and suffix\n new_length = suffix.shape[0] + prefix.shape[0] + middle.shape[0] + 3\n diff = new_length - len(sample)\n\n # trancate or pad if there's a difference in length between the new length and the original\n if diff > 0:\n if suffix.shape[0] <= diff:\n return sample, np_rng\n suffix = suffix[: suffix.shape[0] - diff]\n elif diff < 0:\n suffix = np.concatenate([suffix, np.full((-1 * diff), pad_tok_id)])\n\n # With the probability of fim_spm_rateapply SPM variant of FIM transformations\n # SPM: suffix, prefix, middle\n if np_rng.binomial(1, fim_spm_rate):\n new_sample = np.concatenate(\n [\n [prefix_tok_id, suffix_tok_id],\n suffix,\n [middle_tok_id],\n prefix,\n middle,\n ]\n )\n # Otherwise, apply the PSM variant of FIM transformations\n # PSM: prefix, suffix, middle\n else:\n\n new_sample = np.concatenate(\n [\n [prefix_tok_id],\n prefix,\n [suffix_tok_id],\n suffix,\n [middle_tok_id],\n middle,\n ]\n )\n else:\n # don't apply FIM transformations\n new_sample = sample\n\n return list(new_sample), np_rng\n\nLet’s define the ConstantLengthDataset, an Iterable dataset that will return constant-length chunks of tokens. To do so, we’ll read a buffer of text from the original dataset until we hit the size limits and then apply tokenizer to convert the raw text into tokenized inputs. 
Optionally, we’ll perform FIM transformations on some sequences (the proportion of sequences affected is controlled by fim_rate).\nOnce defined, we can create instances of the ConstantLengthDataset from both training and validation data.\n\nfrom torch.utils.data import IterableDataset\nfrom torch.utils.data.dataloader import DataLoader\nimport random\n\n# Create an Iterable dataset that returns constant-length chunks of tokens from a stream of text files.\n\nclass ConstantLengthDataset(IterableDataset):\n \"\"\"\n Iterable dataset that returns constant length chunks of tokens from stream of text files.\n Args:\n tokenizer (Tokenizer): The processor used for proccessing the data.\n dataset (dataset.Dataset): Dataset with text files.\n infinite (bool): If True the iterator is reset after dataset reaches end else stops.\n seq_length (int): Length of token sequences to return.\n num_of_sequences (int): Number of token sequences to keep in buffer.\n chars_per_token (int): Number of characters per token used to estimate number of tokens in text buffer.\n fim_rate (float): Rate (0.0 to 1.0) that sample will be permuted with FIM.\n fim_spm_rate (float): Rate (0.0 to 1.0) of FIM permuations that will use SPM.\n seed (int): Seed for random number generator.\n \"\"\"\n\n def __init__(\n self,\n tokenizer,\n dataset,\n infinite=False,\n seq_length=1024,\n num_of_sequences=1024,\n chars_per_token=3.6,\n content_field=\"content\",\n fim_rate=0.5,\n fim_spm_rate=0.5,\n seed=0,\n ):\n self.tokenizer = tokenizer\n self.concat_token_id = tokenizer.eos_token_id\n self.dataset = dataset\n self.seq_length = seq_length\n self.infinite = infinite\n self.current_size = 0\n self.max_buffer_size = seq_length * chars_per_token * num_of_sequences\n self.content_field = content_field\n self.fim_rate = fim_rate\n self.fim_spm_rate = fim_spm_rate\n self.seed = seed\n\n (\n self.suffix_tok_id,\n self.prefix_tok_id,\n self.middle_tok_id,\n self.pad_tok_id,\n ) = get_fim_token_ids(self.tokenizer)\n if not self.suffix_tok_id and self.fim_rate > 0:\n print(\"FIM is not supported by tokenizer, disabling FIM\")\n self.fim_rate = 0\n\n def __iter__(self):\n iterator = iter(self.dataset)\n more_examples = True\n np_rng = np.random.RandomState(seed=self.seed)\n while more_examples:\n buffer, buffer_len = [], 0\n while True:\n if buffer_len >= self.max_buffer_size:\n break\n try:\n buffer.append(next(iterator)[self.content_field])\n buffer_len += len(buffer[-1])\n except StopIteration:\n if self.infinite:\n iterator = iter(self.dataset)\n else:\n more_examples = False\n break\n tokenized_inputs = self.tokenizer(buffer, truncation=False)[\"input_ids\"]\n all_token_ids = []\n\n for tokenized_input in tokenized_inputs:\n # optionally do FIM permutations\n if self.fim_rate > 0:\n tokenized_input, np_rng = permute(\n tokenized_input,\n np_rng,\n self.suffix_tok_id,\n self.prefix_tok_id,\n self.middle_tok_id,\n self.pad_tok_id,\n fim_rate=self.fim_rate,\n fim_spm_rate=self.fim_spm_rate,\n truncate_or_pad=False,\n )\n\n all_token_ids.extend(tokenized_input + [self.concat_token_id])\n examples = []\n for i in range(0, len(all_token_ids), self.seq_length):\n input_ids = all_token_ids[i : i + self.seq_length]\n if len(input_ids) == self.seq_length:\n examples.append(input_ids)\n random.shuffle(examples)\n for example in examples:\n self.current_size += 1\n yield {\n \"input_ids\": torch.LongTensor(example),\n \"labels\": torch.LongTensor(example),\n }\n\n\ntrain_dataset = ConstantLengthDataset(\n tokenizer,\n train_data,\n infinite=True,\n 
seq_length=SEQ_LENGTH,\n chars_per_token=chars_per_token,\n content_field=DATA_COLUMN,\n fim_rate=FIM_RATE,\n fim_spm_rate=FIM_SPM_RATE,\n seed=SEED,\n)\neval_dataset = ConstantLengthDataset(\n tokenizer,\n valid_data,\n infinite=False,\n seq_length=SEQ_LENGTH,\n chars_per_token=chars_per_token,\n content_field=DATA_COLUMN,\n fim_rate=FIM_RATE,\n fim_spm_rate=FIM_SPM_RATE,\n seed=SEED,\n)\n\n\n\n\nNow that the data is prepared, it’s time to load the model! We’re going to load the quantized version of the model.\nThis will allow us to reduce memory usage, as quantization represents data with fewer bits. We’ll use the bitsandbytes library to quantize the model, as it has a nice integration with transformers. All we need to do is define a bitsandbytes config, and then use it when loading the model.\nThere are different variants of 4-bit quantization, but generally, we recommend using NF4 quantization for better performance (bnb_4bit_quant_type=\"nf4\").\nThe bnb_4bit_use_double_quant option adds a second quantization after the first one to save an additional 0.4 bits per parameter.\nTo learn more about quantization, check out the “Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA” blog post.\nOnce defined, pass the config to the from_pretrained method to load the quantized version of the model.\n\nfrom peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training\nfrom peft.tuners.lora import LoraLayer\n\nload_in_8bit = False\n\n# 4-bit quantization\ncompute_dtype = getattr(torch, BNB_4BIT_COMPUTE_DTYPE)\n\nbnb_config = BitsAndBytesConfig(\n load_in_4bit=True,\n bnb_4bit_quant_type=\"nf4\",\n bnb_4bit_compute_dtype=compute_dtype,\n bnb_4bit_use_double_quant=USE_NESTED_QUANT,\n)\n\ndevice_map = {\"\": 0}\n\nmodel = AutoModelForCausalLM.from_pretrained(\n MODEL,\n load_in_8bit=load_in_8bit,\n quantization_config=bnb_config,\n device_map=device_map,\n use_cache=False, # We will be using gradient checkpointing\n trust_remote_code=True,\n use_flash_attention_2=True,\n)\n\nWhen using a quantized model for training, you need to call the prepare_model_for_kbit_training() function to preprocess the quantized model for training.\n\nmodel = prepare_model_for_kbit_training(model)\n\nNow that the quantized model is ready, we can set up a LoRA configuration. LoRA makes fine-tuning more efficient by drastically reducing the number of trainable parameters.\nTo train a model using the LoRA technique, we need to wrap the base model as a PeftModel. This involves defining a LoRA configuration with LoraConfig, and wrapping the original model with get_peft_model() using the LoraConfig.\nTo learn more about LoRA and its parameters, refer to PEFT documentation.\n\n# Set up lora\npeft_config = LoraConfig(\n lora_alpha=LORA_ALPHA,\n lora_dropout=LORA_DROPOUT,\n r=LORA_R,\n bias=\"none\",\n task_type=\"CAUSAL_LM\",\n target_modules=LORA_TARGET_MODULES.split(\",\"),\n)\n\nmodel = get_peft_model(model, peft_config)\nmodel.print_trainable_parameters()\n\ntrainable params: 5,554,176 || all params: 1,142,761,472 || trainable%: 0.4860310866343243\n\n\nAs you can see, by applying the LoRA technique we will now need to train less than 1% of the parameters.\n\n\n\nNow that we have prepared the data and optimized the model, we are ready to bring everything together to start the training.\nTo instantiate a Trainer, you need to define the training configuration. 
The most important is the TrainingArguments, which is a class that contains all the attributes to configure the training.\nThese are similar to any other kind of model training you may run, so we won’t go into detail here.\n\ntrain_data.start_iteration = 0\n\n\ntraining_args = TrainingArguments(\n output_dir=f\"Your_HF_username/{OUTPUT_DIR}\",\n dataloader_drop_last=True,\n evaluation_strategy=\"steps\",\n save_strategy=\"steps\",\n max_steps=MAX_STEPS,\n eval_steps=EVAL_FREQ,\n save_steps=SAVE_FREQ,\n logging_steps=LOG_FREQ,\n per_device_train_batch_size=BATCH_SIZE,\n per_device_eval_batch_size=BATCH_SIZE,\n learning_rate=LR,\n lr_scheduler_type=LR_SCHEDULER_TYPE,\n warmup_steps=NUM_WARMUP_STEPS,\n gradient_accumulation_steps=GR_ACC_STEPS,\n gradient_checkpointing=True,\n fp16=FP16,\n bf16=BF16,\n weight_decay=WEIGHT_DECAY,\n push_to_hub=True,\n include_tokens_per_second=True,\n)\n\nAs a final step, instantiate the Trainer and call the train method.\n\ntrainer = Trainer(\n model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset\n)\n\nprint(\"Training...\")\ntrainer.train()\n\nTraining...\n\n\n\n \n \n \n [2000/2000 4:16:10, Epoch 1/9223372036854775807]\n \n \n\n\n\nStep\nTraining Loss\nValidation Loss\n\n\n\n\n100\n5.524600\n7.456872\n\n\n200\n5.617800\n7.262190\n\n\n300\n5.129100\n6.410039\n\n\n400\n5.052200\n6.306774\n\n\n500\n5.202900\n6.117062\n\n\n600\n4.654100\n6.018349\n\n\n700\n5.100200\n6.000355\n\n\n800\n5.049800\n5.889457\n\n\n900\n4.541200\n5.813823\n\n\n1000\n5.000700\n5.834208\n\n\n1100\n5.026500\n5.781939\n\n\n1200\n4.411800\n5.720596\n\n\n1300\n4.782500\n5.736376\n\n\n1400\n4.980200\n5.712276\n\n\n1500\n4.368700\n5.689637\n\n\n1600\n4.884700\n5.675920\n\n\n1700\n4.914400\n5.662421\n\n\n1800\n4.248700\n5.660122\n\n\n1900\n4.798400\n5.664026\n\n\n2000\n4.704200\n5.655665\n\n\n\n\n\n\nTrainOutput(global_step=2000, training_loss=4.885598585128784, metrics={'train_runtime': 15380.3075, 'train_samples_per_second': 2.081, 'train_steps_per_second': 0.13, 'train_tokens_per_second': 4261.033, 'total_flos': 4.0317260660736e+17, 'train_loss': 4.885598585128784, 'epoch': 1.0})\n\n\nFinally, you can push the fine-tuned model to your Hub repository to share with your team.\n\ntrainer.push_to_hub()\n\n\n\n\nOnce the model is uploaded to Hub, we can use it for inference. To do so we first initialize the original base model and its tokenizer. Next, we need to merge the fine-duned weights with the base model.\n\nfrom peft import PeftModel\nimport torch\n\n# load the original model first\ntokenizer = AutoTokenizer.from_pretrained(MODEL, trust_remote_code=True)\nbase_model = AutoModelForCausalLM.from_pretrained(\n MODEL,\n quantization_config=None,\n device_map=None,\n trust_remote_code=True,\n torch_dtype=torch.bfloat16,\n).cuda()\n\n# merge fine-tuned weights with the base model\npeft_model_id = f\"Your_HF_username/{OUTPUT_DIR}\"\nmodel = PeftModel.from_pretrained(base_model, peft_model_id)\nmodel.merge_and_unload()\n\nNow we can use the merged model for inference. 
For convenience, we’ll define a get_code_completion function - feel free to experiment with the text generation parameters!\n\ndef get_code_completion(prefix, suffix):\n text = prompt = f\"\"\"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>\"\"\"\n model.eval()\n outputs = model.generate(\n input_ids=tokenizer(text, return_tensors=\"pt\").input_ids.cuda(),\n max_new_tokens=128,\n temperature=0.2,\n top_k=50,\n top_p=0.95,\n do_sample=True,\n repetition_penalty=1.0,\n )\n return tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]\n\nNow all we need to do to get code completion is call the get_code_completion function and pass the first few lines that we want to be completed as a prefix, and an empty string as a suffix.\n\nprefix = \"\"\"from peft import LoraConfig, TaskType, get_peft_model\nfrom transformers import AutoModelForCausalLM\npeft_config = LoraConfig(\n\"\"\"\nsuffix =\"\"\"\"\"\"\n\nprint(get_code_completion(prefix, suffix))\n\nfrom peft import LoraConfig, TaskType, get_peft_model\nfrom transformers import AutoModelForCausalLM\npeft_config = LoraConfig(\n task_type=TaskType.CAUSAL_LM,\n r=8,\n lora_alpha=32,\n target_modules=[\"q_proj\", \"v_proj\"],\n lora_dropout=0.1,\n bias=\"none\",\n modules_to_save=[\"q_proj\", \"v_proj\"],\n inference_mode=False,\n)\nmodel = AutoModelForCausalLM.from_pretrained(\"gpt2\")\nmodel = get_peft_model(model, peft_config)\nmodel.print_trainable_parameters()\n\n\nHaving used the PEFT library earlier in this notebook, you can see that the generated result for creating a LoraConfig is rather good!\nIf you go back to the cell where we instantiate the model for inference, and comment out the lines where we merge the fine-tuned weights, you can see what the original model would’ve generated for the exact same prefix:\n\nprefix = \"\"\"from peft import LoraConfig, TaskType, get_peft_model\nfrom transformers import AutoModelForCausalLM\npeft_config = LoraConfig(\n\"\"\"\nsuffix =\"\"\"\"\"\"\n\nprint(get_code_completion(prefix, suffix))\n\nfrom peft import LoraConfig, TaskType, get_peft_model\nfrom transformers import AutoModelForCausalLM\npeft_config = LoraConfig(\n model_name_or_path=\"facebook/wav2vec2-base-960h\",\n num_labels=1,\n num_features=1,\n num_hidden_layers=1,\n num_attention_heads=1,\n num_hidden_layers_per_attention_head=1,\n num_attention_heads_per_hidden_layer=1,\n hidden_size=1024,\n hidden_dropout_prob=0.1,\n hidden_act=\"gelu\",\n hidden_act_dropout_prob=0.1,\n hidden\n\n\nWhile it is Python syntax, you can see that the original model has no understanding of what a LoraConfig should be doing.\nTo learn how this kind of fine-tuning compares to full fine-tuning, and how to use a model like this as your copilot in VS Code via Inference Endpoints, or locally, check out the “Personal Copilot: Train Your Own Coding Assistant” blog post. This notebook complements the original blog post.", | |
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"Additional Techniques", | |
"Single GPU Optimization" | |
] | |
}, | |
{ | |
"objectID": "notebooks/single_gpu.html#dataset", | |
"href": "notebooks/single_gpu.html#dataset", | |
"title": "Single GPU Fine-tuning", | |
"section": "", | |
"text": "For this example, we picked the top 10 Hugging Face public repositories on GitHub. We have excluded non-code files from the data, such as images, audio files, presentations, and so on. For Jupyter notebooks, we’ve kept only cells containing code. The resulting code is stored as a dataset that you can find on the Hugging Face Hub under smangrul/hf-stack-v1. It contains repo id, file path, and file content.", | |
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"Additional Techniques", | |
"Single GPU Optimization" | |
] | |
}, | |
{ | |
"objectID": "notebooks/single_gpu.html#model", | |
"href": "notebooks/single_gpu.html#model", | |
"title": "Single GPU Fine-tuning", | |
"section": "", | |
"text": "We’ll finetune bigcode/starcoderbase-1b, which is a 1B parameter model trained on 80+ programming languages. This is a gated model, so if you plan to run this notebook with this exact model, you’ll need to gain access to it on the model’s page. Log in to your Hugging Face account to do so:\n\nfrom huggingface_hub import notebook_login\n\nnotebook_login()\n\nTo get started, let’s install all the necessary libraries. As you can see, in addition to transformers and datasets, we’ll be using peft, bitsandbytes, and flash-attn to optimize the training.\nBy employing parameter-efficient training techniques, we can run this notebook on a single A100 High-RAM GPU.\n\n!pip install -q transformers datasets peft bitsandbytes flash-attn\n\nLet’s define some variables now. Feel free to play with these.\n\nMODEL=\"bigcode/starcoderbase-1b\" # Model checkpoint on the Hugging Face Hub\nDATASET=\"smangrul/hf-stack-v1\" # Dataset on the Hugging Face Hub\nDATA_COLUMN=\"content\" # Column name containing the code content\n\nSEQ_LENGTH=2048 # Sequence length\n\n# Training arguments\nMAX_STEPS=2000 # max_steps\nBATCH_SIZE=16 # batch_size\nGR_ACC_STEPS=1 # gradient_accumulation_steps\nLR=5e-4 # learning_rate\nLR_SCHEDULER_TYPE=\"cosine\" # lr_scheduler_type\nWEIGHT_DECAY=0.01 # weight_decay\nNUM_WARMUP_STEPS=30 # num_warmup_steps\nEVAL_FREQ=100 # eval_freq\nSAVE_FREQ=100 # save_freq\nLOG_FREQ=25 # log_freq\nOUTPUT_DIR=\"peft-starcoder-lora-a100\" # output_dir\nBF16=True # bf16\nFP16=False # no_fp16\n\n# FIM trasformations arguments\nFIM_RATE=0.5 # fim_rate\nFIM_SPM_RATE=0.5 # fim_spm_rate\n\n# LORA\nLORA_R=8 # lora_r\nLORA_ALPHA=32 # lora_alpha\nLORA_DROPOUT=0.0 # lora_dropout\nLORA_TARGET_MODULES=\"c_proj,c_attn,q_attn,c_fc,c_proj\" # lora_target_modules\n\n# bitsandbytes config\nUSE_NESTED_QUANT=True # use_nested_quant\nBNB_4BIT_COMPUTE_DTYPE=\"bfloat16\"# bnb_4bit_compute_dtype\n\nSEED=0\n\n\nfrom transformers import (\n AutoModelForCausalLM,\n AutoTokenizer,\n Trainer,\n TrainingArguments,\n logging,\n set_seed,\n BitsAndBytesConfig,\n)\n\nset_seed(SEED)", | |
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"Additional Techniques", | |
"Single GPU Optimization" | |
] | |
}, | |
{ | |
"objectID": "notebooks/single_gpu.html#prepare-the-data", | |
"href": "notebooks/single_gpu.html#prepare-the-data", | |
"title": "Single GPU Fine-tuning", | |
"section": "", | |
"text": "Begin by loading the data. As the dataset is likely to be quite large, make sure to enable the streaming mode. Streaming allows us to load the data progressively as we iterate over the dataset instead of downloading the whole dataset at once.\nWe’ll reserve the first 4000 examples as the validation set, and everything else will be the training data.\n\nfrom datasets import load_dataset\nimport torch\nfrom tqdm import tqdm\n\n\ndataset = load_dataset(\n DATASET,\n data_dir=\"data\",\n split=\"train\",\n streaming=True,\n)\n\nvalid_data = dataset.take(4000)\ntrain_data = dataset.skip(4000)\ntrain_data = train_data.shuffle(buffer_size=5000, seed=SEED)\n\nAt this step, the dataset still contains raw data with code of arbitraty length. For training, we need inputs of fixed length. Let’s create an Iterable dataset that would return constant-length chunks of tokens from a stream of text files.\nFirst, let’s estimate the average number of characters per token in the dataset, which will help us later estimate the number of tokens in the text buffer later. By default, we’ll only take 400 examples (nb_examples) from the dataset. Using only a subset of the entire dataset will reduce computational cost while still providing a reasonable estimate of the overall character-to-token ratio.\n\ntokenizer = AutoTokenizer.from_pretrained(MODEL, trust_remote_code=True)\n\ndef chars_token_ratio(dataset, tokenizer, data_column, nb_examples=400):\n \"\"\"\n Estimate the average number of characters per token in the dataset.\n \"\"\"\n\n total_characters, total_tokens = 0, 0\n for _, example in tqdm(zip(range(nb_examples), iter(dataset)), total=nb_examples):\n total_characters += len(example[data_column])\n total_tokens += len(tokenizer(example[data_column]).tokens())\n\n return total_characters / total_tokens\n\n\nchars_per_token = chars_token_ratio(train_data, tokenizer, DATA_COLUMN)\nprint(f\"The character to token ratio of the dataset is: {chars_per_token:.2f}\")\n\n100%|██████████| 400/400 [00:10<00:00, 39.87it/s] \n\n\nThe character to token ratio of the dataset is: 2.43\n\n\n\n\n\nThe character-to-token ratio can also be used as an indicator of the quality of text tokenization. For instance, a character-to-token ratio of 1.0 would mean that each character is represented with a token, which is not very meaningful. This would indicate poor tokenization. In standard English text, one token is typically equivalent to approximately four characters, meaning the character-to-token ratio is around 4.0. We can expect a lower ratio in the code dataset, but generally speaking, a number between 2.0 and 3.5 can be considered good enough.\nOptional FIM transformations\nAutoregressive language models typically generate sequences from left to right. By applying the FIM transformations, the model can also learn to infill text. Check out “Efficient Training of Language Models to Fill in the Middle” paper to learn more about the technique. We’ll define the FIM transformations here and will use them when creating the Iterable Dataset. 
However, if you want to omit transformations, feel free to set fim_rate to 0.\n\nimport functools\nimport numpy as np\n\n\n# Helper function to get token ids of the special tokens for prefix, suffix and middle for FIM transformations.\[email protected]_cache(maxsize=None)\ndef get_fim_token_ids(tokenizer):\n try:\n FIM_PREFIX, FIM_MIDDLE, FIM_SUFFIX, FIM_PAD = tokenizer.special_tokens_map[\"additional_special_tokens\"][1:5]\n suffix_tok_id, prefix_tok_id, middle_tok_id, pad_tok_id = (\n tokenizer.vocab[tok] for tok in [FIM_SUFFIX, FIM_PREFIX, FIM_MIDDLE, FIM_PAD]\n )\n except KeyError:\n suffix_tok_id, prefix_tok_id, middle_tok_id, pad_tok_id = None, None, None, None\n return suffix_tok_id, prefix_tok_id, middle_tok_id, pad_tok_id\n\n\n## Adapted from https://github.com/bigcode-project/Megatron-LM/blob/6c4bf908df8fd86b4977f54bf5b8bd4b521003d1/megatron/data/gpt_dataset.py\ndef permute(\n sample,\n np_rng,\n suffix_tok_id,\n prefix_tok_id,\n middle_tok_id,\n pad_tok_id,\n fim_rate=0.5,\n fim_spm_rate=0.5,\n truncate_or_pad=False,\n):\n \"\"\"\n Take in a sample (list of tokens) and perform a FIM transformation on it with a probability of fim_rate, using two FIM modes:\n PSM and SPM (with a probability of fim_spm_rate).\n \"\"\"\n\n # The if condition will trigger with the probability of fim_rate\n # This means FIM transformations will apply to samples with a probability of fim_rate\n if np_rng.binomial(1, fim_rate):\n\n # Split the sample into prefix, middle, and suffix, based on randomly generated indices stored in the boundaries list.\n boundaries = list(np_rng.randint(low=0, high=len(sample) + 1, size=2))\n boundaries.sort()\n\n prefix = np.array(sample[: boundaries[0]], dtype=np.int64)\n middle = np.array(sample[boundaries[0] : boundaries[1]], dtype=np.int64)\n suffix = np.array(sample[boundaries[1] :], dtype=np.int64)\n\n if truncate_or_pad:\n # calculate the new total length of the sample, taking into account tokens indicating prefix, middle, and suffix\n new_length = suffix.shape[0] + prefix.shape[0] + middle.shape[0] + 3\n diff = new_length - len(sample)\n\n # trancate or pad if there's a difference in length between the new length and the original\n if diff > 0:\n if suffix.shape[0] <= diff:\n return sample, np_rng\n suffix = suffix[: suffix.shape[0] - diff]\n elif diff < 0:\n suffix = np.concatenate([suffix, np.full((-1 * diff), pad_tok_id)])\n\n # With the probability of fim_spm_rateapply SPM variant of FIM transformations\n # SPM: suffix, prefix, middle\n if np_rng.binomial(1, fim_spm_rate):\n new_sample = np.concatenate(\n [\n [prefix_tok_id, suffix_tok_id],\n suffix,\n [middle_tok_id],\n prefix,\n middle,\n ]\n )\n # Otherwise, apply the PSM variant of FIM transformations\n # PSM: prefix, suffix, middle\n else:\n\n new_sample = np.concatenate(\n [\n [prefix_tok_id],\n prefix,\n [suffix_tok_id],\n suffix,\n [middle_tok_id],\n middle,\n ]\n )\n else:\n # don't apply FIM transformations\n new_sample = sample\n\n return list(new_sample), np_rng\n\nLet’s define the ConstantLengthDataset, an Iterable dataset that will return constant-length chunks of tokens. To do so, we’ll read a buffer of text from the original dataset until we hit the size limits and then apply tokenizer to convert the raw text into tokenized inputs. 
Optionally, we’ll perform FIM transformations on some sequences (the proportion of sequences affected is controlled by fim_rate).\nOnce defined, we can create instances of the ConstantLengthDataset from both training and validation data.\n\nfrom torch.utils.data import IterableDataset\nfrom torch.utils.data.dataloader import DataLoader\nimport random\n\n# Create an Iterable dataset that returns constant-length chunks of tokens from a stream of text files.\n\nclass ConstantLengthDataset(IterableDataset):\n \"\"\"\n Iterable dataset that returns constant length chunks of tokens from stream of text files.\n Args:\n tokenizer (Tokenizer): The processor used for proccessing the data.\n dataset (dataset.Dataset): Dataset with text files.\n infinite (bool): If True the iterator is reset after dataset reaches end else stops.\n seq_length (int): Length of token sequences to return.\n num_of_sequences (int): Number of token sequences to keep in buffer.\n chars_per_token (int): Number of characters per token used to estimate number of tokens in text buffer.\n fim_rate (float): Rate (0.0 to 1.0) that sample will be permuted with FIM.\n fim_spm_rate (float): Rate (0.0 to 1.0) of FIM permuations that will use SPM.\n seed (int): Seed for random number generator.\n \"\"\"\n\n def __init__(\n self,\n tokenizer,\n dataset,\n infinite=False,\n seq_length=1024,\n num_of_sequences=1024,\n chars_per_token=3.6,\n content_field=\"content\",\n fim_rate=0.5,\n fim_spm_rate=0.5,\n seed=0,\n ):\n self.tokenizer = tokenizer\n self.concat_token_id = tokenizer.eos_token_id\n self.dataset = dataset\n self.seq_length = seq_length\n self.infinite = infinite\n self.current_size = 0\n self.max_buffer_size = seq_length * chars_per_token * num_of_sequences\n self.content_field = content_field\n self.fim_rate = fim_rate\n self.fim_spm_rate = fim_spm_rate\n self.seed = seed\n\n (\n self.suffix_tok_id,\n self.prefix_tok_id,\n self.middle_tok_id,\n self.pad_tok_id,\n ) = get_fim_token_ids(self.tokenizer)\n if not self.suffix_tok_id and self.fim_rate > 0:\n print(\"FIM is not supported by tokenizer, disabling FIM\")\n self.fim_rate = 0\n\n def __iter__(self):\n iterator = iter(self.dataset)\n more_examples = True\n np_rng = np.random.RandomState(seed=self.seed)\n while more_examples:\n buffer, buffer_len = [], 0\n while True:\n if buffer_len >= self.max_buffer_size:\n break\n try:\n buffer.append(next(iterator)[self.content_field])\n buffer_len += len(buffer[-1])\n except StopIteration:\n if self.infinite:\n iterator = iter(self.dataset)\n else:\n more_examples = False\n break\n tokenized_inputs = self.tokenizer(buffer, truncation=False)[\"input_ids\"]\n all_token_ids = []\n\n for tokenized_input in tokenized_inputs:\n # optionally do FIM permutations\n if self.fim_rate > 0:\n tokenized_input, np_rng = permute(\n tokenized_input,\n np_rng,\n self.suffix_tok_id,\n self.prefix_tok_id,\n self.middle_tok_id,\n self.pad_tok_id,\n fim_rate=self.fim_rate,\n fim_spm_rate=self.fim_spm_rate,\n truncate_or_pad=False,\n )\n\n all_token_ids.extend(tokenized_input + [self.concat_token_id])\n examples = []\n for i in range(0, len(all_token_ids), self.seq_length):\n input_ids = all_token_ids[i : i + self.seq_length]\n if len(input_ids) == self.seq_length:\n examples.append(input_ids)\n random.shuffle(examples)\n for example in examples:\n self.current_size += 1\n yield {\n \"input_ids\": torch.LongTensor(example),\n \"labels\": torch.LongTensor(example),\n }\n\n\ntrain_dataset = ConstantLengthDataset(\n tokenizer,\n train_data,\n infinite=True,\n 
seq_length=SEQ_LENGTH,\n chars_per_token=chars_per_token,\n content_field=DATA_COLUMN,\n fim_rate=FIM_RATE,\n fim_spm_rate=FIM_SPM_RATE,\n seed=SEED,\n)\neval_dataset = ConstantLengthDataset(\n tokenizer,\n valid_data,\n infinite=False,\n seq_length=SEQ_LENGTH,\n chars_per_token=chars_per_token,\n content_field=DATA_COLUMN,\n fim_rate=FIM_RATE,\n fim_spm_rate=FIM_SPM_RATE,\n seed=SEED,\n)", | |
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"Additional Techniques", | |
"Single GPU Optimization" | |
] | |
}, | |
{ | |
"objectID": "notebooks/single_gpu.html#prepare-the-model", | |
"href": "notebooks/single_gpu.html#prepare-the-model", | |
"title": "Single GPU Fine-tuning", | |
"section": "", | |
"text": "Now that the data is prepared, it’s time to load the model! We’re going to load the quantized version of the model.\nThis will allow us to reduce memory usage, as quantization represents data with fewer bits. We’ll use the bitsandbytes library to quantize the model, as it has a nice integration with transformers. All we need to do is define a bitsandbytes config, and then use it when loading the model.\nThere are different variants of 4bit quantization, but generally, we recommend using NF4 quantization for better performance (bnb_4bit_quant_type=\"nf4\").\nThe bnb_4bit_use_double_quant option adds a second quantization after the first one to save an additional 0.4 bits per parameter.\nTo learn more about quantization, check out the “Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA” blog post.\nOnce defined, pass the config to the from_pretrained method to load the quantized version of the model.\n\nfrom peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training\nfrom peft.tuners.lora import LoraLayer\n\nload_in_8bit = False\n\n# 4-bit quantization\ncompute_dtype = getattr(torch, BNB_4BIT_COMPUTE_DTYPE)\n\nbnb_config = BitsAndBytesConfig(\n load_in_4bit=True,\n bnb_4bit_quant_type=\"nf4\",\n bnb_4bit_compute_dtype=compute_dtype,\n bnb_4bit_use_double_quant=USE_NESTED_QUANT,\n)\n\ndevice_map = {\"\": 0}\n\nmodel = AutoModelForCausalLM.from_pretrained(\n MODEL,\n load_in_8bit=load_in_8bit,\n quantization_config=bnb_config,\n device_map=device_map,\n use_cache=False, # We will be using gradient checkpointing\n trust_remote_code=True,\n use_flash_attention_2=True,\n)\n\nWhen using a quantized model for training, you need to call the prepare_model_for_kbit_training() function to preprocess the quantized model for training.\n\nmodel = prepare_model_for_kbit_training(model)\n\nNow that the quantized model is ready, we can set up a LoRA configuration. LoRA makes fine-tuning more efficient by drastically reducing the number of trainable parameters.\nTo train a model using LoRA technique, we need to wrap the base model as a PeftModel. This involves definign LoRA configuration with LoraConfig, and wrapping the original model with get_peft_model() using the LoraConfig.\nTo learn more about LoRA and its parameters, refer to PEFT documentation.\n\n# Set up lora\npeft_config = LoraConfig(\n lora_alpha=LORA_ALPHA,\n lora_dropout=LORA_DROPOUT,\n r=LORA_R,\n bias=\"none\",\n task_type=\"CAUSAL_LM\",\n target_modules=LORA_TARGET_MODULES.split(\",\"),\n)\n\nmodel = get_peft_model(model, peft_config)\nmodel.print_trainable_parameters()\n\ntrainable params: 5,554,176 || all params: 1,142,761,472 || trainable%: 0.4860310866343243\n\n\nAs you can see, by applying LoRA technique we will now need to train less than 1% of the parameters.", | |
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"Additional Techniques", | |
"Single GPU Optimization" | |
] | |
}, | |
{ | |
"objectID": "notebooks/single_gpu.html#train-the-model", | |
"href": "notebooks/single_gpu.html#train-the-model", | |
"title": "Single GPU Fine-tuning", | |
"section": "", | |
"text": "Now that we have prepared the data, and optimized the model, we are ready to bring everything together to start the training.\nTo instantiate a Trainer, you need to define the training configuration. The most important is the TrainingArguments, which is a class that contains all the attributes to configure the training.\nThese are similar to any other kind of model training you may run, so we won’t go into detail here.\n\ntrain_data.start_iteration = 0\n\n\ntraining_args = TrainingArguments(\n output_dir=f\"Your_HF_username/{OUTPUT_DIR}\",\n dataloader_drop_last=True,\n evaluation_strategy=\"steps\",\n save_strategy=\"steps\",\n max_steps=MAX_STEPS,\n eval_steps=EVAL_FREQ,\n save_steps=SAVE_FREQ,\n logging_steps=LOG_FREQ,\n per_device_train_batch_size=BATCH_SIZE,\n per_device_eval_batch_size=BATCH_SIZE,\n learning_rate=LR,\n lr_scheduler_type=LR_SCHEDULER_TYPE,\n warmup_steps=NUM_WARMUP_STEPS,\n gradient_accumulation_steps=GR_ACC_STEPS,\n gradient_checkpointing=True,\n fp16=FP16,\n bf16=BF16,\n weight_decay=WEIGHT_DECAY,\n push_to_hub=True,\n include_tokens_per_second=True,\n)\n\nAs a final step, instantiate the Trainer and call the train method.\n\ntrainer = Trainer(\n model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset\n)\n\nprint(\"Training...\")\ntrainer.train()\n\nTraining...\n\n\n\n \n \n \n [2000/2000 4:16:10, Epoch 1/9223372036854775807]\n \n \n\n\n\nStep\nTraining Loss\nValidation Loss\n\n\n\n\n100\n5.524600\n7.456872\n\n\n200\n5.617800\n7.262190\n\n\n300\n5.129100\n6.410039\n\n\n400\n5.052200\n6.306774\n\n\n500\n5.202900\n6.117062\n\n\n600\n4.654100\n6.018349\n\n\n700\n5.100200\n6.000355\n\n\n800\n5.049800\n5.889457\n\n\n900\n4.541200\n5.813823\n\n\n1000\n5.000700\n5.834208\n\n\n1100\n5.026500\n5.781939\n\n\n1200\n4.411800\n5.720596\n\n\n1300\n4.782500\n5.736376\n\n\n1400\n4.980200\n5.712276\n\n\n1500\n4.368700\n5.689637\n\n\n1600\n4.884700\n5.675920\n\n\n1700\n4.914400\n5.662421\n\n\n1800\n4.248700\n5.660122\n\n\n1900\n4.798400\n5.664026\n\n\n2000\n4.704200\n5.655665\n\n\n\n\n\n\nTrainOutput(global_step=2000, training_loss=4.885598585128784, metrics={'train_runtime': 15380.3075, 'train_samples_per_second': 2.081, 'train_steps_per_second': 0.13, 'train_tokens_per_second': 4261.033, 'total_flos': 4.0317260660736e+17, 'train_loss': 4.885598585128784, 'epoch': 1.0})\n\n\nFinally, you can push the fine-tuned model to your Hub repository to share with your team.\n\ntrainer.push_to_hub()", | |
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"Additional Techniques", | |
"Single GPU Optimization" | |
] | |
}, | |
{ | |
"objectID": "notebooks/single_gpu.html#inference", | |
"href": "notebooks/single_gpu.html#inference", | |
"title": "Single GPU Fine-tuning", | |
"section": "", | |
"text": "Once the model is uploaded to Hub, we can use it for inference. To do so we first initialize the original base model and its tokenizer. Next, we need to merge the fine-duned weights with the base model.\n\nfrom peft import PeftModel\nimport torch\n\n# load the original model first\ntokenizer = AutoTokenizer.from_pretrained(MODEL, trust_remote_code=True)\nbase_model = AutoModelForCausalLM.from_pretrained(\n MODEL,\n quantization_config=None,\n device_map=None,\n trust_remote_code=True,\n torch_dtype=torch.bfloat16,\n).cuda()\n\n# merge fine-tuned weights with the base model\npeft_model_id = f\"Your_HF_username/{OUTPUT_DIR}\"\nmodel = PeftModel.from_pretrained(base_model, peft_model_id)\nmodel.merge_and_unload()\n\nNow we can use the merged model for inference. For convenience, we’ll define a get_code_completion - feel free to experiment with text generation parameters!\n\ndef get_code_completion(prefix, suffix):\n text = prompt = f\"\"\"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>\"\"\"\n model.eval()\n outputs = model.generate(\n input_ids=tokenizer(text, return_tensors=\"pt\").input_ids.cuda(),\n max_new_tokens=128,\n temperature=0.2,\n top_k=50,\n top_p=0.95,\n do_sample=True,\n repetition_penalty=1.0,\n )\n return tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]\n\nNow all we need to do to get code completion is call the get_code_complete function and pass the first few lines that we want to be completed as a prefix, and an empty string as a suffix.\n\nprefix = \"\"\"from peft import LoraConfig, TaskType, get_peft_model\nfrom transformers import AutoModelForCausalLM\npeft_config = LoraConfig(\n\"\"\"\nsuffix =\"\"\"\"\"\"\n\nprint(get_code_completion(prefix, suffix))\n\nfrom peft import LoraConfig, TaskType, get_peft_model\nfrom transformers import AutoModelForCausalLM\npeft_config = LoraConfig(\n task_type=TaskType.CAUSAL_LM,\n r=8,\n lora_alpha=32,\n target_modules=[\"q_proj\", \"v_proj\"],\n lora_dropout=0.1,\n bias=\"none\",\n modules_to_save=[\"q_proj\", \"v_proj\"],\n inference_mode=False,\n)\nmodel = AutoModelForCausalLM.from_pretrained(\"gpt2\")\nmodel = get_peft_model(model, peft_config)\nmodel.print_trainable_parameters()\n\n\nAs someone who has just used the PEFT library earlier in this notebook, you can see that the generated result for creating a LoraConfig is rather good!\nIf you go back to the cell where we instantiate the model for inference, and comment out the lines where we merge the fine-tuned weights, you can see what the original model would’ve generated for the exact same prefix:\n\nprefix = \"\"\"from peft import LoraConfig, TaskType, get_peft_model\nfrom transformers import AutoModelForCausalLM\npeft_config = LoraConfig(\n\"\"\"\nsuffix =\"\"\"\"\"\"\n\nprint(get_code_completion(prefix, suffix))\n\nfrom peft import LoraConfig, TaskType, get_peft_model\nfrom transformers import AutoModelForCausalLM\npeft_config = LoraConfig(\n model_name_or_path=\"facebook/wav2vec2-base-960h\",\n num_labels=1,\n num_features=1,\n num_hidden_layers=1,\n num_attention_heads=1,\n num_hidden_layers_per_attention_head=1,\n num_attention_heads_per_hidden_layer=1,\n hidden_size=1024,\n hidden_dropout_prob=0.1,\n hidden_act=\"gelu\",\n hidden_act_dropout_prob=0.1,\n hidden\n\n\nWhile it is Python syntax, you can see that the original model has no understanding of what a LoraConfig should be doing.\nTo learn how this kind of fine-tuning compares to full fine-tuning, and how to use a model like this as your copilot in VS Code via Inference Endpoints, or 
locally, check out the “Personal Copilot: Train Your Own Coding Assistant” blog post. This notebook complements the original blog post.", | |
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"Additional Techniques", | |
"Single GPU Optimization" | |
] | |
}, | |
{ | |
"objectID": "notebooks/rag_evaluation.html", | |
"href": "notebooks/rag_evaluation.html", | |
"title": "RAG Evaluation", | |
"section": "", | |
"text": "!pip install -q torch transformers transformers langchain sentence-transformers faiss-gpu openpyxl openai\n%reload_ext autoreload\n%autoreload 2\n%reload_ext dotenv\n%dotenv\nfrom tqdm.notebook import tqdm\nimport pandas as pd\nfrom typing import Optional, List, Tuple\nfrom langchain_core.language_models import BaseChatModel\nimport json\nimport datasets\n\npd.set_option(\"display.max_colwidth\", None)", | |
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"RAG Techniques", | |
"RAG Evaluation" | |
] | |
}, | |
{ | |
"objectID": "notebooks/rag_evaluation.html#example-results", | |
"href": "notebooks/rag_evaluation.html#example-results", | |
"title": "RAG Evaluation", | |
"section": "Example results", | |
"text": "Example results\nLet us load the results that I obtained by tweaking the different options available in this notebook. For more detail on why these options could work on not, see the notebook on advanced_RAG.\nAs you can see in the graph below, some tweaks do not bring any improvement, some give huge performance boosts.\n➡️ There is no single good recipe: you should try several different directions when tuning your RAG systems.\n\nimport plotly.express as px\n\nscores = datasets.load_dataset(\"m-ric/rag_scores_cookbook\", split=\"train\")\nscores = pd.Series(scores[\"score\"], index=scores[\"settings\"])\n\n\nfig = px.bar(\n scores,\n color=scores,\n labels={\n \"value\": \"Accuracy\",\n \"settings\": \"Configuration\",\n },\n color_continuous_scale=\"bluered\",\n)\nfig.update_layout(w\n width=1000,\n height=600,\n barmode=\"group\",\n yaxis_range=[0, 100],\n title=\"<b>Accuracy of different RAG configurations</b>\",\n xaxis_title=\"RAG settings\",\n font=dict(size=15),\n)\nfig.layout.yaxis.ticksuffix = \"%\"\nfig.update_coloraxes(showscale=False)\nfig.update_traces(texttemplate=\"%{y:.1f}\", textposition=\"outside\")\nfig.show()\n\n\nAs you can see, these had varying impact on performance. In particular, tuning the chunk size is both easy and very impactful.\nBut this is our case: your results could be very different: now that you have a robust evaluation pipeline, you can set on to explore other options! 🗺️", | |
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"RAG Techniques", | |
"RAG Evaluation" | |
] | |
}, | |
{ | |
"objectID": "notebooks/automatic_embedding.html", | |
"href": "notebooks/automatic_embedding.html", | |
"title": "Inference Endpoints", | |
"section": "", | |
"text": "Authored by: Derek Thomas\n\n\nI have a dataset I want to embed for semantic search (or QA, or RAG), I want the easiest way to do embed this and put it in a new dataset.\n\n\n\nI’m using a dataset from my favorite subreddit r/bestofredditorupdates. Because it has long entries, I will use the new jinaai/jina-embeddings-v2-base-en since it has an 8k context length. I will deploy this using Inference Endpoint to save time and money. To follow this tutorial, you will need to have already added a payment method. If you haven’t, you can add one here in billing. To make it even easier, I’ll make this fully API based.\nTo make this MUCH faster I will use the Text Embeddings Inference image. This has many benefits like: - No model graph compilation step - Small docker images and fast boot times. Get ready for true serverless! - Token based dynamic batching - Optimized transformers code for inference using Flash Attention, Candle and cuBLASLt - Safetensors weight loading - Production ready (distributed tracing with Open Telemetry, Prometheus metrics)\n\n\n\nimg\n\n\n\n\n\n\n!pip install -q aiohttp==3.8.3 datasets==2.14.6 pandas==1.5.3 requests==2.31.0 tqdm==4.66.1 huggingface-hub>=0.20\n\n\n\n\n\nimport asyncio\nfrom getpass import getpass\nimport json\nfrom pathlib import Path\nimport time\nfrom typing import Optional\n\nfrom aiohttp import ClientSession, ClientTimeout\nfrom datasets import load_dataset, Dataset, DatasetDict\nfrom huggingface_hub import notebook_login, create_inference_endpoint, list_inference_endpoints, whoami\nimport numpy as np\nimport pandas as pd\nimport requests\nfrom tqdm.auto import tqdm\n\n\n\n\nDATASET_IN is where your text data is DATASET_OUT is where your embeddings will be stored\nNote I used 5 for the MAX_WORKERS since jina-embeddings-v2 are quite memory hungry.\n\nDATASET_IN = 'derek-thomas/dataset-creator-reddit-bestofredditorupdates'\nDATASET_OUT = \"processed-subset-bestofredditorupdates\"\nENDPOINT_NAME = \"boru-jina-embeddings-demo-ie\"\n\nMAX_WORKERS = 5 # This is for how many async workers you want. Choose based on the model and hardware \nROW_COUNT = 100 # Choose None to use all rows, Im using 100 just for a demo\n\nHugging Face offers a number of GPUs that you can choose from a number of GPUs that you can choose in Inference Endpoints. Here they are in table form:\n\n\n\nGPU\ninstanceType\ninstanceSize\nvRAM\n\n\n\n\n1x Nvidia Tesla T4\ng4dn.xlarge\nsmall\n16GB\n\n\n4x Nvidia Tesla T4\ng4dn.12xlarge\nlarge\n64GB\n\n\n1x Nvidia A10G\ng5.2xlarge\nmedium\n24GB\n\n\n4x Nvidia A10G\ng5.12xlarge\nxxlarge\n96GB\n\n\n1x Nvidia A100*\np4de\nxlarge\n80GB\n\n\n2x Nvidia A100*\np4de\n2xlarge\n160GB\n\n\n\n*Note that for A100s you might get a note to email us to get access.\n\n# GPU Choice\nVENDOR=\"aws\"\nREGION=\"us-east-1\"\nINSTANCE_SIZE=\"medium\"\nINSTANCE_TYPE=\"g5.2xlarge\"\n\n\nnotebook_login()\n\n\n\n\nSome users might have payment registered in an organization. This allows you to connect to an organization (that you are a member of) with a payment method.\nLeave it blank is you want to use your username.\n\nwho = whoami()\norganization = getpass(prompt=\"What is your Hugging Face 🤗 username or organization? (with an added payment method)\")\n\nnamespace = organization or who['name']\n\nWhat is your Hugging Face 🤗 username or organization? 
(with an added payment method) ········\n\n\n\n\n\n\ndataset = load_dataset(DATASET_IN)\ndataset['train']\n\n\n\n\nDataset({\n features: ['id', 'content', 'score', 'date_utc', 'title', 'flair', 'poster', 'permalink', 'new', 'updated'],\n num_rows: 10042\n})\n\n\n\ndocuments = dataset['train'].to_pandas().to_dict('records')[:ROW_COUNT]\nlen(documents), documents[0]\n\n(100,\n {'id': '10004zw',\n 'content': '[removed]',\n 'score': 1,\n 'date_utc': Timestamp('2022-12-31 18:16:22'),\n 'title': 'To All BORU contributors, Thank you :)',\n 'flair': 'CONCLUDED',\n 'poster': 'IsItAcOnSeQuEnCe',\n 'permalink': '/r/BestofRedditorUpdates/comments/10004zw/to_all_boru_contributors_thank_you/',\n 'new': False,\n 'updated': False})", | |
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"Additional Techniques", | |
"Automatic Embedding" | |
] | |
}, | |
{ | |
"objectID": "notebooks/automatic_embedding.html#goal", | |
"href": "notebooks/automatic_embedding.html#goal", | |
"title": "Inference Endpoints", | |
"section": "", | |
"text": "I have a dataset I want to embed for semantic search (or QA, or RAG), I want the easiest way to do embed this and put it in a new dataset.", | |
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"Additional Techniques", | |
"Automatic Embedding" | |
] | |
}, | |
{ | |
"objectID": "notebooks/automatic_embedding.html#approach", | |
"href": "notebooks/automatic_embedding.html#approach", | |
"title": "Inference Endpoints", | |
"section": "", | |
"text": "I’m using a dataset from my favorite subreddit r/bestofredditorupdates. Because it has long entries, I will use the new jinaai/jina-embeddings-v2-base-en since it has an 8k context length. I will deploy this using Inference Endpoint to save time and money. To follow this tutorial, you will need to have already added a payment method. If you haven’t, you can add one here in billing. To make it even easier, I’ll make this fully API based.\nTo make this MUCH faster I will use the Text Embeddings Inference image. This has many benefits like: - No model graph compilation step - Small docker images and fast boot times. Get ready for true serverless! - Token based dynamic batching - Optimized transformers code for inference using Flash Attention, Candle and cuBLASLt - Safetensors weight loading - Production ready (distributed tracing with Open Telemetry, Prometheus metrics)\n\n\n\nimg", | |
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"Additional Techniques", | |
"Automatic Embedding" | |
] | |
}, | |
{ | |
"objectID": "notebooks/automatic_embedding.html#requirements", | |
"href": "notebooks/automatic_embedding.html#requirements", | |
"title": "Inference Endpoints", | |
"section": "", | |
"text": "!pip install -q aiohttp==3.8.3 datasets==2.14.6 pandas==1.5.3 requests==2.31.0 tqdm==4.66.1 huggingface-hub>=0.20", | |
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"Additional Techniques", | |
"Automatic Embedding" | |
] | |
}, | |
{ | |
"objectID": "notebooks/automatic_embedding.html#imports", | |
"href": "notebooks/automatic_embedding.html#imports", | |
"title": "Inference Endpoints", | |
"section": "", | |
"text": "import asyncio\nfrom getpass import getpass\nimport json\nfrom pathlib import Path\nimport time\nfrom typing import Optional\n\nfrom aiohttp import ClientSession, ClientTimeout\nfrom datasets import load_dataset, Dataset, DatasetDict\nfrom huggingface_hub import notebook_login, create_inference_endpoint, list_inference_endpoints, whoami\nimport numpy as np\nimport pandas as pd\nimport requests\nfrom tqdm.auto import tqdm", | |
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"Additional Techniques", | |
"Automatic Embedding" | |
] | |
}, | |
{ | |
"objectID": "notebooks/automatic_embedding.html#config", | |
"href": "notebooks/automatic_embedding.html#config", | |
"title": "Inference Endpoints", | |
"section": "", | |
"text": "DATASET_IN is where your text data is DATASET_OUT is where your embeddings will be stored\nNote I used 5 for the MAX_WORKERS since jina-embeddings-v2 are quite memory hungry.\n\nDATASET_IN = 'derek-thomas/dataset-creator-reddit-bestofredditorupdates'\nDATASET_OUT = \"processed-subset-bestofredditorupdates\"\nENDPOINT_NAME = \"boru-jina-embeddings-demo-ie\"\n\nMAX_WORKERS = 5 # This is for how many async workers you want. Choose based on the model and hardware \nROW_COUNT = 100 # Choose None to use all rows, Im using 100 just for a demo\n\nHugging Face offers a number of GPUs that you can choose from a number of GPUs that you can choose in Inference Endpoints. Here they are in table form:\n\n\n\nGPU\ninstanceType\ninstanceSize\nvRAM\n\n\n\n\n1x Nvidia Tesla T4\ng4dn.xlarge\nsmall\n16GB\n\n\n4x Nvidia Tesla T4\ng4dn.12xlarge\nlarge\n64GB\n\n\n1x Nvidia A10G\ng5.2xlarge\nmedium\n24GB\n\n\n4x Nvidia A10G\ng5.12xlarge\nxxlarge\n96GB\n\n\n1x Nvidia A100*\np4de\nxlarge\n80GB\n\n\n2x Nvidia A100*\np4de\n2xlarge\n160GB\n\n\n\n*Note that for A100s you might get a note to email us to get access.\n\n# GPU Choice\nVENDOR=\"aws\"\nREGION=\"us-east-1\"\nINSTANCE_SIZE=\"medium\"\nINSTANCE_TYPE=\"g5.2xlarge\"\n\n\nnotebook_login()\n\n\n\n\nSome users might have payment registered in an organization. This allows you to connect to an organization (that you are a member of) with a payment method.\nLeave it blank is you want to use your username.\n\nwho = whoami()\norganization = getpass(prompt=\"What is your Hugging Face 🤗 username or organization? (with an added payment method)\")\n\nnamespace = organization or who['name']\n\nWhat is your Hugging Face 🤗 username or organization? (with an added payment method) ········", | |
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"Additional Techniques", | |
"Automatic Embedding" | |
] | |
}, | |
{ | |
"objectID": "notebooks/automatic_embedding.html#get-dataset", | |
"href": "notebooks/automatic_embedding.html#get-dataset", | |
"title": "Inference Endpoints", | |
"section": "", | |
"text": "dataset = load_dataset(DATASET_IN)\ndataset['train']\n\n\n\n\nDataset({\n features: ['id', 'content', 'score', 'date_utc', 'title', 'flair', 'poster', 'permalink', 'new', 'updated'],\n num_rows: 10042\n})\n\n\n\ndocuments = dataset['train'].to_pandas().to_dict('records')[:ROW_COUNT]\nlen(documents), documents[0]\n\n(100,\n {'id': '10004zw',\n 'content': '[removed]',\n 'score': 1,\n 'date_utc': Timestamp('2022-12-31 18:16:22'),\n 'title': 'To All BORU contributors, Thank you :)',\n 'flair': 'CONCLUDED',\n 'poster': 'IsItAcOnSeQuEnCe',\n 'permalink': '/r/BestofRedditorUpdates/comments/10004zw/to_all_boru_contributors_thank_you/',\n 'new': False,\n 'updated': False})", | |
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"Additional Techniques", | |
"Automatic Embedding" | |
] | |
}, | |
{ | |
"objectID": "notebooks/automatic_embedding.html#create-inference-endpoint", | |
"href": "notebooks/automatic_embedding.html#create-inference-endpoint", | |
"title": "Inference Endpoints", | |
"section": "Create Inference Endpoint", | |
"text": "Create Inference Endpoint\nWe are going to use the API to create an Inference Endpoint. This should provide a few main benefits: - It’s convenient (No clicking) - It’s repeatable (We have the code to run it easily) - It’s cheaper (No time spent waiting for it to load, and automatically shut it down)\n\ntry:\n endpoint = create_inference_endpoint(\n ENDPOINT_NAME,\n repository=\"jinaai/jina-embeddings-v2-base-en\",\n revision=\"7302ac470bed880590f9344bfeee32ff8722d0e5\",\n task=\"sentence-embeddings\",\n framework=\"pytorch\",\n accelerator=\"gpu\",\n instance_size=INSTANCE_SIZE,\n instance_type=INSTANCE_TYPE,\n region=REGION,\n vendor=VENDOR,\n namespace=namespace,\n custom_image={\n \"health_route\": \"/health\",\n \"env\": {\n \"MAX_BATCH_TOKENS\": str(MAX_WORKERS * 2048),\n \"MAX_CONCURRENT_REQUESTS\": \"512\",\n \"MODEL_ID\": \"/repository\"\n },\n \"url\": \"ghcr.io/huggingface/text-embeddings-inference:0.5.0\",\n },\n type=\"protected\",\n )\nexcept:\n endpoint = [ie for ie in list_inference_endpoints(namespace=namespace) if ie.name == ENDPOINT_NAME][0]\n print('Loaded endpoint')\n\nThere are a few design choices here: - As discussed before we are using jinaai/jina-embeddings-v2-base-en as our model. - For reproducibility we are pinning it to a specific revision. - If you are interested in more models, check out the supported list here. - Note that most embedding models are based on the BERT architecture. - MAX_BATCH_TOKENS is chosen based on our number of workers and the context window of our embedding model. - type=\"protected\" utilized the security from Inference Endpoints detailed here. - I’m using 1x Nvidia A10 since jina-embeddings-v2 is memory hungry (remember the 8k context length). - You should consider further tuning MAX_BATCH_TOKENS and MAX_CONCURRENT_REQUESTS if you have high workloads", | |
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"Additional Techniques", | |
"Automatic Embedding" | |
] | |
}, | |
{ | |
"objectID": "notebooks/automatic_embedding.html#wait-until-its-running", | |
"href": "notebooks/automatic_embedding.html#wait-until-its-running", | |
"title": "Inference Endpoints", | |
"section": "Wait until it’s running", | |
"text": "Wait until it’s running\n\n%%time\nendpoint.wait()\n\nCPU times: user 48.1 ms, sys: 15.7 ms, total: 63.8 ms\nWall time: 52.6 s\n\n\nInferenceEndpoint(name='boru-jina-embeddings-demo-ie', namespace='HF-test-lab', repository='jinaai/jina-embeddings-v2-base-en', status='running', url='https://k7l1xeok1jwnpbx5.us-east-1.aws.endpoints.huggingface.cloud')\n\n\nWhen we use endpoint.client.post we get a bytes string back. This is a little tedious because we need to convert this to an np.array, but it’s just a couple quick lines in python.\n\nresponse = endpoint.client.post(json={\"inputs\": 'This sound track was beautiful! It paints the senery in your mind so well I would recomend it even to people who hate vid. game music!', 'truncate': True}, task=\"feature-extraction\")\nresponse = np.array(json.loads(response.decode()))\nresponse[0][:20]\n\narray([-0.05630935, -0.03560849, 0.02789049, 0.02792823, -0.02800371,\n -0.01530391, -0.01863454, -0.0077982 , 0.05374297, 0.03672185,\n -0.06114018, -0.06880157, -0.0093503 , -0.03174005, -0.03206085,\n 0.0610647 , 0.02243694, 0.03217408, 0.04181686, 0.00248854])\n\n\nYou may have inputs that exceed the context. In such scenarios, it’s up to you to handle them. In my case, I’d like to truncate rather than have an error. Let’s test that it works.\n\nembedding_input = 'This input will get multiplied' * 10000\nprint(f'The length of the embedding_input is: {len(embedding_input)}')\nresponse = endpoint.client.post(json={\"inputs\": embedding_input, 'truncate': True}, task=\"feature-extraction\")\nresponse = np.array(json.loads(response.decode()))\nresponse[0][:20]\n\nThe length of the embedding_input is: 300000\n\n\narray([-0.03088215, -0.0351537 , 0.05749275, 0.00983467, 0.02108356,\n 0.04539965, 0.06107162, -0.02536954, 0.03887688, 0.01998681,\n -0.05391388, 0.01529677, -0.1279156 , 0.01653782, -0.01940958,\n 0.0367411 , 0.0031748 , 0.04716022, -0.00713609, -0.00155313])", | |
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"Additional Techniques", | |
"Automatic Embedding" | |
] | |
}, | |
{ | |
"objectID": "notebooks/automatic_embedding.html#pause-inference-endpoint", | |
"href": "notebooks/automatic_embedding.html#pause-inference-endpoint", | |
"title": "Inference Endpoints", | |
"section": "Pause Inference Endpoint", | |
"text": "Pause Inference Endpoint\nNow that we have finished, let’s pause the endpoint so we don’t incur any extra charges, this will also allow us to analyze the cost.\n\nendpoint = endpoint.pause()\n\nprint(f\"Endpoint Status: {endpoint.status}\")\n\nEndpoint Status: paused", | |
"crumbs": [ | |
"Open-Source AI Cookbook", | |
"Additional Techniques", | |
"Automatic Embedding" | |
] | |
} | |
] |