title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
How to run hf model + own LoRA in llamacpp? | 1 | I fine-tuned llama2-7b using huggingface and peft. Now I have the adapter.bin (lora). I fine-tuned it quantized (4 bits).
Now that I have a model that performs ok in my task, I'd like to speed up prediction as much as I can.
​
As far as I understand, one way of doing this is running my model on llamacpp, but I cannot find any tutorial or example on how to transform huggingface model+adapter to a model supported by llamacpp. Should I transform it using ggml?
Do you have any interesting example that achieves this?
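For concreteness, this is the rough workflow I've pieced together so far. It is completely untested, the paths and exact llama.cpp script names are just my guesses, and since I trained in 4-bit I assume I need to reload the base model in fp16 before merging:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    # reload the base model in fp16, then bake the LoRA adapter into its weights
    base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16)
    merged = PeftModel.from_pretrained(base, "path/to/my-adapter").merge_and_unload()
    merged.save_pretrained("llama2-7b-merged")
    AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf").save_pretrained("llama2-7b-merged")

    # then, from a llama.cpp checkout (shell, not Python):
    #   python convert.py llama2-7b-merged --outtype f16 --outfile llama2-7b-merged.f16.ggml.bin
    #   ./quantize llama2-7b-merged.f16.ggml.bin llama2-7b-merged.q4_K_M.ggml.bin Q4_K_M

Does that look roughly right, or is there a better path (e.g. llama.cpp's --lora flag with a converted adapter)?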
​
Thanks a lot beforehand | 2023-08-01T08:19:31 | https://www.reddit.com/r/LocalLLaMA/comments/15f68fc/how_to_run_hf_model_own_lora_in_llamacpp/ | Send_me_your_loras | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15f68fc | false | null | t3_15f68fc | /r/LocalLLaMA/comments/15f68fc/how_to_run_hf_model_own_lora_in_llamacpp/ | false | false | self | 1 | null |
WSL 2 Setup Guide for an AI Environment (Nvidia GPUs) | 1 | [removed] | 2023-08-01T08:52:30 | https://www.reddit.com/r/LocalLLaMA/comments/15f6sgh/wsl_2_setup_guide_for_an_ai_environment_nvidia/ | GrandDemand | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15f6sgh | false | null | t3_15f6sgh | /r/LocalLLaMA/comments/15f6sgh/wsl_2_setup_guide_for_an_ai_environment_nvidia/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'E3913tKZnCX432OI0mEa509eSLI7pe2qMCPrOwzOhnk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/dwqSwNrH2akrH3TIdSUSAAjMbdF_Z7_fgU0PXzahZ7Q.jpg?width=108&crop=smart&auto=webp&s=c3a3d7ee4f7f35f4c9cc02ef394ab4ff58ad14d9', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/dwqSwNrH2akrH3TIdSUSAAjMbdF_Z7_fgU0PXzahZ7Q.jpg?width=216&crop=smart&auto=webp&s=4ef78db5bea9566710b7d4c7454eb06541cd21ec', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/dwqSwNrH2akrH3TIdSUSAAjMbdF_Z7_fgU0PXzahZ7Q.jpg?width=320&crop=smart&auto=webp&s=9cc7e59884538ff7f65a991c5efad3db5f392378', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/dwqSwNrH2akrH3TIdSUSAAjMbdF_Z7_fgU0PXzahZ7Q.jpg?width=640&crop=smart&auto=webp&s=441fa6ae0c5a8cb818eff08d2667d6d5628ecef9', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/dwqSwNrH2akrH3TIdSUSAAjMbdF_Z7_fgU0PXzahZ7Q.jpg?width=960&crop=smart&auto=webp&s=1176a887fd4c9d12b286af02f35716e3fc6c9042', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/dwqSwNrH2akrH3TIdSUSAAjMbdF_Z7_fgU0PXzahZ7Q.jpg?width=1080&crop=smart&auto=webp&s=2b4b826968c496286b13e0e6321423218baeecd4', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/dwqSwNrH2akrH3TIdSUSAAjMbdF_Z7_fgU0PXzahZ7Q.jpg?auto=webp&s=7f1e54efa1b8e9e6b7dc5c90f8a64826ab0b00be', 'width': 1200}, 'variants': {}}]} |
Anybody tried 70b with 128k context? | 1 | With ~96gb cpu ram?
llama.cpp measurements show with q4_k_m, it almost fits in 96gb.
With the model fully in RAM, is the t/s still at 1-2? Has the bottleneck switched to the CPU?
prompt processing a 126k segment may take a good chunk of the day, so use `--prompt-cache FNAME --prompt-cache-all -ins`,
and `--prompt-cache FNAME --prompt-cache-ro -ins` | 2023-08-01T10:13:29 | https://www.reddit.com/r/LocalLLaMA/comments/15f8bfx/anybody_tried_70b_with_128k_context/ | Aaaaaaaaaeeeee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15f8bfx | false | null | t3_15f8bfx | /r/LocalLLaMA/comments/15f8bfx/anybody_tried_70b_with_128k_context/ | false | false | self | 1 | null |
text-generation-webui not working | 1 | hi, I have problem with text generation WebUI.
when I load "Wizard-Vicuna-13B-Uncensored.ggmlv3.q2\_K.bin" model
I face this error from the text generation
    Traceback (most recent call last):
      File "C:\Users\king\Documents\programming\oobabooga_windows\text-generation-webui\server.py", line 68, in load_model_wrapper
        shared.model, shared.tokenizer = load_model(shared.model_name, loader)
      File "C:\Users\king\Documents\programming\oobabooga_windows\text-generation-webui\modules\models.py", line 78, in load_model
        output = load_func_map[loader](model_name)
      File "C:\Users\king\Documents\programming\oobabooga_windows\text-generation-webui\modules\models.py", line 241, in llamacpp_loader
        model, tokenizer = LlamaCppModel.from_pretrained(model_file)
      File "C:\Users\king\Documents\programming\oobabooga_windows\text-generation-webui\modules\llamacpp_model.py", line 60, in from_pretrained
        result.model = Llama(**params)
      File "C:\Users\king\Documents\programming\oobabooga_windows\installer_files\env\lib\site-packages\llama_cpp\llama.py", line 313, in __init__
        assert self.model is not None
    AssertionError
​
and this from the terminal
​
    To create a public link, set `share=True` in `launch()`.
    2023-08-01 03:19:06 INFO:Loading TheBloke_Wizard-Vicuna-13B-Uncensored-GGML...
    2023-08-01 03:19:06 INFO:llama.cpp weights detected: models\TheBloke_Wizard-Vicuna-13B-Uncensored-GGML\Wizard-Vicuna-13B-Uncensored.ggmlv3.q2_K.bin
    2023-08-01 03:19:06 INFO:Cache capacity is 0 bytes
    Model Path: models\TheBloke_Wizard-Vicuna-13B-Uncensored-GGML\Wizard-Vicuna-13B-Uncensored.ggmlv3.q2_K.bin
    llama.cpp: loading model from models\TheBloke_Wizard-Vicuna-13B-Uncensored-GGML\Wizard-Vicuna-13B-Uncensored.ggmlv3.q2_K.bin
    llama_model_load_internal: format = ggjt v3 (latest)
    llama_model_load_internal: n_vocab = 32000
    llama_model_load_internal: n_ctx = 2048
    llama_model_load_internal: n_embd = 5120
    llama_model_load_internal: n_mult = 256
    llama_model_load_internal: n_head = 40
    llama_model_load_internal: n_head_kv = 40
    llama_model_load_internal: n_layer = 40
    llama_model_load_internal: n_rot = 128
    llama_model_load_internal: n_gqa = 1
    llama_model_load_internal: rnorm_eps = 1.0e-06
    llama_model_load_internal: n_ff = 13824
    llama_model_load_internal: freq_base = 10000.0
    llama_model_load_internal: freq_scale = 1
    llama_model_load_internal: ftype = 10 (mostly Q2_K)
    llama_model_load_internal: model size = 13B
    llama_model_load_internal: ggml ctx size = 0.06 MB
    error loading model: llama.cpp: tensor 'layers.21.ffn_norm.weight' is missing from model
    llama_load_model_from_file: failed to load model
    2023-08-01 03:19:06 ERROR:Failed to load the model.
    Traceback (most recent call last):
      File "C:\Users\king\Documents\oobabooga_windows\text-generation-webui\server.py", line 68, in load_model_wrapper
        shared.model, shared.tokenizer = load_model(shared.model_name, loader)
      File "C:\Users\king\Documents\oobabooga_windows\text-generation-webui\modules\models.py", line 78, in load_model
        output = load_func_map[loader](model_name)
      File "C:\Users\king\Documents\oobabooga_windows\text-generation-webui\modules\models.py", line 241, in llamacpp_loader
        model, tokenizer = LlamaCppModel.from_pretrained(model_file)
      File "C:\Users\king\Documents\oobabooga_windows\text-generation-webui\modules\llamacpp_model.py", line 60, in from_pretrained
        result.model = Llama(**params)
      File "C:\Users\king\Documents\oobabooga_windows\installer_files\env\lib\site-packages\llama_cpp\llama.py", line 314, in __init__
        assert self.model is not None
    AssertionError
    Exception ignored in: <function Llama.__del__ at 0x000001DF574491B0>
    Traceback (most recent call last):
      File "C:\Users\king\Documents\oobabooga_windows\installer_files\env\lib\site-packages\llama_cpp\llama.py", line 1511, in __del__
        if self.ctx is not None:
    AttributeError: 'Llama' object has no attribute 'ctx'
    Exception ignored in: <function LlamaCppModel.__del__ at 0x000001DF570C6D40>
    Traceback (most recent call last):
      File "C:\Users\king\Documents\oobabooga_windows\text-generation-webui\modules\llamacpp_model.py", line 29, in __del__
        self.model.__del__()
    AttributeError: 'LlamaCppModel' object has no attribute 'model'
any Idea? | 2023-08-01T11:23:37 | https://www.reddit.com/r/LocalLLaMA/comments/15f9qd3/textgenerationwebui_not_working/ | Antwaa_Sensei | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15f9qd3 | false | null | t3_15f9qd3 | /r/LocalLLaMA/comments/15f9qd3/textgenerationwebui_not_working/ | false | false | self | 1 | null |
ARB: Advanced Reasoning Benchmark for Large Language Models | 1 | 2023-08-01T11:46:52 | https://arb.duckai.org/ | Balance- | arb.duckai.org | 1970-01-01T00:00:00 | 0 | {} | 15fa848 | false | null | t3_15fa848 | /r/LocalLLaMA/comments/15fa848/arb_advanced_reasoning_benchmark_for_large/ | false | false | default | 1 | null |
GGML Guide | 1 | I've been playing around with LLMs all summer but finally have the capability of fine-tuning one, which I have successfully done (with LoRA). However, I am getting quite lost when trying to figure out how to:
a) Merge the weights into the model
b) Quantize this model (with updated weights) to GGML
Can anyone point me in the right direction via guides, articles, etc?
| 2023-08-01T11:49:10 | https://www.reddit.com/r/LocalLLaMA/comments/15fa9vg/ggml_guide/ | AdNo2339 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fa9vg | false | null | t3_15fa9vg | /r/LocalLLaMA/comments/15fa9vg/ggml_guide/ | false | false | self | 1 | null |
I'm building a system with an Nvidia A6000 (not Ada). Does anyone have any mobo/build recommendations or experience with this card? | 1 | I'm getting an A6000 from work (possibly 2nd also w/ nvlink) and I want to build an LLM capable system around it.
Just curious what other people's experience has been using this card with LLMs.
I want to future proof myself as much as possible, so I'd like to get an AI workloads friendly mobo/CPU.
Thanks! | 2023-08-01T11:58:45 | https://www.reddit.com/r/LocalLLaMA/comments/15fahgh/im_building_a_system_with_an_nvidia_a6000_not_ada/ | Jzzzishereyo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fahgh | false | null | t3_15fahgh | /r/LocalLLaMA/comments/15fahgh/im_building_a_system_with_an_nvidia_a6000_not_ada/ | false | false | self | 1 | null |
How to finetune Llama 2 chat on local and also quantize? | 1 | Hi, in my system I have a 3090 and 128GB ddr4 ram
I have a quite big dataset of Swedish chats that I want to finetune a Llama 2 chat model with.
My questions are:
What steps do I need to take for the finetuning? What steps do I need to take for turning it into a gptq 4bit or 8bit? Should I do the finetuning before or after the quantization? I want to be able to call the final model through command line as I want my web backend app to interact with it.
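In case it helps anyone answering: the workflow I have in mind is roughly the sketch below (QLoRA via bitsandbytes + PEFT + TRL, with GPTQ quantization done afterwards on the merged model). The model name, dataset path and hyperparameters are just placeholders, so please correct me if the order or the approach is wrong:

    import torch
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              BitsAndBytesConfig, TrainingArguments)
    from peft import LoraConfig
    from trl import SFTTrainer

    base = "meta-llama/Llama-2-13b-chat-hf"
    bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                             bnb_4bit_compute_dtype=torch.bfloat16)
    model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")
    tokenizer = AutoTokenizer.from_pretrained(base)
    tokenizer.pad_token = tokenizer.eos_token

    lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
                      target_modules=["q_proj", "v_proj"])
    # placeholder path; expects each record to have a "text" field with the formatted chat
    dataset = load_dataset("json", data_files="swedish_chats.jsonl", split="train")

    trainer = SFTTrainer(
        model=model, tokenizer=tokenizer, train_dataset=dataset,
        dataset_text_field="text", peft_config=lora,
        args=TrainingArguments(output_dir="out", per_device_train_batch_size=2,
                               gradient_accumulation_steps=8, num_train_epochs=1,
                               learning_rate=2e-4, bf16=True, logging_steps=20),
    )
    trainer.train()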
Would appreciate if anyone would be able to give me a few pointers, thanks! :) | 2023-08-01T13:18:07 | https://www.reddit.com/r/LocalLLaMA/comments/15fcdrn/how_to_finetune_llama_2_chat_on_local_and_also/ | VectorD | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fcdrn | false | null | t3_15fcdrn | /r/LocalLLaMA/comments/15fcdrn/how_to_finetune_llama_2_chat_on_local_and_also/ | false | false | self | 1 | null |
I can't stop asking about llamas | 1 | ​
https://preview.redd.it/6i57jnrw2ifb1.png?width=696&format=png&auto=webp&s=9983967ae79a886e37c625991fedc26f1c81712a | 2023-08-01T13:26:10 | https://www.reddit.com/r/LocalLLaMA/comments/15fckwz/i_cant_stop_asking_about_llamas/ | Fusseldieb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fckwz | false | null | t3_15fckwz | /r/LocalLLaMA/comments/15fckwz/i_cant_stop_asking_about_llamas/ | false | false | 1 | null |
Exposing a fine tuned Llama model with PERT QLORA int4 with Fast API within a Docker container | 1 | For the ones that want a really simple way to expose their local fine tuned model with PEFT within a Docker container image I just put some simple code on GitHub
https://github.com/fbellame/peft-gpu-inference
Keep in mind this is a personal and educational project, you can for sure improve it, it is very basic ☺️ | 2023-08-01T14:08:30 | https://www.reddit.com/r/LocalLLaMA/comments/15fdo4p/exposing_a_fine_tuned_llama_model_with_pert_qlora/ | Smart-Substance8449 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fdo4p | false | null | t3_15fdo4p | /r/LocalLLaMA/comments/15fdo4p/exposing_a_fine_tuned_llama_model_with_pert_qlora/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'xKOKu6N_lA4NvYhjizyMra3NInN3Tt99ua25bL0xRV0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vpM_dqu8IP8Q3E6ITKyckCREKPWO0bG9BQaf0jInEcw.jpg?width=108&crop=smart&auto=webp&s=1e1b56bf17f43468e169f42353b069d7b4e4c5b4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vpM_dqu8IP8Q3E6ITKyckCREKPWO0bG9BQaf0jInEcw.jpg?width=216&crop=smart&auto=webp&s=3437169cf3f2a5126c1c89d8dac82fdb04375ea4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vpM_dqu8IP8Q3E6ITKyckCREKPWO0bG9BQaf0jInEcw.jpg?width=320&crop=smart&auto=webp&s=d7a083986c352f87163ebde525311cf9025069a0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vpM_dqu8IP8Q3E6ITKyckCREKPWO0bG9BQaf0jInEcw.jpg?width=640&crop=smart&auto=webp&s=e401c94e0159aa8ddb12ddb64c082e597b815cfd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vpM_dqu8IP8Q3E6ITKyckCREKPWO0bG9BQaf0jInEcw.jpg?width=960&crop=smart&auto=webp&s=ceb99dc199a0eaeab770a8b988da10d4a61e6d4e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vpM_dqu8IP8Q3E6ITKyckCREKPWO0bG9BQaf0jInEcw.jpg?width=1080&crop=smart&auto=webp&s=9ba481c62db45b2ec7bc17b899bda99b76f042ba', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vpM_dqu8IP8Q3E6ITKyckCREKPWO0bG9BQaf0jInEcw.jpg?auto=webp&s=b767c2cc30359d7dfc3d6d19661e77eb703977a2', 'width': 1200}, 'variants': {}}]} |
Exposing a fine tuned Llama model with PEFT QLORA int4 with Fast API within a Docker container | 1 | For the ones that want a really simple way to expose their local fine tuned model with PEFT within a Docker container image I just put some simple code on GitHub:
https://github.com/fbellame/peft-gpu-inference
Keep in mind this is a personal and educational project, you can for sure improve it, it is very basic ☺️ | 2023-08-01T14:11:09 | https://www.reddit.com/r/LocalLLaMA/comments/15fdqlr/exposing_a_fine_tuned_llama_model_with_peft_qlora/ | Smart-Substance8449 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fdqlr | false | null | t3_15fdqlr | /r/LocalLLaMA/comments/15fdqlr/exposing_a_fine_tuned_llama_model_with_peft_qlora/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'xKOKu6N_lA4NvYhjizyMra3NInN3Tt99ua25bL0xRV0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vpM_dqu8IP8Q3E6ITKyckCREKPWO0bG9BQaf0jInEcw.jpg?width=108&crop=smart&auto=webp&s=1e1b56bf17f43468e169f42353b069d7b4e4c5b4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vpM_dqu8IP8Q3E6ITKyckCREKPWO0bG9BQaf0jInEcw.jpg?width=216&crop=smart&auto=webp&s=3437169cf3f2a5126c1c89d8dac82fdb04375ea4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vpM_dqu8IP8Q3E6ITKyckCREKPWO0bG9BQaf0jInEcw.jpg?width=320&crop=smart&auto=webp&s=d7a083986c352f87163ebde525311cf9025069a0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vpM_dqu8IP8Q3E6ITKyckCREKPWO0bG9BQaf0jInEcw.jpg?width=640&crop=smart&auto=webp&s=e401c94e0159aa8ddb12ddb64c082e597b815cfd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vpM_dqu8IP8Q3E6ITKyckCREKPWO0bG9BQaf0jInEcw.jpg?width=960&crop=smart&auto=webp&s=ceb99dc199a0eaeab770a8b988da10d4a61e6d4e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vpM_dqu8IP8Q3E6ITKyckCREKPWO0bG9BQaf0jInEcw.jpg?width=1080&crop=smart&auto=webp&s=9ba481c62db45b2ec7bc17b899bda99b76f042ba', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vpM_dqu8IP8Q3E6ITKyckCREKPWO0bG9BQaf0jInEcw.jpg?auto=webp&s=b767c2cc30359d7dfc3d6d19661e77eb703977a2', 'width': 1200}, 'variants': {}}]} |
Has anyone had success fine tuning the LLama2 models in colab? | 1 | I keep trying to finetune but use up all the free RAM before the model is even fully loaded. Does anyone have a solution that avoids the RAM errors?
Also does anyone have information on how much data you can train per free session on colab?
Are there any quantized 7B/13B/65B models you can fine-tune in free colabs?
Thanks in advance for any replies! | 2023-08-01T14:32:12 | https://www.reddit.com/r/LocalLLaMA/comments/15feaai/has_anyone_had_success_fine_tuning_the_llama2/ | randomrealname | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15feaai | false | null | t3_15feaai | /r/LocalLLaMA/comments/15feaai/has_anyone_had_success_fine_tuning_the_llama2/ | false | false | self | 1 | null |
Model interactions and the understanding of object visualization | 1 | I generally use LLM's for RP - writing science fiction as well as fantasy. So my usage is specialized and covering conversation over factual question and answer or code generation, etc. I typically use models more capable of chat and find them to have 'personalities' of sorts. It's relatively easy for me to know what model I'm using by the responses. Probably others have the same experience.
Do models understand visualization of three-dimensional space when giving their responses? I'm going to say that Claude 2 does not. It said as much (though I suspect the model is inferior for anything I would need). But I've noticed this in many of my RP sessions with offline models ranging from WizardLM to Airoboros and so forth... I've tried a few.
The characters teleport across spaces to be in a new position - so no concept of time and movement. The body distorts into descriptions of positions that would break spines, and it definitely has issues with where a foot is as it relates to a knee. There's no understanding of how anatomy like that works. It doesn't try at all to visually place objects into a scene; it's just words in order, nothing more, and we're left to imagine and fill in its lapses.
Is this a failure of the technology, an oversight, or is it just deemed an unimportant factor in LLM design at this time? What other failings exist that I'm not noticing? I know time is one, as well as visualization; it can cheat smell through word use, but I know it's not smelling, it's acting at best. So it leaves me to wonder, what exactly is this technology doing aside from question and answer? The chat side specifically: what is motivating how it answers, if not visualization and perception of time? Is it the fine tuning, feeding it other real-world chats and dialogs so it can infer time and space? (Poorly)
​ | 2023-08-01T14:49:45 | https://www.reddit.com/r/LocalLLaMA/comments/15feqn7/model_interactions_and_the_understanding_of/ | Fuzzlewhumper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15feqn7 | false | null | t3_15feqn7 | /r/LocalLLaMA/comments/15feqn7/model_interactions_and_the_understanding_of/ | false | false | self | 1 | null |
What are the GitHub repo that provide api inference for llama | 1 | [removed] | 2023-08-01T15:01:47 | https://www.reddit.com/r/LocalLLaMA/comments/15ff270/what_are_the_github_repo_that_provide_api/ | mrtac96 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ff270 | false | null | t3_15ff270 | /r/LocalLLaMA/comments/15ff270/what_are_the_github_repo_that_provide_api/ | false | false | self | 1 | null |
Is it possible for LLama to call my API? Similar to ChatGPT Plugins | 1 | Hey,
I want to build a ChatBot that would interact with my API.
ChatGPT has this ability with Chat Plugins.
[https://platform.openai.com/docs/plugins/introduction](https://platform.openai.com/docs/plugins/introduction)
Is something like that possible with LLaMA?
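For context, my (possibly naive) understanding is that the plugin pattern is mostly prompting plus parsing, so any capable local model should be able to do it. Something like the toy loop below is what I have in mind. `llm_generate` is just a placeholder for whatever backend runs Llama, and `get_weather` / the URL are made-up examples:

    import json
    import re
    import requests

    TOOL_PROMPT = (
        "You can call the tool get_weather(city).\n"
        'If you need it, reply with exactly: CALL get_weather("<city>")\n'
        "Otherwise answer the user directly.\n"
        "User: {question}\n"
        "Assistant:"
    )

    def llm_generate(prompt: str) -> str:
        # placeholder: call your local Llama backend here (llama.cpp server, text-generation-webui API, ...)
        raise NotImplementedError

    def answer(question: str) -> str:
        reply = llm_generate(TOOL_PROMPT.format(question=question))
        match = re.search(r'CALL get_weather\("(.+?)"\)', reply)
        if not match:
            return reply                      # model answered directly, no tool call needed
        city = match.group(1)
        data = requests.get("https://example.invalid/weather", params={"city": city}).json()
        followup = f"Tool result: {json.dumps(data)}\nNow answer the user: {question}"
        return llm_generate(followup)         # second pass with the API result in context

Is that basically it, or is there tooling people actually use for this (LangChain tools, grammar-constrained sampling in llama.cpp, etc.)?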
Thanks | 2023-08-01T15:32:11 | https://www.reddit.com/r/LocalLLaMA/comments/15ffvmq/is_it_possible_for_llama_to_call_my_api_similar/ | easterneuropeanstyle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ffvmq | false | null | t3_15ffvmq | /r/LocalLLaMA/comments/15ffvmq/is_it_possible_for_llama_to_call_my_api_similar/ | false | false | self | 1 | null |
ChatGPT Plugin | 1 | [removed] | 2023-08-01T15:33:50 | https://www.reddit.com/r/LocalLLaMA/comments/15ffx8d/chatgpt_plugin/ | Floatbot_Inc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ffx8d | false | null | t3_15ffx8d | /r/LocalLLaMA/comments/15ffx8d/chatgpt_plugin/ | false | false | self | 1 | null |
Presenting "The Muse" - a logit sampler that makes LLMs more creative | 1 | Inspired by [this post](https://www.reddit.com/r/LocalLLaMA/comments/15ea9jl/extremely_repetitivedeterministic_content_over/) by /u/CulturedNiichan, it got me thinking about why should all the focus be on models? Feels weird that nobody is experimenting with samplers so I set out to change that.
I have put together a [working prototype](https://github.com/the-crypt-keeper/the-muse) of a logits processor that can be used with the transformers library.
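For anyone who hasn't poked at this layer before: a logits processor is just a callable that rewrites the next-token scores before sampling. The snippet below is a deliberately dumbed-down illustration of the idea (not the actual Muse code, see the repo for that), and `gpt2` is only a stand-in model:

    import torch
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              LogitsProcessor, LogitsProcessorList)

    class DampenTopToken(LogitsProcessor):
        """Toy 'creativity' tweak: knock a fixed amount off the logit of the
        single most likely token so runner-up tokens get sampled more often."""
        def __init__(self, damp: float = 1.0):
            self.damp = damp  # 0 = no effect, higher = more adventurous (and less coherent)

        def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
            top = scores.argmax(dim=-1, keepdim=True)
            scores.scatter_(-1, top, scores.gather(-1, top) - self.damp)
            return scores

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    inputs = tok("Once upon a time, there was a polar bear", return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=60, do_sample=True,
                         logits_processor=LogitsProcessorList([DampenTopToken(damp=2.0)]))
    print(tok.decode(out[0], skip_special_tokens=True))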
Some example generations without 'the muse':
> Once upon a time, there was a polar bear named Bucky. He lived in the Arctic with his family in a cozy igloo. Bucky loved to play outside in the snow, but he also enjoyed staying inside and watching the snowflakes fall. One day, Bucky's mom told him that they were going to a special place. It was a place where they could see the northern lights. Bucky was excited because he had never seen the northern lights before. When they arrived at the special place, they saw a beautiful display of colors in the sky. Bucky was mesmerized by the lights
> Once upon a time, there was a polar bear named Misha. Misha lived in the Arctic, where the temperature hovered around -40 degrees Celsius. Misha loved to swim in the icy waters of the Arctic, and he had a special affinity for the icy waters of the Arctic Sea. One day, Misha was swimming in the Arctic Sea when he spotted something unusual in the water. It was a small, furry creature with big, round eyes. Misha knew that the creature was not a seal, but a penguin. Misha approached the penguin cautiously. The penguin,
> Once upon a time, there lived polar bears in the Arctic. They were the most powerful bears in the land. They had the largest paws and the strongest claws. They were fierce and fierce. They had long, white fur and they were very big. They had sharp teeth and they were very strong. They were the kings of the Arctic. One day, a polar bear came to the Arctic. He was very tall and he had a long white beard. He was very wise and he had many stories to tell. He was the king of the Arctic. The polar bear walked through the Arctic. He saw many
Kinda boring and repetitive, right? Let's see what the muse can do:
> Polar Bears
Once, there lived polar bear in a remote land far from human civilization, living on a remote ice shelf. He had a family, and he had everything that a polar bears needed for a happy and fulfilling existence: food and habitat, a comfortable life and close friends and kin, but the one aspect of life he could never have, he was denied the most important element: a companion of the same kind, someone who would understand him, and be there with a shoulder for a bear of a cry or for the occasional tear of loneliness and despair, a person to love, and share the ups with and to help bear down
> Polar Bears in Space! ~~~~~ ~~~~~~~ ~~~ ~~~~~~~~~~~~~~~
A long, cold, lonely time in the cosmosphere
In space there was no life. No other creatures, plants or creatures. The sun shone bright and hot and it warmed everything around. But there were some creatures in space, they didn’d look anything alike, and were so strange to see, that it made people stare. They looked almost human but were completely furred, with huge eyes, that looked straight at the observer with the gaze that would make anyone feel uncomfortable and at ease all in one go, the way a bear’
> The sun had set as it was approaching its last moments in this day, the night was setting as it came. As darkness took hold of this day and the stars shone in their beautiful brightness. It seemed that all the world had gone silent as all sounds were lost. A lone figure was standing there in front, it had long been a tradition that every new moon the bears of this island come to this very place and dance around in celebration. They danced and danced and as they were about done the moon had set. As they danced, the wind picked the final leaves off of one of those beautiful old maple tree, the leaves
> Polar Bears: A Love-haze Story. The story takes a twist in an unfamiliar way, as I'm not used of the plot of a bear loving human, and the ending was a little bit of an abrupt. I apologize if it is in bad English, but it's a first attempt and my AI doesn’ t know much yet, and my grammar and syntax is a little off, I'm still trying. Here is my attempt at writing about a bear that fell for an alien. Please tell if you think I could do a good enough story, or should rewrite the plot: A bear was walking on a desolated Earth
A lot more fun, right? Would love to hear what you guys think. I haven't tried it with Llama-based models but I expect it should work the same. Note that setting damp too low breaks coherence (in fun and exciting ways!). | 2023-08-01T15:36:38 | https://www.reddit.com/r/LocalLLaMA/comments/15ffzw5/presenting_the_muse_a_logit_sampler_that_makes/ | kryptkpr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ffzw5 | false | null | t3_15ffzw5 | /r/LocalLLaMA/comments/15ffzw5/presenting_the_muse_a_logit_sampler_that_makes/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'vOl2zDzsvvicibfcQIUS3uML-sX3L-dLcqbxFGWaN5Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rLILt4a1Y1146m5n-k-oEOJDUlVjCAZlXMKoib0q6i0.jpg?width=108&crop=smart&auto=webp&s=9bf3369917a3bd65b18ecf0942cd55df24321fa7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rLILt4a1Y1146m5n-k-oEOJDUlVjCAZlXMKoib0q6i0.jpg?width=216&crop=smart&auto=webp&s=1f3fdde450496be9364e99180018297d9212ff70', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rLILt4a1Y1146m5n-k-oEOJDUlVjCAZlXMKoib0q6i0.jpg?width=320&crop=smart&auto=webp&s=ba7b83718316f2ff094a1c90c87c7f5f7c44a6cd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rLILt4a1Y1146m5n-k-oEOJDUlVjCAZlXMKoib0q6i0.jpg?width=640&crop=smart&auto=webp&s=dcf4f5359f0813b68e7a1a464cf9957d3e58ba37', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rLILt4a1Y1146m5n-k-oEOJDUlVjCAZlXMKoib0q6i0.jpg?width=960&crop=smart&auto=webp&s=d5fb3ca0db45dfe3269532b1c0973c3b3e06fead', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rLILt4a1Y1146m5n-k-oEOJDUlVjCAZlXMKoib0q6i0.jpg?width=1080&crop=smart&auto=webp&s=1ec49fa1ef88685eafc578950e2a972b48482a2b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rLILt4a1Y1146m5n-k-oEOJDUlVjCAZlXMKoib0q6i0.jpg?auto=webp&s=2e1bda327224c86f937bc739470290ce776692e5', 'width': 1200}, 'variants': {}}]} |
Looking for sentence-transformer libraries that I can use locally with JavaScript | 1 | The Python library "sentence_transformers" from HuggingFace is amazing for generating embeddings locally from a variety of models.
Do you know any similar options for JavaScript?
What I found is only this one [https://www.npmjs.com/package/@tensorflow-models/universal-sentence-encoder](https://www.npmjs.com/package/@tensorflow-models/universal-sentence-encoder).
My goal is to load a model locally and generate a set of embeddings for vector search.
​ | 2023-08-01T15:37:17 | https://www.reddit.com/r/LocalLLaMA/comments/15fg0hk/looking_for_sentencetransformer_libraries_that_i/ | Bright_Mission_8279 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fg0hk | false | null | t3_15fg0hk | /r/LocalLLaMA/comments/15fg0hk/looking_for_sentencetransformer_libraries_that_i/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '3CAm7f2euOP7diXidheIHavSdc1loh3U46B-FOssKu4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/WaI7ci8y_BucxfTyRMw9rEGVoXvk-w3erN7z645l-H8.jpg?width=108&crop=smart&auto=webp&s=29849972d1063666bb20bfca982ed849dbab0739', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/WaI7ci8y_BucxfTyRMw9rEGVoXvk-w3erN7z645l-H8.jpg?width=216&crop=smart&auto=webp&s=c2e78155bcf431bc82859db1b9cc141779445961', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/WaI7ci8y_BucxfTyRMw9rEGVoXvk-w3erN7z645l-H8.jpg?width=320&crop=smart&auto=webp&s=7b8fc1121ee3f0761b7c5ec9e306f65c99c715db', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/WaI7ci8y_BucxfTyRMw9rEGVoXvk-w3erN7z645l-H8.jpg?width=640&crop=smart&auto=webp&s=18cce76337e2ca3f939805374b20a68b0a1671af', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/WaI7ci8y_BucxfTyRMw9rEGVoXvk-w3erN7z645l-H8.jpg?width=960&crop=smart&auto=webp&s=940123d8c0b4043a88a028062a5a195676254f4d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/WaI7ci8y_BucxfTyRMw9rEGVoXvk-w3erN7z645l-H8.jpg?width=1080&crop=smart&auto=webp&s=70f261d64e65120035e417a634c19726e4e3576d', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/WaI7ci8y_BucxfTyRMw9rEGVoXvk-w3erN7z645l-H8.jpg?auto=webp&s=dc9f3722e4f26a0d394e974bdc658bd002ee6f3d', 'width': 1200}, 'variants': {}}]} |
SLAM-group/newhope: NewHope: Harnessing 99% of GPT-4's Programming Capabilities | 1 | 2023-08-01T15:58:48 | https://github.com/slam-group/newhope | Remarkable_Ad4470 | github.com | 1970-01-01T00:00:00 | 0 | {} | 15fgl8b | false | null | t3_15fgl8b | /r/LocalLLaMA/comments/15fgl8b/slamgroupnewhope_newhope_harnessing_99_of_gpt4s/ | false | false | 1 | {'enabled': False, 'images': [{'id': '9LNS5ljJxvlYC_o0anLVOzzx4CdpWQWgAJqTaAOCJvY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jRQrJ-VZ13E37tNLv6f1yFNxbCna-RevdYWcn2zWv8U.jpg?width=108&crop=smart&auto=webp&s=5e25f280dcd71c062119fd080af6030e7fbfe168', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jRQrJ-VZ13E37tNLv6f1yFNxbCna-RevdYWcn2zWv8U.jpg?width=216&crop=smart&auto=webp&s=70764ff44e9fd72bbc3a88552cfb670b68711583', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jRQrJ-VZ13E37tNLv6f1yFNxbCna-RevdYWcn2zWv8U.jpg?width=320&crop=smart&auto=webp&s=05c2c11d0584789dca60ecc3e63815dceecf03a5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jRQrJ-VZ13E37tNLv6f1yFNxbCna-RevdYWcn2zWv8U.jpg?width=640&crop=smart&auto=webp&s=b63c0401b61eaf97da8838c5cffb63d0ea3f57c5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jRQrJ-VZ13E37tNLv6f1yFNxbCna-RevdYWcn2zWv8U.jpg?width=960&crop=smart&auto=webp&s=2e0abdf35c6634b9c23ebc34a4ebcc6ffb642fd8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jRQrJ-VZ13E37tNLv6f1yFNxbCna-RevdYWcn2zWv8U.jpg?width=1080&crop=smart&auto=webp&s=d53df347ad6474900b75c7e95f0f4f48a4c17979', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jRQrJ-VZ13E37tNLv6f1yFNxbCna-RevdYWcn2zWv8U.jpg?auto=webp&s=516d95910696ca5a23f46709bc51542db561d71b', 'width': 1200}, 'variants': {}}]} |
Hermes LLongMA-2 8k | 1 | Releasing Hermes-LLongMA-2 8k, a series of Llama-2 models, trained at 8k context length using linear positional interpolation scaling. The models were trained in collaboration with Teknium1 and u/emozilla of NousResearch, and u/kaiokendev.
The Hermes-LLongMA-2-8k 13b can be found on huggingface here: [https://huggingface.co/conceptofmind/Hermes-LLongMA-2-13b-8k](https://huggingface.co/conceptofmind/Hermes-LLongMA-2-13b-8k)
The Hermes-LLongMA-2-8k 7b model can be found on huggingface here: [https://huggingface.co/conceptofmind/Hermes-LLongMA-2-7b-8k](https://huggingface.co/conceptofmind/Hermes-LLongMA-2-7b-8k)
The NousResearch Hermes dataset consists of over 300,000 instruction data points. Thank you to karan4d and Teknium1 for providing the data to train these models.
You can find out more about the NousResearch organization here: [https://huggingface.co/NousResearch](https://huggingface.co/NousResearch)
We worked directly with u/kaiokendev to extend the context length of the Llama-2 13b model through fine-tuning. The model passes all our evaluations and maintains the same perplexity at 8k extrapolation, surpassing the performance of other recent methodologies.
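For readers new to the technique: linear positional interpolation just divides the position indices fed into the rotary embedding by the scaling factor (2x here, since Llama-2's native 4k is stretched to 8k). A stripped-down sketch of the idea follows; it is illustrative only, the real implementation lives in the scaled-rope repo linked below:

    import torch

    def rope_angles(seq_len: int, head_dim: int, base: float = 10000.0, scale: float = 2.0):
        """Rotary embedding angles with linear positional interpolation:
        positions are divided by `scale`, squeezing an 8k-token sequence into
        the 0..4k positional range the base Llama-2 model was pretrained on."""
        inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2, dtype=torch.float32) / head_dim))
        positions = torch.arange(seq_len, dtype=torch.float32) / scale   # the only change vs. vanilla RoPE
        angles = torch.outer(positions, inv_freq)                        # (seq_len, head_dim // 2)
        return angles.cos(), angles.sin()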
The repository containing u/emozilla’s implementation of scaled rotary embeddings can be found here: [https://github.com/jquesnelle/scaled-rope](https://github.com/jquesnelle/scaled-rope)
If you would like to learn more about scaling rotary embeddings, I would strongly recommend reading u/kaiokendev's blog posts on his findings: [https://kaiokendev.github.io/](https://kaiokendev.github.io/)
A PR to add scaled rotary embeddings to huggingface transformers has been added by [@joao\_gante](https://twitter.com/joao_gante) and merged: [https://github.com/huggingface/transformers/pull/24653](https://github.com/huggingface/transformers/pull/24653)
We previously trained the first publicly available model with rotary embedding scaling here: [https://twitter.com/EnricoShippole/status/1655599301454594049?s=20](https://twitter.com/EnricoShippole/status/1655599301454594049?s=20)
The compute for this model release is all thanks to the generous sponsorship by CarperAI, Emad Mostaque, and StabilityAI. This is not an official StabilityAI product.
A big thank you to EleutherAI for facilitating the discussions about context-length extrapolation as well. Truly an awesome open-source team and community.
If you have any questions about the data or model be sure to reach out and ask! I will try to respond promptly.
The previous suite of LLongMA 16k model releases can be found here: https://twitter.com/EnricoShippole/status/1684947213024112640?s=20
All of the models can be found on Huggingface: [https://huggingface.co/conceptofmind](https://huggingface.co/conceptofmind) | 2023-08-01T16:23:27 | https://www.reddit.com/r/LocalLLaMA/comments/15fh91g/hermes_llongma2_8k/ | EnricoShippole | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fh91g | false | null | t3_15fh91g | /r/LocalLLaMA/comments/15fh91g/hermes_llongma2_8k/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '6xO5w85tnfPtWxODrA_bbbEGiooxe3_5mCGu_tSb1V0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wGn684GpkSO_A5auBtsYbcvOQxKT-px3AV9UcpJDuzw.jpg?width=108&crop=smart&auto=webp&s=d7f400ef26c563d146bd8d0c44797bb4e2d6079b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/wGn684GpkSO_A5auBtsYbcvOQxKT-px3AV9UcpJDuzw.jpg?width=216&crop=smart&auto=webp&s=bc46b17bd2ca47db6c5088dcc1ccfa0f5ff02e8d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/wGn684GpkSO_A5auBtsYbcvOQxKT-px3AV9UcpJDuzw.jpg?width=320&crop=smart&auto=webp&s=2d71a0ff7417d632728d43082603436b433d40f6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/wGn684GpkSO_A5auBtsYbcvOQxKT-px3AV9UcpJDuzw.jpg?width=640&crop=smart&auto=webp&s=280bc7c3bf16ec5a5cb69db77d9b06f0e00451b5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/wGn684GpkSO_A5auBtsYbcvOQxKT-px3AV9UcpJDuzw.jpg?width=960&crop=smart&auto=webp&s=fe7086fbdde414c31a3f52c701916805aa0be213', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/wGn684GpkSO_A5auBtsYbcvOQxKT-px3AV9UcpJDuzw.jpg?width=1080&crop=smart&auto=webp&s=7e2a051aaab3cdbe1c156e869e1412fba5e44220', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/wGn684GpkSO_A5auBtsYbcvOQxKT-px3AV9UcpJDuzw.jpg?auto=webp&s=0c47f4206f01f6464a217b92c57b443759332fa7', 'width': 1200}, 'variants': {}}]} |
Why does the model refuse to predict EOS token/finish its response? | 1 | Hey everyone!
I am working on training a custom chatbot based on llama 2 7b. I adapted OpenAssistant's prompt format (see here [OpenAssistant/llama2-13b-orca-8k-3319 · Hugging Face](https://huggingface.co/OpenAssistant/llama2-13b-orca-8k-3319)) and it looks like this:
https://preview.redd.it/0rrqr07yvhfb1.png?width=695&format=png&auto=webp&s=5b469f45049c8e16b4543bc4fc8f62b0d7cbd925
Not sure why, but if I use the </s> token (the standard eos token, see link above for context) the loss just explodes. So I added a custom <|end|> token. With the custom end token it trains just fine BUT **the model simply refuses to predict the <|end|> token**; it generates its response indefinitely.
I checked datagenerators, everything is fine, labels include the <|end|> token. Tokenizer knows about the <|end|> token. Model knows about the <|end|> token.
I also checked the logits for the <|end|> token and they have low values. When the model is supposed to predict <|end|>, it predicts the `_` symbol (token id = 29871) instead. Like... why?
**I literally have no idea why the hell the model doesn't want to predict <|end|>.**
\---------
Additional details:
- I use 'meta-llama/Llama-2-7b-hf';
- I use LoRA, the model is loaded in 8bit;
- batch size is 32;
- ctx len 2048;
- I do resize the model's vocab;
- I have tried token forcing, beam search, repetition penalty - nothing solves the problem;
- I tried other prompt formats. The model answers the request just fine, but still can't finish its response.
Example of a (broken) response:
[There must be <|end|> after 11 but the little AI refuses to work properly](https://preview.redd.it/ix1oe8o6tifb1.png?width=588&format=png&auto=webp&s=7956d7eca9e4487316ad28f39702aea872db346c)
How tokenizer is set:
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL, use_fast=False)
tokenizer.add_special_tokens({
"additional_special_tokens":
[AddedToken("<|system|>"), AddedToken("<|user|>"), AddedToken("<|assistant|>"), AddedToken("<|end|>")],
"pad_token": '<pad>'
})
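For anyone hitting the same wall, here is the checklist I'm working through, written out as a sketch (illustrative, not my exact training code): the embedding matrix has to be resized after adding the tokens, and <|end|> has to actually appear, unmasked, as the final target token of every training example. (If I read the vocab right, id 29871 is SentencePiece's `▁` word-boundary piece, which is what Llama falls back to when it never learned the token you expect.)

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, AddedToken

    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", use_fast=False)
    tokenizer.add_special_tokens({
        "additional_special_tokens": [AddedToken("<|system|>"), AddedToken("<|user|>"),
                                      AddedToken("<|assistant|>"), AddedToken("<|end|>")],
        "pad_token": "<pad>",
    })

    model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
    model.resize_token_embeddings(len(tokenizer))  # without this, the new ids point at uninitialized rows

    end_id = tokenizer.convert_tokens_to_ids("<|end|>")
    sample = "<|user|>2+2?<|assistant|>4<|end|>"
    ids = tokenizer(sample, add_special_tokens=False)["input_ids"]
    # <|end|> must be the last target token AND its label must not be masked to -100
    assert ids[-1] == end_id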
I suspect there is a connection to padding/token ids issues in llama: [What are the eos\_token\_id and bos\_token\_id · Issue #279 · tloen/alpaca-lora (github.com)](https://github.com/tloen/alpaca-lora/issues/279) . Though it's an old one and I'm not sure if it persists in the newest llama 2.
I wanted to try adding a high weight on the loss for this token, but it doesn't seem like HF supports loss weights. Considering others are training their chatbots w/o any problem (well, those who succeeded in training), I think I made some kind of mistake. And I want to go down the rabbit hole instead of using others' code and closing my eyes to this strange issue (I'm gonna ~~fucking~~ FIX YOU), I have to understand what's going on.
Did anyone face the same/similar issue? What is the reason of such strange behavior? Any hypotheses?
Any help/comment would be very appreciated. | 2023-08-01T16:29:53 | https://www.reddit.com/r/LocalLLaMA/comments/15fhf33/why_does_the_model_refuse_to_predict_eos/ | oKatanaa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fhf33 | false | null | t3_15fhf33 | /r/LocalLLaMA/comments/15fhf33/why_does_the_model_refuse_to_predict_eos/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'MFl3Kjrjun4AxUHe9xvVT5VFQFl7jhi66SIW8hmSRL0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5KfeBqsljbBnMjdc3r6kzCXQ5vAtDMZH-rjyrKRJii4.jpg?width=108&crop=smart&auto=webp&s=598fe2faf02eda681c445984db5ecd5ede792c2e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/5KfeBqsljbBnMjdc3r6kzCXQ5vAtDMZH-rjyrKRJii4.jpg?width=216&crop=smart&auto=webp&s=acde38f348e73cae2a607ad5d5f63e6502bc2186', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/5KfeBqsljbBnMjdc3r6kzCXQ5vAtDMZH-rjyrKRJii4.jpg?width=320&crop=smart&auto=webp&s=408634cf83b305e87e9e165f94b95aafd5a5b560', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/5KfeBqsljbBnMjdc3r6kzCXQ5vAtDMZH-rjyrKRJii4.jpg?width=640&crop=smart&auto=webp&s=e0b52b23809fa0c1be277c27b98785604f46ea8e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/5KfeBqsljbBnMjdc3r6kzCXQ5vAtDMZH-rjyrKRJii4.jpg?width=960&crop=smart&auto=webp&s=d3b5fb02362d43d7af1e9c2d5c15295674d7bc71', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/5KfeBqsljbBnMjdc3r6kzCXQ5vAtDMZH-rjyrKRJii4.jpg?width=1080&crop=smart&auto=webp&s=c55dfb15477b55e580d4d1bc4c53b6b60c83c9bd', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/5KfeBqsljbBnMjdc3r6kzCXQ5vAtDMZH-rjyrKRJii4.jpg?auto=webp&s=5982887e7d9f050422c3f19695b2483655dbe4ea', 'width': 1200}, 'variants': {}}]} |
Guide to running llama.cpp on Windows+Powershell+AMD GPUs | 1 | Hi!
I have an ASUS AMD Advantage Edition laptop (https://rog.asus.com/laptops/rog-strix/2021-rog-strix-g15-advantage-edition-series/) that runs windows. I haven't gotten time to install linux and set it up the way I like yet, still after more than a year.
I'm just dropping a small write-up for the set-up that I'm using with llama.cpp to run on the discrete GPUs using clbast.
You can use Kobold but it's meant for more role-playing stuff and I wasn't really interested in that. Funny thing is, Kobold can be set up to use the discrete GPU if needed.
1. For starters you'd need llama.cpp itself from here: https://github.com/ggerganov/llama.cpp/tags.
Pick the clblast version, which will help offload some computation over to the GPU. Unzip the download to a directory. I unzipped it to a folder called this: "D:\Apps\llama\"
2. You'd need an LLM now, and that can be obtained from HuggingFace or wherever you'd like. Just note that it should be in GGML format. If you're in doubt, the models from HuggingFace will have "ggml" written somewhere in the filename. The ones I downloaded were "nous-hermes-llama2-13b.ggmlv3.q4_1.bin" and
"Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_0.bin"
3. Move the models to the llama directory you made above. That makes life much easier.
4. You don't really need to navigate to the directory using Explorer. Just open PowerShell anywhere and then:
cd D:\Apps\llama\
5. Here comes the fiddly part. You need to get the device ids for the GPU. An easy way to check this is to use "GPU caps viewer", go to the tab titled OpenCl and check the dropdown next to "No. of CL devices".
The discrete GPU is normally loaded as the second or after the integrated GPU. In my case the integrated GPU was gfx90c and discrete was gfx1031c.
6. In the powershell window, you need to set the relevant variables that tell llama.cpp what opencl platform and devices to use. If you're using AMD driver package, opencl is already installed, so you needn't uninstall or reinstall drivers and stuff.
$env:GGML_OPENCL_PLATFORM = "AMD"
$env:GGML_OPENCL_DEVICE = "1"
7. Check if the variables are exported properly
Get-ChildItem env:GGML_OPENCL_PLATFORM
Get-ChildItem env:GGML_OPENCL_DEVICE
This should return the following:
Name Value
---- -----
GGML_OPENCL_PLATFORM AMD
GGML_OPENCL_DEVICE 1
8. Once these are set properly, run llama.cpp using the following:
D:\Apps\llama\main.exe -m D:\Apps\llama\nous-hermes-llama2-13b.ggmlv3.q4_1.bin -ngl 33 -i --threads 8 --interactive-first -r "### Human:"
OR
replace the model file with Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_0.bin or whatever LLM you'd like. I like to play with 7B and 13B models with 4_0 or 5_0 quantization. You might need to trawl through the fora here to find parameters for temperature, etc. that work for you.
9. Checking if these work, I asked the model (nous-hermes-llama2-13b.ggmlv3.q4_1.bin) to create an R-shiny app that generates a 4x4 dataframe. I haven't tested the app yet (I know, I know!). I've posted the content at pastebin since reddit formatting these was a paaaain. Pastebin: https://pastebin.com/peSFyF6H
salient features @ gfx1031c (6800M discrete graphics):
llama_print_timings: load time = 60188.90 ms
llama_print_timings: sample time = 3.58 ms / 103 runs ( 0.03 ms per token, 28770.95 tokens per second)
llama_print_timings: prompt eval time = 7133.18 ms / 43 tokens ( 165.89 ms per token, 6.03 tokens per second)
llama_print_timings: eval time = 13003.63 ms / 102 runs ( 127.49 ms per token, 7.84 tokens per second)
llama_print_timings: total time = 622870.10 ms
salient features @ gfx90c (cezanne architecture integrated graphics):
llama_print_timings: load time = 26205.90 ms
llama_print_timings: sample time = 6.34 ms / 103 runs ( 0.06 ms per token, 16235.81 tokens per second)
llama_print_timings: prompt eval time = 29234.08 ms / 43 tokens ( 679.86 ms per token, 1.47 tokens per second)
llama_print_timings: eval time = 118847.32 ms / 102 runs ( 1165.17 ms per token, 0.86 tokens per second)
llama_print_timings: total time = 159929.10 ms | 2023-08-01T16:30:54 | https://www.reddit.com/r/LocalLLaMA/comments/15fhg3v/guide_to_running_llamacpp_on_windowspowershellamd/ | fatboy93 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fhg3v | false | null | t3_15fhg3v | /r/LocalLLaMA/comments/15fhg3v/guide_to_running_llamacpp_on_windowspowershellamd/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'P1dQ0akhLF4j5uS9ILxJgL_mWQFrZ135Q9cviEofnFE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/EnUusRKUTUiEcxuIR200kPV6uCrzMSF376vtU6FaGwU.jpg?width=108&crop=smart&auto=webp&s=de516b95f908dfb812da167b434812b825e34454', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/EnUusRKUTUiEcxuIR200kPV6uCrzMSF376vtU6FaGwU.jpg?width=216&crop=smart&auto=webp&s=452562e4de78b0cfe871eb05ae1232f4f299032d', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/EnUusRKUTUiEcxuIR200kPV6uCrzMSF376vtU6FaGwU.jpg?width=320&crop=smart&auto=webp&s=08b724b833e28c3adec6d32d1b05e116497968bf', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/EnUusRKUTUiEcxuIR200kPV6uCrzMSF376vtU6FaGwU.jpg?width=640&crop=smart&auto=webp&s=b2d1acb8619b78714c48606a7c83ea9bd8edf46b', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/EnUusRKUTUiEcxuIR200kPV6uCrzMSF376vtU6FaGwU.jpg?width=960&crop=smart&auto=webp&s=9faef34eab3cb9d588d3ec9880699094d2d2ad9b', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/EnUusRKUTUiEcxuIR200kPV6uCrzMSF376vtU6FaGwU.jpg?width=1080&crop=smart&auto=webp&s=211e6397c6fe993c946bd1ac381b33c93082feb7', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://external-preview.redd.it/EnUusRKUTUiEcxuIR200kPV6uCrzMSF376vtU6FaGwU.jpg?auto=webp&s=e82955c1722b1c6763f058dd4c9259f6a0f893a3', 'width': 2400}, 'variants': {}}]} |
Need Feedback on my Chatbot/Assistant Project | 1 | An Open Source, locally ran Ai Chatbot/Assistant/Retrieval Framework focused on realistic long term memory that I have been working on silently for a while now. Its current tools include a websearch/scrape and file reading Chatbot/Agent. Uses Llama 2 and Qdrant. A walkthrough of the Agent Architecture can be found in the Github readme.
I need feedback on areas where it can be improved as well as feature suggestions. Feel free to be harsh in the feedback lol, I want complete honesty.
Github: [https://github.com/libraryofcelsus/Aetherius\_AI\_Assistant](https://github.com/libraryofcelsus/Aetherius_AI_Assistant) | 2023-08-01T16:38:29 | https://www.reddit.com/r/LocalLLaMA/comments/15fhnco/need_feedback_on_my_chatbotassistant_project/ | libraryofcelsus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fhnco | false | null | t3_15fhnco | /r/LocalLLaMA/comments/15fhnco/need_feedback_on_my_chatbotassistant_project/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'XtbDZ1XDF8uUs937-anSF3YMZHC3gHPO7pM_TUx9_Iw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-CCRumguekW3hNuJ9glYMukQelYQSDz0Il7UExaO3G0.jpg?width=108&crop=smart&auto=webp&s=9ee578e3165bc5138b5ae2382f44c3948e530181', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-CCRumguekW3hNuJ9glYMukQelYQSDz0Il7UExaO3G0.jpg?width=216&crop=smart&auto=webp&s=bc7862d6c80edec7c0c3025295e8cda81d29510e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-CCRumguekW3hNuJ9glYMukQelYQSDz0Il7UExaO3G0.jpg?width=320&crop=smart&auto=webp&s=8eaf1fd6c741b88e7350e6119418824ad4dd36b6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-CCRumguekW3hNuJ9glYMukQelYQSDz0Il7UExaO3G0.jpg?width=640&crop=smart&auto=webp&s=5a858c5b110d64818de5832d50fde0ee2e52cd0a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-CCRumguekW3hNuJ9glYMukQelYQSDz0Il7UExaO3G0.jpg?width=960&crop=smart&auto=webp&s=d77258531674a10986b0e197ef93415b89153d05', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-CCRumguekW3hNuJ9glYMukQelYQSDz0Il7UExaO3G0.jpg?width=1080&crop=smart&auto=webp&s=295c397911fd1a67673717e5c87cca0140e18b67', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-CCRumguekW3hNuJ9glYMukQelYQSDz0Il7UExaO3G0.jpg?auto=webp&s=abb915e09fdfb04bb4c2f10c90416d49314480e8', 'width': 1200}, 'variants': {}}]} |
Has Meta released the training dataset for LLaMa 2? | 1 | Have they? Or are they expected to do this anytime soon? Looking to dive a bit deeper on how this was exactly trained after reading their paper. | 2023-08-01T16:57:04 | https://www.reddit.com/r/LocalLLaMA/comments/15fi4s0/has_meta_released_the_training_dataset_for_llama_2/ | komninosc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fi4s0 | false | null | t3_15fi4s0 | /r/LocalLLaMA/comments/15fi4s0/has_meta_released_the_training_dataset_for_llama_2/ | false | false | self | 1 | null |
Testing Llama2 for Coding in the Wild | 1 | Hi LocalLlama! I’m working on an open-source IDE extension ([Continue](https://github.com/continuedev/continue)) that makes it easier to code with LLMs. We just released Llama-2 support using [Ollama](https://github.com/jmorganca/ollama) (imo the fastest way to setup Llama-2 on Mac), and would love to get some feedback on how well it works. With benchmarks like MMLU being separated from real-world quality, we’re hoping that Continue can serve as the easiest place to “smell test” new models with in-the-wild code as they are released.
(below is Llama2 answering a question about our codebase)
https://preview.redd.it/idthid8h7jfb1.png?width=3024&format=png&auto=webp&s=b1b409eca6583dfbae72d786f9327336d69e88f3 | 2023-08-01T17:16:18 | https://www.reddit.com/r/LocalLLaMA/comments/15finnn/testing_llama2_for_coding_in_the_wild/ | sestinj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15finnn | false | null | t3_15finnn | /r/LocalLLaMA/comments/15finnn/testing_llama2_for_coding_in_the_wild/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'jUnQdNvwhORoZruvbhF0LAyynAiF6-H-GDy2MHSXT7I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Uqw27n14qNrPk1Fj1oFiqIeJfVkCJTDqfiQJSIOCYRs.jpg?width=108&crop=smart&auto=webp&s=cb73100887e20259b4089f30086fb83bfa2db08c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Uqw27n14qNrPk1Fj1oFiqIeJfVkCJTDqfiQJSIOCYRs.jpg?width=216&crop=smart&auto=webp&s=4a4c621615e3a5f40bc6d9c7f929b76aee89f8c1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Uqw27n14qNrPk1Fj1oFiqIeJfVkCJTDqfiQJSIOCYRs.jpg?width=320&crop=smart&auto=webp&s=f8500ee31633e1941a5d9b3924e199573e63fe16', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Uqw27n14qNrPk1Fj1oFiqIeJfVkCJTDqfiQJSIOCYRs.jpg?width=640&crop=smart&auto=webp&s=d5ec2f1fe070ed4463322d71343eebd937de5ede', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Uqw27n14qNrPk1Fj1oFiqIeJfVkCJTDqfiQJSIOCYRs.jpg?width=960&crop=smart&auto=webp&s=e9793d9960f2008611716b3a0581fb2702bda79f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Uqw27n14qNrPk1Fj1oFiqIeJfVkCJTDqfiQJSIOCYRs.jpg?width=1080&crop=smart&auto=webp&s=12e7466d3471b26014a8994bfe11ef44b2ad5d13', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Uqw27n14qNrPk1Fj1oFiqIeJfVkCJTDqfiQJSIOCYRs.jpg?auto=webp&s=7323103ac3f35dfd68ddd62a5511a6e318ee9fa0', 'width': 1200}, 'variants': {}}]} |
How many people in this subreddit are using Llama 2 in production? | 1 | It seems like most people are using GPT-4 or 3.5 in production dataflows right now. I'm curious how you all have found ways to make Llama work in production, and how you tested it on your data?
Llama 2 lora training with text generation webui? | 1 | I'm fairly used to creating loras with llama 1 models. But I seem to be doing something wrong when it comes to llama 2. I'm trying to use text generation webui with a small alpaca formatted dataset. Everything seems to go as I'd expect at first. The UI accepts the dataset, during training it iterates over every step. and then there's the usual completed message and a new lora to use. But when I load the lora I get one of two results. It's either something like a big mess of letters rather than real words, or an answer that differs from what was in my dataset. I've double checked and the llama2 models I'm trying to train on all run fine without a lora.
I'm wondering, are there any settings that need to be changed when training on a llama2 model compared to llama1? | 2023-08-01T18:08:01 | https://www.reddit.com/r/LocalLLaMA/comments/15fk2vz/llama_2_lora_training_with_text_generation_webui/ | toothpastespiders | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fk2vz | false | null | t3_15fk2vz | /r/LocalLLaMA/comments/15fk2vz/llama_2_lora_training_with_text_generation_webui/ | false | false | self | 1 | null |
Simple demo app of TinyStories-1m that runs locally on iOS | 1 | Hey all,
After reading the TinyStories paper, a friend and I decided to port it as a test to see what the performance of tiny LMs looks like on mobile devices. The stories it generates aren't great or anything; it's really just a toy tech demo to play with. We used HuggingFace's exporters library to do this.
You can try it out here: [https://apps.apple.com/us/app/tinystories-on-device/id6451497115](https://apps.apple.com/us/app/tinystories-on-device/id6451497115)
(we are not affiliated with MS research or the authors) | 2023-08-01T18:30:21 | https://www.reddit.com/r/LocalLLaMA/comments/15fknyd/simple_demo_app_of_tinystories1m_that_runs/ | seattleeng | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fknyd | false | null | t3_15fknyd | /r/LocalLLaMA/comments/15fknyd/simple_demo_app_of_tinystories1m_that_runs/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '_S5p3sjqtu5y6CyIjmX5oa0uOEnamMztVcKwlYS-Hdc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/QI2Ccm_sEqNd1w2WKKcc6rlSlP-8I5QJdcMSKLw81dY.jpg?width=108&crop=smart&auto=webp&s=df0228238b5ace1d6482fbe64b1863efbc1d7c60', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/QI2Ccm_sEqNd1w2WKKcc6rlSlP-8I5QJdcMSKLw81dY.jpg?width=216&crop=smart&auto=webp&s=235786de3142e47d0964499f49df2c497fca9e01', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/QI2Ccm_sEqNd1w2WKKcc6rlSlP-8I5QJdcMSKLw81dY.jpg?width=320&crop=smart&auto=webp&s=76f4e44c6c5fd2a37f9b442d20323bca76ae5f7b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/QI2Ccm_sEqNd1w2WKKcc6rlSlP-8I5QJdcMSKLw81dY.jpg?width=640&crop=smart&auto=webp&s=f55cd499f4b140691256677c0e8c7513072e9c05', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/QI2Ccm_sEqNd1w2WKKcc6rlSlP-8I5QJdcMSKLw81dY.jpg?width=960&crop=smart&auto=webp&s=ef11f438de2ff7b4d7b1a1b1d019f597053a4941', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/QI2Ccm_sEqNd1w2WKKcc6rlSlP-8I5QJdcMSKLw81dY.jpg?width=1080&crop=smart&auto=webp&s=960cf4ba64eaba955895e34ab1ea47289026f01a', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/QI2Ccm_sEqNd1w2WKKcc6rlSlP-8I5QJdcMSKLw81dY.jpg?auto=webp&s=fc86aeab2188ee6449810ee1f310d57434cb33b2', 'width': 1200}, 'variants': {}}]} |
How to find good llama.cpp command line parameters | 6 | What are good llama.cpp command line parameters for the llama 2 nous hermes model? And should I use different parameters for Q&A vs role play vs story writing? Do you know a good website with info for this?
Currently I'm using this for everything
GGML_OPENCL_PLATFORM=AMD GGML_OPENCL_DEVICE=1 ./main -m ./models/nous-hermes-llama2-13b.ggmlv3.q5_1.bin --color --ignore-eos --temp .7 --mirostat 1 --mirostat-ent 4 --mirostat-lr 0.2 --repeat-last-n 1600 --repeat-penalty 1.2 --gpu-layers 25 --interactive-first --multiline-input | 2023-08-01T18:55:58 | https://www.reddit.com/r/LocalLLaMA/comments/15flc5w/how_to_find_good_llamacpp_command_line_parameters/ | TypeDeep4564 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15flc5w | false | null | t3_15flc5w | /r/LocalLLaMA/comments/15flc5w/how_to_find_good_llamacpp_command_line_parameters/ | false | false | self | 6 | null |
What do you need to evaluate LLMs in dev & prod? Tell us and we'll build it! | 1 | 2023-08-01T20:07:54 | https://docs.google.com/forms/d/e/1FAIpQLScfZ_4MSVmsiaoEByb_Y2tk--J-xtV35P6OnAiyaihbrjwlQQ/viewform | Sciencepeaches | docs.google.com | 1970-01-01T00:00:00 | 0 | {} | 15fn9l3 | false | null | t3_15fn9l3 | /r/LocalLLaMA/comments/15fn9l3/what_do_you_need_to_evaluate_llms_in_dev_prod/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'aSd8ncNmnnembYorW0afv1xj2Hz2v5fwuLipkGABsSY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/SHJGtjA0dtC91vJD09yZIfzKKz8GPwKkrrw1ruE17X0.jpg?width=108&crop=smart&auto=webp&s=5c37ab092aea51e7d8fe5b6e454b73dd2af9d685', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/SHJGtjA0dtC91vJD09yZIfzKKz8GPwKkrrw1ruE17X0.jpg?width=216&crop=smart&auto=webp&s=5a24ec5eea2909444420c49c93d58f3aaa26a3f7', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/SHJGtjA0dtC91vJD09yZIfzKKz8GPwKkrrw1ruE17X0.jpg?width=320&crop=smart&auto=webp&s=56e9502f843fb8c19f7b578fcdb0fa4798ba91d4', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/SHJGtjA0dtC91vJD09yZIfzKKz8GPwKkrrw1ruE17X0.jpg?width=640&crop=smart&auto=webp&s=3eda8bdbc31fca5fbaf076ed7009600a4fcd1284', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/SHJGtjA0dtC91vJD09yZIfzKKz8GPwKkrrw1ruE17X0.jpg?width=960&crop=smart&auto=webp&s=569b2d80ddeba8825267f7c9607e466f381cce02', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/SHJGtjA0dtC91vJD09yZIfzKKz8GPwKkrrw1ruE17X0.jpg?width=1080&crop=smart&auto=webp&s=9a9c89878a8538c3e4dda30aebcdeccd9e530365', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/SHJGtjA0dtC91vJD09yZIfzKKz8GPwKkrrw1ruE17X0.jpg?auto=webp&s=4baa88eda2d32856c095157b32497cfdcac07817', 'width': 1200}, 'variants': {}}]} |
||
Commercial opportunities | 1 | [removed] | 2023-08-01T20:21:16 | https://www.reddit.com/r/LocalLLaMA/comments/15fnm5r/commercial_opportunities/ | Few-Thing-166 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fnm5r | false | null | t3_15fnm5r | /r/LocalLLaMA/comments/15fnm5r/commercial_opportunities/ | false | false | self | 1 | null |
EASIET WAY TO FINETUNE LLAMA2 ON YOUR OWN DATA | 1 | [removed] | 2023-08-01T20:41:17 | https://www.reddit.com/r/LocalLLaMA/comments/15fo58x/easiet_way_to_finetune_llama2_on_your_own_data/ | zeroninezerotow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fo58x | false | null | t3_15fo58x | /r/LocalLLaMA/comments/15fo58x/easiet_way_to_finetune_llama2_on_your_own_data/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'oHGgRUEnF3rmMmZKQYmXJ1Ts7qQRSzXIVGSNMB83tD8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DNjgzmLud--jwDJQN1-AXhwTkVNYUQRPiY44TasBKT0.jpg?width=108&crop=smart&auto=webp&s=eeba688ec613f6a70840f6874aced163b3fec757', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DNjgzmLud--jwDJQN1-AXhwTkVNYUQRPiY44TasBKT0.jpg?width=216&crop=smart&auto=webp&s=b7fd265843c4ea84514ace6b387c695cc2072e72', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DNjgzmLud--jwDJQN1-AXhwTkVNYUQRPiY44TasBKT0.jpg?width=320&crop=smart&auto=webp&s=d474f7d71a48ce67530c41bf224e59de15080e51', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DNjgzmLud--jwDJQN1-AXhwTkVNYUQRPiY44TasBKT0.jpg?width=640&crop=smart&auto=webp&s=259fcc945d48ad7e99dc32b518c916adb81c56db', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DNjgzmLud--jwDJQN1-AXhwTkVNYUQRPiY44TasBKT0.jpg?width=960&crop=smart&auto=webp&s=a6ae048824736e1af8ebc31ce7b2ed3ea4665f52', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DNjgzmLud--jwDJQN1-AXhwTkVNYUQRPiY44TasBKT0.jpg?width=1080&crop=smart&auto=webp&s=9ed1fe666ed17ac50769db45f07138bda332de6f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DNjgzmLud--jwDJQN1-AXhwTkVNYUQRPiY44TasBKT0.jpg?auto=webp&s=584c20db0d2ca677799c7bffc9a85ecbe3fbdb23', 'width': 1200}, 'variants': {}}]} |
Prompt Engineering World Championships - $15k prize | 1 | Hey everyone, I'm excited to announce the first Prompt Engineering World Championships, which will begin on August 14th and offer a grand prize of $15,000 (along with other category prizes). This is a chance to find out how you measure up to prompt engineers across the globe. Let's see what you've got!
Competition Link: [https://app.openpipe.ai/world-champs/](https://app.openpipe.ai/world-champs/signup)
Here's what you need to know:
* Signups are limited to the first 3000 participants
* Contestants will compete to engineer the most accurate prompts to extract information from several provided datasets
* The winner of the grand prize will be the engineer who can get the highest average performance across all included models
* Models include GPT-3.5, Claude 2, and Llama 7b, 13b, and 70b fine-tuned assistants
* The competition is free to participate (all inference costs will be covered, unless you want to use your own API keys)
I look forward to competing with you, and may the best engineer win! | 2023-08-01T21:15:33 | https://www.reddit.com/r/LocalLLaMA/comments/15fp2gf/prompt_engineering_world_championships_15k_prize/ | arctic_fly | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fp2gf | false | null | t3_15fp2gf | /r/LocalLLaMA/comments/15fp2gf/prompt_engineering_world_championships_15k_prize/ | false | false | self | 1 | null |
llama.cpp can now be used as a transformers model in text-generation-webui. | 1 | This is done with the [llamacpp_HF](https://github.com/oobabooga/text-generation-webui/blob/main/modules/llamacpp_hf.py) wrapper, which I have finally managed to optimize ([spoiler: it was a one line change](https://github.com/oobabooga/text-generation-webui/commit/b53ed70a70d51a26d61939b7b04b98f8cf20638a)).
It is now about as fast as using llama.cpp directly, but with the following benefits:
1) More samplers. Transformers parameters like epsilon_cutoff, eta_cutoff, and encoder_repetition_penalty can be used.
2) Special tokens. By using the transformers Llama tokenizer with llama.cpp, special tokens like `<s>` and `</s>` are tokenized correctly. This is essential for using the llama-2 chat models, as well as other fine-tunes like Vicuna. To my knowledge, special tokens are currently a challenge in llama.cpp.
3) Custom transformers logits processors. For instance, recently I wrote a simple processor that makes chat replies longer by banning the `\n` token until enough tokens have been generated ([here is a screenshot](https://www.reddit.com/r/oobaboogazz/comments/15ewy0e/testing_the_new_long_replies_extension_with_base/)). It works out of the box with llamacpp_HF. [The Muse](https://www.reddit.com/r/LocalLLaMA/comments/15ffzw5/presenting_the_muse_a_logit_sampler_that_makes/), recently posted on this sub, will also work.
I started giving this more attention due to the 34b version of llama-2 being on hold. This makes running 13b in 8-bit precision the best option for those with 24GB GPUs. Transformers has the `load_in_8bit` option, but it's very slow and unoptimized in comparison to `load_in_4bit`. ExLlama doesn't support 8-bit GPTQ models, so llama.cpp 8-bit through llamacpp_HF emerges as a good option for people with those GPUs until 34b gets released. | 2023-08-01T21:26:21 | https://www.reddit.com/r/LocalLLaMA/comments/15fpcrj/llamacpp_can_now_be_used_as_a_transformers_model/ | oobabooga4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fpcrj | false | null | t3_15fpcrj | /r/LocalLLaMA/comments/15fpcrj/llamacpp_can_now_be_used_as_a_transformers_model/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'b6EiNgkERGN8zqIBCzmRnp4bHtYaxYS3mgTeZwXh7_Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/urOD25rP4DYI715WtAxpw7j4WpmBajrqo6cJcwkNpkc.jpg?width=108&crop=smart&auto=webp&s=3794235c3d391e6453a5d7ec6eb71b3f731219ff', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/urOD25rP4DYI715WtAxpw7j4WpmBajrqo6cJcwkNpkc.jpg?width=216&crop=smart&auto=webp&s=772ce9e613304ad25bcb8d9c9a85a7bf05c25a0f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/urOD25rP4DYI715WtAxpw7j4WpmBajrqo6cJcwkNpkc.jpg?width=320&crop=smart&auto=webp&s=b7e4a0189e2d2de0a8c5906412caea751030b114', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/urOD25rP4DYI715WtAxpw7j4WpmBajrqo6cJcwkNpkc.jpg?width=640&crop=smart&auto=webp&s=866bb29e47461563d963b25ec51f496e59166de7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/urOD25rP4DYI715WtAxpw7j4WpmBajrqo6cJcwkNpkc.jpg?width=960&crop=smart&auto=webp&s=3f335aaae24c50806bf8f0c4608da9356883ff67', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/urOD25rP4DYI715WtAxpw7j4WpmBajrqo6cJcwkNpkc.jpg?width=1080&crop=smart&auto=webp&s=70936b7238e791caa0514dacf20598c40df35237', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/urOD25rP4DYI715WtAxpw7j4WpmBajrqo6cJcwkNpkc.jpg?auto=webp&s=9fce788ff2e94bfb796c49f218f67cf1c03b033c', 'width': 1200}, 'variants': {}}]} |
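For anyone curious what such a custom logits processor looks like, here is a minimal sketch using the standard transformers `LogitsProcessor` interface. The class name, the length threshold, and how the newline token id is obtained are illustrative assumptions, not the actual implementation of the extension mentioned above:

```python
import torch
from transformers import LogitsProcessor, LogitsProcessorList

class BanNewlineUntilMinLength(LogitsProcessor):
    """Forbid a given token (e.g. the newline token) until enough new tokens exist."""

    def __init__(self, banned_token_id: int, min_new_tokens: int, prompt_length: int):
        self.banned_token_id = banned_token_id    # e.g. looked up via the tokenizer
        self.min_new_tokens = min_new_tokens
        self.prompt_length = prompt_length        # number of prompt tokens

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        generated_so_far = input_ids.shape[1] - self.prompt_length
        if generated_so_far < self.min_new_tokens:
            # -inf makes the token impossible to sample this step.
            scores[:, self.banned_token_id] = float("-inf")
        return scores
```

It would then be passed to `model.generate(..., logits_processor=LogitsProcessorList([...]))`; the llamacpp_HF wrapper is what makes the same mechanism usable with GGML-backed models.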
This is ridiculous, but also hilarious | 1 | 2023-08-01T21:29:55 | https://www.reddit.com/gallery/15fpg7n | holistic-engine | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 15fpg7n | false | null | t3_15fpg7n | /r/LocalLLaMA/comments/15fpg7n/this_is_ridiculous_but_also_hilarious/ | false | false | 1 | null |
||
Best way to run Llama 2 locally on GPUs for fastest inference time | 1 | I've been working on having a local Llama 2 model for reading my PDFs using LangChain, but currently inference time is too slow because I think it's running on CPUs with the GGML version of the model. So what would be the best implementation of Llama 2 locally? This includes which version (HF, GGML, GPTQ, etc.) and how I can maximize my GPU usage with that specific version, because I do have access to 4 Nvidia Tesla V100s. | 2023-08-01T22:00:18 | https://www.reddit.com/r/LocalLLaMA/comments/15fq8b0/best_way_to_run_llama_2_locally_on_gpus_for/ | SnooStrawberries2325 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fq8b0 | false | null | t3_15fq8b0 | /r/LocalLLaMA/comments/15fq8b0/best_way_to_run_llama_2_locally_on_gpus_for/ | false | false | self | 1 | null |
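One option worth trying on a multi-GPU box like that is tensor parallelism with vLLM, which keeps the fp16 weights on the GPUs and batches requests. A rough sketch, assuming vLLM works with your CUDA/driver setup and you have access to the HF model id shown here (GGML/GPTQ files would not be used in this path):

```python
from vllm import LLM, SamplingParams

# Shard the fp16 weights across the 4 V100s with tensor parallelism.
llm = LLM(model="meta-llama/Llama-2-13b-chat-hf", tensor_parallel_size=4)

params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=256)
outputs = llm.generate(["Summarize the following PDF excerpt: ..."], params)
print(outputs[0].outputs[0].text)
```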
Can anyone help me get past this error(s)? I can't load any models. | 1 | [removed] | 2023-08-01T22:14:08 | https://www.reddit.com/r/LocalLLaMA/comments/15fqkw6/can_anyone_help_me_get_past_this_errors_i_cant/ | Virtamancer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fqkw6 | false | null | t3_15fqkw6 | /r/LocalLLaMA/comments/15fqkw6/can_anyone_help_me_get_past_this_errors_i_cant/ | false | false | self | 1 | null |
Everybody is trying to beat ChatGPT, is there anyone trying to beat Elevenlabs? | 1 | Just like in the JS world, every day either a new model or a new fine-tuned LLM is released, which is great, and I have gotten choice fatigue. Organizations like Meta, Stability, EleutherAI, Salesforce and others are awesome for providing great resources to the open-source community, but I am also longing for a decent TTS that is not artificial and robotic.
What is the best open-source TTS out there? | 2023-08-01T23:06:12 | https://www.reddit.com/r/LocalLLaMA/comments/15fru44/everybody_is_trying_to_beat_chatgpt_is_there/ | boyetosekuji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fru44 | false | null | t3_15fru44 | /r/LocalLLaMA/comments/15fru44/everybody_is_trying_to_beat_chatgpt_is_there/ | false | false | self | 1 | null |
I have two basic questions about NPCs powered by LLMs | 1 | I want to know about hypothetical games whose NPCs are powered by LLMs.
1. **Tokenization.** As far as I know, an LLM can maintain its "understanding" of the conversation's context by analyzing what was previously said. However, it has a limit that is measured in tokens *(tokens are units that can be anything from single characters to whole expressions)*, so if the LLM used in the game has a limit of 2000 tokens *(let's say that 1 token = 1 word)*, it can analyze only the last 2000 words; anything you talked about beyond that is forever forgotten. That's a problem, because a single RPG powered by AI could be played for a literal decade; imagine that you're playing a game like Skyrim or The Witcher and you want to come back to an interesting peasant that you met 3 years ago (I really mean actual 3 years ago...). ***Here is my question:*** Are developers working on a way to store all the previous knowledge of all characters that the player interacted with, without "sacrificing" tokens? I mean, something like using an algorithm to compress the NPC's knowledge into a small file *(summarization!)* that can be easily recovered by the LLM without needing to use up tokens?
2. **Context.** People from a fantastic medieval world are not supposed to know what computers and ozone layer are, but they know first-hand that dragons exist. Is it possible to control what NPCs know given their context of life? Is it possible to individualize the knowledge of each character? For example, a peasant is not supposed to have a large knowledge of foreign languages and heraldry, a noble may know a terrible secret that nobody else knows. If I'm playing a new [Elder Scrolls](https://en.wikipedia.org/wiki/The_Elder_Scrolls) game, I would like to spend the afternoon talking to a mage librarian about the fate of the [dwemer](https://elderscrolls.fandom.com/wiki/Dwemer), and everything he or she says really fits with the lore, but he or she will think I am crazy if I start talking about AIs and social networks. | 2023-08-01T23:41:59 | https://www.reddit.com/r/LocalLLaMA/comments/15fso41/i_have_two_basic_questions_about_npcs_powered_by/ | maquinary | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fso41 | false | null | t3_15fso41 | /r/LocalLLaMA/comments/15fso41/i_have_two_basic_questions_about_npcs_powered_by/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'c-Ce_NNNtG1YXeev81b2bSVzrEbvVdDaBqMHkBY2YkU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/orAQLKXTfIcfVMCUcmYNFksnkeuN7adFDR8cxjxmtS8.jpg?width=108&crop=smart&auto=webp&s=78da3a6cb937b4c4bbbc5f8b26a8e2c1bcec909c', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/orAQLKXTfIcfVMCUcmYNFksnkeuN7adFDR8cxjxmtS8.jpg?width=216&crop=smart&auto=webp&s=f3bf66f438fdd3c818d774169ada7d5c36dd0ba1', 'width': 216}], 'source': {'height': 165, 'url': 'https://external-preview.redd.it/orAQLKXTfIcfVMCUcmYNFksnkeuN7adFDR8cxjxmtS8.jpg?auto=webp&s=e9aabe1bcd8a60f35f3592d5b6d6a6c749342fd8', 'width': 220}, 'variants': {}}]} |
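On question 1, the usual workaround is exactly the summarization idea described above: keep a compact, per-NPC memory outside the context window and only inject the relevant bits into the prompt when that character is on screen. A toy sketch of the idea (the `summarize` callable stands in for whatever model or pipeline would actually do the compression, and all names here are made up for illustration):

```python
class NPCMemory:
    """Stores compressed long-term memories per NPC, keyed by character name."""

    def __init__(self, summarize):
        self.summarize = summarize   # callable: long conversation text -> short summary
        self.memories = {}           # npc_name -> list of summary strings

    def remember(self, npc_name, conversation_text):
        summary = self.summarize(conversation_text)
        self.memories.setdefault(npc_name, []).append(summary)

    def build_prompt(self, npc_name, persona, player_line, max_memories=5):
        recalled = "\n".join(self.memories.get(npc_name, [])[-max_memories:])
        return (
            f"{persona}\n"
            f"Things {npc_name} remembers about the player:\n{recalled}\n"
            f"Player: {player_line}\n{npc_name}:"
        )
```

The `persona` string is also the natural place to address question 2: it is where you pin down what a given character does and does not know about the world.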
Reasonably Future proof Workstation for upcoming models | 2 | I'm building a PC workstation primarily for doing LLM model inference. My current plan is to pair a single RTX 4090 with a Ryzen 9 7950X and 32GB DDR5.
But looking at the trend of model sizes growing every year, I want to try my best to make sure I can keep up for a year or two. The current generation of 70B models already needs 2x 4090s or 3090s to run at reasonable speed. But I have a feeling even that's going to be insufficient in a year or two with a potentially bigger model drop.
If I wanted to future-proof the rest of the system apart from the GPU, what would I need?
- Should I go for a motherboard/CPU with a lot more PCIe lanes than 16 or 20? Threadripper Pro?
- Should I look for a motherboard/CPU that can support more than 2 memory channels?
- Am I overthinking this?
Thank you in advance. | 2023-08-02T00:53:59 | https://www.reddit.com/r/LocalLLaMA/comments/15fua3b/reasonably_future_proof_workstation_for_upcoming/ | LatterNeighborhood58 | self.LocalLLaMA | 2023-08-02T01:01:34 | 0 | {} | 15fua3b | false | null | t3_15fua3b | /r/LocalLLaMA/comments/15fua3b/reasonably_future_proof_workstation_for_upcoming/ | false | false | self | 2 | null |
Other Languages | 1 | I’ve been testing LLaMa 2 70b 4bit and it’s capable of answering in Portuguese with a good system message.
However, after a while it tends to revert back to English. I know Meta says it's optimized for English. What's the best way to teach it a new language, or improve one that it already knows? LoRA? | 2023-08-02T02:00:32 | https://www.reddit.com/r/LocalLLaMA/comments/15fvpu9/other_languages/ | blackpantera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fvpu9 | false | null | t3_15fvpu9 | /r/LocalLLaMA/comments/15fvpu9/other_languages/ | false | false | self | 1 | null |
how to understand how context size can be increased without finetuning | 1 | Hello,
I am having some difficulty getting the concept of how the context size of an LLM can be increased without finetuning. Some basic questions I have:
1. First, a basic question about what a "context size" limitation means: suppose an LLM has been trained on a context of 2k, and suppose I feed it a prompt of, say, 4k. Will it ignore the final 2k of the tokens? Or will it process all 4k, but since this context size is different from its training, we can't guarantee how it'll perform?
2. I found https://old.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have/
I see they base their work on recent RoPE techniques, but is there an intuitive explanation for how this "free lunch" happens? Are there any subtleties or caveats? Because it seems to be a performance improvement for free.
Thanks. | 2023-08-02T03:21:35 | https://www.reddit.com/r/LocalLLaMA/comments/15fxf11/how_to_understand_how_context_size_can_be/ | T_hank | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fxf11 | false | null | t3_15fxf11 | /r/LocalLLaMA/comments/15fxf11/how_to_understand_how_context_size_can_be/ | false | false | self | 1 | null |
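Roughly, the "free lunch" in the NTK-aware trick is that instead of squeezing more positions into the trained range uniformly (linear position interpolation), it rescales the RoPE frequency base so the high-frequency dimensions (which encode nearby-token order) stay almost untouched while only the low-frequency ones get stretched, so short-range behaviour degrades much less. A minimal sketch of the base rescaling from the linked post, assuming LLaMA's usual head dimension of 128 and base of 10000:

```python
import torch

def ntk_scaled_rope_inv_freq(dim=128, base=10000.0, alpha=4.0):
    """Return RoPE inverse frequencies with the NTK-aware scaled base.

    alpha is roughly the desired context multiplier (e.g. 4 for 2k -> 8k).
    """
    scaled_base = base * alpha ** (dim / (dim - 2))
    return 1.0 / (scaled_base ** (torch.arange(0, dim, 2).float() / dim))
```

It is not entirely free: past a point, quality still drops off, which is why the follow-up "dynamic" variants adjust alpha with the actual sequence length.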
LLAMA 2 70B - What am I doing wrong? | 1 | I have access to an AWS account where I can spin up an ml.g5.48xlarge
At a high level, my code:
```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.float16,
)
return model
```
Inference:
```python
from threading import Thread  # needed for the generation thread below
from transformers import TextIteratorStreamer

inputs = self.tokenizer([prompt], return_tensors="pt").to("cuda")
streamer = TextIteratorStreamer(
    self.tokenizer, timeout=10.0, skip_prompt=True, skip_special_tokens=True
)
generate_kwargs = dict(
    inputs,
    streamer=streamer,
    max_new_tokens=max_new_tokens,
    do_sample=True,
    top_p=top_p,
    top_k=top_k,
    temperature=temperature,
    num_beams=1,
)
t = Thread(target=self.model.generate, kwargs=generate_kwargs)
t.start()
outputs = []
for text in streamer:
    outputs.append(text)
    yield text
```
The 13B loads up fine and runs fine on a 12xlarge. I now want to try the 70B, so I spin up a g5.48xlarge, which apparently has 200G of VRAM, and the inference is painfully slow... like unusably slow. Is 200G of VRAM not enough to load it with torch_dtype=torch.float16?
Do I have to shell out for the p4 24xlarge A100 instances? Assuming I can even get them; I can barely get the g5.48xlarge, and only after trying a few times did I get one. | 2023-08-02T03:39:31 | https://www.reddit.com/r/LocalLLaMA/comments/15fxrl6/llama_2_70b_what_am_i_doing_wrong/ | Ok-Contribution9043 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fxrl6 | false | null | t3_15fxrl6 | /r/LocalLLaMA/comments/15fxrl6/llama_2_70b_what_am_i_doing_wrong/ | false | false | self | 1 | null |
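For reference, a g5.48xlarge has 8x A10G (8 x 24 GB = 192 GB), so the ~140 GB of fp16 70B weights do fit, but `device_map="auto"` spreads the layers across all eight cards and every generated token has to hop from GPU to GPU, which is usually why it feels unusably slow. One hedged alternative is to shrink the model with 4-bit quantization so it spans fewer devices; a sketch assuming a recent transformers + bitsandbytes install (model id shown as an example):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                        # ~35-40 GB of weights instead of ~140 GB
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_quant_type="nf4",
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-chat-hf",
    device_map="auto",
    quantization_config=quant_config,
)
```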
Best LLama2 model for storytelling | 1 | I was on vacation for 3 weeks and kinda fell out of the loop with all the new LLama2 stuff that happened. I used models like Airoboros and Chrono-Hermes in the past and wanted to ask if there are any LLama2-based models that perform better in the context of storytelling than those "old" models. | 2023-08-02T03:39:40 | https://www.reddit.com/r/LocalLLaMA/comments/15fxron/best_llama2_model_for_storytelling/ | TheZoroark007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fxron | false | null | t3_15fxron | /r/LocalLLaMA/comments/15fxron/best_llama2_model_for_storytelling/ | false | false | self | 1 | null |
Seeking Advice: Looking for Good Story Generation Models with Potato Computer Specs | 2 | Hello everyone, I'm new to Local LLM. Does anyone know of any good models for generating stories? I only have a potato computer with the following specs: Ryzen 5 2600, 16GB RAM, and a GTX 1070 with 8GB VRAM. Can my PC handle any of these models? If so, please give me some tips on how I can generate a good story. Thank you in advance. | 2023-08-02T03:58:56 | https://www.reddit.com/r/LocalLLaMA/comments/15fy5cw/seeking_advice_looking_for_good_story_generation/ | MaxxNiNo1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fy5cw | false | null | t3_15fy5cw | /r/LocalLLaMA/comments/15fy5cw/seeking_advice_looking_for_good_story_generation/ | false | false | default | 2 | null |
Instruction Tuned Llama 2 13b? | 1 | I've been using Llama 2 13b on a RTX 2080. I want to use langchain to get the models to use tools. It's not working, I'm getting a failed to parse error and on investigating I read that it may be because the model is not fine tuned to follow instructions. Looking on Hugging Face I found instruct versions of Llama 2 70b but no Llama 2 13b instruct. Does such a model exist? Or is there any way to get a smaller model to use tools with langchain because I don't think it is possible for me to run the 70b model on a RTX 2080. | 2023-08-02T04:12:10 | https://www.reddit.com/r/LocalLLaMA/comments/15fyfi7/instruction_tuned_llama_2_13b/ | tail-recursion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fyfi7 | false | null | t3_15fyfi7 | /r/LocalLLaMA/comments/15fyfi7/instruction_tuned_llama_2_13b/ | false | false | self | 1 | null |
Has anyone been able to get GGML models working with Langchain Agents | 1 | I have tried with LLAMA2-7B 8-bit quantized and Gorilla 7B GGML 8-bit models. It integrates correctly with LangChain. I added two tools, Wikipedia and DuckDuckgo search. I asked it: when was Barack Obama born? Straightforward question.
The Action it outputs is not restricted. It should output
Action: DuckDuckgo
Action Input: Barack Obama birthday
Instead it outputs
Action: DuckDuckgo Barack Obama birth date
and some other stuff in the Action Input, like Barack Obama's birthday, the correct year, etc.
The problem is LangChain needs an exact match, not a fuzzy match. Hence it says such a tool does not exist and does not do the search.
I am checking if anyone has ever got LangChain Agents working with GGML models and could figure out a way to make them output properly (in a reproducible manner). | 2023-08-02T04:13:53 | https://www.reddit.com/r/LocalLLaMA/comments/15fygru/has_anyone_been_able_to_get_ggml_models_working/ | PrivateUser010 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fygru | false | null | t3_15fygru | /r/LocalLLaMA/comments/15fygru/has_anyone_been_able_to_get_ggml_models_working/ | false | false | self | 1 | null |
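One workaround that tends to help with smaller local models is replacing the agent's strict MRKL parsing with a more forgiving custom parser that re-extracts a known tool name from whatever the model wrote. A hedged sketch (base classes and import paths are from LangChain as of mid-2023 and may need adjusting to your version; the fuzzy-matching logic is just one possible heuristic):

```python
import re
from typing import Union

from langchain.agents import AgentOutputParser
from langchain.schema import AgentAction, AgentFinish

class ForgivingOutputParser(AgentOutputParser):
    tools: list  # known tool names, e.g. ["Wikipedia", "DuckDuckgo"]

    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        if "Final Answer:" in llm_output:
            return AgentFinish({"output": llm_output.split("Final Answer:")[-1].strip()}, llm_output)
        action_match = re.search(r"Action:\s*(.*)", llm_output)
        input_match = re.search(r"Action Input:\s*(.*)", llm_output)
        raw_action = action_match.group(1) if action_match else llm_output
        # Fuzzy match: pick the first known tool name that appears in the Action line.
        tool = next((t for t in self.tools if t.lower() in raw_action.lower()), self.tools[0])
        # If no separate Action Input was produced, reuse the rest of the Action line.
        tool_input = input_match.group(1).strip() if input_match else raw_action.replace(tool, "").strip()
        return AgentAction(tool=tool, tool_input=tool_input, log=llm_output)
```

Pairing this with a stop sequence on "Observation:" (so the model cannot ramble past the action block) usually makes the behaviour much more reproducible.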
LLama2-70B on aws | 1 | Has anyone set up Llama 2 on AWS? What kind of machine was needed? I am able to spin up the 7B and 13B on a g5.12xlarge, but the 70B is unusably slow even on a 48xlarge. | 2023-08-02T04:41:54 | https://www.reddit.com/r/LocalLLaMA/comments/15fz08q/llama270b_on_aws/ | Ok-Contribution9043 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15fz08q | false | null | t3_15fz08q | /r/LocalLLaMA/comments/15fz08q/llama270b_on_aws/ | false | false | self | 1 | null |
Orca-Mini 3b on Pixel 3 (4gb)! | 1 | 2023-08-02T05:14:02 | https://asciinema.org/a/600208 | Aaaaaaaaaeeeee | asciinema.org | 1970-01-01T00:00:00 | 0 | {} | 15fzmsz | false | null | t3_15fzmsz | /r/LocalLLaMA/comments/15fzmsz/orcamini_3b_on_pixel_3_4gb/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'T5aHkDjFy8a6BF_mvQ9Oe8D5sPrhNb6BLF4Q_XDs0tI', 'resolutions': [{'height': 121, 'url': 'https://external-preview.redd.it/lZBDKRayEy0Gazzl2qR_A1ok-nQSV0KDaZjKMKa-vsk.jpg?width=108&crop=smart&auto=webp&s=b2553693a97640644ebd0cdf6e574b9e02296281', 'width': 108}, {'height': 242, 'url': 'https://external-preview.redd.it/lZBDKRayEy0Gazzl2qR_A1ok-nQSV0KDaZjKMKa-vsk.jpg?width=216&crop=smart&auto=webp&s=fbd58580b6f8bbef04176fd6aa5ef8d334dff8d1', 'width': 216}, {'height': 358, 'url': 'https://external-preview.redd.it/lZBDKRayEy0Gazzl2qR_A1ok-nQSV0KDaZjKMKa-vsk.jpg?width=320&crop=smart&auto=webp&s=2d998cb7bcdcdc82457d183c50e62a8f15ab9ad2', 'width': 320}, {'height': 717, 'url': 'https://external-preview.redd.it/lZBDKRayEy0Gazzl2qR_A1ok-nQSV0KDaZjKMKa-vsk.jpg?width=640&crop=smart&auto=webp&s=30ccb63248c8517c2db8a371612ceabdfcbf2c4c', 'width': 640}, {'height': 1076, 'url': 'https://external-preview.redd.it/lZBDKRayEy0Gazzl2qR_A1ok-nQSV0KDaZjKMKa-vsk.jpg?width=960&crop=smart&auto=webp&s=53263e19f2c2de3d9f42f7a5dba6d5c523656d38', 'width': 960}], 'source': {'height': 1098, 'url': 'https://external-preview.redd.it/lZBDKRayEy0Gazzl2qR_A1ok-nQSV0KDaZjKMKa-vsk.jpg?auto=webp&s=c97cd56adb5b8c5f5219eacf06fc4f32d801e2f6', 'width': 979}, 'variants': {}}]} |
||
Weird inference from finetuned model | 1 | Hi Llamas, I finetuned Llama2 7B on datasets of FAQs and QA pairs, totalling 4k samples.
These have a typical structure which includes contextual headers like "Question:" or "### Question:"
Now when I run inference, I get responses that also have new questions within them, using the same contextual headers.
Additionally, the model often also includes new contextual headers which aren't in my dataset, like "### additional information" etc.
How can I prevent this behaviour?
- remove the headers from the dataset?
- set the attention mask to zero on these headers?
- are there specific stop words that Llama uses that I need to specify?
It's been hard to google for this, so I appreciate your help.
Thanks! | 2023-08-02T06:44:02 | https://www.reddit.com/r/LocalLLaMA/comments/15g1943/weird_inference_from_finetuned_model/ | LegrangeHermit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15g1943 | false | null | t3_15g1943 | /r/LocalLLaMA/comments/15g1943/weird_inference_from_finetuned_model/ | false | false | self | 1 | null |
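On the stop-word question above: a common cause of this behaviour is that the training samples never end with the model's EOS token, so the model never learns where an answer stops and simply continues with a fresh "### Question:" block. A hedged sketch of both mitigations, appending EOS during data prep and cutting generation at the header during inference (header strings and the helper names are illustrative):

```python
from transformers import StoppingCriteria, StoppingCriteriaList

# 1) Data prep: make every sample end with the EOS token so the model
#    learns to terminate instead of inventing a new "### Question:" block.
def format_sample(question, answer, tokenizer):
    return f"### Question:\n{question}\n### Answer:\n{answer}{tokenizer.eos_token}"

# 2) Inference: stop as soon as the model starts a new header anyway.
class StopOnHeader(StoppingCriteria):
    def __init__(self, tokenizer, prompt_length, header="### Question:"):
        self.tokenizer = tokenizer
        self.prompt_length = prompt_length   # number of prompt tokens
        self.header = header

    def __call__(self, input_ids, scores, **kwargs) -> bool:
        generated = input_ids[0][self.prompt_length:]
        text = self.tokenizer.decode(generated, skip_special_tokens=True)
        return self.header in text

# usage sketch:
#   criteria = StoppingCriteriaList([StopOnHeader(tokenizer, inputs["input_ids"].shape[1])])
#   model.generate(**inputs, stopping_criteria=criteria)
```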
understanding the llm eco-system | 1 | Beginner in LLMs here. I have been trying to make sense of the lifecycle of a model. Here's what I have:
- People usually release their models via Hugging Face, similar to other ML models.
- These are often too big and need to be quantized to be able to run on CPUs or consumer GPUs. This is done by a library such as GGML (is ExLlama an alternative?).
- Then the LLMs can be used as part of larger pipelines, like retrieval-augmented generation, or in agents. Libraries like LangChain help with these pipelines.
- When making a frontend like a conventional chatbot, libraries like oobabooga help.
- For specific parts of particular pipelines like RAG, LlamaIndex and unstructured.io provide dedicated facilities (dealing with different types of documents etc.).
Can anyone correct me here? I would also invite people to add other big names I don't know of that are significant parts of the ecosystem. | 2023-08-02T06:54:15 | https://www.reddit.com/r/LocalLLaMA/comments/15g1flo/understanding_the_llm_ecosystem/ | olaconquistador | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15g1flo | false | null | t3_15g1flo | /r/LocalLLaMA/comments/15g1flo/understanding_the_llm_ecosystem/ | false | false | self | 1 | null |
Discord link is dead | 1 | Does anyone have an updated link? The one in the sidebar says it's invalid. | 2023-08-02T07:58:05 | https://www.reddit.com/r/LocalLLaMA/comments/15g2jfc/discrod_link_is_dead/ | Matti-Koopa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15g2jfc | false | null | t3_15g2jfc | /r/LocalLLaMA/comments/15g2jfc/discrod_link_is_dead/ | false | false | self | 1 | null |
Help to pick model / service please | 1 | Hi, so I just got into an argument with cgpt that the words babe and chick is disrespectful and refused to generate hashtags for social media for me. I am loosely following subreddits like this but its impossible for me to be up to date. Can you please recommend a language model that does not feel like speaking to a condescending nun in a disney cartoon, just executes tasks in a decent quality? Thanks
PS this is not targeted towards bashing on cgpt, their product strategy is their business and I don't really care, but I need tools that are useful for my specific use case. | 2023-08-02T08:13:41 | https://www.reddit.com/r/LocalLLaMA/comments/15g2t8c/help_to_pick_model_service_please/ | MyMiddleNameDanger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15g2t8c | false | null | t3_15g2t8c | /r/LocalLLaMA/comments/15g2t8c/help_to_pick_model_service_please/ | false | false | self | 1 | null |
Is there a way to make Local GPT say I don't know if the information is not in the index? | 1 | I'm trying to get localGPT (using Llama models) to only use the inputted info, but it keeps telling me things based on its own knowledge. Is there a way to make sure that the answers are based only on the documents available? | 2023-08-02T08:25:19 | https://www.reddit.com/r/LocalLLaMA/comments/15g30im/is_there_a_way_to_make_local_gpt_say_i_dont_know/ | Tricky_Witness_1717 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15g30im | false | null | t3_15g30im | /r/LocalLLaMA/comments/15g30im/is_there_a_way_to_make_local_gpt_say_i_dont_know/ | false | false | self | 1 | null |
When to add custom tokens to tokenizer? | 1 | I want to finetune a model with data about products, which have SKUs.
However, the SKUs are obviously out of vocabulary, and so I think they would easily be confused for one another, leading to poor inference of SKUs.
Is adding the SKUs as new tokens to a model's tokenizer a valid strategy around this problem?
I've seen that some models do have empty token spaces that could be adopted.
If not, when is a valid instance to add new tokens? | 2023-08-02T08:39:54 | https://www.reddit.com/r/LocalLLaMA/comments/15g39bo/when_to_add_custom_tokens_to_tokenizer/ | LegrangeHermit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15g39bo | false | null | t3_15g39bo | /r/LocalLLaMA/comments/15g39bo/when_to_add_custom_tokens_to_tokenizer/ | false | false | self | 1 | null |
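For what it's worth, adding domain tokens is a standard pattern in the Hugging Face stack; the key details are resizing the embedding matrix afterwards and remembering that the new embeddings start out untrained, so they only become meaningful through the fine-tune. A minimal sketch (the SKU strings are made up):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

new_skus = ["SKU-10482", "SKU-99317"]  # illustrative product codes
num_added = tokenizer.add_tokens(new_skus)

if num_added > 0:
    # Grow the input (and tied output) embeddings to cover the new token ids.
    model.resize_token_embeddings(len(tokenizer))
```

Whether it's worth doing depends on how many SKUs there are and how often each one appears in the training data; a rarely seen new token never gets a useful embedding.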
Can't load GPTQ Model | 1 |
```python
import os
import nltk
from langchain import PromptTemplate, LLMChain
from langchain.document_loaders import UnstructuredPDFLoader, PyPDFLoader, DirectoryLoader, TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.indexes import VectorstoreIndexCreator
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores.faiss import FAISS
from transformers import pipeline
from transformers import LlamaTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import faiss

# Define the paths
model_path = './models/wizardLM-7B.safetensors'  # Replace with your model path

# Create the embeddings and llm objects
embeddings = HuggingFaceEmbeddings(model_name='sentence-transformers/all-MiniLM-L6-v2')

# Load the local index
index = FAISS.load_local("my_faiss_index", embeddings)

# Initialize the question-answering model
qa_model = pipeline("question-answering", model="distilbert-base-cased-distilled-squad", tokenizer="distilbert-base-cased")

# Define the prompt template
template = """
Context: {context}
Question: {question}
Answer: {answer}
"""

# Define the similarity search function
def similarity_search(query, index, k=3):
    try:
        matched_docs = index.similarity_search(query, k=k)
        return matched_docs
    except Exception as e:
        print("An error occurred during similarity search: ", e)
        return []

# Split the documents into sentences
def split_into_sentences(document):
    return nltk.sent_tokenize(document)

# Select the best sentences based on the question
def select_best_sentences(question, sentences):
    results = []
    for sentence in sentences:
        answer = qa_model(question=question, context=sentence)
        if answer['score'] > 0.8:  # You can tune this threshold based on your requirements
            results.append(sentence)
    return results

quantize_config = BaseQuantizeConfig(**{"bits": 4, "damp_percent": 0.01, "desc_act": True, "group_size": 128})
llm = AutoGPTQForCausalLM.from_quantized(model_path, device="cuda:0", quantize_config=quantize_config, use_safetensors=True)
tokenizer = LlamaTokenizer.from_pretrained(model_path)

def answer_question(question):
    # Get the most similar documents
    matched_docs = similarity_search(question, index)
    # Convert the matched documents into a list of sentences
    sentences = []
    for doc in matched_docs:
        sentences.extend(split_into_sentences(doc.page_content))
    # Select the best sentences
    best_sentences = select_best_sentences(question, sentences)
    context = "\n".join([doc.page_content for doc in matched_docs])
    # Create the prompt template
    prompt_template = PromptTemplate(template=template, input_variables=["context", "question", "answer"])
    # Initialize the LLMChain
    llm_chain = LLMChain(prompt=prompt_template, llm=llm)
    # Generate the answer
    generated_text = llm_chain.run(context=context, question=question, answer='', max_tokens=512, temperature=0.0, top_p=0.05)
    # Extract only the answer from the generated text
    answer_start_index = generated_text.find("Answer: ") + len("Answer: ")
    answer = generated_text[answer_start_index:]
    return answer

# Main loop for continuous question-answering
while True:
    # Get the user's question
    question = input("Chatbot: ")
    # Check if the user wants to exit
    if question.lower() == "exit":
        break
    # Generate the answer
    answer = answer_question(question)
    # Print the answer
    print("Answer:", answer)
```
Simply want to try different models to answer questions about my docs. Need the best response time. So, I thought, I'll go for GPTQ. I have an NVIDIA card, running on Windows, auto-gptq is installed, but the above code gives:

File "C:\Users\Administrator\Documents\llama2-GPU\llama2\Lib\site-packages\transformers\configuration_utils.py", line 650, in _get_config_dict
raise EnvironmentError(
OSError: Can't load the configuration of './models/wizardLM-7B.safetensors'. If you were trying to load it from '[https://huggingface.co/models](https://huggingface.co/models)', make sure you don't have a local directory with the same name. Otherwise, make sure './models/wizardLM-7B.safetensors' is the correct path to a directory containing a config.json file , Now, I downloaded all the files in this repo and put them in the models folder . I took some hint from here [https://stackoverflow.com/questions/76293427/langchain-pipeline-vram-usage-when-loading-model](https://stackoverflow.com/questions/76293427/langchain-pipeline-vram-usage-when-loading-model) . Stuck for now. | 2023-08-02T09:26:57 | https://www.reddit.com/r/LocalLLaMA/comments/15g450v/cant_load_gptq_model/ | Assholefrmcoinexchan | self.LocalLLaMA | 2023-08-02T09:36:56 | 0 | {} | 15g450v | false | null | t3_15g450v | /r/LocalLLaMA/comments/15g450v/cant_load_gptq_model/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=108&crop=smart&auto=webp&s=284ee86cd9228390268ace75b44e497c1fec562f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=216&crop=smart&auto=webp&s=96628b1c155401ce2d04a853b6524fa0c95cd632', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=320&crop=smart&auto=webp&s=f5f435bb4d31f0f695560cb0fb6f456702452062', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=640&crop=smart&auto=webp&s=b8b6a03fcde27061acee8ab4cb6ecc598a7ac6b9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=960&crop=smart&auto=webp&s=bbda73bd4f11be7b71efb3892b4107414d815613', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=1080&crop=smart&auto=webp&s=0158100ff6f9041cc8dcb861b66a3db041df5095', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?auto=webp&s=daff0272548bd7ffe5bc2b1eff6cd5c752144ed4', 'width': 1200}, 'variants': {}}]} |
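The traceback above is the clue: `from_quantized` and `LlamaTokenizer.from_pretrained` expect the *directory* that holds config.json and the tokenizer files, not the path of the .safetensors file itself. A hedged sketch of the usual fix (parameter names are as in auto-gptq around v0.3, and the basename must match the checkpoint filename without its extension):

```python
from auto_gptq import AutoGPTQForCausalLM
from transformers import LlamaTokenizer

model_dir = "./models/wizardLM-7B-GPTQ"   # folder containing config.json, tokenizer.model, etc.
model_basename = "wizardLM-7B"            # matches wizardLM-7B.safetensors in that folder

tokenizer = LlamaTokenizer.from_pretrained(model_dir)
llm = AutoGPTQForCausalLM.from_quantized(
    model_dir,
    model_basename=model_basename,
    use_safetensors=True,
    device="cuda:0",
)
```

If the repo ships a quantize_config.json, the explicit BaseQuantizeConfig isn't needed. Note also that LangChain's LLMChain expects a LangChain LLM wrapper (for example a HuggingFacePipeline built around the loaded model), not the raw transformers/auto-gptq model object, so that part would likely need adjusting too.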
Just posting again to ask for help making oobabooga work with dual 4090s…am I shadowbanned here? | 1 | [removed] | 2023-08-02T10:19:10 | https://www.reddit.com/r/LocalLLaMA/comments/15g5294/just_posting_again_to_ask_for_help_making/ | Virtamancer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15g5294 | false | null | t3_15g5294 | /r/LocalLLaMA/comments/15g5294/just_posting_again_to_ask_for_help_making/ | false | false | self | 1 | null |
What I've learned from orca-mini-3b.ggmlv3.q4_1 using LLamaCPP (_python), so far. | 1 | Hello, it's Sol again. I apologize for using you people as external storage, but in return I hope you will find insights you might not get to read anywhere else, especially not in such a condensed form.
- It's *fast*.
- It's smart *enough* for its speed. Proper prompt presentation is critical, but worth it. 70+tps fully offloaded to my 3060 6gigs, 1 CPU thread, not even in full-power mode.
- Bigger models, as of today, appear to provide more convenience in regards to prompt design. Everything points at this being a result of the data fed into the LLM, not the size. Cleaner, more self-referential data seems to increase the usefulness of the response tokens.
- It appears that storing text is less useful than storing question/answer pairs about it.
- It has a limited ability to understand words, with some or all of their vowels missing.
- It has a limited capacity to repair text, as in: "corrupted transmission".
- Response tokens written into an extended context window, beyond the "natural" size of the model, gradually decrease in usefulness with every new token.
- Different context-window-sizes yield different responses.
- Spaces between quotation marks, or maybe even generally, appear to increase usefulness.
- It apparently supports doing multiple passes simply by requesting multiple passes ... which I find super interesting, actually.
- It happens to often finish despite having "more response" available, or before reaching its token/context limit. Refeeding it only the history, with its latest response added, *can* trigger it to continue the response.
- Refeeding history (including the latest response) revealed that responses *can* be incomplete despite not appearing to be, because it either adds more information or a final "I hope this was useful!"-style of message.
- It appears that there is a "sense" for when it's done responding regardless of "final" messages, though "final" messages *do* make it stop responding after re-feeding the history.
- It's always worth trying communicating without special prompts to see how it reacts to requests or information.
- Formulating requests in hidden prompts improves usefulness, like in "### Assistant responds in a single sentence:\n"
If any of this is ever useful to you, let me know!
Please don't hesitate to comment on/discuss any of this, even if it's just to insult me. \^_^ The more we all can help each other improving and refining, the more we benefit as a whole. I wish I could do all the things I want to try, but money and hardware are my limits. Time, though, I have plenty. | 2023-08-02T10:21:46 | https://www.reddit.com/r/LocalLLaMA/comments/15g5419/what_ive_learned_from_orcamini3bggmlv3q4_1_using/ | Solstice_Projekt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15g5419 | false | null | t3_15g5419 | /r/LocalLLaMA/comments/15g5419/what_ive_learned_from_orcamini3bggmlv3q4_1_using/ | false | false | self | 1 | null |
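For anyone who wants to reproduce the "hidden prompt" trick from the last bullet with llama-cpp-python, here is a minimal sketch; the model path, layer count, and stop strings are just examples:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/orca-mini-3b.ggmlv3.q4_1.bin",
    n_gpu_layers=32,   # small model, so it can be fully offloaded
    n_ctx=2048,
)

history = "### User:\nExplain what a context window is.\n"
hidden = "### Assistant responds in a single sentence:\n"

out = llm(history + hidden, max_tokens=128, stop=["### User:"], temperature=0.7)
print(out["choices"][0]["text"])
```

The hidden instruction is appended only at generation time and never shown to the user, which matches the behaviour described above.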
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs - WeChat AI, Tencent Inc. 2023 - Open-source! Comparble performance to ChatGPT while using tools! | 1 | Paper: [https://arxiv.org/abs/2307.16789](https://arxiv.org/abs/2307.16789)
Github: [https://github.com/OpenBMB/ToolBench](https://github.com/OpenBMB/ToolBench)
Abstract:
>Despite the advancements of open-source large language models (LLMs) and their variants, e.g., LLaMA and Vicuna, they remain significantly limited in performing higher-level tasks, such as following human instructions to use external tools (APIs). This is because current instruction tuning largely focuses on basic language tasks instead of the tool-use domain. This is in contrast to state-of-the-art (SOTA) LLMs, e.g., ChatGPT, which have demonstrated excellent tool-use capabilities but are unfortunately closed source. To facilitate **tool-use capabilities within open-source LLMs, we introduce ToolLLM**, a general tool-use framework of data construction, model training and evaluation. We first present **ToolBench, an instruction-tuning dataset for tool use**, which is created automatically using ChatGPT. Specifically, we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub, then prompt ChatGPT to generate diverse human instructions involving these APIs, covering both single-tool and multi-tool scenarios. Finally, we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To make the searching process more efficient, we develop a novel **depth-first search-based decision tree (DFSDT), enabling LLMs to evaluate multiple reasoning traces and expand the search space.** We show that **DFSDT significantly enhances the planning and reasoning** capabilities of LLMs. For efficient tool-use assessment, we develop an automatic evaluator: ToolEval. We fine-tune LLaMA on ToolBench and obtain ToolLLaMA. Our ToolEval reveals that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits **comparable performance to ChatGPT**. To make the pipeline more practical, we devise a neural API retriever to recommend appropriate APIs for each instruction, negating the need for manual API selection.
​ | 2023-08-02T11:03:09 | https://www.reddit.com/r/LocalLLaMA/comments/15g5w7f/toolllm_facilitating_large_language_models_to/ | Singularian2501 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15g5w7f | false | null | t3_15g5w7f | /r/LocalLLaMA/comments/15g5w7f/toolllm_facilitating_large_language_models_to/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
|
anyone have experience with autoGPT? | 1 | I learned about autoGPT and am wondering if there is any way to run such a stack locally on my own computer. | 2023-08-02T11:05:37 | https://www.reddit.com/r/LocalLLaMA/comments/15g5xzo/anyone_have_experience_with_autogpt/ | StarlordBob | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15g5xzo | false | null | t3_15g5xzo | /r/LocalLLaMA/comments/15g5xzo/anyone_have_experience_with_autogpt/ | false | false | self | 1 | null |
What is a good model for production - quantized vs non-quantized | 1 | Nowadays all models are available in both quantized (GPTQ, GGML, etc.) and non-quantized formats. If I have to deploy a model for production that focuses on concurrent users, should I use quantized or non-quantized? | 2023-08-02T11:27:39 | https://www.reddit.com/r/LocalLLaMA/comments/15g6ds4/what_is_good_model_for_production_quantized_vs/ | pandeypunit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15g6ds4 | false | null | t3_15g6ds4 | /r/LocalLLaMA/comments/15g6ds4/what_is_good_model_for_production_quantized_vs/ | false | false | self | 1 | null |
Containerize Falcon 7b and deploy to a server | 1 | Hello,
I'm currently trying to containerize a Falcon 7B model with Docker and deploy the image to a remote server. Furthermore, I want to expose the model through an API so I can access it from another program. My problem is that I'm not sure where to start, since I've researched for over a week and haven't found any helpful tutorials. I'm not sure if this problem is just too easy to have any written guides or videos, or if I'm searching with the wrong keywords. Any help which can point me in the right direction would be really appreciated.
I have already worked with Docker and got a Jupyter notebook container running on the server. In it, I could launch the model and interact with it, but just launching the model in a standalone container doesn't seem to work for me. | 2023-08-02T12:20:44 | https://www.reddit.com/r/LocalLLaMA/comments/15g7h8m/containerize_falcon_7b_and_deploy_to_a_server/ | aldur15 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15g7h8m | false | null | t3_15g7h8m | /r/LocalLLaMA/comments/15g7h8m/containerize_falcon_7b_and_deploy_to_a_server/ | false | false | self | 1 | null |
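A common pattern here is to write a small HTTP wrapper around the model, bake it into the image, and make that server the container's long-running process (otherwise the container exits as soon as the script finishes). A hedged Python sketch of the server side only; the model id, endpoint shape, and port are illustrative, and in the Dockerfile you would install the dependencies and set the CMD to launch uvicorn:

```python
import torch
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
# trust_remote_code is needed for Falcon's custom modelling code;
# device_map="auto" assumes accelerate is installed.
generator = pipeline(
    "text-generation",
    model="tiiuae/falcon-7b-instruct",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

class Prompt(BaseModel):
    text: str
    max_new_tokens: int = 200

@app.post("/generate")
def generate(prompt: Prompt):
    out = generator(prompt.text, max_new_tokens=prompt.max_new_tokens)
    return {"completion": out[0]["generated_text"]}

# Inside the container: uvicorn app:app --host 0.0.0.0 --port 8000
```

The other program then just POSTs JSON to the container's exposed port, which is the "connect it with an API" part of the question.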
All-in-One AI Wizard: Screen Perception and So Much More | 1 | 2023-08-02T12:28:53 | https://v.redd.it/dx44rjbqxofb1 | Professional_Ice_5 | /r/LocalLLaMA/comments/15g7n9u/allinone_ai_wizard_screen_perception_and_so_much/ | 1970-01-01T00:00:00 | 0 | {} | 15g7n9u | false | {'reddit_video': {'bitrate_kbps': 0, 'dash_url': 'https://v.redd.it/dx44rjbqxofb1/DASHPlaylist.mpd?a=1693659255%2COTY1Njg3MjBjMDBjODUxZDU1YWIwYWZkN2M0NGUzOTJjYjllODFjYjljN2ZjNWJhZGQ4ZjdjYmEyOTVkZWU0MA%3D%3D&v=1&f=sd', 'duration': 51, 'fallback_url': 'https://v.redd.it/dx44rjbqxofb1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/dx44rjbqxofb1/HLSPlaylist.m3u8?a=1693659255%2CZWI4YjI2ZWI1MjZiOWRhYzVlNDk5YjA2MzYxMjA0YTc5OGEzOGZjZGQ5Y2NjZGU5YjIxZDBlNDM4NWJhMzMyZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/dx44rjbqxofb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1734}} | t3_15g7n9u | /r/LocalLLaMA/comments/15g7n9u/allinone_ai_wizard_screen_perception_and_so_much/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'bWFnYjh1N3F4b2ZiMfF1ACbFYbZjv07BZwhF7iBEwqEHicTIzqv2qMGfTAly', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/bWFnYjh1N3F4b2ZiMfF1ACbFYbZjv07BZwhF7iBEwqEHicTIzqv2qMGfTAly.png?width=108&crop=smart&format=pjpg&auto=webp&s=0972970ac3b22881e8e84471d67b6d4cda0d927b', 'width': 108}, {'height': 134, 'url': 'https://external-preview.redd.it/bWFnYjh1N3F4b2ZiMfF1ACbFYbZjv07BZwhF7iBEwqEHicTIzqv2qMGfTAly.png?width=216&crop=smart&format=pjpg&auto=webp&s=3194c1347c8facb3fac96ede813b6398b1925a9f', 'width': 216}, {'height': 199, 'url': 'https://external-preview.redd.it/bWFnYjh1N3F4b2ZiMfF1ACbFYbZjv07BZwhF7iBEwqEHicTIzqv2qMGfTAly.png?width=320&crop=smart&format=pjpg&auto=webp&s=561966bccc27e2e0fc7e3ba774d9201fd3cabe3a', 'width': 320}, {'height': 398, 'url': 'https://external-preview.redd.it/bWFnYjh1N3F4b2ZiMfF1ACbFYbZjv07BZwhF7iBEwqEHicTIzqv2qMGfTAly.png?width=640&crop=smart&format=pjpg&auto=webp&s=7df504316b80357dd925c8a12381a476d58b9ded', 'width': 640}, {'height': 598, 'url': 'https://external-preview.redd.it/bWFnYjh1N3F4b2ZiMfF1ACbFYbZjv07BZwhF7iBEwqEHicTIzqv2qMGfTAly.png?width=960&crop=smart&format=pjpg&auto=webp&s=9f3648e763b065245a0436e900cbbd3474db3481', 'width': 960}, {'height': 673, 'url': 'https://external-preview.redd.it/bWFnYjh1N3F4b2ZiMfF1ACbFYbZjv07BZwhF7iBEwqEHicTIzqv2qMGfTAly.png?width=1080&crop=smart&format=pjpg&auto=webp&s=103b3d20232e066723b31ef5fc60b9d0c5e1c652', 'width': 1080}], 'source': {'height': 673, 'url': 'https://external-preview.redd.it/bWFnYjh1N3F4b2ZiMfF1ACbFYbZjv07BZwhF7iBEwqEHicTIzqv2qMGfTAly.png?format=pjpg&auto=webp&s=c20b95fce259e3e22aee350316d924f1bfd9a8cf', 'width': 1080}, 'variants': {}}]} |
||
Beta Testers: Vector Embedding as a Service | 1 | [removed] | 2023-08-02T13:08:12 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 15g8i7t | false | null | t3_15g8i7t | /r/LocalLLaMA/comments/15g8i7t/beta_testers_vector_embedding_as_a_service/ | false | false | default | 1 | null |
||
Library recommendations for NLP? | 1 | After a few months of steady learning it seems clear to me that generative APIs are not by themselves sufficient for complicated systems (or maybe I should say there are things that plain old ML do better/more efficiently). I've done some work with NLTK and after some research it seems like spaCy might be the one for me, but I thought I'd ask here:
If you were a noob getting started today, which NLP library would you invest your time with first? I can sling a variety of programming languages including python, C#, and JavaScript. | 2023-08-02T14:30:29 | https://www.reddit.com/r/LocalLLaMA/comments/15gafgj/library_recommendations_for_nlp/ | awitod | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15gafgj | false | null | t3_15gafgj | /r/LocalLLaMA/comments/15gafgj/library_recommendations_for_nlp/ | false | false | self | 1 | null |
Does anyone have experience with llama.cpp and the GPD Win Max 2 (especially the 2023 version, but the prior version will probably give some insight)? | 1 | If I understand correctly, it's not unified memory like on Apple Silicon, but you can get up to 64GB RAM/VRAM, with bandwidth up to 240GB/s.
1/ If bandwidth is the bottleneck, then performance would be a bit superior to an M1 or M2 Pro but still way under an M1/M2 Max?
2/ Does llama.cpp support AMD's iGPUs?
It wouldn't be for 70B models (even though I would definitely try at least once), but mostly smaller ones in parallel (one for coding, a couple or more general-purpose models, ask questions to all of them and pick and choose, for example).
But it wouldn't be just for LLMs: I also draw and dabble in Blender (there's Surface Pen support), and I like the ability to play on the go, the über portability, the official Linux support, and the relatively more accessible price (compared to a MacBook Pro, for instance, which I'm also considering), among other things. | 2023-08-02T14:31:53 | https://www.reddit.com/r/LocalLLaMA/comments/15gagu2/does_anyone_have_experience_with_llamacpp_and_the/ | bobby-chan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15gagu2 | false | null | t3_15gagu2 | /r/LocalLLaMA/comments/15gagu2/does_anyone_have_experience_with_llamacpp_and_the/ | false | false | self | 1 | null |
Running LLMs locally on Android | 1 | [removed] | 2023-08-02T15:33:13 | https://www.reddit.com/r/LocalLLaMA/comments/15gc1d3/running_llms_locally_on_android/ | atezan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15gc1d3 | false | null | t3_15gc1d3 | /r/LocalLLaMA/comments/15gc1d3/running_llms_locally_on_android/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'D7uTH5s4LDVjda6kEL6oSgL5gomOBRMEcuuJOPfKvF4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/GkIpmzG_O5Tnr8KKPORXQ7LgZSCuDcFWzJfY1153Vo8.jpg?width=108&crop=smart&auto=webp&s=51be021f144a7b76cf0827775a02f301859b9000', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/GkIpmzG_O5Tnr8KKPORXQ7LgZSCuDcFWzJfY1153Vo8.jpg?width=216&crop=smart&auto=webp&s=92169fcdd3c39c0dd72458d6e32f0d5be5fdd91d', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/GkIpmzG_O5Tnr8KKPORXQ7LgZSCuDcFWzJfY1153Vo8.jpg?width=320&crop=smart&auto=webp&s=77526e71a23f5b5c402f0fe4b7e1c1b7201725ba', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/GkIpmzG_O5Tnr8KKPORXQ7LgZSCuDcFWzJfY1153Vo8.jpg?width=640&crop=smart&auto=webp&s=39465a4f24c4efa9ab6599882cd6c9edebb9e346', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/GkIpmzG_O5Tnr8KKPORXQ7LgZSCuDcFWzJfY1153Vo8.jpg?width=960&crop=smart&auto=webp&s=df67842b7635d3a066292560590e07166bfef21f', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/GkIpmzG_O5Tnr8KKPORXQ7LgZSCuDcFWzJfY1153Vo8.jpg?width=1080&crop=smart&auto=webp&s=b2e76dd4b5a08eaecca3647475a4683e5e69e00e', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/GkIpmzG_O5Tnr8KKPORXQ7LgZSCuDcFWzJfY1153Vo8.jpg?auto=webp&s=a508b8d15236d9440f5744bf9f71b342d3e7ccd1', 'width': 1200}, 'variants': {}}]} |
Running LLMs locally on Android | 1 | Hi folks,
I work on the Android team at Google, as a Developer Relations engineer and have been following all the amazing discussions on this space for a while.
I was curious if any of you folks have tried running text or image models on Android (LLama, Stable Diffusion or others) locally. If so, what kind of challenges have you run into?
Feel free to post your answers below or DM me if you want to give feedback privately.
Also, in addition to that, if any of you folks would like to talk to Android’s ML engineering team to share your experience in detail or get advice, you can fill out this [form](https://forms.gle/RUNRCXZuXjXA9dqr6) and we’ll reach out to you.
Thanks! | 2023-08-02T15:33:13 | https://www.reddit.com/r/LocalLLaMA/comments/15gc1d6/running_llms_locally_on_android/ | atezan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15gc1d6 | false | null | t3_15gc1d6 | /r/LocalLLaMA/comments/15gc1d6/running_llms_locally_on_android/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'D7uTH5s4LDVjda6kEL6oSgL5gomOBRMEcuuJOPfKvF4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/GkIpmzG_O5Tnr8KKPORXQ7LgZSCuDcFWzJfY1153Vo8.jpg?width=108&crop=smart&auto=webp&s=51be021f144a7b76cf0827775a02f301859b9000', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/GkIpmzG_O5Tnr8KKPORXQ7LgZSCuDcFWzJfY1153Vo8.jpg?width=216&crop=smart&auto=webp&s=92169fcdd3c39c0dd72458d6e32f0d5be5fdd91d', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/GkIpmzG_O5Tnr8KKPORXQ7LgZSCuDcFWzJfY1153Vo8.jpg?width=320&crop=smart&auto=webp&s=77526e71a23f5b5c402f0fe4b7e1c1b7201725ba', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/GkIpmzG_O5Tnr8KKPORXQ7LgZSCuDcFWzJfY1153Vo8.jpg?width=640&crop=smart&auto=webp&s=39465a4f24c4efa9ab6599882cd6c9edebb9e346', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/GkIpmzG_O5Tnr8KKPORXQ7LgZSCuDcFWzJfY1153Vo8.jpg?width=960&crop=smart&auto=webp&s=df67842b7635d3a066292560590e07166bfef21f', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/GkIpmzG_O5Tnr8KKPORXQ7LgZSCuDcFWzJfY1153Vo8.jpg?width=1080&crop=smart&auto=webp&s=b2e76dd4b5a08eaecca3647475a4683e5e69e00e', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/GkIpmzG_O5Tnr8KKPORXQ7LgZSCuDcFWzJfY1153Vo8.jpg?auto=webp&s=a508b8d15236d9440f5744bf9f71b342d3e7ccd1', 'width': 1200}, 'variants': {}}]} |
End-to-End Encrypted Local LLMs | 1 | Let's share thoughts and ideas about this. Do you guys think this is possible?
For example, providing or embedding the LLM with a private GPG key, which it then uses to encrypt all of its messages/output to the user, and vice versa.
Only the client would be able to decrypt and encrypt the output and input. It's just a **very rough idea**, but I think something like this, if possible at all, would be useful for those hosting Local LLMs in the cloud.
Especially for companies and individuals that use it for sensitive data.
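To make the transport part concrete, here is a very rough sketch of what I mean, assuming python-gnupg and an already-exchanged key pair (the keyring path and fingerprint are placeholders). Note that the model itself still has to see the plaintext to generate a reply, so this only protects the messages in transit and at rest, not from the host running the model:

```python
# Very rough sketch: PGP-wrap the input/output of a cloud-hosted local LLM.
# Assumes python-gnupg is installed and both sides already exchanged public keys.
import gnupg

gpg = gnupg.GPG(gnupghome="/srv/llm/.gnupg")     # placeholder keyring location
CLIENT_FINGERPRINT = "CLIENT_KEY_FINGERPRINT"    # placeholder: the user's public key

def decrypt_prompt(armored_prompt: str, server_passphrase: str) -> str:
    """Server side: decrypt a prompt the client encrypted to the server's key."""
    result = gpg.decrypt(armored_prompt, passphrase=server_passphrase)
    if not result.ok:
        raise RuntimeError(f"decryption failed: {result.status}")
    return str(result)

def encrypt_reply(reply_text: str) -> str:
    """Server side: encrypt the model's output so only the client can read it."""
    result = gpg.encrypt(reply_text, CLIENT_FINGERPRINT)
    if not result.ok:
        raise RuntimeError(f"encryption failed: {result.status}")
    return str(result)  # ASCII-armored ciphertext sent back over the wire
```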
This could potentially be a great way to run Local LLMs privately without having to invest in a lot of hardware. | 2023-08-02T16:03:28 | https://www.reddit.com/r/LocalLLaMA/comments/15gcu2q/endtoend_encrypted_local_llms/ | MoneroBee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15gcu2q | false | null | t3_15gcu2q | /r/LocalLLaMA/comments/15gcu2q/endtoend_encrypted_local_llms/ | false | false | self | 1 | null |
Looks like someone did the needful and it's a small download. 70b trained on proxy logs. | 1 | 2023-08-02T16:23:23 | https://huggingface.co/v2ray/LLaMA-2-Jannie-70B-QLoRA | a_beautiful_rhind | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 15gddbg | false | null | t3_15gddbg | /r/LocalLLaMA/comments/15gddbg/looks_like_someone_did_the_needful_and_its_a/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'CtUfBJ3zhBq8pJEVhnSqivfjjU8KRUDVb_KXzSpgjPo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/fD7NPj1pk9Wo27X9ZgOAMTzdSV5Fdzuqop-mmkSMR98.jpg?width=108&crop=smart&auto=webp&s=977aa95cd74ee012c06576b2dd5d441b1314090e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/fD7NPj1pk9Wo27X9ZgOAMTzdSV5Fdzuqop-mmkSMR98.jpg?width=216&crop=smart&auto=webp&s=7e1248e2975f8de4799d40094967dd8a4087b648', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/fD7NPj1pk9Wo27X9ZgOAMTzdSV5Fdzuqop-mmkSMR98.jpg?width=320&crop=smart&auto=webp&s=23db2ceb947e2ae45d53f088fd215881cd5fb537', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/fD7NPj1pk9Wo27X9ZgOAMTzdSV5Fdzuqop-mmkSMR98.jpg?width=640&crop=smart&auto=webp&s=a6b1b0f1083491b276562330bc27ce50710e066c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/fD7NPj1pk9Wo27X9ZgOAMTzdSV5Fdzuqop-mmkSMR98.jpg?width=960&crop=smart&auto=webp&s=e3243b2403d835d1dfa31d653ead3dd66000bf14', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/fD7NPj1pk9Wo27X9ZgOAMTzdSV5Fdzuqop-mmkSMR98.jpg?width=1080&crop=smart&auto=webp&s=fafd5c760ac940ef0e1e3374c45bdbe48a62fc51', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/fD7NPj1pk9Wo27X9ZgOAMTzdSV5Fdzuqop-mmkSMR98.jpg?auto=webp&s=2772215f19fada4fa89a6e95517374af19d67bbe', 'width': 1200}, 'variants': {}}]} |
||
Error when load models | 1 | I'm having trouble loading models into the WebUI. I followed the guides on the web, but after starting the WebUI, two different errors pop up depending on the model.
**TheBloke/vicuna-13B-1.1-GPTQ-4bit-128g**
"2023-08-02 18:31:17 ERROR:Failed to load the model.
Traceback (most recent call last):
File "F:\\oobabooga\_windows\\text-generation-webui\\[server.py](https://server.py)", line 68, in load\_model\_wrapper
shared.model, shared.tokenizer = load\_model(shared.model\_name, loader)
File "F:\\oobabooga\_windows\\text-generation-webui\\modules\\[models.py](https://models.py)", line 78, in load\_model
output = load\_func\_map\[loader\](model\_name)
File "F:\\oobabooga\_windows\\text-generation-webui\\modules\\[models.py](https://models.py)", line 287, in AutoGPTQ\_loader
return modules.AutoGPTQ\_loader.load\_quantized(model\_name)
File "F:\\oobabooga\_windows\\text-generation-webui\\modules\\AutoGPTQ\_loader.py", line 53, in load\_quantized
model = AutoGPTQForCausalLM.from\_quantized(path\_to\_model, \*\*params)
File "F:\\oobabooga\_windows\\installer\_files\\env\\lib\\site-packages\\auto\_gptq\\modeling\\[auto.py](https://auto.py)", line 94, in from\_quantized
return quant\_func(
File "F:\\oobabooga\_windows\\installer\_files\\env\\lib\\site-packages\\auto\_gptq\\modeling\\\_base.py", line 793, in from\_quantized
accelerate.utils.modeling.load\_checkpoint\_in\_model(
File "F:\\oobabooga\_windows\\installer\_files\\env\\lib\\site-packages\\accelerate\\utils\\[modeling.py](https://modeling.py)", line 1336, in load\_checkpoint\_in\_model
set\_module\_tensor\_to\_device(
File "F:\\oobabooga\_windows\\installer\_files\\env\\lib\\site-packages\\accelerate\\utils\\[modeling.py](https://modeling.py)", line 298, in set\_module\_tensor\_to\_device
new\_value = [value.to](https://value.to)(device)
RuntimeError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA\_LAUNCH\_BLOCKING=1.
Compile with \`TORCH\_USE\_CUDA\_DSA\` to enable device-side assertions."
**TheBloke_chronos-hermes-13B-GPTQ\chronos-hermes-13b-GPTQ-4bit-128g**
"WARNING:The safetensors archive passed at models\TheBloke_chronos-hermes-13B-GPTQ\chronos-hermes-13b-GPTQ-4bit-128g.no-act.order.safetensors does not contain metadata. Make sure to save your model with the save_pretrained method. Defaulting to 'pt' metadata."
I use an RTX 4070 (12 GB VRAM), 16 GB RAM, and an Intel i5-12400F.
What am I doing wrong? How can I fix it?
​
​ | 2023-08-02T16:41:58 | https://www.reddit.com/r/LocalLLaMA/comments/15gduzj/error_when_load_models/ | LonleyPaladin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15gduzj | false | null | t3_15gduzj | /r/LocalLLaMA/comments/15gduzj/error_when_load_models/ | false | false | self | 1 | null |
Are there any subscription services or companies that offer remote use of high-end workstations? I'd like to try out some of these larger models without spending tens of thousands. | 1 | That's about it for my question. The more user friendly the better. | 2023-08-02T17:02:12 | https://www.reddit.com/r/LocalLLaMA/comments/15gedt1/are_there_any_subscription_services_or_companies/ | nuupdog | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15gedt1 | false | null | t3_15gedt1 | /r/LocalLLaMA/comments/15gedt1/are_there_any_subscription_services_or_companies/ | false | false | self | 1 | null |
Is this safe? | 1 | [removed] | 2023-08-02T17:11:07 | https://www.reddit.com/r/LocalLLaMA/comments/15gemfv/is_this_safe/ | Ok-Common6667 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15gemfv | false | null | t3_15gemfv | /r/LocalLLaMA/comments/15gemfv/is_this_safe/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '3JkgEOxvO7vLgHZFe9MzCqBl3XMYYDh4202xXuBxFFI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wOblKPGEWTaz7a_hmnpdOnsIasLYj1cLKBEvlgwCxRU.jpg?width=108&crop=smart&auto=webp&s=8ed21a2bb6cba0d2fd092ded1e4c483e01937ae8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/wOblKPGEWTaz7a_hmnpdOnsIasLYj1cLKBEvlgwCxRU.jpg?width=216&crop=smart&auto=webp&s=5b1e6a483872661712a58ecea7c765e4f45504ad', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/wOblKPGEWTaz7a_hmnpdOnsIasLYj1cLKBEvlgwCxRU.jpg?width=320&crop=smart&auto=webp&s=55da3efdeb3268700f44cff1f030151422395e92', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/wOblKPGEWTaz7a_hmnpdOnsIasLYj1cLKBEvlgwCxRU.jpg?width=640&crop=smart&auto=webp&s=ccc774a89b87ea46111146e323e282f08653c559', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/wOblKPGEWTaz7a_hmnpdOnsIasLYj1cLKBEvlgwCxRU.jpg?width=960&crop=smart&auto=webp&s=8a0225aada2357e1926f3f1820a66d70d34a0827', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/wOblKPGEWTaz7a_hmnpdOnsIasLYj1cLKBEvlgwCxRU.jpg?width=1080&crop=smart&auto=webp&s=06de0f5d3312272ad229216eaf13927d2ba1f125', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/wOblKPGEWTaz7a_hmnpdOnsIasLYj1cLKBEvlgwCxRU.jpg?auto=webp&s=f55d47d264592ecd9c277c6d08d646fa7d720d03', 'width': 1200}, 'variants': {}}]} |
Distributing processing load to home server | 1 | Are there any engines that do local distributed processing well? My workstation is a decent combo of RAM and GPU but the home server has 100s more GBs of RAM than my workstation and I'm looking for a way to harness that power as well with some sort of distributed load balancing. | 2023-08-02T17:12:21 | https://www.reddit.com/r/LocalLLaMA/comments/15genm8/distributing_processing_load_to_home_server/ | Renek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15genm8 | false | null | t3_15genm8 | /r/LocalLLaMA/comments/15genm8/distributing_processing_load_to_home_server/ | false | false | self | 1 | null |
Is this link safe? | 1 | Is this link safe? https://huggingface.co/spaces/huggingface-projects/llama-2-13b-chat | 2023-08-02T17:14:52 | https://www.reddit.com/r/LocalLLaMA/comments/15gepz1/is_this_link_safe/ | Cultural-Cod-3595 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15gepz1 | false | null | t3_15gepz1 | /r/LocalLLaMA/comments/15gepz1/is_this_link_safe/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '3JkgEOxvO7vLgHZFe9MzCqBl3XMYYDh4202xXuBxFFI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wOblKPGEWTaz7a_hmnpdOnsIasLYj1cLKBEvlgwCxRU.jpg?width=108&crop=smart&auto=webp&s=8ed21a2bb6cba0d2fd092ded1e4c483e01937ae8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/wOblKPGEWTaz7a_hmnpdOnsIasLYj1cLKBEvlgwCxRU.jpg?width=216&crop=smart&auto=webp&s=5b1e6a483872661712a58ecea7c765e4f45504ad', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/wOblKPGEWTaz7a_hmnpdOnsIasLYj1cLKBEvlgwCxRU.jpg?width=320&crop=smart&auto=webp&s=55da3efdeb3268700f44cff1f030151422395e92', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/wOblKPGEWTaz7a_hmnpdOnsIasLYj1cLKBEvlgwCxRU.jpg?width=640&crop=smart&auto=webp&s=ccc774a89b87ea46111146e323e282f08653c559', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/wOblKPGEWTaz7a_hmnpdOnsIasLYj1cLKBEvlgwCxRU.jpg?width=960&crop=smart&auto=webp&s=8a0225aada2357e1926f3f1820a66d70d34a0827', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/wOblKPGEWTaz7a_hmnpdOnsIasLYj1cLKBEvlgwCxRU.jpg?width=1080&crop=smart&auto=webp&s=06de0f5d3312272ad229216eaf13927d2ba1f125', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/wOblKPGEWTaz7a_hmnpdOnsIasLYj1cLKBEvlgwCxRU.jpg?auto=webp&s=f55d47d264592ecd9c277c6d08d646fa7d720d03', 'width': 1200}, 'variants': {}}]} |
Llama 2 uncensored version | 1 | is there any uncensored Llama 2 version that we use. I tried going through hugging face but was unable to find one | 2023-08-02T18:10:35 | https://www.reddit.com/r/LocalLLaMA/comments/15gg7n9/llama_2_uncensored_version/ | _Sneaky_Bastard_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15gg7n9 | false | null | t3_15gg7n9 | /r/LocalLLaMA/comments/15gg7n9/llama_2_uncensored_version/ | false | false | self | 1 | null |
Cost of Training Llama 2 by Meta | 1 | This is basically a simple set of questions:
- How much did it cost to train this model?
- Was it trained completely from zero (from scratch)?
- How many GPUs were used to do this?
- Does it make sense to calculate AWS training costs using A100s based on the times in the paper?
If anyone knows this, please help.
Is someone experiencing RAM issues in Google Colab? | 1 | [removed] | 2023-08-02T18:20:08 | https://www.reddit.com/r/LocalLLaMA/comments/15gggqb/is_someone_experiencing_ram_issues_in_google_colab/ | danielbrdz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15gggqb | false | null | t3_15gggqb | /r/LocalLLaMA/comments/15gggqb/is_someone_experiencing_ram_issues_in_google_colab/ | false | false | 1 | null |
|
My fine tuned model perform worse than the original | 1 | Hello
I'm facing a problem when fine-tuning LLMs: they always tend to perform worse than the original model, even when tested on a task that is in the fine-tuning dataset. Here are my last fine-tuning details:
Model: daryl149/llama-2-13b-chat-hf
Method: QLoRA, after which I merged the model.
GPU: A100 on runpod.
Parameters:
lora_r = 128, lora_alpha = 16, lora_dropout = 0.1, epochs = 15, learning_rate = 2e-4, weight_decay = 0.001, lr_scheduler_type = "constant".
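In code, that setup looks roughly like this (a peft/transformers sketch; the output path and batch size are placeholders I did not list above, everything else matches the numbers):

```python
# Sketch of the LoRA / training setup described above.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=128,                                 # lora_r
    lora_alpha=16,                         # adapter scaling factor = alpha / r = 0.125
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],   # common default for LLaMA-style models
    bias="none",
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="./llama2-13b-qlora",       # placeholder
    num_train_epochs=15,
    learning_rate=2e-4,
    weight_decay=0.001,
    lr_scheduler_type="constant",
    per_device_train_batch_size=4,         # placeholder, not stated above
    fp16=True,
)
```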
​
Dataset: a local dataset composed of 900 tasks, each about 1,000 tokens.
​
For my use case, I found the original model's performance is not too bad, but I aimed to improve it with fine-tuning.
​
Any idea what I'm doing wrong?
​
Thank you in advance!
​ | 2023-08-02T18:34:18 | https://www.reddit.com/r/LocalLLaMA/comments/15gguek/my_fine_tuned_model_perform_worse_than_the/ | Alternative-Habit894 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15gguek | false | null | t3_15gguek | /r/LocalLLaMA/comments/15gguek/my_fine_tuned_model_perform_worse_than_the/ | false | false | self | 1 | null |
Tutorial: Running Llama AI on a low RAM, i5 CPU Windows machine (via WSL) & Getting Started Bulk Text Processing | 1 | [removed] | 2023-08-02T19:13:14 | https://www.reddit.com/r/LocalLLaMA/comments/15ghvcu/tutorial_running_llama_ai_on_a_low_ram_i5_cpu/ | jack-lambourne | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ghvcu | false | null | t3_15ghvcu | /r/LocalLLaMA/comments/15ghvcu/tutorial_running_llama_ai_on_a_low_ram_i5_cpu/ | false | false | self | 1 | null |
Is it possible to run petals on a local network? | 1 | Say I have several machines with smaller GPUs. Is it possible to run Petals locally, so that I essentially get my own local distributed petals environment? Or does it have to be run in the big cloud with everyone's machine? | 2023-08-02T19:37:00 | https://www.reddit.com/r/LocalLLaMA/comments/15gihre/is_it_possible_to_run_petals_on_a_local_network/ | crono760 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15gihre | false | null | t3_15gihre | /r/LocalLLaMA/comments/15gihre/is_it_possible_to_run_petals_on_a_local_network/ | false | false | self | 1 | null |
Scam watch - post LocalLlama and chatgpt scams here | 1 | Scam artists seem eager to take advantage of LLM communities right now. Don’t know why, but don’t give your information to anybody, especially not your OpenAI API keys.
Some scams below:
u/arcticfly and OpenPipe AI - they posted a fake $15k contest the other day and deleted the thread after being questioned.
u/atezan, an “Android developer relations team member”, posted a survey but didn’t offer any information about himself: https://www.reddit.com/r/LocalLLaMA/comments/15gc1d6/running_llms_locally_on_android/
Post any other scams and I’ll update the thread with them | 2023-08-02T19:37:32 | https://www.reddit.com/r/LocalLLaMA/comments/15gii9s/scam_watch_post_localllama_and_chatgpt_scams_here/ | Basic_Description_56 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15gii9s | false | null | t3_15gii9s | /r/LocalLLaMA/comments/15gii9s/scam_watch_post_localllama_and_chatgpt_scams_here/ | false | false | self | 1 | null |
Testing / Benchmarking before production | 1 | Hello, what do you use to test your LLMs / benchmark them before putting them into production? | 2023-08-02T20:23:10 | https://www.reddit.com/r/LocalLLaMA/comments/15gjp8b/testing_benchmarking_before_production/ | MuffinB0y | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15gjp8b | false | null | t3_15gjp8b | /r/LocalLLaMA/comments/15gjp8b/testing_benchmarking_before_production/ | false | false | self | 1 | null |
What's the best model if I want a chatbot for this pc? | 1 | * Using Ooga Booga
* Uncensored Model
* RTX 3060
* Ryzen 5 5600X
* 16gb ram.
​
Thanks for the help. I currently use TheBloke_orca_mini_v2_7B-GPTQ. It's good but doesn't seem to have much variety in what it says.
​
If there's any suggestion for an upgrade, will RAM be better, GPU, or CPU? | 2023-08-02T21:46:02 | https://www.reddit.com/r/LocalLLaMA/comments/15gkymu/whats_the_best_model_if_i_want_a_chatbot_for_this/ | Coteboy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15gkymu | false | null | t3_15gkymu | /r/LocalLLaMA/comments/15gkymu/whats_the_best_model_if_i_want_a_chatbot_for_this/ | false | false | self | 1 | null |
Best gpu for API based inference? | 2 | Let’s say I wanted to serve a 4 bit 13B model as an API and alternative to openAI. You could call the api with a prompt, and get a completion from a gpu server. Which gpu or gpu cluster would allow for the most tokens per second at the cheapest price? | 2023-08-02T21:58:54 | https://www.reddit.com/r/LocalLLaMA/comments/15glb50/best_gpu_for_api_based_inference/ | ZealousidealBlock330 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15glb50 | false | null | t3_15glb50 | /r/LocalLLaMA/comments/15glb50/best_gpu_for_api_based_inference/ | false | false | self | 2 | null |
What's the all out smartest model I can run local with over 100 GB of RAM and a 3090? | 1 | There's so many different options, I just want the best one for doing analysis and story telling.
Can someone recommend one and also explain why it's the best? | 2023-08-02T23:21:57 | https://www.reddit.com/r/LocalLLaMA/comments/15gne6v/whats_the_all_out_smartest_model_i_can_run_local/ | countrycruiser | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15gne6v | false | null | t3_15gne6v | /r/LocalLLaMA/comments/15gne6v/whats_the_all_out_smartest_model_i_can_run_local/ | false | false | self | 1 | null |
NewHope creators say benchmark results where leaked into the dataset, which explains the HumanEval score. This model should not be used. | 1 | [https://github.com/SLAM-group/newhope](https://github.com/SLAM-group/newhope)
I kind of expected this, but I was hoping for a crazy breakthrough. Anyways, WizardCoder 15b is still king of coding. | 2023-08-02T23:33:06 | https://www.reddit.com/r/LocalLLaMA/comments/15gnnrf/newhope_creators_say_benchmark_results_where/ | pokeuser61 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15gnnrf | false | null | t3_15gnnrf | /r/LocalLLaMA/comments/15gnnrf/newhope_creators_say_benchmark_results_where/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'UeH8OsMWO_TrbTD0Lh7q_Y3-mj8HxUfQgZUQMzeSey8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aTfDk2Ybj6zoPoJHfzJ-wN6ovuEK79-BUk6z8Rofd1w.jpg?width=108&crop=smart&auto=webp&s=4658c36f0f1f0ecf2cb06a014a2c3f247cc3b51a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/aTfDk2Ybj6zoPoJHfzJ-wN6ovuEK79-BUk6z8Rofd1w.jpg?width=216&crop=smart&auto=webp&s=f0e42aaefee5d2a3ee12c1820fd343f85d627d53', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/aTfDk2Ybj6zoPoJHfzJ-wN6ovuEK79-BUk6z8Rofd1w.jpg?width=320&crop=smart&auto=webp&s=b96ac69925a3ed354e40d39a8e9bf3d713ebd984', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/aTfDk2Ybj6zoPoJHfzJ-wN6ovuEK79-BUk6z8Rofd1w.jpg?width=640&crop=smart&auto=webp&s=d953c11a65f61d7a1f371b95796db043c0dd10d5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/aTfDk2Ybj6zoPoJHfzJ-wN6ovuEK79-BUk6z8Rofd1w.jpg?width=960&crop=smart&auto=webp&s=2a218a7da8d6251172fdf94ce7ed0d4964564d1b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/aTfDk2Ybj6zoPoJHfzJ-wN6ovuEK79-BUk6z8Rofd1w.jpg?width=1080&crop=smart&auto=webp&s=0aaa7d9b202af5372ea1f1e8aa2d5b3fda89de1b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/aTfDk2Ybj6zoPoJHfzJ-wN6ovuEK79-BUk6z8Rofd1w.jpg?auto=webp&s=bc5ae613ff76ed705808a799aa05aba2060047fa', 'width': 1200}, 'variants': {}}]} |
New to Ooba and cant get a model to work for my life | 1 | I downloaded PygmalionAI/pygmalion-6b for Ooba and I keep getting this error when loading it. I'm new to this and I'm struggling.
Traceback (most recent call last): File "E:\oobabooga_windows\text-generation-webui\server.py", line 68, in load_model_wrapper shared.model, shared.tokenizer = load_model(shared.model_name, loader) File "E:\oobabooga_windows\text-generation-webui\modules\models.py", line 78, in load_model output = load_func_map[loader](model_name) File "E:\oobabooga_windows\text-generation-webui\modules\models.py", line 300, in ExLlama_HF_loader return ExllamaHF.from_pretrained(model_name) File "E:\oobabooga_windows\text-generation-webui\modules\exllama_hf.py", line 93, in from_pretrained config = ExLlamaConfig(pretrained_model_name_or_path / 'config.json') File "E:\oobabooga_windows\installer_files\env\lib\site-packages\exllama\model.py", line 52, in __init__ self.pad_token_id = read_config["pad_token_id"] KeyError: 'pad_token_id' | 2023-08-03T00:03:22 | https://www.reddit.com/r/LocalLLaMA/comments/15godj3/new_to_ooba_and_cant_get_a_model_to_work_for_my/ | Right_Situation_1074 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15godj3 | false | null | t3_15godj3 | /r/LocalLLaMA/comments/15godj3/new_to_ooba_and_cant_get_a_model_to_work_for_my/ | false | false | self | 1 | null |
Chronos-13b-v2: Llama 2 Roleplay model | 1 | FP16 : [https://huggingface.co/elinas/chronos-13b-v2](https://huggingface.co/elinas/chronos-13b-v2)
GPTQ: [https://huggingface.co/elinas/chronos-13b-v2-GPTQ](https://huggingface.co/elinas/chronos-13b-v2-GPTQ)
GGML: [https://huggingface.co/TheBloke/Chronos-13B-v2-GGML](https://huggingface.co/TheBloke/Chronos-13B-v2-GGML) | 2023-08-03T00:40:54 | https://www.reddit.com/r/LocalLLaMA/comments/15gp7m4/chronos13bv2_llama_2_roleplay_model/ | DontPlanToEnd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15gp7m4 | false | null | t3_15gp7m4 | /r/LocalLLaMA/comments/15gp7m4/chronos13bv2_llama_2_roleplay_model/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '6WSCjWUnBJuAsy5fkEx8CHGPzL_pjGNmFOjd2nM9sSA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/x5QpeICP_-QMv-z4JgUY2SW2NUKtEujxVawhIN8Z_Ms.jpg?width=108&crop=smart&auto=webp&s=9bac96fe0305c63af13fc7c41dd5377b9d019c5c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/x5QpeICP_-QMv-z4JgUY2SW2NUKtEujxVawhIN8Z_Ms.jpg?width=216&crop=smart&auto=webp&s=e00db1282d5aa2bfa7a35abcb8df7c0968c6b851', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/x5QpeICP_-QMv-z4JgUY2SW2NUKtEujxVawhIN8Z_Ms.jpg?width=320&crop=smart&auto=webp&s=f942da20959aa12054f3cb652fd61ec1c6c9ed06', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/x5QpeICP_-QMv-z4JgUY2SW2NUKtEujxVawhIN8Z_Ms.jpg?width=640&crop=smart&auto=webp&s=ba40656b3dcbc38151f5b28d86bfbc6404816201', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/x5QpeICP_-QMv-z4JgUY2SW2NUKtEujxVawhIN8Z_Ms.jpg?width=960&crop=smart&auto=webp&s=9ca18f0af3f5db22cdf9af59f5980322c1ea6360', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/x5QpeICP_-QMv-z4JgUY2SW2NUKtEujxVawhIN8Z_Ms.jpg?width=1080&crop=smart&auto=webp&s=fd948f77e87787a6f6dee4784b3c49bb0c91ddf4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/x5QpeICP_-QMv-z4JgUY2SW2NUKtEujxVawhIN8Z_Ms.jpg?auto=webp&s=0e7483d56a14d888834cc1a308664efad48cbb31', 'width': 1200}, 'variants': {}}]} |
Chronos-13b-v2: Llama 2 Roleplay, Storywriting, and Chat Model | 1 | FP16 : [https://huggingface.co/elinas/chronos-13b-v2](https://huggingface.co/elinas/chronos-13b-v2)
GPTQ: [https://huggingface.co/elinas/chronos-13b-v2-GPTQ](https://huggingface.co/elinas/chronos-13b-v2-GPTQ)
GGML: [https://huggingface.co/TheBloke/Chronos-13B-v2-GGML](https://huggingface.co/TheBloke/Chronos-13B-v2-GGML) | 2023-08-03T00:43:12 | https://www.reddit.com/r/LocalLLaMA/comments/15gp9fq/chronos13bv2_llama_2_roleplay_storywriting_and/ | DontPlanToEnd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15gp9fq | false | null | t3_15gp9fq | /r/LocalLLaMA/comments/15gp9fq/chronos13bv2_llama_2_roleplay_storywriting_and/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '6WSCjWUnBJuAsy5fkEx8CHGPzL_pjGNmFOjd2nM9sSA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/x5QpeICP_-QMv-z4JgUY2SW2NUKtEujxVawhIN8Z_Ms.jpg?width=108&crop=smart&auto=webp&s=9bac96fe0305c63af13fc7c41dd5377b9d019c5c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/x5QpeICP_-QMv-z4JgUY2SW2NUKtEujxVawhIN8Z_Ms.jpg?width=216&crop=smart&auto=webp&s=e00db1282d5aa2bfa7a35abcb8df7c0968c6b851', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/x5QpeICP_-QMv-z4JgUY2SW2NUKtEujxVawhIN8Z_Ms.jpg?width=320&crop=smart&auto=webp&s=f942da20959aa12054f3cb652fd61ec1c6c9ed06', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/x5QpeICP_-QMv-z4JgUY2SW2NUKtEujxVawhIN8Z_Ms.jpg?width=640&crop=smart&auto=webp&s=ba40656b3dcbc38151f5b28d86bfbc6404816201', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/x5QpeICP_-QMv-z4JgUY2SW2NUKtEujxVawhIN8Z_Ms.jpg?width=960&crop=smart&auto=webp&s=9ca18f0af3f5db22cdf9af59f5980322c1ea6360', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/x5QpeICP_-QMv-z4JgUY2SW2NUKtEujxVawhIN8Z_Ms.jpg?width=1080&crop=smart&auto=webp&s=fd948f77e87787a6f6dee4784b3c49bb0c91ddf4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/x5QpeICP_-QMv-z4JgUY2SW2NUKtEujxVawhIN8Z_Ms.jpg?auto=webp&s=0e7483d56a14d888834cc1a308664efad48cbb31', 'width': 1200}, 'variants': {}}]} |
Using LLaMA2 for private code base and documentation | 1 | Hey folks,
I'm pretty new to this topic, but I've been investing a lot of time to understand how to train your LLM properly, so I apologize if this request is a bit "loose."
I have a medium-sized Python application with about 1000 files (not counting the dependencies) and a lot of documentation on this application. I want to train a model to use the documentation, emails, and code base to answer questions about the software. Is it possible? Is fine-tuning the base model from meta enough? I believe I would also have to use data to give context about Python itself (something like [this](https://huggingface.co/datasets/Nan-Do/code-search-net-python)). Thanks! | 2023-08-03T01:10:25 | https://www.reddit.com/r/LocalLLaMA/comments/15gputb/using_llama2_for_private_code_base_and/ | ViRROOO | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15gputb | false | null | t3_15gputb | /r/LocalLLaMA/comments/15gputb/using_llama2_for_private_code_base_and/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '5aX1p4FyRxtZkQj3ZeMm5fhY-JUD92DqBG-6eEIvrHA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YsftUDy53eXhIKoSqoy2pLxTu7zJqbZglkgn189Rvbs.jpg?width=108&crop=smart&auto=webp&s=26dc537b0dac54cbb0c04f049f3153febb9073da', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/YsftUDy53eXhIKoSqoy2pLxTu7zJqbZglkgn189Rvbs.jpg?width=216&crop=smart&auto=webp&s=cd59ddff41671a9e06f180d161bbd53671ebbbf7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/YsftUDy53eXhIKoSqoy2pLxTu7zJqbZglkgn189Rvbs.jpg?width=320&crop=smart&auto=webp&s=5800b2777b1a5b4eeeb94c0f2db345ba78bcc400', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/YsftUDy53eXhIKoSqoy2pLxTu7zJqbZglkgn189Rvbs.jpg?width=640&crop=smart&auto=webp&s=61f92234d1736664b5f674bf9f9622fac9b98ad7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/YsftUDy53eXhIKoSqoy2pLxTu7zJqbZglkgn189Rvbs.jpg?width=960&crop=smart&auto=webp&s=e0aec351495be8dfca757958c7617a91e844523d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/YsftUDy53eXhIKoSqoy2pLxTu7zJqbZglkgn189Rvbs.jpg?width=1080&crop=smart&auto=webp&s=1aa94660e8e362556a61a5c1bc1193e619514ce6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/YsftUDy53eXhIKoSqoy2pLxTu7zJqbZglkgn189Rvbs.jpg?auto=webp&s=f63013e9fc0489cf9eec98a34693cb9bccadaa4e', 'width': 1200}, 'variants': {}}]} |
New Vicunia model based on Llama2 | 1 | 2023-08-03T02:07:58 | https://twitter.com/lmsysorg/status/1686794639469371393 | ninjasaid13 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 15gr3oz | false | {'oembed': {'author_name': 'lmsys.org', 'author_url': 'https://twitter.com/lmsysorg', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Excited to release our latest Vicuna v1.5 series, featuring 4K and 16K context lengths with improved performance on almost all benchmarks!<br>Vicuna v1.5 is based on the commercial-friendly Llama 2 and has extended context length via positional interpolation.<br><br>Since its release,… <a href="https://t.co/6MW9YyRWf7">pic.twitter.com/6MW9YyRWf7</a></p>— lmsys.org (@lmsysorg) <a href="https://twitter.com/lmsysorg/status/1686794639469371393?ref_src=twsrc%5Etfw">August 2, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/lmsysorg/status/1686794639469371393', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_15gr3oz | /r/LocalLLaMA/comments/15gr3oz/new_vicunia_model_based_on_llama2/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'BBm_8z8EjR_Uz2o7QQQaa7FtHLDA0s8FWsKJgbL4GO8', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/y3IxSWh8RiE8GS1KDXRbz1qsm1zPekobJ5OnTnongHw.jpg?width=108&crop=smart&auto=webp&s=f4f10d7057084290f64071097e1ef1e48d90b89f', 'width': 108}], 'source': {'height': 84, 'url': 'https://external-preview.redd.it/y3IxSWh8RiE8GS1KDXRbz1qsm1zPekobJ5OnTnongHw.jpg?auto=webp&s=228aa22abe0541b93c96bbb1817b7a7639fbdfaa', 'width': 140}, 'variants': {}}]} |
||
In case anyone was wondering how to use llama 2 as a chatbot with hf | 1 | [removed] | 2023-08-03T02:57:29 | https://www.reddit.com/r/LocalLLaMA/comments/15gs5gx/in_case_anyone_was_wondering_how_to_use_llama_2/ | crono760 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15gs5gx | false | null | t3_15gs5gx | /r/LocalLLaMA/comments/15gs5gx/in_case_anyone_was_wondering_how_to_use_llama_2/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'dVj-cRYRybJbopaMPdpFFuWob4mGW2zbfdVRyyRUQ7M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HAvPT86BKgFb3QGkGeynUGAnCsrVwcKxP-mArL1c46c.jpg?width=108&crop=smart&auto=webp&s=4926741ed45ad51227764bea3d9f71bb42c6666b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HAvPT86BKgFb3QGkGeynUGAnCsrVwcKxP-mArL1c46c.jpg?width=216&crop=smart&auto=webp&s=27aa71bc663e901f117530cdf7946c59788ffd5e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HAvPT86BKgFb3QGkGeynUGAnCsrVwcKxP-mArL1c46c.jpg?width=320&crop=smart&auto=webp&s=81013207831c5639264d43d8a3cdf374805246c6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HAvPT86BKgFb3QGkGeynUGAnCsrVwcKxP-mArL1c46c.jpg?width=640&crop=smart&auto=webp&s=6780e26a7b120967f1cdc31376ea41407c760742', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HAvPT86BKgFb3QGkGeynUGAnCsrVwcKxP-mArL1c46c.jpg?width=960&crop=smart&auto=webp&s=04adf0d862b55ce424f144aa2c715b4073d9d5b4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HAvPT86BKgFb3QGkGeynUGAnCsrVwcKxP-mArL1c46c.jpg?width=1080&crop=smart&auto=webp&s=e6f9666b01ddc2aa454a38b653278ddac256cccb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HAvPT86BKgFb3QGkGeynUGAnCsrVwcKxP-mArL1c46c.jpg?auto=webp&s=6d41144febb5a759841adc3a54c28bb830beceea', 'width': 1200}, 'variants': {}}]} |
Llama2 (llama.cpp) Word Cloud Generator app using ggplot2 /ggwordcloud | 1 | Building on my full-stack Dockerized build for llama.cpp, I have created an R Shiny app to generate word clouds from the generated text.
Within the context of the Docker build, the app takes a few settings and a prompt from the user, which are passed to the /completion endpoint of the llama.cpp server API. A text output and a word cloud are generated from the API response.
It's pretty simple, but it demonstrates the ability to link Llama2 API responses to interactive visualizations in a web app within a single container.
Not that impressive, but if anyone is interested, [here is the wordcloud app](https://github.com/snakewizardd/llamaDocker/blob/main/wordcloud.R)
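If you just want to hit the same endpoint without the R/Shiny layer, the request boils down to a single HTTP POST. A rough Python sketch, with the host, port, and sampling settings assumed (adjust for your own container):

```python
# Minimal sketch of the call the app makes: POST a prompt to the llama.cpp
# server's /completion endpoint and read the generated text back.
import requests

resp = requests.post(
    "http://localhost:8080/completion",    # llama.cpp server default port
    json={
        "prompt": "Give me a great idea for a DevOps project",
        "n_predict": 512,                  # max tokens to generate
        "temperature": 0.7,
    },
    timeout=600,
)
resp.raise_for_status()
print(resp.json()["content"])              # the completion text used for the word cloud
```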
Below is the output from the prompt *Give me a great idea for a DevOps project* used in the attached image
___
DevOps is a methodology that combines software development (Dev) and IT operations (Ops). The goal of DevOps is to improve the speed and quality of software delivery by breaking down silos between these two teams and automating processes.
Here are some great ideas for DevOps projects:
1. Continuous Integration/Continuous Deployment (CI/CD): Implement a CI/CD pipeline that automatically builds, tests, and deploys code changes to production after they pass automated tests. This can significantly reduce the time between code changes and deployment.
2. Infrastructure as Code (IaC): Use tools like Terraform or CloudFormation to manage infrastructure provisioning and configuration through code, rather than manually configuring servers or cloud resources. This can simplify infrastructure management, improve consistency, and reduce errors.
3. Monitoring and Logging: Implement a monitoring and logging system that tracks application performance, error rates, and user behavior. Use tools like Prometheus, Grafana, or ELK Stack to collect data from multiple sources and visualize it in real-time.
4. Containerization: Move applications to containerized environments using tools like Docker or Kubernetes. Containers provide lightweight isolation and portability, making it easier to deploy and manage applications.
5. Chaos Engineering: Introduce chaos engineering practices to deliberately fail components or services in a controlled manner to test the resilience of your systems. Use tools like Gremlin or Gloo to inject failures and observe how the system responds.
6. Release Management: Implement a release management process that automates the deployment of software changes to different environments, such as development, testing, staging, and production. Use tools like Jenkins or GitLab to manage releases and create versioned builds.
7. Security Automation: Automate security-related tasks, such as vulnerability scanning, patch management, and access control configuration. Use tools like Ansible or Puppet to simplify security automation.
8. Multi-Cloud Management: Develop a multi-cloud strategy that allows you to manage resources and applications across different cloud providers. Use tools like CloudFormation or Terraform to define infrastructure as code and deploy it across multiple clouds.
9. Culture Change: Implement cultural changes that encourage collaboration between development and operations teams. Foster a culture of shared responsibility, transparency, and continuous improvement.
10. Artificial Intelligence/Machine Learning (AI/ML): Use AI/ML to automate repetitive tasks, improve application performance, or optimize resource utilization. Implement tools like TensorFlow or PyTorch to build custom AI models or use pre-built models from cloud providers like AWS or GCP.
Remember, the best DevOps projects are those that address specific business needs and provide measurable improvements in efficiency, reliability, or customer experience. Start with small, incremental changes and gradually scale up to larger transformations. | 2023-08-03T04:09:17 | https://www.reddit.com/gallery/15gtl1v | Happy_Chicken9835 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 15gtl1v | false | null | t3_15gtl1v | /r/LocalLLaMA/comments/15gtl1v/llama2_llamacpp_word_cloud_generator_app_using/ | false | false | 1 | null |
|
Retrieve certain sections from PDFs | 1 | Each PDF contains multiple chapters. Each chapter has a section called "Exercises". The problem I am facing is getting these exercises through the Langchain retrievalqa chain and chromadb as a retriever. The query I want to ask is "Get me exercise from Chapter 1".
Currently, it's retrieving the chunks that are less relevant. Is there a solution to this problem? Is there some problem with the chunking part? Is there some Langchain functionality that can solve this?
Currently, I am using PyMuPDFLoader and RecursiveCharacterTextSplitter; is there any better way of doing this?
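What I'm imagining is something along these lines: detect the section headings myself and put the chapter/section into each chunk's metadata, so retrieval can filter on it. A naive sketch; the plain string matching on "Chapter" and "Exercises" is just an assumption about how my PDFs are laid out:

```python
# Naive sketch of section-aware chunking: tag each chunk with chapter/section
# metadata so a retriever can filter for section == "Exercises".
import fitz  # PyMuPDF
from langchain.docstore.document import Document
from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)

def load_with_sections(pdf_path: str) -> list[Document]:
    full_text = "\n".join(page.get_text() for page in fitz.open(pdf_path))
    docs = []
    # crude chapter split; assumes chapters literally start with "Chapter "
    for chapter_no, chapter_text in enumerate(full_text.split("Chapter ")[1:], start=1):
        body, _, exercises = chapter_text.partition("Exercises")
        for section_name, section_text in (("body", body), ("Exercises", exercises)):
            for chunk in splitter.split_text(section_text):
                docs.append(Document(
                    page_content=chunk,
                    metadata={"chapter": chapter_no, "section": section_name},
                ))
    return docs
```

The Chroma retriever could then use something like `search_kwargs={"filter": {"section": "Exercises"}}` instead of relying purely on similarity, but I'm not sure this is the cleanest way.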
I found this in the Langchain documentation: [https://python.langchain.com/docs/integrations/document\_loaders/docugami,](https://python.langchain.com/docs/integrations/document_loaders/docugami) but it's paid. Is there any way to do intelligent chunking (chunking section by section) and add that section to metadata? Is there any alternative to Docugami | 2023-08-03T05:25:00 | https://www.reddit.com/r/LocalLLaMA/comments/15guzif/retrieve_certain_sections_from_pdfs/ | nlpllama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15guzif | false | null | t3_15guzif | /r/LocalLLaMA/comments/15guzif/retrieve_certain_sections_from_pdfs/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'C1O5S5WQ2zql4CQHBQC5FMwveJdPtaJ9r_xGWbzu48o', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/yj1VFfepLr812JLTSNnU5MJG1lU-9ZqkUURZj9T-PA0.jpg?width=108&crop=smart&auto=webp&s=2684aa31208d728f65279640de17c8d8f9039e79', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/yj1VFfepLr812JLTSNnU5MJG1lU-9ZqkUURZj9T-PA0.jpg?width=216&crop=smart&auto=webp&s=d50c278029cd238c11dc42e60a8b08d7d1f28bc3', 'width': 216}, {'height': 175, 'url': 'https://external-preview.redd.it/yj1VFfepLr812JLTSNnU5MJG1lU-9ZqkUURZj9T-PA0.jpg?width=320&crop=smart&auto=webp&s=1642eda69cd46554b563bc6d931ff7565bf15d55', 'width': 320}, {'height': 351, 'url': 'https://external-preview.redd.it/yj1VFfepLr812JLTSNnU5MJG1lU-9ZqkUURZj9T-PA0.jpg?width=640&crop=smart&auto=webp&s=fbdcb89f2e77b07ef0f74faf07f62774da8993e6', 'width': 640}], 'source': {'height': 436, 'url': 'https://external-preview.redd.it/yj1VFfepLr812JLTSNnU5MJG1lU-9ZqkUURZj9T-PA0.jpg?auto=webp&s=a6f2697c0bbf3ffa9fd7a65e9e0e8d57c392d56a', 'width': 794}, 'variants': {}}]} |
Flashbacks from 90' | 1 | 2023-08-03T06:28:29 | Wrong_User_Logged | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15gw4ks | false | null | t3_15gw4ks | /r/LocalLLaMA/comments/15gw4ks/flashbacks_from_90/ | false | false | 1 | {'enabled': True, 'images': [{'id': '4ut0n7547T7Vr7ZunSmfeyhW2gSP-tIMXqZOm7wpx8E', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/17hivxaaaufb1.jpg?width=108&crop=smart&auto=webp&s=eaab26517c071f57d3baff51596cda668468e710', 'width': 108}, {'height': 150, 'url': 'https://preview.redd.it/17hivxaaaufb1.jpg?width=216&crop=smart&auto=webp&s=50d24ea170e61e2c7dac1e4da48e2008411b52a9', 'width': 216}, {'height': 223, 'url': 'https://preview.redd.it/17hivxaaaufb1.jpg?width=320&crop=smart&auto=webp&s=04206a1f272f3690f3042979371a5b08f1567bed', 'width': 320}, {'height': 446, 'url': 'https://preview.redd.it/17hivxaaaufb1.jpg?width=640&crop=smart&auto=webp&s=f5a23c435e47c238233050362d1398214a7b4582', 'width': 640}], 'source': {'height': 500, 'url': 'https://preview.redd.it/17hivxaaaufb1.jpg?auto=webp&s=928dbf5d1bc5801689d16572f88a0b0a1dc0e711', 'width': 716}, 'variants': {}}]} |
|||
How do i get rid of openai key when i don't need it??? | 1 | Hi, can someone give me a hand?
I want to use llama_index to do indexing, but I keep being prompted for an OpenAI key. I don't want to use OpenAI at all; I want to use other free LLMs like those on Hugging Face. I was thrown the error "ValueError: No API key found for OpenAI." at the function calls
​
`ServiceContext.from_defaults(llm_predictor=llm, prompt_helper=prompt_helper) and VectorStoreIndex.from_documents(documents, service_context=service_context)`
​
So here is my code:
`from langchain import HuggingFaceHub, LLMChain, PromptTemplate`
`from llama_index import VectorStoreIndex, SimpleDirectoryReader,LLMPredictor,PromptHelper,ServiceContext`
`llm = HuggingFaceHub(repo_id="google/flan-t5-small", model_kwargs={"temperature":1, "max_length":1024})`
​
`service_context = ServiceContext.from_defaults(llm_predictor=llm, prompt_helper=prompt_helper)`
`documents = SimpleDirectoryReader('docs').load_data()`
`index = VectorStoreIndex.from_documents(documents, service_context=service_context)` | 2023-08-03T06:36:15 | https://www.reddit.com/r/LocalLLaMA/comments/15gw997/how_do_i_get_rid_of_openai_key_when_i_dont_need_it/ | popcornismid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15gw997 | false | null | t3_15gw997 | /r/LocalLLaMA/comments/15gw997/how_do_i_get_rid_of_openai_key_when_i_dont_need_it/ | false | false | self | 1 | null |
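One way the snippet above can avoid the OpenAI fallback: ServiceContext defaults to OpenAI embeddings when no embed_model is passed, and the LangChain LLM needs to be wrapped in LLMPredictor. A rough sketch of a fully local/Hugging Face setup (exact import paths shift between llama_index releases, so treat this as approximate):

```python
# Sketch: same pipeline, but with the LangChain LLM wrapped in LLMPredictor and
# a local Hugging Face embedding model, so nothing defaults to OpenAI.
# Exact import locations differ between llama_index releases.
from langchain import HuggingFaceHub
from langchain.embeddings import HuggingFaceEmbeddings
from llama_index import (LLMPredictor, LangchainEmbedding, ServiceContext,
                         SimpleDirectoryReader, VectorStoreIndex)

llm = HuggingFaceHub(repo_id="google/flan-t5-small",
                     model_kwargs={"temperature": 1, "max_length": 1024})

service_context = ServiceContext.from_defaults(
    llm_predictor=LLMPredictor(llm=llm),              # wrap the LangChain LLM
    embed_model=LangchainEmbedding(                   # local embeddings instead of OpenAI
        HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
    ),
)

documents = SimpleDirectoryReader("docs").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
```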
Is buying Mac Studio a good idea for running models? | 1 | 2023-08-03T06:49:27 | Wrong_User_Logged | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15gwhfa | false | null | t3_15gwhfa | /r/LocalLLaMA/comments/15gwhfa/is_buying_mac_studio_a_good_idea_for_running/ | false | false | 1 | {'enabled': True, 'images': [{'id': '0GIuiPhksQyuqFa6LoXub5ckC40vd4Yq2lA__ExvbQs', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/40vph6jzdufb1.jpg?width=108&crop=smart&auto=webp&s=ab3409f8a85f449b0fced3d7d996ca47572aa24a', 'width': 108}, {'height': 150, 'url': 'https://preview.redd.it/40vph6jzdufb1.jpg?width=216&crop=smart&auto=webp&s=ec85e849f9afb9e19bb9e4e3c8f613630e37dbab', 'width': 216}, {'height': 223, 'url': 'https://preview.redd.it/40vph6jzdufb1.jpg?width=320&crop=smart&auto=webp&s=1884794814d270e1dc8b79150d7277dc99007469', 'width': 320}, {'height': 446, 'url': 'https://preview.redd.it/40vph6jzdufb1.jpg?width=640&crop=smart&auto=webp&s=407e390010e363245e7030ee9d4f3244bd596ad4', 'width': 640}], 'source': {'height': 500, 'url': 'https://preview.redd.it/40vph6jzdufb1.jpg?auto=webp&s=02721950e9c9e643fcbc3611851da15e62e0dad3', 'width': 716}, 'variants': {}}]} |
|||
I'm paying you to help me generate erotic stories (I already have the models on Runpod). | 1 | [removed] | 2023-08-03T07:01:01 | https://www.reddit.com/r/LocalLLaMA/comments/15gwol7/im_paying_you_to_help_me_generate_erotic_stories/ | Weekly_Highway5493 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15gwol7 | false | null | t3_15gwol7 | /r/LocalLLaMA/comments/15gwol7/im_paying_you_to_help_me_generate_erotic_stories/ | false | false | nsfw | 1 | null |
The best model for "Talk to your data" scenarios? | 1 | So, the part where we divide documents into chunks and create embeddings for them, store them in a vector DB is pretty straight forward, but using an LLM for taking chunks as input and answering the query; I am not clear which Open Sourced LLM would be a clear winner to use.
There are a lot of 7B and 13B models out there we can use, but which one is best or more optimized for this task?
Can anyone help/guide me in this regard? I believe there is no benchmark or leaderboard to evaluate a model's performance in question-answering scenarios, or is there? | 2023-08-03T07:01:35 | https://www.reddit.com/r/LocalLLaMA/comments/15gwp19/the_best_model_for_talk_to_your_data_scenarios/ | Raise_Fickle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15gwp19 | false | null | t3_15gwp19 | /r/LocalLLaMA/comments/15gwp19/the_best_model_for_talk_to_your_data_scenarios/ | false | false | self | 1 | null |