title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns]) | url (string, 0-780 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns]) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool) | media (string, 646-1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool) | stickied (bool) | thumbnail (string, 4-213 chars) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, nullable) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
How many of you built a rig specifically to self host LLMs as a hobby? | 214 | So, I am curious how many folks have invested in hardware to run local LLMs. I am not talking about ML/AI pros who work for tech companies and who buy this stuff anyway to earn a living--I'm talking about hobbyists: what are your use cases, and do you feel it was worth it?
Building a home server right now, and was curious if I should invest in extra RAM and a 4090 for this--curious to hear about other people. | 2023-07-04T18:12:34 | https://www.reddit.com/r/LocalLLaMA/comments/14qmk3v/how_many_of_you_built_a_rig_specifically_to_self/ | Infinite100p | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14qmk3v | false | null | t3_14qmk3v | /r/LocalLLaMA/comments/14qmk3v/how_many_of_you_built_a_rig_specifically_to_self/ | false | false | self | 214 | null |
Huggingface alternative | 39 | I'm currently downloading a model from huggingface at 200 KB/s. It should be 100x as fast. Has anybody experienced that? Does anyone download their LLMs from a different source? I've recently stumbled upon [ai.torrents.luxe](https://ai.torrents.luxe) but it's not up to date and lacks many (especially ggml) models.
I think torrents are very suitable for distributing LLMs. | 2023-07-04T18:43:14 | https://www.reddit.com/r/LocalLLaMA/comments/14qncmy/huggingface_alternative/ | Mefi282 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14qncmy | false | null | t3_14qncmy | /r/LocalLLaMA/comments/14qncmy/huggingface_alternative/ | false | false | self | 39 | null |
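A minimal sketch of a script-based alternative to browser downloads: `huggingface_hub`'s `snapshot_download` resumes interrupted transfers and can filter to just the files you need. The repo id and file pattern below are only examples, not recommendations.

```python
from huggingface_hub import snapshot_download

# Example repo and pattern -- substitute the model and quantization you actually want.
snapshot_download(
    repo_id="TheBloke/WizardLM-30B-Uncensored-GGML",
    allow_patterns=["*q4_0*", "*.json"],   # skip the quantizations you don't need
    local_dir="models/WizardLM-30B-Uncensored-GGML",
)
```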
I measured the effect of RAM overclocking on LLM speed. | 57 | I tested how RAM speed affects generation speed. In short, an 11% increase in RAM frequency leads to a 6% increase in generation speed. The details follow:
The test setup was an AMD Ryzen 9 3950X and 64GB RAM (Kingston Renegade).
I started the model like this: `.\koboldcpp.exe --model airoboros-65b-gpt4-1.4.ggmlv3.q5_K_M.bin --highpriority --threads 16 --usecublas --stream`, then used the instruction mode to give it the task "Write a story about a plumber who was disrespected with a plunger." three times and recorded the generation speed reported by the console.
First run: memory at XMP profile, 18-22-22-39 3600MHz.
Ms per token: 1073 1070 1068 = avg 1070
Second run, the same at 4000MHz.
Ms per token: 1018 1010 1000 = avg 1009 | 2023-07-04T21:34:50 | https://www.reddit.com/r/LocalLLaMA/comments/14qrp0s/i_measured_the_effect_of_ram_overclocking_on_llm/ | Barafu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14qrp0s | false | null | t3_14qrp0s | /r/LocalLLaMA/comments/14qrp0s/i_measured_the_effect_of_ram_overclocking_on_llm/ | false | false | self | 57 | null |
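For readers checking the numbers above, the percentages follow directly from the reported averages; a quick sketch:

```python
# Reproduce the arithmetic from the post above.
freq_gain = (4000 - 3600) / 3600        # ~11% higher RAM frequency
avg_xmp = sum([1073, 1070, 1068]) / 3   # ~1070 ms/token at 3600 MHz
avg_oc = sum([1018, 1010, 1000]) / 3    # ~1009 ms/token at 4000 MHz
speed_gain = avg_xmp / avg_oc - 1       # ~6% faster generation
print(f"{freq_gain:.1%} faster RAM -> {speed_gain:.1%} faster generation")
```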
CPU or GPU for inference only ? | 5 | Hey, I'm currently thinking of building something to do inference only with LLMs.
Does a GPU make a difference, or can I run using only a CPU?
Do you think the new AM5/DDR5 combo is more relevant for this kind of task than AM4/DDR4?
Also, if I want to use diffusion models for image generation, is a GPU required?
Many thanks ✌️ | 2023-07-04T21:46:05 | https://www.reddit.com/r/LocalLLaMA/comments/14qryw5/cpu_or_gpu_for_inference_only/ | 1PLSXD | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14qryw5 | false | null | t3_14qryw5 | /r/LocalLLaMA/comments/14qryw5/cpu_or_gpu_for_inference_only/ | false | false | self | 5 | null |
Isn't GPT-2 enough for most tasks nowadays? Why should I use a really large model? | 1 | [removed] | 2023-07-04T21:53:05 | https://www.reddit.com/r/LocalLLaMA/comments/14qs4vh/isnt_gpt2_enough_for_most_tasks_nowadays_why/ | nderstand2grow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14qs4vh | false | null | t3_14qs4vh | /r/LocalLLaMA/comments/14qs4vh/isnt_gpt2_enough_for_most_tasks_nowadays_why/ | false | false | default | 1 | null |
When you train a LoRA at 256 token cutoff length/context window, what does that mean for its behavior? | 7 | I know that the training process itself is only going to look at 256 token chunks at once, and the typical llama model is trained/finetuned at 2048 token context. I've read that the behavior of a LoRA trained with 256 cutoff length/token context also suffers from that limitation, and it can't "see" beyond the 256 tokens when used.
Once you start using a model with a 256 token context LoRA, is the 256 token block positioned against the "end" of the context, aka where the LLM is currently outputting? Will it only affect behavior utilizing what is seen within that 256 token context window?
If a LLM with this LoRA is inferring, and it has instructions in that 256 token window in its context that are telling it to do stuff the LoRA trained it to do, and it also has instructions beyond that (say filling up the full 2048 token context) telling it to do more stuff, is it able to follow all the instructions (256 token context LoRA, the rest of the 2048 token context) without getting scrambled? Basically, does the LoRA interfere with its normal ability to handle 2048 token context, and can it merge the 256 token context LoRA behavior into that normal ability without getting messed up? | 2023-07-04T23:24:55 | https://www.reddit.com/r/LocalLLaMA/comments/14qu6nz/when_you_train_a_lora_at_256_token_cutoff/ | SoylentMithril | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14qu6nz | false | null | t3_14qu6nz | /r/LocalLLaMA/comments/14qu6nz/when_you_train_a_lora_at_256_token_cutoff/ | false | false | self | 7 | null |
CPU Inference Time Falcon-7b | 5 | I'm having a hard time understanding inference time on CPU. Could someone guide me on how to estimate inference time for max_length=2048, k=10, do_sample=True? I need to run this on a notebook. Here are my laptop specs:
Intel 11th Gen i7-11800H @ 2.30GHz, 16 logical CPUs, 32GB RAM. My GPU is an NVIDIA GeForce RTX 3060 Laptop with 6GB of VRAM.
Unfortunately, I only have 6GB of VRAM and haven't had luck running transformers >= 7B on the GPU | 2023-07-05T03:03:18 | https://www.reddit.com/r/LocalLLaMA/comments/14qyics/cpu_inference_time_falcon7b/ | Reasonable-Bowler-54 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14qyics | false | null | t3_14qyics | /r/LocalLLaMA/comments/14qyics/cpu_inference_time_falcon7b/ | false | false | self | 5 | null |
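A rough way to estimate this: single-batch token generation on CPU is usually memory-bandwidth bound, so the time per token is roughly the model's size divided by effective RAM bandwidth. The model size and bandwidth figures below are assumptions for illustration, not measurements of this laptop.

```python
# Back-of-envelope estimate, assuming generation is memory-bandwidth bound.
model_size_gb = 7.5        # ~7B parameters at 8-bit is roughly this many gigabytes
ram_bandwidth_gbs = 40.0   # assumed effective bandwidth; measure your own machine
new_tokens = 2048

seconds_per_token = model_size_gb / ram_bandwidth_gbs
print(f"~{seconds_per_token * 1000:.0f} ms/token, "
      f"~{seconds_per_token * new_tokens / 60:.1f} min for {new_tokens} new tokens")
```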
Local training 2GPU methods and benchmarks, with/without NVlink | 12 | Curious if anyone has trained models that don't fit on one GPU.
My understanding is there are a few methods to do this:
PP- pipeline parallelism
ZeRO
TP - tensor parallelism
According to Hugging Face, these should be about the same with NVLink, and PP should be the fastest without. I haven't seen solid benchmarks showing this though. Does anyone have a link to one?
I believe accelerate should have these all implemented through integrations with DeepSpeed and Megatron-LM. I have yet to see solid tutorials on this. Accelerate doesn't have solid docs on picking your method of model parallelism, as far as I know.
I believe for PP you need to convert the model to be sequential. Are there any libraries that can do this for you?
I’d love to finetune a quantized falcon locally, as I don’t have NVlink I would assume PP would be fastest. | 2023-07-05T03:03:22 | https://www.reddit.com/r/LocalLLaMA/comments/14qyief/local_training_2gpu_methods_and_benchmarks/ | Artistic_Load909 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14qyief | false | null | t3_14qyief | /r/LocalLLaMA/comments/14qyief/local_training_2gpu_methods_and_benchmarks/ | false | false | self | 12 | null |
I am having some issues with ooba | 1 | When loading a ggml model, the command line used to show something like "gpu accelerated", but now I'm not seeing that anymore. Also, it seems that when I unload a model, it stays in ram.
Should I do a reinstall? Any other suggestions?
​
2023-07-04 22:21:51 INFO:llama.cpp weights detected: models\TheBloke_WizardLM-30B-Uncensored-GGML\WizardLM-30B-Uncensored.ggmlv3.q4_0.bin
2023-07-04 22:21:51 INFO:Cache capacity is 0 bytes
llama.cpp: loading model from models\TheBloke_WizardLM-30B-Uncensored-GGML\WizardLM-30B-Uncensored.ggmlv3.q4_0.bin
llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 32001
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 6656
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 52
llama_model_load_internal: n_layer = 60
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: n_ff = 17920
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 30B
llama_model_load_internal: ggml ctx size = 0.13 MB
llama_model_load_internal: mem required = 19756.67 MB (+ 3124.00 MB per state)
llama_new_context_with_model: kv self size = 3120.00 MB
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
2023-07-04 22:22:24 INFO:Loaded the model in 33.31 seconds.
​ | 2023-07-05T03:25:05 | https://www.reddit.com/r/LocalLLaMA/comments/14qyx94/i_am_having_some_issues_with_ooba/ | L_vE_scRmBLd_wRds | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14qyx94 | false | null | t3_14qyx94 | /r/LocalLLaMA/comments/14qyx94/i_am_having_some_issues_with_ooba/ | false | false | default | 1 | null |
Experimental open-source interface for interacting with various generative neural networks. | 32 | 2023-07-05T04:02:28 | https://www.reddit.com/gallery/14qzm0f | Svgsprite | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 14qzm0f | false | null | t3_14qzm0f | /r/LocalLLaMA/comments/14qzm0f/experimental_opensource_interface_for_interacting/ | false | false | 32 | null |
What is the best text web ui currently? | 32 | Anyone know which has the most models supported and the fastest web UI? Or at least what everyone is using. I've seen [https://github.com/oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) and [https://github.com/ParisNeo/lollms-webui](https://github.com/ParisNeo/lollms-webui).
im wondering if there are any better ones out there, also what configuration do you all use to run models? (like what type of models, GPTQ/ggml or whatever) | 2023-07-05T04:39:42 | https://www.reddit.com/r/LocalLLaMA/comments/14r0ar1/what_is_the_best_text_web_ui_currently/ | ArrodesDev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14r0ar1 | false | null | t3_14r0ar1 | /r/LocalLLaMA/comments/14r0ar1/what_is_the_best_text_web_ui_currently/ | false | false | self | 32 | {'enabled': False, 'images': [{'id': '0Mf47iIzydtRJnU06dNsQpHHYsNgVZDQKhZB6wz-XK0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tKzDrXC-irF0dIy-F4Sw2BKUGKg2LT-wl7YAeU6WJtA.jpg?width=108&crop=smart&auto=webp&s=feebb78e5768ba366d52d83bf17c105afd5fc4c4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tKzDrXC-irF0dIy-F4Sw2BKUGKg2LT-wl7YAeU6WJtA.jpg?width=216&crop=smart&auto=webp&s=09b37b0520a9f23fde1bba4a35a18af0e2029be6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tKzDrXC-irF0dIy-F4Sw2BKUGKg2LT-wl7YAeU6WJtA.jpg?width=320&crop=smart&auto=webp&s=7a4fb61e368b9aeac199bba67fc437c6cd84046b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tKzDrXC-irF0dIy-F4Sw2BKUGKg2LT-wl7YAeU6WJtA.jpg?width=640&crop=smart&auto=webp&s=3b347ab869757eefebe2bb1ced27100c40270f2a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tKzDrXC-irF0dIy-F4Sw2BKUGKg2LT-wl7YAeU6WJtA.jpg?width=960&crop=smart&auto=webp&s=f4c2ac9b6bf853ea9c3e85f83b3a327495590e3c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tKzDrXC-irF0dIy-F4Sw2BKUGKg2LT-wl7YAeU6WJtA.jpg?width=1080&crop=smart&auto=webp&s=b82cb5772b4d8cf9e3fb74aa02c0a4749968b031', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tKzDrXC-irF0dIy-F4Sw2BKUGKg2LT-wl7YAeU6WJtA.jpg?auto=webp&s=15ed3c06ff698f9dc54db1f72b3bb69bb8adb8df', 'width': 1200}, 'variants': {}}]} |
How do you choose LLM for local inference? | 5 | What criteria do you use to select the most appropriate LLMs for local inference? Is there a specific LLM architecture you prefer? Do you consider support for CUDA/Metal? What about the quantization size or the amount of RAM required?
Which of these (or any other) factors are the most crucial to your selection process? Thanks! | 2023-07-05T06:47:27 | https://www.reddit.com/r/LocalLLaMA/comments/14r2ljj/how_do_you_choose_llm_for_local_inference/ | Greg_Z_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14r2ljj | false | null | t3_14r2ljj | /r/LocalLLaMA/comments/14r2ljj/how_do_you_choose_llm_for_local_inference/ | false | false | self | 5 | null |
prompting orca-mini-v2_7b.ggmlv3.q8_0.bin | 7 | It didn't quite get the correct answer to the problem:
[https://paste.c-net.org/SuggestVance](https://paste.c-net.org/SuggestVance)
I told it to "think through step by step" and set temperature to 0.
Can someone give me tips to get better results from this model? thanks | 2023-07-05T06:55:35 | https://www.reddit.com/r/LocalLLaMA/comments/14r2r15/prompting_orcaminiv2_7bggmlv3q8_0bin/ | dewijones92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14r2r15 | false | null | t3_14r2r15 | /r/LocalLLaMA/comments/14r2r15/prompting_orcaminiv2_7bggmlv3q8_0bin/ | false | false | self | 7 | null |
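One thing worth checking is the prompt template. As far as I recall, the orca_mini model cards use a `### System / ### User / ### Response` layout; the helper below is a hypothetical sketch of that format, so verify it against the card for the exact model you downloaded.

```python
# Hypothetical prompt builder for the orca_mini format -- check the model card.
def orca_mini_prompt(instruction: str,
                     system: str = ("You are an AI assistant that follows instruction "
                                    "extremely well. Think through the problem step by step.")) -> str:
    return f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"

print(orca_mini_prompt("If I have 3 apples and eat one, how many are left?"))
```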
Orca mini 3B is, as you might expect, not a genius level intellect | 63 | ​
[I tried to write a little program that uses llama.cpp to generate writing samples, for reasons. It has successfully written hundreds of essays already, but occasionally it just refuses for dumb reasons.](https://preview.redd.it/ggsc77g1n3ab1.png?width=1741&format=png&auto=webp&s=047ae275d8fb28d4afe0465217d9464d8e0e49cf) | 2023-07-05T07:41:00 | https://www.reddit.com/r/LocalLLaMA/comments/14r3lhe/orca_mini_3b_is_as_you_might_expect_not_a_genius/ | Robot_Graffiti | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14r3lhe | false | null | t3_14r3lhe | /r/LocalLLaMA/comments/14r3lhe/orca_mini_3b_is_as_you_might_expect_not_a_genius/ | false | false | 63 | null |
how to distill llm's in-context learning, any suggestions? | 1 | **How to distill an LLM's in-context learning, any suggestions?** | 2023-07-05T09:04:46 | https://www.reddit.com/r/LocalLLaMA/comments/14r55b7/how_to_distill_llms_incontext_learning_any/ | MousaicFall | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14r55b7 | false | null | t3_14r55b7 | /r/LocalLLaMA/comments/14r55b7/how_to_distill_llms_incontext_learning_any/ | false | false | default | 1 | null |
Help presenting key differences between two different versions of a document | 10 | Hey,
not sure if an LLM would suit this best, but basically I would like to get the key differences between two different versions of our terms and conditions, each being some 5 pages long, so I don't think feeding them to GPT via the API is an option. I haven't tinkered with LLMs locally (yet), but as I am building an investment calculation on whether we should buy a workstation to run different tasks locally, if we could solve this (and similar) problems by running a model locally then obviously it would be a plus in my books.
So basically looking for help and tips, if anyone did something similar? Thanks | 2023-07-05T09:08:08 | https://www.reddit.com/r/LocalLLaMA/comments/14r57nk/help_presenting_key_differences_between_two/ | MRWONDERFU | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14r57nk | false | null | t3_14r57nk | /r/LocalLLaMA/comments/14r57nk/help_presenting_key_differences_between_two/ | false | false | self | 10 | null |
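One way around the length problem is to compute the diff locally and only send the changed lines to the model. A minimal sketch using Python's standard-library `difflib` (the file names are placeholders):

```python
import difflib

def key_differences(old_text: str, new_text: str) -> str:
    """Unified diff of the two T&C versions; only these changed lines need to go to an LLM."""
    diff = difflib.unified_diff(
        old_text.splitlines(), new_text.splitlines(),
        fromfile="terms_v1", tofile="terms_v2", lineterm="",
    )
    return "\n".join(diff)

# usage (placeholder paths):
# print(key_differences(open("terms_v1.txt").read(), open("terms_v2.txt").read()))
```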
Deploying models on HuggingFace | 1 | Sorry if this has been asked before: I'd like to deploy conversationals LLMs with HuggingFace (I'm looking for cheaper GPT3.5 replacements, and I a willing to take a drop in answer quality).
However mosts models I'm considering need large amounts of memory (for example, TheBloke/vicuna-13B-1.1-HF requires 26GB).
I don't know how to deploy those with Endpoint Inference. Do I need a A100? Isn't there a different way to proceed? | 2023-07-05T10:31:20 | https://www.reddit.com/r/LocalLLaMA/comments/14r6uhx/deploying_models_on_huggingface/ | Lesterpaintstheworld | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14r6uhx | false | null | t3_14r6uhx | /r/LocalLLaMA/comments/14r6uhx/deploying_models_on_huggingface/ | false | false | default | 1 | null |
GPT4 for data annotation | 1 |
I am trying to use GPT4/GPT3.5 for data annotation to prepare supposedly gold-standard data to train my own smaller models (like Orca), and I am unable to. Most times, it does not even return valid JSON. Even if it does, it does not give consistent results, even with temperature set to zero. There are times it is consistent, but it does not follow instructions and hallucinates. My use case is mostly large prompts (greater than 8k tokens) where I send in unformatted data and an instruction to extract structured data from it.
I went through wizardLM and Orca and other papers from MS, and they do not seem to have this problem, mostly because their prompts are not long.
How do I get around this problem? Even if I do not find an answer to this question, I am okay with just leaving this post here as a heads up for anyone trying to use GPT4 to prepare gold-standard data. | 2023-07-05T12:29:32 | https://www.reddit.com/r/LocalLLaMA/comments/14r9awd/gpt4_for_data_annotation/ | mr_dark_matter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14r9awd | false | null | t3_14r9awd | /r/LocalLLaMA/comments/14r9awd/gpt4_for_data_annotation/ | false | false | default | 1 | null |
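A common mitigation is to validate the output locally and retry, feeding the parser error back to the model. The sketch below is generic: `call_model` is a placeholder for whatever API client you use, not a real library function.

```python
import json

def annotate_with_retries(call_model, prompt: str, max_tries: int = 3) -> dict:
    """call_model(prompt) -> raw text; placeholder for your GPT-4/3.5 client."""
    for _ in range(max_tries):
        raw = call_model(prompt)
        try:
            return json.loads(raw)  # accept only syntactically valid JSON
        except json.JSONDecodeError as err:
            # Feed the parse error back so the next attempt can self-correct.
            prompt += f"\n\nYour previous output was not valid JSON ({err}). Return only valid JSON."
    raise ValueError("No valid JSON after retries")
```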
Does ExLlama not truncate the prompt length? | 10 | I have a 3090. I am currently running a superhot model using 3840 max_seq_len /2 compress_pos_emb
It seems to work but if I go over the prompt limit, I get the following error:
RuntimeError: start (0) + length (3841) exceeds dimension size (3840).
I'm not sure why this is a problem. There's a setting in text-generation webui in the parameter tab called "Truncate the prompt up to this length" As long as you set it the same as your max_seq_len then it will truncate the prompt to remove everything after that limit so that prompt does not overfill.
For instance.
Harry is a Rabbit. **Harry drives a red car**
**###Instruction: Continue the story**
**### Response:**
This prompt has 28 tokens. If you truncate the prompt to 22 tokens (plus max_new_tokens), it will ignore the fact that Harry is a Rabbit.
It worked for gptq for llama models. I've had stories that went on for 20,000 tokens. Is that not the same for exllama models? I've even reduced the truncate limit but that value seems to be ignored by exllama.
I really want this to work. I don't want to go back to GPTQ-for-LLaMa models since the context windows are so small. World information itself takes a very large amount of context. There's little left to continue the story. | 2023-07-05T15:50:23 | https://www.reddit.com/r/LocalLLaMA/comments/14rejf6/does_exllama_not_truncate_the_prompt_length/ | zippyfan | self.LocalLLaMA | 2023-07-05T16:06:35 | 0 | {} | 14rejf6 | false | null | t3_14rejf6 | /r/LocalLLaMA/comments/14rejf6/does_exllama_not_truncate_the_prompt_length/ | false | false | self | 10 | null |
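If the front end's truncation setting is being ignored, you can enforce the budget yourself before the prompt ever reaches the loader. A minimal sketch that works on any list of token ids (the numbers mirror the settings in the post above):

```python
def truncate_prompt(token_ids: list[int], max_seq_len: int = 3840,
                    max_new_tokens: int = 200) -> list[int]:
    """Keep only the most recent tokens so prompt + generation fits in the window."""
    budget = max_seq_len - max_new_tokens
    return token_ids[-budget:] if len(token_ids) > budget else token_ids
```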
Flan-Open-Llama-13B | 19 | 2023-07-05T16:16:40 | https://twitter.com/enricoshippole/status/1676624675030401026?s=46&t=4Lg1z9tXUANCKLiHwRSk_A | Acrobatic-Site2065 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 14rfam5 | false | {'oembed': {'author_name': 'Enrico Shippole', 'author_url': 'https://twitter.com/EnricoShippole', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Releasing Flan-Open-Llama-13b, an OpenLLaMA model fine-tuned on the FLAN instruction dataset. <a href="https://t.co/WylFkHVr2X">https://t.co/WylFkHVr2X</a></p>— Enrico Shippole (@EnricoShippole) <a href="https://twitter.com/EnricoShippole/status/1676624675030401026?ref_src=twsrc%5Etfw">July 5, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/EnricoShippole/status/1676624675030401026', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_14rfam5 | /r/LocalLLaMA/comments/14rfam5/flanopenllama13b/ | false | false | 19 | {'enabled': False, 'images': [{'id': '9bLRg0I8ZS0XoyaiPFyy_-25YZBXFO1U_UBMsP36La8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/F5zgOKEBW2GfhMHZiuO6Ny9TfMAULpr8mFXV-Sm2Yoc.jpg?width=108&crop=smart&auto=webp&s=7ee521093583c9b8c452daaa361258c69e9185cf', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/F5zgOKEBW2GfhMHZiuO6Ny9TfMAULpr8mFXV-Sm2Yoc.jpg?auto=webp&s=9955c25df9ca6b7f7d4dff817686230121e8e19d', 'width': 140}, 'variants': {}}]} |
What am I doing wrong with WizardCoder-15B-v1.0??? | 4 | I can't for the life of me get any sensible code from this model.
I'm using oobabooga as the front end and the instruct prompt is the standard Llama ### Instruct: ### Response: , but when I ask it to generate extremely simple code it just spits out insane levels of garbage. For example:
write python code that will print to 2 columns. In the first column count from 1 to 10, in the second column count from 10 to 1
it responds with this:
for i in range(1,6):
for j in reversed(range(i+5)):
if (j==4 or j==3) and i!=5 : continue
else:
print("{:>3}".format((9-j)+i), end=" ")
print()
print("-"*8+"|"+"-"*7)
for k in range(1,6):
for l in range(k+5):
if ((l<=(k//2)*2+(k%2))or(l>=len(str(((k*(k+1))/2))+1)-1-(k%2)))and not(k==(n:=int((-1+math.sqrt(1+8*l))/2))): #to check whether number of rows are even/odd and adjust accordingly
break
elif abs(-1+abs(float(''.join([chr(ord('@') + int(x)%32) for x in str(num)]))))<=1e-10:
#check perfect square condition
num=((k*(k+1))/2)+(k%2)-(l%(k+1));#formula used to find numbers based on row position while len(set([''.join([y[z] for z in [m]*d]) for y in ['@']*((w:=max([(p:=sum([[q**r for q in p]+[(s:=sum([t**(u:=v+w)])/(u+1)<>(t**(u+1)/(u+1))]])[::-1].index(True)),p]))+1)*(h:=min([(p:=sum([[q**r for q in p]+[(s:=sum([t**(u:=v+w)])/(u+1)<>(t**(u+1)/(u+1))]])[::-1].index(False)),p])),m in [(a:=list(map(lambda _:random(),'abcdefghijklmnopqrstuvwxyz')))[:],[]][::1][:]] for d in [[c:=randint(1,(b:=len(a)//2))[0]-1 for _ in''*w]][:-1]])!= w*h:#checking unique arrangement num+=k; print("{:<3}|".format((' '.join([" "*(width := max([len(_) for _ in set([''.join([y[z] for z in [m]*d]) for m in list(filter(None,[['@',_,''][_[0]==_] for _ in [[' ',o]=='@@'[bool(_[-1])] for o,_ in enumerate(reversed(sorted(set([''.join([y[z] for z in [m]*d]) for y in [_+' '] for _,d in zip(*[[itertools.cycle(a)][::-1][_:]],[_=='@'*w for _ in itertools.repeat('')])*h])))]])) for c,*_,_=zip(*(enumerate(row)))))])+f"{num}")) else: pass
Output:
1 2
-1 |
3 4 5
-2 |
-3 |
6 1 2 3 4 5
-1 |-2|-3|
6
Note: The output may vary as it's generated randomly every time you run this program due to randomization involved with finding permutations using math formula. Also note that there can be multiple solutions depending upon how many digits we have after decimal point which makes our solution more complex than just checking odd/even conditions alone. However, I hope my explanation helps! Let me know your thoughts :). Thank You!!
, MSc Student at University of California San Diego
I have to assume I'm doing something wrong (I've tried dozens of different settings and set the Temperature to 0.01, but I always get junk code -- sometimes it decides that it needs to ignore even or odd numbers, other times it just generates a pattern of "\*" ). | 2023-07-05T17:14:18 | https://www.reddit.com/r/LocalLLaMA/comments/14rgyty/what_am_i_doing_wrong_with_wizardcoder15bv10/ | GeeBee72 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rgyty | false | null | t3_14rgyty | /r/LocalLLaMA/comments/14rgyty/what_am_i_doing_wrong_with_wizardcoder15bv10/ | false | false | self | 4 | null |
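For reference, the task in the prompt has a very short correct answer, which makes the gap obvious; one straightforward version:

```python
# Print two columns: 1..10 in the first, 10..1 in the second.
for i in range(1, 11):
    print(f"{i:>3} {11 - i:>3}")
```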
Expanding LLM Context Window | 22 | Spent the day off yesterday diving into how Position Interpolation works and made a quick writeup. These are the main benefits:
* With only 1000 training steps, the context window for LLaMA models ranging in size from 7B to 65B was extended **from 2048 tokens to** **32768 tokens.**
* Model quality is preserved for tasks within its original context window sizes
* Models with Position Interpolation can take advantage of their greater context window - with competitive performance on text modeling and long text summarization
You can check out my full write-up (and subscribe for weekly easy-to-digest NLP updates) [here](https://shaankhosla.substack.com/p/yet-another-way-to-extend-the-context).
​ | 2023-07-05T17:37:05 | https://www.reddit.com/r/LocalLLaMA/comments/14rhmh7/expanding_llm_context_window/ | No-Butterscotch-6654 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rhmh7 | false | null | t3_14rhmh7 | /r/LocalLLaMA/comments/14rhmh7/expanding_llm_context_window/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': '1PHN92PLkTXF0w6hSsddwd0MiEoiT335FbY3YGQDzwk', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/hiSxnz13VpRhICMQat5lA6nbc5xWZgJGblM2hqQ5viY.jpg?width=108&crop=smart&auto=webp&s=941d3dba1d0b9bbcfff7277e0c176792002d435d', 'width': 108}, {'height': 114, 'url': 'https://external-preview.redd.it/hiSxnz13VpRhICMQat5lA6nbc5xWZgJGblM2hqQ5viY.jpg?width=216&crop=smart&auto=webp&s=0d51707508d6121c1c6b6df42129d9db22b81c2c', 'width': 216}, {'height': 170, 'url': 'https://external-preview.redd.it/hiSxnz13VpRhICMQat5lA6nbc5xWZgJGblM2hqQ5viY.jpg?width=320&crop=smart&auto=webp&s=7c9822006de3da9d3bf7167f83a356d58676486c', 'width': 320}, {'height': 340, 'url': 'https://external-preview.redd.it/hiSxnz13VpRhICMQat5lA6nbc5xWZgJGblM2hqQ5viY.jpg?width=640&crop=smart&auto=webp&s=ffd4c70f308f4ec85b9cb1bffa036340d35bad14', 'width': 640}, {'height': 510, 'url': 'https://external-preview.redd.it/hiSxnz13VpRhICMQat5lA6nbc5xWZgJGblM2hqQ5viY.jpg?width=960&crop=smart&auto=webp&s=037f019c0d20629e1c1488cc3a4ca4ffc70b9ac0', 'width': 960}], 'source': {'height': 545, 'url': 'https://external-preview.redd.it/hiSxnz13VpRhICMQat5lA6nbc5xWZgJGblM2hqQ5viY.jpg?auto=webp&s=e4973db36fbffbe86fa25818d9e5781daa102224', 'width': 1024}, 'variants': {}}]} |
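The core trick is small: instead of letting position indices run past the pretraining range, Position Interpolation rescales them by original_context / extended_context before the rotary embedding angles are computed. A minimal sketch of that idea (not the paper's code):

```python
import torch

def rope_angles(seq_len: int, dim: int, base: float = 10000.0, scale: float = 1.0):
    """RoPE angles; Position Interpolation just multiplies positions by `scale`."""
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    positions = torch.arange(seq_len).float() * scale   # the interpolation step
    return torch.outer(positions, inv_freq)             # shape: (seq_len, dim // 2)

angles_extrapolated = rope_angles(8192, 128)                    # positions 0..8191, unseen in training
angles_interpolated = rope_angles(8192, 128, scale=2048 / 8192)  # squeezed back into the 0..2047 range
```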
How Processing Prompt Works [BLAS]? | 1 | Explain why this Processing Prompt [BLAS] (1024 / 1204 tokens) occurs on every request. Because of this, the response time is about 400 seconds. How do I reduce it?
https://preview.redd.it/z6me60ydp6ab1.png?width=960&format=png&auto=webp&s=501a74548fcdf5ea92b18f172a6599bd29c06623 | 2023-07-05T17:49:49 | https://www.reddit.com/r/LocalLLaMA/comments/14rhyv2/how_processing_prompt_works_blas/ | roman1338sf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rhyv2 | false | null | t3_14rhyv2 | /r/LocalLLaMA/comments/14rhyv2/how_processing_prompt_works_blas/ | false | false | 1 | null |
SillyTavern 1.8 released! | 100 | 2023-07-05T18:08:19 | https://github.com/SillyTavern/SillyTavern/releases | WolframRavenwolf | github.com | 1970-01-01T00:00:00 | 0 | {} | 14riib1 | false | null | t3_14riib1 | /r/LocalLLaMA/comments/14riib1/sillytavern_18_released/ | false | false | 100 | {'enabled': False, 'images': [{'id': 'bDW7jyCB5L7RKBwRUqrzWSn3bIb_Szu_GogYRebiCjw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=108&crop=smart&auto=webp&s=22d2e1896c94ecebda58fed69478453d4b16fd4f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=216&crop=smart&auto=webp&s=019bd779b582098d4b9aa01b87ee530132195fa6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=320&crop=smart&auto=webp&s=55daeabbed00d9b3c1e7f3207edea4d0a265db39', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=640&crop=smart&auto=webp&s=47d7877d194270162d75f4922c4ecb60b17c101d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=960&crop=smart&auto=webp&s=004f5643d41eee63624b163efc53427073882f4f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=1080&crop=smart&auto=webp&s=e6ee7ad7840a9a71890c76db5e4df6a3f669e762', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?auto=webp&s=44d160d8b5087122f25fba2443dc2c5a77adf472', 'width': 1280}, 'variants': {}}]} |
||
HuggingChatAllInOne: Run HF and GPTQ models using HuggingChat UI easily | 6 | [HuggingChatAllInOne](https://github.com/bodaay/HuggingChatAllInOne)
Hey everyone,
The main goal of this repo is to make it super easy for anyone to get HuggingChat running as a UI and text-generation inference server, all in one Docker file. I've also included runpods templates to make launching these even easier.
The way this repo is built, it'll be no trouble keeping it in sync with any updates from the HuggingChat UI and Inference Repos. It's all about making things simple, right?
With just a single Docker file, anyone can take a test drive with LLM models and share the link with others for them to have a go at text-generation, all in their own individual sessions. It's like having your own personal test ground!
The HuggingChat UI is really neat, similar to the chatGPT one where each user has their own session and chat history.
For those interested, there are two runpods templates ready to roll - one for HF models and one for GPTQ. Just a heads up though, the GPTQ models support is exclusive to models built with the latest gptq-for-llama.
Check out the runpod templates in the GitHub repo linked here. Happy experimenting!
​
​
https://preview.redd.it/s7mnxi8uq6ab1.png?width=2453&format=png&auto=webp&s=dea0fb5b59c129e8e7f64bce23c347b64501214c
​ | 2023-07-05T18:16:47 | https://www.reddit.com/r/LocalLLaMA/comments/14riqo5/huggingchatallinone_run_hf_and_gptq_models_using/ | bodaaay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14riqo5 | false | null | t3_14riqo5 | /r/LocalLLaMA/comments/14riqo5/huggingchatallinone_run_hf_and_gptq_models_using/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'JpVrs0sCIibdEY5Co5OrLqyeeam7AaElRlena8NUxso', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zsUwsk7z94weoTmWAWT-oOlXDwqbNZa7DLFiIWINhvc.jpg?width=108&crop=smart&auto=webp&s=b372cbcc0a14432f8758f0ff5be2d8bbed44848b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zsUwsk7z94weoTmWAWT-oOlXDwqbNZa7DLFiIWINhvc.jpg?width=216&crop=smart&auto=webp&s=babc1ad332967099b30cb02a155cd943966778cb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zsUwsk7z94weoTmWAWT-oOlXDwqbNZa7DLFiIWINhvc.jpg?width=320&crop=smart&auto=webp&s=db16bbc134d7a7c242855537b485c88279670a74', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zsUwsk7z94weoTmWAWT-oOlXDwqbNZa7DLFiIWINhvc.jpg?width=640&crop=smart&auto=webp&s=89582ffc708c37cf9473926720c35039b98b9408', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zsUwsk7z94weoTmWAWT-oOlXDwqbNZa7DLFiIWINhvc.jpg?width=960&crop=smart&auto=webp&s=688c57e8025c7882943efb4e0fe3f802914386d5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zsUwsk7z94weoTmWAWT-oOlXDwqbNZa7DLFiIWINhvc.jpg?width=1080&crop=smart&auto=webp&s=b01c006e67aeddf86f995e4e805278da1ffe3b0d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zsUwsk7z94weoTmWAWT-oOlXDwqbNZa7DLFiIWINhvc.jpg?auto=webp&s=a7ea8aeb6e47063704264a172c3c9cfebc6611fe', 'width': 1200}, 'variants': {}}]} |
Is it safe to delete these files in my local LLM folder? What am I losing if I get rid of the GIT folder entirely? I'm just a user of this stuff, not a developer. | 0 | ​
https://preview.redd.it/n5eina1mx6ab1.png?width=851&format=png&auto=webp&s=515f4fe032344c8d18fa325c6204550e157a355a | 2023-07-05T18:35:52 | https://www.reddit.com/r/LocalLLaMA/comments/14rj9vv/is_it_safe_to_delete_these_files_in_my_local_llm/ | cleverestx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rj9vv | false | null | t3_14rj9vv | /r/LocalLLaMA/comments/14rj9vv/is_it_safe_to_delete_these_files_in_my_local_llm/ | false | false | 0 | null |
Training options for tasks unsupported by HF | 1 | Hugging face has a fairly limited number of tasks they support. Are there any other implementations of LoRA that people use where you can customize a bit more, changing final layer and loss function. | 2023-07-05T18:46:29 | https://www.reddit.com/r/LocalLLaMA/comments/14rjkim/training_options_for_tasks_unsupported_by_hf/ | Artistic_Load909 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rjkim | false | null | t3_14rjkim | /r/LocalLLaMA/comments/14rjkim/training_options_for_tasks_unsupported_by_hf/ | false | false | self | 1 | null |
How to create a LoRA from one book or group of thematic books | 3 | Hi! I am a "half-newbie" with LLMs: I know how to use them (power user), install them and solve basic problems, and I have basic knowledge of programming. I know how to use another type of AI too, Stable Diffusion, which I have been studying for over seven months. I found Oobabooga, KoboldAI and other programs and I thought: is it possible to train the AI to understand RPG books?
I know how to convert a PDF book to text (DOCX in a topic structure), but I don't know how to convert this text into a format that LLaMA can understand more easily, nor how to organize the text for LoRA training. I know that it is possible to train with unstructured text; however, this reduces the chance of good results. And I don't know what tools to use for that either.
Can you help me with this?
Thanks very much! | 2023-07-05T19:09:13 | https://www.reddit.com/r/LocalLLaMA/comments/14rk7g9/how_to_create_a_lora_from_one_book_or_group_of/ | Shadow_One111 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rk7g9 | false | null | t3_14rk7g9 | /r/LocalLLaMA/comments/14rk7g9/how_to_create_a_lora_from_one_book_or_group_of/ | false | false | self | 3 | null |
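A minimal sketch of the usual preprocessing step: split the extracted text into overlapping chunks and write them as JSONL, which most LoRA training scripts can ingest. The `"text"` field name is an assumption; match whatever keys your trainer expects.

```python
import json

def book_to_jsonl(txt_path: str, out_path: str, chunk_chars: int = 2000, overlap: int = 200):
    """Split plain text (already extracted from the PDF) into overlapping chunks."""
    text = open(txt_path, encoding="utf-8").read()
    with open(out_path, "w", encoding="utf-8") as out:
        start = 0
        while start < len(text):
            out.write(json.dumps({"text": text[start:start + chunk_chars]},  # field name is an assumption
                                 ensure_ascii=False) + "\n")
            start += chunk_chars - overlap
```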
Issues using P40s | 5 | Hi, I'm trying to get 2x P40s working on an Asus Z270-P. I've got 4G decoding on and tried setting everything to Gen 2. I removed all the drives and M.2s, but it still won't POST. If I remove either of the 2 cards it POSTs just fine and boots. But with 2 it refuses to do anything.
It doesn't matter which slot I use either. Any ideas?
I've read online that I need Resizable BAR support, but I've been running one of them without problems; it really is only when I try to run 2 at the same time. Is this related to Resizable BAR support?
I've also tried with a GPU riser, and then it boots up but the card isn't visible.
Anyone have experience getting 2 P40s working in the same machine? What are the system specs?
edit: I switched to a 11th gen mobo / cpu with resizable bar support and it just flat out worked. | 2023-07-05T19:25:28 | https://www.reddit.com/r/LocalLLaMA/comments/14rkncu/issues_using_p40s/ | ISwearImNotAnAI | self.LocalLLaMA | 2023-07-13T13:12:41 | 0 | {} | 14rkncu | false | null | t3_14rkncu | /r/LocalLLaMA/comments/14rkncu/issues_using_p40s/ | false | false | self | 5 | null |
Building a Professional QA Chatbot using 4xV100s | 3 | I'm seeking guidance from the community regarding the development of a professional QA chatbot. Due to the nature of my project, I am unable to use ChatGPT and instead require a private language model. I currently have 4xV100s at my disposal and I'm aiming to strike a balance between the speed of inferencing and the quality of the answers generated in an initial pilot phase.
My primary goal is to gather feedback from users to fine-tune the model in the long run. Therefore, at this stage, I'm inclined to prioritize the speed of inferencing. However, I am aware that the performance of 4xV100s might not be sufficient to achieve the desired speed.
I have considered utilizing the Falcon 7b/40b, but I'm open to any suggestions or insights that the community might have regarding alternative approaches/models. My ultimate aim is to build a highly efficient QA chatbot that delivers accurate and prompt responses.
I greatly appreciate any advice or recommendations you can provide. Thank you in advance for your support! | 2023-07-05T19:58:56 | https://www.reddit.com/r/LocalLLaMA/comments/14rljdd/building_a_professional_qa_chatbot_using_4xv100s/ | vinwizard | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rljdd | false | null | t3_14rljdd | /r/LocalLLaMA/comments/14rljdd/building_a_professional_qa_chatbot_using_4xv100s/ | false | false | self | 3 | null |
Triton vs Llama.cpp? | 2 | Can anyone explain the difference to me? It seems, based on my very small understanding, that both options allow for a way to run inference on just a CPU? Are there any pros/cons?
For background, I'm looking into the best way to run various models like falcon or MPT in a datacenter cluster where we have "buckets of RAM" but no GPU cards. We're trying to see what we can get away with running to serve a global user base. | 2023-07-05T20:10:24 | https://www.reddit.com/r/LocalLLaMA/comments/14rluww/triton_vs_llamacpp/ | SigmaSixShooter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rluww | false | null | t3_14rluww | /r/LocalLLaMA/comments/14rluww/triton_vs_llamacpp/ | false | false | self | 2 | null |
Considering migrating from Ubuntu 22.04 to Debian 12 | 1 | Hello,
I've been contemplating a switch from Ubuntu 22.04 to Debian 12 and would love to hear about your experiences, especially regarding running inferencing on GGML with full functionality, including GPU acceleration (CUDA).
Recently, I gave Fedora 38 a try, but unfortunately, I encountered compatibility issues with the GGML and GPU acceleration. It seemed to be related to compiler versions since Fedora utilizes a more recent GCC version. I didn't have the opportunity to investigate this thoroughly, so I reverted back to Ubuntu.
Now, I'm curious to know if anyone has installed Debian 12 and successfully achieved smooth inferencing with all the features of llama.cpp, including GPU acceleration (CUDA). It would be great to hear about your setup, any challenges faced, and how you managed to overcome them.
Thanks! | 2023-07-05T20:31:11 | https://www.reddit.com/r/LocalLLaMA/comments/14rmfcm/considering_migrating_from_ubuntu_2204_to_debian/ | brunomoreirab | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rmfcm | false | null | t3_14rmfcm | /r/LocalLLaMA/comments/14rmfcm/considering_migrating_from_ubuntu_2204_to_debian/ | false | false | self | 1 | null |
Local LLaMa on Android phone | 70 | Wow! I just tried the 'server' that's available in llama.cpp on my Android phone, and it's VERY user friendly. I can keep running this on the go for private chats (for things I can't use ChatGPT for :)
Steps:
1. Install termux from f-droid
2. Install gcc from its-pointless repo
3. compile llama.cpp using the following steps
```
cmake .
make
./bin/server -m your-ggml-model.bin
```
| 2023-07-05T21:04:50 | https://v.redd.it/vizurn39o7ab1 | AstrionX | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14rncnb | false | {'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/vizurn39o7ab1/DASHPlaylist.mpd?a=1692045165%2CZDVmYWY3YzU0YjIwYmMzYWJmMTBhZTUwNGNhMTEwNGZkNTdiMmFhYzMzMDU3OTY5NmJjNTMxYTRhM2E1NjU0Yg%3D%3D&v=1&f=sd', 'duration': 44, 'fallback_url': 'https://v.redd.it/vizurn39o7ab1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/vizurn39o7ab1/HLSPlaylist.m3u8?a=1692045165%2COTczYTc3NmVmM2RmOWE5OGM1ZDBkMzJhYjcwNDYzOTdlMTBmNDg4NDcyOTI0NGRjNjZhMjczNThhZTk0NWI1Yg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/vizurn39o7ab1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 484}} | t3_14rncnb | /r/LocalLLaMA/comments/14rncnb/local_llama_on_android_phone/ | false | false | 70 | {'enabled': False, 'images': [{'id': 'aXM0Y25iMTlvN2FiMV3eHZlopqjlHOBu_t4lZJR4GWPFEVeS8als-ruFyBYQ', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/aXM0Y25iMTlvN2FiMV3eHZlopqjlHOBu_t4lZJR4GWPFEVeS8als-ruFyBYQ.png?width=108&crop=smart&format=pjpg&auto=webp&s=3960d6409a2008b461e47572959266c5a4dd0edf', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/aXM0Y25iMTlvN2FiMV3eHZlopqjlHOBu_t4lZJR4GWPFEVeS8als-ruFyBYQ.png?width=216&crop=smart&format=pjpg&auto=webp&s=f1ae9d081e64fa58a915d8fd0c079aacca32ad04', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/aXM0Y25iMTlvN2FiMV3eHZlopqjlHOBu_t4lZJR4GWPFEVeS8als-ruFyBYQ.png?width=320&crop=smart&format=pjpg&auto=webp&s=b38b016d6223b863acb475c2836935f0e937083f', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/aXM0Y25iMTlvN2FiMV3eHZlopqjlHOBu_t4lZJR4GWPFEVeS8als-ruFyBYQ.png?width=640&crop=smart&format=pjpg&auto=webp&s=8372036afb13175ac587587780ca4042b8c7b725', 'width': 640}, {'height': 1920, 'url': 'https://external-preview.redd.it/aXM0Y25iMTlvN2FiMV3eHZlopqjlHOBu_t4lZJR4GWPFEVeS8als-ruFyBYQ.png?width=960&crop=smart&format=pjpg&auto=webp&s=495200297b7b07ceeb8494355324705c6d7ad9ab', 'width': 960}], 'source': {'height': 2388, 'url': 'https://external-preview.redd.it/aXM0Y25iMTlvN2FiMV3eHZlopqjlHOBu_t4lZJR4GWPFEVeS8als-ruFyBYQ.png?format=pjpg&auto=webp&s=7845150eabe8538abb580cd00c62cbf14792a4cc', 'width': 1069}, 'variants': {}}]} |
Fine tuning Falcon 7B Instruct | 5 | I am trying to fine tune Falcon-7b-Instruct:
Modules: "query_key_value", "dense", "dense_h_to_4h" and "dense_4h_to_h", using peft-LoRA with ranks of 64, 32 and 16 tried.
Dataset: an 80-page FAQ dataset (structured as question-answer pairs, 280 in total)
I'm observing the following:
1. The model keeps rambling while answering a question
2. Even with low training error, the answers to most of the questions are incorrect
Any suggestions please on what I can do to improve the model's performance? | 2023-07-05T21:08:14 | https://www.reddit.com/r/LocalLLaMA/comments/14rnfzs/fine_tuning_falcon_7b_instruct/ | anindya_42 | self.LocalLLaMA | 2023-07-06T06:30:45 | 0 | {} | 14rnfzs | false | null | t3_14rnfzs | /r/LocalLLaMA/comments/14rnfzs/fine_tuning_falcon_7b_instruct/ | false | false | self | 5 | null |
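For comparison, a minimal sketch of the PEFT setup the post describes (the target modules and rank are taken from the post; everything else is a plausible default rather than a recommendation). Rambling answers are often a prompt-template or EOS-token issue rather than a LoRA issue, so make sure each training example ends with the tokenizer's EOS token so the model learns where to stop.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b-instruct", trust_remote_code=True, device_map="auto")

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM",
    target_modules=["query_key_value", "dense", "dense_h_to_4h", "dense_4h_to_h"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # sanity check: only adapter weights should be trainable
```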
Intel AI built into CPUs, is it at all useful for text or image generation? What is it? | 31 | Just noticed my CPU (12th gen) has what they call "Integrated AI" and even has "Intel® Gaussian & Neural Accelerator (Intel® GNA)" listed in the device manager.
See more here:
[https://www.intel.com/content/www/us/en/products/docs/processors/core/intelligent-pc-overview.html](https://www.intel.com/content/www/us/en/products/docs/processors/core/intelligent-pc-overview.html)
This stuff is way above my head but I'm dabbling in text and image generation and this sounds like it should be useful, if not, what IS it useful for? | 2023-07-05T21:56:39 | https://www.reddit.com/r/LocalLLaMA/comments/14roq0p/intel_ai_built_into_cpus_is_it_at_all_useful_for/ | SGAShepp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14roq0p | false | null | t3_14roq0p | /r/LocalLLaMA/comments/14roq0p/intel_ai_built_into_cpus_is_it_at_all_useful_for/ | false | false | self | 31 | {'enabled': False, 'images': [{'id': '7wlnlsb1p8kqETD6MDhVPbujRAXC14mk8BJ3S2O8fCg', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/Hlbf52-zMgk3lZiQPiNlVwXxuQqfSuHNMRFDsvyExVQ.jpg?width=108&crop=smart&auto=webp&s=6a95ee4184562ea6e68359542a9ba4fac4f67356', 'width': 108}, {'height': 142, 'url': 'https://external-preview.redd.it/Hlbf52-zMgk3lZiQPiNlVwXxuQqfSuHNMRFDsvyExVQ.jpg?width=216&crop=smart&auto=webp&s=db96825d26a40d3ff36bd9249696f6589a6c12f4', 'width': 216}, {'height': 211, 'url': 'https://external-preview.redd.it/Hlbf52-zMgk3lZiQPiNlVwXxuQqfSuHNMRFDsvyExVQ.jpg?width=320&crop=smart&auto=webp&s=09df17da18177e610ef5f2a653e8af6a754ba14d', 'width': 320}], 'source': {'height': 387, 'url': 'https://external-preview.redd.it/Hlbf52-zMgk3lZiQPiNlVwXxuQqfSuHNMRFDsvyExVQ.jpg?auto=webp&s=0402e358a1b5b71d7e7e3840908c20761a156712', 'width': 586}, 'variants': {}}]} |
Why does gpt 4 give me failed to load model? | 0 | C:\Users\Laptop\Downloads\Aishit\LLama>main -i --interactive-first -r "### Human:" --temp 0 -c 2048 -n -1 --ignore-eos --repeat_penalty 1.2 --instruct -m ggml-model-q4_1.bin
main: build = 794 (ec326d3)
main: seed = 1688595156
llama.cpp: loading model from ggml-model-q4_1.bin
error loading model: unexpectedly reached end of file
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'ggml-model-q4_1.bin'
main: error: unable to load model | 2023-07-05T22:14:53 | https://www.reddit.com/r/LocalLLaMA/comments/14rp7d1/why_does_gpt_4_give_me_failed_to_load_model/ | Puzzleheaded-Nose-94 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rp7d1 | false | null | t3_14rp7d1 | /r/LocalLLaMA/comments/14rp7d1/why_does_gpt_4_give_me_failed_to_load_model/ | false | false | default | 0 | null |
A small llama.cpp server playground | 22 | ​
https://preview.redd.it/hl0tzxk008ab1.png?width=1337&format=png&auto=webp&s=5f37d2928ba12c9f0e9efc7013bc35e6728e8906
Hello people, I hope you are well.
I want to share a small frontend I have been working on, made with Vue; it is very simple and still under development due to the nature of server.cpp. To run it you need the server.cpp executable, then run the frontend, which will connect to it and perform the inference. It supports key combinations, smooth text generation, parameterization, repeat, undo and stop.
You can run the project by cloning it and following the instructions, or use an executable that I made for Windows that includes a mini server I wrote in C to serve the compiled frontend (all in 70kb).
Link of the llama.cpp server: [https://github.com/ggerganov/llama.cpp/tree/master/examples/server](https://github.com/ggerganov/llama.cpp/tree/master/examples/server)
Link to the playground frontend: [https://github.com/hwpoison/llama-server-playground/tree/main/frontend](https://github.com/hwpoison/llama-server-playground/tree/main/frontend) (you can get the Windows release from here).
Some pics:
​
https://preview.redd.it/zutazmp528ab1.png?width=509&format=png&auto=webp&s=d876de73686dd5d4804ff570ba0ea87856d3c671
[contains popover with help content](https://preview.redd.it/yh7zqpin18ab1.png?width=1346&format=png&auto=webp&s=cc51a3549dbf2028d668f23b983dd0276e6f98af)
[The fronted and the llama.cpp server running.](https://preview.redd.it/dg5uuse408ab1.png?width=1366&format=png&auto=webp&s=de21cb547ebaa4c273eef35a4d868e541af9c6e4)
​
I am not an expert in this topic, but I am trying to improve it as they improve the server (it is in constant development). My PC is not very good and I use it to experiment without spending a lot of resources.
Greetings!
​ | 2023-07-05T22:20:17 | https://www.reddit.com/r/LocalLLaMA/comments/14rpce2/a_small_llamacpp_server_playground/ | hwpoison | self.LocalLLaMA | 2023-07-05T23:51:17 | 0 | {} | 14rpce2 | false | null | t3_14rpce2 | /r/LocalLLaMA/comments/14rpce2/a_small_llamacpp_server_playground/ | false | false | 22 | {'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=108&crop=smart&auto=webp&s=b6caea286bbf31bdb473212eb5668f45376977be', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=216&crop=smart&auto=webp&s=ba8933d74dda3c391a7c9a355d2e1cd0054d1c21', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=320&crop=smart&auto=webp&s=93b690f58b739ff61da7a147fc67d6c8842b3a7d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=640&crop=smart&auto=webp&s=a55f55983fcc0b3f5a6d4e0b51f627e1b40ef9d4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=960&crop=smart&auto=webp&s=e56b77b835b76c51a1e12a410b9e908f0255d397', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=1080&crop=smart&auto=webp&s=d06ca9eb5611d109d3ef7935f6de61545e9828da', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?auto=webp&s=0b2a006e16468374b78dd67390927053776e6137', 'width': 1280}, 'variants': {}}]} |
Haven: Easily run LLMs on inside your own cloud | 1 | [removed] | 2023-07-05T23:06:49 | https://www.reddit.com/r/LocalLLaMA/comments/14rqkhh/haven_easily_run_llms_on_inside_your_own_cloud/ | h-konsti | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rqkhh | false | null | t3_14rqkhh | /r/LocalLLaMA/comments/14rqkhh/haven_easily_run_llms_on_inside_your_own_cloud/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'VmEI9xR7ThLUCoCQt_1r0IkPqbNd7G_2woLRY0qHRlA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=108&crop=smart&auto=webp&v=enabled&s=1fb8b582a80b4a6c8a94a1c876245c7d3359a1db', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=216&crop=smart&auto=webp&v=enabled&s=6d8c05f8ed4f3761fd05017d86c1b755e105bf5e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=320&crop=smart&auto=webp&v=enabled&s=236339b07857cd7f2351c4e4fb97863a9a2f02bd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=640&crop=smart&auto=webp&v=enabled&s=10e290304bd12d3ca98b21e0997e2d65c48cabf1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=960&crop=smart&auto=webp&v=enabled&s=5cd36f09ebaa1fa371fe0a716c8d525b1222bf80', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=4869fb48f22d828fb38f4bc7704f2f66849129ff', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?auto=webp&v=enabled&s=f21ec63ed41afd9834ab567a7382d5275db543f5', 'width': 1200}, 'variants': {}}]} |
A tool for fast inference running on your own cloud | 1 | Hey all! I’ve been a long time lurker on the subreddit and wanted to share something that me and a friend built. We wanted to create apps on top of open source LLMs and struggled to set them up in our cloud environment efficiently. We realized that the tool we were building for this in itself would probably be pretty useful for the community so we decided to open-source it.
It runs entirely on your own infrastructure. You connect your google cloud to it and you can then spin up models with just one line of python.
Currently we support a few of the major open source models. Adding fine-tuned versions of already existing model architectures from Huggingface is pretty straightforward and we're going to add more architectures too. Right now it runs on Google Cloud but we're going to add AWS as soon as we can.
I’m happy to help anyone set this up on their own cloud account. I’d love to hear your feedback as we spend a lot of time on this.
Fine-tuning is also on the way, some of the code is already there if you want to take it apart yourself.
This is our repo: [https://github.com/havenhq/haven](https://github.com/havenhq/haven)
This is how to set it up: [https://docs.haven.run](https://docs.haven.run) | 2023-07-05T23:20:19 | https://www.reddit.com/r/LocalLLaMA/comments/14rqwk1/a_tool_for_fast_inference_running_on_your_own/ | torque-mcclyde | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rqwk1 | false | null | t3_14rqwk1 | /r/LocalLLaMA/comments/14rqwk1/a_tool_for_fast_inference_running_on_your_own/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'VmEI9xR7ThLUCoCQt_1r0IkPqbNd7G_2woLRY0qHRlA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=108&crop=smart&auto=webp&v=enabled&s=1fb8b582a80b4a6c8a94a1c876245c7d3359a1db', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=216&crop=smart&auto=webp&v=enabled&s=6d8c05f8ed4f3761fd05017d86c1b755e105bf5e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=320&crop=smart&auto=webp&v=enabled&s=236339b07857cd7f2351c4e4fb97863a9a2f02bd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=640&crop=smart&auto=webp&v=enabled&s=10e290304bd12d3ca98b21e0997e2d65c48cabf1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=960&crop=smart&auto=webp&v=enabled&s=5cd36f09ebaa1fa371fe0a716c8d525b1222bf80', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=4869fb48f22d828fb38f4bc7704f2f66849129ff', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?auto=webp&v=enabled&s=f21ec63ed41afd9834ab567a7382d5275db543f5', 'width': 1200}, 'variants': {}}]} |
Tool for deploying open source LLMs on your own cloud | 86 | Hey all! I’ve been a long time lurker on the subreddit and wanted to share something that me and a friend built. We wanted to create apps on top of open source LLMs and struggled to set them up in our cloud environment efficiently. We realized that the tool we were building for this in itself would probably be pretty useful for the community so we decided to open-source it.
It runs entirely on your own infrastructure. You connect your google cloud to it and you can then spin up models with just one line of python.
Currently we support a few of the major open source models. Adding fine-tuned versions of already existing model architectures from Huggingface is pretty straightforward and we're going to add more architectures too. Right now it runs on Google Cloud but we're going to add AWS as soon as we can.
I’m happy to help anyone set this up on their own cloud account. I’d love to hear your feedback as we spend a lot of time on this.
Fine-tuning is also on the way, some of the code is already there if you want to take it apart yourself.
This is our repo: [https://github.com/havenhq/haven](https://github.com/havenhq/haven)
This is how to set it up: [https://docs.haven.run](https://docs.haven.run) | 2023-07-05T23:22:48 | https://www.reddit.com/r/LocalLLaMA/comments/14rqyx8/tool_for_deploying_open_source_llms_on_your_own/ | torque-mcclyde | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rqyx8 | false | null | t3_14rqyx8 | /r/LocalLLaMA/comments/14rqyx8/tool_for_deploying_open_source_llms_on_your_own/ | false | false | self | 86 | {'enabled': False, 'images': [{'id': 'VmEI9xR7ThLUCoCQt_1r0IkPqbNd7G_2woLRY0qHRlA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=108&crop=smart&auto=webp&s=b56200d611e5cb9e2c61a353771462fd0299a9f1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=216&crop=smart&auto=webp&s=e1a9497cbc75fcb9ad19f411e3bcd3b4b7354be7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=320&crop=smart&auto=webp&s=b3137b3e187c1208465b7778b047599c9bc986d3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=640&crop=smart&auto=webp&s=bb79408d06ecb2a679a891d45351a9f485c1109a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=960&crop=smart&auto=webp&s=8df1cab511cad798b4a2d5aa1e3f2e70fdc36ce7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=1080&crop=smart&auto=webp&s=8d8e12999c74c6a0d0b50319f37987dade58a62f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?auto=webp&s=4a755b54442bfb1f52259b8164a4f66c4524c4e0', 'width': 1200}, 'variants': {}}]} |
Best option between 2x3060 or 1x3090. | 4 | The total CUDA core count comes out about the same, and so does the memory.
Now that most AI programs can use multiple GPUs, I'm not sure which option wins.
I've got a 3060 12gb and got a little play money (400$ USD).
My options are: sell my 3060 and buy a 3090 with the money from the card sale plus the play money.
Or:
Get another 3060 12GB because I have 2 PCIe ports, plus add 32GB of RAM to have a total of 64GB.
Both require a new PSU so that's equal.
Give me your opinions please | 2023-07-06T00:06:09 | https://www.reddit.com/r/LocalLLaMA/comments/14rs0ko/best_option_between_2x3060_or_1x3090/ | oodelay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rs0ko | false | null | t3_14rs0ko | /r/LocalLLaMA/comments/14rs0ko/best_option_between_2x3060_or_1x3090/ | false | false | self | 4 | null |
A Comparison of Large Language Models (LLMs) in Biomedical Domain | 60 | 2023-07-06T01:11:08 | https://provectus.com/blog/comparison-large-language-models-biomedical-domain/ | DarronFeldstein | provectus.com | 1970-01-01T00:00:00 | 0 | {} | 14rtg35 | false | null | t3_14rtg35 | /r/LocalLLaMA/comments/14rtg35/a_comparison_of_large_language_models_llms_in/ | false | false | 60 | {'enabled': False, 'images': [{'id': 'BH4nT_ARjSTG5EiwLpm_dYy4TBePmD2SqCfFdon59d4', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/bWWW95gY_HusS7i11uG-DqBhSeW5TXDtxSooDr1p5uk.jpg?width=108&crop=smart&auto=webp&s=fd190d954899750bd6dc1628562f2551bb41f152', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/bWWW95gY_HusS7i11uG-DqBhSeW5TXDtxSooDr1p5uk.jpg?width=216&crop=smart&auto=webp&s=9d5b635a8f38bc96de9e12dce12330897fb4445a', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/bWWW95gY_HusS7i11uG-DqBhSeW5TXDtxSooDr1p5uk.jpg?width=320&crop=smart&auto=webp&s=9f4b56c18a3473e9b354913492c250bac4e09c83', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/bWWW95gY_HusS7i11uG-DqBhSeW5TXDtxSooDr1p5uk.jpg?width=640&crop=smart&auto=webp&s=feb06431ae472e65776457c9a11c17d164131021', 'width': 640}, {'height': 639, 'url': 'https://external-preview.redd.it/bWWW95gY_HusS7i11uG-DqBhSeW5TXDtxSooDr1p5uk.jpg?width=960&crop=smart&auto=webp&s=770c7007759ef73c4e5010d2eed002044537911c', 'width': 960}, {'height': 719, 'url': 'https://external-preview.redd.it/bWWW95gY_HusS7i11uG-DqBhSeW5TXDtxSooDr1p5uk.jpg?width=1080&crop=smart&auto=webp&s=28d9099e1e0019d107b7aba1a7c060a14b7c1730', 'width': 1080}], 'source': {'height': 933, 'url': 'https://external-preview.redd.it/bWWW95gY_HusS7i11uG-DqBhSeW5TXDtxSooDr1p5uk.jpg?auto=webp&s=efd7e31ce7c8310e4cba24b0c902353530e6f6cd', 'width': 1400}, 'variants': {}}]} |
||
Can we train for lora with SuperHOT model | 3 | Hi guys,
I'm happy with the performance of the SuperHOT version of the Vicuna model (I use TheBloke/Vicuna-33B-1-3-SuperHOT-8K-fp16, and The Bloke is a hero!), but I can't seem to find a way to train it: textgen-webui shows "LoRA training has only currently been validated for LLaMA, OPT, GPT-J, and GPT-NeoX models. (Found model type: ExllamaModel)". Does it now support training and generating LoRA files?
​ | 2023-07-06T03:41:57 | https://www.reddit.com/r/LocalLLaMA/comments/14rwue7/can_we_train_for_lora_with_superhot_model/ | baibaideyun | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rwue7 | false | null | t3_14rwue7 | /r/LocalLLaMA/comments/14rwue7/can_we_train_for_lora_with_superhot_model/ | false | false | self | 3 | null |
What's the most sophisticated offline chat LLM available now for either a Macbook M2 Max or a 3070ti GPU desktop I can run? | 10 | Please, I am begging you, help a fool out | 2023-07-06T04:19:33 | https://www.reddit.com/r/LocalLLaMA/comments/14rxnq7/whats_the_most_sophisticated_offline_chat_llm/ | Careful_Tower_5984 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rxnq7 | false | null | t3_14rxnq7 | /r/LocalLLaMA/comments/14rxnq7/whats_the_most_sophisticated_offline_chat_llm/ | false | false | self | 10 | null |
LongNet: Scaling Transformers to 1,000,000,000 Tokens | 126 | [https://arxiv.org/pdf/2307.02486.pdf](https://arxiv.org/pdf/2307.02486.pdf) Yeeehaww | 2023-07-06T05:04:28 | https://www.reddit.com/r/LocalLLaMA/comments/14rym30/longnet_scaling_transformers_to_1000000000_tokens/ | iuwuwwuwuuwwjueej | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rym30 | false | null | t3_14rym30 | /r/LocalLLaMA/comments/14rym30/longnet_scaling_transformers_to_1000000000_tokens/ | false | false | self | 126 | null |
Reproducing LLAMA or doing LoRa on large corpus of pirated books | 18 | I was wondering whether it's feasible to reproduce LLaMA on a new large corpus of pirated books. I did LoRA experiments and it seems unlikely that it would scale well beyond hundreds of books. If so, how much compute would I need to reproduce a 7B model from scratch? | 2023-07-06T05:30:51 | https://www.reddit.com/r/LocalLLaMA/comments/14rz5p1/reproducing_llama_or_doing_lora_on_large_corpus/ | Itchy_Monk2686 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rz5p1 | false | null | t3_14rz5p1 | /r/LocalLLaMA/comments/14rz5p1/reproducing_llama_or_doing_lora_on_large_corpus/ | false | false | self | 18 | null |
after a bit of tweaking, i got gpt3clippy working via oobabooga | 3 | "just follow the errors until its fixed"
[using the openai extensions ofc](https://preview.redd.it/kiqaryf98aab1.png?width=1915&format=png&auto=webp&s=01a3ebdc9732d639082273d1b1503feeb8610399) | 2023-07-06T05:41:25 | https://www.reddit.com/r/LocalLLaMA/comments/14rzd7r/after_a_bit_of_tweaking_i_got_gpt3clippy_working/ | happydadinau | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rzd7r | false | null | t3_14rzd7r | /r/LocalLLaMA/comments/14rzd7r/after_a_bit_of_tweaking_i_got_gpt3clippy_working/ | false | false | 3 | null |
|
SillyTavern use-cases? | 11 | I've seen a few people mention they are using SillyTavern and not Ooba.
Curious what use-cases you're using SillyTavern for that are not available in Ooba?
Using other tokenizers? | 1 | Hi,
​
I'm trying to apply llama in understanding Korean text. However, the original tokenizer for llama seems to greatly over-estimate the number of tokens.
With the same input text, llama tokenizer would give 5\~6 times more tokens than KoBERT tokenizer.
I assume this is because llama was not built with Korean in mind.
I tried swapping only the tokenizer, but that gives me gibberish output.
What can I do? Are tokenizers something that should be trained with the model?
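For reference, this is roughly how I'm measuring the difference; just a sketch, and the model paths are placeholders for whatever you have locally:

    from transformers import AutoTokenizer, LlamaTokenizer

    text = "..."  # some Korean paragraph

    llama_tok = LlamaTokenizer.from_pretrained("path/to/llama")
    kobert_tok = AutoTokenizer.from_pretrained("path/to/kobert")

    print(len(llama_tok.encode(text)))   # comes out roughly 5-6x larger
    print(len(kobert_tok.encode(text)))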
​
Thanks in advance for anyone that comments. | 2023-07-06T06:16:08 | https://www.reddit.com/r/LocalLLaMA/comments/14s038q/using_other_tokenizers/ | manjimin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14s038q | false | null | t3_14s038q | /r/LocalLLaMA/comments/14s038q/using_other_tokenizers/ | false | false | self | 1 | null |
New base model InternLM 7B weights released, with 8k context window. | 49 | 2023-07-06T07:39:08 | https://github.com/InternLM/InternLM | logicchains | github.com | 1970-01-01T00:00:00 | 0 | {} | 14s1p96 | false | null | t3_14s1p96 | /r/LocalLLaMA/comments/14s1p96/new_base_model_internlm_7b_weights_released_with/ | false | false | 49 | {'enabled': False, 'images': [{'id': '2tVUfp5lAxozL-49NKZd2aSVbQLlqCzZXLzttctaQB8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-L_o9_ksbyTG8oYmftdYYRNi0-70J1OYYk4hPlLwJzQ.jpg?width=108&crop=smart&auto=webp&s=4749d28246824b1dd6e283cb57232c25cc4aeac2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-L_o9_ksbyTG8oYmftdYYRNi0-70J1OYYk4hPlLwJzQ.jpg?width=216&crop=smart&auto=webp&s=bff4a8531b2a1445903179e79d99f95539f423e1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-L_o9_ksbyTG8oYmftdYYRNi0-70J1OYYk4hPlLwJzQ.jpg?width=320&crop=smart&auto=webp&s=4f4f20a8a249adb1f39eb4c1cabbe7973b610af7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-L_o9_ksbyTG8oYmftdYYRNi0-70J1OYYk4hPlLwJzQ.jpg?width=640&crop=smart&auto=webp&s=8b1847fc1aa59fefb5f8bfd7af582780d4d99b3b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-L_o9_ksbyTG8oYmftdYYRNi0-70J1OYYk4hPlLwJzQ.jpg?width=960&crop=smart&auto=webp&s=d2f2877cbe6a4096d662a4b7c729488e402ee2be', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-L_o9_ksbyTG8oYmftdYYRNi0-70J1OYYk4hPlLwJzQ.jpg?width=1080&crop=smart&auto=webp&s=de5b68835dd3b2b374d130393c5ae2e0e53489a1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-L_o9_ksbyTG8oYmftdYYRNi0-70J1OYYk4hPlLwJzQ.jpg?auto=webp&s=55fa4084d3aa987e49186687bf7b200201ca2ab7', 'width': 1200}, 'variants': {}}]} |
||
Free LLM api | 1 | What are some free LLM APIs that are alternatives to the OpenAI API? | 2023-07-06T10:01:14 | https://www.reddit.com/r/LocalLLaMA/comments/14s4hou/free_llm_api/ | Exciting-Purpose-711 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14s4hou | false | null | t3_14s4hou | /r/LocalLLaMA/comments/14s4hou/free_llm_api/ | false | false | self | 1 | null |
num_beams > 1 breaking my model (Open-LLaMA7b - Alpaca-finetuned) | 3 | As the title states, I'm experiencing some weird results when using num\_beams > 1. The first 150-ish tokens are great, much better than if I use num\_beams == 1. However, after that point, it starts outputting gibberish. I've tried tweaking max\_new\_tokens and max\_length to see if the issue lies there. The model is capable of generating quite long answers without beam search, so I'm a bit lost. Do you have any ideas on how to further debug this issue? Or has anyone else encountered it?
The model is fine-tuned on a translated Alpaca dataset with QLoRA.
​
    import torch
    from transformers import LlamaTokenizer, LlamaForCausalLM
    from peft import PeftModel

    # MODEL, CHECKPOINT_DIR, TEMPLATE_JSON_PATH and Prompter come from the rest of my script
    class Chat:
        def __init__(self):
            self.tokenizer = LlamaTokenizer.from_pretrained(MODEL)
            self.model = LlamaForCausalLM.from_pretrained(
                MODEL, torch_dtype=torch.float32, device_map='auto', load_in_4bit=True,
            )
            # QLoRA adapter loaded on top of the 4-bit base model
            self.model = PeftModel.from_pretrained(self.model, CHECKPOINT_DIR, torch_dtype=torch.float16)
            self.prompter = Prompter(TEMPLATE_JSON_PATH)

        def generate(self, prompt, max_length=250, temperature=0.9, top_k=50, top_p=0.9,
                     repetition_penalty=1.0, num_beams=1):
            # prompt = self.prompter.generate_prompt(prompt, prompt1)
            input_ids = self.tokenizer.encode(prompt, return_tensors='pt')
            input_ids = input_ids.to(self.model.device)
            output = self.model.generate(
                input_ids=input_ids,
                # max_length=max_length,
                max_new_tokens=512,
                temperature=temperature,
                top_k=top_k,
                top_p=top_p,
                repetition_penalty=repetition_penalty,
                do_sample=True,
                num_return_sequences=1,
                num_beams=num_beams,
                remove_invalid_values=True,
            )
            output_text = self.tokenizer.decode(output[0], skip_special_tokens=True)
            output_text = output_text[len(prompt):]  # remove the prompt from the output
            return output_text

    chat = Chat()
# Breaks
chat.generate(prompt,
temperature=0.9,
top_k=80,
top_p=0.7,
repetition_penalty=1.3,
num_beams=2)
​
# Works
chat.generate(prompt,
temperature=0.9,
top_k=85,
top_p=0.7,
repetition_penalty=1.3)
​ | 2023-07-06T12:05:09 | https://www.reddit.com/r/LocalLLaMA/comments/14s77av/num_beams_1_breaking_my_model_openllama7b/ | BuzzLightr | self.LocalLLaMA | 2023-07-06T12:11:33 | 0 | {} | 14s77av | false | null | t3_14s77av | /r/LocalLLaMA/comments/14s77av/num_beams_1_breaking_my_model_openllama7b/ | false | false | self | 3 | null |
LLaMa 65B GPU benchmarks | 111 | I spent half a day conducting a benchmark test of the 65B model on some of the most powerful GPUs aviailable to individuals.
Test Method: I ran the latest Text-Generation-Webui on Runpod, loading Exllama, Exllama\_HF, and LLaMa.cpp for comparative testing. I used a specific prompt to ask them to generate a long story of more than 2000 words. Since LLaMa-cpp-python does not yet support the -ts parameter and its default settings lead to memory overflow for the 3090s and 4090s, I used LLaMa.cpp directly to test the 3090s and 4090s.
Test Parameters: Context size 2048, max\_new\_tokens were set to 200 and 1900 respectively, and all other parameters were set to default.
Models Tested: Airoboros-65B-GPT4-1.4's GPTQ and GGML (Q4\_KS) versions. Q4\_KS is the smallest decent GGML quantization, and probably has similar perplexity to the GPTQ models.
Results:
**Speed in tokens/second for generating 200 or 1900 new tokens:**
||Exllama(200)|Exllama(1900)|Exllama\_HF(200)|Exllama\_HF(1900)|LLaMa.cpp(200)|LLaMa.cpp(1900)|
|:-|:-|:-|:-|:-|:-|:-|
|2\*3090|12.2|10.9|10.6|8.3|11.2|9.9|
|2\*4090|20.8|19.1|16.2|11.4|13.2|12.3|
|RTX A6000|12.2|11.2|10.6|9.0|10.2|8.8|
|RTX 6000 ADA|17.7|16.1|13.1|8.3|14.7|13.1|
I ran multiple tests for each combination and used the median value.
It seems that these programs are not able to leverage dual GPUs to work simultaneously. The speed of dual GPUs is not notably faster than their single-GPU counterparts with larger memory.
​
**GPU utilization during test:**
||Exllama(1900)|Exllama\_HF(1900)|LLaMa.cpp(1900)|
|:-|:-|:-|:-|
|2\*3090|45%-50%|40%--->30%|60%|
|2\*4090|35%-45%|40%--->20%|45%|
|RTX A6000|93%+|90%--->70%|93%+|
|RTX 6000 ADA|70%-80%|45%--->20%|93%+|
It’s not advisable to use Exllama\_HF for generating lengthy texts since its performance tends to wane over time, which is evident from the GPU utilization metrics.
6000 ADA is likely limited by its 960GB/s memory bandwidth.
​
**VRAM usage (in MB)** when generating tokens, Exllama\_HF has almost the same VRAM usage as Exllama, so I just list Exllama:
||Exllama|LLaMa.cpp|
|:-|:-|:-|
|2\*3090|39730|45800|
|2\*4090|40000|46560|
|RTX A6000|38130|44700|
|RTX 6000 ADA|38320|44900|
There's additional memory overhead with dual GPUs as compared to a single GPU. Additionally, the 40 series exhibits a somewhat greater demand for memory than the 30 series.
​
Some of my thoughts and observations:
1. Dual 3090s are a cost-effective choice. However, they are extremely noisy and hot. On Runpod, one of 3090's fan speed was consistently at 100% when running tests, which mirrors the behaviors of my local dual 3090s. Placing two non-blower 3090s in the same case can be challenging for cooling. My local 3090s (3 slots spaced) power throttles even with 220w power limit each. Blower-style cards would be a bit better in this regard but will be noisier. IMO, the best solution is to place two 3090s in an open-air setup with a rack and PCI-e extenders.
2. The 4090’s efficiency and cooling performance are impressive. This is consistent with what I’ve observed locally. Dual 4090s can be placed on a motherboard with two slots spaced 4 slots apart, without being loud. For the 4090, it is best to opt for a thinner version, like PNY’s 3-slot 4090. Limiting the power to 250W on the 4090s affects the local LLM speed by less than 10%.
3. The A6000 is also a decent option. A single card saves you a lot of hassle in dealing with two cards, both in terms of software and hardware. However, the A6000 is a blower-style card and is expected to be noisy.
4. The 6000 Ada is a powerful but expensive option. But its power cannot be fully utilized when running local LLM. The upside is that it's significantly quieter than the A6000 (I observed its power usage and fan speed to be much lower than A6000).
5. Both the A6000 and 6000 ADA's fans spin at idle speed even when the temperature is below 30 degrees Celsius.
6. I paired a 3090 with a 4090. By allocating more layers to the 4090, the speed was slightly closer to that of dual 4090s rather than dual 3090s, and significantly quieter than dual 3090s.
Hope it helps! | 2023-07-06T12:19:21 | https://www.reddit.com/r/LocalLLaMA/comments/14s7j9j/llama_65b_gpu_benchmarks/ | Big_Communication353 | self.LocalLLaMA | 2023-07-06T23:56:31 | 0 | {} | 14s7j9j | false | null | t3_14s7j9j | /r/LocalLLaMA/comments/14s7j9j/llama_65b_gpu_benchmarks/ | false | false | self | 111 | null |
Is there anything to really chat with an AI as you would do with somebody over the phone ? | 18 | I mean any project, whether free or open source, that would allow you to chat with a bot without using a mouse or keyboard. Ideally, you would be able to stop or pause the bot's responses by simply saying "STOP" or something similar.
So far, none of the solutions I have tried have provided a smooth conversational experience, as they all require the use of a keyboard or a mouse.
I believe there may be paid solutions available, but I'm talking about open source or completely free options ... I don't mind if the TTS voice sounds like an emotionless robot, and Whisper voice recognition is already working perfectly (at least for me), so what would prevent a smooth voice-only conversation in, for example, ooba? Or does SillyTavern already have such an option? I never really tried it ...
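To illustrate what I mean, here is the kind of loop I imagine; just a rough sketch, assuming Whisper for STT, pyttsx3 for TTS, sounddevice/soundfile for recording, and a hypothetical ask_local_llm() call into whatever backend you run:

    import sounddevice as sd
    import soundfile as sf
    import whisper
    import pyttsx3

    fs = 16000
    stt = whisper.load_model("base")   # Whisper already works fine for me
    tts = pyttsx3.init()               # emotionless robot voice is OK

    def listen(seconds=5):
        # record a fixed-length turn from the microphone and transcribe it
        audio = sd.rec(int(seconds * fs), samplerate=fs, channels=1)
        sd.wait()
        sf.write("turn.wav", audio, fs)
        return stt.transcribe("turn.wav")["text"].strip()

    while True:
        heard = listen()
        if heard.lower().startswith("stop"):
            break
        reply = ask_local_llm(heard)   # hypothetical: call into ooba/kobold/etc.
        tts.say(reply)
        tts.runAndWait()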
Wow, this is the exact question I've been wanting to ask for so many days :)
Thank you | 2023-07-06T13:10:46 | https://www.reddit.com/r/LocalLLaMA/comments/14s8sy6/is_there_anything_to_really_chat_with_an_ai_as/ | alexthai7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14s8sy6 | false | null | t3_14s8sy6 | /r/LocalLLaMA/comments/14s8sy6/is_there_anything_to_really_chat_with_an_ai_as/ | false | false | self | 18 | null |
Fine-tune on personal chats | 1 | Did you know that you can download all your personal conversations on Facebook?
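A rough sketch of turning that export into instruction/response pairs; this assumes the usual Messenger JSON layout (thread folders with message_*.json files holding a "messages" list), and "Your Name" is obviously a placeholder:

    import glob
    import json

    pairs = []
    for path in glob.glob("messages/inbox/*/message_*.json"):
        with open(path, encoding="utf-8") as f:
            msgs = json.load(f).get("messages", [])
        msgs.sort(key=lambda m: m.get("timestamp_ms", 0))  # oldest first
        for prev, cur in zip(msgs, msgs[1:]):
            # keep (their message -> my reply) pairs that actually have text content
            if cur.get("sender_name") == "Your Name" and prev.get("content") and cur.get("content"):
                pairs.append({"instruction": prev["content"], "output": cur["content"]})

    with open("chat_dataset.jsonl", "w", encoding="utf-8") as f:
        for p in pairs:
            f.write(json.dumps(p, ensure_ascii=False) + "\n")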
Then use that to fine-tune a LLM to answer like you. | 2023-07-06T13:25:02 | https://www.reddit.com/r/LocalLLaMA/comments/14s95ri/finetune_on_personal_chats/ | veentastudios | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14s95ri | false | null | t3_14s95ri | /r/LocalLLaMA/comments/14s95ri/finetune_on_personal_chats/ | false | false | self | 1 | null |
Is there any service like MosaicML to train LLMs, but more focused on individuals instead of enterprises? | 0 | The title | 2023-07-06T13:57:22 | https://www.reddit.com/r/LocalLLaMA/comments/14sa0gh/is_there_any_service_like_mosaicml_to_train_llms/ | mr_house7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14sa0gh | false | null | t3_14sa0gh | /r/LocalLLaMA/comments/14sa0gh/is_there_any_service_like_mosaicml_to_train_llms/ | false | false | default | 0 | null |
SuperAGI with KoboldCpp backend | 14 | Hey.
I haven't seen much discussion about [SuperAGI](https://superagi.com/) here, possibly because it mainly seems to target OpenAI (even though they have bundled oobabooga recently). But since I use koboldcpp (and thus ggml) and I don't want to keep duplicate models around, I created a koboldcpp backend for it.
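Under the hood it basically just posts to koboldcpp's generate endpoint, roughly like this (a simplified sketch, not the exact code from the repo; port and sampler settings are whatever you run koboldcpp with):

    import requests

    def kobold_generate(prompt, max_length=300, url="http://localhost:5001"):
        # koboldcpp serves the KoboldAI API; /api/v1/generate returns {"results": [{"text": ...}]}
        payload = {"prompt": prompt, "max_length": max_length, "temperature": 0.7}
        r = requests.post(f"{url}/api/v1/generate", json=payload, timeout=600)
        r.raise_for_status()
        return r.json()["results"][0]["text"]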
Results so far are so-so. This looked promising:
​
https://preview.redd.it/898mmmbk2dab1.png?width=673&format=png&auto=webp&s=23a4b952880136467982f08836b28ac7add0069e
In this instance, I had limited it to 30 tokens generation. Once I tidied things up and let it run for 800 tokens, I got someone's chat with a consolation AI. Disappointing...
But then again, I only have my laptop and chronos-hermes-13b on it. Will probably be better with WizardCoder.
Anyway. In case anyone wants to try it out, or help out, here's the repo:
[https://github.com/neph1/SuperAGI](https://github.com/neph1/SuperAGI)
​
​ | 2023-07-06T15:18:20 | https://www.reddit.com/r/LocalLLaMA/comments/14sc98c/superagi_with_koboldcpp_backend/ | neph1010 | self.LocalLLaMA | 2023-07-06T16:15:24 | 0 | {} | 14sc98c | false | null | t3_14sc98c | /r/LocalLLaMA/comments/14sc98c/superagi_with_koboldcpp_backend/ | false | false | 14 | {'enabled': False, 'images': [{'id': 'UJCmnHuTn9r_Y3u-nHjo5TpdgWhE5vSjvuGFQhXJMaM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CcKWWFQNGBkfC3w3r2EbiFSEq_OD8fwzmYbnt8xDzZc.jpg?width=108&crop=smart&auto=webp&s=c71a8b4cb3b337d965b9df23b446f28d66554de5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CcKWWFQNGBkfC3w3r2EbiFSEq_OD8fwzmYbnt8xDzZc.jpg?width=216&crop=smart&auto=webp&s=6fbee77ae72de2b6cbb7bca3b83923116bb515a6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CcKWWFQNGBkfC3w3r2EbiFSEq_OD8fwzmYbnt8xDzZc.jpg?width=320&crop=smart&auto=webp&s=15c2e85ab256352e5fab2e443797e6ac1bf1302e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CcKWWFQNGBkfC3w3r2EbiFSEq_OD8fwzmYbnt8xDzZc.jpg?width=640&crop=smart&auto=webp&s=45b89ebc736411d323170bde29e19b20bd681e85', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CcKWWFQNGBkfC3w3r2EbiFSEq_OD8fwzmYbnt8xDzZc.jpg?width=960&crop=smart&auto=webp&s=3177509dcf88c0749df6d4ac7dcfd26a372569c3', 'width': 960}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/CcKWWFQNGBkfC3w3r2EbiFSEq_OD8fwzmYbnt8xDzZc.jpg?auto=webp&s=8196cb54a40a3fdbc99764f1f74e5498234b6e31', 'width': 1024}, 'variants': {}}]} |
|
Using LLM locally with HuggingFace blocked? | 0 | I want to try to use either Falcon, Dolly, or Mosaic to run a model locally on my work laptop. Unfortunately, HuggingFace is blocked on my computer. How would I go about this issue? | 2023-07-06T15:29:37 | https://www.reddit.com/r/LocalLLaMA/comments/14sckrf/using_llm_locally_with_huggingface_blocked/ | sisiwnsjhsjajzjxjs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14sckrf | false | null | t3_14sckrf | /r/LocalLLaMA/comments/14sckrf/using_llm_locally_with_huggingface_blocked/ | false | false | default | 0 | null |
My largest ever quants, GPT 3 sized! BLOOMZ 176B and BLOOMChat 1.0 176B | 325 | For the last month I've been trying to quantise two mega models, probably the largest models on Hugging Face Hub: Big Science's [BLOOMZ](https://huggingface.co/bigscience/bloomz) and Sambanova Systems' [BLOOMChat 1.0](https://huggingface.co/sambanovasystems/BLOOMChat-176B-v1).
I tried various systems, but all the HW available to me either didn't have enough RAM, or would cost too much with enough RAM, or else had old CPUs that I feared would take so long packing the model that it was looking like it was going to cost hundreds of $ to get it done. One guy who had quantised BLOOMZ (and then disappeared without ever uploading it!) said it took him 55 hours in total.
Then yesterday I was asked by [Latitude.sh](https://Latitude.sh) to test a 4 x H100 80GB system for them. It had a pretty recent and beefy CPU, the AMD EPYC 9354, plus 750GB RAM.
So of course I had to test it with these mega models.
And, somewhat to my surprise, a mere 3 hours 35 minutes later, the first was done!
So I'm pleased and relieved to be able to offer these two beasts for your enjoyment. Or at least, the enjoyment of anyone who happens to have big enough HW, or is willing to rent it :)
* [TheBloke/bloomz-176B-GPTQ](https://huggingface.co/TheBloke/bloomz-176B-GPTQ)
* [TheBloke/BLOOMChat-176B-v1-GPTQ](https://huggingface.co/TheBloke/BLOOMChat-176B-v1-GPTQ)
If you do try them, please read the README carefully! There's a special step required before you can run the models: GPTQ has no sharding, and HF won't allow uploading files bigger than 50GB. So I had to split the 94GB safetensors file into three pieces, and you need to join them together again.
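If you would rather do the join from Python than with shell tools, something like this works; the actual part filenames are listed in the README, and the pattern below is just a placeholder:

    import glob

    parts = sorted(glob.glob("model.safetensors.part*"))  # placeholder pattern
    with open("model.safetensors", "wb") as out:
        for part in parts:
            with open(part, "rb") as f:
                while chunk := f.read(1 << 24):  # copy in 16 MB chunks
                    out.write(chunk)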
**Provided files**
I did two quants for each model:
* Main branch: group\_size: none + act-order (desc\_act) = True
* Branch group\_size\_128g: group\_size: 128g + act-order (desc\_act) = True
**Why use them?**
Because they're the size of GPT 3!? What more reason do you need? :)
Seriously though: most people probably won't want to bother. It's not going to run on any home HW. But they do seem to be of interest to companies evaluating local LLMs - I've had several people request I quant them so they could be evaluated for professional purposes.
**What hardware is required?**
You need 94GB VRAM just to load the model, plus context.
So either of these should work:
* 2 x 80GB GPU (A100 or H100), or
* 3 x 48GB GPU (eg A6000, A6000 Ada, L40)
I did a few tests on 2 x H100 80GB and got 5 tokens/s using AutoGPTQ running via text-generation-webui.
**What about 3-bit? or 2-bit?**
Yeah I would like to try at least a 3-bit quant. I don't have access to the machine any more, but if/when I do again I will likely make 3-bit quants as well. I'm sceptical how good 2-bit GPTQ would be though.
I'm hopeful a 3-bit quant would run on 2 x 48GB GPU or 1 x 80GB, which makes it a lot more accessible and likely a lot faster too, at least in the 1 x 80GB case.
**What about GGML?**
Possibly. There's a llama.cpp fork called [bloomz.cpp](https://github.com/NouamaneTazi/bloomz.cpp) but it's not been updated in 2 months. So it's not going to support any of the fancy new quantisation methods, performance improvements, GPU acceleration, etc.
If there's demand I might give it a go, but a 176B model on CPU is going to be glacial, and would only work for people with 128GB RAM. | 2023-07-06T16:12:08 | https://www.reddit.com/r/LocalLLaMA/comments/14sdskp/my_largest_ever_quants_gpt_3_sized_bloomz_176b/ | The-Bloke | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14sdskp | false | null | t3_14sdskp | /r/LocalLLaMA/comments/14sdskp/my_largest_ever_quants_gpt_3_sized_bloomz_176b/ | false | false | self | 325 | {'enabled': False, 'images': [{'id': 'gXQ-wVUM2Tyx_6Y872ShG3Aaorh5qEh0nAGMrxUIbaU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Pl8R_0F9Tk0OfJMzghy_fs_WW7nqD8aYz19Tli-iHBw.jpg?width=108&crop=smart&auto=webp&s=adabb132efe463e3081b83a0a07cc3430c9a1911', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Pl8R_0F9Tk0OfJMzghy_fs_WW7nqD8aYz19Tli-iHBw.jpg?width=216&crop=smart&auto=webp&s=5996a94f2121bad64542abccab9dcba459345fe0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Pl8R_0F9Tk0OfJMzghy_fs_WW7nqD8aYz19Tli-iHBw.jpg?width=320&crop=smart&auto=webp&s=cf297d5a5ee914d1c5212bd43444bb1b3796cb0c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Pl8R_0F9Tk0OfJMzghy_fs_WW7nqD8aYz19Tli-iHBw.jpg?width=640&crop=smart&auto=webp&s=10de0bcc3aea66f98b46823e0c61957956e33662', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Pl8R_0F9Tk0OfJMzghy_fs_WW7nqD8aYz19Tli-iHBw.jpg?width=960&crop=smart&auto=webp&s=84eee09e1d5a6e5be1d1d2dfb09ec39b2f256b25', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Pl8R_0F9Tk0OfJMzghy_fs_WW7nqD8aYz19Tli-iHBw.jpg?width=1080&crop=smart&auto=webp&s=5d25e2400a9357c6bd7ec88cddacc8dbb2edff82', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Pl8R_0F9Tk0OfJMzghy_fs_WW7nqD8aYz19Tli-iHBw.jpg?auto=webp&s=550dd8b60c3640dfe667caec97392b7fa7e0c685', 'width': 1200}, 'variants': {}}]} |
Is it possible to change a mining rig into a GPT trainer? | 9 | Hey folks,
I have an old Octominer rig with a couple of 3090s and 3080s. I would love to either rent out the GPU power/memory or build my own models. In the mining world there are a lot of mobos that can take many GPUs, like 12. Would I be able to use that mobo, or should I just build a gaming PC, etc.? I guess I'm asking: are there mobos that can handle more than 2 GPUs to pool memory together for model training?
For those who train models: how do you normalize your data? | 2 | Assume I have a set of books. Some are perfectly formatted with OCR capabilities from Amazon. Others are scanned in and uploaded to archive.org (say, very old texts). Others have been OCR'd by me manually. How can I ensure each of these are treated the same and fed into the model correctly?
Are there tools for this? Not just for text, but for audio, images, etc. too? | 2023-07-06T17:39:08 | https://www.reddit.com/r/LocalLLaMA/comments/14sg873/for_those_who_train_models_how_do_you_normalize/ | JebryyathHS | self.LocalLLaMA | 2023-07-13T00:21:16 | 0 | {} | 14sg873 | false | null | t3_14sg873 | /r/LocalLLaMA/comments/14sg873/for_those_who_train_models_how_do_you_normalize/ | false | false | default | 2 | null |
Finetuning guide for MPT | 15 | I've written up the exact steps doing a full finetune on an MPT model here:
https://gist.github.com/float-trip/679019a23f246b17d2dff9e2cf55c387
The official Mosaic docs are fine, but aspirational in places. They're not always up-to-date regarding bugs in the library.
As for _why_ to finetune MPT - it's 5-10x cheaper than the quotes I've seen for LLaMA. No idea why this is (Meta got similar numbers while training LLaMA as Mosaic [reports here](https://github.com/mosaicml/llm-foundry/tree/main/scripts/train/benchmarking#a100-80gb-with-1600-gbps-node-node-interconnect-roce)), but for whatever reason it seems that LLaMA finetunes are currently burning the majority of their resources on inefficient tooling. Sadly the MPT models seem to perform worse than LLaMA, but if you're training on any significant number of tokens, imo they're the clear choice. | 2023-07-06T18:45:22 | https://www.reddit.com/r/LocalLLaMA/comments/14si12b/finetuning_guide_for_mpt/ | float-trip | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14si12b | false | null | t3_14si12b | /r/LocalLLaMA/comments/14si12b/finetuning_guide_for_mpt/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=108&crop=smart&auto=webp&s=d5811c5bda5fece1040636a6af8702ba790f0fd4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=216&crop=smart&auto=webp&s=eee576fd4da7535eb53ceb88dd8b52f073048441', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=320&crop=smart&auto=webp&s=72872d880460efa723918c000adca0ed259cf775', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=640&crop=smart&auto=webp&s=f3545b9335d763c9da9c16bf7bf9a3f907dbd6f6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=960&crop=smart&auto=webp&s=2d241ace0f1c07088fac3f8469dbad3b05d2d419', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=1080&crop=smart&auto=webp&s=9055f11bdc00beb0b3589e1cae5817d6070d83bc', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?auto=webp&s=079a7260ec149880c73263d64811698adb22760a', 'width': 1280}, 'variants': {}}]} |
Help with QLoRA Fine Tune | 5 | I'm following nearly the same example from the this repository:
https://github.com/mzbac/qlora-fine-tune
-Except I'm testing it with one of the standard hardcoded datasets in his script, 'alpaca'. Here's the command:
python qlora.py --model_name_or_path TheBloke/wizardLM-13B-1.0-fp16 --dataset alpaca --bf16
That dataset is about 52k records. Is 25 hours on an A100 on runpod normal? QLoRA is supposed to be fast.
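(Doing the math on the progress bar at the bottom: the run is scheduled for 10,000 steps at ~8.75 s/it, and 10,000 x 8.75 s is about 87,500 s, i.e. roughly 24.3 hours, so the ETA seems to come from the step count rather than from the dataset size.)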
Any help is appreciated, and would love to colab with folks with similar interests! I've been building stuff with langchain the last few months, but only just started building datasets and trying to tune!
.......................
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 8.0
CUDA SETUP: Detected CUDA version 117
CUDA SETUP: Loading binary /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda117.so...
...
/usr/local/lib/python3.10/dist-packages/peft/utils/other.py:102: FutureWarning: prepare_model_for_int8_training is deprecated and will be removed in a future version. Use prepare_model_for_kbit_training instead.
warnings.warn(
adding LoRA modules...
trainable params: 125173760.0 || all params: 6922337280 || trainable: 1.8082586117502786
loaded model
...
torch.uint8 6343884800 0.9164368252221308
torch.float32 414720 5.991040066744624e-05
{'loss': 1.3165, 'learning_rate': 0.0002, 'epoch': 0.0}
{'loss': 1.1405, 'learning_rate': 0.0002, 'epoch': 0.01}
{'loss': 0.9342, 'learning_rate': 0.0002, 'epoch': 0.01}
{'loss': 0.6558, 'learning_rate': 0.0002, 'epoch': 0.01}
{'loss': 0.7448, 'learning_rate': 0.0002, 'epoch': 0.02}
{'loss': 1.217, 'learning_rate': 0.0002, 'epoch': 0.02}
{'loss': 1.1012, 'learning_rate': 0.0002, 'epoch': 0.02}
{'loss': 0.8564, 'learning_rate': 0.0002, 'epoch': 0.02}
{'loss': 0.6592, 'learning_rate': 0.0002, 'epoch': 0.03}
1%|█▌ | 90/10000 [13:43<24:05:54, 8.75s/it] | 2023-07-06T18:59:07 | https://www.reddit.com/r/LocalLLaMA/comments/14sidp3/help_with_qlora_fine_tune/ | gentlecucumber | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14sidp3 | false | null | t3_14sidp3 | /r/LocalLLaMA/comments/14sidp3/help_with_qlora_fine_tune/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'Sps2Xxji1GXjTiQi3CmfFiIIh8o2K1Rd4E9wU0Fes0U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/j_Pylg0cD-lDn5wTlAJ6VSvXnujwT6XnCnq4InZSz2k.jpg?width=108&crop=smart&auto=webp&s=8348f876b1e97108c598081be7555cf87557b917', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/j_Pylg0cD-lDn5wTlAJ6VSvXnujwT6XnCnq4InZSz2k.jpg?width=216&crop=smart&auto=webp&s=95ab978f4df272a07d6aad4baa7647394c16fc27', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/j_Pylg0cD-lDn5wTlAJ6VSvXnujwT6XnCnq4InZSz2k.jpg?width=320&crop=smart&auto=webp&s=83a030e7124b21dc85cdd40b27c9c97e50098d50', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/j_Pylg0cD-lDn5wTlAJ6VSvXnujwT6XnCnq4InZSz2k.jpg?width=640&crop=smart&auto=webp&s=137abcdc78e100ad67286017b0f1cc9304a962ee', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/j_Pylg0cD-lDn5wTlAJ6VSvXnujwT6XnCnq4InZSz2k.jpg?width=960&crop=smart&auto=webp&s=97f98cb75ea111366ad9394e8a913059ab9c914a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/j_Pylg0cD-lDn5wTlAJ6VSvXnujwT6XnCnq4InZSz2k.jpg?width=1080&crop=smart&auto=webp&s=359144959b9a1e68887b43158393a42b973cf2a8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/j_Pylg0cD-lDn5wTlAJ6VSvXnujwT6XnCnq4InZSz2k.jpg?auto=webp&s=e54bf601f85e71423c0cd0fd2399160fc25201a4', 'width': 1200}, 'variants': {}}]} |
Does RAM make a big difference? | 1 | [removed] | 2023-07-06T19:58:01 | https://www.reddit.com/r/LocalLLaMA/comments/14sjyan/does_ram_make_a_big_difference/ | Asoery | self.LocalLLaMA | 2023-07-06T20:02:04 | 0 | {} | 14sjyan | false | null | t3_14sjyan | /r/LocalLLaMA/comments/14sjyan/does_ram_make_a_big_difference/ | false | false | default | 1 | null |
What's the current best model if you have no concern about the hardware? | 1 | I have access to a semi-supercomputer and I wonder what's the best model I perhaps could run on it specifically for my personal use. | 2023-07-06T19:58:57 | https://www.reddit.com/r/LocalLLaMA/comments/14sjz7q/whats_the_current_best_model_if_you_have_no/ | hattapliktir | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14sjz7q | false | null | t3_14sjz7q | /r/LocalLLaMA/comments/14sjz7q/whats_the_current_best_model_if_you_have_no/ | false | false | self | 1 | null |
I guess George Hotz was correct. | 78 | 2023-07-06T21:16:53 | https://arxiv.org/abs/2305.14705 | MrBeforeMyTime | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 14sm3p0 | false | null | t3_14sm3p0 | /r/LocalLLaMA/comments/14sm3p0/i_guess_george_hotz_was_correct/ | false | false | 78 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
||
Trying Classifier Free Guidance and Negative prompt - some ideas how to test it properly? | 17 | 2023-07-06T21:21:21 | FPham | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14sm839 | false | null | t3_14sm839 | /r/LocalLLaMA/comments/14sm839/trying_classifier_free_guidance_and_negative/ | false | false | 17 | {'enabled': True, 'images': [{'id': 'LAqNZ684cJaVml-0PGv5zOdQ1G75ZEn3ytdxN0YHZDg', 'resolutions': [{'height': 123, 'url': 'https://preview.redd.it/i3vo0ys1weab1.jpg?width=108&crop=smart&auto=webp&s=892c938cc454a403587c01016721ede66857b3d7', 'width': 108}, {'height': 246, 'url': 'https://preview.redd.it/i3vo0ys1weab1.jpg?width=216&crop=smart&auto=webp&s=eb3c4958f277e3500ffa30f8bb4568d6c60bfcc0', 'width': 216}, {'height': 365, 'url': 'https://preview.redd.it/i3vo0ys1weab1.jpg?width=320&crop=smart&auto=webp&s=46db5fdc0d39cca21ed67f73b7da6cb6fa92e288', 'width': 320}, {'height': 730, 'url': 'https://preview.redd.it/i3vo0ys1weab1.jpg?width=640&crop=smart&auto=webp&s=12be86f04701e40f49d31d37b6dadee6691acf16', 'width': 640}], 'source': {'height': 957, 'url': 'https://preview.redd.it/i3vo0ys1weab1.jpg?auto=webp&s=598425f14457d571eab3ed098b6f59099dbfb137', 'width': 838}, 'variants': {}}]} |
|||
Best model that can run on Colab | 7 | Hi everyone
First of all, I want to thank everyone in this sub who is contributing to the progress of open source LLMs. Thank you so much!
I want to experiment with medium-sized models (7B/13B) but my GPU is old and has only 2GB of VRAM.
So I'll probably be using Google Colab's **free** GPU, which is an Nvidia T4 with around 15 GB of VRAM.
My question is what is the best quantized (or full) model that can run on Colab's resources without being too slow? I mean at least 2 tokens per second.
*Also please recommend specific versions and not just general architecture like llama 13b or falcon 7b etc.
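For context, this is roughly how I plan to load whatever you suggest; just a sketch, assuming a quantized GGML file and llama-cpp-python built with cuBLAS (the path is a placeholder):

    from llama_cpp import Llama

    llm = Llama(
        model_path="/content/model.ggmlv3.q4_K_M.bin",  # placeholder path to the GGML file
        n_ctx=2048,
        n_gpu_layers=40,  # offload as many layers as fit in the T4's ~15 GB of VRAM
    )
    out = llm("### Instruction: Say hi\n### Response:", max_tokens=64, temperature=0.7)
    print(out["choices"][0]["text"])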
Thanks so much in advance | 2023-07-06T22:14:05 | https://www.reddit.com/r/LocalLLaMA/comments/14snnj5/best_model_that_can_run_on_colab/ | Amgadoz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14snnj5 | false | null | t3_14snnj5 | /r/LocalLLaMA/comments/14snnj5/best_model_that_can_run_on_colab/ | false | false | self | 7 | null |
LLMs on a 32-bit device with 2GB of RAM | 5 | Pardon the messiness of this post, my thought process can be pretty hectic lol
# 2023-07-11 Update
u/pedantic_pineapple brought some light back into this. I've had some difficulty getting it working, however (listing this to save time if someone else sees this post):
1. Using `pip install languagemodels` ends up downloading several versions from 0.6.0 to 0.0.1(?) and eventually settles on 0.0.2. The chat example script and Python command-line didn't work
2. Using `pip install git+https://github.com/jncraton/languagemodels.git` tells me that there's "no matching distribution found for ctranslate2>=3.15.0", implying that they don't distribute it for 32-bit ARM
3. I cloned the CTranslate2 repo, went into the python directory and did `pip install -e .`. It reported missing pybind11, but even after installing it, it reported the same error
4. In the same directory, I installed `setuptools` and did `python setup.py install`. It reported many missing files in the "ctranslate" directory, such as "ctranslate2/generator.h"
5. I cloned the repository again with `git clone --depth 1 --recursive https://github.com/OpenNMT/CTranslate2.git`, went into the directory and repeated step 4. It repeated the same errors
6. I copied `cp cpp/* ctranslate2` to copy all the files from the cpp folder to the ctranslate2 folder. Nope, still didn't work.
7. As a last-ditch effort I did `cp -r ../include/ctranslate2/* ctranslate2`; still didn't work
I'll revisit this when I have more time, but I do feel like I'm getting closer to cracking the case
# 2023-07-07 Update
I've traded my tablet in for a 64-bit one, but I've decided to revisit this
My 32-bit Motorola can run 64-bit Linux programs through Termux with [QEMU's user mode emulation](https://www.qemu.org/docs/master/user/main.html). I was able to benchmark the emulation overhead using 7zip as a test (native gave me a score of 6504, 64-bit ARM emulation gave me a score of 167)
[(this approach is similar to what someone did to get Dwarf Fortress running on Android, running an x86 game on ARM)](https://old.reddit.com/r/dwarffortress/comments/r80m7y/dwarf_fortress_mobile_running_dwarf_fortress_on/)
I just don't know how to cross-compile llama.cpp for 64-bit ARM, and even if I could, I compiled llama.cpp on my PC and tested it (on the PC) with qemu-user's x86_64 emulator to verify, and OpenLLaMA 3B generated at what felt like a token every 20-30 seconds. And this is an i7-6700 with DDR3 RAM (which can generate what looks like ~7-10 tokens a second natively); god forbid how my phone would perform doing the same task, if it had enough memory to run OpenLLaMA 3B
If I get it compiled, I might revisit it with [box86](https://box86.org/) (emulator for x86 games and programs) to see if there's a speed-up. Otherwise, I might try [the upstream RWKV project](https://github.com/BlinkDL/ChatRWKV)
It's better to just wait to see if llama.cpp (and other GGML-based projects) would get 32-bit ARM support soon. I'm really looking forward to running RWKV-4 Raven 1.5B in KoboldCpp
# Original post
I love to toy around with my phones and tablets, which are budget 32-bit Androids. My most used is a hacked 2GB Kindle HDX 7 (2013) and my newest is a 3GB Motorola (2021). As you can tell, all of my stuff is secondhand since I'm broke as a joke
I've been looking into running LLMs on any of them for the past few months, either for chatbot companionship or as a lightweight learning assistant when the power is out (downloaded tons of ebooks and the entirety of Wikipedia for Kiwix on a 256GB microSD for this too), but I can't find any information for constrained hardware. I do know of alternative models available in sub-3B sizes like RWKV, LaMini, and older base models like Pythia Deduped, OPT and GPT-2
But to make things difficult, I've had trouble getting KoboldCpp (and regular llama.cpp) to compile through Termux, so any help when it comes to this would be appreciated. I might have to use upstream RWKV or GPT-2, ~~Transformers (Oobabooga or KoboldAI)~~ (PyTorch might not support it), or alternative backends altogether like the many different GPT-2 implementations in C, but most of those would leave me without a frontend and/or no quantization (otherwise I'd probably be able to go up to 1.5B) | 2023-07-06T22:17:51 | https://www.reddit.com/r/LocalLLaMA/comments/14snrcp/llms_on_a_32bit_device_with_2gb_of_ram/ | MeowAnnabelle | self.LocalLLaMA | 2023-07-12T05:03:50 | 0 | {} | 14snrcp | false | null | t3_14snrcp | /r/LocalLLaMA/comments/14snrcp/llms_on_a_32bit_device_with_2gb_of_ram/ | false | false | self | 5 | null |
Fill in blanks with conversation (Prompt now working?) | 5 | I am trying to get an appropriate response for this prompt:
Fill in the RESPONSE part of this conversation between a human and a bot. Bot: Hi. Human RESPONSE. Bot: I am 10 years old! How about you?
Where response would be something like: "Hi, how old are you?"
Why isn't this working on anything less than ChatGPT? Any tips for engineering this prompt better? I have a bunch of further data in the form of conversations with redactions that I want to fill in intelligently. | 2023-07-06T22:51:02 | https://www.reddit.com/r/LocalLLaMA/comments/14son0l/fill_in_blanks_with_conversation_prompt_now/ | ilovejoi36912 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14son0l | false | null | t3_14son0l | /r/LocalLLaMA/comments/14son0l/fill_in_blanks_with_conversation_prompt_now/ | false | false | self | 5 | null |
Vector Search w/ LMMs vs Classic Text Search (i.e. elasticsearch), has anyone done pricing/performance experiments? | 7 | Title! I am sure there has been some research done on this topic, but I'm not sure where. Curious if anyone has found vector search via some open-source model (through a vector store) to be more performant/cheaper than Elasticsearch. Thanks | 2023-07-07T00:02:13 | https://www.reddit.com/r/LocalLLaMA/comments/14sqg2j/vector_search_w_lmms_vs_classic_text_search_ie/ | Working_Ideal3808 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14sqg2j | false | null | t3_14sqg2j | /r/LocalLLaMA/comments/14sqg2j/vector_search_w_lmms_vs_classic_text_search_ie/ | false | false | self | 7 | null |
Dilbert Comic | 0 | 2023-07-07T00:36:50 | https://www.reddit.com/r/dilbert/comments/14sktuy/dilbert_reborn_june_30th_2023_shared_for_all_on/ | kubbiember | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 14sracc | false | null | t3_14sracc | /r/LocalLLaMA/comments/14sracc/dilbert_comic/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': '-OrtaAE98-Yo261LD87_xRkwvd3emJZP-0nwVbk7ZNs', 'resolutions': [{'height': 38, 'url': 'https://external-preview.redd.it/jZBjwHltyEl4aJdS31MKefTMSQkJXWjRkARJm9iK5YU.png?width=108&crop=smart&auto=webp&v=enabled&s=307b34ab94c177ad1f1a014b22835bf5a456578e', 'width': 108}, {'height': 77, 'url': 'https://external-preview.redd.it/jZBjwHltyEl4aJdS31MKefTMSQkJXWjRkARJm9iK5YU.png?width=216&crop=smart&auto=webp&v=enabled&s=da23e79d8b304ebdcc4e68780ad4669955710572', 'width': 216}, {'height': 114, 'url': 'https://external-preview.redd.it/jZBjwHltyEl4aJdS31MKefTMSQkJXWjRkARJm9iK5YU.png?width=320&crop=smart&auto=webp&v=enabled&s=f4da09cb5f747c86bfa7f38a9afebf54cfeae783', 'width': 320}, {'height': 229, 'url': 'https://external-preview.redd.it/jZBjwHltyEl4aJdS31MKefTMSQkJXWjRkARJm9iK5YU.png?width=640&crop=smart&auto=webp&v=enabled&s=107bb120acad0a2beecace967f4157c9d638326f', 'width': 640}, {'height': 343, 'url': 'https://external-preview.redd.it/jZBjwHltyEl4aJdS31MKefTMSQkJXWjRkARJm9iK5YU.png?width=960&crop=smart&auto=webp&v=enabled&s=b74b98e4c07ca596a165f0d4064bac9e10ea75f1', 'width': 960}, {'height': 386, 'url': 'https://external-preview.redd.it/jZBjwHltyEl4aJdS31MKefTMSQkJXWjRkARJm9iK5YU.png?width=1080&crop=smart&auto=webp&v=enabled&s=b8c275df332c9bc38bf1d990854b82698303b2a0', 'width': 1080}], 'source': {'height': 944, 'url': 'https://external-preview.redd.it/jZBjwHltyEl4aJdS31MKefTMSQkJXWjRkARJm9iK5YU.png?auto=webp&v=enabled&s=9f3f597dcfb7bdf2d7c6d0338f16a454fc92f50d', 'width': 2638}, 'variants': {}}]} |
|
A6000 vs 2x 4090 | 14 | I've been experimenting with llms for the past few months. I feel like I've been held back by lacking vram. I'm considering upgrading to either an A6000 or dual 4090s.
The A6000 has more vram and costs roughly the same as 2x 4090s. The A6000 would run slower than the 4090s but the A6000 would be a single card and have a much lower watt usage.
I don't have time to game anymore, so gaming isn't really a consideration for me at this point.
With the A6000 I don't need to worry about it properly splitting up the models.
What are your thoughts? | 2023-07-07T01:00:17 | https://www.reddit.com/r/LocalLLaMA/comments/14srull/a6000_vs_2x_4090/ | drnick316 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14srull | false | null | t3_14srull | /r/LocalLLaMA/comments/14srull/a6000_vs_2x_4090/ | false | false | self | 14 | null |
VRAM or RAM for cost effective home build | 1 | I have been wondering what sort of performance people have been getting out of CPU-based builds running local LLaMA.
I haven't seen a similar post since the release of 8k token limits and ExLlama. It's not too hard to imagine a build with 64GB of RAM blowing a mid-tier GPU out of the water in terms of model capability, as well as the context length increase to 8k.
I have never personally tried running LLaMA on CPU/RAM. Would it be worth buying a bunch of RAM (relatively cheap) vs. buying a beefier GPU (not in the cards for me)?
Current build:
Windows 10
3060 12gb vram
Current ram 16 gb
I5 6600k | 2023-07-07T01:27:33 | https://www.reddit.com/r/LocalLLaMA/comments/14ssguq/vram_or_ram_for_cost_effective_home_build/ | Far-Notice1730 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ssguq | false | null | t3_14ssguq | /r/LocalLLaMA/comments/14ssguq/vram_or_ram_for_cost_effective_home_build/ | false | false | self | 1 | null |
What should I look out for during fine tuning? | 4 | I’m fine tuning using the oogabooga GUI.
I know “loss” is an important metric. Their guide recommends stopping at around loss = 1
Intuitively what’s the difference between a loss of 1.1 and 1.0?
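(My rough understanding is that the reported loss is the average per-token cross-entropy, so perplexity = e^loss: e^2.0 is about 7.4, e^1.1 about 3.0, and e^1.0 about 2.7. So 1.1 vs 1.0 is only a ~10% difference in perplexity, but I'd still like an intuition for what that means in practice.)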
For example, my data started at a loss of 2.0. After only about 0.1 epochs, it drops to 1.1. Then continued training for like 12 hours only drops the loss down to 1.06.
1.1 and 1.0 doesn’t seem to be that far apart, but it also doesn’t seem to make sense that I can fine tune my data in a few minutes (because it only took a few minutes to decrease the loss from 2.0 to 1.1) | 2023-07-07T01:37:08 | https://www.reddit.com/r/LocalLLaMA/comments/14ssp6n/what_should_i_look_out_for_during_fine_tuning/ | Tasty-Lobster-8915 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ssp6n | false | null | t3_14ssp6n | /r/LocalLLaMA/comments/14ssp6n/what_should_i_look_out_for_during_fine_tuning/ | false | false | self | 4 | null |
LongLLaMA, a LLM capable of handling long contexts of 256k tokens | 2 | 2023-07-07T01:58:26 | https://twitter.com/_akhaliq/status/1677131408265945088 | ninjasaid13 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 14st6ba | false | {'oembed': {'author_name': 'AK', 'author_url': 'https://twitter.com/_akhaliq', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Focused Transformer: Contrastive Training for Context Scaling<br><br>paper page: <a href="https://t.co/nMD1r6e21b">https://t.co/nMD1r6e21b</a><br>model: <a href="https://t.co/ecyeaoS1l0">https://t.co/ecyeaoS1l0</a><br><br>Large language models have an exceptional capability to incorporate new information in a contextual manner. However, the full potential of such an… <a href="https://t.co/fe2IuwTe6N">pic.twitter.com/fe2IuwTe6N</a></p>— AK (@_akhaliq) <a href="https://twitter.com/_akhaliq/status/1677131408265945088?ref_src=twsrc%5Etfw">July 7, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/_akhaliq/status/1677131408265945088', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_14st6ba | /r/LocalLLaMA/comments/14st6ba/longllama_a_llm_capable_of_handling_long_contexts/ | false | false | default | 2 | {'enabled': False, 'images': [{'id': 'yZB-Op42aidgsC58AESnRAIxZl5MNXE47YaH2fOrL2Q', 'resolutions': [{'height': 116, 'url': 'https://external-preview.redd.it/8m2D72Vzmkgfwrvj-wgZL8Kja5a-JyDIBVxPf8ydkL4.jpg?width=108&crop=smart&auto=webp&v=enabled&s=cad6cc0edfd4e7c04ba1c6e5c489357db6ea061a', 'width': 108}, {'height': 232, 'url': 'https://external-preview.redd.it/8m2D72Vzmkgfwrvj-wgZL8Kja5a-JyDIBVxPf8ydkL4.jpg?width=216&crop=smart&auto=webp&v=enabled&s=8bd01898a00b3b917c1e17d3ee05cc40407c2d8f', 'width': 216}, {'height': 343, 'url': 'https://external-preview.redd.it/8m2D72Vzmkgfwrvj-wgZL8Kja5a-JyDIBVxPf8ydkL4.jpg?width=320&crop=smart&auto=webp&v=enabled&s=2437aa469f9ab15fc7fa7be26b80861888d18065', 'width': 320}, {'height': 687, 'url': 'https://external-preview.redd.it/8m2D72Vzmkgfwrvj-wgZL8Kja5a-JyDIBVxPf8ydkL4.jpg?width=640&crop=smart&auto=webp&v=enabled&s=28137397d2212fdb87617720bc0ddc536de09409', 'width': 640}, {'height': 1031, 'url': 'https://external-preview.redd.it/8m2D72Vzmkgfwrvj-wgZL8Kja5a-JyDIBVxPf8ydkL4.jpg?width=960&crop=smart&auto=webp&v=enabled&s=1fe3bf470f1df0650454be18d1b65ba0518bf2f4', 'width': 960}], 'source': {'height': 1042, 'url': 'https://external-preview.redd.it/8m2D72Vzmkgfwrvj-wgZL8Kja5a-JyDIBVxPf8ydkL4.jpg?auto=webp&v=enabled&s=5c2fe4f8a08aa801c513d3f44c0ed7f46de8e997', 'width': 970}, 'variants': {}}]} |
|
4060 Ti vs 3070 vs 3060 Ti Stable Diffusion benchmark | 47 | I saw a post talking about 4060 Ti being a bad card because of the 128 bit memory bus. However I can't find any benchmark for generative AI.
A video on Bilibili compares the performance of different GPUs for Stable Diffusion. The chart shows the time (in seconds) needed to generate an image with Stable Diffusion, so lower is better, and the 4060 Ti outperforms the 3070 and 3060 Ti.
Not sure if this is applicable to LLM too, what do you think? | 2023-07-07T02:11:48 | regunakyle | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14sti62 | false | null | t3_14sti62 | /r/LocalLLaMA/comments/14sti62/4060_ti_vs_3070_vs_3060_ti_stable_diffusion/ | false | false | 47 | {'enabled': True, 'images': [{'id': '2QAdM0ad_wcWX9r7QAUyIzV7tvg5IO1ppXpiqcZ7-A0', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/mu20scrxbgab1.jpg?width=108&crop=smart&auto=webp&s=24d50386cd7a39a1a93d6492d6b712dac7aca71c', 'width': 108}, {'height': 125, 'url': 'https://preview.redd.it/mu20scrxbgab1.jpg?width=216&crop=smart&auto=webp&s=f3ba26b8b225f0bf532bb07ca500018c19e6f2f6', 'width': 216}, {'height': 185, 'url': 'https://preview.redd.it/mu20scrxbgab1.jpg?width=320&crop=smart&auto=webp&s=9ebb568a7e3c8f9d62b937968814a67df148f401', 'width': 320}, {'height': 371, 'url': 'https://preview.redd.it/mu20scrxbgab1.jpg?width=640&crop=smart&auto=webp&s=22e0b30a138afc863d5ff0fa702ca4093fbefbf7', 'width': 640}, {'height': 557, 'url': 'https://preview.redd.it/mu20scrxbgab1.jpg?width=960&crop=smart&auto=webp&s=a9f4deae5486eaac0d392f1fd1e28c575960adc2', 'width': 960}, {'height': 627, 'url': 'https://preview.redd.it/mu20scrxbgab1.jpg?width=1080&crop=smart&auto=webp&s=184136b539b35487e0c16c21eb94b7595e133a28', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/mu20scrxbgab1.jpg?auto=webp&s=a31dfc04fa25ac3b83f12fca6281c32f1589c101', 'width': 1860}, 'variants': {}}]} |
||
Llama is good but, what else is available? Qualitatively | 13 | I am looking to build a local LLM, but I don't want to be constrained by the licensing if I want to build something that can be commercially available. I know Dolly and H2O have decent models. Do you guys have any suggestions for other models that perform just as well, or strategies to make them do so? I want to solve question answering over local documents. I appreciate any feedback, including models, strategies, existing solutions, etc. Really, anything you think might help. Thanks. | 2023-07-07T02:46:01 | https://www.reddit.com/r/LocalLLaMA/comments/14su9fp/llama_is_good_but_what_else_is_available/ | Altruistic-Ask-773 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14su9fp | false | null | t3_14su9fp | /r/LocalLLaMA/comments/14su9fp/llama_is_good_but_what_else_is_available/ | false | false | self | 13 | null |
The Repeating Issue? | 3 | As a noob, I'm using KoboldAI and Oobabooga simultaneously at the moment, but I found that with the same model (Wizard-Vicuna 13B GGML), Kobold never repeated sentences as much as Ooba did (with the llama.cpp loader).
I know nothing about the settings and I wonder how I can fix the repeating issue when using Ooba, or is there a guide? Many thanks.
Applying training data for specific agentic tasks | 1 | I have collected a large amount of very valuable training data for two specific tasks, advanced summarization and professional book editing.
What is the best way to utilise all these top-quality, human-created examples to improve the efficiency of the training data? | 2023-07-07T04:17:57 | https://www.reddit.com/r/LocalLLaMA/comments/14sw99j/applying_training_data_for_specific_agentic_tasks/ | Chris_in_Lijiang | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14sw99j | false | null | t3_14sw99j | /r/LocalLLaMA/comments/14sw99j/applying_training_data_for_specific_agentic_tasks/ | false | false | self | 1 | null |
LongLlama | 144 | 2023-07-07T05:23:30 | https://twitter.com/s_tworkowski/status/1677125863429795840?s=46&t=4Lg1z9tXUANCKLiHwRSk_A | Acrobatic-Site2065 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 14sxkcp | false | {'oembed': {'author_name': 'Szymon Tworkowski', 'author_url': 'https://twitter.com/s_tworkowski', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Introducing LongLLaMA 🦙, an unlimited-context version of OpenLLaMA fine-tuned at 8k & capable of extrapolating to 256k tokens!<br><br>We train it using our new Focused Transformer 🎯 technique (FoT). No degradation on short context, drop-in compatibility & Apache 2.0 license 🔥🔥<br>🧵 <a href="https://t.co/QiNl5xNYvl">pic.twitter.com/QiNl5xNYvl</a></p>— Szymon Tworkowski (@s_tworkowski) <a href="https://twitter.com/s_tworkowski/status/1677125863429795840?ref_src=twsrc%5Etfw">July 7, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/s_tworkowski/status/1677125863429795840', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_14sxkcp | /r/LocalLLaMA/comments/14sxkcp/longllama/ | false | false | 144 | {'enabled': False, 'images': [{'id': 'wiqVDKdW9PUcBslQ5gjC65akIWhI1wsvZ8CERhlUAs8', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/O47A58iKid31kjjUU_0hL5SyZG4D7Av9zBMb7ePSiFU.jpg?width=108&crop=smart&auto=webp&s=2c8e9eff854f4a11e20afaa895c526c4f0f972d6', 'width': 108}], 'source': {'height': 69, 'url': 'https://external-preview.redd.it/O47A58iKid31kjjUU_0hL5SyZG4D7Av9zBMb7ePSiFU.jpg?auto=webp&s=e970dff13d5488b5eb17cb517dbb907a6e595ba5', 'width': 140}, 'variants': {}}]} |
||
Training models on specific documentation | 1 | Has anyone tried training a model on specific documentation?
For instance, I recently started working on Next.js 13 and ChatGPT only knew about version 12. Even using the search feature, it would mix in partial 12 and 13 info and end up being incorrect. I have come across this a lot where a framework went through a major update in the last year and ChatGPT is useless when asked about it.
Has anyone added a specific set of docs to a local LLM and had good results with the model forming code for the specific doc version? If so what was your process? | 2023-07-07T05:28:21 | https://www.reddit.com/r/LocalLLaMA/comments/14sxngc/training_models_on_specific_documentation/ | Mr_Nice_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14sxngc | false | null | t3_14sxngc | /r/LocalLLaMA/comments/14sxngc/training_models_on_specific_documentation/ | false | false | self | 1 | null |
Best approach to multi-party conversations? | 2 | As title: what is the best way to have the LLM participate in a multi-human conversation without the LLM trying to steal (=> complete, hallucinate) the other humans' conversations?
That is, in a conversation involving more than one speaker, I want the bot to keep to itself, without trying to generate the other speakers' turns just because they were in the context buffer.
(Edited to be more specific about multi-human, not multi-characters.) | 2023-07-07T07:42:54 | https://www.reddit.com/r/LocalLLaMA/comments/14t072j/best_approach_to_multiparty_conversions/ | edwios | self.LocalLLaMA | 2023-07-08T14:01:23 | 0 | {} | 14t072j | false | null | t3_14t072j | /r/LocalLLaMA/comments/14t072j/best_approach_to_multiparty_conversions/ | false | false | self | 2 | null |
What is the best open-source LLM for in-context learning? | 2 | Hi, I wonder which model is the best open-source LLM for in-context learning?
I tested OpenLLaMA 7B. However, it seems that controlling the LLM through the prompt doesn't work well.
Could you recommend a better open-source LLM? I would like to use an LLM with a variety of prompts. | 2023-07-07T08:15:12 | https://www.reddit.com/r/LocalLLaMA/comments/14t0sle/what_is_the_best_opensource_llm_model_for/ | Diligent-Machine-303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14t0sle | false | null | t3_14t0sle | /r/LocalLLaMA/comments/14t0sle/what_is_the_best_opensource_llm_model_for/ | false | false | self | 2 | null
Highlight on some interesting 8k and 16k models. | 32 | During the last few days, Brandon / Bhenrym14 has published a few 8k (13b, 33b) and 16k (33b) ctx models based on Jon Durbin's Airoboros model (v 1.4.1).
Brandon's models don't use Kaiokendev's SuperHOT 8k LoRA, but rather one of:
- the Position Interpolation RoPE scaling technique (noted PI)
- the NTK-Aware Scaled RoPE Embeddings technique (noted NTK)
The perplexity of the PI method I tested is much more consistent with the original LLaMA perplexity, at least for the first 3,072 tokens.
I had great results in terms of quality over short stories using Ooba (Exllama) / Silly Tavern on the 33b 8k PI version (with a 4k context), but after the initial context length was passed, even my RTX3090 had memory allocation errors. I'm gonna try the 13b 8k today, and I invite you guys to test these models too, because they seem to have passed under the radar and any breakthrough in terms of context length deserves its fair share of.. attention.
And don't hesitate to share your config (CPU, RAM, GPU) and parameters (model loader & settings, ctx size, GUI, API, frontend, presets) used to run them, so we can have a better overall insight into these models!
Here's the Hugging Face link to Brandon's profile:
[https://huggingface.co/bhenrym14](https://huggingface.co/bhenrym14)
Edit: And for a GGML version of the Airoboros 1.4.1 33b 8k PI released by Brandon, Ycros made K-Quant versions for LlamaCPP / KoboldCPP:
[https://huggingface.co/ycros/airoboros-33b-gpt4-1.4.1-PI-8192-GGML](https://huggingface.co/ycros/airoboros-33b-gpt4-1.4.1-PI-8192-GGML)
I'm playing with the GGML one (Q3_K_M) now, because I have some memory allocation troubles with the GPTQ versions if I want to use 4k context.
\---
Edits : some mistakes corrected after reading Bhenrym14's comment, and some additions. | 2023-07-07T11:28:50 | https://www.reddit.com/r/LocalLLaMA/comments/14t4lbc/highlight_on_some_interesting_8k_and_16k_models/ | Nexesenex | self.LocalLLaMA | 2023-07-08T01:58:32 | 0 | {} | 14t4lbc | false | null | t3_14t4lbc | /r/LocalLLaMA/comments/14t4lbc/highlight_on_some_interesting_8k_and_16k_models/ | false | false | self | 32 | {'enabled': False, 'images': [{'id': 'CR-uO4OIbgE2ZF1bivXbT9fbp2L5xy4jBBibjmhfHXo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ew0C6rX6W76-c5SZnd5E8x0ffwDcIEdWnTInfDHYjZE.jpg?width=108&crop=smart&auto=webp&s=4be80003f3cab181b3f2b632cbdd84669f0b9222', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ew0C6rX6W76-c5SZnd5E8x0ffwDcIEdWnTInfDHYjZE.jpg?width=216&crop=smart&auto=webp&s=8df7eec20cf15f474cac49bebd100922ac009561', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ew0C6rX6W76-c5SZnd5E8x0ffwDcIEdWnTInfDHYjZE.jpg?width=320&crop=smart&auto=webp&s=e8396156d6c449ca8a248a7e2d3116f67f1c4ca3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ew0C6rX6W76-c5SZnd5E8x0ffwDcIEdWnTInfDHYjZE.jpg?width=640&crop=smart&auto=webp&s=36f3195bc92949451793bfaff5f5b550ea46d7b3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ew0C6rX6W76-c5SZnd5E8x0ffwDcIEdWnTInfDHYjZE.jpg?width=960&crop=smart&auto=webp&s=d83e1729bc424e63af4eac66c8a27d8556ee98f6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ew0C6rX6W76-c5SZnd5E8x0ffwDcIEdWnTInfDHYjZE.jpg?width=1080&crop=smart&auto=webp&s=29505785174e9f42874a609fb28922c3fd00dab8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ew0C6rX6W76-c5SZnd5E8x0ffwDcIEdWnTInfDHYjZE.jpg?auto=webp&s=0c1ce1f941cca0a55216409edabe2a1f782b710f', 'width': 1200}, 'variants': {}}]} |
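To make the difference between the two techniques concrete: Position Interpolation rescales the position index so a long context is squeezed back into the range the base model was trained on, while the NTK-aware variant keeps positions as-is and raises the RoPE frequency base instead. A rough sketch (conceptual only, not the code used in these models):

```python
import numpy as np

def rope_angles(position, dim=128, base=10000.0, scale=1.0):
    # Rotary-embedding angles for one position index.
    # scale < 1.0 is Position Interpolation: with scale = 2048/8192, positions
    # up to 8192 are mapped back into the 0..2048 range seen during training.
    # The NTK-aware approach instead leaves `scale` at 1.0 and increases `base`.
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    return (position * scale) * inv_freq

plain = rope_angles(8000)                            # far outside the trained range
interpolated = rope_angles(8000, scale=2048 / 8192)  # squeezed back inside it
```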
I have a large code base and documentation for a system and want to explore it. What are my options? | 2 | **Current Resources:**
- An extensive codebase
- Comprehensive documentation in markdown
- A Mac Studio equipped with an M1 Max chip and 32GB RAM
**Objectives:**
- Implement AI to parse and learn from the code and associated documentation via prompts
- Expand my knowledge regarding Large Language Models (LLMs), focusing on how to operate them autonomously
I do have a PC that features PCIe 3.0 and an Nvidia 1080 GPU at my disposal. However, I'm open to upgrading to a more powerful system if the Mac Studio isn't sufficient.
Could anyone suggest the best options for me? Any specific software or models you'd recommend?
I possess a high-level understanding of AI and LLMs, and have more than 25 years of experience with various programming languages and operating systems, Python in particular. I have the technical proficiency required, but I'm yet to master LLMs.
I hope this question is within the subreddit's guidelines as I could genuinely use some guidance to kickstart this endeavor. | 2023-07-07T11:35:59 | https://www.reddit.com/r/LocalLLaMA/comments/14t4qwn/i_have_a_large_code_base_and_documentation_for_a/ | 0OOO00000OO00O0O0OOO | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14t4qwn | false | null | t3_14t4qwn | /r/LocalLLaMA/comments/14t4qwn/i_have_a_large_code_base_and_documentation_for_a/ | false | false | self | 2 | null
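For questions like the one above, the approach most people reach for is retrieval: embed the docs and code chunks once, pull the most relevant chunks per question, and paste them into the prompt of whatever local model is running. A minimal sketch, with the embedding model, paths, and chunking all being assumptions rather than recommendations:

```python
from pathlib import Path
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small, CPU-friendly embedder

# Naive chunking: one chunk per paragraph of every markdown file under docs/.
chunks = []
for md_file in Path("docs").rglob("*.md"):
    chunks += [p for p in md_file.read_text().split("\n\n") if p.strip()]

chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

def top_k(question, k=5):
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q  # cosine similarity, since vectors are normalized
    return [chunks[i] for i in np.argsort(-scores)[:k]]

context = "\n\n".join(top_k("How does the auth module issue tokens?"))
prompt = f"Use the context to answer.\n\nContext:\n{context}\n\nQuestion: ..."
# `prompt` is then sent to whichever local LLM backend you end up choosing.
```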
Official WizardLM-13B-V1.1 Released! Train with Only 1K Data! Can Achieve 86.32% on AlpacaEval! | 219 |
* Today, the ***WizardLM Team*** has released their **Official** **WizardLM-13B-V1.1** model trained with only 🔥**1K** 🔥high-quality evolved data!
* Paper: [https://arxiv.org/abs/2304.12244](https://arxiv.org/abs/2304.12244)
* The project repo: [WizardLM](https://github.com/nlpxucan/WizardLM/tree/main)
* The official Twitter: [WizardLM\_AI](https://twitter.com/WizardLM_AI)
* HF Model: [WizardLM/WizardLM-13B-V1.1](https://huggingface.co/WizardLM/WizardLM-13B-V1.1)
* Online demo links:
1. [https://924134c0fad28192.gradio.app/](https://924134c0fad28192.gradio.app/)
2. [https://e8a06366ccd1c4d1.gradio.app/](https://e8a06366ccd1c4d1.gradio.app/)
3. [https://dfc5113f66739c80.gradio.app/](https://dfc5113f66739c80.gradio.app/)
(We will update the demo links in our [github](https://github.com/nlpxucan/WizardLM/tree/main).)
**WizardLM-13B-V1.1 achieves:**
1) 6.74 on MT-Bench
2) 🔥**86.32% on Alpaca Eval (ChatGPT is 86.09%)**
3) 99.3% on WizardLM Eval (Chatgpt is 100%)
https://preview.redd.it/spzie2gwajab1.png?width=1345&format=png&auto=webp&s=3c88a74b53da9fc987c58cc7dc6b9ac4d74d3efd
https://preview.redd.it/rvb7utnxajab1.png?width=1238&format=png&auto=webp&s=bea32fb5213c924fb1602ec87d64c4af85b6c0b6
Note: MT-Bench and AlpacaEval are all self-test, will push update and request review. All tests are completed under their official settings. | 2023-07-07T12:27:38 | https://www.reddit.com/r/LocalLLaMA/comments/14t5wzt/official_wizardlm13bv11_released_train_with_only/ | cylaw01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14t5wzt | false | null | t3_14t5wzt | /r/LocalLLaMA/comments/14t5wzt/official_wizardlm13bv11_released_train_with_only/ | false | false | 219 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
|
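For anyone who wants to try the checkpoint announced above locally, a minimal unquantized loading sketch with transformers could look like the following; it assumes enough GPU memory for a 13B model in fp16, and the USER/ASSISTANT prompt format is an assumption borrowed from the Vicuna convention, so check it against the official WizardLM repo:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WizardLM/WizardLM-13B-V1.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "You are a helpful AI assistant. USER: Hello ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```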
Guanaco-Unchained Dataset | 46 | Guanaco is one of my favorite models, so it seemed like a no-brainer to use it as the base for my own custom dataset. But when I looked into the dataset, I was surprised at some of the entries. I removed most of the non-English prompts so I could manually check the quality of the data. I took out as much alignment, refusals, "I'm OpenAssistant", implied morality, and "as an AI" as I could. Anything that was questionable quality, I removed. Altogether, it's about 75% smaller than the original.
My plan is to train a model on this pruned dataset, and assuming it works well enough, I'll start blending in my own data into it to make a totally new dataset. That way I can use the pruned guanaco model as a baseline/benchmark to see if the new data I'm adding actually has the effect I want on the output. Guanaco is pretty decent at erotica, but there is hardly any at all in the training data I noticed. Can't wait to see how it is with a solid sampling of smut added into it.
One thing maybe someone here can answer for me. I noticed that there were some instructions where the last response was from the Human instead of the Assistant, making it seem like the conversation was incomplete or unanswered. Does anyone know if this was intentional? At first I thought it must be. But after a while (and maybe I was imagining it), I felt like a significant number of these straggler response examples were weird or bad quality. So I just did a search for all entries that ended with a Human: response and eliminated that last human response (not the whole entry). Was there some benefit to including examples like that? I can find specific examples if I'm not being clear.
This dataset probably isn't TOO impressive on its own right now, but I'd be surprised if I didn't save at least a few people a head-start. As far as I can tell, Guanaco is one of the best datasets that's commercially licensed with zero dependency on OpenAI API drama. If there are better ones out there, I'd love to hear about them.
[https://huggingface.co/datasets/CheshireAI/guanaco-unchained](https://huggingface.co/datasets/CheshireAI/guanaco-unchained) | 2023-07-07T12:48:43 | https://www.reddit.com/r/LocalLLaMA/comments/14t6efv/guanacounchained_dataset/ | CheshireAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14t6efv | false | null | t3_14t6efv | /r/LocalLLaMA/comments/14t6efv/guanacounchained_dataset/ | false | false | self | 46 | {'enabled': False, 'images': [{'id': 'Un8XOSXVdR2fXTrAk-LXBvl76WBGBSbJjSEbS1J8wBc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Z2_2ePrPQLpu0DnHUIwxAtboXJ--sa0RmLAUMsYtYmI.jpg?width=108&crop=smart&auto=webp&s=9406a804b0b5c0b9a51df46276faa88f23036dc9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Z2_2ePrPQLpu0DnHUIwxAtboXJ--sa0RmLAUMsYtYmI.jpg?width=216&crop=smart&auto=webp&s=ca80c481fe2e4f80cef00a2f61bda1158eff3e15', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Z2_2ePrPQLpu0DnHUIwxAtboXJ--sa0RmLAUMsYtYmI.jpg?width=320&crop=smart&auto=webp&s=c34710eeb938e027a149a95f87fae75c1c8cb89a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Z2_2ePrPQLpu0DnHUIwxAtboXJ--sa0RmLAUMsYtYmI.jpg?width=640&crop=smart&auto=webp&s=5180cf343fb6280324cd4fd77d3c3ffe5370f45c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Z2_2ePrPQLpu0DnHUIwxAtboXJ--sa0RmLAUMsYtYmI.jpg?width=960&crop=smart&auto=webp&s=696868e9f18fe0ca6252d88f31a5115d3c4a049b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Z2_2ePrPQLpu0DnHUIwxAtboXJ--sa0RmLAUMsYtYmI.jpg?width=1080&crop=smart&auto=webp&s=28d316371e40af2553f15e8ce441bf18bc1c361c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Z2_2ePrPQLpu0DnHUIwxAtboXJ--sa0RmLAUMsYtYmI.jpg?auto=webp&s=1b60b8a3dc641e13aeea6defdd645cb289a0bd90', 'width': 1200}, 'variants': {}}]} |
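As a concrete illustration of the "straggler human turn" cleanup described in the post above, here is a rough sketch of how such entries could be trimmed. It assumes the common Guanaco layout of a single `text` field with `### Human:` / `### Assistant:` turn markers, which may not match the exact files in the repo:

```python
import json

def trim_trailing_human(text: str) -> str:
    human, assistant = "### Human:", "### Assistant:"
    last_h, last_a = text.rfind(human), text.rfind(assistant)
    # If the conversation ends with a human turn that never got a reply,
    # drop that final human turn but keep the rest of the entry.
    if last_h > last_a:
        return text[:last_h].rstrip()
    return text

with open("guanaco.jsonl") as f:
    rows = [json.loads(line) for line in f]

for row in rows:
    row["text"] = trim_trailing_human(row["text"])

with open("guanaco_trimmed.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row, ensure_ascii=False) + "\n")
```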
Example of a small fine-tuning | 11 | Does anyone have a good guide or tutorial, or could perhaps point me in the right direction, for fine-tuning on a small set of company data that I can then call via an API? | 2023-07-07T14:58:53 | https://www.reddit.com/r/LocalLLaMA/comments/14t9nbp/example_of_a_small_fine_tuning/ | mehrdotcom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14t9nbp | false | null | t3_14t9nbp | /r/LocalLLaMA/comments/14t9nbp/example_of_a_small_fine_tuning/ | false | false | self | 11 | null
CPU with or without integrated graphics? (13900K vs. 13900KF) | 6 | I'm currently building a combo gaming and ML rig with an RTX 4090.
Two questions:
1. Should I consider getting a CPU with integrated graphics (13900K)? Would that free up resources from the 4090 vs. using the 13900KF which doesn't have integrated graphics?
2. Is it worth dual-booting Windows and Linux to have access to things like Triton? Any other perks of using Linux vs. just using Windows and potentially WSL?
Thank you! | 2023-07-07T15:05:32 | https://www.reddit.com/r/LocalLLaMA/comments/14t9u8l/cpu_with_or_without_integrated_graphics_13900k_vs/ | yuicebox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14t9u8l | false | null | t3_14t9u8l | /r/LocalLLaMA/comments/14t9u8l/cpu_with_or_without_integrated_graphics_13900k_vs/ | false | false | self | 6 | null |
CodeGen2.5: Small, but mighty | 72 | 2023-07-07T15:25:59 | https://blog.salesforceairesearch.com/codegen25/ | Acrobatic-Site2065 | blog.salesforceairesearch.com | 1970-01-01T00:00:00 | 0 | {} | 14tad84 | false | null | t3_14tad84 | /r/LocalLLaMA/comments/14tad84/codegen25_small_but_mighty/ | false | false | default | 72 | null |
|
Chat with your PDFs by self-hosting LocalGPT on any cloud | 29 | 2023-07-07T16:12:44 | https://github.com/skypilot-org/skypilot/blob/master/llm/localgpt/README.md | skypilotucb | github.com | 1970-01-01T00:00:00 | 0 | {} | 14tbmyt | false | null | t3_14tbmyt | /r/LocalLLaMA/comments/14tbmyt/chat_with_your_pdfs_by_selfhosting_localgpt_on/ | false | false | 29 | {'enabled': False, 'images': [{'id': 'VJjaIGoCCzeLqmBoQPWLe2NpMQMfCXn29K06b1czAqE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fh59wfMT5GLItz0KABDvRchUyp8E_0BX2WHcpFTcrV4.jpg?width=108&crop=smart&auto=webp&s=11e5935f94c7836c0132c75744d6b50f4208c508', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fh59wfMT5GLItz0KABDvRchUyp8E_0BX2WHcpFTcrV4.jpg?width=216&crop=smart&auto=webp&s=8e1514132380a0e36cc34adb9f95fb190d8c9d28', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fh59wfMT5GLItz0KABDvRchUyp8E_0BX2WHcpFTcrV4.jpg?width=320&crop=smart&auto=webp&s=5843407f2c3f0e333f2bf0662b770191a46e713b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fh59wfMT5GLItz0KABDvRchUyp8E_0BX2WHcpFTcrV4.jpg?width=640&crop=smart&auto=webp&s=edf71f4f93c594f0083e68c5eb38415f79aed869', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fh59wfMT5GLItz0KABDvRchUyp8E_0BX2WHcpFTcrV4.jpg?width=960&crop=smart&auto=webp&s=b98ceefc3145d996aaddcabe654dfb77fd9de7dd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fh59wfMT5GLItz0KABDvRchUyp8E_0BX2WHcpFTcrV4.jpg?width=1080&crop=smart&auto=webp&s=d5a7f51dcca2f42fb43ccfa5cb6075532bf2f612', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fh59wfMT5GLItz0KABDvRchUyp8E_0BX2WHcpFTcrV4.jpg?auto=webp&s=3a7135ee02a30430a7178e914a86ef1256653864', 'width': 1200}, 'variants': {}}]} |
||
System requirements to fine-tune Stable Vicuna 13B | 11 | Noob question but gotta ask: I've built a PC with 32 GB DDR4 RAM, a Ryzen 5 5600G, an RTX 3060 12 GB, a 750 W power supply, 1 TB of storage, and a not-so-great cooling system and case.
I was wondering if I could fine-tune Vicuna on this or if I should shift to a cloud GPU. My biggest fear is the PC overheating and shutting down; do you guys think it will hold up? | 2023-07-07T17:22:40 | https://www.reddit.com/r/LocalLLaMA/comments/14tdi1p/system_requirement_to_fine_tune_stable_vicuna13b/ | Sensitive-Analyst288 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14tdi1p | false | null | t3_14tdi1p | /r/LocalLLaMA/comments/14tdi1p/system_requirement_to_fine_tune_stable_vicuna13b/ | false | false | self | 11 | null
Doesn't a 4090 massively overpower a 3090 for running local LLMs? | 40 | People seem to consider them both as about equal for the price / performance. I know 4090 doesn't have any more vram over 3090, but in terms of compute according to the specs 3090 has 142 tflops at fp16 while 4090 has 660 tflops at fp8. Isn't that almost a five-fold advantage in favour of 4090, at the 4 or 8 bit precisions typical with local LLMs? Or am I missing something? | 2023-07-07T18:48:54 | https://www.reddit.com/r/LocalLLaMA/comments/14tfr8h/doesnt_a_4090_massively_overpower_a_3090_for/ | eesahe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14tfr8h | false | null | t3_14tfr8h | /r/LocalLLaMA/comments/14tfr8h/doesnt_a_4090_massively_overpower_a_3090_for/ | false | false | self | 40 | null |
[Discussion] Looking for an Open-Source Speech to Text model (english) that captures filler words, pauses and also records timestamps for each word. | 4 | Looking for an Open-Source Speech to Text model (english) that captures filler words, pauses and also records timestamps for each word.
The model should capture the text verbatim, without much processing. The text should include the false starts to a sentence, misspoken words, incorrect pronunciation or word form etc.
The transcript is being captured to ascertain the speaking ability of the speaker hence all this information is required.
Example Transcription of Audio:
Yes. One of the most important things I have is my piano because um I like playing the piano. I got it from my parents to my er twelve birthday, so I have it for about nine years, and the reason why it is so important for me is that I can go into another world when I’m playing piano. I can forget what’s around me and what ... I can forget my problems and this is sometimes quite good for a few minutes. Or I can play to relax or just, yes to ... to relax and to think of something completely different.
I believe OpenAI Whisper has support for recording timestamps. I don't want to rely on a paid API service for the speech-to-text transcription. | 2023-07-07T19:09:10 | https://www.reddit.com/r/LocalLLaMA/comments/14tgbik/discussion_looking_for_an_opensource_speech_to/ | awinml1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14tgbik | false | null | t3_14tgbik | /r/LocalLLaMA/comments/14tgbik/discussion_looking_for_an_opensource_speech_to/ | false | false | default | 4 | null
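On the Whisper point raised above: recent releases of the open-source `openai-whisper` package can produce word-level timestamps locally, with no paid API involved. A minimal sketch (assuming a version that supports `word_timestamps`; also note that Whisper tends to normalize away some filler words, so the output should be checked against the verbatim requirement):

```python
import whisper

model = whisper.load_model("medium.en")
result = model.transcribe("speaking_test.wav", word_timestamps=True)

for segment in result["segments"]:
    for word in segment.get("words", []):
        print(f"{word['start']:7.2f} {word['end']:7.2f}  {word['word']}")
```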
Best setup for local 3.5 replacement with upper mid tier gaming rig? | 0 | Complete noob to local setups here.
​
Here's my specs:
32GB RAM
AMD Ryzen 5 3600
3080Ti
​
Looking to replace GPT3.5 for personal and work purposes with something that's at least 70% as good. I've played with GPT4All using Falcon, Vicuna (which doesn't even work unfortunately), Hermes, and Wizard Uncensored. None of them are even close right now for my applications, unfortunately. Accuracy, length, depth, and sassiness (?) of responses all make these models mostly useless for me right now.
I'm not sure if the bottleneck is in GPT4All, my system specs, or the lack of refinement of the models themselves. I'm willing (and actually even curious) to learn how to train the models myself using my own data but am not sure if it'd really be worth the time if there are simply better options out there than what I've played with so far.
Cheers! | 2023-07-07T19:19:21 | https://www.reddit.com/r/LocalLLaMA/comments/14tglck/best_setup_for_local_35_replacement_with_upper/ | powerfulndn | self.LocalLLaMA | 2023-07-07T20:42:02 | 0 | {} | 14tglck | false | null | t3_14tglck | /r/LocalLLaMA/comments/14tglck/best_setup_for_local_35_replacement_with_upper/ | false | false | default | 0 | null |
What does it mean to 'Merge weights' in regards to LoRA or QLoRA? | 3 | If someone performs a fine-tune of a GPTQ 4-bit version of a model and builds a new set of weights, can they then 'merge' those weights back into the base model? Is that why I don't see any weight files in many of the quantized GPTQ model repositories on Hugging Face?
For example, this is a fine-tune of StarCoder which was then quantized by TheBloke:
TheBloke/starcoderplus-GPTQ
-But I don't see any weight files in the repository, so I assume those were merged back into the model? So then I could further fine-tune it to my use case building upon the tuning that it has already received, correct? | 2023-07-07T19:21:23 | https://www.reddit.com/r/LocalLLaMA/comments/14tgnge/what_does_it_mean_to_merge_weights_in_regards_to/ | gentlecucumber | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14tgnge | false | null | t3_14tgnge | /r/LocalLLaMA/comments/14tgnge/what_does_it_mean_to_merge_weights_in_regards_to/ | false | false | self | 3 | null |
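On the merging question itself: a (Q)LoRA adapter is just a pair of small matrices per target layer, and "merging" means folding their product back into the frozen base weight, W_merged = W + (alpha / r) * B @ A. Once that is done there is nothing adapter-specific left to ship, which is why merged (and then quantized) repos contain only ordinary model weight files. A toy sketch of the idea, not tied to any particular repo:

```python
import torch

d_out, d_in, r, alpha = 64, 64, 8, 16
W = torch.randn(d_out, d_in)      # frozen base weight
A = torch.randn(r, d_in) * 0.01   # LoRA "down" matrix (trained)
B = torch.randn(d_out, r) * 0.01  # LoRA "up" matrix (random here; real LoRA starts it at zero)

def lora_forward(x):
    # Inference with the adapter kept separate from the base weight.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

W_merged = W + (alpha / r) * (B @ A)  # fold the adapter into the base weight

x = torch.randn(4, d_in)
assert torch.allclose(lora_forward(x), x @ W_merged.T, atol=1e-4)
```

In the PEFT library this folding step is roughly what `merge_and_unload()` does.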
Does exllama load the model directly into the GPU? | 18 | When using autogptq I know that the model is first fully loaded into the CPU, deserialized, and then sent to the GPU.
But when I was using exllama to load a model for inference on Google Colab and Kaggle notebooks I noticed that the model seemed to be loading directly into the GPU?
I didn't see the RAM level increase. I'm not sure if that's because of an inaccuracy in the RAM metric or because the model was loaded directly into the GPU.
If that's not the case and it is in fact loaded into the CPU before the GPU, is there any way to load it directly into the GPU? So that I don't have to provision that high of a RAM if I'm opting for some cloud service for hosting just an LLM. | 2023-07-07T19:21:53 | https://www.reddit.com/r/LocalLLaMA/comments/14tgnxp/does_exllama_load_the_model_directly_into_the_gpu/ | Chirumer | self.LocalLLaMA | 2023-07-07T19:47:37 | 0 | {} | 14tgnxp | false | null | t3_14tgnxp | /r/LocalLLaMA/comments/14tgnxp/does_exllama_load_the_model_directly_into_the_gpu/ | false | false | self | 18 | null |
Chinese and Korean output from WizardLM 1.1 | 0 | I'm using the quantized version of WizardLM 1.1 with 4 bits and I consistently get Chinese and Korean output. Example:
You are a helpful AI assistant.
USER: Hello
ASSISTANT: 안녕하세요! 어떻게 도와드릴까요? [Korean: "Hello! How can I help you?"]
USER: can you speak english, please?
ASSISTANT: 죄송합니다. 영어로 대화해 주실 수 있습니다. 어떻게 도와드릴까요? [Korean, roughly: "I'm sorry. You can converse in English. How can I help you?"]
I'm using it with koboldcpp. I saw no mention of korean or chinese in the training data. Is this a bug? | 2023-07-07T20:02:46 | https://www.reddit.com/r/LocalLLaMA/comments/14thred/chinese_and_korean_output_from_wizardlm_11/ | KillerMiller13 | self.LocalLLaMA | 2023-07-07T20:11:46 | 0 | {} | 14thred | false | null | t3_14thred | /r/LocalLLaMA/comments/14thred/chinese_and_korean_output_from_wizardlm_11/ | false | false | default | 0 | null |
LLaMA on Bittensor | 0 | Is anybody in here running a fine-tuned LLaMA on Bittensor?
If so, do you have any tips for creating a strong model?
I'm considering it, but I want a better idea of what specs people are using to remain competitive on the network. | 2023-07-07T21:34:44 | https://www.reddit.com/r/LocalLLaMA/comments/14tk62g/llama_on_bittensor/ | Bramp10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14tk62g | false | null | t3_14tk62g | /r/LocalLLaMA/comments/14tk62g/llama_on_bittensor/ | false | false | default | 0 | null