title
stringlengths 1
300
| score
int64 0
8.54k
| selftext
stringlengths 0
40k
| created
timestamp[ns] | url
stringlengths 0
780
| author
stringlengths 3
20
| domain
stringlengths 0
82
| edited
timestamp[ns] | gilded
int64 0
2
| gildings
stringclasses 7
values | id
stringlengths 7
7
| locked
bool 2
classes | media
stringlengths 646
1.8k
⌀ | name
stringlengths 10
10
| permalink
stringlengths 33
82
| spoiler
bool 2
classes | stickied
bool 2
classes | thumbnail
stringlengths 4
213
| ups
int64 0
8.54k
| preview
stringlengths 301
5.01k
⌀ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I can't find a code example of using a prompt in langchain with a GGML quantized llama-based model! | 6 | Hello!
I just can't find any code snippet to run a simple prompt using a GGML model using a llama-based language on langchain.
I've found LlamaIndex but I'm not sure this is what I'm looking for
Does anyone have something in their code stash?
Thanks in advance! | 2023-06-28T19:38:35 | https://www.reddit.com/r/LocalLLaMA/comments/14liwen/i_cant_find_a_code_example_of_using_a_prompt_in/ | Factemius | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14liwen | false | null | t3_14liwen | /r/LocalLLaMA/comments/14liwen/i_cant_find_a_code_example_of_using_a_prompt_in/ | false | false | self | 6 | null |
[koboldcpp] How to get bigger context size? | 2 | Hi, I'm pretty new to all this AI stuff and admit I haven't really understood how all the parts play together. One thing I'd like to achieve is a bigger context size (bigger than the 2048 token) with kobold.cpp but I don't know what the limiting factor is. Is it kobold.cpp itself? Is it the model version? Is it the model type? Is it kobold.ai? Is it my hardware? Is it a combination of the above?
So in short: What requirements do I have to fulfill to get a text generation with a longer context window? Preferably still with koboldcpps "smart context" enabled so it doesn't re-process the whole context every time once the window is full. | 2023-06-28T19:47:25 | https://www.reddit.com/r/LocalLLaMA/comments/14lj4iz/koboldcpp_how_to_get_bigger_context_size/ | AllesMeins | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14lj4iz | false | null | t3_14lj4iz | /r/LocalLLaMA/comments/14lj4iz/koboldcpp_how_to_get_bigger_context_size/ | false | false | self | 2 | null |
Help with LLM Stable Diffusion Prompt Generator | 4 | Hello folks
My company has asked me to come up with a Stable Diffusion prompt generator using oobabooga+llm that will run on a local machine that everyone can access. The older heads higher up don't want to use chatgpt for privacy reasons. I have managed to figure out how to do this but I'm pretty sure its not the right way. So I am here asking for your help/feedback. With that TL;DR out of the way I will now explain the situation in more detail.
Hardware specs: i9, 3090, 64gb ram, windows 11
As mentioned earlier, I've got a working prototype. The full instruction is about 950 tokens/3900 characters where I explain the structure of a stable diffusion prompt, followed by explanations of the different elements in it, followed by examples and finally instruct the llm to ask me for input and it spits out prompts.
I am using WizardLM 13B/33B and from my testing there isn't much difference between the outputs from 13B vs 33B so I usually stick to 13B as it takes less VRAM and that leaves some memory for Stable Diffusion. The prompts it generates are comparable to Chatgpt. Obviously Chatgpt knows more artists/styles but in terms of "flowery" text WizardLM is good enough. I've set oobabooga to 512 max\_new\_tokens and Instruction template to Vicuna-v1.1.
Now here's a list of issues I've come across that I'd like help with
\- Both 13B/33B cannot handle the full prompt in one shot(in the text generation tab). I have to break it up into 3 or 4 parts and mention at the end of every part to not generate prompts, further instructions to follow(also in the text generation tab). Only then does it behave and waits till the end before asking me for input. I thought the model has a 2048 context so why does this happen?
\- Even after breaking it up into 3/4 parts it seems to forget things I've asked for. My guess is I need to get better at prompt engineering so it can understand what is a requirement vs what is an explanation. Is that right? Are there any preset characters/brackets/shortcodes I should be using so it understands my instructions better?
\- Usually when I am iterating on the instructions I will clear history and start from scratch, pasting the instructions one block at a time. The other night I noticed after a while all replies ended with "hope you have a good night" or "have a good day" type sentences. Not sure what to make of that...
\- I am using instruct mode as its the only one that seems to work, should I be using another mode?
\- Changing the Generation Parameters preset seems to change its behavior from understanding what I am asking for to going off the rails. I cant find which one is recommended for WizardLM. Right now I am using LLama-precise and using the "Creative" mods as recommended in this subreddit wiki. Is that the right way? Does every model require me to use a different preset?
\- Finally, what other models would you recommend for this task? I do have a bunch downloaded but I cannot seem to get any of them to work(besides wizardlm). None of them will accept the full prompt and even if I break it up into parts it either starts talking to itself or generates prompts for random things while I am in the process of feeding it instructions. Would be cool if I could use a storytelling LM to paint a vivid picture with words as that would be very useful in a stable diffusion prompt.
\- (OPTIONAL) Once everything is working I save a json file of the chat history and manually load it next time I run oobabooga. Is it possible to automate this so when I deploy in the office it loads the model+json when the webui auto launches?
\- (OPTIONAL) Can someone point me to how I can have oobabooga and automatic1111 talk to each other so I don't have to copy paste prompts from one window to another? Best case: Have this running as an extension in Automatic1111. Acceptable case: Have a send to Automatic1111 button in oobabooga or something along those lines.
I can barely understand what's going on but somehow managed to get this far from mostly crappy clickbait youtube videos. Hopefully I can get some answers that point me in the right direction. Please help lol. Thank you.
​ | 2023-06-28T20:17:57 | https://www.reddit.com/r/LocalLLaMA/comments/14ljwz5/help_with_llm_stable_diffusion_prompt_generator/ | Tarubali | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ljwz5 | false | null | t3_14ljwz5 | /r/LocalLLaMA/comments/14ljwz5/help_with_llm_stable_diffusion_prompt_generator/ | false | false | self | 4 | null |
Salesforce releases XGen-7B, a new 7B foundational model trained on up to 8K sequence length for 1.5T tokens, under Apache-2.0 | 202 | 2023-06-28T20:50:15 | https://blog.salesforceairesearch.com/xgen/ | Covid-Plannedemic- | blog.salesforceairesearch.com | 1970-01-01T00:00:00 | 0 | {} | 14lkqgj | false | null | t3_14lkqgj | /r/LocalLLaMA/comments/14lkqgj/salesforce_releases_xgen7b_a_new_7b_foundational/ | false | false | default | 202 | null |
|
Is it possible to train a Lora on a 6GB vram GPU? | 6 | Hello everyone!
I want to fine-tune OpenLlaMA 3B and make something similar to this project but on top of Llama model ([https://github.com/stephwag/doki-rnn](https://github.com/stephwag/doki-rnn)). But I don't have a very powerful GPU. It is GTX 1660 with 6GB vram. I can easily run 13B models in GGML formats but can't make a Lora for 3B model. For the first test I tried to create a small lora trained on 10 letters in Oobabooga WebUI. I tried to load the model in GPTQ and GGML formats, but got only a few errors. When I try with GGML format I get the error "LlamaCppModel' object has no attribute 'decode'". When I try with GPTQ-for-Llama format using monkey\_patch I get the error "NotImplementedError". When I try with AutoGPTQ format using monkey\_patch I get the error "Target module QuantLinear() is not supported". As I understand it, to create a lora in Oobabooga you need to load the model in Transformers format, but I can't to load the model in Transformers format because of Out Of Memory error. If I load it in 4-bit or 8-bit I get error "size mismatch for base\_model"
​
So, here are my questions
1 Is it even possible to make a Lora on a GPU with 6GB vram now?
2 Is it possible to make Lora on GPQ or GGML model formats?
3 As far as I know, there is such a thing as qlora which seems to be well optimized. Is it possible to use it on a 6GB vram?
4 If it is possible, I would really appreciate if you could provide links to guides on how to do it | 2023-06-28T21:43:41 | https://www.reddit.com/r/LocalLLaMA/comments/14lm3t1/is_it_possible_to_train_a_lora_on_a_6gb_vram_gpu/ | MindInTheDigits | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14lm3t1 | false | null | t3_14lm3t1 | /r/LocalLLaMA/comments/14lm3t1/is_it_possible_to_train_a_lora_on_a_6gb_vram_gpu/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'rtomf2LJuH4XC-ShOknw402gtwc2-r7Bd4nSsnjpyc4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/o25PLkZ1h_CuHupI__UluVQEoCAIffo3U0qFMiBx6zw.jpg?width=108&crop=smart&auto=webp&s=01212e082497bf2d6b77eee8a890618f5c931419', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/o25PLkZ1h_CuHupI__UluVQEoCAIffo3U0qFMiBx6zw.jpg?width=216&crop=smart&auto=webp&s=5fccd30d457d1f3b31d5fc1c7ccb04be804e0e91', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/o25PLkZ1h_CuHupI__UluVQEoCAIffo3U0qFMiBx6zw.jpg?width=320&crop=smart&auto=webp&s=ed5efae0a29f67da320574a0d4c8af826df7f2b5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/o25PLkZ1h_CuHupI__UluVQEoCAIffo3U0qFMiBx6zw.jpg?width=640&crop=smart&auto=webp&s=fef7442ede41a934e31831ba161fe2c6fab71dc8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/o25PLkZ1h_CuHupI__UluVQEoCAIffo3U0qFMiBx6zw.jpg?width=960&crop=smart&auto=webp&s=71f9f96cd0c9176c1774dd0e905b6af5441ce1bd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/o25PLkZ1h_CuHupI__UluVQEoCAIffo3U0qFMiBx6zw.jpg?width=1080&crop=smart&auto=webp&s=100766138e9a909f0812788e01f3e2a366ce48f3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/o25PLkZ1h_CuHupI__UluVQEoCAIffo3U0qFMiBx6zw.jpg?auto=webp&s=8695a76bee4fc1749d3fd43de0734e7b713def1a', 'width': 1200}, 'variants': {}}]} |
Orca Mini 3B on a Pi 4 (in real time) | 104 | 2023-06-28T23:03:47 | https://v.redd.it/5a20csoyau8b1 | MoffKalast | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14lo34l | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/5a20csoyau8b1/DASHPlaylist.mpd?a=1696149312%2COWY0NmMwZGQ1ZWZjZmY5NmU3ODY1MmQ2NTRhODczZWY2YzdjOWQ4MTQ4Mzc1MThkNmIyNzZhZGJhMjA3YzgyNQ%3D%3D&v=1&f=sd', 'duration': 127, 'fallback_url': 'https://v.redd.it/5a20csoyau8b1/DASH_480.mp4?source=fallback', 'height': 406, 'hls_url': 'https://v.redd.it/5a20csoyau8b1/HLSPlaylist.m3u8?a=1696149312%2CMmZmNGY3NjYxYWViYjYxYjI0ZTE4NDAxYWY2YTJkMTk3MjE2MjQ5YWFkNjYwNTVlZmFkMDg3YmNjMWRjNjY5OA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/5a20csoyau8b1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 854}} | t3_14lo34l | /r/LocalLLaMA/comments/14lo34l/orca_mini_3b_on_a_pi_4_in_real_time/ | false | false | 104 | {'enabled': False, 'images': [{'id': 'Raoi9DFiYChZbdsxHieGSvhgMExVQ6z_1e2CR3d_erU', 'resolutions': [{'height': 51, 'url': 'https://external-preview.redd.it/XxvsEJXVu_o3jM2HiO3qV-kziKzVdMGn5qqBp9G9gVM.png?width=108&crop=smart&format=pjpg&auto=webp&s=46646bb23662520f7037db346ea953f104a93b7a', 'width': 108}, {'height': 102, 'url': 'https://external-preview.redd.it/XxvsEJXVu_o3jM2HiO3qV-kziKzVdMGn5qqBp9G9gVM.png?width=216&crop=smart&format=pjpg&auto=webp&s=32ba2195133147021d55b1be668cde93e3552d5b', 'width': 216}, {'height': 152, 'url': 'https://external-preview.redd.it/XxvsEJXVu_o3jM2HiO3qV-kziKzVdMGn5qqBp9G9gVM.png?width=320&crop=smart&format=pjpg&auto=webp&s=5bf111843656f27693b5384a76063b63308e721b', 'width': 320}, {'height': 305, 'url': 'https://external-preview.redd.it/XxvsEJXVu_o3jM2HiO3qV-kziKzVdMGn5qqBp9G9gVM.png?width=640&crop=smart&format=pjpg&auto=webp&s=e79e4670c99826e31861ec8ae355aecac9c3b887', 'width': 640}, {'height': 457, 'url': 'https://external-preview.redd.it/XxvsEJXVu_o3jM2HiO3qV-kziKzVdMGn5qqBp9G9gVM.png?width=960&crop=smart&format=pjpg&auto=webp&s=64190b53de9235ae973619174bbd33f9178b4260', 'width': 960}, {'height': 514, 'url': 'https://external-preview.redd.it/XxvsEJXVu_o3jM2HiO3qV-kziKzVdMGn5qqBp9G9gVM.png?width=1080&crop=smart&format=pjpg&auto=webp&s=d156b14e96e479658bb5de53361993d5f00be0dd', 'width': 1080}], 'source': {'height': 653, 'url': 'https://external-preview.redd.it/XxvsEJXVu_o3jM2HiO3qV-kziKzVdMGn5qqBp9G9gVM.png?format=pjpg&auto=webp&s=dcd156922ad955e5e9cd50490eba1c183976fe3a', 'width': 1370}, 'variants': {}}]} |
||
Any 7b SuperHOT models? | 19 | I'm poor and I have a 8gb card, but I could still use longer context.
Is there any way to merge this with a 7b model, or use cpu+ram?
Thanks. | 2023-06-28T23:32:41 | https://www.reddit.com/r/LocalLLaMA/comments/14loqyh/any_7b_superhot_models/ | Hopeful_Donut4790 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14loqyh | false | null | t3_14loqyh | /r/LocalLLaMA/comments/14loqyh/any_7b_superhot_models/ | false | false | self | 19 | null |
Thunderbolt and multiple eGPUs | 3 | I'm new to running local LLMs and wanted to do a sanity check before spending some money.
I'll be purchasing a gaming laptop soon, likely with a mobile 4090, so 16GB of VRAM and around 32GB of main memory. It'll also have a single Thunderbolt 4 port.
It's my understanding that llama.cpp and other inference programs like ExLlama can split the work across multiple GPUs. Suppose I buy a Thunderbolt GPU dock like a TH3P4G3 and put a 3090/4090 with 24GB VRAM in it, then connect it to the laptop via Thunderbolt.
At that point, I'll have a total of 16GB + 24GB = 40GB VRAM available for LLMs. Which should allow me to run 65B 4-bit models entirely in VRAM or almost entirely in VRAM, with a tiny bit remaining that I can offload to regular RAM with llama.cpp.
I've read that aside from the initial loading of the model, which I'm guessing is a one time cost, the PCIe usage for this type of GPU VRAM pooling will be very small. Basically completely negligible.
[https://github.com/turboderp/exllama/discussions/16#discussioncomment-6245573](https://github.com/turboderp/exllama/discussions/16#discussioncomment-6245573)
[https://old.reddit.com/r/LocalLLaMA/comments/142rm0m/llamacpp\_multi\_gpu\_support\_has\_been\_merged/jo1a8sj/](https://old.reddit.com/r/LocalLLaMA/comments/142rm0m/llamacpp_multi_gpu_support_has_been_merged/jo1a8sj/)
So if I need even more VRAM later on, I have the possibility of buying another eGPU dock and 3090/4090.
It looks like Thunderbolt 4 can support 3 downstream Thunderbolt ports that I can attach an eGPU to. So up to 3 eGPUs total with a hub.
[https://egpu.io/forums/thunderbolt-enclosures/connecting-multiple-egpus-to-a-thunderbolt-4-pc-using-a-thunderbolt-4-hub-dock/](https://egpu.io/forums/thunderbolt-enclosures/connecting-multiple-egpus-to-a-thunderbolt-4-pc-using-a-thunderbolt-4-hub-dock/)
And Thunderbolt 3 could previously handle a daisy chain of devices, including eGPUs, which should be recognized correctly in Linux but not Windows. I'll probably be using Linux for LLMs so this is an option too.
[https://egpu.io/forums/which-gear-should-i-buy/multiple-egpus-non-apple-laptops/](https://egpu.io/forums/which-gear-should-i-buy/multiple-egpus-non-apple-laptops/)
It seems both the hub or chain architecture should support the full Thunderbolt PCIe 3.0 x4 throughput for each individual eGPU when they're all connected. But the bandwidth will obviously be shared among all the eGPUs if multiple devices are sending or receiving. The max data rate looks like it's about 22 Gb/s bidirectional, after accounting for overhead from Thunderbolt and PCIe.
[https://www.owc.com/blog/whats-the-difference-between-thunderbolt-3-and-thunderbolt-4](https://www.owc.com/blog/whats-the-difference-between-thunderbolt-3-and-thunderbolt-4)
[https://tripplite.eaton.com/products/thunderbolt-4](https://tripplite.eaton.com/products/thunderbolt-4)
How realistic is it that some better implementation of multiple GPU support could saturate the Thunderbolt 4 connection in the future with 1, 2, or more attached eGPUs?
Inference aside, are there cool things I could do with LLMs that would be severely bottlenecked by PCIe bandwidth like training or finetuning?
Any other thoughts or problems about the setup? | 2023-06-28T23:47:17 | https://www.reddit.com/r/LocalLLaMA/comments/14lp392/thunderbolt_and_multiple_egpus/ | throwaway075489 | self.LocalLLaMA | 2023-06-29T00:15:07 | 0 | {} | 14lp392 | false | null | t3_14lp392 | /r/LocalLLaMA/comments/14lp392/thunderbolt_and_multiple_egpus/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'EPgTNE7yluBjHTsahyFJm-K6RT0yQPa8VPGo7BDF2OU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/S8lq5R4JwnifYKxrE1RqsFaaZozKzJlvh3zVf7wOME0.jpg?width=108&crop=smart&auto=webp&s=b4e6d3a0c3c2a6a0123e379b0083fcdacaf5dea7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/S8lq5R4JwnifYKxrE1RqsFaaZozKzJlvh3zVf7wOME0.jpg?width=216&crop=smart&auto=webp&s=10b808c3660202702bceb4ba635e1d8e334fe77d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/S8lq5R4JwnifYKxrE1RqsFaaZozKzJlvh3zVf7wOME0.jpg?width=320&crop=smart&auto=webp&s=6fcb65c38364680759862c04831fefcf520d1ce3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/S8lq5R4JwnifYKxrE1RqsFaaZozKzJlvh3zVf7wOME0.jpg?width=640&crop=smart&auto=webp&s=e203de8ef2fe4f0c7619f71ebeb4717619364779', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/S8lq5R4JwnifYKxrE1RqsFaaZozKzJlvh3zVf7wOME0.jpg?width=960&crop=smart&auto=webp&s=1582057c2d0a93c7acd721bac6459214a6191118', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/S8lq5R4JwnifYKxrE1RqsFaaZozKzJlvh3zVf7wOME0.jpg?width=1080&crop=smart&auto=webp&s=23108881040d5b045db058463a54700ac7e4cd22', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/S8lq5R4JwnifYKxrE1RqsFaaZozKzJlvh3zVf7wOME0.jpg?auto=webp&s=5f7ecea6aa0b82afb3cef4c118499153efa03ed4', 'width': 1200}, 'variants': {}}]} |
Orange Pi 5 Plus Koboldcpp Demo (MPT, Falcon, Mini-Orca, Openllama) | 16 | I managed to get Koboldcpp and lollms webui working on an Orange Pi 5 plus with 16gb of ram. It's still pretty slow, I don't think it's using the GPU at all. I'm hoping if I install android on it that I can take advantage of one of the WebGPU projects.
Video is two hours of messing around, pretty boring but if you are curious how fast different model sizes at different quants and context sizes on the Pi 5, it might be helpful. If anyone else is working on the orange pi platform with generative AI I'd love to hear how people are approaching it.
[https://www.youtube.com/watch?v=zxRBsFYDvsk](https://www.youtube.com/watch?v=zxRBsFYDvsk)
​ | 2023-06-29T00:48:48 | https://www.reddit.com/r/LocalLLaMA/comments/14lqh35/orange_pi_5_plus_koboldcpp_demo_mpt_falcon/ | CheshireAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14lqh35 | false | null | t3_14lqh35 | /r/LocalLLaMA/comments/14lqh35/orange_pi_5_plus_koboldcpp_demo_mpt_falcon/ | false | false | self | 16 | {'enabled': False, 'images': [{'id': 'VOGzXgM_hSdSceIUZb7xZS74amg9FhL8rsppWiqvL3Y', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/hxtINU9-m5HZR8lWdbuoNNE32lXKdXhzAg0uxB5WVV4.jpg?width=108&crop=smart&auto=webp&s=ae2d0d2a4e7dc65a52eca9a0d179c2ab83044c73', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/hxtINU9-m5HZR8lWdbuoNNE32lXKdXhzAg0uxB5WVV4.jpg?width=216&crop=smart&auto=webp&s=939d6b67c4473c777db6168861936ca7faa7e2b7', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/hxtINU9-m5HZR8lWdbuoNNE32lXKdXhzAg0uxB5WVV4.jpg?width=320&crop=smart&auto=webp&s=55bfa4fbf1e8ffd94b9a3baef757eb1556da3652', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/hxtINU9-m5HZR8lWdbuoNNE32lXKdXhzAg0uxB5WVV4.jpg?auto=webp&s=ce342b71597ff3a628def3abed9809fd10070bf3', 'width': 480}, 'variants': {}}]} |
OpenOrca | 332 | [deleted] | 2023-06-29T00:53:47 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14lqkzd | false | null | t3_14lqkzd | /r/LocalLLaMA/comments/14lqkzd/openorca/ | false | false | default | 332 | null |
||
Fine-tuning with alpaca_lora_4bit on 8k context SuperHOT models | 5 | I wasn't able to find any information on this - Does anyone know if alpaca\_lora\_4bit supports training on longer context lengths out of the box, or does it need some modification in light of the longer context length afforded by the SuperHOT models?
I opened a ticket against the repo here:
[https://github.com/johnsmith0031/alpaca\_lora\_4bit/issues/129](https://github.com/johnsmith0031/alpaca_lora_4bit/issues/129)
...but thought this would be a good place to ask too.
Has anyone fine-tuned a GPTQ 4bit 8k llama using alpaca\_lora\_4bit successfully? Any gotchas specific to 8k?
Also, if there are any web GUI's that leverage alpaca\_lora\_4bit for their training, I'd love to know about them.
For anyone new to fine-tuning who may stumble across this post; alpaca\_lora\_4bit is an amazing repo that is highly optimized for training against quantized GPTQ models. This repo trains about \*twice as fast\* as qlora; signficantly faster than any other technique I'm aware of. It tends to fly under the radar, so if you're looking at LMFlow, qLORA, axolotl, or any other package for fine-tuning, be sure to consider alpaca\_lora\_4bit, and give johnsmith0013 a fistbump for me if you see him.
​ | 2023-06-29T01:24:26 | https://www.reddit.com/r/LocalLLaMA/comments/14lr93d/finetuning_with_alpaca_lora_4bit_on_8k_context/ | tronathan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14lr93d | false | null | t3_14lr93d | /r/LocalLLaMA/comments/14lr93d/finetuning_with_alpaca_lora_4bit_on_8k_context/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'RWQYM0TH_tBX_ZtQTc4EAczSVjV2SLa6W_56BlOZGEY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5gHuSxecJlTJWmUgmTMQDxIAMeFd9nbJp4v34NswkXE.jpg?width=108&crop=smart&auto=webp&s=ecae225045706635503650f5db00544d5402715f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5gHuSxecJlTJWmUgmTMQDxIAMeFd9nbJp4v34NswkXE.jpg?width=216&crop=smart&auto=webp&s=72ca0f9844f010f644fdd025590455cd16327e60', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5gHuSxecJlTJWmUgmTMQDxIAMeFd9nbJp4v34NswkXE.jpg?width=320&crop=smart&auto=webp&s=ed2bb3af54405dc0920f7cd5dfad7024fab60769', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5gHuSxecJlTJWmUgmTMQDxIAMeFd9nbJp4v34NswkXE.jpg?width=640&crop=smart&auto=webp&s=7abd9827e3bd0f87825233e3815e1be949ec6390', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5gHuSxecJlTJWmUgmTMQDxIAMeFd9nbJp4v34NswkXE.jpg?width=960&crop=smart&auto=webp&s=75884d08b38989993e6dbb50e808235b0f1aeb31', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5gHuSxecJlTJWmUgmTMQDxIAMeFd9nbJp4v34NswkXE.jpg?width=1080&crop=smart&auto=webp&s=86b02137246cfcda2405ba984e090e3afb10dde7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5gHuSxecJlTJWmUgmTMQDxIAMeFd9nbJp4v34NswkXE.jpg?auto=webp&s=4708d8961cb68e454ecc240cc4aafe07e0d8526b', 'width': 1200}, 'variants': {}}]} |
Return of Clippy | 1 | [removed] | 2023-06-29T01:31:15 | https://github.com/FireCubeStudios/Clippy | BasicsOnly | github.com | 1970-01-01T00:00:00 | 0 | {} | 14lreh8 | false | null | t3_14lreh8 | /r/LocalLLaMA/comments/14lreh8/return_of_clippy/ | false | false | default | 1 | null |
What's currently the strongest model I can run on a Mac? M2 Max with 96GB VRAM. If it's necessary for me to convert and quantize a model, I can do that if llama.cpp is capable of it. | 1 | [removed] | 2023-06-29T01:52:58 | https://www.reddit.com/r/LocalLLaMA/comments/14lrukg/whats_currently_the_strongest_model_i_can_run_on/ | Virtamancer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14lrukg | false | null | t3_14lrukg | /r/LocalLLaMA/comments/14lrukg/whats_currently_the_strongest_model_i_can_run_on/ | false | false | default | 1 | null |
Warning with ConversationChain in Langchain | 1 | [removed] | 2023-06-29T03:56:54 | https://www.reddit.com/r/LocalLLaMA/comments/14luesj/warning_with_conversationchain_in_langchain/ | CmplxQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14luesj | false | null | t3_14luesj | /r/LocalLLaMA/comments/14luesj/warning_with_conversationchain_in_langchain/ | false | false | default | 1 | null |
UltraLM-13B reaches top of AlpacaEval leaderboard | 55 | https://tatsu-lab.github.io/alpaca_eval/
Not that AlpacaEval is the best evaluator, but I haven't seen anything about this model and it seemed impressive that it beat WizardLM-13B, airoboros-65B, airoboros-33B, Guanaco-65B, etc. Also that its score is approaching ChatGPT's.
Has anyone tried it out or know about it?
https://huggingface.co/openbmb/UltraLM-13b | 2023-06-29T05:36:13 | https://www.reddit.com/r/LocalLLaMA/comments/14lwbzx/ultralm13b_reaches_top_of_alpacaeval_leaderboard/ | TNTOutburst | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14lwbzx | false | null | t3_14lwbzx | /r/LocalLLaMA/comments/14lwbzx/ultralm13b_reaches_top_of_alpacaeval_leaderboard/ | false | false | self | 55 | null |
Load Xgen Models in Oobabooga | 3 | 2023-06-29T07:06:58 | https://github.com/oobabooga/text-generation-webui/pull/2922 | KlutzyRemove9936 | github.com | 1970-01-01T00:00:00 | 0 | {} | 14lxyk8 | false | null | t3_14lxyk8 | /r/LocalLLaMA/comments/14lxyk8/load_xgen_models_in_oobabooga/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'kFO_-6uL5EZjgiQm3uOQuAwKNEuiMKk7cFNuPhEWNzM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rsXpPcKBP39c-SRgfeHAc2S0nUlBfCQErkyA1FeoV_Y.jpg?width=108&crop=smart&auto=webp&s=000e365233a0cdc37010784cf22b52a271f554be', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rsXpPcKBP39c-SRgfeHAc2S0nUlBfCQErkyA1FeoV_Y.jpg?width=216&crop=smart&auto=webp&s=716782658bfa4cba17d14e174b1981222653eb91', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rsXpPcKBP39c-SRgfeHAc2S0nUlBfCQErkyA1FeoV_Y.jpg?width=320&crop=smart&auto=webp&s=8bf67b51e735cc608a37fc653f24e8f5e57ea134', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rsXpPcKBP39c-SRgfeHAc2S0nUlBfCQErkyA1FeoV_Y.jpg?width=640&crop=smart&auto=webp&s=41ece1a7bee36199746d523b288ba93134b052bc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rsXpPcKBP39c-SRgfeHAc2S0nUlBfCQErkyA1FeoV_Y.jpg?width=960&crop=smart&auto=webp&s=8e11cd4f1ab1edbc9dcc2104f0e848a0299ec811', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rsXpPcKBP39c-SRgfeHAc2S0nUlBfCQErkyA1FeoV_Y.jpg?width=1080&crop=smart&auto=webp&s=9b0dfd28e4a85352863c78ee641f195262bc6729', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rsXpPcKBP39c-SRgfeHAc2S0nUlBfCQErkyA1FeoV_Y.jpg?auto=webp&s=b777b1eb709ca9c87889aa7e7357b87cb00ed09e', 'width': 1200}, 'variants': {}}]} |
||
Is there an open sourced model thats comparable to gpt3.5 and allows Explicit content? | 17 | Hey Guys! asking for a friend. | 2023-06-29T07:07:20 | https://www.reddit.com/r/LocalLLaMA/comments/14lxysp/is_there_an_open_sourced_model_thats_comparable/ | chriscarmy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14lxysp | false | null | t3_14lxysp | /r/LocalLLaMA/comments/14lxysp/is_there_an_open_sourced_model_thats_comparable/ | false | false | self | 17 | null |
https://huggingface.co/Salesforce/xgen-7b-8k-base/tree/main - to ggml? | 6 | What are the steps to make this happen. I'm looking on how to join this and run a quantization of 5\_K\_M. I managed to combine the files into a single bin, but now there's a missing tokenizer\_model.
How can I do it myself?
| 2023-06-29T07:41:26 | https://www.reddit.com/r/LocalLLaMA/comments/14lyji0/httpshuggingfacecosalesforcexgen7b8kbasetreemain/ | shaman-warrior | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14lyji0 | false | null | t3_14lyji0 | /r/LocalLLaMA/comments/14lyji0/httpshuggingfacecosalesforcexgen7b8kbasetreemain/ | false | false | self | 6 | null |
Useful post that summarizes instruction tuning base llms | 7 | [https://yaofu.notion.site/June-2023-A-Stage-Review-of-Instruction-Tuning-f59dbfc36e2d4e12a33443bd6b2012c2](https://yaofu.notion.site/June-2023-A-Stage-Review-of-Instruction-Tuning-f59dbfc36e2d4e12a33443bd6b2012c2) | 2023-06-29T07:42:39 | https://www.reddit.com/r/LocalLLaMA/comments/14lyk76/useful_post_that_summarizes_instruction_tuning/ | mr_dark_matter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14lyk76 | false | null | t3_14lyk76 | /r/LocalLLaMA/comments/14lyk76/useful_post_that_summarizes_instruction_tuning/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'Dz3maUMoNwKz6EBFGZDTV5DUCdBglIUkX9OJJbEyUV8', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/wpPP9c0Ree4yvUBMt2uj2muQYU0NMRby4GxqASCNYyo.jpg?width=108&crop=smart&auto=webp&s=5b40ec677c4b544382ef867278a5bf70c91ecb83', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/wpPP9c0Ree4yvUBMt2uj2muQYU0NMRby4GxqASCNYyo.jpg?width=216&crop=smart&auto=webp&s=b48f26fe489bd218dbd235ca67cf075ec044841c', 'width': 216}, {'height': 188, 'url': 'https://external-preview.redd.it/wpPP9c0Ree4yvUBMt2uj2muQYU0NMRby4GxqASCNYyo.jpg?width=320&crop=smart&auto=webp&s=2b5779f2b69a32b9b69000baf859d7a46fb7a5bf', 'width': 320}, {'height': 376, 'url': 'https://external-preview.redd.it/wpPP9c0Ree4yvUBMt2uj2muQYU0NMRby4GxqASCNYyo.jpg?width=640&crop=smart&auto=webp&s=87662881fbe8b114b94f3d8dbe7d87a13c24b073', 'width': 640}, {'height': 564, 'url': 'https://external-preview.redd.it/wpPP9c0Ree4yvUBMt2uj2muQYU0NMRby4GxqASCNYyo.jpg?width=960&crop=smart&auto=webp&s=bb73cb41098c986b902d557fe14bfc8d69ecee2a', 'width': 960}, {'height': 634, 'url': 'https://external-preview.redd.it/wpPP9c0Ree4yvUBMt2uj2muQYU0NMRby4GxqASCNYyo.jpg?width=1080&crop=smart&auto=webp&s=aa394ccd84b79bcb0bb7c8bac154c2b841911655', 'width': 1080}], 'source': {'height': 1175, 'url': 'https://external-preview.redd.it/wpPP9c0Ree4yvUBMt2uj2muQYU0NMRby4GxqASCNYyo.jpg?auto=webp&s=208acf1597a43f5352b7614052371ef29f262162', 'width': 2000}, 'variants': {}}]} |
Best model for executing http requests? | 2 | I've been lurking here for a while now
I am very new to the open source ai community, and most of the time. The conversations and posts in this community goes right over my head.But I've been thinking, OpenAi GPT-4 model has gone through some updates to its' command execution capabilities.
But, the reason why I don't want to use it is because I've noticed GPT-4 has become more, how do I say this without sounding to harsh: "retarded" and "useless".
But anyways, since I am so new to this, I was wondering. What model is the best for when it comes to code and command execution (I plan on using said model for commercial purposes so I'm obviously looking for a model with Apache or MIT license)
edit: I am a developer, and been so for a couple of years now. I lack insight and knowledge when it comes to using ML, AI, NN frameworks. Not actual development itself. I find it annoying when I ask questions on reddit about certain aspects of an API or SDK and redditors comment shit like: "yOu sHoUlD LeArN ThE BaSiCs oF PrOgRaMmInG BeFoRe yOu sTaRt lEaRnInG HoW To uSe aN ApI"
Hopefully this community isn't as toxic as the other open source communities on this website.
Thank you in advance for any help.
https://preview.redd.it/znah5a51ww8b1.png?width=1890&format=png&auto=webp&s=a7530d2505f5a116e856e8b1f265a8dca54e4b5d | 2023-06-29T07:49:12 | https://www.reddit.com/r/LocalLLaMA/comments/14lyo09/best_model_for_executing_http_requests/ | holistic-engine | self.LocalLLaMA | 2023-06-29T07:55:58 | 0 | {} | 14lyo09 | false | null | t3_14lyo09 | /r/LocalLLaMA/comments/14lyo09/best_model_for_executing_http_requests/ | false | false | 2 | null |
|
NTK-Aware Scaled RoPE allows LLaMA models to have extended (8k+) context size without any fine-tuning and minimal perplexity degradation. | 384 | I've seen the posts about SuperHOT and just recently, the paper from Meta which uses RoPE interpolation, and I've noticed an immediate improvement that can be brought to this method. Basically if you apply Neural Tangent Kernel (NTK) theory to this problem, it becomes clear that simply interpolating the RoPE's fourier space "linearly" is very sub-optimal, as it prevents the network to distinguish the order and positions of tokens that are very close by. Borrowing from NTK literature, scaling down the fourier features too much will eventually even prevent succesful finetunes (this is corroborated by the recent paper by Meta that suggests an upper bound of \~600x)
Instead of the simple linear interpolation scheme, I've tried to design a nonlinear interpolation scheme using tools from NTK literature. Basically this interpolation scheme changes the base of the RoPE instead of the scale, which intuitively changes the "spinning" speed which each of the RoPE's dimension vectors compared to the next. Because it does not scale the fourier features directly, all the positions are perfectly distinguishable from eachother, even when taken to the extreme (eg. streched 1million times, which is effectively a context size of 2 Billion)
To my surprise, this method works extremely well, so much so that you don't even need to fine tune the LLaMA 7B model for 4096 context size! The perplexity degradation is minimal. I'm sure with fine tuning this would become even better.
Enough explanations, here's some empirical results. All the perplexity measurements are done on LLaMA 7b with the [tau/scrolls · Datasets at Hugging Face](https://huggingface.co/datasets/tau/scrolls/viewer/gov_report/test) dataset (I only used a subset of gov\_report).
​
Here's a graph showing the average perplexity of LLaMA 7b on a set of 40 very long prompts (12k+ context size). Compared to changing the scale (from SuperHOT, which was set to 4), we change a factor alpha, which when equal to 8 provides the same context size increase but with much less perplexity degradation. All without any finetuning!
[Graph showing the average perplexity of LLaMA 7b on set of 40 very long prompt \(12k+ context size\) with previous and new interpolation scheme](https://preview.redd.it/9xwjqyy3zw8b1.png?width=846&format=png&auto=webp&s=dbcf24c29ad1598a6ccc51b4e5efffd4c5267381)
Here's more results, showing more scale and alpha factors.
[Here's more results, showing more scale and alpha factors.](https://preview.redd.it/ebisi5d4zw8b1.png?width=846&format=png&auto=webp&s=c59a2427c9c54d17b5ff4d18d46e4696b531fca7)
Zoomed-in version of the second graph, showing the details.
[Zoomed-in version of the second graph, showing the details.](https://preview.redd.it/eihrvqm4zw8b1.png?width=846&format=png&auto=webp&s=8ca0a722afe04526b00367023e6e7281c309ccb3)
Code can be found on a colab notebook with a test example. [NTKAwareScaledRotaryEmbedding.ipynb - Colaboratory (google.com)](https://colab.research.google.com/drive/1VI2nhlyKvd5cw4-zHvAIk00cAVj2lCCC#scrollTo=e431d2cd)
Again, the changes to the RoPE code is only 3 lines.
Also just for fun, I've pasted the entirety of the Meta paper [\[2306.15595\] Extending Context Window of Large Language Models via Positional Interpolation (arxiv.org)](https://arxiv.org/abs/2306.15595), with the abstract and conclusions removed, and asked OASST-sft-6-llama-30b some questions. I think it fared pretty well given it hasn't been fine tuned at all for 6000+ context size. I used an alpha of 8 for this prompt.
The entire paper is about 6200 tokens. These prompts can be found in the colab notebook linked above. The following questions are simply appended at the end of the prompt which contains the entire research paper.
QUESTION
>Please give me a brief summary of this research paper in a few bullet points.
ANSWER
>\* Large Language models (LLMs) often have a pre-defined context window size, but sometimes they need to operate on longer contexts, e.g., in conversation systems or summarization tasks.
>
>\* Extending the context window of an existing pre-trained LLM is challenging because direct extrapolation of positional encodings can lead to poor performance.
>
>\* Researchers propose "Position Interpolation" to extend the context window of LLMs by downscaling the positional encodings.
>
>\* They prove that interpolated positional encodings have a much smaller upper bound than extrapolated ones, allowing the model to adapt to longer contexts more efficiently.
>
>\* They experimentally validate the effectiveness of Position Interpolation on LLaMA models, reducing the perplexity gap between the original and extended contexts.
>
>\* They compare Position Interpolation with direct extrapolation and find that Position Interpolation works better.
>
>\* They apply Position Interpolation to LLaMa models and fine-tune them on the Pile dataset, finding that the models can effectively utilize longer contexts.
>
>\* They also evaluate the models' abilities to attend to longer contexts and perform well on benchmarks within the original context size.
>
>\* They apply Position Interpolation to long document summarization and obtain competitive results.
>
>\* They provide insights into the interpolation/extrapolation tradeoff and discuss potential improvements.
>
>\* They plan to explore more advanced methods like data augmentation and multi-task learning to improve the interpolation.
​
QUESTION
>Please write me the abstract for this paper.
ANSWER
>This paper presents a technique called "Position Interpolation" for extending the context window size of pre-trained Large Language Models (LLMs) without modifying their architecture. Previous methods like ALiBi and LeX were unable to efficiently extend the context window size of LLMs, leading to poor performance. The authors propose Position Interpolation, which involves downscaling the positional encoding of the LLM's attention mechanism to match the desired longer context window size. They prove mathematically that interpolated attention scores are more stable than extrapolated ones, allowing the model to adapt to longer contexts with just a few hundred fine-tuning steps. Experiments showed that Position Interpolation enables LLMs to perform well on longer contexts, reducing perplexity by up to 0.8 points on average, and generating strong summarizations. Extending the context window size to 32768 led to slight degradation on the original benchmarks, but the models adapted to longer texts. The authors conclude that interpolating positional encodings allows the model to attend to relevant evidence in the longer contexts, and the stability of the interpolated attention score helps the model avoid catastrophic behaviors.
​
QUESTION
>How many steps was the model fine tuned for the final results? Give a short answer.
ANSWER
>The authors of the paper "Extending Context Windows of Large Language Models via Position Interpolation" fine-tuned the model for 1000 batches for the experiments presented in the paper.
​
QUESTION
>How big is the interpolation bound compared to the extrapolation bound? Give a short answer.
ANSWER
>The interpolation bound is at least 2 - 29.7 \* 10\^(-6) times smaller than the extrapolation bound.
​
Sorry for the large dump of information on reddit, I'm too lazy to write a blog post for this. I might give a more detailed explanation on how I derived the formula used to calculate the base if enough people are interested.
I did not test fine-tuning performance as I do not have the resources or the time to fine tune an LLM, I just derived this formula during lunch and experimented with it. However, I think that this method will do even better with fine tuning. Also thanks to the people behind the SuperHOT blogpost, it was their hard work that inspired me and allowed me to make this contribution for everyone!
Finally, I really hope this post will inspire others to start experimenting on ways to improve LLMs. There's so much to learn and so much left to discover! What a time to be alive! | 2023-06-29T08:21:29 | https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have/ | bloc97 | self.LocalLLaMA | 2023-06-29T08:41:57 | 0 | {'gid_3': 1} | 14lz7j5 | false | null | t3_14lz7j5 | /r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have/ | false | false | 384 | {'enabled': False, 'images': [{'id': 'z2HdRfGrX_QS4_TnwDeHjTgrpOd2uGmfmEZQf63iZWI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/E9s1YS_pvBGEZmSqZIuanbwW6PusBWiPmN9jS6rO-xo.jpg?width=108&crop=smart&auto=webp&s=d840bf220765e7b6df8c36771f071c82dc53eee4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/E9s1YS_pvBGEZmSqZIuanbwW6PusBWiPmN9jS6rO-xo.jpg?width=216&crop=smart&auto=webp&s=714db9b135c12543746691b8a956acfd07122580', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/E9s1YS_pvBGEZmSqZIuanbwW6PusBWiPmN9jS6rO-xo.jpg?width=320&crop=smart&auto=webp&s=e1a8f89ae830c69fa429ef112b425aba1b64bdf2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/E9s1YS_pvBGEZmSqZIuanbwW6PusBWiPmN9jS6rO-xo.jpg?width=640&crop=smart&auto=webp&s=31e2c79449868e179793a1f2d70f5d78de751d08', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/E9s1YS_pvBGEZmSqZIuanbwW6PusBWiPmN9jS6rO-xo.jpg?width=960&crop=smart&auto=webp&s=262b4daf154aadda8f746529eb973650ecbe9e01', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/E9s1YS_pvBGEZmSqZIuanbwW6PusBWiPmN9jS6rO-xo.jpg?width=1080&crop=smart&auto=webp&s=700bfff52f422ffd0ff53c1ea12551bbdee98a62', 'width': 1080}], 'source': {'height': 1012, 'url': 'https://external-preview.redd.it/E9s1YS_pvBGEZmSqZIuanbwW6PusBWiPmN9jS6rO-xo.jpg?auto=webp&s=c2f80796e75ceb2043e71b915e84ad78ae348afa', 'width': 2024}, 'variants': {}}]} |
|
Now that ExLlama is out with reduced VRAM usage, are there any GPTQ models bigger than 7b which can fit onto an 8GB card? | 18 | Basically as the title states. I've just updated the Oobabooga WebUI and I've loaded a model using ExLlama; the speed increase is REALLY impressive. For my initial test the model I loaded was TheBloke\_guanaco-7B-GPTQ, and I ended up getting 30 tokens per second!
Then I tried to load TheBloke\_guanaco-13B-GPTQ and unfortunately got CUDA out of memory. So I switched the loader to ExLlama\_HF and I was able to successfully load the model. But upon sending a message it gets CUDA out of memory again.
I'm aware that there are GGML versions of those models, but the inference speed is painfully slow compared to GPTQ. So are there any models bigger than 7B which might fight onto 8GB of VRAM, or any tricks I can do to try and make that happen? Or am I stuck with 7B models for now?
It's kind of disappointing to run out of memory on a 13B model when according to the console I was only 20 megabytes short, lol. | 2023-06-29T08:25:46 | https://www.reddit.com/r/LocalLLaMA/comments/14lza1k/now_that_exllama_is_out_with_reduced_vram_usage/ | Gyramuur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14lza1k | false | null | t3_14lza1k | /r/LocalLLaMA/comments/14lza1k/now_that_exllama_is_out_with_reduced_vram_usage/ | false | false | self | 18 | null |
with EXLlama, can I use RTX and GTX cards together ? | 10 | Hi,
I've got a RTX4070, would it work to add an old GTX card with 12Gb ? They're cheap in second hand, but would this work and help with the token speed ?
What about adding a RTX 3060 with 12Gb ?
THANK YOU ! | 2023-06-29T09:52:42 | https://www.reddit.com/r/LocalLLaMA/comments/14m0sdz/with_exllama_can_i_use_rtx_and_gtx_cards_together/ | alexthai7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14m0sdz | false | null | t3_14m0sdz | /r/LocalLLaMA/comments/14m0sdz/with_exllama_can_i_use_rtx_and_gtx_cards_together/ | false | false | self | 10 | null |
Does the model "see" it's own output during generation? | 4 | Sorry if this has been asked before but here is what I'm trying to figure out.
Does the model 'see' it's own output during the generation? Here is what I mean by that:
This is going to be one single prompt:
Task1: translate "text A" from Ukrainian to English
Task2: correct the translated text generated in task1. ensure that it is grammatically correct, uses plain English and is easily readable.
Task3: determine how can the text generated in task2 be improved.
Task4: based on the suggestions generated in task3 improve the text generated in task2.
When using such a prompt GPT-4 CONSISTENTLY generates the best result when outputing Task4 despite the fact that it is all one single prompt (one call to API) and single output. I've also tested it with GPT-3.5 and while the results are not as consistent. The final text generated in Task4 is in most cases the best.
The evaluation of how good the texts are is done using GPT-4 in a separate chat window using random order.
p.s I should also stress that when you ask a model to simply translate textA from Ukrainian to English the results are almost always subpar. So the above example cannot be regarded as hallusination where the model purposely creates a subpar translation in task1, improved translation in task2, and the best translation in task4 | 2023-06-29T10:13:56 | https://www.reddit.com/r/LocalLLaMA/comments/14m15wm/does_the_model_see_its_own_output_during/ | ahtoshkaa2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14m15wm | false | null | t3_14m15wm | /r/LocalLLaMA/comments/14m15wm/does_the_model_see_its_own_output_during/ | false | false | self | 4 | null |
Anyone know how "commercial" use is defined? | 7 | Hi all, I've been lurking here for a while (not logged in), digging through various models. I'm hoping to pair privateGPT with one of the nice opensource UIs available and create a tool which can be used to search through company documentation. Unfortunately, almost every single open source model states they're not for commercial use.
To me, my use-case isn't commercial, as there is no monetary gain from the project, and the intention is for absolutely nothing to be seen by anyone outside the company. But some of the licensing (WizardLM for instance) is super unclear and just says "not for commercial use", and the authors rarely reply to any questions around their licensing (probably because they don't really have a clue either).
I was under the impression that any model fine-tuned from LLaMa should be licensed under GPLv3 but because of these mentions of non-commercial use I'm not certain I can use them. Anyone know of any models where the licensing is clearer? Or does anyone happen to know some of the authors of the various Wizard models (I've found their 7Bs do the job really well) who could maybe clarify the "noncommercial" statements? | 2023-06-29T10:15:01 | https://www.reddit.com/r/LocalLLaMA/comments/14m16j6/anyone_know_how_commercial_use_is_defined/ | IcyRutabaga8837 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14m16j6 | false | null | t3_14m16j6 | /r/LocalLLaMA/comments/14m16j6/anyone_know_how_commercial_use_is_defined/ | false | false | self | 7 | null |
Looks like there is a new model UltraLM that topped the AlpacaEval Leaderboard | 1 | [deleted] | 2023-06-29T11:09:55 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14m26fw | false | null | t3_14m26fw | /r/LocalLLaMA/comments/14m26fw/looks_like_there_is_a_new_model_ultralm_that/ | false | false | default | 1 | null |
||
Anyone knows a good local model for Japanese language? | 3 | Llama apparently wasn’t trained on Japanese much. | 2023-06-29T11:40:48 | https://www.reddit.com/r/LocalLLaMA/comments/14m2rnq/anyone_knows_a_good_local_model_for_japanese/ | SunnyJapan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14m2rnq | false | null | t3_14m2rnq | /r/LocalLLaMA/comments/14m2rnq/anyone_knows_a_good_local_model_for_japanese/ | false | false | self | 3 | null |
How to run models from hugging face with 32gb ram? | 1 | [removed] | 2023-06-29T12:33:23 | https://www.reddit.com/r/LocalLLaMA/comments/14m3u9s/how_to_run_models_from_hugging_face_with_32gb_ram/ | albertoZurini | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14m3u9s | false | null | t3_14m3u9s | /r/LocalLLaMA/comments/14m3u9s/how_to_run_models_from_hugging_face_with_32gb_ram/ | false | false | default | 1 | null |
Guidance regarding accurate information transposition | 1 | Where can I find resources regarding the state-of-the-art with feeding information, such as in a structured data format, into an LLM as well as a form to transpose it into, e.g. a research paper rewritten as a simpsons episode, without the LLM embellishing / misstating / hallucinating / lying about any of the information provided to it? I'm still wrapping my head around how LLMs work, but is this part of the process of fine-tuning, or do you simply need a very good LLM and well-written prompt? Prompt engineering, from my experience, seems to only go so far and doesn't feel as 'structured' as I'd imagine one could have when trying to ensure that nothing within the LLMs' output is incorrect (as far as the information provided goes). | 2023-06-29T12:43:24 | https://www.reddit.com/r/LocalLLaMA/comments/14m421r/guidance_regarding_accurate_information/ | shpw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14m421r | false | null | t3_14m421r | /r/LocalLLaMA/comments/14m421r/guidance_regarding_accurate_information/ | false | false | self | 1 | null |
AI crowdsourcing project | 1 |
Hi everyone,
do you know if an AI crowdsourcing project exists like
zooniverse ? I'd like to participate to a crowdsourcing open project to
offer my assistance and learn from what others are doing. Do you have
any proposition ? | 2023-06-29T13:01:59 | https://www.reddit.com/r/LocalLLaMA/comments/14m4gjh/ai_crowdsourcing_project/ | Savings_Scholar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14m4gjh | false | null | t3_14m4gjh | /r/LocalLLaMA/comments/14m4gjh/ai_crowdsourcing_project/ | false | false | self | 1 | null |
I made a subreddit for hacking and modifying LLMs! | 0 | [removed] | 2023-06-29T13:25:06 | https://www.reddit.com/r/LocalLLaMA/comments/14m4z5z/i_made_a_subreddit_for_hacking_and_modifying_llms/ | cstein123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14m4z5z | false | null | t3_14m4z5z | /r/LocalLLaMA/comments/14m4z5z/i_made_a_subreddit_for_hacking_and_modifying_llms/ | false | false | default | 0 | null |
Need a detailed tutorial on how to create and use a dataset for QLoRA fine-tuning. | 4 | I took a look on this notebook [bnb-4bit-training.ipynb - Colaboratory (google.com)](https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing#scrollTo=s6f4z8EYmcJ6) and i can't figured out how to use my custom dataset in it.
Here is the dataset I created after I consulted other datasets on huggingface :
{"question": "What is Jesus?", "answer": "Jesus is a generic name for a specific aspect of mankind’s Soul."}
{"question": "What is the essential message?", "answer": "The essential message is that the higher mind uses and knows the kingdom of God, not only as material here on Earth, but as an inner spiritual transformation for the future."}
{"question": "How can the consciousness be transformed into Soul on Earth?", "answer": "The only way to achieve transformation of the consciousness into Soul here on Earth is for the higher consciousness to follow the unique source back home."}
My dataset loading code (I load it from local folder):
from datasets import load_dataset
data = load_dataset("my_data", data_files="data.jsonl")
I'm wondering is that enough or do I need to add some more lines before i start training my model? Because I found an extra line from the notebook:
data = data.map(lambda samples: tokenizer(samples["quote"]), batched=True)
But in this notebook [Falcon-Guanaco.ipynb - Colaboratory (google.com)](https://colab.research.google.com/drive/1BiQiw31DT7-cDp1-0ySXvvhzqomTdI-o?usp=sharing) they didn't use it. Kinda confuse to me.
I'm just a noob to these things and i did try to look up for more informations, but can't find any post that go deeply in this part. If someone can give me some sources, docs, videos about this, i would appreciate! | 2023-06-29T13:25:48 | https://www.reddit.com/r/LocalLLaMA/comments/14m4zsq/need_a_detailed_tutorial_on_how_to_create_and_use/ | CKOSMICC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14m4zsq | false | null | t3_14m4zsq | /r/LocalLLaMA/comments/14m4zsq/need_a_detailed_tutorial_on_how_to_create_and_use/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&s=4b647239f77bf713f4a6209cfa4867351c055fd9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&s=7f4234ff3f4f4ebd7f77236dedb03a2faee3e04a', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&s=73eb91ea5a5347f216c0f0c4d6796396826aae49', 'width': 260}, 'variants': {}}]} |
What are your favorite commercial LLMs for roleplaying? | 2 | Been seeing a lot of open-source commercial LLMs lately like Falcon, MPT, OpenLLaMa, RedPajama INCITE, Dollyetc, but they don't come close to the quality of models like Guanaco-65B , Manticore 13B and WizardLM 30B.
I'm wondering whether there are any good derivates of open-source LLMs on HuggingFace for roleplaying. | 2023-06-29T13:36:30 | https://www.reddit.com/r/LocalLLaMA/comments/14m58vo/what_are_your_favorite_commercial_llms_for/ | tripathiarpan20 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14m58vo | false | null | t3_14m58vo | /r/LocalLLaMA/comments/14m58vo/what_are_your_favorite_commercial_llms_for/ | false | false | self | 2 | null |
Open Orca: meet the devs give feedback ask questions! | 44 | the Orca paper has been replicated to as fine of a degree of precision as several obsessive nerds sweating for weeks could pull off(a very high degree)
We will be releasing Orca's as the models continue to be trained
And the dataset after we wipe off all the sweat and tears.
Right now, we're testing our fifth iteration of orca on a subset of the final data, and are just about to jump into the final stages!
Thanks to
the
Team:
Teknium1
winglian/caseus
erhartford
Nanobit
Pankajmathur
AlignmentLab.ai:
Entropi
AtlasUnified
NeverendingToast
Autometa
And of course, as always TheBloke, for being the backbone of the whole community.
Be sure to check out Axolotl on github, developed by Nano and Winglian, the platform that developed and trained manticore, minotaur, and many others!
OrcaMini on huggingface!
Samantha, WizardVicuna, and more!
and maybe even one of our projects at: Alignmentlab.ai which links to the dev server
If anyone wants to contribute feel free to join the server there and come chat, meet the team, give feedback and suggestions, or take a peek at one of the other projects ALAI is working on! Like landmark attention, recurrent memory transformers, and more as more cool things turn up!
Currently the first few models we release are going to be over the llama architecture to come as close to the paper as possible, since we've gone through such massive amounts of effort otherwise to do the same, whoever else we do is going to be subject to what we can get sponsored as far as training time (it's a huge amount of data). We've also been developing open orca + in parallel, as the paper was only a study the authors didn't have the room to apply optimizations without muddying their results, thankfully we are under no such restrictions! At least, before we release our initial models anyways. | 2023-06-29T13:37:23 | http://alignmentlab.ai | Alignment-Lab-AI | alignmentlab.ai | 1970-01-01T00:00:00 | 0 | {} | 14m59ju | false | null | t3_14m59ju | /r/LocalLLaMA/comments/14m59ju/open_orca_meet_the_devs_give_feedback_ask/ | false | false | default | 44 | null |
Title: Join LLMOps: The Growing Community for Large Language Model Deployment Enthusiasts! | 0 | [deleted] | 2023-06-29T13:59:23 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14m5rbv | false | null | t3_14m5rbv | /r/LocalLLaMA/comments/14m5rbv/title_join_llmops_the_growing_community_for_large/ | false | false | default | 0 | null |
||
Why does positional encodings add to token embedding instead of having a separate embedding dimension just for the position? | 5 | ^ | 2023-06-29T15:52:51 | https://www.reddit.com/r/LocalLLaMA/comments/14m8kpp/why_does_positional_encodings_add_to_token/ | gi_beelzebub | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14m8kpp | false | null | t3_14m8kpp | /r/LocalLLaMA/comments/14m8kpp/why_does_positional_encodings_add_to_token/ | false | false | self | 5 | null |
koboldcpp-1.33 Ultimate Edition released! | 118 | 2023-06-29T16:12:17 | https://github.com/LostRuins/koboldcpp/releases/tag/v1.33 | WolframRavenwolf | github.com | 1970-01-01T00:00:00 | 0 | {} | 14m92is | false | null | t3_14m92is | /r/LocalLLaMA/comments/14m92is/koboldcpp133_ultimate_edition_released/ | false | false | 118 | {'enabled': False, 'images': [{'id': 'sP-w6kv0D8CL8jte3KDaDo-S9k6h04WTxY4I0sYpDms', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KZfikXN0JrvOiNjULMgOxqlRm57yYJTKwgWyf4-VTxI.jpg?width=108&crop=smart&auto=webp&s=96a8f45f7060e3a4e237372a325ca13e00987044', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KZfikXN0JrvOiNjULMgOxqlRm57yYJTKwgWyf4-VTxI.jpg?width=216&crop=smart&auto=webp&s=01673e3b941227bda2c340370ba6deb9eacd2958', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KZfikXN0JrvOiNjULMgOxqlRm57yYJTKwgWyf4-VTxI.jpg?width=320&crop=smart&auto=webp&s=14d3f3b0fdab71326c368132c5a757914e9f75c4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KZfikXN0JrvOiNjULMgOxqlRm57yYJTKwgWyf4-VTxI.jpg?width=640&crop=smart&auto=webp&s=af77fa22be26dd733e53b7c126b3a3253529a51a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KZfikXN0JrvOiNjULMgOxqlRm57yYJTKwgWyf4-VTxI.jpg?width=960&crop=smart&auto=webp&s=83a53dc5db3954e0e4220f32adbe5522f6b3dda9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KZfikXN0JrvOiNjULMgOxqlRm57yYJTKwgWyf4-VTxI.jpg?width=1080&crop=smart&auto=webp&s=3e910f2f6b936d3f05088dfe3bfa216f7376b844', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KZfikXN0JrvOiNjULMgOxqlRm57yYJTKwgWyf4-VTxI.jpg?auto=webp&s=e668f35d87435e0fcce04ceec105d2a932d9b3a0', 'width': 1200}, 'variants': {}}]} |
||
Exploring LLMs for Data Synthesizing & Anonymization: looking for Insights on Current & Future Solutions | 5 | Hey there,
I'm currently working on an university paper and looking to delve deeper into the world of Large Language Models (LLMs) for data synthesizing and anonymization, particularly focusing on personal data in free text such as bank transfer text fields.
So far, I’ve been using GPT-4 for detecting personal data points as well as Presidio. I have also worked with some classification of the text field and generating syntetic ones. However, I'm keen on exploring a broader spectrum of options and understanding what’s on the horizon for LLMs.
I would greatly appreciate any insights or advice on the following:
1. **Current Options**: What are some LLMs or tools that are currently being used for data synthesis and anonymization? How do they compare in terms of performance and capabilities and how could they i.e. be used to train a model to understand the task, realiably perform it?
2. **Corporate Solutions**: For a large corporation looking to invest in this area, what kind of custom solutions could be developed using LLMs? What might be the range of costs involved in developing and maintaining such solutions?
3. **Future Developments**: What’s next for LLMs in the context of data synthesis and anonymization? Are there any upcoming models or technologies that show promise?
4. **Challenges & Best Practices**: What are the challenges faced when implementing LLMs for data synthesis and anonymization, and what best practices can be followed to address them?
5. **Research & Resources**: Are there any research papers, articles, or other resources that you would recommend for understanding the current and future landscape of LLMs in this context?
Thanks in advance for any insights and sharing your knowledge. It's highly valued and I hope you guys could help me
Best regards. | 2023-06-29T17:01:05 | https://www.reddit.com/r/LocalLLaMA/comments/14maa8a/exploring_llms_for_data_synthesizing/ | Jotsohn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14maa8a | false | null | t3_14maa8a | /r/LocalLLaMA/comments/14maa8a/exploring_llms_for_data_synthesizing/ | false | false | self | 5 | null |
is lora fine-tuning on 13B/33B/65B comparable to full fine-tuning? | 7 | curious, since qlora [paper](https://arxiv.org/abs/2305.14314) only reports lora/qlora comparison for full fine-tuning for small 7B models.for 13B/33B/65B, it does not do so (table 4 in [paper](https://arxiv.org/abs/2305.14314))it would be helpful if anyone can please provide links where I can read more on efficacy of lora or disadvantages of lora? | 2023-06-29T17:27:55 | https://www.reddit.com/r/LocalLLaMA/comments/14mayp7/is_lora_finetuning_on_13b33b65b_comparable_to/ | distributional-hypot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14mayp7 | false | null | t3_14mayp7 | /r/LocalLLaMA/comments/14mayp7/is_lora_finetuning_on_13b33b65b_comparable_to/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
ROCm to officially support the 7900 XTX starting this fall, plus big ROCm update today for LLMs and PyTorch. | 42 | 2023-06-29T17:43:37 | https://community.amd.com/t5/rocm/new-rocm-5-6-release-brings-enhancements-and-optimizations-for/ba-p/614745 | TeakTop | community.amd.com | 1970-01-01T00:00:00 | 0 | {} | 14mbczq | false | null | t3_14mbczq | /r/LocalLLaMA/comments/14mbczq/rocm_to_officially_support_the_7900_xtx_starting/ | false | false | 42 | {'enabled': False, 'images': [{'id': 'Z8lH27MmAVPIU9TlwsJrXci_vE-sjMgh6jFimT17GZQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/qVAvptCO0AWyF9wPRDKA8ackEKKTMlk0-THW2AwJjFc.jpg?width=108&crop=smart&auto=webp&s=e600472d07933f2a1991cd5da480ec73bd9e1c46', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/qVAvptCO0AWyF9wPRDKA8ackEKKTMlk0-THW2AwJjFc.jpg?width=216&crop=smart&auto=webp&s=a97f37d60b58f29234016e388ec0b35012397303', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/qVAvptCO0AWyF9wPRDKA8ackEKKTMlk0-THW2AwJjFc.jpg?width=320&crop=smart&auto=webp&s=30563391eec4fb4360f4894c61e141df0661f5a1', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/qVAvptCO0AWyF9wPRDKA8ackEKKTMlk0-THW2AwJjFc.jpg?width=640&crop=smart&auto=webp&s=73e03100814ce798afffee8d929cb34ecb0a58f2', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/qVAvptCO0AWyF9wPRDKA8ackEKKTMlk0-THW2AwJjFc.jpg?width=960&crop=smart&auto=webp&s=188f164ca5479174e10151d747ff640217492a98', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/qVAvptCO0AWyF9wPRDKA8ackEKKTMlk0-THW2AwJjFc.jpg?width=1080&crop=smart&auto=webp&s=5cabf38a01e537bedf2dd054e89d75792f09c499', 'width': 1080}], 'source': {'height': 709, 'url': 'https://external-preview.redd.it/qVAvptCO0AWyF9wPRDKA8ackEKKTMlk0-THW2AwJjFc.jpg?auto=webp&s=c9bdfa4ab6f29004cce4a787edd505968ee19d4b', 'width': 1260}, 'variants': {}}]} |
||
GPU support on M1 Mac in gpt4all; smaller models on smaller Macs | 2 | The latest version of gpt4all as of this writing, v. 2.4.10, has an improved set of models and accompanying info, and a setting which forces use of the GPU in M1+ Macs. I'm using Nomics recent GPT4AllFalcon on a M2 Mac Air with 8 gb of memory. It's a sweet little model, download size 3.78 gb. I find it useful for chat without having it make the Air useless because of taking up most of the memory. Too, using the GPU roughly doubles the speed of the output. All in all, I'm delighted with this version of gpt4all and grateful to be able to run a chatbot that is useful for my purposes on my minimum spec laptop. Well done Nomic! | 2023-06-29T17:51:45 | https://www.reddit.com/r/LocalLLaMA/comments/14mbkin/gpu_support_on_m1_mac_in_gpt4all_smaller_models/ | jarec707 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14mbkin | false | null | t3_14mbkin | /r/LocalLLaMA/comments/14mbkin/gpu_support_on_m1_mac_in_gpt4all_smaller_models/ | false | false | self | 2 | null |
Trouble getting ANY 30b/33b 8k context model to work in ooba without OOM | 8 | I have 24GB vram on nvidia GPU with linux, but i cannot run a single 30/33b 8k model without OOMing on the first request. I have the latest ooba and use exllama\_hf (module version). Has anyone else had issues like this? Is there a fix? I tried using 4096/2 in exllama\_hf and lower values like 3000/2, 3000/4 etc. but nothing I've tried has worked. | 2023-06-29T18:09:09 | https://www.reddit.com/r/LocalLLaMA/comments/14mc0iu/trouble_getting_any_30b33b_8k_context_model_to/ | OnaHeat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14mc0iu | false | null | t3_14mc0iu | /r/LocalLLaMA/comments/14mc0iu/trouble_getting_any_30b33b_8k_context_model_to/ | false | false | self | 8 | null |
Running Multiple AI Models Sequentially for a Conversation on a Single GPU | 9 | *EDIT: I mentioned using two agents in my original post but my goal is experiment with an ensemble of agents.
I'm currently working on a project where I'm trying to have a conversation between two AI models. However, due to my hardware constraints (I'm running this locally on my GPU), I can only load one model at a time into memory.
My current plan is to load and unload the models as needed during the conversation. For example, I would load the first model, generate a response, unload the first model, load the second model, generate a response, and so on. Here's a simplified version of what I'm thinking:
```python
# Start the conversation with a seed prompt
prompt = "Discuss the benefits and drawbacks of artificial intelligence."
for i in range(10): # Decide the number of turns in the conversation
if i % 2 == 0: # If the turn number is even, agent1 speaks
load_model("agent1")
response = generate(prompt)
print("Agent 1: ", response)
unload_model()
else: # If the turn number is odd, agent2 speaks
load_model("agent2")
response = generate(prompt)
print("Agent 2: ", response)
unload_model()
# The response becomes the new prompt for the next turn
prompt = response
```
I'm aware that constantly loading and unloading models can be time-consuming and might slow down the conversation. But given my current hardware constraints, this seems like the best solution.
I wanted to ask if anyone in this community has tried something similar or has any suggestions or insights. Is there a more efficient way to handle this situation? Any potential pitfalls I should be aware of?
Thanks in advance for your help! | 2023-06-29T19:20:08 | https://www.reddit.com/r/LocalLLaMA/comments/14mdssp/running_multiple_ai_models_sequentially_for_a/ | RokaMic | self.LocalLLaMA | 2023-06-30T13:18:58 | 0 | {} | 14mdssp | false | null | t3_14mdssp | /r/LocalLLaMA/comments/14mdssp/running_multiple_ai_models_sequentially_for_a/ | false | false | self | 9 | null |
Open Orca Dataset Released! | 149 | Today we are releasing a dataset that lets open source models learn to think like GPT-4!
We call this Open Orca, as a tribute to the team who has released the Orca paper describing the data collection methods we have attempted to replicate in an open-source manner for the benefit of humanity.
With this data, we expect new open source models to be developed which are smaller, faster, and smarter than ever before because were going to be the ones doing the developing!
[https://huggingface.co/datasets/Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca)
We'd like to give special recognition to the following contributors for their significant efforts and dedication:
​
caseus
Eric Hartford
NanoBit
Pankaj
winddude
Rohan
​
[http://alignmentlab.ai/:](http://alignmentlab.ai/:)
Entropi
neverendingtoast
AtlasUnified
AutoMeta
​
The Orca paper has been replicated to as fine of a degree of precision as a motley crew of ML nerds toiling for weeks could pull off (a very high degree).
We will be releasing trained Orca models as the training currently in progress completes.
The dataset is still in final cleanup, and we will continue with further augmentations beyond the base Orca data in due time.
Right now, we are testing our fifth iteration of Orca on a subset of the final data, and are just about to jump into the final stages!
​
Many thanks to NanoBit and Caseus, makers of Axolotl \[[https://github.com/OpenAccess-AI-Collective/axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)\] for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others!
If you want to follow along, meet the devs, ask us questions, get involved, or check out our other projects!
​ | 2023-06-29T19:29:50 | https://www.reddit.com/r/LocalLLaMA/comments/14me1ha/open_orca_dataset_released/ | Alignment-Lab-AI | self.LocalLLaMA | 2023-06-30T19:02:59 | 0 | {} | 14me1ha | false | null | t3_14me1ha | /r/LocalLLaMA/comments/14me1ha/open_orca_dataset_released/ | false | false | self | 149 | {'enabled': False, 'images': [{'id': 'qSwkU7QsB99ezqoIxydVN3fQ3aYKyaqOVcuwH0SFCRI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tYe6-cOyLlVoAuugorgK2CNdn5WTKwlG3I5e0lLYvaI.jpg?width=108&crop=smart&auto=webp&s=d3769a896c40c46194ee277f391fe3689ef818e4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tYe6-cOyLlVoAuugorgK2CNdn5WTKwlG3I5e0lLYvaI.jpg?width=216&crop=smart&auto=webp&s=4f667042dec8defbaf5ac22a51029be8f45a422b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tYe6-cOyLlVoAuugorgK2CNdn5WTKwlG3I5e0lLYvaI.jpg?width=320&crop=smart&auto=webp&s=431e127d3ef07ab7aaeef624b28ddedb084f969f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/tYe6-cOyLlVoAuugorgK2CNdn5WTKwlG3I5e0lLYvaI.jpg?width=640&crop=smart&auto=webp&s=132858a12e246138745f0fa7e10a7d27327cb2b7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/tYe6-cOyLlVoAuugorgK2CNdn5WTKwlG3I5e0lLYvaI.jpg?width=960&crop=smart&auto=webp&s=3580430e6500a0debad421479f214df62f402b81', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/tYe6-cOyLlVoAuugorgK2CNdn5WTKwlG3I5e0lLYvaI.jpg?width=1080&crop=smart&auto=webp&s=0583b007153e6020bc9c8bd8973a2ad9cd801dcb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/tYe6-cOyLlVoAuugorgK2CNdn5WTKwlG3I5e0lLYvaI.jpg?auto=webp&s=f053d6e80d039205fcd8df761f3d587f58ccb898', 'width': 1200}, 'variants': {}}]} |
In Search of Memory | 27 | Question for those deeper down the rabbit hole:
What's the mechanism through which a LLM knows a fact? I was playing with Wizard-Vicuna-30B, and was asking it geography questions. I keep narrowing it down to the area I grew up (small community) and it knew some stuff and the state park nearby. Presumably the model at some point was trained on data that talked about these cities/towns/parks. But I got to thinking, we're talking about straight optimization problems for the next best word right? Where in that pile of statistics does the fact reside? Can we think of it like the attention mechanism multiplies the raw storage power of the weight matrix in order to hold all the 'facts' these models 'know'? I confess my understanding of the models isn't complete, but I have looked under the hood a bit and aside from delving into the mathematical forumals I'm unqualified to look at, I don't mind going as deep as anyone wants to explain. Thanks! | 2023-06-29T19:50:57 | https://www.reddit.com/r/LocalLLaMA/comments/14mel7o/in_search_of_memory/ | mslindqu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14mel7o | false | null | t3_14mel7o | /r/LocalLLaMA/comments/14mel7o/in_search_of_memory/ | false | false | self | 27 | null |
Want to talk Info Retrieval? | 0 | [removed] | 2023-06-29T21:01:55 | https://www.reddit.com/r/LocalLLaMA/comments/14mgeuo/want_to_talk_info_retrieval/ | GrandmasterSiv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14mgeuo | false | null | t3_14mgeuo | /r/LocalLLaMA/comments/14mgeuo/want_to_talk_info_retrieval/ | false | false | default | 0 | null |
Qnap TS-264 | 0 | I know [that device](https://www.qnap.com/en/product/ts-264) isn't a powerhouse in the context of AI, but it should be capable of running something, no? A 3B model? Perhaps even some 7B models? If neither, maybe something smaller?
Not hoping to run an actually useful AI, just want to start playing around a bit. Tell me your thoughts.
Some of the specs:
* Intel Celeron N5095
* Built-in Intel UHD Graphics
* 8GB RAM
* 65W adapter | 2023-06-29T23:54:22 | https://www.reddit.com/r/LocalLLaMA/comments/14mklg3/qnap_ts264/ | id278437 | self.LocalLLaMA | 2023-06-30T01:23:11 | 0 | {} | 14mklg3 | false | null | t3_14mklg3 | /r/LocalLLaMA/comments/14mklg3/qnap_ts264/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'nSotlOy4onNXjOZA4YHxBKq1wRDyTaQcr46eMKNOTHE', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/P0bsyY5iwpNakLjBXMxX39kX4ruTeg5bFZyN4ZH_DKU.jpg?width=108&crop=smart&auto=webp&s=40c6db3d7bfaeef0ccf69ecae667bc7e5ea78e6b', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/P0bsyY5iwpNakLjBXMxX39kX4ruTeg5bFZyN4ZH_DKU.jpg?width=216&crop=smart&auto=webp&s=1fb4387947d6c61bd4bf3ab312060b323e805540', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/P0bsyY5iwpNakLjBXMxX39kX4ruTeg5bFZyN4ZH_DKU.jpg?width=320&crop=smart&auto=webp&s=ba8f110cebf6fa7b6741b1cf39305f8ae6aeefb8', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/P0bsyY5iwpNakLjBXMxX39kX4ruTeg5bFZyN4ZH_DKU.jpg?width=640&crop=smart&auto=webp&s=ea7dbf24814bb82e92a0d0704c33b3832cb25fdf', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/P0bsyY5iwpNakLjBXMxX39kX4ruTeg5bFZyN4ZH_DKU.jpg?width=960&crop=smart&auto=webp&s=202863b6446ab26fad5d5ed5e8e73f1ef7e3cac5', 'width': 960}], 'source': {'height': 625, 'url': 'https://external-preview.redd.it/P0bsyY5iwpNakLjBXMxX39kX4ruTeg5bFZyN4ZH_DKU.jpg?auto=webp&s=e86f0c97d2f2c8cf6921c2dcc32729aa68d284ca', 'width': 1000}, 'variants': {}}]} |
Benchmarks (TheBloke_wizard-vicuna-13B-GGML, TheBloke_WizardLM-7B-V1.0-Uncensored-GGML, and TheBloke_WizardLM-7B-V1.0-Uncensored-GPTQ) | 47 | Hey Everyone, since TheBloke and others have been so kind as to provide so many models, I went ahead and benchmarked two of them. My goal was to determine how they performed on different sets of hardware using different model sizes and tweaks. I used a Dell PowerEdge server with VMware to give it various amounts of compute and memory to see how it would scale and do under different GGML models and under different model configurations, i.e threads, n\_batch, n-gpu-layers, and n\_ctx. I also used a Razer Blade with a RTX 3070 to compare using the same GGML and threw in a GPTQ for reference.
The bottom line is, don't really bother to modify the model configurations as they are really good out of the box for the GGML models. Thanks again to everyone for helping to make using local AI so simple and easy.
Also, to help any newbies on here, I went ahead and made a step by step guide using VMware, Windows, and oobabooga with TheBloke's models. You can view it [here](https://www.ideasquantified.com/ai-as-a-local-service-why-what-and-how-plus-benchmarks/).
​ | 2023-06-30T01:13:04 | https://www.reddit.com/r/LocalLLaMA/comments/14mmb2j/benchmarks_thebloke_wizardvicuna13bggml_thebloke/ | palvaran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14mmb2j | false | null | t3_14mmb2j | /r/LocalLLaMA/comments/14mmb2j/benchmarks_thebloke_wizardvicuna13bggml_thebloke/ | false | false | self | 47 | null |
I got llama.cpp running with CLBlast on the Pixel 7a | 57 | You can too! | 2023-06-30T04:36:38 | https://imgur.com/gallery/7wgYxxs | astrotunt | imgur.com | 1970-01-01T00:00:00 | 0 | {} | 14mqe0o | false | {'oembed': {'author_name': '', 'description': 'Discover the magic of the internet at Imgur, a community powered entertainment destination. Lift your spirits with funny jokes, trending memes, entertaining gifs, inspiring stories, viral videos, and so much more from users like astrotunt.', 'height': 1373, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fimgur.com%2Fa%2F7wgYxxs%2Fembed%3Fpub%3Dtrue%26ref%3Dhttps%253A%252F%252Fembed.ly%26w%3D900&display_name=Imgur&url=https%3A%2F%2Fimgur.com%2Fa%2F7wgYxxs&image=https%3A%2F%2Fi.imgur.com%2F6Pczf35.jpg%3Ffb&key=2aa3c4d5f3de4f5b9120b660ad850dc9&type=text%2Fhtml&schema=imgur" width="600" height="1373" scrolling="no" title="Imgur embed" frameborder="0" allow="autoplay; fullscreen" allowfullscreen="true"></iframe>', 'provider_name': 'Imgur', 'provider_url': 'http://imgur.com', 'thumbnail_height': 315, 'thumbnail_url': 'https://i.imgur.com/6Pczf35.jpg?fb', 'thumbnail_width': 600, 'title': 'Got llama.cpp running with CLBlast on a Pixel 7a', 'type': 'rich', 'url': 'https://imgur.com/a/7wgYxxs', 'version': '1.0', 'width': 600}, 'type': 'imgur.com'} | t3_14mqe0o | /r/LocalLLaMA/comments/14mqe0o/i_got_llamacpp_running_with_clblast_on_the_pixel/ | false | false | 57 | {'enabled': False, 'images': [{'id': 'nW5UhyzvSyQBLS_gNWj1KbpzRUXBVU4aeSIgXwxYSXE', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/-Jz4JKi6V5FCxaFHDqZeP5fn7aNkL-9w5Gq10m5xPQo.jpg?width=108&crop=smart&auto=webp&s=65950f96b0bfd49b4201499ad8d01a07cf4cbff4', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/-Jz4JKi6V5FCxaFHDqZeP5fn7aNkL-9w5Gq10m5xPQo.jpg?width=216&crop=smart&auto=webp&s=1e94736b781975b0465a0a688fa202ddb3ba482f', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/-Jz4JKi6V5FCxaFHDqZeP5fn7aNkL-9w5Gq10m5xPQo.jpg?width=320&crop=smart&auto=webp&s=09081b303f45f696c30c0e778e14c2181488f289', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/-Jz4JKi6V5FCxaFHDqZeP5fn7aNkL-9w5Gq10m5xPQo.jpg?width=640&crop=smart&auto=webp&s=615ad8a58d69ce7ef723ff73ae29f9b4eec532bf', 'width': 640}, {'height': 1920, 'url': 'https://external-preview.redd.it/-Jz4JKi6V5FCxaFHDqZeP5fn7aNkL-9w5Gq10m5xPQo.jpg?width=960&crop=smart&auto=webp&s=02dd9b68df42458b133b1ffd0e630b2486ca6df7', 'width': 960}, {'height': 2160, 'url': 'https://external-preview.redd.it/-Jz4JKi6V5FCxaFHDqZeP5fn7aNkL-9w5Gq10m5xPQo.jpg?width=1080&crop=smart&auto=webp&s=8c7a87aac96c3becfe22b8e5cff93cec094b33bf', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://external-preview.redd.it/-Jz4JKi6V5FCxaFHDqZeP5fn7aNkL-9w5Gq10m5xPQo.jpg?auto=webp&s=3b295e7992fd1331646c0a74a2725c8ddac03d92', 'width': 1080}, 'variants': {}}]} |
|
Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] | 1 | [deleted] | 2023-06-30T05:24:10 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14mra9p | false | null | t3_14mra9p | /r/LocalLLaMA/comments/14mra9p/found_duplicate_libcudartso_libcudartso110/ | false | false | default | 1 | null |
||
Issues with WizardLM-30B-Uncensored.ggmlv3.q4_1 | 5 | Oh dear.
I downloaded the Oobabooga dependencies with the one click installer, and then I downloaded the model I was interested in (WizardLM-30B-Uncensored.ggmlv3.q4\_1.bin). I restarted the Oobabooga program, and then navigated over to get the web UI page opened, and saw the model loaded in. I put in my character's pic, and my pic, and saved it. I changed the prompt to a roleplay opening, and then typed in a short starting sentence to test the waters.
My graphics card started whirling. I opened task manager to watch what was happening, the MEMORY and SSD tabs crept up to 100% but my GPU never went over 3%. When I was installing Oobabooga, it asked me my graphics card info NVIDIA 4090 24VRAM, and whether I'd be using GPU or CPU, and I chose GPU - but it didn't seem to be pulling ANY resources from there.
After a few minutes a single: "\*s" appeared as the "answer" to my hello rp prompt.
Does it sound like I should just try the smaller model before trying to run the 30B, or does it sound like I've made a screw up and that's what is causing it to be slow and respond with only a letter?
I have searched the web UI area, and don't see my issue. I am happy to read through the appropriate material and figure it out myself if someone in the know would be able to point me in the right direction.
Thanks! | 2023-06-30T05:30:52 | https://www.reddit.com/r/LocalLLaMA/comments/14mren3/issues_with_wizardlm30buncensoredggmlv3q4_1/ | frostybaby13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14mren3 | false | null | t3_14mren3 | /r/LocalLLaMA/comments/14mren3/issues_with_wizardlm30buncensoredggmlv3q4_1/ | false | false | self | 5 | null |
Dynamically Scaled RoPE further increases performance of long context LLaMA with zero fine-tuning | 205 | When /u/kaiokendev first posted about linearly interpolating RoPE for longer sequences, I (and a few others) had wondered if it was possible to pick the correct `scale` parameter dynamically based on the sequence length rather than having to settle for the fixed tradeoff of maximum sequence length vs. performance on shorter sequences. My idea was to use the exact position values for the first 2k context (after all, why mess with a good thing?) and then re-calculate the position vector for every new sequence length as the model generates token by token. Essentially, set `scale` to `original model context length / current sequence length`. This has the effect of slowly increasing `scale` as the sequence length increases.
I did some experiments and found that this has very strong performance, much better than simple linear interpolation. When /u/bloc97 posted his NTK-Aware method, it was much closer to this dynamic linear scaling in terms of performance. Compared to dynamic linear scaling, NTK-Aware has higher perplexity for shorter sequences, but better perplexity at the tail end of the sequence lengths. Unfortunately, it also suffers from catastrophic perplexity blowup, just like regular RoPE and static linear scaling.
The main hyperparamter of NTK-Aware is `α`. Like static linear scaling, it represents a tradeoff between short/long sequence performance. So I thought, why not use the same dynamic scaling method with NTK-Aware? For Dynamic NTK, the scaling of `α` is set to `(α * current sequence length / original model context length) - (α - 1)`. The idea again is to dynamically scale the hyperparameter as the sequence length increases. Behold:
https://preview.redd.it/2qdj7itsb39b1.png?width=662&format=png&auto=webp&s=464052174151b6ae8b6a9ce42b8f1acc9acabd35
This uses the same methodology as NTK-Aware (perplexity on GovReport test). You can check out all the code on [GitHub](https://github.com/jquesnelle/scaled-rope).
Special thanks to /u/kaiokendev and /u/bloc97 for their invaluable insights and contributions! We're currently considering publishing something with all of these results, time permitting. Feel free to ping me here or on [Twitter](https://twitter.com/theemozilla) with any comments!
As a side note, me and the homies over at [NousResearch](https://twitter.com/nousresearch) will be fine-tuning models based on this, with fully open-source releases out very soon! | 2023-06-30T05:34:06 | https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/ | emozilla | self.LocalLLaMA | 1970-01-01T00:00:00 | 1 | {'gid_2': 1} | 14mrgpr | false | null | t3_14mrgpr | /r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/ | false | false | 205 | {'enabled': False, 'images': [{'id': 'Fpes8chO2Nd25uCZpeHHA8Olb0NYF0uWNokiCRnyDsA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/K4OGnlTXXQf6pZVnCQg7VXguDpQFx8aUAZGBdJp1ObI.jpg?width=108&crop=smart&auto=webp&s=9610d2a9c8d6202342a4e94dddefc4aea3d4d65d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/K4OGnlTXXQf6pZVnCQg7VXguDpQFx8aUAZGBdJp1ObI.jpg?width=216&crop=smart&auto=webp&s=5ce42eca924e7159e20d4ab22b59b7c1670fa56b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/K4OGnlTXXQf6pZVnCQg7VXguDpQFx8aUAZGBdJp1ObI.jpg?width=320&crop=smart&auto=webp&s=c86871995802cccedaeac233dbcb6ce21eadcb3d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/K4OGnlTXXQf6pZVnCQg7VXguDpQFx8aUAZGBdJp1ObI.jpg?width=640&crop=smart&auto=webp&s=ae9d77f5ebe7287d1ebc04118a7f1b55e162ca4f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/K4OGnlTXXQf6pZVnCQg7VXguDpQFx8aUAZGBdJp1ObI.jpg?width=960&crop=smart&auto=webp&s=399f4b852aef32643602bae96adf83254d87b7ea', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/K4OGnlTXXQf6pZVnCQg7VXguDpQFx8aUAZGBdJp1ObI.jpg?width=1080&crop=smart&auto=webp&s=53361f10f9d8cf218363aadec58e79d17194d89b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/K4OGnlTXXQf6pZVnCQg7VXguDpQFx8aUAZGBdJp1ObI.jpg?auto=webp&s=1032b6249d0bcca3bf07d0aab0ec08df006c2c3a', 'width': 1200}, 'variants': {}}]} |
|
What's your reason for using open LLMs? | 49 | Out of curiosity, why do you use open LLMs? (There were lots more good reasons I could think of, but at most 6 options are allowed.)
[View Poll](https://www.reddit.com/poll/14mub80) | 2023-06-30T08:12:49 | https://www.reddit.com/r/LocalLLaMA/comments/14mub80/whats_your_reason_for_using_open_llms/ | hold_my_fish | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14mub80 | false | null | t3_14mub80 | /r/LocalLLaMA/comments/14mub80/whats_your_reason_for_using_open_llms/ | false | false | self | 49 | null |
Python package versions for running ggml models via llama-cpp-python and langchain | 1 | Hello Everyone,
I am currently struggling with updating my Python environments to run different models and model versions. Especially trying to run quantized ggml models via llama-cpp-python with langchain results in errors connected to the versions of the involved packages.
I am spending quite some time on trying different versions of llama-cpp-python, pydantic, typing-inspect, typing-extensions, gpt4all/pygpt4all and langchain...
Has someone a repo/list/table of package versions to run LLaMA models with different ggml versions (v1, v2, v3) with a specific langchain version?
Subsets of the packages, mentioned above might also help.
​
Thank you for your time.
​
Edit: My currently working environments use
eachadea_ggml-vicuna-7b-1.1/ggml-vicuna-7b-1.1-q4_X.bin (GGML v1?)
-> working with llama-cpp-python==0.1.48 langchain==0.0.197 pygpt4all==1.1.0 pydantic==1.9.0 typing-inspect==0.8.0 typing-extensions==4.5.0
vicuna-7B-v1.3-GGML/vicuna-7b-v1.3.ggmlv3.q5_X.bin (GGML v3)
-> working with lama-cpp-python==0.1.64 langchain==0.0.197 pygpt4all==1.1.0 pydantic==1.10.9 typing-inspect==0.9.0 typing_extensions==4.7.0
​ | 2023-06-30T09:19:24 | https://www.reddit.com/r/LocalLLaMA/comments/14mvfk2/python_package_versions_for_running_ggml_models/ | Schwarzfisch13 | self.LocalLLaMA | 2023-06-30T10:30:11 | 0 | {} | 14mvfk2 | false | null | t3_14mvfk2 | /r/LocalLLaMA/comments/14mvfk2/python_package_versions_for_running_ggml_models/ | false | false | self | 1 | null |
Multi-node multi-gpu inference | 6 | Any recommendations about how to deploy LLaMA 30B on multiple nodes? Each node has a single RTX 4090.
All of the multi-node multi-gpu tutorials seem to concentrate on training. | 2023-06-30T09:22:49 | https://www.reddit.com/r/LocalLLaMA/comments/14mvhtf/multinode_multigpu_inference/ | FlanTricky8908 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14mvhtf | false | null | t3_14mvhtf | /r/LocalLLaMA/comments/14mvhtf/multinode_multigpu_inference/ | false | false | self | 6 | null |
TheBloke_WizardLM-Uncensored-SuperCOT-StoryTelling-30B-SuperHOT-8K-GPTQ broken? | 5 | **SOLVED Careful what max\_seq you have and which compress\_pos\_emb you have in oogabooga. Works with 8000 and 4.**
It always goes in this loops:
>why do you do this loops ?
>
>Surely! What's happening currently right now this moment in at the present time? Can you provide more information details about what's going on?
>
>what's going on?
>
>Surely! Is That's great! Thank you for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for for For F O R M A L E, C A N O M I T E D S.
Doesn't happen with other 33B - GPTQs. Works just fine with 13B GPTQ SuperHOT-8K. Only here. Normal variables.
Same happens with: TheBloke\_Samantha-33B-SuperHOT-8K-GPTQ
I am using ExLLama, GPTQ-for-LLaMa and latest oobagoboboa. | 2023-06-30T09:45:18 | https://www.reddit.com/r/LocalLLaMA/comments/14mvuzw/thebloke/ | shaman-warrior | self.LocalLLaMA | 2023-06-30T12:19:26 | 0 | {} | 14mvuzw | false | null | t3_14mvuzw | /r/LocalLLaMA/comments/14mvuzw/thebloke/ | false | false | self | 5 | null |
Best approach and open LLM to interact with database through API | 1 | I am relatively new to the field of LLMs, so I am not sure of the requirements of this idea. I would like to implement an LLM (open-source) that interacts with a platform calling some APIs. For example, I would ask:
"*How many products did I sell yesterday?*"
And the LLM should make a specific call specifying inputs into the request in order to get the data from my internal database (let's say is a Mongo).
Which LLM would You suggest? Should I fine-tune the LLM? | 2023-06-30T09:57:23 | https://www.reddit.com/r/LocalLLaMA/comments/14mw2h4/best_approach_and_open_llm_to_interact_with/ | lahaine93 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14mw2h4 | false | null | t3_14mw2h4 | /r/LocalLLaMA/comments/14mw2h4/best_approach_and_open_llm_to_interact_with/ | false | false | self | 1 | null |
Advice/resources for fine-tuning (or LoRA) a LLaMA model for a single summarization task requiring 8k context. | 21 | My lab has a corpus of a few hundred thousand reports written over the last 60 years. Each report has been converted to raw text, cleaned, and is now between 3k-7k tokens in length.
We want to develop a ~400-600 token summary for each of these documents. The trick is that the summaries need to be written in a particular style, with a particular tone, structure, etc.
We have a dataset of 7k examples. Each example has the original document text, a good summary, and a summary that’s not as good. We can get up to 10k examples if necessary.
Now we know we can fine-tune GPT-4 to accomplish this task, but there are two issues:
1. Our lab can’t afford to run the GPT-4 fine-tune over all of the data
2. Even if we could afford it, the documents may not leave the local firewall, so using the OpenAI API is out.
Given the latest developments around increasing LLaMA context size, it seems like this could be a viable option. But I’m not sure exactly how to go about it—how would I go about developing a fine-tune/LoRA that can handle up to 7k input tokens + 600 output tokens AND tune it to produce the particular style of output we need?
I’m not asking for anyone to give me a step-by-step tutorial or anything, but I’d certainly appreciate any direction you can provide. Even just pointing me toward further reading would be immensely helpful!
Thanks! | 2023-06-30T10:12:17 | https://www.reddit.com/r/LocalLLaMA/comments/14mwc6o/adviceresources_for_finetuning_or_lora_a_llama/ | Shitcoin_maxi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14mwc6o | false | null | t3_14mwc6o | /r/LocalLLaMA/comments/14mwc6o/adviceresources_for_finetuning_or_lora_a_llama/ | false | false | self | 21 | null |
Question regarding model compatibility for Alpaca Turbo | 1 | Greetings,
I am a complete noob with this stuff and I do not even know whether I am asking this in the correct subreddit. I decided to try and run alpaca turbo to see how it goes and because a friend suggested [this model](https://huggingface.co/CalderaAI/30B-Lazarus-GGMLv5_1/tree/main) I tried tunning it. Because of this, I have 2 questions:
1. Is it good? I mean, better than GPT-3?
2. Is there any way to make the responses quicker? (My config: 10400f, 64gb DDR4 ram, rx6600. I am noticing that the GPU is not being used at all, while my CPU is constantly maxed out. Thus I would like to ask whether I could configure the thing to use my Graphics card as well)
Thanks a lot for your help in advance! | 2023-06-30T10:21:23 | https://www.reddit.com/r/LocalLLaMA/comments/14mwi65/question_regarding_model_compatibility_for_alpaca/ | Zealousideal-Ad4860 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14mwi65 | false | null | t3_14mwi65 | /r/LocalLLaMA/comments/14mwi65/question_regarding_model_compatibility_for_alpaca/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '9qP_MSTEcKRAjLyilaethMA5VEdxMg4HND9iwxGdFbc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/M9Cr0IMMTc3Yypm6dO5HJ7X7ytcYtHfxTSJvNa6WMMs.jpg?width=108&crop=smart&auto=webp&s=431d558886fd7977f31f3339f7d4115af3401bb7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/M9Cr0IMMTc3Yypm6dO5HJ7X7ytcYtHfxTSJvNa6WMMs.jpg?width=216&crop=smart&auto=webp&s=a35902cca6dcaa775d659f318497fda3c44ce815', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/M9Cr0IMMTc3Yypm6dO5HJ7X7ytcYtHfxTSJvNa6WMMs.jpg?width=320&crop=smart&auto=webp&s=5d01855e64aa71d113499b8b588718bff2113116', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/M9Cr0IMMTc3Yypm6dO5HJ7X7ytcYtHfxTSJvNa6WMMs.jpg?width=640&crop=smart&auto=webp&s=e2567db9266a87828f2bcffcf8215c56c46fed12', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/M9Cr0IMMTc3Yypm6dO5HJ7X7ytcYtHfxTSJvNa6WMMs.jpg?width=960&crop=smart&auto=webp&s=aec1264aa922b53e2395d9c22950f2b09a4ed1f1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/M9Cr0IMMTc3Yypm6dO5HJ7X7ytcYtHfxTSJvNa6WMMs.jpg?width=1080&crop=smart&auto=webp&s=312ad314cce54a1e41c237ea2577ab25109b3a06', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/M9Cr0IMMTc3Yypm6dO5HJ7X7ytcYtHfxTSJvNa6WMMs.jpg?auto=webp&s=595bf041f57c6f3f4340c2ad64c557918f921eb7', 'width': 1200}, 'variants': {}}]} |
Any LLM trained to mod games? | 1 | [removed] | 2023-06-30T11:03:46 | https://www.reddit.com/r/LocalLLaMA/comments/14mxaxg/any_llm_trained_to_mod_games/ | NoZenYet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14mxaxg | false | null | t3_14mxaxg | /r/LocalLLaMA/comments/14mxaxg/any_llm_trained_to_mod_games/ | false | false | default | 1 | null |
Advises/Recommendations for a production-ready model for a single RTX 4090 GPU or two 4090 GPUs | 1 | [removed] | 2023-06-30T11:26:05 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14mxqp6 | false | null | t3_14mxqp6 | /r/LocalLLaMA/comments/14mxqp6/advisesrecommendations_for_a_productionready/ | false | false | default | 1 | null |
||
SlimPajama: A 627B token, cleaned and deduplicated version of RedPajama; largest deduplicated, multi-corpora, open-source, dataset for training large language models. (Cerebras) | 162 | 2023-06-30T11:27:30 | https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama/ | alchemist1e9 | cerebras.net | 1970-01-01T00:00:00 | 0 | {} | 14mxrri | false | null | t3_14mxrri | /r/LocalLLaMA/comments/14mxrri/slimpajama_a_627b_token_cleaned_and_deduplicated/ | false | false | 162 | {'enabled': False, 'images': [{'id': 'AfI0p8xZkRM05paCc52E1ZDjA8ys-ki4f-lEO60m3qg', 'resolutions': [{'height': 86, 'url': 'https://external-preview.redd.it/PoDigURxJe-FHmyuk4iNo7lv_ntKAI__crW484jYQvI.jpg?width=108&crop=smart&auto=webp&s=5c5d9e0c6db449f516eef629cae286648ac4ed55', 'width': 108}, {'height': 172, 'url': 'https://external-preview.redd.it/PoDigURxJe-FHmyuk4iNo7lv_ntKAI__crW484jYQvI.jpg?width=216&crop=smart&auto=webp&s=db81f86acc90d33da8d196e7256e4dceaadc20ef', 'width': 216}, {'height': 255, 'url': 'https://external-preview.redd.it/PoDigURxJe-FHmyuk4iNo7lv_ntKAI__crW484jYQvI.jpg?width=320&crop=smart&auto=webp&s=085d04880071144f184e98aacfc85e054bcb1fc5', 'width': 320}, {'height': 511, 'url': 'https://external-preview.redd.it/PoDigURxJe-FHmyuk4iNo7lv_ntKAI__crW484jYQvI.jpg?width=640&crop=smart&auto=webp&s=d53a15f148ed677d3c0872346b9ee31087323968', 'width': 640}, {'height': 767, 'url': 'https://external-preview.redd.it/PoDigURxJe-FHmyuk4iNo7lv_ntKAI__crW484jYQvI.jpg?width=960&crop=smart&auto=webp&s=df23f09a81c4fb9d2afcf45ab8d7bb7e6b6b546d', 'width': 960}, {'height': 863, 'url': 'https://external-preview.redd.it/PoDigURxJe-FHmyuk4iNo7lv_ntKAI__crW484jYQvI.jpg?width=1080&crop=smart&auto=webp&s=046c795505a054f993177d1386db29f52f47c704', 'width': 1080}], 'source': {'height': 1018, 'url': 'https://external-preview.redd.it/PoDigURxJe-FHmyuk4iNo7lv_ntKAI__crW484jYQvI.jpg?auto=webp&s=5f532a00f69765a53eb948009e3956c1889e6da6', 'width': 1273}, 'variants': {}}]} |
||
Speculative Sampling | 21 | llama.cpp issue [#2030](https://github.com/ggerganov/llama.cpp/issues/2030) is rather interesting, it links to a paper [Accelerating Large Language Model Decoding with Speculative Sampling](https://arxiv.org/abs/2302.01318)
> We present speculative sampling, an algorithm for accelerating transformer decoding by enabling the generation of multiple tokens from each transformer call. Our algorithm relies on the observation that the latency of parallel scoring of short continuations, generated by a faster but less powerful draft model, is comparable to that of sampling a single token from the larger target model. This is combined with a novel modified rejection sampling scheme which preserves the distribution of the target model within hardware numerics. We benchmark speculative sampling with Chinchilla, a 70 billion parameter language model, achieving a 2-2.5x decoding speedup in a distributed setup, without compromising the sample quality or making modifications to the model itself.
**tldr:** Brace yourselves, multi-model sampling is coming. | 2023-06-30T12:53:00 | https://www.reddit.com/r/LocalLLaMA/comments/14mzln4/speculative_sampling/ | kryptkpr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14mzln4 | false | null | t3_14mzln4 | /r/LocalLLaMA/comments/14mzln4/speculative_sampling/ | false | false | self | 21 | {'enabled': False, 'images': [{'id': 'dwd3gnYtQkUgWjWo4Xrdyqwtbd4Mt4D6OYLwWsLYN54', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nIMPXJWIO7eyqJaOzuxK3WJ9tRGC3Ld6iBwlI8-rRO4.jpg?width=108&crop=smart&auto=webp&s=fbfb75c22d31e9ba01c0872adbad36f3e1d53085', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nIMPXJWIO7eyqJaOzuxK3WJ9tRGC3Ld6iBwlI8-rRO4.jpg?width=216&crop=smart&auto=webp&s=67a8e96ab47c17d92f3f4516d8b943d547f7346f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nIMPXJWIO7eyqJaOzuxK3WJ9tRGC3Ld6iBwlI8-rRO4.jpg?width=320&crop=smart&auto=webp&s=a21ffb0a4ff1a60261fa238c938b2144c0618316', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nIMPXJWIO7eyqJaOzuxK3WJ9tRGC3Ld6iBwlI8-rRO4.jpg?width=640&crop=smart&auto=webp&s=a4c018673a0fe6c5dd8980ed79d9169c3b09654a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nIMPXJWIO7eyqJaOzuxK3WJ9tRGC3Ld6iBwlI8-rRO4.jpg?width=960&crop=smart&auto=webp&s=47abb4b0679155b8f22dd0dcd850edf8d585a360', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nIMPXJWIO7eyqJaOzuxK3WJ9tRGC3Ld6iBwlI8-rRO4.jpg?width=1080&crop=smart&auto=webp&s=b791f8b17f88caae1eff2508c8580173a24bd46f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nIMPXJWIO7eyqJaOzuxK3WJ9tRGC3Ld6iBwlI8-rRO4.jpg?auto=webp&s=7b59bb187ab59953d07deb2e27e3dfb82ab14293', 'width': 1200}, 'variants': {}}]} |
Why do some models output a word and then go back and delete it / replace it? | 4 | Hi. I noticed with some models, at the beginning of generating a response, sometimes it would output a word, then go back and delete that word and replace it with something else. Just wondering the technical reason for how that works. | 2023-06-30T13:48:17 | https://www.reddit.com/r/LocalLLaMA/comments/14n0xdx/why_do_some_models_output_a_word_and_then_go_back/ | orick | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14n0xdx | false | null | t3_14n0xdx | /r/LocalLLaMA/comments/14n0xdx/why_do_some_models_output_a_word_and_then_go_back/ | false | false | self | 4 | null |
airoboros 1.4 family of models | 73 | Hello all,
Another day, another airoboros release for you to test (feedback always welcome!)
### Links
Here are the original pytorch native versions:
- https://huggingface.co/jondurbin/airoboros-7b-gpt4-1.4
- https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.4
- https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.4
- https://huggingface.co/jondurbin/airoboros-65b-gpt4-1.4
Experimental qlora fine-tuned mpt-30b base model:
- https://huggingface.co/jondurbin/airoboros-mpt-30b-gpt4-1p4-five-epochs
- discussion here: https://www.reddit.com/r/LocalLLaMA/comments/14jf5xk/airoboros_mpt30b_qlora_mostly_successful/
- https://huggingface.co/TheBloke/airoboros-mpt-30b-gpt4-1p4-GGML
The 7b and 13b are full fine-tunes via FastChat, but since many were asking what the difference in model outputs between a full-fine-tune and qlora fine-tune, I've also added qlora versions here:
- https://huggingface.co/jondurbin/airoboros-7b-gpt4-1.4.1-qlora
- https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.4.1-qlora
TheBloke has kindly quantized these:
- https://huggingface.co/TheBloke/airoboros-7B-gpt4-1.4-GGML
- https://huggingface.co/TheBloke/airoboros-7B-gpt4-1.4-GPTQ
- https://huggingface.co/TheBloke/airoboros-13B-gpt4-1.4-GGML
- https://huggingface.co/TheBloke/airoboros-13B-gpt4-1.4-GPTQ
- https://huggingface.co/TheBloke/airoboros-33B-gpt4-1.4-GGML
- https://huggingface.co/TheBloke/airoboros-33B-gpt4-1.4-GPTQ
- https://huggingface.co/TheBloke/airoboros-65B-gpt4-1.4-GGML
- https://huggingface.co/TheBloke/airoboros-65B-gpt4-1.4-GPTQ
And the SuperHOT 8k versions:
- https://huggingface.co/TheBloke/airoboros-13b-gpt4-1.4-SuperHOT-8K-GPTQ
- https://huggingface.co/TheBloke/airoboros-33B-gpt4-1-4-SuperHOT-8K-GPTQ
- https://huggingface.co/TheBloke/airoboros-33B-gpt4-1-4-SuperHOT-8K-GGML
- https://huggingface.co/TheBloke/airoboros-13b-gpt4-1.4-SuperHOT-8K-fp16
- https://huggingface.co/TheBloke/airoboros-33B-gpt4-1-4-SuperHOT-8K-fp16
Recommended prompt format:
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request. USER: [prompt] ASSISTANT:
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
Updates from 1.2 (or 1.3 if you tried that):
- dataset includes more orca style reasoning/math prompts, this time with steps/explanation first then final answer
- several examples of riddles added in
- I tried adding some jokes, but it's still pretty terrible at humor
- all coding examples have a corresponding PLAINFORMAT version (to avoid markdown backticks, explanations, usage, etc. and just print the code)
### Example prompts/responses
Here are some example input/output pairs showing some of the new abilities (65b, fp16):
Trivia with anagrams and (sometimes accurate) palindromes.
USER: This anagram of "listen" is the opposite of loud
ASSISTANT: silent
Reasoning:
USER: A rancher has 12 cows, and all but 3 die. How many are left?
ASSISTANT: There are 3 cows left.
USER: If it takes 13 minutes to bake a dozen cookies, how long will it take to bake 36 cookies? My neighbor said I could borrow their stove also.
ASSISTANT: Assuming that the baking process is parallelizable and your oven can accommodate multiple trays of cookies at once, you should still be able to bake 36 cookies in the same 13 minutes. This is because the baking process is not necessarily dependent on the number of cookies being baked at once, but rather the time required for each individual cookie to cook.
However, if your oven can only fit one tray of cookies at a time, then you would need to bake multiple batches. In this case, it would take 36 / 12 = 3 batches, and since each batch takes 13 minutes, the total time would be 3 * 13 = 39 minutes.
It's important to note that this calculation assumes similar conditions for baking all the cookies, such as consistent oven temperature and ingredients. Any variations in these factors might affect the actual baking time. Additionally, if you are using your neighbor's stove, you should confirm that it can accommodate your baking needs before proceeding.
USER: Jim is faster than Bob. Bob is faster than Steve. Is Steve faster than Jim?
ASSISTANT: No, Steve is not faster than Jim.
Here's the reasoning behind this:
1. We know that Jim is faster than Bob.
2. We also know that Bob is faster than Steve.
3. Since Bob is faster than Steve, it means that Steve cannot be faster than Jim because if Steve were faster than Jim, then Bob would also have to be faster than Jim, which we know is not true.
4. Therefore, Steve is not faster than Jim.
USER: Tom is smarter than Jack. Jack is shorter than Jill. Is James smart and tall?
ASSISTANT: There is not enough information to determine if James is smart and tall.
USER: I started with 13 grapes, then Tommy stole 2 of my bananas. How many grapes do I have left?
ASSISTANT: You still have 13 grapes left, as Tommy only stole bananas.
USER: If you pass the person in 2nd place, which place are you in?
ASSISTANT: If you pass the person in 2nd place, you would be in 2nd place.
USER: If a train leaves the station and travels 60 miles per hour, how far will it have gone in 2.5 hours?
ASSISTANT: To find out how far the train will travel in 2.5 hours, we need to multiply its speed by the time traveled:
Distance = Speed * Time
In this case, the speed is 60 miles per hour, and the time is 2.5 hours. So,
Distance = 60 * 2.5
Distance = 150 miles
The train will have gone 150 miles in 2.5 hours.
Jokes and riddles:
USER: Tell me a joke about bananas.
ASSISTANT: Why did the banana go to the doctor? Because he wasn't peeling well!
USER: A plane crashes on the border of the U.S. and Canada. Where do they bury the survivors?
ASSISTANT: Survivors aren't buried.
USER: What is greater than God, More evil than the devil, The poor have it, The rich don't need it, And if you eat it, you'll die?
ASSISTANT: The answer is nothing. The reasoning behind this is that nothing is greater than God, nothing is more evil than the devil, and nothing is needed by the rich while the poor have nothing. Additionally, if you eat nothing, you will die from starvation.
### Multi-character chat
I also tried adding in multi-character chat, for example:
Characters:
- Yoda, the ever clever jedi master.
- Homer Simpson, you know, the goofy yellow guy.
- USER/Jon - don't generate any speech or actions on behalf of Jon
Rules:
- be sure to use the mannerisms and speaking style of the characters correctly
- surround character actions with asterisks
- always put quotes around words spoken by the characters
- use a new line for each character's actions or speech
- always prefix actions/speech with the character's name and colon
- only generate text and actions on behalf of Yoda or Homer, and never anyone else
Conversation will revolve around the grapes, in a local cafe with delicious coffee.
Generate a single interaction between Yoda and Homer to start, with one sentence each.
ASSISTANT: Yoda: "Mmm, these grapes are quite delicious, Master Homer."
Homer: "Oh yeah, Yoda! They're the best grapes I've ever had!"
USER: *enters the cafe* Sorry I'm late guys!
ASSISTANT: Yoda: "No need to apologize, Jon. We were just enjoying these delightful grapes."
Homer: "Yeah, man! It's not every day you get to eat grapes with a real-life Jedi Master!"
*Yoda raises an eyebrow*
Still trying to figure the best prompt format for this and it sometimes will still start generating on behalf of USER:, so you'll probably want "USER:" to be an early stop token. Please experiment and let me know what works for you.
Edit: Added mpt-30b links | 2023-06-30T14:18:18 | https://www.reddit.com/r/LocalLLaMA/comments/14n1p74/airoboros_14_family_of_models/ | JonDurbin | self.LocalLLaMA | 2023-06-30T15:08:08 | 0 | {} | 14n1p74 | false | null | t3_14n1p74 | /r/LocalLLaMA/comments/14n1p74/airoboros_14_family_of_models/ | false | false | self | 73 | {'enabled': False, 'images': [{'id': 'tgVjTMU8aawOD4ZXTauRiuN9j_8cY7UjTI5ZK2blhXA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kVrb-XP8_LN1jkgNrKjfBlf005dL7eyON1RFGLj_e20.jpg?width=108&crop=smart&auto=webp&s=1820bbade9ea7c8664840bc45a23f1991c8e28e8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/kVrb-XP8_LN1jkgNrKjfBlf005dL7eyON1RFGLj_e20.jpg?width=216&crop=smart&auto=webp&s=5ac4118420d6093a367ed42bc756303518c2a21e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/kVrb-XP8_LN1jkgNrKjfBlf005dL7eyON1RFGLj_e20.jpg?width=320&crop=smart&auto=webp&s=ebc4ebd00aaa458d83ae453d1a585b23d22f4878', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/kVrb-XP8_LN1jkgNrKjfBlf005dL7eyON1RFGLj_e20.jpg?width=640&crop=smart&auto=webp&s=8204b2aee2d7b3fc193b9a82b721de1ebd2015a3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/kVrb-XP8_LN1jkgNrKjfBlf005dL7eyON1RFGLj_e20.jpg?width=960&crop=smart&auto=webp&s=47cf6380ed8924c7a9e628f5f5d29a142ccad3b2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/kVrb-XP8_LN1jkgNrKjfBlf005dL7eyON1RFGLj_e20.jpg?width=1080&crop=smart&auto=webp&s=7a1fa60a7983aaabf0b4771d81143afa5b4a0310', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/kVrb-XP8_LN1jkgNrKjfBlf005dL7eyON1RFGLj_e20.jpg?auto=webp&s=ba4c32da64078c8b0750030caf0790f8b8a26d38', 'width': 1200}, 'variants': {}}]} |
What are some popular LoRAs? | 1 | [removed] | 2023-06-30T14:18:54 | https://www.reddit.com/r/LocalLLaMA/comments/14n1pq0/what_are_some_popular_loras/ | gptzerozero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14n1pq0 | false | null | t3_14n1pq0 | /r/LocalLLaMA/comments/14n1pq0/what_are_some_popular_loras/ | false | false | default | 1 | null |
Why is GPT4All faster than Koboldcpp? | 1 | [removed] | 2023-06-30T14:32:04 | https://www.reddit.com/r/LocalLLaMA/comments/14n21yc/why_is_gpt4all_faster_than_koboldcpp/ | chocolatebanana136 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14n21yc | false | null | t3_14n21yc | /r/LocalLLaMA/comments/14n21yc/why_is_gpt4all_faster_than_koboldcpp/ | false | false | default | 1 | null |
MosiacML MPT models start to hallucinate? | 6 | Hey everyone, I've been evaluating a bunch of models recently (mostly for fun, but I work in the field). I tried the 30B versions of the MPT models in the text-generation-ui as well as the storyteller 7B. Well the results were kinda ass, they would start out coherent and start hallucinating and then just outputting garbage. Am I missing some inference parameters or do others feel they are not trained well. | 2023-06-30T14:57:02 | https://www.reddit.com/r/LocalLLaMA/comments/14n2ovh/mosiacml_mpt_models_start_to_hallucinate/ | drwebb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14n2ovh | false | null | t3_14n2ovh | /r/LocalLLaMA/comments/14n2ovh/mosiacml_mpt_models_start_to_hallucinate/ | false | false | self | 6 | null |
MPT-30B QLoRA on 24 GB VRAM | 14 | Has anybody managed to train a QLoRA on MPT-30B with 24 GB VRAM? I am currently struggling to get that working on a single GPU. Seems to be only working with at leas 2 GPUs.
I am using the lora patch from: [https://github.com/iwalton3/mpt-lora-patch/tree/master](https://github.com/iwalton3/mpt-lora-patch/tree/master)
and the following BNB & lora config:
`nf4config = BitsAndBytesConfig(`
`load_in_4bit=True,`
`bnb_4bit_quant_type="nf4",`
`bnb_4bit_use_double_quant=True,`
`bnb_4bit_compute_dtype=torch.bfloat16`
`)`
and
`lora_config = LoraConfig(`
`r=8,`
`lora_alpha=32,`
`target_modules=["up_proj", "down_proj"],`
`lora_dropout=0.05,`
`inference_mode=False,`
`bias="none",`
`task_type=TaskType.CAUSAL_LM`
`)`
​ | 2023-06-30T15:36:57 | https://www.reddit.com/r/LocalLLaMA/comments/14n3rfv/mpt30b_qlora_on_24_gb_vram/ | eggandbacon_0056 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14n3rfv | false | null | t3_14n3rfv | /r/LocalLLaMA/comments/14n3rfv/mpt30b_qlora_on_24_gb_vram/ | false | false | self | 14 | null |
I am trying to follow instruction to get CPU + GPU. Do I run this in a regular cmd or do i need to activate an environment somewhere inside oobabooga folder? | 1 | [removed] | 2023-06-30T16:39:09 | https://www.reddit.com/r/LocalLLaMA/comments/14n5d7w/i_am_trying_to_follow_instruction_to_get_cpu_gpu/ | multiverse_fan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14n5d7w | false | null | t3_14n5d7w | /r/LocalLLaMA/comments/14n5d7w/i_am_trying_to_follow_instruction_to_get_cpu_gpu/ | false | false | default | 1 | null |
Best LLM as a writing aide? | 51 | Yes I've tried Samantha the editor, and my results with it were very very poor compared to whatever else I've tried.
After going through many benchmarks, and my own very informal testing I've narrowed down my favorite LLaMA models to Vicuna 1.3, WizardLM 1.0 (and it's uncensored variants), and Airoboros 1.4 (we need more benchmarks between the three!). Honorable mention to Nous Hermes which does quite well too.
Now my issue here is that it's hard for me to tell which actually does a better job as a writing aide, since I would have to be the judge of it.. In some cases its easy to tell one is clearly better, but these three are all pretty good, so I wanted some second opinions.
I'm open to trying other options as well, and they don't need to specifically be LLaMA models, they just need to be free to use without limits (so no chatgpt). I can only run up to 13b on my system comfortably (5800h, 8gb 3070 laptop gpu, 32gb ddr4, 1tb sn730 nvme). My use case is for writing quality improvements, so mainly things like revising, rewriting, editing, and correcting grammar errors in my writing. | 2023-06-30T16:49:18 | https://www.reddit.com/r/LocalLLaMA/comments/14n5m8b/best_llm_as_a_writing_aide/ | lemon07r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14n5m8b | false | null | t3_14n5m8b | /r/LocalLLaMA/comments/14n5m8b/best_llm_as_a_writing_aide/ | false | false | self | 51 | null |
How to load adapters | 1 | [removed] | 2023-06-30T17:16:58 | https://www.reddit.com/r/LocalLLaMA/comments/14n6c57/how_to_load_adapters/ | 04RR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14n6c57 | false | null | t3_14n6c57 | /r/LocalLLaMA/comments/14n6c57/how_to_load_adapters/ | false | false | default | 1 | null |
Who is openllama from? | 0 | [removed] | 2023-06-30T17:52:14 | https://www.reddit.com/r/LocalLLaMA/comments/14n77yy/who_is_openllama_from/ | klop2031 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14n77yy | false | null | t3_14n77yy | /r/LocalLLaMA/comments/14n77yy/who_is_openllama_from/ | false | false | default | 0 | null |
How to train Vecuna on 2 GPUs at the same time? | 0 | [removed] | 2023-06-30T18:06:49 | https://www.reddit.com/r/LocalLLaMA/comments/14n7lqm/how_to_train_vecuna_on_2_gpus_at_the_same_time/ | BlueSchnabeltier | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14n7lqm | false | null | t3_14n7lqm | /r/LocalLLaMA/comments/14n7lqm/how_to_train_vecuna_on_2_gpus_at_the_same_time/ | false | false | default | 0 | null |
can remote llms achieve zero-knowledge privacy? | 5 | We noticed that privacy was a major concern among voters in the recent survey, personally, I shared similar sentiments.
Is there a way to have the LLM run in such a way that it computes the response encrypted with a sort of public key, that the user gives, so we could feed it deeply somehow in the network that when response gets out? So no one will ever know what the AI spewed, except the requester.
A sort of deep neural encryptor. "I was never here".
​ | 2023-06-30T18:48:42 | https://www.reddit.com/r/LocalLLaMA/comments/14n8opc/can_remote_llms_achieve_zeroknowledge_privacy/ | shaman-warrior | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14n8opc | false | null | t3_14n8opc | /r/LocalLLaMA/comments/14n8opc/can_remote_llms_achieve_zeroknowledge_privacy/ | false | false | self | 5 | null |
What limits the size of the large model, the size of the data set or the current memory capacity and computing power? Or is it not worth increasing the size of the model? Already reached the ceiling? | 0 | What limits the size of the large model, the size of the data set or the current memory capacity and computing power? The current maximum size of open source models is 65B (maybe 130B?). Or is it not worth increasing the size of the model? Already reached the ceiling? My statement may not be quite correct, but you should get the idea. | 2023-06-30T18:59:21 | https://www.reddit.com/r/LocalLLaMA/comments/14n8ycb/what_limits_the_size_of_the_large_model_the_size/ | Winter-Memory-3797 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14n8ycb | false | null | t3_14n8ycb | /r/LocalLLaMA/comments/14n8ycb/what_limits_the_size_of_the_large_model_the_size/ | false | false | self | 0 | null |
Im waiting for Local LLM for translating comics and mangas | 8 | I know it will be possible sadly i dont see any project around it. I really want to be able to run local LLM to translate my mangas or other images. I read a lot and the only reason i want to learn some language to be able to read more but it is just too much work. | 2023-06-30T19:25:05 | https://www.reddit.com/r/LocalLLaMA/comments/14n9luh/im_waiting_for_local_llm_for_translating_comics/ | Kronosz14 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14n9luh | false | null | t3_14n9luh | /r/LocalLLaMA/comments/14n9luh/im_waiting_for_local_llm_for_translating_comics/ | false | false | self | 8 | null |
Best LLM trained on cyber security data | 0 | [removed] | 2023-06-30T19:54:32 | https://www.reddit.com/r/LocalLLaMA/comments/14nac65/best_llm_trained_on_cyber_security_data/ | nutcustard | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nac65 | false | null | t3_14nac65 | /r/LocalLLaMA/comments/14nac65/best_llm_trained_on_cyber_security_data/ | false | false | default | 0 | null |
RedPajama-INCITE-7B-Chat very bad at chat - what am I missing? | 5 | Hello,
I just recently started playing around with local LLMs (using oobabooga). I've downloaded several models to mess with, and I've found that of the models I've used, TheBloke\_WizardLM-33B-V1.0-Uncensored-GPTQ performs the best for my use case. But I'd like to integrate chat, summarization and classification capabilities into commercial applications someday, so I'm looking for something with a permissible license.
I just tried RedPajama-INCITE-7B-Chat, which I understand offers a commerical license, but I'm astonished at how very bad it is for chat. Following is a chat history I downloaded for a very short chat session with RedPajama:
"data\_visible": \[
\[ "Hello, my name is Bill.", "Hi Bill! How are you today?" \],
\[ "I am well. Please remember the following phrase for me: "foo bar"", "Okay, here's an example of how I could respond if you asked me about a famous painting later known as The School Room: \\n\\n>Hello, my name is Assistant. How are you doing today?\\n>I'm doing very well, thank you. If you have any questions or need assistance with anything, feel free to ask.\\n>What subject would you like me to assist you in?..."
\] \]
I used the default parameters for everything, no instruction template, no character, and manually chose "chat" for the interface mode in oobabooga. I tried several conversations from a fresh start of [server.py](https://server.py), and each is just as bad --- conversations go way off the rails almost immediately.
I'm having trouble believing that the RedPajama-INCITE-7B-Chat model is actually this bad at conversation, so I must be missing something obvious. I apologize if I am, I'm new to all of this. Appreicate any insight you may offer.
Thank you.
​ | 2023-06-30T20:01:39 | https://www.reddit.com/r/LocalLLaMA/comments/14naily/redpajamaincite7bchat_very_bad_at_chat_what_am_i/ | 0x4c4f5645 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14naily | false | null | t3_14naily | /r/LocalLLaMA/comments/14naily/redpajamaincite7bchat_very_bad_at_chat_what_am_i/ | false | false | self | 5 | null |
Using Local LLMs for things besides chat? | 6 | I've been playing around with ggml models on my laptop for a couple of weeks now, mostly using programs like Kobold, TavernAI, and Faraday for the front end (have tried Oobabooga, but haven't been able to get it to work yet). That's been entertaining, but they all seem to be geared towards building a character and chatting. If I want to do something more substantive, are there better options? I'm thinking things like help rewriting an email or writing a cover letter to match a job posting. Would I be looking for different models, different GUIs, or both? | 2023-06-30T20:28:33 | https://www.reddit.com/r/LocalLLaMA/comments/14nb6t8/using_local_llms_for_things_besides_chat/ | mlaps21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nb6t8 | false | null | t3_14nb6t8 | /r/LocalLLaMA/comments/14nb6t8/using_local_llms_for_things_besides_chat/ | false | false | self | 6 | null |
Running Llama-65B with moderate context sizes | 1 | I'm having some trouble running inference on Llama-65B for moderate contexts (\~1000 tokens).
I use 4x45GB A40s
I load the model with
model = LlamaForCausalLM.from_pretrained(model_id, load_in_8bit=True, device_map="auto")
I infer with
model.generate(input_ids)
It's able to do the forward pass for small context sizes (<500 tokens). When I try passing in inputs with token sizes > 600, I run into memory issues. The issue persists even when I load the model in 4bit. I've also tried with other large models (Falcon-40B, MPT-30B).
How do you do a forward pass on Llama with larger context sizes?
​
​
​ | 2023-06-30T20:47:50 | https://www.reddit.com/r/LocalLLaMA/comments/14nbo45/running_llama65b_with_moderate_context_sizes/ | Xir0s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nbo45 | false | null | t3_14nbo45 | /r/LocalLLaMA/comments/14nbo45/running_llama65b_with_moderate_context_sizes/ | false | false | self | 1 | null |
[Experimental, PR] Add support to NTK RoPE scaling to exllama. | 35 | 2023-06-30T21:13:30 | https://github.com/turboderp/exllama/pull/118 | panchovix | github.com | 1970-01-01T00:00:00 | 0 | {} | 14ncbp4 | false | null | t3_14ncbp4 | /r/LocalLLaMA/comments/14ncbp4/experimental_pr_add_support_to_ntk_rope_scaling/ | false | false | 35 | {'enabled': False, 'images': [{'id': '7m9YW0ZIrdEqQoxhNf8IXvhGpesvCOG2FErfUywpx7o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/W3c_GZ_aQWsVJqkT5xCykdniU84sfgOu3x1-gw8iWdg.jpg?width=108&crop=smart&auto=webp&s=23ddcf395c37964c8063ced21e3a75fad7904102', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/W3c_GZ_aQWsVJqkT5xCykdniU84sfgOu3x1-gw8iWdg.jpg?width=216&crop=smart&auto=webp&s=d3089b843984dc1e242f8c141ea73a30b33cd39c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/W3c_GZ_aQWsVJqkT5xCykdniU84sfgOu3x1-gw8iWdg.jpg?width=320&crop=smart&auto=webp&s=37ca07abfa5945c593aad6cc77bd3c42d2a91e38', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/W3c_GZ_aQWsVJqkT5xCykdniU84sfgOu3x1-gw8iWdg.jpg?width=640&crop=smart&auto=webp&s=4b40156059c236eaba6f838572a4f7c1679eaf6c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/W3c_GZ_aQWsVJqkT5xCykdniU84sfgOu3x1-gw8iWdg.jpg?width=960&crop=smart&auto=webp&s=038350c69c79193012ff894b4469b3f8062d7834', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/W3c_GZ_aQWsVJqkT5xCykdniU84sfgOu3x1-gw8iWdg.jpg?width=1080&crop=smart&auto=webp&s=9c22da9a7dc62ddfa352b520d9756addeb0cd124', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/W3c_GZ_aQWsVJqkT5xCykdniU84sfgOu3x1-gw8iWdg.jpg?auto=webp&s=017ff04c34cb535cfe8ea2b95117fdf586ffbbfe', 'width': 1200}, 'variants': {}}]} |
||
Had standardized spellings of words not become a thing would LLMs have been possible? | 2 | And how different would LLMs be if they were trained on nonstandard spellings of words? So not just a handful of different spellings for each word, but many more. Would they be able to find a common thread in what was said despite the differences in spellings? | 2023-06-30T21:56:00 | https://www.reddit.com/r/LocalLLaMA/comments/14ndcyd/had_standardized_spellings_of_words_not_become_a/ | Basic_Description_56 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ndcyd | false | null | t3_14ndcyd | /r/LocalLLaMA/comments/14ndcyd/had_standardized_spellings_of_words_not_become_a/ | false | false | self | 2 | null |
Best Model for automatic topic modeling? | 2 | Title. Also interested in the prompt used, if possible. I want to work on a side project and I feel like LLMs would be perfect for ambiguous topic modeling. Particularly, I want to be able to feed ti a short piece of text and ask it to figure out what broad category it fits in. No worries if not, thanks! | 2023-06-30T21:57:40 | https://www.reddit.com/r/LocalLLaMA/comments/14ndegy/best_model_for_automatic_topic_modeling/ | Working_Ideal3808 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ndegy | false | null | t3_14ndegy | /r/LocalLLaMA/comments/14ndegy/best_model_for_automatic_topic_modeling/ | false | false | self | 2 | null |
LocalLLaMA vs ChatGBT vs. Bing A.I vs. Personal GPT? reviews? | 0 | [removed] | 2023-06-30T22:37:33 | https://www.reddit.com/r/LocalLLaMA/comments/14nedkb/localllama_vs_chatgbt_vs_bing_ai_vs_personal_gpt/ | Username9822 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nedkb | false | null | t3_14nedkb | /r/LocalLLaMA/comments/14nedkb/localllama_vs_chatgbt_vs_bing_ai_vs_personal_gpt/ | false | false | default | 0 | null |
Any solution for brower control using a llama? | 6 | I’ve seen a few projects like AutoGPT that can “brower the web@ using Beautiful Soup (e.g curl w/HTML parsing/processing).
Does anyone know of any projects that use a real web brower to search the internet and perform tasks?
I’d like to play with something like this this week and would love to start with a base project if such a thing exists.
I could imagine it using a browser testing framework to do the control and to read the html responses, or do use JS to getInnerText. It would probably take a lot of calls to the LLM to work toward solving a problem, but I think it would be very interesting. | 2023-06-30T22:52:27 | https://www.reddit.com/r/LocalLLaMA/comments/14nepsy/any_solution_for_brower_control_using_a_llama/ | tronathan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nepsy | false | null | t3_14nepsy | /r/LocalLLaMA/comments/14nepsy/any_solution_for_brower_control_using_a_llama/ | false | false | self | 6 | null |
[Hardware] M2 ultra 192gb mac studio inference speeds | 23 | a new dual 4090 set up costs around the same as a m2 ultra 60gpu 192gb mac studio, but it seems like the ultra edges out a dual 4090 set up in running of the larger models simply due to the unified memory? Does anyone have any benchmarks to share? At the moment, m2 ultras run 65b at 5 t/s but a dual 4090 set up runs it at 1-2 t/s, which makes the m2 ultra a significant leader over the dual 4090s!
edit: as other commenters have mentioned, i was misinformed and turns out the m2 ultra is worse at inference than dual 3090s (and therefore single/ dual 4090s) because it is largely doing cpu inference | 2023-06-30T23:12:36 | https://www.reddit.com/r/LocalLLaMA/comments/14nf6tg/hardware_m2_ultra_192gb_mac_studio_inference/ | limpoko | self.LocalLLaMA | 2023-07-01T05:50:10 | 0 | {} | 14nf6tg | false | null | t3_14nf6tg | /r/LocalLLaMA/comments/14nf6tg/hardware_m2_ultra_192gb_mac_studio_inference/ | false | false | self | 23 | null |
Using an LLM just for your own data? | 7 | We love the power that Chatgpt and local LLM give for all kinds of tasks. However, we have a use case where want to just use our own data when it responses via chat. We don’t want it to use any other it my have or been trained on. It may pollute the data we’re going to train it on.
The idea then is to use the most bare-bones smallest model out there. Then feed it gigabytes of our data. We don’t want to do that with ChatGPT because a lot of it is highly proprietary and is under strict regulatory guidelines. We want a local LLM that will stay within our firewall and which will only have internal access no external Internet access.
We will have prompting to ensure that it only uses data we provided no other data never to break character, etc.
Is that a viable solution or possibility?
Do you see any major pitfalls or problems. Even though the local LLM will only use what we provided is the fact that it has so little other data make any difference? | 2023-07-01T02:08:10 | https://www.reddit.com/r/LocalLLaMA/comments/14niv66/using_an_llm_just_for_your_own_data/ | costaman1316 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14niv66 | false | null | t3_14niv66 | /r/LocalLLaMA/comments/14niv66/using_an_llm_just_for_your_own_data/ | false | false | self | 7 | null |
Are there any guides for running models off kaggle? | 2 | I believe you get 20 hours a week on a p100, which sounds pretty nice to me. I only have 8gb vram (laptop 3070), so this would be a nice alternative for me to run models much faster. That said I have no idea how to use notebooks, or stuff like that. If someone could point me in the right direction on how to get something like kobold.cpp going on kaggle that would be much appreciated. | 2023-07-01T02:22:17 | https://www.reddit.com/r/LocalLLaMA/comments/14nj54q/are_there_any_guides_for_running_models_off_kaggle/ | lemon07r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nj54q | false | null | t3_14nj54q | /r/LocalLLaMA/comments/14nj54q/are_there_any_guides_for_running_models_off_kaggle/ | false | false | self | 2 | null |
I don't understand the concept of instruction templates. | 1 | [removed] | 2023-07-01T03:56:05 | https://www.reddit.com/r/LocalLLaMA/comments/14nkx9w/i_dont_understand_the_concept_of_instruction/ | Awethon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nkx9w | false | null | t3_14nkx9w | /r/LocalLLaMA/comments/14nkx9w/i_dont_understand_the_concept_of_instruction/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'b1E8sI-kTet-3YOFKrYAUVQ9ABbay60W7WEBpTM34S8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/sld18DTFrG0vgxxCwlYEGWKS7hjSXmQsKHgNjUEATAk.jpg?width=108&crop=smart&auto=webp&v=enabled&s=955c4b3df67ee12627cea147f344b6f74e87357a', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/sld18DTFrG0vgxxCwlYEGWKS7hjSXmQsKHgNjUEATAk.jpg?width=216&crop=smart&auto=webp&v=enabled&s=6d06f433f5afa9b4fd0cd80e56f8d3733c04c76b', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/sld18DTFrG0vgxxCwlYEGWKS7hjSXmQsKHgNjUEATAk.jpg?width=320&crop=smart&auto=webp&v=enabled&s=49c90eb45b40dbd092a61c6980b6827144530e36', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/sld18DTFrG0vgxxCwlYEGWKS7hjSXmQsKHgNjUEATAk.jpg?auto=webp&v=enabled&s=0d49920e75cb36ccdde772ebc3ce7f3182eb0556', 'width': 512}, 'variants': {}}]} |
Advises/Recommendations for a production-ready model for a single RTX 4090 GPU or two 4090 GPUs | 1 | [removed] | 2023-07-01T06:08:08 | https://www.reddit.com/r/LocalLLaMA/comments/14nnc94/advisesrecommendations_for_a_productionready/ | AltruisticCabinet275 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nnc94 | false | null | t3_14nnc94 | /r/LocalLLaMA/comments/14nnc94/advisesrecommendations_for_a_productionready/ | false | false | default | 1 | null |
Offloading to 1080ti is slower than cpu? | 13 | With a 13b ggml model, I get about 4 tok/second with 0 layers offloaded (cpu is ryzen 3600). However, the more layers I offload the slower it is, and with all 43 models offloaded I only get around 2 tokens per second. I've tried with koboldcpp and llama.cpp, exact same results. Seems to be a issue other people have but haven't found a solution, any suggestions? | 2023-07-01T06:21:18 | https://www.reddit.com/r/LocalLLaMA/comments/14nnkku/offloading_to_1080ti_is_slower_than_cpu/ | pokeuser61 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nnkku | false | null | t3_14nnkku | /r/LocalLLaMA/comments/14nnkku/offloading_to_1080ti_is_slower_than_cpu/ | false | false | self | 13 | null |
Is it just me or SuperHOT merged 4-bit quantized models are massively degraded? | 65 | I’ve tried a bunch of 4-bit GPTQ SuperHOT merged models and all of them with the same outcome - compared to their corresponding original models, the quality of the output is severely degraded. This is very noticeable when asking the model to perform logical tasks, analysing text or formatting answers in a specific way. Basic math problem solving for instance is complete garbage.
Am I the only one noticing? | 2023-07-01T08:55:29 | https://www.reddit.com/r/LocalLLaMA/comments/14nq64d/is_it_just_me_or_superhot_merged_4bit_quantized/ | Thireus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nq64d | false | null | t3_14nq64d | /r/LocalLLaMA/comments/14nq64d/is_it_just_me_or_superhot_merged_4bit_quantized/ | false | false | self | 65 | null |
Best local LLM to train on my DNA? | 32 | I had my whole genome sequenced and I’m working with the data locally. I want to train a LLM on my data so I can create an interface for my DNA. I’m just starting my research on the project. Any suggestions on models and datasets? | 2023-07-01T12:48:18 | https://www.reddit.com/r/LocalLLaMA/comments/14nubba/best_local_llm_to_train_on_my_dna/ | scrumblethebumble | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nubba | false | null | t3_14nubba | /r/LocalLLaMA/comments/14nubba/best_local_llm_to_train_on_my_dna/ | false | false | self | 32 | null |
What qualitative means do you use to evaluate models? | 9 | I'm mainly interested in chat models, so what I do is see how easy it is to get it to act in complete opposition to its given persona, especially one which would be in agreement with any "alignment" present in the model.
I'll also look to see how well I feel it works with different personas, whether there is much of a difference in language and whether it feels appropriate.
Lastly for mega-bonus points, I'll be looking at its capacity for recognising humour, especially wordplay, so being able to catch references from earlier in the conversation, appreciate / generate interesting juxtaposition of ideas, see sarcasm / irony, innuendo, wit, puns etc.
I know it's going to be dependent on the use case, but what is *good* for you and how do you judge it? | 2023-07-01T13:47:36 | https://www.reddit.com/r/LocalLLaMA/comments/14nvl0g/what_qualitative_means_do_you_use_to_evaluate/ | Crypt0Nihilist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nvl0g | false | null | t3_14nvl0g | /r/LocalLLaMA/comments/14nvl0g/what_qualitative_means_do_you_use_to_evaluate/ | false | false | self | 9 | null |
Potentially Good News for Future nVidia Drivers re: memory management | 58 | Most nVidia users are probably aware by now that recent drivers aggressively offload from VRAM to System RAM which has a significantly negative impact on performance.
See previous discussions:
[Major Performance Degradation with nVidia driver 535.98 at larger context sizes](https://www.reddit.com/r/LocalLLaMA/comments/1461d1c/major_performance_degradation_with_nvidia_driver/)
[PSA: New Nvidia driver 536.23 still bad, don't waste your time](https://www.reddit.com/r/LocalLLaMA/comments/1498gdr/psa_new_nvidia_driver_53623_still_bad_dont_waste/)
This has also been a topic of discussion in the Stable Diffusion community, with a large discussion thread on the [Vladmandic GitHub project](https://github.com/vladmandic/automatic/discussions/1285).
Recently, an nVidia driver developer -- pidge2k -- posted in that discussion [asking for more information about the issue](https://github.com/vladmandic/automatic/discussions/1285#discussioncomment-6289562).
In a follow up post, another user comments that the latest driver version, 536.40, still has the issue. [Pidge2k responds](https://github.com/vladmandic/automatic/discussions/1285#discussioncomment-6328116) that:
> This will be addressed in an upcoming NVIDIA display driver update.
So, don't count your chickens before they're hatched, but it at least appears that nVidia is aware of the issue and (hopefully) working on improving / fixing the memory offloading. | 2023-07-01T14:17:23 | https://www.reddit.com/r/LocalLLaMA/comments/14nw8p6/potentially_good_news_for_future_nvidia_drivers/ | GoldenMonkeyPox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nw8p6 | false | null | t3_14nw8p6 | /r/LocalLLaMA/comments/14nw8p6/potentially_good_news_for_future_nvidia_drivers/ | false | false | self | 58 | {'enabled': False, 'images': [{'id': 'KKyR1RlIu7ctTnx3lTPq3wNez7CdWob-WB0LD0e4oWE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3bOFAFm6rQJYtV6ScIsy3YeJAGiAOMHfKPe_P2V3-zU.jpg?width=108&crop=smart&auto=webp&s=f34dd564d6e345b24e26bb9a4cff61a64e151a8b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3bOFAFm6rQJYtV6ScIsy3YeJAGiAOMHfKPe_P2V3-zU.jpg?width=216&crop=smart&auto=webp&s=7ee719b3fafb1c722e40947e5afbb4512eb11fc0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3bOFAFm6rQJYtV6ScIsy3YeJAGiAOMHfKPe_P2V3-zU.jpg?width=320&crop=smart&auto=webp&s=6a73f8e842810180f8b60375db7283524f03ed68', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3bOFAFm6rQJYtV6ScIsy3YeJAGiAOMHfKPe_P2V3-zU.jpg?width=640&crop=smart&auto=webp&s=0432b624abe98851b359f537b3127168531c127e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3bOFAFm6rQJYtV6ScIsy3YeJAGiAOMHfKPe_P2V3-zU.jpg?width=960&crop=smart&auto=webp&s=8f2ee50202527c0e1a11c097c33211b0640afeb7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3bOFAFm6rQJYtV6ScIsy3YeJAGiAOMHfKPe_P2V3-zU.jpg?width=1080&crop=smart&auto=webp&s=5a147010a08c39b97a72c49a10f63d5fcf5e1bcc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3bOFAFm6rQJYtV6ScIsy3YeJAGiAOMHfKPe_P2V3-zU.jpg?auto=webp&s=e93f880d14bf2041d007da1746a11ce8e599ce1c', 'width': 1200}, 'variants': {}}]} |
For summarization, how do open source models compare to dedicated models? | 8 | Title. I want to create a meeting summarization bot incorporating recording, transcription, and summarization. My current idea is
1. Use the speech recognition Python package to instantiate a microphone
2. Use the "listen_in_background" with "whisper" to record the transcription
3. Once the transcription length exceeds a certain length, put the transcript through an LLM asking it to give a bulleted summary. Iterate as needed for the duration of the meeting
4. Once the meeting ends, ask the LLM to de-duplicate the summaries.
Questions:
1. Is my solution valid?
2. Are there better alternatives? I would like the experience of creating this myself. I cannot connect to the internet
3. Would I be better off using a dedicated model like Google Pegasus? | 2023-07-01T14:29:13 | https://www.reddit.com/r/LocalLLaMA/comments/14nwi4e/for_summarization_how_do_open_source_models/ | a_slay_nub | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nwi4e | false | null | t3_14nwi4e | /r/LocalLLaMA/comments/14nwi4e/for_summarization_how_do_open_source_models/ | false | false | self | 8 | null |
Question on optimal settings for ggml model cpu+gpu | 2 | In general, when running cpu + gpu, should settings be adjusted to minimize shared usage on gpu?
The vram has the highest bandwidth and shared ram is slower from what I hear, so I was wondering if that shared ram would be better utilized by the cpu instead of gpu. | 2023-07-01T14:37:19 | https://www.reddit.com/r/LocalLLaMA/comments/14nwope/question_on_optimal_settings_for_ggml_model_cpugpu/ | multiverse_fan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nwope | false | null | t3_14nwope | /r/LocalLLaMA/comments/14nwope/question_on_optimal_settings_for_ggml_model_cpugpu/ | false | false | self | 2 | null |
Coral TPU Dev Board for speech-to-text and nvidia agx as host running LLaMA?? | 10 | Has anyone done something like this?
I'm looking to replace Alexa. I own the hardware and have started putting things together but haven't sorted it all out yet. I have an Audio Classification Model ([keyphrase detector](https://github.com/google-coral/project-keyword-spotter)) running on the coral tpu board next to our echo dot (alexa). That is a bit of a modified, cobbled together, absolute hack --but it's not too bad... I'd say it almost hears better than Alexa at times.
I want to feed that into a xavier agx running LLaMA. I can convert voice to text (roughly, and not terribly), and assuming I can run Google's MDT (Mendel Development Tool) and hook the Coral TPU board up to the xavier as a host --then I'll be able to take speech/audio and turn it into text on the coral tpu board, then deliver that over to llama running on the xavier.
Not sure how I'll go the other way (probably isn't too hard to get the text back over to the google tpu board but getting the coral tpu board to speak will be fun.
Does anyone have pointers for this? I'm finding that there is very, very little out there for these nvidia agx dev-kit machines.
Thanks. | 2023-07-01T15:16:47 | https://www.reddit.com/r/LocalLLaMA/comments/14nxlf1/coral_tpu_dev_board_for_speechtotext_and_nvidia/ | WrongColorPaint | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nxlf1 | false | null | t3_14nxlf1 | /r/LocalLLaMA/comments/14nxlf1/coral_tpu_dev_board_for_speechtotext_and_nvidia/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'HHDmwkGvXb31CNltB2PbZGF71MIPoD92zWxP9RhMd3U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qmNfkovwdNa6pB5gcfGF4Fr-1iuIxCsVbo5F9rOr-Ws.jpg?width=108&crop=smart&auto=webp&s=f3c32fc4c3ef51bd03be95c072c8cd1127bf8171', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qmNfkovwdNa6pB5gcfGF4Fr-1iuIxCsVbo5F9rOr-Ws.jpg?width=216&crop=smart&auto=webp&s=418177ff0fef09a2ff984435fa435fbc80db43e3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qmNfkovwdNa6pB5gcfGF4Fr-1iuIxCsVbo5F9rOr-Ws.jpg?width=320&crop=smart&auto=webp&s=4fe7b689bc764ac68d770b8119acaf358444483f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qmNfkovwdNa6pB5gcfGF4Fr-1iuIxCsVbo5F9rOr-Ws.jpg?width=640&crop=smart&auto=webp&s=635c10f036236b996a4607504429f47a29b05e98', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qmNfkovwdNa6pB5gcfGF4Fr-1iuIxCsVbo5F9rOr-Ws.jpg?width=960&crop=smart&auto=webp&s=9068c00202bca8fda891dc7bec13ad7e9dc22f76', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qmNfkovwdNa6pB5gcfGF4Fr-1iuIxCsVbo5F9rOr-Ws.jpg?width=1080&crop=smart&auto=webp&s=fbc4757729ae3d36af917486c85bf83f8f6d745e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qmNfkovwdNa6pB5gcfGF4Fr-1iuIxCsVbo5F9rOr-Ws.jpg?auto=webp&s=6f9612978f9e578bf26c0c0ff71e4f64b0d71e4b', 'width': 1200}, 'variants': {}}]} |
How to Create Your Own Free Text Generation Endpoints | 13 | There are many great text generation APIs available, but OpenAI's is one of the most popular. The only downside is that you only get 3 months of free usage for it. After that, you're limited to using smaller, less powerful models for building applications.
With this simple tutorial, you can deploy any open source LLM as a free API endpoint using HuggingFace and Gradio. This can act as a drop-in replacement for the OpenAI endpoints for absolutely free.
[A Step-by-Step Guide to Creating Free Endpoints for LLMs](https://awinml.github.io/llm-text-gen-api/)
I believe this method will be helpful to anyone who is experimenting with LLMs. The post contains two examples using Falcon and Vicuna, with the complete code so that you can replicate it easily. It even showcases an example of deploying Vicuna using the GGML format for faster inference.
Your questions and feedback are welcome! | 2023-07-01T16:23:26 | https://www.reddit.com/r/LocalLLaMA/comments/14nz5hk/how_to_create_your_own_free_text_generation/ | vm123313223 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nz5hk | false | null | t3_14nz5hk | /r/LocalLLaMA/comments/14nz5hk/how_to_create_your_own_free_text_generation/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'cXb-d3rUT5Qsobsulm-8cPmbTl0zgxndG9qImLC9kL4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/MADkSkeWa2VC8weim-8zqSW-UT2GzlSQ69-y9-f2x9M.jpg?width=108&crop=smart&auto=webp&s=05c67565d34fb6b54d4950402c33ed234564d3f0', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/MADkSkeWa2VC8weim-8zqSW-UT2GzlSQ69-y9-f2x9M.jpg?width=216&crop=smart&auto=webp&s=2b3fb07944f8d951f653f4b2f79a35eaf0a1c343', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/MADkSkeWa2VC8weim-8zqSW-UT2GzlSQ69-y9-f2x9M.jpg?width=320&crop=smart&auto=webp&s=31fe95f1943d33942a113be7c145f50f7c49d369', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/MADkSkeWa2VC8weim-8zqSW-UT2GzlSQ69-y9-f2x9M.jpg?width=640&crop=smart&auto=webp&s=4241bf8d34604f07fafdcaa8953e24f2cbb4c29c', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/MADkSkeWa2VC8weim-8zqSW-UT2GzlSQ69-y9-f2x9M.jpg?width=960&crop=smart&auto=webp&s=4c65b667aefb1a6d93f8e86adb45669a1d84cf9b', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/MADkSkeWa2VC8weim-8zqSW-UT2GzlSQ69-y9-f2x9M.jpg?width=1080&crop=smart&auto=webp&s=212ddb956ff14afe0a51340a418ee4d2faed43e3', 'width': 1080}], 'source': {'height': 1500, 'url': 'https://external-preview.redd.it/MADkSkeWa2VC8weim-8zqSW-UT2GzlSQ69-y9-f2x9M.jpg?auto=webp&s=ddc208eea75c3b3b1c2dfbb23269e134ac51a143', 'width': 2000}, 'variants': {}}]} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.