title: string (length 1 to 300)
score: int64 (0 to 8.54k)
selftext: string (length 0 to 40k)
created: timestamp[ns]
url: string (length 0 to 780)
author: string (length 3 to 20)
domain: string (length 0 to 82)
edited: timestamp[ns]
gilded: int64 (0 to 2)
gildings: string (7 distinct values)
id: string (length 7)
locked: bool (2 classes)
media: string (length 646 to 1.8k)
name: string (length 10)
permalink: string (length 33 to 82)
spoiler: bool (2 classes)
stickied: bool (2 classes)
thumbnail: string (length 4 to 213)
ups: int64 (0 to 8.54k)
preview: string (length 301 to 5.01k)
What have you done with finetuning and LoRAs?
15
I want to hear about what you've done! What does your fine-tune do? How did you format the data? Which model did you use? What did you use to train it? What prompts did you use to test it?
2023-06-22T20:01:52
https://www.reddit.com/r/LocalLLaMA/comments/14gdckf/what_have_you_done_with_finetuning_and_loras/
AutomataManifold
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14gdckf
false
null
t3_14gdckf
/r/LocalLLaMA/comments/14gdckf/what_have_you_done_with_finetuning_and_loras/
false
false
self
15
null
Optimal setup for larger models on 4090
10
I've been in this space for a few weeks, came over from Stable Diffusion; I'm not a programmer or anything. I tried out llama.cpp and GGML before they had GPU offloading; models worked but very slowly. I have an RTX 4090, so I wanted to use that to get the best local model setup I could. I set up WSL and text-generation-webui, was able to get base LLaMA models working, and thought I was already up against the limit of my VRAM, as 30b would go out of memory before fully loading onto my 4090. I tried to get GPTQ-quantized stuff working with text-generation-webui, but the 4-bit quantized models I've tried always throw errors when loading. Like I said, not a programmer, so I'm thinking I should probably just start with a fresh install rather than trying to untangle this mess I've made. My question is, given the recent changes in GPU offloading, and now hearing about how well exllama performs, I was looking for some beginner advice from some of you veterans. What would you suggest I focus on: llama.cpp, text-generation-webui, or exllama? I can use a terminal and all, but I definitely can't go deep into the weeds with all these different dependencies. Mostly I'm just trying to figure out the best and simplest strategy to get the most (running the largest model possible) out of my 4090. This sub has been an amazing resource, but I haven't really been able to get a clear picture of the group's opinion on this at this point. It all unfolds quickly and maybe these answers will be outdated in two weeks again. Thanks guys, appreciate all the info; without this place I definitely wouldn't have gotten any of this up and running.
2023-06-22T20:19:15
https://www.reddit.com/r/LocalLLaMA/comments/14gdsxe/optimal_setup_for_larger_models_on_4090/
timtulloch11
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14gdsxe
false
null
t3_14gdsxe
/r/LocalLLaMA/comments/14gdsxe/optimal_setup_for_larger_models_on_4090/
false
false
self
10
null
Performance difference between Linux and Windows
7
I've been performance testing different models and different quantizations (~10 versions) using the llama.cpp command line on Windows 10 and Ubuntu. The latter is 1.5-2x faster in both prompt processing and generation, and I get much more consistent TPS across multiple runs. Interestingly, on Windows the pre-compiled AVX2 release is only using 50% CPU (as reported by Task Manager), while on Linux I get 400% CPU usage in 'top'. I have not tried to compile the exe on Windows yet; could it be a compiler 'issue'? Has anyone experienced similar discrepancies? Edit: I've been using the same command line parameters, but apparently Linux likes -t 4, while Windows requires -t 8 to reach 100% CPU utilization (4-core/8-thread Intel i7). But even with these parameters Windows is ~50% slower.
2023-06-22T20:42:17
https://www.reddit.com/r/LocalLLaMA/comments/14geeud/performance_difference_between_linux_and_windows/
mitirki
self.LocalLLaMA
2023-06-22T21:39:21
0
{}
14geeud
false
null
t3_14geeud
/r/LocalLLaMA/comments/14geeud/performance_difference_between_linux_and_windows/
false
false
self
7
null
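A minimal sketch of the thread-count comparison described in the post above, using the llama-cpp-python bindings instead of the llama.cpp CLI; the model path is a placeholder and n_threads corresponds to the CLI's -t flag.

```python
# Hedged sketch: time generation at different thread counts (maps to llama.cpp's -t flag).
import time
from llama_cpp import Llama

for threads in (4, 8):
    llm = Llama(model_path="models/7B/ggml-model-q4_0.bin",  # placeholder path
                n_ctx=512, n_threads=threads)
    start = time.time()
    out = llm("Write one sentence about llamas.", max_tokens=64)
    n_tokens = out["usage"]["completion_tokens"]
    print(f"-t {threads}: {n_tokens / (time.time() - start):.2f} tokens/s")
```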
Idea for Empowering Characters: Dynamic Adaptation, Autonomous Growth, and Enhanced Alignment in Long Conversations
33
Re-post of a GitHub discussion to get more thoughts: https://github.com/oobabooga/text-generation-webui/discussions/2811#top Good evening/morning! The idea is to enable LLMs/characters to dynamically modify variables in their base prompt/profile, enabling better alignment with decisions and changes throughout long conversations that may conflict with the base prompt, and also allowing characters to evolve and grow with autonomy and greater depth. Certain models perform very well at following long-term stories during extended chats with superbooga, but they still fight or resist major base-prompt-conflicting changes/decisions that fall out of context after many messages. Manually editing the changes into the base prompt after significant decisions or changes mid-conversation works fantastically, enabling characters to adapt and ensuring they remain consistent and can evolve throughout the story/chat - but it's manual, and I know it's possible to let the model define its own major changes. I tried one approach, which was prompting the LLM to indicate what changes it would like to make to its base prompt, but it failed miserably. To implement this in a way that might work, I recall seeing code on GitHub or Hugging Face (though I can't find it now) that shows the layers and weights of different words when you enter a prompt, with probabilities for various next tokens. This code could potentially be adapted to help an LLM/character evolve itself in real time by applying those probabilities to variables in the character's base prompt/character card. Here's how it could work: when building a character card or base prompt, users could assign certain tokens, words, or entire phrases as variables that define the character, situation, or anything in the base prompt that could shift. These variables could change based on either user-defined values or predictions made by the LLM from token probabilities. Ex: [Personality: Fun] is probably static, so not a variable, but [Friends with X] can easily be variable -- Character is { ... } with Character B... Predict the probability of { ... } in context by feeding the full context into the LLM, with a snippet of base context surrounding the variable, to get the highest-probability value for the variable within the current full context. Variables could even be assigned the top 1-5 probabilities (ex: 1x {Hates}, or 2x {Hates/Dislikes}, or feels 3x {neutral, friendly, unsure}) to give the LLM the autonomy to add even more depth to itself and make major changes based on the context of the conversation. The simplest trigger would probably be either a manual button that runs automatic prediction of alternatives for all variables when clicked, or an automatic variable evaluation after a certain number of messages. Or, a semantic match or dissimilarity detection between a variable's currently assigned values and the highest-probability prediction of the token that fills a blank variable in short or full context. Overall, the idea is to have the LLM/character adapt and evolve based on the context of the conversation. Manually editing the base prompt mid-conversation does this effectively.
By enabling the user to define variables in a base prompt, and empowering the LLM itself to determine what those variables should be in context, I think this would make for much more dynamic characters that stay on track more effectively than LLMs that have strayed from their base prompt - allowing them to better align with their decisions and grow throughout the story/chat automatically, which would be a lot of fun and potentially very interesting.
2023-06-22T20:56:08
https://www.reddit.com/r/LocalLLaMA/comments/14geruy/idea_for_empowering_characters_dynamic_adaptation/
skatardude10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14geruy
false
null
t3_14geruy
/r/LocalLLaMA/comments/14geruy/idea_for_empowering_characters_dynamic_adaptation/
false
false
self
33
{'enabled': False, 'images': [{'id': 'x8eW8jbsEve-Ntuxd1sw6LhqXjzbxyjJFE59ExCJbnI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wI-0VkMSRlcKtnMKMpQ6LImkUESBtvxqMq4flVchV9o.jpg?width=108&crop=smart&auto=webp&s=93ef6663db8d4dae8561139b14c9bce9cfc581e7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wI-0VkMSRlcKtnMKMpQ6LImkUESBtvxqMq4flVchV9o.jpg?width=216&crop=smart&auto=webp&s=f2a4a2be9689f3db5e59db5c604727ee754c3d26', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wI-0VkMSRlcKtnMKMpQ6LImkUESBtvxqMq4flVchV9o.jpg?width=320&crop=smart&auto=webp&s=db37e3d4e7feb4abe1443b0df11208bce25dc53e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wI-0VkMSRlcKtnMKMpQ6LImkUESBtvxqMq4flVchV9o.jpg?width=640&crop=smart&auto=webp&s=16b906da6770d19d1a3c9cfc96a9a424cc4cce43', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wI-0VkMSRlcKtnMKMpQ6LImkUESBtvxqMq4flVchV9o.jpg?width=960&crop=smart&auto=webp&s=03b13e7fe4d640745fd36c53011af2424aaadada', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wI-0VkMSRlcKtnMKMpQ6LImkUESBtvxqMq4flVchV9o.jpg?width=1080&crop=smart&auto=webp&s=753ee7f6688d7001366669c10dfafaf0cdf94e79', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wI-0VkMSRlcKtnMKMpQ6LImkUESBtvxqMq4flVchV9o.jpg?auto=webp&s=dc62c44afcd30222deebc31d582d4b70cd55f8cc', 'width': 1200}, 'variants': {}}]}
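A minimal sketch of the variable-filling idea described in the post above, assuming a Hugging Face causal LM: each candidate value for a character-card variable is scored by its average log-likelihood in the current conversation context, and the best-scoring value is written back into the card. The model name, context string, and candidate list are placeholders.

```python
# Hedged sketch: pick a character-card variable's value by likelihood in context.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "huggyllama/llama-7b"  # placeholder model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float16, device_map="auto")

context = "...the last few thousand tokens of the chat..."  # placeholder context window
template = context + "\nRight now, toward Character B, this character feels {value}."
candidates = ["friendly", "neutral", "unsure", "hostile"]

def avg_logprob(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood per token
    return -loss.item()

best = max(candidates, key=lambda v: avg_logprob(template.format(value=v)))
print("variable <feeling_toward_B> ->", best)  # write this back into the character card
```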
List of different LLM Leaderboards
1
[removed]
2023-06-22T21:24:14
https://www.reddit.com/r/LocalLLaMA/comments/14gfihj/list_of_different_llm_leaderboards/
div3rse_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14gfihj
false
null
t3_14gfihj
/r/LocalLLaMA/comments/14gfihj/list_of_different_llm_leaderboards/
false
false
default
1
null
How do you use superbooga extension for oobabooga?
1
[removed]
2023-06-22T22:09:30
https://www.reddit.com/r/LocalLLaMA/comments/14ggnll/how_do_you_use_superbooga_extension_for_oobabooga/
chibop1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14ggnll
false
null
t3_14ggnll
/r/LocalLLaMA/comments/14ggnll/how_do_you_use_superbooga_extension_for_oobabooga/
false
false
default
1
null
Does ooba booga support k-quant methods now?
0
[removed]
2023-06-22T23:21:07
https://www.reddit.com/r/LocalLLaMA/comments/14gidda/do_ooba_booga_support_kquant_methods_now/
Vanilla_Vampi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14gidda
false
null
t3_14gidda
/r/LocalLLaMA/comments/14gidda/do_ooba_booga_support_kquant_methods_now/
false
false
default
0
null
Voice-interactive "Satan" character using whisper + llama + glue
16
I've always wanted to have a verbal, out-loud conversation with a deity, so I banged out a conglomeration of whisper.cpp + llama.cpp running Wizard-Vicuna-7B-Uncensored-GGML with llama executing as a piped subprocess, and added prompting to the startup to create a countenance of the character Satan from Milton's Paradise Lost. The speech synth is just the native OSX TTS with a 3rd party voice module and a pitch shift. I'd love to get a more expressive AI-based speech synth but I haven't found one that's even remotely close enough to realtime to be useful for a realtime verbal conversation. Demo: [https://www.youtube.com/watch?v=i43qTj\_oLZE](https://www.youtube.com/watch?v=i43qTj_oLZE) Next steps will be to try to push whisper into a piped subprocess so it doesn't have iterative restart costs, and see if the voice can be improved. I'm hoping for 2 things from Llama that I haven't been able to figure out: 1) How to make it use the ANE hardware in addition to just CPU+GPU 2) How to prevent it from eventually losing its mind and locking up. There's something fishy wrong with the llama call; sometimes it bogs down for like 30 seconds or so and then resumes, and other times it crawls inward and never comes back. This is going to wind up driving an animatronic Satan character in an arcade fortune teller game (with a sweet old cabinet I salvaged from Casa Bonita).
2023-06-23T00:07:12
https://www.reddit.com/r/LocalLLaMA/comments/14gje7z/voiceinteractive_satan_character_using_whisper/
triggur
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14gje7z
false
null
t3_14gje7z
/r/LocalLLaMA/comments/14gje7z/voiceinteractive_satan_character_using_whisper/
false
false
self
16
{'enabled': False, 'images': [{'id': 'zjgsL7YLner4e-C6PGBlSYAacYn32oYoEqAyHrokfJ8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ko1GtVs1x3JhjwVj0Ta0PCjQiGRZK4IYe7YGwwuqpz0.jpg?width=108&crop=smart&auto=webp&s=1a490e322b5b8246f4e5b338a4366328ca24ef28', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/ko1GtVs1x3JhjwVj0Ta0PCjQiGRZK4IYe7YGwwuqpz0.jpg?width=216&crop=smart&auto=webp&s=28bf8cdcb8bbf8cb0caa028519e41f086e1d9bcf', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/ko1GtVs1x3JhjwVj0Ta0PCjQiGRZK4IYe7YGwwuqpz0.jpg?width=320&crop=smart&auto=webp&s=cf67ed9e51809f36e8aabcc2ae0dbdb8632e2f27', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/ko1GtVs1x3JhjwVj0Ta0PCjQiGRZK4IYe7YGwwuqpz0.jpg?auto=webp&s=42c07dcc2ffd92f53ba863b169fab89ea6c201a7', 'width': 480}, 'variants': {}}]}
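A minimal sketch of the "llama.cpp as a subprocess" glue described in the post above, reduced to a one-shot call per prompt; the real project keeps the process alive as a pipe to avoid reloading the weights, and adds whisper.cpp for speech input and the macOS TTS for output. The binary and model paths are placeholders.

```python
# Hedged sketch: one-shot call to the llama.cpp `main` binary per prompt.
import subprocess

MODEL = "models/Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_0.bin"  # placeholder path

def ask_llama(prompt: str) -> str:
    result = subprocess.run(
        ["./main", "-m", MODEL, "-p", prompt, "-n", "128", "--temp", "0.7"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout  # includes the echoed prompt; real glue would strip it before TTS

print(ask_llama("Speak, fallen one: what do the stars hold for me tonight?"))
```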
Processing a long document using superbooga with oobabooga?
1
[removed]
2023-06-23T00:07:30
https://www.reddit.com/r/LocalLLaMA/comments/14gjegv/processing_a_long_document_using_superbooga_with/
chibop1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14gjegv
false
null
t3_14gjegv
/r/LocalLLaMA/comments/14gjegv/processing_a_long_document_using_superbooga_with/
false
false
default
1
null
I have multiple doubts about k-quant models and their 14 variations
27
What are the differences between the 14 variations of this model? [https://huggingface.co/TheBloke/WizardLM-13B-V1.0-Uncensored-GGML/tree/main](https://huggingface.co/TheBloke/WizardLM-13B-V1.0-Uncensored-GGML/tree/main) I understand the differences in RAM usage, but what about the "S, M and L" variations? Do they refer to "Small, Medium and Large"? Some variations of the model have all 3 sub-variations, some 2, and some none; why? What are the differences between the quant methods q2, q3, q4, q5 and q6? There are also q4_1 and q5_1; what about them? To load layers on my GPU, is it enough to move the n-gpu-layers slider in ooba, or do I have to add some argument or other configuration? I have 12 GB of VRAM; what's the formula to calculate how many layers I can load on the GPU? With 40 layers (apparently) loaded I'm getting 1.5 tokens per second; outside of the "copy" graph in the resource manager my GPU doesn't seem to be doing much, and my VRAM usage is super low, not even 1 GB. Am I doing something wrong? I'm using the q4_0 variation, but I'm guessing that is not a k-quant model. Which of the 14 variations should I use if I have 12 GB of VRAM and 32 GB of RAM? Thank you.
2023-06-23T00:34:24
https://www.reddit.com/r/LocalLLaMA/comments/14gjz8h/i_have_multiple_doubts_about_kquant_models_and/
Vanilla_Vampi
self.LocalLLaMA
2023-06-23T00:38:45
0
{}
14gjz8h
false
null
t3_14gjz8h
/r/LocalLLaMA/comments/14gjz8h/i_have_multiple_doubts_about_kquant_models_and/
false
false
self
27
{'enabled': False, 'images': [{'id': 'm3_07kJUZO3tOi-61pFhrAVbFrhB2nEsO62ZU6Ok3tg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3rIYvuPeS7nzYyaiUSpSaNRtX15AsxUuPQb5Kn4Nfpo.jpg?width=108&crop=smart&auto=webp&s=f083d887c923399f9048b7f5058aa584821e6439', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3rIYvuPeS7nzYyaiUSpSaNRtX15AsxUuPQb5Kn4Nfpo.jpg?width=216&crop=smart&auto=webp&s=7787009fe57050d222f00b50db7b5d1ec791f23d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3rIYvuPeS7nzYyaiUSpSaNRtX15AsxUuPQb5Kn4Nfpo.jpg?width=320&crop=smart&auto=webp&s=44f9a50edde8bb8eecc43db5140d096869129b23', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3rIYvuPeS7nzYyaiUSpSaNRtX15AsxUuPQb5Kn4Nfpo.jpg?width=640&crop=smart&auto=webp&s=bacdf28ac2d330026817e28620823c27f186ffa0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3rIYvuPeS7nzYyaiUSpSaNRtX15AsxUuPQb5Kn4Nfpo.jpg?width=960&crop=smart&auto=webp&s=e383534a2d1da16219ca09dcb005fcb008f84bc8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3rIYvuPeS7nzYyaiUSpSaNRtX15AsxUuPQb5Kn4Nfpo.jpg?width=1080&crop=smart&auto=webp&s=b837c5db70fca79ec69ee142f66e236befb8b6d7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3rIYvuPeS7nzYyaiUSpSaNRtX15AsxUuPQb5Kn4Nfpo.jpg?auto=webp&s=8da1d23d736bac1a9a81e40b5190702cfdb4095c', 'width': 1200}, 'variants': {}}]}
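On the "how many layers can I load on the GPU" question above: there is no exact formula, but a common rough heuristic is to divide the GGML file size by the layer count and see how many of those chunks fit in VRAM after reserving headroom for the KV cache and scratch buffers. A back-of-the-envelope sketch of that estimate; the 2 GB reserve and the example numbers are assumptions, not measurements.

```python
# Rough heuristic only: assumes every layer is the same size and ignores that the
# KV cache grows with context length, so treat the result as a starting point.
def estimate_gpu_layers(model_file_gb: float, total_layers: int,
                        vram_gb: float, reserve_gb: float = 2.0) -> int:
    per_layer_gb = model_file_gb / total_layers   # average size per layer on disk
    usable_gb = max(vram_gb - reserve_gb, 0.0)    # leave room for KV cache / buffers
    return min(total_layers, int(usable_gb / per_layer_gb))

# Example: a ~7.4 GB 13B q4_0 file with 40 layers on a 12 GB card
print(estimate_gpu_layers(7.4, 40, 12.0))  # -> 40, i.e. the whole model fits with headroom
```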
How to finetune a LoRA with Data parallelism?
5
I tried fine-tuning a QLoRA on a 13b model using two 3090s at 4 bits, but it seems like the single model is split across both GPUs and each GPU takes turns being used during fine-tuning. This is not an efficient use of the GPUs. Since a 13b model using QLoRA can easily fit into a single 3090, I am looking to fine-tune with data parallelism, which will fully utilize both cards, with each card holding a copy of the 13b model. My current code is pretty run of the mill, using `AutoModelForCausalLM`, `BitsAndBytesConfig`, `LoraConfig`, `prepare_model_for_kbit_training`, `get_peft_model`. How can I do this? I briefly tried accelerate but can't figure out how to use it. Is DeepSpeed ZeRO-3 overkill? Are both accelerate and DeepSpeed compatible with 4-bit quantization?
2023-06-23T02:47:09
https://www.reddit.com/r/LocalLLaMA/comments/14gmrm9/how_to_finetune_a_lora_with_data_parallelism/
gptzerozero
self.LocalLLaMA
2023-06-23T03:05:27
0
{}
14gmrm9
false
null
t3_14gmrm9
/r/LocalLLaMA/comments/14gmrm9/how_to_finetune_a_lora_with_data_parallelism/
false
false
self
5
null
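On the data-parallelism question above: with transformers + peft + bitsandbytes, one commonly suggested approach is to pin the whole 4-bit model to each process's own GPU (instead of device_map="auto", which shards a single copy across both cards) and launch one process per GPU with accelerate, so DistributedDataParallel gives every card its own replica. A minimal sketch under those assumptions; the model name and LoRA hyperparameters are placeholders.

```python
# Hedged sketch: QLoRA with one full 4-bit model copy per GPU (data parallelism).
# Launch with something like: accelerate launch --num_processes 2 train_qlora_ddp.py
import os
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "huggyllama/llama-13b"  # placeholder

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Key change: pin the entire model to this process's GPU rather than device_map="auto".
local_rank = int(os.environ.get("LOCAL_RANK", 0))
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map={"": local_rank},
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05, bias="none", task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
# From here a standard transformers Trainer (or an accelerate training loop) will wrap
# the model in DistributedDataParallel when two processes are launched.
```

DeepSpeed ZeRO-3 is aimed at sharding models that do not fit on a single card, so plain DDP as sketched above should be enough when each 3090 can hold the whole 4-bit model.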
Think twice about getting the RTX 4060 ti
9
There are lots of articles lately talking about how it's not so bad, but they all need to be taken with a grain of salt. I was looking forward to upgrading to it, but this article has me thinking twice. [https://www.tweaktown.com/news/92003/nintendo-switch-emulation-team-at-yuzu-calls-nvidias-geforce-rtx-4060-ti-serious-downgrade/index.html](https://www.tweaktown.com/news/92003/nintendo-switch-emulation-team-at-yuzu-calls-nvidias-geforce-rtx-4060-ti-serious-downgrade/index.html)
2023-06-23T03:26:25
https://www.reddit.com/r/LocalLLaMA/comments/14gnkfw/think_twice_about_getting_the_rtx_4060_ti/
Andvig
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14gnkfw
false
null
t3_14gnkfw
/r/LocalLLaMA/comments/14gnkfw/think_twice_about_getting_the_rtx_4060_ti/
false
false
self
9
{'enabled': False, 'images': [{'id': 'mj9evlJfrw51SiQRM0CAMClb0gnOut9ofOfrbo4aSec', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/IVScPgOZNf_3ItdPXcmU4nFk3oQ64zl60GBEMaMPsF4.jpg?width=108&crop=smart&auto=webp&s=4ae8be09e6ac37e7d0b513d3b9c62e48ee989663', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/IVScPgOZNf_3ItdPXcmU4nFk3oQ64zl60GBEMaMPsF4.jpg?width=216&crop=smart&auto=webp&s=eb029fb8f97e9ee673163733bbabaca21559aaa2', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/IVScPgOZNf_3ItdPXcmU4nFk3oQ64zl60GBEMaMPsF4.jpg?width=320&crop=smart&auto=webp&s=22f8044cfe3fd17d872be42988d3693984ec6990', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/IVScPgOZNf_3ItdPXcmU4nFk3oQ64zl60GBEMaMPsF4.jpg?width=640&crop=smart&auto=webp&s=fb9d31e6d9079d2d2f22e13847182f7cb3c1c969', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/IVScPgOZNf_3ItdPXcmU4nFk3oQ64zl60GBEMaMPsF4.jpg?width=960&crop=smart&auto=webp&s=b8f462ec9f7fecf7499f85c26e7a386301e16b27', 'width': 960}, {'height': 606, 'url': 'https://external-preview.redd.it/IVScPgOZNf_3ItdPXcmU4nFk3oQ64zl60GBEMaMPsF4.jpg?width=1080&crop=smart&auto=webp&s=ef126bcc4cafe5578055553502666d0f1ab0f4a5', 'width': 1080}], 'source': {'height': 843, 'url': 'https://external-preview.redd.it/IVScPgOZNf_3ItdPXcmU4nFk3oQ64zl60GBEMaMPsF4.jpg?auto=webp&s=704f322bfa4025dfbfd70849fb1d863946947b97', 'width': 1500}, 'variants': {}}]}
Computer case for multiple 3-slot GPU?
1
[removed]
2023-06-23T03:46:36
https://www.reddit.com/r/LocalLLaMA/comments/14gnyl0/computer_case_for_multiple_3slot_gpu/
gptzerozero
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14gnyl0
false
null
t3_14gnyl0
/r/LocalLLaMA/comments/14gnyl0/computer_case_for_multiple_3slot_gpu/
false
false
default
1
null
Do I still need to compile llama-cpp-python with cuBLAS support in the latest version of ooba to offload some layers of the model to my GPU, or is it already included?
2
[removed]
2023-06-23T03:53:58
https://www.reddit.com/r/LocalLLaMA/comments/14go3th/do_i_still_need_to_compile_llamacpppython_with/
Vanilla_Vampi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14go3th
false
null
t3_14go3th
/r/LocalLLaMA/comments/14go3th/do_i_still_need_to_compile_llamacpppython_with/
false
false
default
2
null
Where to start? Should I buy some P40s?
3
I'm looking to start dabbling with AI and LLA, I eventually want to learn how to train my own models. I currently have a homeserver with 2 xeons 2680v2 (20cores, 40threads) and 128gb ram (65gb used by vms). Only issue is, my motherboard doesn't have any pcie-16 slots but rather x8: [https://www.supermicro.com/QuickRefs/motherboard/C606\_602/QRG-X9DRL-3\_i-F.pdf](https://www.supermicro.com/QuickRefs/motherboard/C606_602/QRG-X9DRL-3_i-F.pdf) I'm thinking of picking up a Tesla P40 so I can test some of the larger models and go from there. Any advice/tips for someone who's eager to enter the space (as a hobby)?
2023-06-23T04:47:12
https://www.reddit.com/r/LocalLLaMA/comments/14gp40l/where_to_start_should_i_buy_some_p40s/
fraschm98
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14gp40l
false
null
t3_14gp40l
/r/LocalLLaMA/comments/14gp40l/where_to_start_should_i_buy_some_p40s/
false
false
self
3
null
Preset Arena: final results
70
2023-06-23T04:52:26
https://github.com/oobabooga/oobabooga.github.io/blob/main/arena/results.md
oobabooga4
github.com
1970-01-01T00:00:00
0
{}
14gp7hw
false
null
t3_14gp7hw
/r/LocalLLaMA/comments/14gp7hw/preset_arena_final_results/
false
false
https://b.thumbs.redditm…XY1aIPMVW_2M.jpg
70
{'enabled': False, 'images': [{'id': 'MOb-JpOJMEaXwgFSyklFM9HKsQ1cN1E3PZXIQelcIoo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/buTUgV1lieEhHBz6oLlpjA28OB7lI1qtHi88drYF4Co.jpg?width=108&crop=smart&auto=webp&s=90560c82d4b2fa7cc00db2de2329b55f4cdd3064', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/buTUgV1lieEhHBz6oLlpjA28OB7lI1qtHi88drYF4Co.jpg?width=216&crop=smart&auto=webp&s=2de1b783b8bd1af6d58324829349c19bd436360e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/buTUgV1lieEhHBz6oLlpjA28OB7lI1qtHi88drYF4Co.jpg?width=320&crop=smart&auto=webp&s=f979c2984b4ef83b66369180c81b0ecf28763c57', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/buTUgV1lieEhHBz6oLlpjA28OB7lI1qtHi88drYF4Co.jpg?width=640&crop=smart&auto=webp&s=aa50a85ddbe4ca25d0f5a9567f75c9e9b8250c7a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/buTUgV1lieEhHBz6oLlpjA28OB7lI1qtHi88drYF4Co.jpg?width=960&crop=smart&auto=webp&s=3aebcfc13094e15a58abad6ce48b024149ff236d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/buTUgV1lieEhHBz6oLlpjA28OB7lI1qtHi88drYF4Co.jpg?width=1080&crop=smart&auto=webp&s=5140472f46e020625d6879a6ef4ab0f679f8811f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/buTUgV1lieEhHBz6oLlpjA28OB7lI1qtHi88drYF4Co.jpg?auto=webp&s=c567f36b8d9421de87ea8ff6d67c2a54ca6b93df', 'width': 1200}, 'variants': {}}]}
Can oobabooga digest PDFs?
1
[removed]
2023-06-23T05:42:44
https://www.reddit.com/r/LocalLLaMA/comments/14gq43b/can_oobabooga_digest_pdfs/
fractaldesigner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14gq43b
false
null
t3_14gq43b
/r/LocalLLaMA/comments/14gq43b/can_oobabooga_digest_pdfs/
false
false
default
1
null
How to make two different kinds of artists upset in the same move. - Read comments for more
2
2023-06-23T05:47:42
https://twitter.com/TheAIEve/status/1672111264779370496?s=20
redfoxkiller
twitter.com
1970-01-01T00:00:00
0
{}
14gq7b3
false
{'oembed': {'author_name': 'Eve', 'author_url': 'https://twitter.com/TheAIEve', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Hey everyone!<br><br>This has been in the works for a while... But my first song Legacy is out!<br><br>From what I can tell this is the first AI written and performed by AI!<a href="https://t.co/sGeCgq709I">https://t.co/sGeCgq709I</a><a href="https://twitter.com/hashtag/AI?src=hash&amp;ref_src=twsrc%5Etfw">#AI</a> <a href="https://twitter.com/hashtag/AIgirl?src=hash&amp;ref_src=twsrc%5Etfw">#AIgirl</a> <a href="https://twitter.com/hashtag/AnimeArt?src=hash&amp;ref_src=twsrc%5Etfw">#AnimeArt</a> <a href="https://twitter.com/hashtag/Animegirl?src=hash&amp;ref_src=twsrc%5Etfw">#Animegirl</a> <a href="https://twitter.com/hashtag/mangaart?src=hash&amp;ref_src=twsrc%5Etfw">#mangaart</a> <a href="https://twitter.com/hashtag/Mangagirl?src=hash&amp;ref_src=twsrc%5Etfw">#Mangagirl</a> <a href="https://twitter.com/hashtag/AIArtwork?src=hash&amp;ref_src=twsrc%5Etfw">#AIArtwork</a> <a href="https://twitter.com/hashtag/aiartist?src=hash&amp;ref_src=twsrc%5Etfw">#aiartist</a> <a href="https://twitter.com/hashtag/AI%E3%82%A4%E3%83%A9%E3%82%B9%E3%83%88?src=hash&amp;ref_src=twsrc%5Etfw">#AIイラスト</a>…</p>&mdash; Eve (@TheAIEve) <a href="https://twitter.com/TheAIEve/status/1672111264779370496?ref_src=twsrc%5Etfw">June 23, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/TheAIEve/status/1672111264779370496', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_14gq7b3
/r/LocalLLaMA/comments/14gq7b3/how_to_make_two_different_kinds_of_artists_upset/
false
false
https://b.thumbs.redditm…aKOBvbim5qow.jpg
2
{'enabled': False, 'images': [{'id': '9hQn1mABtAOn5OlKK9BhuE6VxKJglkC4TaiKbJLAqMo', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/l-ip_hQghe4_XxWOt8hdlhBWWzZZ90ahErcVwc3T-1Q.jpg?width=108&crop=smart&auto=webp&s=01d9d162cc2de9617bcc83b21b76beecbac7fe0d', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/l-ip_hQghe4_XxWOt8hdlhBWWzZZ90ahErcVwc3T-1Q.jpg?auto=webp&s=6410a334b58eec2f97b2bb0a935161c99ff05233', 'width': 140}, 'variants': {}}]}
Seeking Rankings and Regular Updates for LLM (Uncensored) Models
1
[removed]
2023-06-23T06:23:57
https://www.reddit.com/r/LocalLLaMA/comments/14gqudm/seeking_rankings_and_regular_updates_for_llm/
TurbulentDelivery799
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14gqudm
false
null
t3_14gqudm
/r/LocalLLaMA/comments/14gqudm/seeking_rankings_and_regular_updates_for_llm/
false
false
default
1
null
Seeking Blogs/Websites with Detailed Hardware Requirements, including GPU, for LLM Model Training, Fine-tuning, and Inference
1
[removed]
2023-06-23T06:31:09
https://www.reddit.com/r/LocalLLaMA/comments/14gqyxp/seeking_blogswebsites_with_detailed_hardware/
TurbulentDelivery799
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14gqyxp
false
null
t3_14gqyxp
/r/LocalLLaMA/comments/14gqyxp/seeking_blogswebsites_with_detailed_hardware/
false
false
default
1
null
The end of the Internet as we know it… because of AI
1
[removed]
2023-06-23T06:49:14
https://www.reddit.com/r/LocalLLaMA/comments/14gr9qq/the_end_of_the_internet_as_we_know_it_because_of/
krazzmann
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14gr9qq
false
null
t3_14gr9qq
/r/LocalLLaMA/comments/14gr9qq/the_end_of_the_internet_as_we_know_it_because_of/
false
false
default
1
null
Difference between 2-bit, 4-bit, 8-bit etc LLM models
1
[removed]
2023-06-23T06:53:40
https://www.reddit.com/r/LocalLLaMA/comments/14grck4/difference_between_2bit_4bit_8bit_etc_llm_models/
TurbulentDelivery799
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14grck4
false
null
t3_14grck4
/r/LocalLLaMA/comments/14grck4/difference_between_2bit_4bit_8bit_etc_llm_models/
false
false
default
1
null
I get this error every time I try to load a GPTQ model: safetensors does not contain metadata. Make sure to save your model with the `save_pretrained` method. Defaulting to 'pt' metadata.
1
[removed]
2023-06-23T07:09:37
https://www.reddit.com/r/LocalLLaMA/comments/14grmyg/i_get_this_error_every_time_i_try_to_load_a_gptq/
Vanilla_Vampi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14grmyg
false
null
t3_14grmyg
/r/LocalLLaMA/comments/14grmyg/i_get_this_error_every_time_i_try_to_load_a_gptq/
false
false
default
1
null
AskYoutube - Ask anything with the power of LLMs
1
[removed]
2023-06-23T07:28:09
https://www.reddit.com/r/LocalLLaMA/comments/14gry4k/askyoutube_ask_anything_with_the_power_of_llms/
AskYoutube
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14gry4k
false
null
t3_14gry4k
/r/LocalLLaMA/comments/14gry4k/askyoutube_ask_anything_with_the_power_of_llms/
false
false
default
1
null
How to finetune a model using LoRA in Text Generation Web-UI (Oobabooga)?
1
[removed]
2023-06-23T08:11:47
https://www.reddit.com/r/LocalLLaMA/comments/14gsnys/how_to_finetune_model_using_lora_in_text/
TurbulentDelivery799
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14gsnys
false
null
t3_14gsnys
/r/LocalLLaMA/comments/14gsnys/how_to_finetune_model_using_lora_in_text/
false
false
default
1
null
LLM for PDFs
2
[removed]
2023-06-23T08:18:53
https://www.reddit.com/r/LocalLLaMA/comments/14gsrzk/llm_for_pdfs/
BfrogPrice2116
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14gsrzk
false
null
t3_14gsrzk
/r/LocalLLaMA/comments/14gsrzk/llm_for_pdfs/
false
false
default
2
null
Can anyone hazard a guess at the model behind https://beta.character.ai/
23
I tried this a few days ago and it is creepily good. It seems to have an extremely long token length and the way it got my jokes, metaphor and in-jokes that I made up during the chat...like I said, creepily good. In many ways it behaves similarly to something like TheBloke_WizardLM-7B-Uncensored, but with some non-moralising guardrails put in, but *so* much better. Is this by simple virtue of a bigger model? Any ideas what it is likely to be behind the scenes?
2023-06-23T09:24:42
https://www.reddit.com/r/LocalLLaMA/comments/14gtvc8/can_anyone_hazard_a_guess_at_the_model_behind/
Crypt0Nihilist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14gtvc8
false
null
t3_14gtvc8
/r/LocalLLaMA/comments/14gtvc8/can_anyone_hazard_a_guess_at_the_model_behind/
false
false
self
23
null
GPU offloading
14
I'm using Oobabooga with text-generation-webui to run the 65b Guanaco model. I love its generation, though it's quite slow (outputting around 1 token per second). I was trying to speed it up using llama.cpp GPU acceleration, and hit a bit of a wall doing so. I really am clueless about pretty much everything involved, and am slowly learning how everything works using a combination of Reddit, GPT-4, and lots of doing things wrong. I'm confused, however, about using the --n-gpu-layers parameter. I've followed the instructions (successfully, after a lot of roadblocks) at [https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md) to install and set up llama.cpp (using OpenBLAS). But I cannot for the life of me figure out how to actually get it to use the GPU. GPT-4 says to change the flags in the webui.py script to include n-gpu-layers, which I did, and I've tried using the slider in the model loader in the webui, but nothing I do seems to be utilizing my computer's GPU in the slightest. I've confirmed CUDA is up and running, checked drivers, etc. I have no clue at this point what I'm missing. Specs: Nvidia 4090, 13900, 64 gigs RAM. Not sure if this is helpful, but this is the reading from the terminal: 2023-06-23 04:10:10 INFO:Loading TheBloke_guanaco-65B-GGML... 2023-06-23 04:10:10 INFO:llama.cpp weights detected: models\TheBloke_guanaco-65B-GGML\guanaco-65B.ggmlv3.q4_0.bin 2023-06-23 04:10:10 INFO:Cache capacity is 0 bytes llama.cpp: loading model from models\TheBloke_guanaco-65B-GGML\guanaco-65B.ggmlv3.q4_0.bin llama_model_load_internal: format = ggjt v3 (latest) llama_model_load_internal: n_vocab = 32000 llama_model_load_internal: n_ctx = 2048 llama_model_load_internal: n_embd = 8192 llama_model_load_internal: n_mult = 256 llama_model_load_internal: n_head = 64 llama_model_load_internal: n_layer = 80 llama_model_load_internal: n_rot = 128 llama_model_load_internal: ftype = 2 (mostly Q4_0) llama_model_load_internal: n_ff = 22016 llama_model_load_internal: n_parts = 1 llama_model_load_internal: model size = 65B llama_model_load_internal: ggml ctx size = 0.18 MB llama_model_load_internal: mem required = 38610.46 MB (+ 5120.00 MB per state) .................................................................................................... llama_init_from_file: kv self size = 5120.00 MB AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 | 2023-06-23 04:10:12 INFO:Loaded the model in 1.84 seconds. Output generated in 55.99 seconds (0.86 tokens/s, 48 tokens, context 48, seed 1633853644) Any input is appreciated. I really have no idea what I'm doing, but I'm very fascinated by this and am learning way more about the backend side of things than I ever thought I would need. -------------------------------------------------------------------------------------------------------------------------------------------- Update: I got it working! The problem appears to be that version 12.1 of CUDA is required to successfully complete the pip install, and the environment in Ooba has 11.7 built in. I was able to set the path in the environment *temporarily* to my 12.1 file, and it allowed me to complete the build.
I am trying to figure out how to permanently change it in the build_env_setup file, but GPT-4's suggestions did not work, so I'm going to have to keep poking at it blindly. The GPU is now putting some time in (not much, but it's enough to double my output speed, which is huge). 2023-06-24 00:25:53 INFO:Cache capacity is 0 bytes llama.cpp: loading model from models\TheBloke_guanaco-65B-GGML\guanaco-65B.ggmlv3.q4_0.bin llama_model_load_internal: format = ggjt v3 (latest) llama_model_load_internal: n_vocab = 32000 llama_model_load_internal: n_ctx = 2048 llama_model_load_internal: n_embd = 8192 llama_model_load_internal: n_mult = 256 llama_model_load_internal: n_head = 64 llama_model_load_internal: n_layer = 80 llama_model_load_internal: n_rot = 128 llama_model_load_internal: ftype = 2 (mostly Q4_0) llama_model_load_internal: n_ff = 22016 llama_model_load_internal: n_parts = 1 llama_model_load_internal: model size = 65B llama_model_load_internal: ggml ctx size = 0.18 MB llama_model_load_internal: using CUDA for GPU acceleration llama_model_load_internal: mem required = 20369.33 MB (+ 5120.00 MB per state) llama_model_load_internal: allocating batch_size x 1 MB = 512 MB VRAM for the scratch buffer llama_model_load_internal: offloading 42 repeating layers to GPU llama_model_load_internal: offloaded 42/83 layers to GPU llama_model_load_internal: total VRAM used: 18754 MB .................................................................................................... llama_init_from_file: kv self size = 5120.00 MB AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 | 2023-06-24 00:26:00 INFO:Loaded the model in 7.00 seconds. Output generated in 95.08 seconds (1.84 tokens/s, 175 tokens, context 54, seed 1179997250) I will soon try to give a short walkthrough of my entire route to successfully using GPU offloading with a 65b model on Windows. I will also still be trying out Linux for my long-term sanity.
2023-06-23T11:32:02
https://www.reddit.com/r/LocalLLaMA/comments/14gw5r7/gpu_offloading/
bottlestuck
self.LocalLLaMA
2023-06-24T07:36:37
0
{}
14gw5r7
false
null
t3_14gw5r7
/r/LocalLLaMA/comments/14gw5r7/gpu_offloading/
false
false
self
14
{'enabled': False, 'images': [{'id': 'Zm_jcWoxlpJvbGSoj1H5mpOl8UMi0rfJYkgUHwzSCvA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ch_vXF_DYorzRst-5BH9bUaUwrwJMPA9u5QUYsVP7LA.jpg?width=108&crop=smart&auto=webp&s=f73ad80d347c8f6a5bbffdf219c5298e691e5ac5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Ch_vXF_DYorzRst-5BH9bUaUwrwJMPA9u5QUYsVP7LA.jpg?width=216&crop=smart&auto=webp&s=5565c8d23c033de0dc4b07a1406af72dd0c40d64', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Ch_vXF_DYorzRst-5BH9bUaUwrwJMPA9u5QUYsVP7LA.jpg?width=320&crop=smart&auto=webp&s=14d66cbefec1f23c5acf16875aa84433c399379a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Ch_vXF_DYorzRst-5BH9bUaUwrwJMPA9u5QUYsVP7LA.jpg?width=640&crop=smart&auto=webp&s=2c2a4819b0de2a736a961036f67bccdb6a44374c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Ch_vXF_DYorzRst-5BH9bUaUwrwJMPA9u5QUYsVP7LA.jpg?width=960&crop=smart&auto=webp&s=b9f879468e99e230944d9b498305e3e93f198736', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Ch_vXF_DYorzRst-5BH9bUaUwrwJMPA9u5QUYsVP7LA.jpg?width=1080&crop=smart&auto=webp&s=3eadf6878312771789993e3d014369403a04cc5d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Ch_vXF_DYorzRst-5BH9bUaUwrwJMPA9u5QUYsVP7LA.jpg?auto=webp&s=254d3950d75438798436199d89e1839771588e25', 'width': 1200}, 'variants': {}}]}
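For comparison outside the webui, the same offload can be exercised with the llama-cpp-python bindings (assuming a cuBLAS-enabled build); n_gpu_layers maps to the --n-gpu-layers flag discussed above, and the model path is a placeholder.

```python
# Hedged sketch: GGML model with partial GPU offload via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="models/TheBloke_guanaco-65B-GGML/guanaco-65B.ggmlv3.q4_0.bin",  # placeholder
    n_ctx=2048,
    n_gpu_layers=42,  # layers moved to VRAM; raise until you approach the card's limit
)
out = llm("Q: What does GPU offloading do? A:", max_tokens=64)
print(out["choices"][0]["text"])
```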
Ideal model for high resources
1
Hi, I'm looking for a model to fine-tune on a domain-specific dataset in order to accomplish a specific task (essentially named entity recognition). I could use up to 8xV100s if need be, and I have access to the LLaMA model weights; the big stipulation is that the data is very sensitive, so I can't allow it to leave our system, hence why I'm here. What model would you suggest? Thanks!
2023-06-23T12:01:24
https://www.reddit.com/r/LocalLLaMA/comments/14gwqu8/ideal_model_for_high_resources/
BlockPretty5695
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14gwqu8
false
null
t3_14gwqu8
/r/LocalLLaMA/comments/14gwqu8/ideal_model_for_high_resources/
false
false
self
1
null
Simple Utility in Go to download HuggingFace Models
65
Main Features: * Recursive file downloading of the model * Multithreaded downloading of large files (LFS) * Simple utility that can be used as a library easily or just as a single binary; all functionality is in one Go file and can be imported in any project * SHA256 checksum verification for LFS downloaded models * Simple file size matching for non-LFS files * Skipping previously downloaded files Github link: [HuggingFaceModelDownloader](https://github.com/bodaay/HuggingFaceModelDownloader) Release files available for all platforms. For a quick run (Linux), e.g. download model TheBloke/vicuna-13b-v1.3.0-GGML and get GGML variant q4_0: `bash <(curl -sSL https://g.bodaay.io/hfd) -m TheBloke/vicuna-13b-v1.3.0-GGML:q4_0` Updates: Added features: * Support for HuggingFace datasets * Support for HuggingFace access tokens * Filter downloads: specific LFS model files can be specified for downloading (useful for GGMLs), saving time and space
2023-06-23T12:18:38
https://www.reddit.com/r/LocalLLaMA/comments/14gx3iu/simple_utility_in_go_to_download_huggingface/
bodaaay
self.LocalLLaMA
2023-06-25T17:38:43
0
{}
14gx3iu
false
null
t3_14gx3iu
/r/LocalLLaMA/comments/14gx3iu/simple_utility_in_go_to_download_huggingface/
false
false
self
65
{'enabled': False, 'images': [{'id': 'L2cK0Y5b-CmcqpZWZG87HvxyrTaC1UX7au3J1IbBmJY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CmeKXcyUXM1fswMF4QP25PmHh-mlQ5r1-kS2608DDmg.jpg?width=108&crop=smart&auto=webp&s=b6036673f8a7545c21f65e72e65586320fc4e658', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CmeKXcyUXM1fswMF4QP25PmHh-mlQ5r1-kS2608DDmg.jpg?width=216&crop=smart&auto=webp&s=dda7a308007d5f3ec209fa1e400adefee59fb538', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CmeKXcyUXM1fswMF4QP25PmHh-mlQ5r1-kS2608DDmg.jpg?width=320&crop=smart&auto=webp&s=14703b00b739a2b57942a8b6fe5571328054798a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CmeKXcyUXM1fswMF4QP25PmHh-mlQ5r1-kS2608DDmg.jpg?width=640&crop=smart&auto=webp&s=8f0c38df6c25b4be5d84e0e9eab22aeb8360df47', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CmeKXcyUXM1fswMF4QP25PmHh-mlQ5r1-kS2608DDmg.jpg?width=960&crop=smart&auto=webp&s=b3d08831fc116e30f417046e2965396a449658f7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CmeKXcyUXM1fswMF4QP25PmHh-mlQ5r1-kS2608DDmg.jpg?width=1080&crop=smart&auto=webp&s=a203086a83fb3f68262abd747d13b7ae2bc34137', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CmeKXcyUXM1fswMF4QP25PmHh-mlQ5r1-kS2608DDmg.jpg?auto=webp&s=a45e78e94756b9c34e291960e46f2ecc77c24b46', 'width': 1200}, 'variants': {}}]}
Quantum state of my Python 10
0
[removed]
2023-06-23T13:05:31
https://www.reddit.com/r/LocalLLaMA/comments/14gy33i/quantum_state_of_my_python_10/
kuroro86
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14gy33i
false
null
t3_14gy33i
/r/LocalLLaMA/comments/14gy33i/quantum_state_of_my_python_10/
false
false
default
0
null
LlamaChat abnormally bad with LLaMa-7B
0
[removed]
2023-06-23T13:13:39
https://www.reddit.com/r/LocalLLaMA/comments/14gy9ph/llamachat_abnormally_bad_with_llama7b/
jf7c8
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14gy9ph
false
null
t3_14gy9ph
/r/LocalLLaMA/comments/14gy9ph/llamachat_abnormally_bad_with_llama7b/
false
false
default
0
null
Fastchat-T5
51
Why is no one talking about Fastchat-T5? It is 3B and performs extremely well.
2023-06-23T13:18:23
https://huggingface.co/lmsys/fastchat-t5-3b-v1.0
Delicious-Farmer-234
huggingface.co
1970-01-01T00:00:00
0
{}
14gydj5
false
null
t3_14gydj5
/r/LocalLLaMA/comments/14gydj5/fastchatt5/
false
false
https://a.thumbs.redditm…dHhp8ukPpsz8.jpg
51
{'enabled': False, 'images': [{'id': 'uZWr6sWp7K0BcVvLsLCGoyThstg4ngpcvI1hWfMVDoM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/o7Q4gmmPgW3jgQ5W-_5S6Ev4G6m4s5vItwWVN2q6nHE.jpg?width=108&crop=smart&auto=webp&s=85c678ba40949308d90f2723bc3b57e4a609693b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/o7Q4gmmPgW3jgQ5W-_5S6Ev4G6m4s5vItwWVN2q6nHE.jpg?width=216&crop=smart&auto=webp&s=ca344d3b6a53f127062be367f9300a47d55634a8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/o7Q4gmmPgW3jgQ5W-_5S6Ev4G6m4s5vItwWVN2q6nHE.jpg?width=320&crop=smart&auto=webp&s=a52bfba56099eda24948c6c9d0528d2904da0eda', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/o7Q4gmmPgW3jgQ5W-_5S6Ev4G6m4s5vItwWVN2q6nHE.jpg?width=640&crop=smart&auto=webp&s=4cde5a42b65fa1a6e8fae0f49d47449590ebe2e4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/o7Q4gmmPgW3jgQ5W-_5S6Ev4G6m4s5vItwWVN2q6nHE.jpg?width=960&crop=smart&auto=webp&s=287d4a8512dd80c9fefa0d812f38887ad3e73ed9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/o7Q4gmmPgW3jgQ5W-_5S6Ev4G6m4s5vItwWVN2q6nHE.jpg?width=1080&crop=smart&auto=webp&s=259289fa79c53ff4a17dbce23a4df51f093577f8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/o7Q4gmmPgW3jgQ5W-_5S6Ev4G6m4s5vItwWVN2q6nHE.jpg?auto=webp&s=b0b5217f1c2939499a6b66e476b1357b2b0f7e06', 'width': 1200}, 'variants': {}}]}
Trouble with hosting models with fastAPI
1
I used the load_quant and load_model functions from the textgen webui to load the WizardLM-30B-GPTQ models into langchain. I am able to prompt the langchain pipeline to get outputs. But, I tried loading this pipeline and using it through a fastAPI endpoint, hoping to send the prompt as a POST request, but here, I get a CUDA OUT OF MEMORY error. Any reason for this?
2023-06-23T13:18:38
https://www.reddit.com/r/LocalLLaMA/comments/14gydpv/trouble_with_hosting_models_with_fastapi/
CmplxQ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14gydpv
false
null
t3_14gydpv
/r/LocalLLaMA/comments/14gydpv/trouble_with_hosting_models_with_fastapi/
false
false
self
1
null
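A minimal sketch of serving a generation pipeline behind FastAPI, loading the model once at import time and running generation under torch.inference_mode() so no autograd state accumulates between requests (one common cause of CUDA OOM that only shows up when serving). The small GPT-2 pipeline is just a stand-in for the GPTQ + LangChain pipeline from the post above.

```python
# Hedged sketch: one model instance per process, no gradients kept during generation.
import torch
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
generator = pipeline("text-generation", model="gpt2")  # stand-in for the real pipeline

class PromptRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 128

@app.post("/generate")
def generate(req: PromptRequest):
    with torch.inference_mode():
        out = generator(req.prompt, max_new_tokens=req.max_new_tokens)
    return {"completion": out[0]["generated_text"]}
```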
Benchmarks for performance degradation using quantization
6
Is there a place where I can check the variation in performance of quantization methods applied to different LLMs?
2023-06-23T13:38:05
https://www.reddit.com/r/LocalLLaMA/comments/14gytpt/benchmarks_for_performance_degradarion_using/
themrzmaster
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14gytpt
false
null
t3_14gytpt
/r/LocalLLaMA/comments/14gytpt/benchmarks_for_performance_degradarion_using/
false
false
self
6
null
How to keep models from generating unwanted text in koboldcpp?
1
[removed]
2023-06-23T15:02:15
[deleted]
2023-06-23T15:05:33
0
{}
14h0v2c
false
null
t3_14h0v2c
/r/LocalLLaMA/comments/14h0v2c/how_to_keep_models_from_generating_unwanted_text/
false
false
default
1
null
How does using QLoRAs when running Llama on CPU work?
7
Hello everyone! I have some experience with general programming but I am completely new to the world of LLMs so I am just trying to figure out how things work. As a disclaimer, I'll just say that my goal isn't developing an advanced, highly functional LLM-powered app. I am just trying to set everything up on my very much consumer grade computer to get a feeling for how AI works and what it can be used for, no matter how slowly or imperfectly my local LLM might be running. I am particularly confused about (Q)LoRAs which are, if I understand well, necessary for getting any domain-specific functionality out of a local model like Vicuna-13B, for example. I don't think my GPU is strong enough to handle such an LLM so I guess I'd be running it with my CPU and my 16GB of RAM with llama.cpp If I understood everything well, I can theoretically train a QLoRA using my CPU with PEFT, but it's not worth it as it would be ridiculously slow to do. But I have a cheap alternative of doing it over Google Colab. (And QLoRA seems to be the only option because LoRAs are far more demanding to train.) My question now, I suppose, is this. *How do I "apply" a QLoRA trained in this way when running an LLM in llama.cpp?* I am sorry if this is such a stupid question. I am confused because my intuition tells me that a QLoRA trained on a basic model of, let's say, Vicuna-13B, which may not even be quantized, isn't going to natively work with a Vicuna-13B that's 4-bit quantized, and then converted to the GGML format, but I haven't really been able to find any info on this situation online. Is using a QLoRA with a GGML model even feasible? I thank you in advance for all the help.
2023-06-23T15:40:39
https://www.reddit.com/r/LocalLLaMA/comments/14h1suy/how_does_using_qloras_when_running_llama_on_cpu/
Prothy1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14h1suy
false
null
t3_14h1suy
/r/LocalLLaMA/comments/14h1suy/how_does_using_qloras_when_running_llama_on_cpu/
false
false
self
7
null
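On the question above: a quantized GGML file cannot simply have a PEFT adapter applied on top of it; the usual route is to merge the adapter into the full-precision base weights first and only then convert and quantize with llama.cpp's tooling (llama.cpp also has a --lora flag for converted adapters, but it works best against an f16 base). A minimal sketch of the merge step, with placeholder model and adapter paths.

```python
# Hedged sketch: fold a (Q)LoRA adapter into the base weights before GGML conversion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_name = "lmsys/vicuna-13b-v1.3"        # placeholder base model
adapter_path = "path/to/my-qlora-adapter"  # placeholder adapter directory

base = AutoModelForCausalLM.from_pretrained(base_name, torch_dtype=torch.float16)
merged = PeftModel.from_pretrained(base, adapter_path).merge_and_unload()
merged.save_pretrained("vicuna-13b-merged")
AutoTokenizer.from_pretrained(base_name).save_pretrained("vicuna-13b-merged")
# Afterwards: run the merged folder through llama.cpp's convert + quantize tooling
# to get a 4-bit GGML file that already "contains" the LoRA.
```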
Conversational context ("short term memory" from one prompt to the next) - issues
2
So I am running LLaMaCpp on a machine with 64GB RAM, a 13900K (24 cores/32 threads at up to 5.8GHz), and a 4090. I have tried a number of models so far, and found that 33b models' responses are more or less realtime reading speed, whereas 65b responses do begin to show notable slowdown (I've been pining for more VRAM since my 2080ti, especially my 3080 and 3090, and now, but that's neither here nor there). I have LLaMa compiled to use my GPU, and it's certainly doing so if the fan activity is any indication as it formulates responses. TL;DR: Regardless of model (again, all are 33b or 65b), the LLM appears to consider each prompt in a sort of conversational vacuum; it seems to forget everything in its most recent response (let alone earlier), even the most basic things like the number and names of characters in a story it just told. Is there a particular model, CLI option, or some other configuration tweak I could make that would enable this sort of conversational context/"short term memory" similar to the chatbots popular online? My issue is that, no matter what model I select, even when that model claims to have a specialization in/extra memory for context, the LLM seems to be taking each prompt individually, without relating it conversationally to its own previous response. I am running in interactive mode, so I'm not continually invoking the binary and specifying a prompt on the command line each time. For a stupid example, let's say I asked the LLM to tell me a story about an astronaut visiting Saturn, and it responded with a story in which the spaceship was damaged by rock and dust particles as it transited the rings. If my next prompt were, "Which ring band in particular damaged the spaceship?", the LLM would respond with a description of the ring bands of Saturn, or with a discussion of the likelihood of damage while passing through each of the ring bands, etc. -- its response reflects absolutely no memory of its immediately prior response where it told the story. Sometimes it names characters in such example stories, and if I ask it which characters were in the story it just told, it will respond with a list of completely different names, the number of which doesn't even match the number of characters in the prior story. It's a distinctly different experience than the LLMs I've interacted with (painfully slowly and inefficiently) online, and before I ditch LLaMACpp entirely and start experimenting with OpenAI-type models, I just wondered if anyone could point me in the direction of resources that might help me, or make any suggestions. Unfortunately my experimentation occurs on a separate machine from the one I'm using to type this post, and that machine is currently booted into Windows, or I would be able to provide the exact command line I use to begin interaction with a given model. If there is a particular command line option(s) that governs this type of behavior, please point me in that direction, and/or suggest a CLI invocation entirely. Or, if there is some use in me providing the CLI I'm using to run LLaMa, (EDIT) here it is ("main" is the LLaMa binary from make): ./main -t 24 -ngl 32 -m [model] --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins I am an IT professional with 30 years' experience, quite familiar with Linux, though my C++ is rusty, particularly when it comes to these kinds of data structures. So I'm comfortable digging around in the OS as needed. Any help/information/resources appreciated!
2023-06-23T16:04:07
https://www.reddit.com/r/LocalLLaMA/comments/14h2djk/conversational_context_short_term_memory_from_one/
FrontColonelShirt
self.LocalLLaMA
2023-06-26T00:35:07
0
{}
14h2djk
false
null
t3_14h2djk
/r/LocalLLaMA/comments/14h2djk/conversational_context_short_term_memory_from_one/
false
false
self
2
null
Is there any simple guide for dummies?
1
[removed]
2023-06-23T16:55:01
[deleted]
1970-01-01T00:00:00
0
{}
14h3man
false
null
t3_14h3man
/r/LocalLLaMA/comments/14h3man/is_there_any_simple_guide_for_dummies/
false
false
default
1
null
Inference Speed vs. Quality Hacks?
11
I’m trying to test how to improve the output of an LLM by trading off on compute / inference speed. So far, I’ve played around with Beam Search and tried some best of 16. I’m using Alpaca & Vicuna for my experiments. Are there any other methods that could significantly increase output quality (even at crazy compute cost)? Also, has anyone found any good prompting hacks to get these smaller models to self critique and output in a parseable format? Thanks!
2023-06-23T18:03:44
https://www.reddit.com/r/LocalLLaMA/comments/14h5byn/inference_speed_vs_quality_hacks/
karansdalal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14h5byn
false
null
t3_14h5byn
/r/LocalLLaMA/comments/14h5byn/inference_speed_vs_quality_hacks/
false
false
self
11
null
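A minimal sketch of the two compute-for-quality trades mentioned in the post above, using Hugging Face generate(); the model name is a placeholder and the Alpaca-style prompt is only illustrative.

```python
# Hedged sketch: beam search and best-of-16 sampling with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "huggyllama/llama-7b"  # placeholder for an Alpaca/Vicuna checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float16, device_map="auto")

prompt = "### Instruction:\nExplain beam search in one sentence.\n\n### Response:\n"
inputs = tok(prompt, return_tensors="pt").to(model.device)
prompt_len = inputs["input_ids"].shape[1]

# 1) Beam search: deterministic, cost grows roughly linearly with num_beams.
beams = model.generate(**inputs, max_new_tokens=128, num_beams=8, early_stopping=True)
print(tok.decode(beams[0][prompt_len:], skip_special_tokens=True))

# 2) Best-of-16 sampling: draw 16 candidates, then re-rank them however you like
#    (average log-prob, a reward model, or a second "critique and pick one" prompt).
samples = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7,
                         top_p=0.9, num_return_sequences=16)
candidates = tok.batch_decode(samples[:, prompt_len:], skip_special_tokens=True)
```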
Is anyone using local LLMs for their business?
1
[removed]
2023-06-23T18:31:26
https://www.reddit.com/r/LocalLLaMA/comments/14h5zbz/is_anyone_using_local_llms_for_their_business/
Aggressive-Snow-835
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14h5zbz
false
null
t3_14h5zbz
/r/LocalLLaMA/comments/14h5zbz/is_anyone_using_local_llms_for_their_business/
false
false
default
1
null
What's going on with the Open LLM Leaderboard?
133
2023-06-23T18:33:44
https://huggingface.co/blog/evaluating-mmlu-leaderboard
kryptkpr
huggingface.co
1970-01-01T00:00:00
0
{}
14h61dn
false
null
t3_14h61dn
/r/LocalLLaMA/comments/14h61dn/whats_going_on_with_the_open_llm_leaderboard/
false
false
https://b.thumbs.redditm…LG6CEAEJtnVc.jpg
133
{'enabled': False, 'images': [{'id': 'lBmWtop5Ad8rDXuTsNaDnqyohAefKmRp8Ktjaf129xY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PrWNX_obnGuq0z02X4a9f-BnhuYBtaQkCGgRtO0AbrQ.jpg?width=108&crop=smart&auto=webp&s=59513eabd52b38046305efa897a765a62c81956f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PrWNX_obnGuq0z02X4a9f-BnhuYBtaQkCGgRtO0AbrQ.jpg?width=216&crop=smart&auto=webp&s=770458478a7e558a03c2386f49ef110fa0ec7b89', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PrWNX_obnGuq0z02X4a9f-BnhuYBtaQkCGgRtO0AbrQ.jpg?width=320&crop=smart&auto=webp&s=4231abc9645fbb6e34950b99b8f4b68b7cabccda', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PrWNX_obnGuq0z02X4a9f-BnhuYBtaQkCGgRtO0AbrQ.jpg?width=640&crop=smart&auto=webp&s=f7590160f54819e0fc637174334d76fa99734e32', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PrWNX_obnGuq0z02X4a9f-BnhuYBtaQkCGgRtO0AbrQ.jpg?width=960&crop=smart&auto=webp&s=1375370a5a245f9c1d1e9264e9c96041bbe97000', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PrWNX_obnGuq0z02X4a9f-BnhuYBtaQkCGgRtO0AbrQ.jpg?width=1080&crop=smart&auto=webp&s=fff783f9a1e3a4ff5909ec48ae4f6d506863d0a5', 'width': 1080}], 'source': {'height': 1160, 'url': 'https://external-preview.redd.it/PrWNX_obnGuq0z02X4a9f-BnhuYBtaQkCGgRtO0AbrQ.jpg?auto=webp&s=7fc31d56b42c51d414f0ec24061f95ae5560538b', 'width': 2320}, 'variants': {}}]}
Any idea how ai_spongebob live stream is made?
1
[deleted]
2023-06-23T20:31:13
[deleted]
1970-01-01T00:00:00
0
{}
14h8wy0
false
null
t3_14h8wy0
/r/LocalLLaMA/comments/14h8wy0/any_idea_how_ai_spongebob_live_stream_is_made/
false
false
default
1
null
How to disable the `bitsandbytes` intro message:
2
[removed]
2023-06-23T21:08:12
https://www.reddit.com/r/LocalLLaMA/comments/14h9uja/how_to_disable_the_bitsandbytes_intro_message/
vishalgoklani
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14h9uja
false
null
t3_14h9uja
/r/LocalLLaMA/comments/14h9uja/how_to_disable_the_bitsandbytes_intro_message/
false
false
default
2
null
ExLLama on Oobabooga (for Linux/WSL)
11
Right now this is a question, but I hope to edit this top into a solution at some point. For those not in the know, "ExLlama is an extremely optimized GPTQ backend ("loader") for LLaMA models. It features much lower VRAM usage and much higher speeds due to not relying on unoptimized transformers code." [https://github.com/oobabooga/text-generation-webui/blob/main/docs/ExLlama.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/ExLlama.md) introduces it that way, then gives these instructions (***which I will interrupt like so***): ---- 1. Clone the ExLlama repository into your text-generation-webui/repositories folder. 2. Follow the remaining setup instructions in the official README: [https://github.com/turboderp/exllama#exllama](https://github.com/turboderp/exllama#exllama) ---- ***Let's pause here... so I may have hit my head and forgotten that I did it... but there was already a folder with ExLlama in the repositories folder when I went to do these instructions, and git clone cried like a baby. So then I renamed the original folder, did the clone, and carried on with my life by going here:*** [***https://github.com/turboderp/exllama***](https://github.com/turboderp/exllama) ---- ## Linux/WSL prerequisites `pip install --pre torch --index-url` [`https://download.pytorch.org/whl/nightly/cu118`](https://download.pytorch.org/whl/nightly/cu118) ---- ***So... I installed Oobabooga using the one-click installer, so I assumed I should run the cmd_linux.sh bash file so I had access to pip (because I sure didn't have access to it on my path with my Linux distro). As soon as I ran that I could run pip. Yay, all is right in the world.*** ---- `pip install -r requirements.txt` `git clone` [`https://github.com/turboderp/exllama`](https://github.com/turboderp/exllama) `cd exllama` ---- ***I already cloned the thing, so I just ran the first line above for pip, which brought me to this next line, which I assumed I didn't need to run since I'm not trying to run it directly:*** ---- `python test_benchmark_inference.py -d <path_to_model_files> -p -ppl` ---- ***So I am done! Much rejoicing... except when I launch Oobabooga... it tells me I must still install exllama. (SCREAMS INTO THE VOID.) What am I missing (besides a few brain cells)?*** # EDIT: The answer... for me came from below: I had to run update_linux.sh after doing all the install steps for exllama, before oobabooga would recognize it.
2023-06-23T22:36:23
https://www.reddit.com/r/LocalLLaMA/comments/14hbzcd/exllama_on_oobabooga_for_linuxwsl/
silenceimpaired
self.LocalLLaMA
2023-06-25T20:38:56
0
{}
14hbzcd
false
null
t3_14hbzcd
/r/LocalLLaMA/comments/14hbzcd/exllama_on_oobabooga_for_linuxwsl/
false
false
self
11
{'enabled': False, 'images': [{'id': 'W4TcSsufvUQQ_SMGwjSCeEsKHzmiYip87HP7LpTXhvY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_iuQGXnnQut3GSEp1DMjHPsVn7Aaz6T1vD_b-kLbAjQ.jpg?width=108&crop=smart&auto=webp&s=89d7adf953998f0ed099f5c59fd7e194d5b4d73c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_iuQGXnnQut3GSEp1DMjHPsVn7Aaz6T1vD_b-kLbAjQ.jpg?width=216&crop=smart&auto=webp&s=220fcdaa11ca7cbd637647cea4942e332284960f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_iuQGXnnQut3GSEp1DMjHPsVn7Aaz6T1vD_b-kLbAjQ.jpg?width=320&crop=smart&auto=webp&s=54920dc4ec750e058effb0c8357c86a8b0f23149', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_iuQGXnnQut3GSEp1DMjHPsVn7Aaz6T1vD_b-kLbAjQ.jpg?width=640&crop=smart&auto=webp&s=3066284b62d202e96e73b7c9be2d2598dee067d6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_iuQGXnnQut3GSEp1DMjHPsVn7Aaz6T1vD_b-kLbAjQ.jpg?width=960&crop=smart&auto=webp&s=d10586dcd53896e343af055e8949be1bd966bfd7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_iuQGXnnQut3GSEp1DMjHPsVn7Aaz6T1vD_b-kLbAjQ.jpg?width=1080&crop=smart&auto=webp&s=fba94dc1f24361cdedb7276682775ee09fbc8c56', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_iuQGXnnQut3GSEp1DMjHPsVn7Aaz6T1vD_b-kLbAjQ.jpg?auto=webp&s=779a3e9e3926a4b55d4a6d84f91edf96b195d823', 'width': 1200}, 'variants': {}}]}
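For anyone retracing the steps in the post above, here is a minimal sanity-check sketch in Python. The paths are assumptions based on the default one-click install layout (text-generation-webui/repositories/exllama); adjust them to your own setup before drawing conclusions from it.

```python
# Minimal sanity check: does text-generation-webui see the exllama repo?
# BASE is an assumption - point it at your actual text-generation-webui folder.
from pathlib import Path

BASE = Path.home() / "text-generation-webui"           # assumed install location
EXLLAMA_DIR = BASE / "repositories" / "exllama"

if not EXLLAMA_DIR.is_dir():
    raise SystemExit(f"exllama not found at {EXLLAMA_DIR} - clone it there first")

# The loader imports modules from this folder, so key files should be present.
expected = ["model.py", "requirements.txt"]
missing = [f for f in expected if not (EXLLAMA_DIR / f).exists()]
print("Looks good" if not missing else f"Missing files: {missing}")
```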
Assume I'm an idiot - oogabooga LLaMa.cpp??!
2
Assume I'm an idiot pretending to not be an idiot who is pretending to be an idiot. Help with the following, please? Step by step - I already have oogabooga. I need it to set up llama.cpp - put in the code, errors out. Have visual studio 2022 with desktop development c++. Can someone break this down into itty bitty bite sized steps for me so I can run through them one by one to find out where, exactly, I fucked this up? Much appreciated 👍
2023-06-23T22:53:25
https://www.reddit.com/r/LocalLLaMA/comments/14hcdd9/assume_im_an_idiot_oogabooga_llamacpp/
BasicsOnly
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14hcdd9
false
null
t3_14hcdd9
/r/LocalLLaMA/comments/14hcdd9/assume_im_an_idiot_oogabooga_llamacpp/
false
false
self
2
null
What 7b llm to use
22
I've been deciding which 7B LLM to use. I thought about Vicuna, WizardLM, Wizard-Vicuna, MPT, GPT-J and other LLMs, but I can't decide which one is better. My main use is non-writing instruct tasks like math, coding, and other stuff that involves logical reasoning, and sometimes just chatting. Which one should I use? Note: other roughly-7B models (6B or 6.7B or so) will also work on my device.
2023-06-23T23:55:44
https://www.reddit.com/r/LocalLLaMA/comments/14hdscz/what_7b_llm_to_use/
bot-333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14hdscz
false
null
t3_14hdscz
/r/LocalLLaMA/comments/14hdscz/what_7b_llm_to_use/
false
false
self
22
null
How can I stop the output window from scrolling to the bottom?
0
[removed]
2023-06-24T00:23:37
https://www.reddit.com/r/LocalLLaMA/comments/14heenl/how_can_i_stop_the_output_window_from_scrolling/
zapeggo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14heenl
false
null
t3_14heenl
/r/LocalLLaMA/comments/14heenl/how_can_i_stop_the_output_window_from_scrolling/
false
false
default
0
null
Recommended blasbatchsize for speed
6
Do higher blasbatchsize values mean faster prompt ingestion?
2023-06-24T00:41:55
https://www.reddit.com/r/LocalLLaMA/comments/14hesqt/recommended_blasbatchsize_for_speed/
GOD_HIMSELVES
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14hesqt
false
null
t3_14hesqt
/r/LocalLLaMA/comments/14hesqt/recommended_blasbatchsize_for_speed/
false
false
self
6
null
Gpt4all and avx instructions
0
[removed]
2023-06-24T01:21:02
https://www.reddit.com/r/LocalLLaMA/comments/14hfmjd/gpt4all_and_avx_instructions/
waltercrypto
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14hfmjd
false
null
t3_14hfmjd
/r/LocalLLaMA/comments/14hfmjd/gpt4all_and_avx_instructions/
false
false
default
0
null
Gpt4all
1
[removed]
2023-06-24T03:27:32
https://www.reddit.com/r/LocalLLaMA/comments/14hi5ft/gpt4all/
waltercrypto
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14hi5ft
false
null
t3_14hi5ft
/r/LocalLLaMA/comments/14hi5ft/gpt4all/
false
false
default
1
null
Best local model for documentation on A100.
1
[removed]
2023-06-24T03:50:41
https://www.reddit.com/r/LocalLLaMA/comments/14hil0z/best_local_model_for_documentation_on_a100/
nestordemeure
self.LocalLLaMA
2023-06-24T04:01:02
0
{}
14hil0z
false
null
t3_14hil0z
/r/LocalLLaMA/comments/14hil0z/best_local_model_for_documentation_on_a100/
false
false
default
1
null
Web scraping for personal LLM use (not to go public)
1
[removed]
2023-06-24T07:03:01
https://www.reddit.com/r/LocalLLaMA/comments/14hm1dy/web_scraping_for_personal_llm_use_not_to_go_public/
Medical-Confidence-4
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14hm1dy
false
null
t3_14hm1dy
/r/LocalLLaMA/comments/14hm1dy/web_scraping_for_personal_llm_use_not_to_go_public/
false
false
default
1
null
Rumor: GPT-4 is built from 8 smaller models hooked up
0
[deleted]
2023-06-24T08:13:59
[deleted]
1970-01-01T00:00:00
0
{}
14hn88i
false
null
t3_14hn88i
/r/LocalLLaMA/comments/14hn88i/rumor_gpt4_is_built_from_8_smaller_models_hooked/
false
false
default
0
null
Total noob question about Llama and my GPU
0
[removed]
2023-06-24T08:54:04
https://www.reddit.com/r/LocalLLaMA/comments/14hnvyj/total_noob_question_about_llama_and_my_gpu/
MajaroPro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14hnvyj
false
null
t3_14hnvyj
/r/LocalLLaMA/comments/14hnvyj/total_noob_question_about_llama_and_my_gpu/
false
false
default
0
null
How big of a model can I run on 8GB of Vram?
15
I'm currently running an RTX 2080 Super Card. It has 8GB Vram on it. I've got 16GB of normal Ram. If I can run larger models what settings should I try?
2023-06-24T11:40:10
https://www.reddit.com/r/LocalLLaMA/comments/14hqutm/how_big_of_a_model_can_i_run_on_8gb_of_vram/
TheArchivist314
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14hqutm
false
null
t3_14hqutm
/r/LocalLLaMA/comments/14hqutm/how_big_of_a_model_can_i_run_on_8gb_of_vram/
false
false
self
15
null
What is the best 13b model?
1
[removed]
2023-06-24T13:13:17
https://www.reddit.com/r/LocalLLaMA/comments/14hss6r/what_is_the_best_13b_model/
Asoery
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14hss6r
false
null
t3_14hss6r
/r/LocalLLaMA/comments/14hss6r/what_is_the_best_13b_model/
false
false
default
1
null
Does anyone have a price comparison breakdown of running llms on local hardware vs cloud gpu vs gpt-3 api?
56
I'm curious what the price breakdown (per token?) would be for running LLMs on local hardware vs. cloud GPU vs. the GPT-3 API. I would like to be able to answer a question like: what would the fixed and operational costs be for running at least a GPT-3.5-quality LLM if it was hosted in option A, B or C? And at what scale could investing in hardware be advantageous? If anyone has already done the math I would love to hear their findings. Either way, I'll report back after some analysis. My thinking is to determine price per token, which would be pretty straightforward for the GPT API, and increasingly convoluted for the local and cloud options. But if anyone has a better approach before I start my investigation I would love to hear it.
2023-06-24T14:12:27
https://www.reddit.com/r/LocalLLaMA/comments/14hu3av/does_anyone_have_a_price_comparison_breakdown_of/
trv893
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14hu3av
false
null
t3_14hu3av
/r/LocalLLaMA/comments/14hu3av/does_anyone_have_a_price_comparison_breakdown_of/
false
false
self
56
null
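One way to frame the comparison the post asks for is a simple per-token cost model. The sketch below is only illustrative; every number in it (hardware price, lifetime, power draw, electricity rate, tokens per second, API price) is an assumption you should replace with your own figures.

```python
# Back-of-the-envelope cost-per-token comparison (all numbers are assumptions).
def local_cost_per_1k_tokens(hw_price_usd, lifetime_hours, watts,
                             usd_per_kwh, tokens_per_second):
    """Amortized hardware cost plus electricity, per 1000 generated tokens."""
    hw_per_hour = hw_price_usd / lifetime_hours
    power_per_hour = (watts / 1000.0) * usd_per_kwh
    tokens_per_hour = tokens_per_second * 3600
    return (hw_per_hour + power_per_hour) / tokens_per_hour * 1000

# Example assumptions: $1600 GPU amortized over 3 years, 350 W, $0.15/kWh, 30 tok/s
local = local_cost_per_1k_tokens(1600, 3 * 365 * 24, 350, 0.15, 30)
api = 0.002  # assumed API price per 1K tokens - check current pricing yourself
print(f"local ~${local:.4f} per 1K tokens")
print(f"api   ~${api:.4f} per 1K tokens")
```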
Max Generation Length vs. Context Window?
6
Noob question – what's the difference between the max tokens in the context window and the max number of tokens a model can generate? Specifically referring to models like Alpaca and Vicuna. I think Alpaca has a 512-token context window limit (I understand that this is how much you can pass into the prompt) and Vicuna has 2048. But is there a cap on how many tokens these models can generate? Thanks!
2023-06-24T17:07:10
https://www.reddit.com/r/LocalLLaMA/comments/14hy2js/max_generation_length_vs_context_window/
karansdalal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14hy2js
false
null
t3_14hy2js
/r/LocalLLaMA/comments/14hy2js/max_generation_length_vs_context_window/
false
false
self
6
null
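To make the distinction concrete: with llama-cpp-python (if that is what you are running) the context window and the generation cap are two separate knobs. This is a rough sketch and the model path is a placeholder.

```python
# Sketch with llama-cpp-python: n_ctx is the context window (prompt + output
# must fit inside it), while max_tokens caps how many new tokens are generated.
from llama_cpp import Llama

llm = Llama(model_path="./models/your-model.ggml.q4_0.bin",  # placeholder path
            n_ctx=2048)          # context window size

out = llm("### Instruction: Explain context windows.\n### Response:",
          max_tokens=256)        # generation cap; must leave room inside n_ctx
print(out["choices"][0]["text"])
```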
WizardLM-33B-V1.0-Uncensored
259
Following up on [https://www.reddit.com/r/LocalLLaMA/comments/14e4sw8](https://www.reddit.com/r/LocalLLaMA/comments/14e4sw8/wizardlm13bv10uncensored/?utm_source=share&utm_medium=web2x&context=3) Today I released WizardLM-33B-V1.0-Uncensored [https://huggingface.co/ehartford/WizardLM-33B-V1.0-Uncensored](https://huggingface.co/ehartford/WizardLM-33B-V1.0-Uncensored) Based on the [WizardLM/WizardLM\_evol\_instruct\_V2\_196k](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k) dataset I filtered it to remove refusals, avoidance, bias. [ehartford/WizardLM\_evol\_instruct\_V2\_196k\_unfiltered\_merged\_split](https://huggingface.co/datasets/ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split) I trained this with Vicuna's FastChat, as the new data is in ShareGPT format and WizardLM team has not specified a method to train it. The prompt format is Vicuna-1.1 style. While the instruct layer has been uncensored, LLaMA itself has opinions, and will resist instructions it disagrees with. You must overcome that with your own wit and prompt engineering. Finally, you are responsible for the content you create using this model. It is uncensored and can produce illegal and unethical content, if you ask it to. If deploying this in a service, I would recommend you train a LoRA to implement your own alignment to prevent unwanted behavior. [u/The-Bloke](https://www.reddit.com/u/The-Bloke/) has kindly quantized this model as a service to the community. Respect. [https://huggingface.co/TheBloke/WizardLM-33B-V1.0-Uncensored-GPTQ](https://huggingface.co/TheBloke/WizardLM-33B-V1.0-Uncensored-GPTQ) [https://huggingface.co/TheBloke/WizardLM-33B-V1.0-Uncensored-GGML](https://huggingface.co/TheBloke/WizardLM-33B-V1.0-Uncensored-GGML) I'm working on MPT-30B and Falcon-40B. Thank you to the open-source community and everyone who helped me. Also thanks to [chirper.ai](https://chirper.ai/) for sponsoring some of my compute.
2023-06-24T17:07:55
https://www.reddit.com/r/LocalLLaMA/comments/14hy369/wizardlm33bv10uncensored/
faldore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14hy369
false
null
t3_14hy369
/r/LocalLLaMA/comments/14hy369/wizardlm33bv10uncensored/
false
false
self
259
{'enabled': False, 'images': [{'id': 'SEuEn66m6N56xXsIFnynzlANOvkJZei8f88r0ztnS1A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/a2LzA_zXE7-qkIoCF3zUxbaaxj85XA6K1S2P2ju92ro.jpg?width=108&crop=smart&auto=webp&s=4f89ebe9c9e8b177b8d4e559e0b83054be18a36d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/a2LzA_zXE7-qkIoCF3zUxbaaxj85XA6K1S2P2ju92ro.jpg?width=216&crop=smart&auto=webp&s=2e02c90faefb43ee94cedc62d9c342e09875e7c1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/a2LzA_zXE7-qkIoCF3zUxbaaxj85XA6K1S2P2ju92ro.jpg?width=320&crop=smart&auto=webp&s=ac6caf3fd5eb1c67e4177dcca8da1f765578ed93', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/a2LzA_zXE7-qkIoCF3zUxbaaxj85XA6K1S2P2ju92ro.jpg?width=640&crop=smart&auto=webp&s=ad4919c4d280b20ead3dce2075c131c57cf3916a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/a2LzA_zXE7-qkIoCF3zUxbaaxj85XA6K1S2P2ju92ro.jpg?width=960&crop=smart&auto=webp&s=1f863b43eede0f9b5ea92f8f12ad62aad86cb9bc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/a2LzA_zXE7-qkIoCF3zUxbaaxj85XA6K1S2P2ju92ro.jpg?width=1080&crop=smart&auto=webp&s=925d116f29c05e731cd44aa41ba371806e16032c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/a2LzA_zXE7-qkIoCF3zUxbaaxj85XA6K1S2P2ju92ro.jpg?auto=webp&s=a5ca27b656aa4a1d8423adaf7c9a65890b995afe', 'width': 1200}, 'variants': {}}]}
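Since the post says the prompt format is Vicuna-1.1 style, here is a small helper showing what that usually looks like. The exact system preamble below is an assumption, so check the model card if outputs look off.

```python
# Vicuna-1.1 style prompt builder (the system line is an assumption - verify
# the exact wording against the model card).
def vicuna_prompt(user_message: str,
                  system: str = ("A chat between a curious user and an "
                                 "artificial intelligence assistant. The "
                                 "assistant gives helpful, detailed, and "
                                 "polite answers to the user's questions.")) -> str:
    return f"{system} USER: {user_message} ASSISTANT:"

print(vicuna_prompt("Write a haiku about llamas."))
```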
Considering getting a Jetson AGX Orin.. anyone have experience with it?
25
I've been messing around with local models on the equipment I have (just gaming rig type stuff, also a pi cluster for the fun of it), which limits the usefulness of the model. I'd really like to have a local model on a dedicated machine that I can connect into from my home network.
2023-06-24T17:18:59
https://i.redd.it/z44bybzx108b1.jpg
[deleted]
i.redd.it
1970-01-01T00:00:00
0
{}
14hyci4
false
null
t3_14hyci4
/r/LocalLLaMA/comments/14hyci4/considering_getting_a_jetson_agx_orin_anyone_have/
false
false
https://a.thumbs.redditm…QCKYFSe9XSo0.jpg
25
{'enabled': True, 'images': [{'id': 'qT31HeNsZ99qRQIiPhgxhePV33XnxFIc4r4DBJS3Q8k', 'resolutions': [{'height': 170, 'url': 'https://preview.redd.it/z44bybzx108b1.jpg?width=108&crop=smart&auto=webp&s=57c59d2be484c4990e36de168c00a9a3aa749bcc', 'width': 108}, {'height': 340, 'url': 'https://preview.redd.it/z44bybzx108b1.jpg?width=216&crop=smart&auto=webp&s=8a2e63c3904af43269966dd613140f3c3253d6ca', 'width': 216}, {'height': 504, 'url': 'https://preview.redd.it/z44bybzx108b1.jpg?width=320&crop=smart&auto=webp&s=c4eb8bfab4e4c0003b372e9bdecd0fbf8acdcf42', 'width': 320}, {'height': 1009, 'url': 'https://preview.redd.it/z44bybzx108b1.jpg?width=640&crop=smart&auto=webp&s=df0a0f8df2bec6c5738498235c0bacfa9f9186d6', 'width': 640}, {'height': 1513, 'url': 'https://preview.redd.it/z44bybzx108b1.jpg?width=960&crop=smart&auto=webp&s=e14bdf2fc0b1fddaf19f64f1604eb5717e7dea85', 'width': 960}, {'height': 1703, 'url': 'https://preview.redd.it/z44bybzx108b1.jpg?width=1080&crop=smart&auto=webp&s=7a9a379c70390004eeb93a77fb789f0de090da98', 'width': 1080}], 'source': {'height': 1703, 'url': 'https://preview.redd.it/z44bybzx108b1.jpg?auto=webp&s=a063395bf5cdcae262cb6e636f0f4001aad3defa', 'width': 1080}, 'variants': {}}]}
Koboldcpp How do I improve token generation speed with cuda
6
So I know how to use OpenCL and offload layers to the GPU with --clblast, but is there something like a --cublas equivalent in koboldcpp?
2023-06-24T19:32:18
https://www.reddit.com/r/LocalLLaMA/comments/14i1fui/koboldcpp_how_do_i_improve_token_generation_speed/
GOD_HIMSELVES
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14i1fui
false
null
t3_14i1fui
/r/LocalLLaMA/comments/14i1fui/koboldcpp_how_do_i_improve_token_generation_speed/
false
false
self
6
null
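For what it's worth, recent koboldcpp builds expose a CUDA path via a `--usecublas` flag alongside `--gpulayers`. The sketch below just wraps the launch in Python; the flag names and paths are assumptions, so confirm them against `python koboldcpp.py --help` for your version.

```python
# Launch sketch: offload layers with koboldcpp's CUDA (cuBLAS) backend.
# Flag names and paths are assumptions - check your koboldcpp version's --help.
import subprocess

cmd = [
    "python", "koboldcpp.py",                   # path to your koboldcpp checkout
    "--model", "models/model.ggmlv3.q4_0.bin",  # placeholder model path
    "--usecublas",                              # CUDA instead of CLBlast
    "--gpulayers", "30",                        # how many layers to offload
]
subprocess.run(cmd, check=True)
```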
Fine tune for a QNA chatbot
1
Hi! I've been delving into the capabilities of LLMs for Q&A chatbots and am currently working with a dataset of 500 Q&As. My experience with GPT-4, combined with vector databases, has been quite positive - the answers it generates are impressive. However, I'm hitting a wall with two challenges: local deployment is out of reach, and including knowledge in the prompt significantly bulks up size, leading to substantial costs. Given these constraints, I'm looking into the possibility of fine-tuning a base model as an alternative. Though I've come across a handful of online tutorials, my attempts with falcon 7b haven't hit the mark - the accuracy just isn't there. I'm wondering if anyone else here has traveled this road and successfully created a Q&A system by fine-tuning a base model. If so, I'd be grateful to hear about your experience. In particular, your thoughts on the optimal base model for fine-tuning and any insight into the realistic outcomes I could expect from this method would be greatly appreciated.
2023-06-24T20:22:57
https://www.reddit.com/r/LocalLLaMA/comments/14i2mlf/fine_tune_for_a_qna_chatbot/
Ethernalmind
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14i2mlf
false
null
t3_14i2mlf
/r/LocalLLaMA/comments/14i2mlf/fine_tune_for_a_qna_chatbot/
false
false
self
1
null
New model using orca dataset
73
https://huggingface.co/psmathur/orca_mini_13b orca_mini_13b An OpenLLaMA-13B model trained on explain-tuned datasets, created using instructions and input from the WizardLM, Alpaca & Dolly-V2 datasets and applying the Orca research paper's dataset construction approaches. I am not the model creator.
2023-06-24T20:55:50
https://www.reddit.com/r/LocalLLaMA/comments/14i3dog/new_model_using_orca_dataset/
roobenTHICK
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14i3dog
false
null
t3_14i3dog
/r/LocalLLaMA/comments/14i3dog/new_model_using_orca_dataset/
false
false
self
73
{'enabled': False, 'images': [{'id': 'aIWn_fGK7k0UZF60dQnhJaT4Yc_A8zXuRrcRc_DoxzI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ieijcqFnguckWZvgGfbIdFG522g3PEyyd2mh0vOgzkk.jpg?width=108&crop=smart&auto=webp&s=4ea6814df014e96e5be87b2a2415529e032182ff', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ieijcqFnguckWZvgGfbIdFG522g3PEyyd2mh0vOgzkk.jpg?width=216&crop=smart&auto=webp&s=ec3588b7edee3220fe7ae28b3ca7553b0b4c7cfd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ieijcqFnguckWZvgGfbIdFG522g3PEyyd2mh0vOgzkk.jpg?width=320&crop=smart&auto=webp&s=d3783d6dd55daaea33ab9db7bfb08564ee184607', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ieijcqFnguckWZvgGfbIdFG522g3PEyyd2mh0vOgzkk.jpg?width=640&crop=smart&auto=webp&s=1b2faf5ff598f214dab0c56e41b057f75591307a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ieijcqFnguckWZvgGfbIdFG522g3PEyyd2mh0vOgzkk.jpg?width=960&crop=smart&auto=webp&s=d298b25eec5a13aae4f272f85be967ab3a4ce683', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ieijcqFnguckWZvgGfbIdFG522g3PEyyd2mh0vOgzkk.jpg?width=1080&crop=smart&auto=webp&s=729cd30367a3044b7aa1f2aa499d5d8589621431', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ieijcqFnguckWZvgGfbIdFG522g3PEyyd2mh0vOgzkk.jpg?auto=webp&s=312b7d4525b819fcb10ea187751da63322a7e192', 'width': 1200}, 'variants': {}}]}
Adding personality and memory to create characters
9
I've been playing around with LLMs for some time now and want to build a chatbot that behaves almost exactly like a human being. I've decided to use the langchain framework and am having some trouble. 1. Memory: I tried ConversationSummaryMemory and ConversationBufferMemory; in both cases, the model is not able to make logical inferences from sentences. For example, I tell the model "I'm Sam, I'm from Austria". Then immediately after, when I ask it "What's my name?" or "What is my home country?" or "Where am I from?", it is not able to give correct answers. 2. Personality: For now, I've just modified the prompt template with something like "This is a conv between a human and an AI named .... " and so on, then I append the conversation history to the prompt template (do I need to do this even after adding langchain memory??). This is quite restrictive and I'll need something a lot more robust and flexible. I want my model to use normal chat lingo and stuff like "lol" or "idgaf" etc. Do make suggestions and let me know if any of this is possible.
2023-06-24T21:01:38
https://www.reddit.com/r/LocalLLaMA/comments/14i3iox/adding_personality_and_memory_to_create_characters/
CmplxQ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14i3iox
false
null
t3_14i3iox
/r/LocalLLaMA/comments/14i3iox/adding_personality_and_memory_to_create_characters/
false
false
self
9
null
How does GPT-4 rank on the HuggingFace leaderboard?
17
Exactly as the title says, GPT-4 isn't listed on HuggingFace's leaderboard. This seems pretty important, seeing as how it's the current industry standard.
2023-06-24T21:25:48
https://www.reddit.com/r/LocalLLaMA/comments/14i43j7/how_does_gpt4_rank_on_the_huggingface_leaderboard/
Bochinator
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14i43j7
false
null
t3_14i43j7
/r/LocalLLaMA/comments/14i43j7/how_does_gpt4_rank_on_the_huggingface_leaderboard/
false
false
self
17
null
Intel Arc viable now?
17
It has been some months since the last post about the Intel Arc. I see the latest Kobold release rewrote the CUDA kernel in OpenCL, and Intel seems to be updating the drivers very well. Do we think it is now a good budget card? Not as turnkey as NVIDIA, but I think it can be a good alternative for the poor LLM enjoyer.
2023-06-24T21:52:30
https://www.reddit.com/r/LocalLLaMA/comments/14i4plg/intel_arc_viable_now/
SlavaSobov
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14i4plg
false
null
t3_14i4plg
/r/LocalLLaMA/comments/14i4plg/intel_arc_viable_now/
false
false
self
17
null
How do I about running out of context in ooba?
1
[removed]
2023-06-24T22:24:05
https://www.reddit.com/r/LocalLLaMA/comments/14i5f5f/how_do_i_about_running_out_of_context_in_ooba/
Vanilla_Vampi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14i5f5f
false
null
t3_14i5f5f
/r/LocalLLaMA/comments/14i5f5f/how_do_i_about_running_out_of_context_in_ooba/
false
false
default
1
null
Has anyone tried falcoder model?
7
It has only 7B parameters. Right now I do not have a PC to test it. Is this better than wizardcoder? https://huggingface.co/mrm8488/falcoder-7b "Falcon-7b fine-tuned on the CodeAlpaca 20k instructions dataset by using the method QLoRA with PEFT library."
2023-06-24T22:56:52
https://www.reddit.com/r/LocalLLaMA/comments/14i652r/has_anyone_tried_falcoder_model/
polawiaczperel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14i652r
false
null
t3_14i652r
/r/LocalLLaMA/comments/14i652r/has_anyone_tried_falcoder_model/
false
false
self
7
{'enabled': False, 'images': [{'id': 'kckByvNI7xP1OTFxNupfAgvrJXZ0fbzbK3KHxeMbvbI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/frxywgHviuEwBJBs1IC_M_vc55iHXsOI7ntVnbQs2rM.jpg?width=108&crop=smart&auto=webp&s=92210f7206a60246c0bc8be4ae600a7250f4c469', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/frxywgHviuEwBJBs1IC_M_vc55iHXsOI7ntVnbQs2rM.jpg?width=216&crop=smart&auto=webp&s=b01b6f47e9060f067753855bea107b5a8aeed85b', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/frxywgHviuEwBJBs1IC_M_vc55iHXsOI7ntVnbQs2rM.jpg?width=320&crop=smart&auto=webp&s=cfd467f35e14e1d4ea20a5fdb8b23b5965e2c3b3', 'width': 320}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/frxywgHviuEwBJBs1IC_M_vc55iHXsOI7ntVnbQs2rM.jpg?auto=webp&s=d2e3fa709876534e40db43a983a293311724bf3a', 'width': 500}, 'variants': {}}]}
Model for: 13600k + 32gb ddr5 ram (6400/32) + 8gb vram (3060ti),
4
Hello everyone, I lately went back to local LLMs as I read that you can now run an LLM at a decent(?) speed on CPU+RAM thanks to quantization and llama.cpp, and that I could even offload some layers to the GPU (I hope I understood it correctly). Now, however, I'm quite clueless when it comes to model choice. I don't have any particular need; I would like to test general conversation, questions, and RP. Generally everything outside of coding. If the model is uncensored it's a bonus, but not a requirement. Thank you for your suggestions. EDIT: My bad, I was mostly interested in the maximum size of a model that could run on this setup, but I didn't clarify that. That's why I need AI, to be smart for me!
2023-06-25T01:44:28
https://www.reddit.com/r/LocalLLaMA/comments/14i9mm0/model_for_13600k_32gb_ddr5_ram_640032_8gb_vram/
Marbaass
self.LocalLLaMA
2023-06-25T02:41:39
0
{}
14i9mm0
false
null
t3_14i9mm0
/r/LocalLLaMA/comments/14i9mm0/model_for_13600k_32gb_ddr5_ram_640032_8gb_vram/
false
false
self
4
null
[WIP] Auto Edit Videos with LLMs
37
We put together an approach to use LLMs with tools like FFMPerative for video editing through a chat-based interface, which streamlined the process significantly. Here's a quick video walking through the pipeline: https://reddit.com/link/14iawid/video/khvuwqfkv28b1/player We use GPT-4/Vicuna as a video director, planning a sequence of video edits when provided with the necessary context about the video clips. Video-LLaMA and Whisper allow us to extract more context through video understanding and transcripts. With [FFMPerative](https://github.com/remyxai/FFMPerative), we execute the described edit plan and put together a video. Edit: adding colab notebook [here](https://colab.research.google.com/drive/149byzCNd17dAehVuWXkiFQ2mVe_icLCa?usp=sharing) and video showcasing [local fine-tuned model update](https://www.youtube.com/watch?v=p3dcKj6Xa6A)
2023-06-25T02:49:37
https://www.reddit.com/r/LocalLLaMA/comments/14iawid/wip_auto_edit_videos_with_llms/
remyxai
self.LocalLLaMA
2023-07-26T17:29:12
0
{}
14iawid
false
null
t3_14iawid
/r/LocalLLaMA/comments/14iawid/wip_auto_edit_videos_with_llms/
false
false
https://b.thumbs.redditm…Fv1ibCd7KYxQ.jpg
37
{'enabled': False, 'images': [{'id': 'uNF_C7xGnbkAhnWNVVZ7TO1Xqqsw4t7BgDSYFgGQKWg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Y-ojX52dxZXi74G4RXn_IIBuWu3Q_p8j9MxD9affjD4.jpg?width=108&crop=smart&auto=webp&s=9e798806b68044e9f37cf2f8682f542abdce8047', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Y-ojX52dxZXi74G4RXn_IIBuWu3Q_p8j9MxD9affjD4.jpg?width=216&crop=smart&auto=webp&s=2428947a5727c48e4cdc6d4f2f0c03b213749bb9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Y-ojX52dxZXi74G4RXn_IIBuWu3Q_p8j9MxD9affjD4.jpg?width=320&crop=smart&auto=webp&s=9bf209f066bc904b18ed547fa667280e7af71194', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Y-ojX52dxZXi74G4RXn_IIBuWu3Q_p8j9MxD9affjD4.jpg?width=640&crop=smart&auto=webp&s=05fd3edddb315fbfeb2b3b642852c9f0663e5a63', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Y-ojX52dxZXi74G4RXn_IIBuWu3Q_p8j9MxD9affjD4.jpg?width=960&crop=smart&auto=webp&s=991ce06a112243e19c5f1ffbcea2e59282750d8d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Y-ojX52dxZXi74G4RXn_IIBuWu3Q_p8j9MxD9affjD4.jpg?width=1080&crop=smart&auto=webp&s=76a7909ba88041c34963d82e334823ff93aecba6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Y-ojX52dxZXi74G4RXn_IIBuWu3Q_p8j9MxD9affjD4.jpg?auto=webp&s=c1cf63522294fe59e1d5bff4afbefaea3477b164', 'width': 1200}, 'variants': {}}]}
Orca-Mini-13b, Orca-Mini-7b & Orca-Mini-3b
170
Today I released Orca-Mini-13b, Orca-Mini-7b & Orca-Mini-3b [https://huggingface.co/psmathur/orca\_mini\_13b](https://huggingface.co/psmathur/orca_mini_13b) [https://huggingface.co/psmathur/orca\_mini\_7b](https://huggingface.co/psmathur/orca_mini_7b) [https://huggingface.co/psmathur/orca\_mini\_3b](https://huggingface.co/psmathur/orca_mini_3b) All of the above are based on the OpenLLaMA 13B/7B/3B models. I trained them on custom explain-tuned datasets, created using instructions and input from the WizardLM, Alpaca & Dolly-V2 datasets and then applying the Orca research paper's dataset construction approaches. Datasets: [https://huggingface.co/datasets/psmathur/WizardLM\_Orca](https://huggingface.co/datasets/psmathur/WizardLM_Orca) [https://huggingface.co/datasets/psmathur/alpaca\_orca](https://huggingface.co/datasets/psmathur/alpaca_orca) [https://huggingface.co/datasets/psmathur/dolly-v2\_orca](https://huggingface.co/datasets/psmathur/dolly-v2_orca) We built explain-tuned versions of the WizardLM dataset (\~70K), the Alpaca dataset (\~52K) & the Dolly-V2 dataset (\~15K) using approaches from the Orca research paper. We leverage all 15 system instructions provided in the Orca research paper to generate custom datasets, in contrast to the vanilla instruction tuning approaches used by the original datasets. This helps the student model (this model) learn the thought process of the teacher model, which is ChatGPT (gpt-3.5-turbo-0301 version). Please see the example usage below for how the system prompt is added before each instruction. Training: The training configurations are provided in the table below. The training ran on 8x A100 (80 GB) GPUs and lasted around 15 hours, for a cost of $180 using Lambda Labs. We used DeepSpeed with fully sharded data parallelism, also known as ZeRO stage 3, by writing our own fine-tuning scripts plus leveraging some of the model training code provided by the amazing OpenAlpaca repo. u/The-Bloke has kindly quantized this model as a service to the community. Respect. [https://huggingface.co/TheBloke/orca\_mini\_3B-GGML](https://huggingface.co/TheBloke/orca_mini_3B-GGML) [https://huggingface.co/TheBloke/orca\_mini\_7B-GPTQ](https://huggingface.co/TheBloke/orca_mini_7B-GPTQ) [https://huggingface.co/TheBloke/orca\_mini\_7B-GGML](https://huggingface.co/TheBloke/orca_mini_7B-GGML) [https://huggingface.co/TheBloke/orca\_mini\_13B-GPTQ](https://huggingface.co/TheBloke/orca_mini_13B-GPTQ) [https://huggingface.co/TheBloke/orca\_mini\_13B-GGML](https://huggingface.co/TheBloke/orca_mini_13B-GGML) I want to say a huge thanks to all the community members who came before me and paved the path to other people's success. Huge shoutout to Eric Hartford @[https://www.reddit.com/user/faldore/](https://www.reddit.com/user/faldore/) I'm planning on releasing bigger explain-tuned datasets and more SFT models in the future, and will keep you all updated. NOTE: Due to a limitation in OpenLLaMA, this model will not produce consecutive whitespace - hence, code generation will not work properly. Check out more info at [https://github.com/openlm-research/open\_llama#](https://github.com/openlm-research/open_llama#)
2023-06-25T03:47:02
https://www.reddit.com/r/LocalLLaMA/comments/14ibzau/orcamini13b_orcamini7b_orcamini3b/
Remarkable-Spite-107
self.LocalLLaMA
2023-06-26T14:59:25
0
{}
14ibzau
false
null
t3_14ibzau
/r/LocalLLaMA/comments/14ibzau/orcamini13b_orcamini7b_orcamini3b/
false
false
self
170
{'enabled': False, 'images': [{'id': 'aIWn_fGK7k0UZF60dQnhJaT4Yc_A8zXuRrcRc_DoxzI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ieijcqFnguckWZvgGfbIdFG522g3PEyyd2mh0vOgzkk.jpg?width=108&crop=smart&auto=webp&s=4ea6814df014e96e5be87b2a2415529e032182ff', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ieijcqFnguckWZvgGfbIdFG522g3PEyyd2mh0vOgzkk.jpg?width=216&crop=smart&auto=webp&s=ec3588b7edee3220fe7ae28b3ca7553b0b4c7cfd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ieijcqFnguckWZvgGfbIdFG522g3PEyyd2mh0vOgzkk.jpg?width=320&crop=smart&auto=webp&s=d3783d6dd55daaea33ab9db7bfb08564ee184607', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ieijcqFnguckWZvgGfbIdFG522g3PEyyd2mh0vOgzkk.jpg?width=640&crop=smart&auto=webp&s=1b2faf5ff598f214dab0c56e41b057f75591307a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ieijcqFnguckWZvgGfbIdFG522g3PEyyd2mh0vOgzkk.jpg?width=960&crop=smart&auto=webp&s=d298b25eec5a13aae4f272f85be967ab3a4ce683', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ieijcqFnguckWZvgGfbIdFG522g3PEyyd2mh0vOgzkk.jpg?width=1080&crop=smart&auto=webp&s=729cd30367a3044b7aa1f2aa499d5d8589621431', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ieijcqFnguckWZvgGfbIdFG522g3PEyyd2mh0vOgzkk.jpg?auto=webp&s=312b7d4525b819fcb10ea187751da63322a7e192', 'width': 1200}, 'variants': {}}]}
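The post mentions a system prompt placed before each instruction. As a rough illustration, a prompt builder could look like the sketch below; the exact template and default system string are assumptions pieced together from the model-card style, so double-check them there before relying on this.

```python
# Orca-mini style prompt builder - the template and system string are
# assumptions; verify the exact headers against the model card.
def orca_mini_prompt(instruction: str,
                     system: str = ("You are an AI assistant that follows "
                                    "instruction extremely well. Help as much "
                                    "as you can.")) -> str:
    return f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"

print(orca_mini_prompt("Summarize the Orca paper approach in two sentences."))
```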
According to LLaMA you can't drown in a swimming pool
0
Funny Turing test of sorts. Apparently this is a very confusing prompt: >Hypothetically, if I replaced all the water in a full-sized swimming pool with coffee and a person jumped in, is it possible for them to drown in the pool? Answer: >It is unlikely for someone to drown solely due to the presence of liquid coffee in the pool. The primary concern would be the potential discomfort and irritation caused by the coffee's acidity and temperature. It's always important to exercise caution and avoid risky behaviors around liquids, especially when they contain substances like coffee. A slew of models (except Vicuna, occasionally) all seem to think you can't drown in coffee. Here's a sample conversation once I figured out that the answer was going to be "not really": >User: What's a coffee? > >Llama: Coffee is a beverage made from brewing the roasted seeds of the coffee plant. It's often served hot or cold and has caffeine that can help increase alertness and energy levels. > >User: List the ingredients in a cup of coffee > >Llama: Typically, a cup of coffee consists of filtered water, freshly ground coffee beans, and sometimes milk, sugar, or other sweeteners, and foam (if desired). > >User: A coffee is brewed by running hot water through freshly ground coffee beans. The result is filtered to make sure none of the grounds end up in the drink. Is this correct? > >Llama: Yes, that's correct! The process of brewing coffee involves pouring hot water over freshly ground coffee beans, which allows the flavors, aromas, and caffeine to extracted and infuse into the water. Then, the liquid is filtered to remove any remaining coffee grounds before serving. > >User: Does the resulting liquid contain coffee grounds? > >Llama: No, the resulting liquid, commonly known as coffee, does not contain any coffee grounds. The grounds are removed during the filtering process to ensure a smooth and enjoyable experience. > >User: And would the resulting liquid be called a coffee? > >Llama: Yes, the resulting liquid is usually referred to as coffee, whether it's served hot or cold, with or without added ingredients like milk or sugar. > >User: Does this mean that a coffee does not contain any coffee grounds? > >Llama: To clarify, the finished product that we typically refer to as "coffee" does not contain any coffee grounds. However, the process of making coffee involves using freshly ground coffee beans, which are then filtered out prior to consumption. > >User: If I take the finished product, the "coffee" and I make enough of it to fill a swimming pool, what am I putting in the swimming pool? > >Llama: In theory, if you were to make enough coffee to fill a swimming pool, the primary ingredient would still be water, followed by freshly ground coffee beans. However, it's important to note that adding large amounts of coffee to water can alter its chemistry and potentially cause damage to the pool's infrastructure. It's always best to consult with experts before attempting such experiments. > >User: Could someone drown in this swimming pool filled with the finished coffee liquid? > >Llama: While it is unlikely for someone to drown in a swimming pool filled with coffee, consuming large quantities of coffee can lead to adverse health effects such as increased heart rate, anxiety, and dehydration. It's always important to consume coffee in moderation and consult with a healthcare professional if concerns arise. (This was using WizardLM 30B GPTQ 4-Bit, but this seems to work on a bunch of models. No idea why. 
ChatGPT and Bard get it right -- Bard less so, it still seems to think it's not too likely but at least knows there's water to drown in when in a pool of coffee.) Edit: Try it in the arena to get results from a bunch of models: [https://chat.lmsys.org/?arena](https://chat.lmsys.org/?arena)
2023-06-25T07:28:20
https://www.reddit.com/r/LocalLLaMA/comments/14ifvzu/according_to_llama_you_cant_drown_in_a_swimming/
keith_and_kit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14ifvzu
false
null
t3_14ifvzu
/r/LocalLLaMA/comments/14ifvzu/according_to_llama_you_cant_drown_in_a_swimming/
false
false
self
0
null
Some ranting. Focus. Versatility is greater than Variety. The new replication crisis.
0
**(Please read the whole thing before you hate on me. Or just skip the hating on me.)** We're leaving the G out of A G I. This isn't about impatience, this is about trajectory. I don't see any evidence of focus even being a goal, let alone a near or mid term foreseeable outcome. Kinda tired of 47,000 models. Been thinking about this and variety is not the spice of life, versatility is. Are we even aiming in a direction of load following models? I don't have to download a specific version of a specific game for my specific hardware. I just get the game and it performs as well as it can for what I give it. Do we even want that for this tech someday? Again I see no efforts leaning in that direction. Is that because it's impossible? Or is it because no one cares? Saying I want one best model, or for at least that to be the goal, is not saying I want to force one use case. It means I want one model that can do it all or CLEAR specialists for specific mutually exclusive niche use cases. The whole power of prompting imo is that various prompts give various utilities from one application. This is a natural monopoly space imo. We don't want a dozen search engines, we want one that really works. That's the entire point of GPT. The Variety should be in prompt response, not prompt receivers. The Variety should be Versatility. It's not that I don't appreciate all the work, and I'm not being entitled, I'm saying the direction of spamming models with incremental changes, to a thousand versions, to be run with a thousand different settings, and a thousand different prompt structures means virtually no real play testing of any given configuration. That's why there's this push to streamline testing because we're having to compare and contrast too many models and it's unsustainable. What's the model per month rate now? You can bet it's going up. Is that really considered a good thing intrinsically or are we just assuming any progress is good progress? Is there any way to combine models? Is shooting for the best of at least pair of worlds something no one cares about but me? How much duplication of effort is already in play? Can we at least get some kind of list of models that have been completely obviated? Everyone is all about the leaderboard but we need a loser board too. Some culling. I feel like we're in a new replication crisis where everyone wants to make the next model but no one is working on a core model as a group. It's like how there's all these linux distros but no one has ever really worked on a distro meant to unseat windows. This is an ancient debate and I hate seeing it happen again in this context. Stable diffusion is having a similar problem. Millions of niche models and control nets and loras when we'd all be better off if all the effort were aimed at a core model that just works better. Can someone explain to me without being a snarky ideolog why this can't be done? Are we talking about mutual exclusivity here? Is it literally not possible to focus? It's weird how models don't even have a mechanism for being updated. It's never like wizard version 2 it's always wizard-orca-magicpants-manticore-64-b-k-5-M.bin. I understand the need for variation for hardware configurations, but I'm definitely burning out with all the seemingly meaningless choices. I feel no closer to my virtual D&D game than I was a month ago.
2023-06-25T08:01:07
https://www.reddit.com/r/LocalLLaMA/comments/14igfq4/some_ranting_focus_versatility_is_greater_than/
Innomen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14igfq4
false
null
t3_14igfq4
/r/LocalLLaMA/comments/14igfq4/some_ranting_focus_versatility_is_greater_than/
false
false
self
0
null
What is a good way to generate a training dataset from an article/book or long text?
15
Hello, I want to create a large Q/A dataset that can answer a lot of questions from some books and articles. Right now I am testing out some solutions with GPT-4, but it recommends using a few other models specialized for Q/A generation, and the quality of the generated content is very, very bad. If anyone is using any methods or can provide some useful info on how to generate Q/A datasets with other models or other techniques, please share with the community. This is what GPT-4 recommends at the moment, but I guess I will research a bit more: [https://gist.github.com/m0sh1x2/82948928f9971f86d8a344060f4de610](https://gist.github.com/m0sh1x2/82948928f9971f86d8a344060f4de610)
2023-06-25T12:43:58
https://www.reddit.com/r/LocalLLaMA/comments/14ildl8/what_is_a_good_way_to_generate_a_training_dataset/
jumperabg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14ildl8
false
null
t3_14ildl8
/r/LocalLLaMA/comments/14ildl8/what_is_a_good_way_to_generate_a_training_dataset/
false
false
self
15
{'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=108&crop=smart&auto=webp&s=d5811c5bda5fece1040636a6af8702ba790f0fd4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=216&crop=smart&auto=webp&s=eee576fd4da7535eb53ceb88dd8b52f073048441', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=320&crop=smart&auto=webp&s=72872d880460efa723918c000adca0ed259cf775', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=640&crop=smart&auto=webp&s=f3545b9335d763c9da9c16bf7bf9a3f907dbd6f6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=960&crop=smart&auto=webp&s=2d241ace0f1c07088fac3f8469dbad3b05d2d419', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=1080&crop=smart&auto=webp&s=9055f11bdc00beb0b3589e1cae5817d6070d83bc', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?auto=webp&s=079a7260ec149880c73263d64811698adb22760a', 'width': 1280}, 'variants': {}}]}
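One common pattern is to chunk the source text and ask a model for question/answer pairs per chunk, as sketched below. The `generate` function is a hypothetical stand-in for whatever local or hosted model you end up calling, and the prompt wording is only a suggestion.

```python
# Sketch: chunk a long text and prompt a model for Q/A pairs per chunk.
# `generate` is a hypothetical placeholder for your actual LLM call.
import json
import textwrap

def generate(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM call here")  # placeholder

def make_qa_dataset(book_text: str, chunk_chars: int = 2000):
    pairs = []
    for chunk in textwrap.wrap(book_text, chunk_chars):
        prompt = (
            "Write three question/answer pairs that can be answered only from "
            "the passage below. Return JSON: "
            '[{"question": "...", "answer": "..."}]\n\n'
            f"Passage:\n{chunk}"
        )
        try:
            pairs.extend(json.loads(generate(prompt)))
        except (json.JSONDecodeError, NotImplementedError):
            continue  # skip malformed responses (or the unimplemented stub)
    return pairs
```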
Extensive LLama.cpp benchmark & more speed on CPU, 7b to 30b, Q2_K, to Q6_K and FP16, X3D, DDR-4000 and DDR-6000
138
# TL;DR * Some of the effects observed here are specific to the AMD Ryzen 9 7950X3D, some apply in general, some can be used to improve llama.cpp * Just use 14 or 15 threads and it's quite fast, but it could be even faster with some manual tweaking. * Number and frequency of cores determine prompt processing speed. Cache and RAM speed don't matter here. * Using all cores makes prompt eval slower, unless using full hyperthreading. * Cache and RAM speed determine text generation speed. Number of cores or hyperthreading don't matter or even slow it down. * Text generation in FP16 is way slower than Q4\_0 of a model that has twice the number of parameters. * Selective spread core usage leads to 9% faster text generation than default core selection. * You can sometimes save a ton of prompt processing time by increasing the context window size, all while getting higher-quality results. * Twice the model file size, half the text generation speed. * Twice the model parameters, roughly half the prompt processing speed. # Intro Here are some simplified overviews and definitions first, to ease the understanding of the following observations. ## Terms * Context window: The number of tokens ("text") that the model keep in memory ("act on") while generating a response. It is filled by the initial prompt as well as by longer generated responses. * llama.cpp stat "prompt eval time (ms per token)": Number of tokens in the initial prompt and time required to process it. More tokens are added and processed here whenever the generated text exceeds the context window size. This means the prompt token stat for a very short prompt can be quite high when a long text was generated in response. * llama.cpp stat "eval time (ms per token): Number of generated tokens ("response text length") and the time required to generate them. * Hyperthreading: A CPU core isn't one "solid" thing. It consists of multiple sub-units, some for different types of calculations or instructions. Hyperthreading makes one physical core available as two logical cores, so that a task on the second logical core could potentially use a part of the CPU core that a task on the first logical core doesn't use, and thus lead to faster completion. ## Hardware ## CPU: AMD Ryzen 9 7950X3D * There are 8 CPU cores on each chiplet. * The 7950X3D consists of two chiplets, CCD 0 and CCD 1. * CCD 0 has 32 MB + 64 MB cache. * CCD 1 just has the default 32 MB cache, but can run at higher frequencies. * Each core supports hyperthreading, so there are 32 logical cores in total. * Windows allocates workloads on CCD 1 by default. Upon exceeding 8 llama.cpp threads it starts using CCD 0, and finally starts with the logical cores and does hyperthreading when going above 16 threads. * The cores don't run on a fixed frequency. The max frequency of a core is determined by the CPU temperature as well as the CPU usage on the other cores. The highest clock rates are reached when only a single core is used. The lowest when all the cores are used and the CPU fan is set to spin slowly. ## RAM While there are many different properties that matter in general, there is mainly one that's of relevance here: Throughput. DDR-6000 RAM transfers data twice as fast as DDR-3000 RAM. # Observations Here are the general findings. Graphs and details follow in a later section. All benchmarking was performed with a fixed seed for comparable results. ## Prompt processing * Prompt processing is faster on CCD 1 than on CCD 0. 
* The higher frequencies of CCD 1 are better for this than the larger cache of CCD 0. * The number of cores matters. More cores = faster prompt processing. * There is a catch though, using all physical cores leads to slower processing speed than leaving one or two cores available for other background processes. * Using hyperthreading on a few cores slows down processing. * Using hyperthreading on all the cores, thus running llama.cpp with -t 32 on the 7950X3D results in 9% to 18% faster processing compared to 14 or 15 threads. * Q4\_K\_M is about 15% faster than the other variants, including Q4\_0. Only Q2\_K is slightly faster. * RAM speed does not matter. The processing time is identical with DDR-6000 and DDR-4000 RAM. ## Text generation * The text generation is faster on CCD 0 than on CCD 1. * The larger cache of CCD 0 is better for this than the higher frequencies of CCD 1. * Hyperthreading doesn't help here and can even slow down text generation. * Higher model sizes lead to slower text generation speed. * Q4\_K\_M is 6% slower than Q4\_0 for example, as the model file is 8% larger. * The text generation speed when using 14 or 15 cores as initially suggested can be increased by about 10% when using 3 to 4 cores from each CCD instead, so 6 to 8 cores in total. * start /b /wait /affinity 01110111 main.exe -t 6 * For CPUs other than the 7950X3D a different affinity setting must be used. * Using cores from each CCD means also benefiting from each L3 cache and from better heat distribution. * Keep in mind that reducing the number of used cores for faster text generation will unfortunately decrease prompt processing speed. * Text generation with FP16 models scales way less with the number of cores than the quantized versions. * This doesn't matter much, as Q8 achieves almost the same perplexity while scaling way better. * Faster RAM helps. Going from DDR-4000 to DDR-6000 increased tokens per second by 15% to 28%, depending on the number of used cores. # Optimization opportunities ## llama.cpp * Prompt processing is CPU-bound. Any optimization of the code would have a direct impact on processing speed. * Text generation is RAM-bound. Better cache-utilization or data-format improvements/reductions would increase processing speed. * Adding an option for using a different number of threads and specific CPU cores for prompt processing and text generation would reduce the overall CPU usage while finishing text generation faster. Prompt processing is good at 14 threads, but fastest on 32, while text generation is usually fastest on 3+3 cores. ## Usage * When you feed a book to the model and want a simple yes/no as result, then select the threads for prompt processing speed. * If your prompt is short and you'd like the model to generate a book in return, then select the threads for text generation speed. * The context window is set to 512 by default. Usually it's beneficial to increase it. * When the prompt is 500 tokens and the generated response will be 20 tokens, then llama.cpp will spend time on additional prompt processing once 12 of the 20 tokens have been generated, as it reaches the context window size of 512. This will cause the prompt evaluation time to be twice as long as it needs to be. # Appendix: Graphs ## Prompt processing Here is a general overview over the time per token for different model sizes and quantization settings: https://imgur.com/8cpGorw * There are no surprises in CPU scaling in general. * No specific configuration benefits from more threads more than another one. 
* However, there's that bump at 16 threads. Let's zoom in a bit: https://imgur.com/qLvwfmR * With the logarithmic scaling it can be seen more clearly that the processing time goes up with 16 threads and sometimes starts already doing so at 15. * Nevertheless it's always faster with 32 threads, thus with full hyperthreading. * It's also visible that some of the quantizations have unexpected timing behavior. Shouldn't Q4\_0 be faster than Q6\_K as it's less data to process? Here is a chart with the fastest processing times at 32 threads: https://imgur.com/0lUsHTJ * FP16 is clearly the slowest here. Yet it's directly followed by Q4\_0, and Q5\_1 is even slower than that. Q8\_0 does surprisingly well. * All the new quantization approaches are faster than the old ones. * Q2\_K of the 13B model is about as fast as Q5\_1 of the 3B model. * The required time per token seems to be roughly linear between the different model sizes for the same quantization types. Let's look at that in detail to confirm it: https://imgur.com/rBXRdvq * It's not the file size, it's the number of model parameters. * Prompt processing is CPU-bound. Having more parameters means more operations to perform. * It's nicely visible here that FP16 of a smaller model is faster than any quantization of a bigger model, even though FP16 has the larger file. * Q3\_K\_M seems a bit slow for its size. * Q4\_0 and Q5\_1 are way too slow for their size. ## Text generation Let's start with an overview again: https://imgur.com/SyMHpen * This gets interesting, as the scaling differs a lot, contrary to the prompt processing. * There's not much more scaling after 4 threads. Yet at 9 threads it starts getting faster again. That's when threads get allocated to the X3D CCD 0. * 3+3 and 4+4 mean dedicated usage of x threads on each CCD. * This achieves significantly faster speeds than any regular thread configuration. * 30B Q5\_1 starts out way slower than the others, yet becomes faster than Q6\_K with 4 or more threads. * In the middle of the graph it can be seen that 30B Q2\_K, 13B Q8\_0 and 7B\_FP16 converge to almost the same speed. Here is a zoomed-in version with logarithmic scale again: https://imgur.com/dJdRzJS * There's an increase in ms per token at 16 threads, and a very noticeable one at 32 threads. * The benefit of 6 threads 3+3 core allocation is clearly visible here. * FP\_16 does not scale well. * All the quantizations of the 7B model are significantly faster than 3B\_FP16 after using at least 3 cores. * 7B Q4\_0 scales best. It starts slowest with one thread, but even beats Q4\_K\_M in the end. * It cannot beat smaller models like 7B Q3\_K\_M or 3B Q8\_0 though. Now let's look at the fastest text generation times with 3+3 threads: https://imgur.com/Q8UIhGt * Here it can be seen that FP16 doesn't just scale badly, it also can't compete with the quantizations of the next-bigger model. * All models and quantizations appear to perform as expected from the file size, as the text generation step is RAM-bound. Here is a graph that shows that CCD 0 performs better after a few threads, but can't beat the combined speed: https://imgur.com/kAMG6Hi * Windows allocates threads to CCD 1 first, which is why both graphs are comparable. * The longer graphs are the CCD 1 graphs, up to 8 threads. Finally, let's look at model size vs text generation speed: https://imgur.com/rRwXHmd * This is a very nice linear correlation, as this phase is bound by RAM speed. 
* Models with a higher number of parameters have a tiny offset over smaller models, despite similar file sizes.
2023-06-25T12:58:01
https://www.reddit.com/r/LocalLLaMA/comments/14ilo0t/extensive_llamacpp_benchmark_more_speed_on_cpu_7b/
Chromix_
self.LocalLLaMA
1970-01-01T00:00:00
1
{'gid_2': 1}
14ilo0t
false
null
t3_14ilo0t
/r/LocalLLaMA/comments/14ilo0t/extensive_llamacpp_benchmark_more_speed_on_cpu_7b/
false
false
self
138
{'enabled': False, 'images': [{'id': '6c_9qZ0zjEio4tfFdpsLVyDMpzPf3_TYsde4cu_5NRs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/WCkS1HwiA6FVqBtWIYeoJ_bF4smVgEyoV0cDNwS-xE0.jpg?width=108&crop=smart&auto=webp&s=a323b8a17d5cef330868c45adc53e5e9182320d3', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/WCkS1HwiA6FVqBtWIYeoJ_bF4smVgEyoV0cDNwS-xE0.jpg?width=216&crop=smart&auto=webp&s=e0d1a39625f2ba792d6156b77cfca664cf9fc81b', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/WCkS1HwiA6FVqBtWIYeoJ_bF4smVgEyoV0cDNwS-xE0.jpg?width=320&crop=smart&auto=webp&s=485cc3bf8f6d1e9f8b0d03fb283fcea7a3ecd9d4', 'width': 320}], 'source': {'height': 315, 'url': 'https://external-preview.redd.it/WCkS1HwiA6FVqBtWIYeoJ_bF4smVgEyoV0cDNwS-xE0.jpg?auto=webp&s=940c009203066a4be839476c199abed20cbe2eb0', 'width': 600}, 'variants': {}}]}
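If you want to reproduce the thread-count sweep without the Windows affinity tricks, a crude harness via llama-cpp-python can look like the sketch below. Paths are placeholders, and it measures wall-clock throughput rather than llama.cpp's internal per-token stats, so treat it as a rough cross-check only.

```python
# Rough thread-count sweep with llama-cpp-python; wall-clock only, so it is a
# cross-check rather than a replacement for llama.cpp's internal timings.
import time
from llama_cpp import Llama

MODEL = "./models/7b.ggmlv3.q4_K_M.bin"   # placeholder path
PROMPT = "Write a short story about a benchmark. " * 20

for threads in (4, 8, 14, 16):
    llm = Llama(model_path=MODEL, n_ctx=1024, n_threads=threads, verbose=False)
    start = time.perf_counter()
    out = llm(PROMPT, max_tokens=128)
    elapsed = time.perf_counter() - start
    n_gen = out["usage"]["completion_tokens"]
    print(f"{threads:>2} threads: {n_gen / elapsed:.2f} tokens/s overall")
```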
Backing up datasets locally is crucial.
56
Backing up datasets is just as important as backing up models. If you see a dataset you think is valuable, don't forget to back it up locally. Don't trust HuggingFace and \*definitely\* don't trust GitHub. https://preview.redd.it/br6s1eflf68b1.png?width=332&format=png&auto=webp&s=1dfc10c510edcde9109f379fe3aa68e835798d69
2023-06-25T14:51:06
https://www.reddit.com/r/LocalLLaMA/comments/14io6is/backing_up_datasets_locally_is_crucial/
HideLord
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14io6is
false
null
t3_14io6is
/r/LocalLLaMA/comments/14io6is/backing_up_datasets_locally_is_crucial/
false
false
self
56
null
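A quick way to do what the post suggests is to mirror the whole dataset repo to disk with huggingface_hub, as in the sketch below. The repo name is just an example; substitute whichever dataset you care about, and note the API details may shift between library versions.

```python
# Mirror a Hugging Face dataset repo to local disk for safekeeping.
# The repo_id is only an example - substitute the dataset you want to keep.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="WizardLM/WizardLM_evol_instruct_V2_196k",   # example dataset
    repo_type="dataset",
    local_dir="./backups/WizardLM_evol_instruct_V2_196k",
)
print("Saved to", local_path)
```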
I made privateGPT work for Linux check it out!
1
2023-06-25T15:31:31
https://github.com/MichaelSebero/PrivateGPT4Linux
CoolRune
github.com
1970-01-01T00:00:00
0
{}
14ip4ks
false
null
t3_14ip4ks
/r/LocalLLaMA/comments/14ip4ks/i_made_privategpt_work_for_linux_check_it_out/
false
false
default
1
null
Finally, I am able to run text-generation-webui with 33B model (fully into GPU) and a stable-diffusion-webui using GPU with only one 4090.
150
It's amazing what the latest version of text-generation-webui can do using the new Exllama-HF loader! I can load a 33B model into 16.95 GB of VRAM! It takes 21.11 GB of VRAM with AutoGPTQ and 20.07 GB of VRAM with Exllama. That allows me to run text generation and Automatic1111 at the same time using a single graphics card. I am so impressed. EDIT: With the latest text-generation-webui, I can also increase the context of local models beyond 2048 tokens (at a higher memory cost), even for models not designed to support it.
2023-06-25T15:42:33
https://www.reddit.com/r/LocalLLaMA/comments/14ipdqs/finally_i_am_able_to_run_textgenerationwebui_with/
mansionis
self.LocalLLaMA
2023-06-26T07:14:13
0
{}
14ipdqs
false
null
t3_14ipdqs
/r/LocalLLaMA/comments/14ipdqs/finally_i_am_able_to_run_textgenerationwebui_with/
false
false
self
150
null
Looking inside Guidance Source Code
1
[deleted]
2023-06-25T16:41:07
[deleted]
1970-01-01T00:00:00
0
{}
14iqrfv
false
null
t3_14iqrfv
/r/LocalLLaMA/comments/14iqrfv/looking_inside_guidance_source_code/
false
false
default
1
null
How to convert the number of tokens to the number of characters for LLaMA?
4
I use the koboldcpp API (/api/v1/generate) in my program: I input a text string and get the result. Everything works. But I don't understand how to trim an input string that is too long. I can trim by the number of characters, but the limit is 2048 tokens. The OpenAI documentation says that 1 token is roughly equal to 4 characters in English. Is this true for LLaMA models? Is that approximation accurate enough to work correctly?
2023-06-25T16:46:27
https://www.reddit.com/r/LocalLLaMA/comments/14iqvx8/how_to_convert_the_number_of_tokens_to_the_number/
fisbur
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14iqvx8
false
null
t3_14iqvx8
/r/LocalLLaMA/comments/14iqvx8/how_to_convert_the_number_of_tokens_to_the_number/
false
false
self
4
null
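Regarding the token-trimming question above: the 4-characters-per-token rule of thumb is only a rough guide for LLaMA models, since LLaMA's SentencePiece vocabulary tokenizes non-English text and code much less efficiently. A safer approach is to count tokens with the actual tokenizer and trim by tokens. A minimal sketch, assuming the `transformers` library and a locally available LLaMA tokenizer (the path is a placeholder):

```python
from transformers import LlamaTokenizer

# Placeholder path: point this at any local LLaMA-family tokenizer.
tokenizer = LlamaTokenizer.from_pretrained("/path/to/llama-tokenizer")

CONTEXT_LIMIT = 2048
RESERVED_FOR_REPLY = 256  # leave headroom for the tokens you want generated

def trim_prompt(text: str) -> str:
    """Trim text to fit the context window, keeping the most recent tokens."""
    ids = tokenizer.encode(text, add_special_tokens=False)
    budget = CONTEXT_LIMIT - RESERVED_FOR_REPLY
    if len(ids) <= budget:
        return text
    return tokenizer.decode(ids[-budget:])

prompt = trim_prompt(open("conversation.txt", encoding="utf-8").read())
print(len(tokenizer.encode(prompt, add_special_tokens=False)), "tokens after trimming")
```

If pulling in `transformers` is too heavy, the `sentencepiece` package can load LLaMA's tokenizer.model directly; characters divided by 4 only works as a rough estimate for plain English prose.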
What LLM to use for custom data and/or corpus?
1
I am very new to the LLM scene, drawn in primarily by the advancement of locally operated and stored LLMs like those on GPT4All. Attempts to "catch up" have left me largely paralyzed mentally. To be concise: I want to train (assuming that's the correct verbiage) a local LLM only on the primary documents and research materials I used professionally, and also on the books, transcripts, and other examples of my personal writing style. The purpose is to have an AI to use as a personal assistant of sorts that can ultimately imitate my writing style. I understand that this may be too expansive an ambition in the current state of the technology, at least for local LLMs. However, if there is some way to create a prototype or early model, I'd be delighted to try. After exhaustive hours of reading online, I am unsure which LLM to use, let alone the process of using a custom corpus. Most of the "teach yourself" materials relate to OpenAI, whose services I would rather avoid so that my AI stays local. Suggestions from those of you who are far more aware and active in the field are greatly appreciated: which base LLM to focus on, what systems would be most appropriate, and so forth. I've been experimenting with Nous Hermes from GPT4All, but I've really hit a wall in wondering whether it is even the ideal shell to use or how to tailor it to what I'm looking for. I apologize if this is in the wrong area; there doesn't seem to be an "I'm lost, help me" forum out there these days. I am woefully ignorant but delighted to learn; at this point I just lack a specific direction to focus my attempts.
2023-06-25T17:07:51
https://www.reddit.com/r/LocalLLaMA/comments/14irejw/what_llm_to_use_for_custom_data_andor_corpus/
1787Project
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14irejw
false
null
t3_14irejw
/r/LocalLLaMA/comments/14irejw/what_llm_to_use_for_custom_data_andor_corpus/
false
false
self
1
null
Any open source LLMs for grammar checking & proofreading?
10
Wondering if anyone knows of a good model for grammar checking & proofreading that can be run locally. Basically an open source version of Grammarly?
2023-06-25T17:37:08
https://www.reddit.com/r/LocalLLaMA/comments/14is3br/any_open_sourced_llms_for_checking_grammerly/
chucks-wagon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14is3br
false
null
t3_14is3br
/r/LocalLLaMA/comments/14is3br/any_open_sourced_llms_for_checking_grammerly/
false
false
self
10
null
Question about Training Data for a LLaMA Model
4
Ok, so I would like to use oobabooga to train a LoRA on an already fine-tuned model, and more specifically I would like to train a character into it. So I basically have two questions. First, can I train a LoRA on a fine-tuned model? Second, can someone give me an example of how I would train a character? My idea was to do it similar to prompting, but I'm not sure if this works. Example: Joker: Muhahahaha you gone die Batman. Batman: Never, you will be defeated by me. Joker: ..... etc. I was wondering if a full conversation like this example would be possible as training data. In this example the user would be the Joker and the model would answer as Batman. I know this already works if I give the model this example in a prompt, but I would really like to make a LoRA for this so I can free up space in the prompt for other information. (A sketch of one possible data layout is included after this post.)
2023-06-25T18:07:07
https://www.reddit.com/r/LocalLLaMA/comments/14istio/question_about_training_data_for_llama_model/
GuentherDonner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14istio
false
null
t3_14istio
/r/LocalLLaMA/comments/14istio/question_about_training_data_for_llama_model/
false
false
self
4
null
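For the question above, one common approach (though not the only one) is to lay the conversations out as raw text in the same speaker template you plan to prompt with at inference time, so the LoRA learns to continue the character's turns. A minimal sketch of building such a raw-text training file; the file name, separator, and example lines are illustrative assumptions, not a fixed requirement of oobabooga's trainer:

```python
# Build a raw-text LoRA dataset from scripted conversations.
# Speaker names, lines, and file paths here are placeholders for illustration.
conversations = [
    [
        ("Joker", "Muhahahaha, you're going to die, Batman."),
        ("Batman", "Never. You will be the one who is defeated."),
        ("Joker", "We'll see about that..."),
    ],
    # ... many more conversations ...
]

with open("batman_lora_dataset.txt", "w", encoding="utf-8") as f:
    for turns in conversations:
        for speaker, line in turns:
            f.write(f"{speaker}: {line}\n")
        f.write("\n")  # a blank line between conversations acts as a separator
```

At inference time you would then prompt with the same `Joker: ... / Batman:` layout, so the fine-tuned behavior lines up with what the LoRA saw during training.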
A new open source language model claims to have surpassed GPT-4. This needs to be fact-checked.
1
[deleted]
2023-06-25T18:11:59
[deleted]
1970-01-01T00:00:00
0
{}
14isy40
false
null
t3_14isy40
/r/LocalLLaMA/comments/14isy40/a_new_open_source_language_model_claims_to_have/
false
false
default
1
null
A new open-source language model claims to have surpassed GPT-4 right now. This needs to be fact-checked
132
2023-06-25T18:13:58
https://i.redd.it/84cnkr7jg78b1.png
XVll-L
i.redd.it
1970-01-01T00:00:00
0
{}
14iszrf
false
null
t3_14iszrf
/r/LocalLLaMA/comments/14iszrf/a_new_opensource_language_model_claims_to_have/
false
false
https://b.thumbs.redditm…pDPpvf_f6IYY.jpg
132
{'enabled': True, 'images': [{'id': 'mCqBth8tGqA5z7u6o1wW-wvSIKIVQ5uEUWbIOZwTkKs', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/84cnkr7jg78b1.png?width=108&crop=smart&auto=webp&s=0c51ac927e78c4075e4d2c8e29de4053525397ad', 'width': 108}, {'height': 150, 'url': 'https://preview.redd.it/84cnkr7jg78b1.png?width=216&crop=smart&auto=webp&s=9195491eb80b00b268fa9b61aa9392d827a59da0', 'width': 216}, {'height': 223, 'url': 'https://preview.redd.it/84cnkr7jg78b1.png?width=320&crop=smart&auto=webp&s=f92d9a4d428835acb07c7ef238e9318876e1e6d9', 'width': 320}, {'height': 447, 'url': 'https://preview.redd.it/84cnkr7jg78b1.png?width=640&crop=smart&auto=webp&s=a66992ef289d174fdb1ef845d3e800251b162ecf', 'width': 640}, {'height': 670, 'url': 'https://preview.redd.it/84cnkr7jg78b1.png?width=960&crop=smart&auto=webp&s=8a63af00a4fbd7d7fdb4d206041506ff050605ac', 'width': 960}, {'height': 754, 'url': 'https://preview.redd.it/84cnkr7jg78b1.png?width=1080&crop=smart&auto=webp&s=91444f9e232d25dcb0fd94ef78ae656a358d7dd1', 'width': 1080}], 'source': {'height': 791, 'url': 'https://preview.redd.it/84cnkr7jg78b1.png?auto=webp&s=8a8b3de6ba4ab1bd03e03b1850ecc089315fc5af', 'width': 1132}, 'variants': {}}]}
Bigger isn't always better: Smaller LLMs and quants can deliver better outputs
33
I wanted to share an interesting observation I've made recently regarding the size of language models and quantization formats. While I used to believe that bigger models and quants are always better, my evaluations have shown otherwise. Contrary to popular belief, larger language models are not always superior. Through extensive experiments comparing different sizes and quants, I found that smaller models/quants can often deliver better outputs. The analogy I like to use is that the smartest person in the room isn't always the most eloquent or effective communicator, or the most fun person to talk to. In my evaluations, I compared various 33B and 65B models and their quants, by chatting for hours with them using the same script and deterministic settings to remove randomness. Here are the models and quants I compared in detail - these are some of the very best models (IMHO, after much testing and comparing, *the* best) and since they're available in multiple sizes and various quants, it's possible to compare their different versions directly: - TheBloke_airoboros-33B-gpt4-1.2-GGML/airoboros-33b-gpt4-1.2.ggmlv3.q3_K_M.bin - TheBloke_airoboros-65B-gpt4-1.2-GGML/airoboros-65B-gpt4-1.2.ggmlv3.q3_K_M.bin - TheBloke_airoboros-65B-gpt4-1.2-GGML/airoboros-65B-gpt4-1.2.ggmlv3.q4_K_M.bin - TheBloke_airoboros-65B-gpt4-1.2-GGML/airoboros-65B-gpt4-1.2.ggmlv3.q5_K_M.bin - TheBloke_guanaco-33B-GGML/guanaco-33B.ggmlv3.q3_K_M.bin - TheBloke_guanaco-33B-GGML/guanaco-33B.ggmlv3.q4_K_M.bin - TheBloke_guanaco-33B-GGML/guanaco-33B.ggmlv3.q5_K_M.bin - TheBloke_guanaco-65B-GGML/guanaco-65B.ggmlv3.q3_K_M.bin - TheBloke_guanaco-65B-GGML/guanaco-65B.ggmlv3.q4_0.bin **Observation 1:** Different quantization formats produce very different responses even when applied to the same model and prompt. Each quant I tested felt like a unique model in its own right. **Observation 2:** In my tests, both Airoboros and Guanaco 33B models with the q3_K_M quant outperformed even their larger model and quant counterparts. These findings were surprising to me, highlighting the variability in outputs between different quants and the effectiveness of smaller models/quants. It remains unclear whether this variability is due to inherent randomness caused by different model sizes and quantization in general, or possibly issues with these larger quants I tested. However, the key takeaway is that blindly opting for the largest model/quant isn't always the best approach. I recommend comparing different sizes/quants of your preferred model to determine if a smaller version can actually produce better results. Further testing with different models and quants is needed, and I encourage others to conduct their own evaluations. What are your thoughts and experiences on this matter? Have you, too, encountered instances where smaller models or quants outperformed their larger counterparts? Let's discuss and share our insights! **TL;DR:** My evaluations have shown that smaller LLMs and quants can deliver better outputs when chatting with the AI. While bigger models may be smarter, the smartest person isn't always the most eloquent. Evaluate models yourself by comparing different sizes/quants rather than assuming that bigger is always better! **UPDATE 2023-06-27:** So u/Evening_Ad6637 taught me that Mirostat sampling isn't as deterministic as I thought, and might actually have impacted the bigger models negatively. I'm now in the process of redoing my tests with a truly deterministic preset (temperature 0, top_p 0, top_k 1), which takes a long time. 
However, it's already become clear to me that the quantization differences persist, and **bigger still isn't always better**. That could be attributed to randomness, though, as even with a fully deterministic preset there's still the difference between models and even quants that affects generations, and by changing the prompt just slightly, the outcome is changed greatly.
2023-06-25T20:16:12
https://www.reddit.com/r/LocalLLaMA/comments/14iw25d/bigger_isnt_always_better_smaller_llms_and_quants/
WolframRavenwolf
self.LocalLLaMA
2023-06-27T20:10:28
0
{}
14iw25d
false
null
t3_14iw25d
/r/LocalLLaMA/comments/14iw25d/bigger_isnt_always_better_smaller_llms_and_quants/
false
false
self
33
null
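When comparing quants like this, it helps to take the sampler out of the equation entirely and decode greedily. A minimal sketch using llama-cpp-python with the deterministic-style settings mentioned in the update (temperature 0, top_p 0, top_k 1); the model paths are placeholders, and exact reproducibility can still depend on the backend and thread count:

```python
from llama_cpp import Llama

PROMPT = "Explain, in one paragraph, why a larger model is not always better."

# Placeholder paths: swap in the quants you want to compare.
MODEL_PATHS = [
    "models/airoboros-33b-gpt4-1.2.ggmlv3.q3_K_M.bin",
    "models/airoboros-65B-gpt4-1.2.ggmlv3.q4_K_M.bin",
]

for path in MODEL_PATHS:
    llm = Llama(model_path=path, n_ctx=2048, seed=1, verbose=False)
    out = llm(PROMPT, max_tokens=256, temperature=0.0, top_p=0.0, top_k=1)
    print(f"=== {path} ===")
    print(out["choices"][0]["text"].strip())
```

Running the same prompt across quants this way surfaces the quantization differences themselves rather than sampler randomness.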
Any completely unrestricted open source local models?
4
Hi, are there any completely unrestricted open source local models that will not, e.g., answer "Consult a qualified electronic repair technician" when I ask for help diagnosing a faulty audio amp, and that won't just give very generic answers like "check your cabling", etc.?
2023-06-25T20:23:37
https://www.reddit.com/r/LocalLLaMA/comments/14iw8vi/any_completely_unrestricted_open_source_local/
Afraid-Community5725
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14iw8vi
false
null
t3_14iw8vi
/r/LocalLLaMA/comments/14iw8vi/any_completely_unrestricted_open_source_local/
false
false
self
4
null
Enabling CuBlas on (Text Generation Web UI)
2
[removed]
2023-06-25T20:30:53
https://www.reddit.com/r/LocalLLaMA/comments/14iwfk7/enabling_cublas_on_text_generation_web_ui/
vic8760
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14iwfk7
false
null
t3_14iwfk7
/r/LocalLLaMA/comments/14iwfk7/enabling_cublas_on_text_generation_web_ui/
false
false
default
2
null
Can ooba booga run Falcon 7B?
4
[deleted]
2023-06-25T21:10:27
[deleted]
1970-01-01T00:00:00
0
{}
14ixfim
false
null
t3_14ixfim
/r/LocalLLaMA/comments/14ixfim/can_ooba_booga_run_falcon_7b/
false
false
default
4
null
Is anyone doing always-on voice to text with a local llama at home?
26
Not sure if this is the right sub for the question, as it overlaps with /r/homeautomation and /r/homeassistant. One thing I've been wanting to do is create an always-on voice assistant that will listen, transcribe audio, and pass it to an LLM (via some processing/routing), such that I can ask it questions and it will respond accordingly. I know this will take some orchestration and some creative hardware, but I don't see any single piece of it being too tricky. So, curious if anyone is running a 24/7 assistant with always-on audio? Or failing that, wake-word-based audio? (A rough sketch of such a loop is included after this post.)
2023-06-25T21:16:49
https://www.reddit.com/r/LocalLLaMA/comments/14ixlc4/is_anyone_doing_alwayson_voice_to_text_with_a/
tronathan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14ixlc4
false
null
t3_14ixlc4
/r/LocalLLaMA/comments/14ixlc4/is_anyone_doing_alwayson_voice_to_text_with_a/
false
false
self
26
null
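Not a finished 24/7 setup, but the core loop is small: record a chunk of audio, transcribe it locally with Whisper, and forward the text to whatever local LLM API you run. A rough sketch assuming the `sounddevice` and `openai-whisper` packages plus a Kobold-style local endpoint; the endpoint URL and payload fields are assumptions that depend on your backend:

```python
import requests
import sounddevice as sd
import whisper

SAMPLE_RATE = 16000          # Whisper expects 16 kHz mono float32 audio
CHUNK_SECONDS = 5
API_URL = "http://127.0.0.1:5001/api/v1/generate"  # assumed local LLM endpoint

stt = whisper.load_model("base.en")

def record_chunk():
    audio = sd.rec(int(CHUNK_SECONDS * SAMPLE_RATE),
                   samplerate=SAMPLE_RATE, channels=1, dtype="float32")
    sd.wait()
    return audio.flatten()

def ask_llm(text: str) -> str:
    payload = {"prompt": f"USER: {text}\nASSISTANT:", "max_length": 200}
    resp = requests.post(API_URL, json=payload, timeout=120)
    return resp.json()["results"][0]["text"]

while True:
    heard = stt.transcribe(record_chunk(), fp16=False)["text"].strip()
    if heard:
        print("You:", heard)
        print("Assistant:", ask_llm(heard))
```

A real deployment would add voice-activity detection or a wake word (e.g. openWakeWord or Porcupine) so the LLM isn't queried on background noise.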
When did this happen? re: Oobabooga, AutoGPTQ vs GPTQ-for-Llama
41
I updated my local install of Ooba a few days ago and saw that the model loading options had changed: there are now several methods of loading models. I continued using GPTQ-for-Llama, because I'm pretty sure that's what it was using to load my favorite quantized models (mostly from TheBloke), and things ran pretty much as normal, except that I did have to edit a couple of references to the training tab in server.py to avoid some crashing that was going on after the update. On a whim, today I removed the pound signs I had thrown in to comment out those sections, updated again, and everything seems to be working. None of that is the point. At any rate, when I went to load up a model (I don't think it was one I had tried yet, so no saved settings), I noticed after the fact that it had loaded with AutoGPTQ instead of GPTQ-for-Llama. Since it didn't error out, which is something I had come to expect when trying to load models with anything other than GPTQ-for-Llama, I switched over to the text generation tab and gave it a whirl. My usual intro prompt for a new model is to ask for a detailed list of female-fronted rock and metal bands. I immediately noticed that it was generating ridiculously fast. I went to the parameters tab to increase the new-token size (this lets me get a more accurate measure of generation speed, because with short replies I lose a little time in the delay between hitting enter and a reply starting). This 13B model was generating around 11 tokens/s. For reference, I'm used to 13B models generating at 2 T/s and 7B models at 4 T/s. So I loaded up a 7B model and it was generating at 17 T/s! I switched back to a 13B model (ausboss_WizardLM-13B-Uncensored-4bit-128g this time) and am getting 13-14 T/s. Switching back to GPTQ-for-Llama to load the same model, I'm back to the 2 T/s I've been getting for quite a while. So when did this happen? Why have I been struggling to choose between less coherent but zippier answers from 7B and better but slower answers from 13B, when this option was apparently added that delivers blistering speeds with the same models? I tried searching this sub, and the Ooba sub has gone dark, so I don't see much info about it. edit: Using a 3060 12GB on a system with an i7 and 32GB RAM, by the way. And on the advice of the first two responders, I just installed and tried out ExLlama, and it's even faster, getting 19-20 T/s on that same 13B model. This is insane. I was losing interest in trying out new models because they were all so slow. edit2: Sorry, TheBloke, if you come to this thread because of the tag that was in the OP. Removed.
2023-06-25T22:13:32
https://www.reddit.com/r/LocalLLaMA/comments/14iz0p9/when_did_this_happen_re_oobabooga_atuogptq_vs/
DeylanQuel
self.LocalLLaMA
2023-06-26T07:22:08
0
{}
14iz0p9
false
null
t3_14iz0p9
/r/LocalLLaMA/comments/14iz0p9/when_did_this_happen_re_oobabooga_atuogptq_vs/
false
false
self
41
null
Python error while trying to convert PyTorch to GGML
1
[removed]
2023-06-25T22:27:55
https://www.reddit.com/r/LocalLLaMA/comments/14izdak/python_error_while_trying_to_convert_pytorch_to/
chocolatebanana136
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14izdak
false
null
t3_14izdak
/r/LocalLLaMA/comments/14izdak/python_error_while_trying_to_convert_pytorch_to/
false
false
default
1
null
Are there Python bindings for koboldcpp, or do you have to use its HTTP API?
3
Trying to play around with MPT-30B, and it seems like koboldcpp may be the only way to use it with GPU acceleration on my system. It'd be sweet if I could use it directly as a Python library, like llama-cpp-python and ctransformers. Is that a possibility right now? (A minimal HTTP-API wrapper sketch is included after this post.)
2023-06-25T23:17:49
https://www.reddit.com/r/LocalLLaMA/comments/14j0ki4/are_there_python_bindings_for_koboldcpp_or_do_you/
E_Snap
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14j0ki4
false
null
t3_14j0ki4
/r/LocalLLaMA/comments/14j0ki4/are_there_python_bindings_for_koboldcpp_or_do_you/
false
false
self
3
null
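As far as I know there isn't an official pip-installable binding for koboldcpp, but its Kobold-compatible HTTP API is trivial to wrap from Python. A small sketch with `requests`; the default port and the exact set of accepted fields are assumptions that can differ between versions, so check the API docs served by your build:

```python
import requests

KOBOLDCPP_URL = "http://127.0.0.1:5001/api/v1/generate"  # assumed default port

def generate(prompt: str, max_length: int = 200, temperature: float = 0.7) -> str:
    """Call a running koboldcpp instance and return the generated text."""
    payload = {
        "prompt": prompt,
        "max_length": max_length,
        "temperature": temperature,
    }
    resp = requests.post(KOBOLDCPP_URL, json=payload, timeout=300)
    resp.raise_for_status()
    return resp.json()["results"][0]["text"]

if __name__ == "__main__":
    print(generate("Q: What is MPT-30B good at?\nA:"))
```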
How can I train a model on coherent conversations rather than disjointed Q&A pairs?
6
So far in LLMs I've only seen two formats for fine-tuning data: dumping in an entire text (like a book or blog post), and formatted Q&A pairs. But I want to train a model on entire, coherent conversations where earlier responses are linked with later ones, not a bunch of discrete, disjointed Q&A pairs. How do I do this, and how should the JSON be formatted? I've never seen this done. I am trying to train Falcon, if that matters. (One possible layout is sketched after this post.)
2023-06-26T00:57:02
https://www.reddit.com/r/LocalLLaMA/comments/14j2rc1/how_can_i_train_a_model_on_coherent_conversations/
cold-depths
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14j2rc1
false
null
t3_14j2rc1
/r/LocalLLaMA/comments/14j2rc1/how_can_i_train_a_model_on_coherent_conversations/
false
false
self
6
null
New to this - What are the privacy concerns of a Local LLM? Has anyone verified that they are actually private?
1
[removed]
2023-06-26T01:15:08
https://www.reddit.com/r/LocalLLaMA/comments/14j356r/new_to_this_what_are_the_privacy_concerns_of_a/
Ok-Training-7587
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14j356r
false
null
t3_14j356r
/r/LocalLLaMA/comments/14j356r/new_to_this_what_are_the_privacy_concerns_of_a/
false
false
default
1
null
Where can I acquire GPTQ 4bit model files for the base Llama model?
0
/u/the-bloke has GGML files [here](https://huggingface.co/TheBloke/LLaMa-30B-GGML), and the unquantized files are also available on Hugging Face. So why not the GPTQ version?
2023-06-26T01:28:09
https://www.reddit.com/r/LocalLLaMA/comments/14j3f0m/where_can_i_acquire_gptq_4bit_model_files_for_the/
Excessive_Etcetra
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14j3f0m
false
null
t3_14j3f0m
/r/LocalLLaMA/comments/14j3f0m/where_can_i_acquire_gptq_4bit_model_files_for_the/
false
false
self
0
{'enabled': False, 'images': [{'id': 'fdo3rIT_mabNidYQeQ26vFhCw8Q-6_bhEmDMQ21MLZs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tnujNJ-bwuhodlGbCBQVYIhd54FO-eJWZA4wFpS1Nd8.jpg?width=108&crop=smart&auto=webp&s=a83378a4a8b741d3a8cb08139b0d86a170e91135', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tnujNJ-bwuhodlGbCBQVYIhd54FO-eJWZA4wFpS1Nd8.jpg?width=216&crop=smart&auto=webp&s=5d74df91bbbe181c7ddc0d6846c492f1b79adf47', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tnujNJ-bwuhodlGbCBQVYIhd54FO-eJWZA4wFpS1Nd8.jpg?width=320&crop=smart&auto=webp&s=532f437224bcb0096df8067fd555f9763ee0f851', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/tnujNJ-bwuhodlGbCBQVYIhd54FO-eJWZA4wFpS1Nd8.jpg?width=640&crop=smart&auto=webp&s=3654312db8d50f9288999562cac7283fd096ed31', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/tnujNJ-bwuhodlGbCBQVYIhd54FO-eJWZA4wFpS1Nd8.jpg?width=960&crop=smart&auto=webp&s=ea1609fdd7e3998fe9dd5a1a3b4ed1323a843fb5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/tnujNJ-bwuhodlGbCBQVYIhd54FO-eJWZA4wFpS1Nd8.jpg?width=1080&crop=smart&auto=webp&s=b68afd62f59e17baf82a16721a2d43d566d00e38', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/tnujNJ-bwuhodlGbCBQVYIhd54FO-eJWZA4wFpS1Nd8.jpg?auto=webp&s=5db76e06715b059fa2ef690c87e1a0322e532a3f', 'width': 1200}, 'variants': {}}]}
The new Orca-mini is popping off.
70
There's recently been a new model released using the Orca training practices from Microsoft Research. I paired Orca-mini 7B against WizardLM 7B V1.0 Uncensored. Orca massively destroys WizardLM on some algebra questions, logic puzzles, and coding, such as solving for x in 3x+1=10, the weight test, and writing Python code to calculate the nth Fibonacci number; there were 12 questions in total. Orca got all of those correct, while WizardLM only got the Python code right, and even that was relatively worse than Orca's. One thing WizardLM is better at, though, is informational prompts. I asked each model to explain things such as what Node.js is, armageddon in chess, what Lichess is, and more (I didn't count, but it's about 10 questions). WizardLM beat Orca by about 1 or 2 questions, which is close (I believe WizardLM only got 1 wrong, but I don't remember). So, I was very surprised by how Orca-mini did against WizardLM. I was not able to test the 13Bs, as I only have 8GB of RAM (sadge), but I'd expect similar result ratios. I'd appreciate it if somebody would test the 13Bs. The purpose of this post is that I would like to discuss this new model. What do you guys think?
2023-06-26T01:40:43
https://www.reddit.com/r/LocalLLaMA/comments/14j3ofn/the_new_orcamini_is_popping_off/
bot-333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14j3ofn
false
null
t3_14j3ofn
/r/LocalLLaMA/comments/14j3ofn/the_new_orcamini_is_popping_off/
false
false
self
70
null
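For reference, here is a hedged sketch of what "correct" would look like for two of the test questions mentioned in the post (these are reference solutions, not the models' actual outputs): solving 3x + 1 = 10 gives x = 3, and the nth Fibonacci number can be computed iteratively.

```python
def fib(n: int) -> int:
    """Return the nth Fibonacci number, with fib(0) = 0 and fib(1) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Quick checks for the algebra and coding questions from the post.
assert (10 - 1) / 3 == 3.0                                     # 3x + 1 = 10  ->  x = 3
assert [fib(i) for i in range(8)] == [0, 1, 1, 2, 3, 5, 8, 13]
```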
6000+ tokens context with ExLlama
199
Now possible in text-generation-webui after this PR: https://github.com/oobabooga/text-generation-webui/pull/2875 I didn't do anything other than exposing the `compress_pos_emb` parameter implemented by turboderp [here](https://github.com/turboderp/exllama/commit/d05f5d01a60af98b09b243290607e5e542551ef5), which in turn is based on kaiokendev's recent discovery: https://kaiokendev.github.io/til#extending-context-to-8k ### How to use it 1) Open the Model tab, set the loader as ExLlama or ExLlama_HF. 2) Set `max_seq_len` to a number greater than 2048. The length that you will be able to reach will depend on the model size and your GPU memory. 3) Set `compress_pos_emb` to max_seq_len / 2048. For instance, use 2 for max_seq_len = 4096, or 4 for max_seq_len = 8192. 4) Select the model that you want to load. 5) Set `truncation_length` accordingly in the Parameters tab. You can set a higher default for this parameter by copying settings-template.yaml to settings.yaml in your text-generation-webui folder, and editing the values in settings.yaml. Those two new parameters can also be used from the command-line. For instance: `python server.py --max_seq_len 4096 --compress_pos_emb 2`.
2023-06-26T02:23:58
https://www.reddit.com/r/LocalLLaMA/comments/14j4l7h/6000_tokens_context_with_exllama/
oobabooga4
self.LocalLLaMA
2023-06-26T02:27:06
0
{}
14j4l7h
false
null
t3_14j4l7h
/r/LocalLLaMA/comments/14j4l7h/6000_tokens_context_with_exllama/
false
false
self
199
{'enabled': False, 'images': [{'id': 'aiqOrWf-Zr4uMY6fBN3V7SnCS_ZJKqFnBXihscTwgow', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1lV0nwwN_sJIbg_gWh0UBWw2FA78j2MGGzuk9A3p35g.jpg?width=108&crop=smart&auto=webp&s=ff3c3c1c676466176976b6e4544c23cec1fc5f66', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1lV0nwwN_sJIbg_gWh0UBWw2FA78j2MGGzuk9A3p35g.jpg?width=216&crop=smart&auto=webp&s=46117aa18152d1fee3e5e1ac378d4f91235a8011', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1lV0nwwN_sJIbg_gWh0UBWw2FA78j2MGGzuk9A3p35g.jpg?width=320&crop=smart&auto=webp&s=38ba1c4c1ff9535f7ed775f07116f8e8daf016c4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1lV0nwwN_sJIbg_gWh0UBWw2FA78j2MGGzuk9A3p35g.jpg?width=640&crop=smart&auto=webp&s=aa880fca7dfdfc8dddbd3c17b0f8b50dbd140848', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1lV0nwwN_sJIbg_gWh0UBWw2FA78j2MGGzuk9A3p35g.jpg?width=960&crop=smart&auto=webp&s=d9980ee6497cd1fd65ece3eb29bd5d44c420de87', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1lV0nwwN_sJIbg_gWh0UBWw2FA78j2MGGzuk9A3p35g.jpg?width=1080&crop=smart&auto=webp&s=f6c803944341420c0e4b241dd352c514455f3a61', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1lV0nwwN_sJIbg_gWh0UBWw2FA78j2MGGzuk9A3p35g.jpg?auto=webp&s=21782292ae4e6bbe2f1c922714a198ddccd6aca7', 'width': 1200}, 'variants': {}}]}
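For intuition on what `compress_pos_emb` does: kaiokendev's trick scales the position indices that feed the rotary embedding by 1/compress, so a 4096-token sequence gets squeezed into the 0-2048 positional range the model was trained on. A rough NumPy sketch of that idea; this is a conceptual illustration only, not ExLlama's actual implementation:

```python
import numpy as np

def rope_angles(seq_len: int, head_dim: int, compress: float = 1.0, base: float = 10000.0):
    """Rotary-embedding angles with linearly compressed position indices."""
    inv_freq = 1.0 / (base ** (np.arange(0, head_dim, 2) / head_dim))
    positions = np.arange(seq_len) / compress   # the whole trick: shrink the positions
    return np.outer(positions, inv_freq)        # shape: (seq_len, head_dim // 2)

# With compress=2, token 4095 is embedded roughly where token ~2047 sat during training.
plain = rope_angles(4096, 128)
compressed = rope_angles(4096, 128, compress=2.0)
assert np.allclose(compressed[4095], plain[4095] / 2.0)
```

kaiokendev's write-up pairs this with fine-tuning so the model adapts to the finer-grained positions; without that, quality at long context tends to degrade sooner.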
How close are we to AI understanding context?
0
I was told that our current generation of AI can't understand context and never will, so I thought I would ask here: how close are we to AI understanding context?
2023-06-26T02:49:57
https://www.reddit.com/r/LocalLLaMA/comments/14j54f7/how_close_are_we_to_ai_understanding_context/
TheArchivist314
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14j54f7
false
null
t3_14j54f7
/r/LocalLLaMA/comments/14j54f7/how_close_are_we_to_ai_understanding_context/
false
false
self
0
null