title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
RTX 3050 has a far better GPU COMPUTE score than the 4060 ... What does it mean? | 1 | Would a 3050 be better than a 4060 at running LLM models? | 2023-07-28T08:59:25 | https://www.reddit.com/r/LocalLLaMA/comments/15bs00p/rtx_3050_has_a_far_better_gpu_compute_score_than/ | alexthai7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15bs00p | false | null | t3_15bs00p | /r/LocalLLaMA/comments/15bs00p/rtx_3050_has_a_far_better_gpu_compute_score_than/ | false | false | self | 1 | null |
How to increase overall length of text by having it write in chapters? | 1 | On consumer hardware, how can people generate longer stories?
Usually, the user provides a prompt, then the AI provides an output.
How can you have the AI write an extended story? For example, the first output would be The Beginning, the next The Middle, the last The End.
How can you have the AI keep the whole story coherent?
How many words could you expect to produce, and what settings would you use? | 2023-07-28T11:00:57 | https://www.reddit.com/r/LocalLLaMA/comments/15bu8gu/how_to_increase_overall_length_of_text_by_having/ | innocuousAzureus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15bu8gu | false | null | t3_15bu8gu | /r/LocalLLaMA/comments/15bu8gu/how_to_increase_overall_length_of_text_by_having/ | false | false | self | 1 | null |
Can you detect an EVIL superintelligence? MALEVOLENT.AI game | 1 |
Hello everyone, I just finished a little project I was working on. I call it malevolent.ai. It is a game where people can talk to different AIs and try to determine if they are talking to an evil AI or not. I thought about publishing this as an actual website, but it would cost too much money to keep it online and running. I think this is a good idea and could really be something big. I think it also shows how hard it would be to actually detect an evil superintelligent AI.
Check out the git repository.
[https://github.com/tentacool9/malevolent.ai](https://github.com/tentacool9/malevolent.ai) | 2023-07-28T11:18:27 | https://www.reddit.com/r/LocalLLaMA/comments/15bulni/can_you_detect_an_evil_superintellegence/ | roughman99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15bulni | false | null | t3_15bulni | /r/LocalLLaMA/comments/15bulni/can_you_detect_an_evil_superintellegence/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Z7-BVR0Dph-ckXY_kjhTg_oRsltQNfAskHpNx4Quf_Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/niQSj283dpNxbSrIq66fZ4aHb_MOnoR2ulAcHpZaT4o.jpg?width=108&crop=smart&auto=webp&s=91b3653fadf460f8dc00011763759b06d5e83241', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/niQSj283dpNxbSrIq66fZ4aHb_MOnoR2ulAcHpZaT4o.jpg?width=216&crop=smart&auto=webp&s=1a030422a88029394965908ce26798817ef0b9cf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/niQSj283dpNxbSrIq66fZ4aHb_MOnoR2ulAcHpZaT4o.jpg?width=320&crop=smart&auto=webp&s=a3fddb01d7579abf7cb60e1c12a1a0d388a82b00', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/niQSj283dpNxbSrIq66fZ4aHb_MOnoR2ulAcHpZaT4o.jpg?width=640&crop=smart&auto=webp&s=f862b60105d67777ebce4f15f0c94860127b5f1e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/niQSj283dpNxbSrIq66fZ4aHb_MOnoR2ulAcHpZaT4o.jpg?width=960&crop=smart&auto=webp&s=eb36c3c4b15fe5be59dc814720ad31470dc994c0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/niQSj283dpNxbSrIq66fZ4aHb_MOnoR2ulAcHpZaT4o.jpg?width=1080&crop=smart&auto=webp&s=3588e791b0de9c628143667ad37bfa764009b164', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/niQSj283dpNxbSrIq66fZ4aHb_MOnoR2ulAcHpZaT4o.jpg?auto=webp&s=9ad4bef7972e53b8229addbe9e1b1560b570a686', 'width': 1200}, 'variants': {}}]} |
Llama 2 based models | 1 | I've been deploying multiple open-source models on AWS and doing inference on them. I've been exploring LLMs for the past 2 months, and one thing I fail to understand is how everyone is getting multi-line outputs from these LLMs, while I only manage to get 2-3 words of output from the LLM.
I've used Llama2-7b, 13b, Falcon 7b, and 40b instruct models, and I'm still facing the same issue.
I'm deploying these models on an ml.g5.12xlarge endpoint instance. | 2023-07-28T11:23:45 | https://www.reddit.com/r/LocalLLaMA/comments/15bupka/llama_2_based_models/ | toximaniac | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15bupka | false | null | t3_15bupka | /r/LocalLLaMA/comments/15bupka/llama_2_based_models/ | false | false | self | 1 | null |
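If the endpoint uses the Hugging Face text-generation payload format (an assumption — check which container the endpoint runs), the short replies may just be the default max_new_tokens. A hedged sketch with a placeholder endpoint name:
```
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

payload = {
    "inputs": "Explain the difference between Llama 2 7B and 13B.",
    "parameters": {
        "max_new_tokens": 512,  # a tiny default here is a common cause of 2-3 word replies
        "temperature": 0.7,
        "do_sample": True,
    },
}

response = runtime.invoke_endpoint(
    EndpointName="my-llama2-endpoint",  # placeholder endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(json.loads(response["Body"].read()))
```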
Can I offload model layers to RAM with exllama? | 1 | I use oobabooga, and when I heard about exllama I was hyped, but after all this time I have not been able to use it.
When I select exllama, the slider to select the amount of layers to offload to RAM disappears. I use 13B models with an 8GB VRAM card, so I have to offload some layers. Is it possible?
I also tried to limit the VRAM use from the launch arguments, but it didn't work. | 2023-07-28T11:44:11 | https://www.reddit.com/r/LocalLLaMA/comments/15bv4sk/can_i_offload_model_layers_to_ram_with_exllama/ | AltruisticMission865 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15bv4sk | false | null | t3_15bv4sk | /r/LocalLLaMA/comments/15bv4sk/can_i_offload_model_layers_to_ram_with_exllama/ | false | false | self | 1 | null |
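As far as I know, ExLlama is GPU-only and has no RAM offload; splitting layers between VRAM and system RAM is what the llama.cpp loaders do. A hedged llama-cpp-python sketch — the model path and layer count are placeholders to tune for an 8GB card:
```
from llama_cpp import Llama

# n_gpu_layers controls how many transformer layers live in VRAM;
# the rest stay in system RAM. Tune it down until the 8GB card stops OOMing.
llm = Llama(
    model_path="./llama-2-13b-chat.ggmlv3.q4_K_M.bin",  # placeholder path
    n_gpu_layers=25,
    n_ctx=2048,
)

out = llm("Q: Why offload layers? A:", max_tokens=128)
print(out["choices"][0]["text"])
```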
The destroyer of fertility rates | 1 | 2023-07-28T12:03:01 | HOLUPREDICTIONS | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15bvj5d | false | null | t3_15bvj5d | /r/LocalLLaMA/comments/15bvj5d/the_destroyer_of_fertility_rates/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'MlGkmQXLprW2sXPAeg7nCYrI_Z3gB8LHwNJ0T1hV4eY', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/vpv8havo3peb1.jpg?width=108&crop=smart&auto=webp&s=382a8446335b2a8ac2ae802f12784ddd017617de', 'width': 108}, {'height': 175, 'url': 'https://preview.redd.it/vpv8havo3peb1.jpg?width=216&crop=smart&auto=webp&s=bba76618a40c80317cb0b94b52f5f9d252f5ed6a', 'width': 216}, {'height': 259, 'url': 'https://preview.redd.it/vpv8havo3peb1.jpg?width=320&crop=smart&auto=webp&s=70a14550e3caf653fc3127e2a4b7a4202f157cb2', 'width': 320}, {'height': 518, 'url': 'https://preview.redd.it/vpv8havo3peb1.jpg?width=640&crop=smart&auto=webp&s=d17e8a793ac0491ab54c0cddd5f3b65e581bb36f', 'width': 640}], 'source': {'height': 624, 'url': 'https://preview.redd.it/vpv8havo3peb1.jpg?auto=webp&s=23defcfa0892dfc2b16a5c5692fb5ba0939820d9', 'width': 770}, 'variants': {}}]} |
|||
How do you point an LLM to a Folder, so that the knowledgebase is always updated by yourself? (TXT or Markdown) | 1 | [removed] | 2023-07-28T12:22:32 | https://www.reddit.com/r/LocalLLaMA/comments/15bvyu1/how_do_you_point_an_llm_to_a_folder_so_that_the/ | BetterProphet5585 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15bvyu1 | false | null | t3_15bvyu1 | /r/LocalLLaMA/comments/15bvyu1/how_do_you_point_an_llm_to_a_folder_so_that_the/ | false | false | self | 1 | null |
Index Error | 1 | I get the following error on multiple Llama2 7B chat models. I can run Llama2_13B chat just fine, but the 7B keeps giving me this error:
```
Traceback (most recent call last):
  File "C:\oobabooga_windows\text-generation-webui\modules\callbacks.py", line 55, in gentask
    ret = self.mfunc(callback=_callback, *args, **self.kwargs)
  File "C:\oobabooga_windows\text-generation-webui\modules\text_generation.py", line 293, in generate_with_callback
    shared.model.generate(**kwargs)
  File "C:\oobabooga_windows\installer_files\env\lib\site-packages\auto_gptq\modeling\_base.py", line 438, in generate
    return self.model.generate(**kwargs)
  File "C:\oobabooga_windows\installer_files\env\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\oobabooga_windows\installer_files\env\lib\site-packages\transformers\generation\utils.py", line 1335, in generate
    and torch.sum(inputs_tensor[:, -1] == generation_config.pad_token_id) > 0
IndexError: index -1 is out of bounds for dimension 1 with size 0
Output generated in 0.29 seconds (0.00 tokens/s, 0 tokens, context 0, seed 910002323)
```
I have to assume I have some kind of setting wrong. Can anyone help? | 2023-07-28T12:59:02 | https://www.reddit.com/r/LocalLLaMA/comments/15bws34/index_error/ | Comprehensive_Life_4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15bws34 | false | null | t3_15bws34 | /r/LocalLLaMA/comments/15bws34/index_error/ | false | false | self | 1 | null |
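For context, that IndexError comes from indexing `inputs_tensor[:, -1]` on a prompt that tokenized to zero tokens, so the prompt reaching generate() was empty. A hedged sanity check outside the webui (the model id is a placeholder) to confirm the instruction template isn't producing an empty string:
```
from transformers import AutoTokenizer

# Placeholder model id; point it at whichever 7B chat checkpoint is failing.
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

prompt = "[INST] Hello, who are you? [/INST]"
ids = tok(prompt, return_tensors="pt").input_ids
print(ids.shape)  # shape[1] == 0 would reproduce the IndexError inside generate()
assert ids.shape[1] > 0, "prompt tokenized to zero tokens - check the instruction template"
```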
An interesting take on the recent expansion of context size in LLMs and the hidden cost of size vs. accuracy | 1 | Working with AI, in Context | 2023-07-28T13:45:25 | https://medium.com/@greg.broadhead/working-with-ai-in-context-958d7936c42e | Khan_Tango | medium.com | 1970-01-01T00:00:00 | 0 | {} | 15bxwxl | false | null | t3_15bxwxl | /r/LocalLLaMA/comments/15bxwxl/an_interesting_take_on_the_recent_expansion_of/ | false | false | default | 1 | null |
Best setup and Settings for a Beginner? | 1 | Hey everyone!
I'm quite new when it comes to local chat models, but I've got "oobabooga/text-generation-webui" installed on my device and tried out "NousResearch_Nous-Hermes-Llama2-13b".
It works, but it feels pretty slow (and becomes slower with each subsequent message). The most I got was about 0.45 tokens per second.
My PC shouldn't be too shabby (AMD Ryzen 7 5800X 8-Core, 3.80GHz, 64GB RAM, NVIDIA GeForce RTX 3060).
I wanted to know if there's a way to improve the speed of the Text generation somehow. Not sure which settings are the best, maybe you can share some with me?
I'd love to use the AI to write a fantasy story with sexual content (you can judge me if you want) by giving the AI some directions from time to time.
​
Any idea? | 2023-07-28T13:49:56 | https://www.reddit.com/r/LocalLLaMA/comments/15by0uk/best_setup_and_settings_for_a_beginner/ | ChrisX930 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15by0uk | false | null | t3_15by0uk | /r/LocalLLaMA/comments/15by0uk/best_setup_and_settings_for_a_beginner/ | false | false | self | 1 | null |
Researchers think they found some universal jailbreak prompts that work on many LLMs | 1 | 2023-07-28T14:02:09 | https://llm-attacks.org | chakalakasp | llm-attacks.org | 1970-01-01T00:00:00 | 0 | {} | 15bybxr | false | null | t3_15bybxr | /r/LocalLLaMA/comments/15bybxr/researchers_think_they_found_some_universal/ | false | false | default | 1 | null |
|
I didn't think this level of censorship has possible | 1 | 2023-07-28T14:11:16 | Imaginary-Being8395 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15bykq4 | false | null | t3_15bykq4 | /r/LocalLLaMA/comments/15bykq4/i_didnt_think_this_level_of_censorship_has/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'LqUJjN2V6GW7Qf7qv8iuN23SKPzthWpVV8yndQRN8sk', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/m1l0regarpeb1.png?width=108&crop=smart&auto=webp&s=8eee822b266877ec5e9121558f5f5e1d43cb88e2', 'width': 108}, {'height': 135, 'url': 'https://preview.redd.it/m1l0regarpeb1.png?width=216&crop=smart&auto=webp&s=a946aefe4058227af5595467bfb778a8c9b01d21', 'width': 216}, {'height': 200, 'url': 'https://preview.redd.it/m1l0regarpeb1.png?width=320&crop=smart&auto=webp&s=0ef2189c936bbeb2ba551b68f7639ad48490280e', 'width': 320}, {'height': 401, 'url': 'https://preview.redd.it/m1l0regarpeb1.png?width=640&crop=smart&auto=webp&s=dab280311e1a942ed496100528139cf4ba8c489a', 'width': 640}], 'source': {'height': 509, 'url': 'https://preview.redd.it/m1l0regarpeb1.png?auto=webp&s=17602ce88357c8a3fddf6be1147b1de2045af56a', 'width': 812}, 'variants': {}}]} |
|||
Training Lora with Llama2-13B loss does not go down | 1 | Hello everyone.
I have prepared my own dataset, which weighs about 30MB, and I'm trying to train a LoRA.
I'm using oobabooga with default settings and loaded dataset as raw file. I'm using **TheBloke/Llama-2-13B-fp16** as model to train.
The problem is that training has been going for about 20 hours and the **loss is still around 8.2-8.3.**
Why is this happening? | 2023-07-28T14:53:20 | https://www.reddit.com/r/LocalLLaMA/comments/15bzoby/training_lora_with_llama213b_loss_does_not_go_down/ | DaniyarQQQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15bzoby | false | null | t3_15bzoby | /r/LocalLLaMA/comments/15bzoby/training_lora_with_llama213b_loss_does_not_go_down/ | false | false | self | 1 | null |
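A loss stuck around 8 on a 32k-token vocabulary is close to random, which suggests the adapter isn't really learning. It may be worth sanity-checking the LoRA setup outside the webui — a hedged PEFT sketch with illustrative (not tuned) hyperparameters; a learning rate around 1e-4 to 3e-4 is the usual starting point:
```
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-13B-fp16", torch_dtype="auto", device_map="auto"
)

lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # Llama attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # should be a small fraction of the 13B base
```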
Hardware upgrade | 1 | 2023-07-28T15:20:06 | https://www.reddit.com/gallery/15c0dpt | Nondzu | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 15c0dpt | false | null | t3_15c0dpt | /r/LocalLLaMA/comments/15c0dpt/hardware_upgrade/ | false | false | 1 | null |
||
LLongMA-2 16k: A Llama 2 16k model | 1 | Releasing LLongMA-2 16k, a suite of Llama-2 models, trained at 16k context length using linear positional interpolation scaling. The model was trained in collaboration with u/emozilla of NousResearch and u/kaiokendev.
The 13b model can be found on huggingface here: [https://huggingface.co/conceptofmind/LLongMA-2-13b-16k](https://huggingface.co/conceptofmind/LLongMA-2-13b-16k)
The 7b model can be found on huggingface here: [https://huggingface.co/conceptofmind/LLongMA-2-7b-16k](https://huggingface.co/conceptofmind/LLongMA-2-7b-16k)
We worked directly with u/kaiokendev, to extend the context length of the Llama-2 13b and 7b models through fine-tuning. The models pass all our evaluations and maintain perplexity at 16k extrapolation surpassing the performance of other recent methodologies.
https://preview.redd.it/b5aqlcml5qeb1.png?width=1134&format=png&auto=webp&s=37febfd0bed5e67ac3ca8a2e70025888031e0865
The model has similar performance to LLaMA 2 under 4k context length, performance scales to 16k, and works out-of-the-box with the new version of transformers (4.31) or with `trust_remote_code` for <= 4.30.
Applying the method to the rotary position embedding requires only slight changes to the model's code by dividing the positional index, t, by a scaling factor.
https://preview.redd.it/4h4b8f1z4qeb1.png?width=4176&format=png&auto=webp&s=831251c5e68ff0656f6ba840c1847c7c2154f0b7
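For readers who want to see what "dividing the positional index t by a scaling factor" looks like concretely, here is a minimal sketch of linearly scaled rotary-embedding frequencies (an illustration, not the exact code from the scaled-rope repo):
```
import torch

def scaled_rope_frequencies(seq_len, head_dim, scale=4.0, base=10000.0):
    # Standard RoPE inverse frequencies.
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    # Linear positional interpolation: divide the position index t by the
    # scaling factor, e.g. 4.0 stretches a 4k-trained model to 16k positions.
    t = torch.arange(seq_len).float() / scale
    freqs = torch.outer(t, inv_freq)
    return freqs.cos(), freqs.sin()

cos, sin = scaled_rope_frequencies(seq_len=16384, head_dim=128, scale=4.0)
```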
The repository containing u/emozilla’s implementation of scaled rotary embeddings can be found here: [https://github.com/jquesnelle/scaled-rope](https://github.com/jquesnelle/scaled-rope)
If you would like to learn more about scaling rotary embeddings, I would strongly recommend reading u/kaiokendev's blog posts on his findings: [https://kaiokendev.github.io/](https://kaiokendev.github.io/)
A PR to add scaled rotary embeddings to huggingface transformers has been added by Joao Gante and merged: [https://github.com/huggingface/transformers/pull/24653](https://github.com/huggingface/transformers/pull/24653)
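With that PR merged, transformers >= 4.31 should accept a rope_scaling setting on Llama models. A hedged usage sketch — the kwargs follow that PR, so double-check against the release you have installed:
```
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "conceptofmind/LLongMA-2-13b-16k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    # Only needed if the checkpoint's config doesn't already carry it:
    rope_scaling={"type": "linear", "factor": 4.0},
)
```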
The model was trained for ~1 billion tokens on Together Compute's Red Pajama dataset. The context length of the examples varies: [https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
The pre-tokenized dataset will be available here for you to use soon: [https://huggingface.co/datasets/conceptofmind/rp-llama-2-7b-tokenized-chunked](https://huggingface.co/datasets/conceptofmind/rp-llama-2-7b-tokenized-chunked)
I would also recommend checking out the phenomenal research by Ofir Press on ALiBi which laid the foundation for many of these scaling techniques: [https://arxiv.org/abs/2108.12409](https://arxiv.org/abs/2108.12409)
It is also worth reviewing the paper, A Length-Extrapolatable Transformer, and xPos technique which also applies scaling to rotary embeddings: [https://arxiv.org/pdf/2212.10554.pdf](https://arxiv.org/pdf/2212.10554.pdf)
We previously trained the first publicly available model with rotary embedding scaling here: [https://twitter.com/EnricoShippole/status/1655599301454594049?s=20](https://twitter.com/EnricoShippole/status/1655599301454594049?s=20)
A Llama-2 70b model trained at 8k will release soon. As well as a suite of Llama-2 models trained at 32k context lengths will be released soon.
You can find out more about the NousResearch organization here: [https://huggingface.co/NousResearch](https://huggingface.co/NousResearch)
The compute for this model release is all thanks to the generous sponsorship by CarperAI, Emad Mostaque, and StabilityAI. This is not an official StabilityAI product.
**Disclaimer:** I am an **independent** researcher with sponsorship from StabilityAI. I do **not** profit in any way from these models. I am **not** trying to promote a startup.
If you have any questions about the data or model be sure to reach out and ask! I will try to respond promptly.
The previous suite of LLongMA 8k model releases can be found here: [https://twitter.com/EnricoShippole/status/1682054848584228866?s=20](https://twitter.com/EnricoShippole/status/1682054848584228866?s=20)
All of the models can be found on Huggingface: [https://huggingface.co/conceptofmind](https://huggingface.co/conceptofmind)
FAQS:
1. Is this the base model? Yes, this is extended training of the Llama-2 13b base model to 16k context length.
2. Why not 32k? Jeff and I are the only two individuals working on this completely for free. **Memory and processing requirements rise quadratically with context length.** Scaling the context length is both very time-consuming and computationally expensive. It is also very costly. We will start training a 32k model in the near future.
3. What about quantization? I have not used any quantization libraries and I am unfamiliar if they are compatible. I am sure the Bloke or another individual will be able to work on that.
4. Can I instruct fine-tune on this model? Yes, you can instruct fine-tune these models. I will be releasing 8k models trained on the Hermes dataset soon.
5. What is the difference between LLongMA and NTK? The LLongMA models use the linear scaling method created by Kaiokendev.
6. What hardware was used to train this model? I used 64 A100s to train these models.
7. Will there be a 70b model? Yes, at 8k.
Testimonials about LLongMA 7b can be seen here: [https://huggingface.co/conceptofmind/LLongMA-2-13b/discussions/2](https://huggingface.co/conceptofmind/LLongMA-2-13b/discussions/2) | 2023-07-28T15:32:16 | https://www.reddit.com/r/LocalLLaMA/comments/15c0pbs/llongma2_16k_a_llama_2_16k_model/ | EnricoShippole | self.LocalLLaMA | 1970-01-01T00:00:00 | 2 | {'gid_2': 2} | 15c0pbs | false | null | t3_15c0pbs | /r/LocalLLaMA/comments/15c0pbs/llongma2_16k_a_llama_2_16k_model/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'UTrFWuaTSGL3VcrwmOeadZJBpghhjlyKIEuCqAexkDU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/piTf6fAplYyDsBUDFPgs6dVjRJlkAkLBfjQHBBKrbgk.jpg?width=108&crop=smart&auto=webp&s=53b41a86ebe64f74a499fbb67cdb359e665b7dbf', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/piTf6fAplYyDsBUDFPgs6dVjRJlkAkLBfjQHBBKrbgk.jpg?width=216&crop=smart&auto=webp&s=20e6e860f2323c8858588a7379abbfdebfe2850a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/piTf6fAplYyDsBUDFPgs6dVjRJlkAkLBfjQHBBKrbgk.jpg?width=320&crop=smart&auto=webp&s=fad34e06ffc1bc6dee89d49de313b84631ff774d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/piTf6fAplYyDsBUDFPgs6dVjRJlkAkLBfjQHBBKrbgk.jpg?width=640&crop=smart&auto=webp&s=fd14ceba1e3a93a3f4343cd8beec0f272c921201', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/piTf6fAplYyDsBUDFPgs6dVjRJlkAkLBfjQHBBKrbgk.jpg?width=960&crop=smart&auto=webp&s=67127fe1acbf8f2f8a1ce6a35a3bda56bce568e8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/piTf6fAplYyDsBUDFPgs6dVjRJlkAkLBfjQHBBKrbgk.jpg?width=1080&crop=smart&auto=webp&s=6a5bf7e4dfb4f894f15f89a63289d50e5110301b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/piTf6fAplYyDsBUDFPgs6dVjRJlkAkLBfjQHBBKrbgk.jpg?auto=webp&s=5adbc6ce567049cb3bbb1c8357892880893e1ced', 'width': 1200}, 'variants': {}}]} |
|
Is it possible to run anything on a GTX 1660 Super? | 1 | I have an NVIDIA GTX 1660 Super card with 6GB VRAM and 32GB of RAM. Are there any small quantized models that I could possibly run locally? | 2023-07-28T16:01:39 | https://www.reddit.com/r/LocalLLaMA/comments/15c1gx0/is_it_possible_to_run_anything_on_gtx_1660_super/ | siam19 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15c1gx0 | false | null | t3_15c1gx0 | /r/LocalLLaMA/comments/15c1gx0/is_it_possible_to_run_anything_on_gtx_1660_super/ | false | false | self | 1 | null |
llama.cpp、vllm or exllma and others? | 1 | [removed] | 2023-07-28T16:14:07 | https://www.reddit.com/r/LocalLLaMA/comments/15c1sn4/llamacppvllm_or_exllma_and_others/ | cool_xixi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15c1sn4 | false | null | t3_15c1sn4 | /r/LocalLLaMA/comments/15c1sn4/llamacppvllm_or_exllma_and_others/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=108&crop=smart&auto=webp&s=b6caea286bbf31bdb473212eb5668f45376977be', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=216&crop=smart&auto=webp&s=ba8933d74dda3c391a7c9a355d2e1cd0054d1c21', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=320&crop=smart&auto=webp&s=93b690f58b739ff61da7a147fc67d6c8842b3a7d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=640&crop=smart&auto=webp&s=a55f55983fcc0b3f5a6d4e0b51f627e1b40ef9d4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=960&crop=smart&auto=webp&s=e56b77b835b76c51a1e12a410b9e908f0255d397', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=1080&crop=smart&auto=webp&s=d06ca9eb5611d109d3ef7935f6de61545e9828da', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?auto=webp&s=0b2a006e16468374b78dd67390927053776e6137', 'width': 1280}, 'variants': {}}]} |
Self-hosted LLama2 in AI Code Assistant : Refact | 1 | 2023-07-28T16:15:39 | https://github.com/smallcloudai/refact | no_doping | github.com | 1970-01-01T00:00:00 | 0 | {} | 15c1u4d | false | null | t3_15c1u4d | /r/LocalLLaMA/comments/15c1u4d/selfhosted_llama2_in_ai_code_assistant_refact/ | false | false | 1 | {'enabled': False, 'images': [{'id': '0VxXGSDkUeuqlx4ZaeVpQgNbrSZkeEBnI4BxeMfydDU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/h6GWMSJxpOFGjmW6LiOjkkKGh3tcrxAHXiVx6pCVfsA.jpg?width=108&crop=smart&auto=webp&s=bc55a8ebc1e0cb3815c5aca119b6d5a5eff3f885', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/h6GWMSJxpOFGjmW6LiOjkkKGh3tcrxAHXiVx6pCVfsA.jpg?width=216&crop=smart&auto=webp&s=b5c3c9a453b0160bd3742bc6c3b77ca6783ee78e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/h6GWMSJxpOFGjmW6LiOjkkKGh3tcrxAHXiVx6pCVfsA.jpg?width=320&crop=smart&auto=webp&s=347b14108e7df2f44aee6978fb71d6562fbcff66', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/h6GWMSJxpOFGjmW6LiOjkkKGh3tcrxAHXiVx6pCVfsA.jpg?width=640&crop=smart&auto=webp&s=32c07b0949c33ede035d87e1903d9aefa772a4b4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/h6GWMSJxpOFGjmW6LiOjkkKGh3tcrxAHXiVx6pCVfsA.jpg?width=960&crop=smart&auto=webp&s=ca594e672bab9de62aed18f7642a2411a968de23', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/h6GWMSJxpOFGjmW6LiOjkkKGh3tcrxAHXiVx6pCVfsA.jpg?width=1080&crop=smart&auto=webp&s=bf5d5841d4b53194eaec33ffa5d39256ac597a16', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/h6GWMSJxpOFGjmW6LiOjkkKGh3tcrxAHXiVx6pCVfsA.jpg?auto=webp&s=c210ad4b20e5bbaae51543089b85aa43abacaddd', 'width': 1200}, 'variants': {}}]} |
||
Llama2 | 1 | [removed] | 2023-07-28T16:15:50 | Upbeat-Ad-2183 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15c1ub9 | false | null | t3_15c1ub9 | /r/LocalLLaMA/comments/15c1ub9/llama2/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'pau6GjTUSpQlbj3h_IyNNg8yxQIuIfwctEnxCGvVQBk', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/lrfu0c1odqeb1.jpg?width=108&crop=smart&auto=webp&s=79c7d74d5f608b2a83bfefe87961782955ae62af', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/lrfu0c1odqeb1.jpg?width=216&crop=smart&auto=webp&s=f796d967cb14656dc8d863049c6048ebee1b3386', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/lrfu0c1odqeb1.jpg?width=320&crop=smart&auto=webp&s=4628c85fd6cb52168bc6369f79fef080029d3f02', 'width': 320}], 'source': {'height': 267, 'url': 'https://preview.redd.it/lrfu0c1odqeb1.jpg?auto=webp&s=1726977f3ac2acf138a77bdce6f1f8d5ae20cd98', 'width': 400}, 'variants': {}}]} |
||
Llama2 | 1 | [removed] | 2023-07-28T16:17:34 | Upbeat-Ad-2183 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15c1vv9 | false | null | t3_15c1vv9 | /r/LocalLLaMA/comments/15c1vv9/llama2/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'EXaoOCjkFhpu5KDFO9w4GNeEP_wO_94tr1xgkOOd6yQ', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/a9vc5y5zdqeb1.jpg?width=108&crop=smart&auto=webp&s=0d4703d6f8f2b6e22a2739f8ec74ff87eb4c1d7a', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/a9vc5y5zdqeb1.jpg?width=216&crop=smart&auto=webp&s=6456c865850e62737d0996a5cea931b392bcbf1d', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/a9vc5y5zdqeb1.jpg?width=320&crop=smart&auto=webp&s=a2d09dc0d2818be9155caae2004a107e25397159', 'width': 320}], 'source': {'height': 267, 'url': 'https://preview.redd.it/a9vc5y5zdqeb1.jpg?auto=webp&s=27d6305b7df851eed253b7b52b64929c7804b82b', 'width': 400}, 'variants': {}}]} |
||
Can I run anything with a Quadro T1000 4GB VRAM? | 1 | I have a P53 and was wondering if I can try running any small (tiny) models using it. Would more RAM help, since I can’t increase the VRAM? I am not very concerned about speed; I just want to tinker around with any model.
Thanks. | 2023-07-28T16:20:06 | https://www.reddit.com/r/LocalLLaMA/comments/15c1xyk/can_i_run_any_thing_with_quadro_t1000_4gb_vram/ | ajollygdfellow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15c1xyk | false | null | t3_15c1xyk | /r/LocalLLaMA/comments/15c1xyk/can_i_run_any_thing_with_quadro_t1000_4gb_vram/ | false | false | self | 1 | null |
Building a PC - Ideas | 1 | I'm building a PC and right now I've got:
- Processor: 7900X (AMD)
- 64 GB RAM DDR5 6000
But the issue is at the GPU level. The RTX 4080 (ZOTAC RTX 4080 Trinity) is 50 USD more than the 7900 XTX (Sapphire Vapor-X). I can't afford the 4090; prices are crazy in my region.
The XTX has more VRAM, in theory more power, and it's a little cheaper.
However, I'll get driver issues and possibly more limitations. Is anyone using AMD successfully? | 2023-07-28T16:21:52 | https://www.reddit.com/r/LocalLLaMA/comments/15c1zm7/building_a_pc_ideias/ | No_One_BR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15c1zm7 | false | null | t3_15c1zm7 | /r/LocalLLaMA/comments/15c1zm7/building_a_pc_ideias/ | false | false | self | 1 | null |
Is LLaMa 2 a Poison Pill? | 1 | I have heard so many complaints about it being overly censored. I tried asking it how to kill a process but it refuses on the premise that "we should not treat software and computer systems with violence".
So I can't help but suspect that this base model was released in order to cripple or destroy the open model ecosystem. What are your thoughts? | 2023-07-28T16:38:00 | https://www.reddit.com/r/LocalLLaMA/comments/15c2eh8/is_llama_2_a_poison_pill/ | pigpag_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15c2eh8 | false | null | t3_15c2eh8 | /r/LocalLLaMA/comments/15c2eh8/is_llama_2_a_poison_pill/ | false | false | self | 1 | null |
Hardware options for LLM | 1 | I'm a newbie to LLMs, just prepping a HW setup for a first run of a local model. As far as I could find, I understood that with 12GB VRAM (RTX 4070) and 64GB DDR5 RAM, I should be able to run 13B models in a CPU & GPU configuration easily, and maybe even 60B on CPU alone.
I was wondering, has anybody tried a hybrid memory setup with RAM and SSD? Samsung M.2 Pro bandwidth is somewhat comparable to RAM (latency aside), so in theory a larger model could be run with such a hybrid memory setup.
I wouldn't try fine-tuning on that, but I might try adapters. Probably immensely slow because of latency. What are your thoughts on that? Has anybody tried performance improvements with adapters? They do not change the weights of the fine-tuned model; they just add more context to the model. | 2023-07-28T16:40:36 | https://www.reddit.com/r/LocalLLaMA/comments/15c2gos/hardware_options_for_llm/ | dodo13333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15c2gos | false | null | t3_15c2gos | /r/LocalLLaMA/comments/15c2gos/hardware_options_for_llm/ | false | false | self | 1 | null |
Best model to use | 1 | Hi! I was wondering which fine-tuned 7b model would be best for local casual use. If you have any suggestions I’d appreciate it if you leave them in the comments below. | 2023-07-28T16:45:44 | https://www.reddit.com/r/LocalLLaMA/comments/15c2lg4/best_model_to_use/ | NeoPlayzGames | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15c2lg4 | false | null | t3_15c2lg4 | /r/LocalLLaMA/comments/15c2lg4/best_model_to_use/ | false | false | self | 1 | null |
We just launched a YC company to help devs work with LLMs and vector DBs | 1 | [removed] | 2023-07-28T17:16:43 | https://www.reddit.com/r/LocalLLaMA/comments/15c3dxu/we_just_launched_a_yc_company_to_help_devs_work/ | hegel-ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15c3dxu | false | null | t3_15c3dxu | /r/LocalLLaMA/comments/15c3dxu/we_just_launched_a_yc_company_to_help_devs_work/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'tQ1SPjcWzbiiFnhwUbZ6366HlV7H35X13LYc2mUgPSY', 'resolutions': [{'height': 33, 'url': 'https://external-preview.redd.it/PEYotcaNhzJA4H5BPMrbVdD7z70cTz3w5RzP6-5X4oY.jpg?width=108&crop=smart&auto=webp&s=9eeef23a922cd053a5affc57ae6fd857ae0dfe61', 'width': 108}, {'height': 66, 'url': 'https://external-preview.redd.it/PEYotcaNhzJA4H5BPMrbVdD7z70cTz3w5RzP6-5X4oY.jpg?width=216&crop=smart&auto=webp&s=73d61b766f3de4485ff4be8b941ed740ffeaf6ae', 'width': 216}, {'height': 98, 'url': 'https://external-preview.redd.it/PEYotcaNhzJA4H5BPMrbVdD7z70cTz3w5RzP6-5X4oY.jpg?width=320&crop=smart&auto=webp&s=3be1d395aea81fba51ec5423b42fe6b388e8a65a', 'width': 320}, {'height': 196, 'url': 'https://external-preview.redd.it/PEYotcaNhzJA4H5BPMrbVdD7z70cTz3w5RzP6-5X4oY.jpg?width=640&crop=smart&auto=webp&s=7756efe6ef04c90601bc7699023da7dfee18058c', 'width': 640}], 'source': {'height': 222, 'url': 'https://external-preview.redd.it/PEYotcaNhzJA4H5BPMrbVdD7z70cTz3w5RzP6-5X4oY.jpg?auto=webp&s=e5f24aec35289da53858550bdee854a773f34a49', 'width': 722}, 'variants': {}}]} |
I have spent that last 7 months building an open-source mind mapping tool that combines fractals with Ai. Here is some of what it can do. | 1 | [removed] | 2023-07-28T17:49:13 | https://www.reddit.com/r/LocalLLaMA/comments/15c47or/i_have_spent_that_last_7_months_building_an/ | Intrepid-Air6525 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15c47or | false | null | t3_15c47or | /r/LocalLLaMA/comments/15c47or/i_have_spent_that_last_7_months_building_an/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'j7hqsJ-JheSRZ8wssndPSy4QBqePCw0DHdaUdXioySw', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/bg5t4oGicQV1criF1mjnKc5UAA6BjFWjRUKDj65ePTI.jpg?width=108&crop=smart&auto=webp&s=f9adabbb37913b9c83c7853cc62bd58e424583f4', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/bg5t4oGicQV1criF1mjnKc5UAA6BjFWjRUKDj65ePTI.jpg?width=216&crop=smart&auto=webp&s=0e0a6c1c961c966f68bb5a5d1cf476eefb2b1ba9', 'width': 216}, {'height': 175, 'url': 'https://external-preview.redd.it/bg5t4oGicQV1criF1mjnKc5UAA6BjFWjRUKDj65ePTI.jpg?width=320&crop=smart&auto=webp&s=750487cc8b935a38fcd7f7a72f8ec9149051e1c1', 'width': 320}, {'height': 351, 'url': 'https://external-preview.redd.it/bg5t4oGicQV1criF1mjnKc5UAA6BjFWjRUKDj65ePTI.jpg?width=640&crop=smart&auto=webp&s=5bb069e7cf615b04a0ac4966d629d50a550ce365', 'width': 640}, {'height': 527, 'url': 'https://external-preview.redd.it/bg5t4oGicQV1criF1mjnKc5UAA6BjFWjRUKDj65ePTI.jpg?width=960&crop=smart&auto=webp&s=4b6438662620613520b071bdacbd857fd8cc5eff', 'width': 960}, {'height': 593, 'url': 'https://external-preview.redd.it/bg5t4oGicQV1criF1mjnKc5UAA6BjFWjRUKDj65ePTI.jpg?width=1080&crop=smart&auto=webp&s=b28af37de301b4c56f33e615c278428f9ea641d6', 'width': 1080}], 'source': {'height': 822, 'url': 'https://external-preview.redd.it/bg5t4oGicQV1criF1mjnKc5UAA6BjFWjRUKDj65ePTI.jpg?auto=webp&s=4b2bb0057225ac2dafe24d4083d91f3271f22d0e', 'width': 1497}, 'variants': {}}]} |
|
Dockerized Full Stack llama.cpp API server and R (rshiny) application | 1 | Hi all
So I've been working on my Docker build for talking to Llama2 via llama.cpp.
I have made some progress with bundling up a full-stack implementation of a local Llama2 API (llama.cpp/examples/server) alongside an R Shiny web application build.
The R Shiny app has input controls for every API input. It's a neat browser tool for generating data with the LLM in real time (locally) and lets you use a UI to switch any of the settings around.
At this time it does not support back-and-forth chatting; it's more of a text generator.
I am sharing just in case there are any folks using R and llama.cpp. Feel free to use the API call function in any other R use case. It works independently of Shiny.
Overall, my takeaway from the llama.cpp API is that it was really easy to build an entire UI around in R. The docs are clear. The response time in my setup is slower than terminal inference, but not poor at all.
Anyways, hope you might find some benefit! Thanks and take care.
[Shiny app](https://github.com/snakewizardd/llamaDocker/blob/main/app.R)
[Entire build]: (https://github.com/snakewizardd/llamaDocker/tree/main) | 2023-07-28T18:02:30 | Happy_Chicken9835 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15c4ju3 | false | null | t3_15c4ju3 | /r/LocalLLaMA/comments/15c4ju3/dockerized_full_stack_llamacpp_api_server_and_r/ | false | false | 1 | {'enabled': True, 'images': [{'id': '7gJrKf0dPMQZAq5SMQEuBh64nrXMfq0AEgNsMKu26vc', 'resolutions': [{'height': 97, 'url': 'https://preview.redd.it/7lhr550pwqeb1.jpg?width=108&crop=smart&auto=webp&s=d905c684f73dd67426be9bace23de1539f759adb', 'width': 108}, {'height': 195, 'url': 'https://preview.redd.it/7lhr550pwqeb1.jpg?width=216&crop=smart&auto=webp&s=d5fd9361b6a010becec71fd8a50923b83c4638c1', 'width': 216}, {'height': 288, 'url': 'https://preview.redd.it/7lhr550pwqeb1.jpg?width=320&crop=smart&auto=webp&s=3918aa2cab51a1bf76928c9bd68a1e55f04d132b', 'width': 320}, {'height': 577, 'url': 'https://preview.redd.it/7lhr550pwqeb1.jpg?width=640&crop=smart&auto=webp&s=b0b5f2045c9f5dd186a5dc691181107e8f87d602', 'width': 640}, {'height': 866, 'url': 'https://preview.redd.it/7lhr550pwqeb1.jpg?width=960&crop=smart&auto=webp&s=b8692b8229a5ff9aae58842379ce8b350155fbb3', 'width': 960}, {'height': 975, 'url': 'https://preview.redd.it/7lhr550pwqeb1.jpg?width=1080&crop=smart&auto=webp&s=391d22ffdc586954d6622f2facfcc2fb509640c0', 'width': 1080}], 'source': {'height': 1506, 'url': 'https://preview.redd.it/7lhr550pwqeb1.jpg?auto=webp&s=8b5d8463a18c28b1f0bb708dade1958b9a4dd42c', 'width': 1668}, 'variants': {}}]} |
||
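For anyone who wants to poke at the same server from outside R, this is roughly the HTTP call such an app wraps — a hedged Python sketch assuming the llama.cpp server example's default /completion endpoint on port 8080:
```
import requests

# The llama.cpp server example exposes a JSON /completion endpoint;
# host and port here assume its defaults.
resp = requests.post(
    "http://localhost:8080/completion",
    json={"prompt": "Q: What is a llama? A:", "n_predict": 128, "temperature": 0.7},
    timeout=120,
)
print(resp.json()["content"])
```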
Have any of you had luck applying your LLM skills professionally or is this mostly a hobby for you? | 1 | What do you all do professionally? Are your places of work receptive to this stuff? | 2023-07-28T18:08:06 | https://www.reddit.com/r/LocalLLaMA/comments/15c4oy3/have_any_of_you_had_luck_applying_your_llm_skills/ | 029187 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15c4oy3 | false | null | t3_15c4oy3 | /r/LocalLLaMA/comments/15c4oy3/have_any_of_you_had_luck_applying_your_llm_skills/ | false | false | self | 1 | null |
Help getting LocalAI working on older hardware | 1 | DevOps guy here, trying to get a local 7b model running on my old piece of shit desktop (i5 2500k @ 4.5 GHz / 32GB RAM / GTX 1060 6GB).
I've managed to get LocalAI to successfully load the model, by compiling from master without AVX2, but I'm unable to get the model to generate a response. The API request just hangs indefinitely.
I'm hoping one of y'all fine humans can help me. As far as I can tell this hardware, while ancient, *should* be able to run this model.
Here's my docker-compose:
```
version: "3.8"
services:
localai:
container_name: localai
image: quay.io/go-skynet/local-ai:master-cublas-cuda12
#user: 2200:2200
environment:
PRELOAD_MODELS: '[{"url": "github:go-skynet/model-gallery/openllama_7b.yaml", "name": "gpt-3.5-turbo", "overrides": { "f16": true, "gpu_layers": 20, "mmap": true, "batch": 512, "low_vram": true } } ]'
MODELS_PATH: /models
THREADS: 2
BUILD_TYPE: cublas
DEBUG: true
REBUILD: true
CMAKE_ARGS: "-DLLAMA_F16C=OFF -DLLAMA_AVX512=OFF -DLLAMA_AVX2=OFF -DLLAMA_FMA=OFF"
ports:
- "8080:8080" # API
stdin_open: true
tty: true
volumes:
- /srv/localai/models:/models
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: all
capabilities: [gpu]
```
And the LocalAI debug log:
```
localai | 6:05PM DBG Request received:
localai | 6:05PM DBG Configuration read: &{PredictionOptions:{Model:open-llama-7b-q4_0.bin Language: N:0 TopP:0.7 TopK:80 Temperature:0.1 Maxtokens:0 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0} Name:gpt-3.5-turbo StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:1024 F16:true NUMA:false Threads:2 Debug:true Roles:map[] Embeddings:false Backend:llama TemplateConfig:{Chat:openllama-chat ChatMessage: Completion:openllama-completion Edit: Functions:} MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:20 MMap:true MMlock:false LowVRAM:true TensorSplit: MainGPU: ImageGenerationAssets: PromptCachePath: PromptCacheAll:false PromptCacheRO:false Grammar: PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} SystemPrompt:}
localai | 6:05PM DBG Parameters: &{PredictionOptions:{Model:open-llama-7b-q4_0.bin Language: N:0 TopP:0.7 TopK:80 Temperature:0.1 Maxtokens:0 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0} Name:gpt-3.5-turbo StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:1024 F16:true NUMA:false Threads:2 Debug:true Roles:map[] Embeddings:false Backend:llama TemplateConfig:{Chat:openllama-chat ChatMessage: Completion:openllama-completion Edit: Functions:} MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:20 MMap:true MMlock:false LowVRAM:true TensorSplit: MainGPU: ImageGenerationAssets: PromptCachePath: PromptCacheAll:false PromptCacheRO:false Grammar: PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} SystemPrompt:}
localai | 6:05PM DBG Prompt (before templating): What is an alpaca?
localai | 6:05PM DBG Template found, input modified to: Q: What is an alpaca?\nA:
localai | 6:05PM DBG Prompt (after templating): Q: What is an alpaca?\nA:
localai | 6:05PM DBG Loading model llama from open-llama-7b-q4_0.bin
localai | 6:05PM DBG Loading model in memory from file: /models/open-llama-7b-q4_0.bin
localai | 6:05PM DBG Loading GRPC Model llama: {backendString:llama modelFile:open-llama-7b-q4_0.bin threads:2 assetDir:/tmp/localai/backend_data context:0xc00010e000 gRPCOptions:0xc000300fc0 externalBackends:map[huggingface-embeddings:/build/extra/grpc/huggingface/huggingface.py]}
localai | 6:05PM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/llama
localai | 6:05PM DBG GRPC Service for open-llama-7b-q4_0.bin will be running at: '127.0.0.1:43455'
localai | 6:05PM DBG GRPC Service state dir: /tmp/go-processmanager1654020839
localai | 6:05PM DBG GRPC Service Started
localai | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:43455: connect: connection refused"
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr 2023/07/28 18:05:36 gRPC Server listening at 127.0.0.1:43455
localai | 6:05PM DBG GRPC Service Ready
localai |
localai | 6:05PM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:/models/open-llama-7b-q4_0.bin ContextSize:1024 Seed:0 NBatch:512 F16Memory:true MLock:false MMap:true VocabOnly:false LowVRAM:true Embeddings:false NUMA:false NGPULayers:20 MainGPU: TensorSplit: Threads:2 LibrarySearchPath: RopeFreqBase:0 RopeFreqScale:0}
localai |
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr ggml_init_cublas: found 1 CUDA devices:
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr Device 0: NVIDIA GeForce GTX 1060 6GB, compute capability 6.1
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama.cpp: loading model from /models/open-llama-7b-q4_0.bin
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: format = ggjt v3 (latest)
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: n_vocab = 32000
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: n_ctx = 1024
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: n_embd = 4096
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: n_mult = 256
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: n_head = 32
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: n_head_kv = 32
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: n_layer = 32
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: n_rot = 128
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: n_gqa = 1
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: n_ff = 11008
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: freq_base = 1000.0
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: freq_scale = 1
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: ftype = 2 (mostly Q4_0)
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: model size = 7B
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: ggml ctx size = 0.08 MB
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: using CUDA for GPU acceleration
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: mem required = 1777.85 MB (+ 512.00 MB per state)
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: not allocating a VRAM scratch buffer due to low VRAM option
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: offloading 20 repeating layers to GPU
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: offloaded 20/35 layers to GPU
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: total VRAM used: 2172 MB
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_new_context_with_model: kv self size = 512.00 MB
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_predict: warning: changing RoPE frequency base to 0 (default 10000.0)
localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_predict: warning: scaling RoPE frequency by 0 (default 1.0)
localai | [127.0.0.1]:47852 200 - GET /readyz
```
The only thing I can see wrong is that the log is printing to stderr, but aside from that it doesn't seem to indicate anything wrong.
Any assistance would be greatly appreciated. | 2023-07-28T18:08:16 | https://www.reddit.com/r/LocalLLaMA/comments/15c4p3c/help_getting_localai_working_on_older_hardware/ | SilverbackReborn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15c4p3c | false | null | t3_15c4p3c | /r/LocalLLaMA/comments/15c4p3c/help_getting_localai_working_on_older_hardware/ | false | false | self | 1 | null |
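One way to narrow down whether it is the HTTP layer or generation itself that hangs is to hit the OpenAI-compatible route directly with a hard timeout while watching the container logs — a hedged sketch assuming LocalAI's usual /v1/chat/completions endpoint and the model alias from PRELOAD_MODELS above:
```
import requests

payload = {
    "model": "gpt-3.5-turbo",  # the alias defined in PRELOAD_MODELS above
    "messages": [{"role": "user", "content": "What is an alpaca?"}],
    "max_tokens": 64,
}

try:
    r = requests.post(
        "http://localhost:8080/v1/chat/completions", json=payload, timeout=300
    )
    print(r.status_code, r.json())
except requests.exceptions.Timeout:
    print("timed out - generation itself is likely stuck, not the HTTP layer")
```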
Do any of you have slight concerns that these models may have some subjective experience and we are effectively creating and killing people every time we train and delete them? | 1 | I know this will be dismissed out of hand by many, but the idea that LLMs, and especially their successor models, might be capable of subjective experience is being seriously entertained by some major academics
[https://www.youtube.com/watch?v=-BcuCmf00\_Y](https://www.youtube.com/watch?v=-BcuCmf00_Y)
As we see these models become more advanced, and especially when they act as autonomous agents in games like Minecraft, it's becoming harder to draw a line between our reasoning capabilities and theirs imo.
Has this impacted how any of you work in the field with these models? | 2023-07-28T18:20:30 | https://www.reddit.com/r/LocalLLaMA/comments/15c507y/do_any_of_you_have_slight_concerns_that_these/ | 30299578815310 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15c507y | false | null | t3_15c507y | /r/LocalLLaMA/comments/15c507y/do_any_of_you_have_slight_concerns_that_these/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'xAWhV5Tinq2x4GS5F2dTr9e9-vE8jz5BVHebAZ8PjJM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Hd4Rtz7WPKvWNza4iHT4QUqAOEjVQLpDMees9mlKKyQ.jpg?width=108&crop=smart&auto=webp&s=271abc1717bc2ba279dda41852f58454dfb4aef0', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Hd4Rtz7WPKvWNza4iHT4QUqAOEjVQLpDMees9mlKKyQ.jpg?width=216&crop=smart&auto=webp&s=588be3a38736b72753089cc22db26b9ae6a37ebc', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Hd4Rtz7WPKvWNza4iHT4QUqAOEjVQLpDMees9mlKKyQ.jpg?width=320&crop=smart&auto=webp&s=06ef8be8f275f222bbb1ceda97a5ee2cd7193c1f', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Hd4Rtz7WPKvWNza4iHT4QUqAOEjVQLpDMees9mlKKyQ.jpg?auto=webp&s=72b3a55ce16ab015ff0a4f02a45fb0e7e8dc3823', 'width': 480}, 'variants': {}}]} |
Llama-2-70B-GPTQ seems very bad at coding. Am I doing it wrong? | 1 | ​
[exllama webui](https://preview.redd.it/d5eo14p59reb1.png?width=1923&format=png&auto=webp&s=759f2a17c9e19f5d4c432b40513cb61254614d1b)
I got the model from TheBloke/Llama-2-70B-GPTQ (gptq-4bit-32g-actorder_True)
Using an AWS instance with 4x T4 GPUs (but actually 3 is sufficient).
Using exllama with `-gs 13,13,13`
nvidia-smi:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.85.12 Driver Version: 525.85.12 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla T4 On | 00000000:00:1B.0 Off | 0 |
| N/A 55C P0 35W / 70W | 14957MiB / 15360MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 Tesla T4 On | 00000000:00:1C.0 Off | 0 |
| N/A 52C P0 33W / 70W | 14957MiB / 15360MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 2 Tesla T4 On | 00000000:00:1D.0 Off | 0 |
| N/A 55C P0 35W / 70W | 13505MiB / 15360MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 3 Tesla T4 On | 00000000:00:1E.0 Off | 0 |
| N/A 28C P8 8W / 70W | 3MiB / 15360MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 11771 C python 14954MiB |
| 1 N/A N/A 11771 C python 14954MiB |
| 2 N/A N/A 11771 C python 13502MiB |
+-----------------------------------------------------------------------------+
​ | 2023-07-28T19:12:59 | https://www.reddit.com/r/LocalLLaMA/comments/15c6co5/llama270bgptq_seems_very_bad_at_coding_am_i_doing/ | Remarkable_Ad4470 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15c6co5 | false | null | t3_15c6co5 | /r/LocalLLaMA/comments/15c6co5/llama270bgptq_seems_very_bad_at_coding_am_i_doing/ | false | false | 1 | null |
|
WizardCoder extremely slow | 1 | Hey, I am currently building a small app for running WizardCoder locally on an RTX 3090. Inference with the Hugging Face model loaded in 8-bit is, however, fairly slow. Are there any other ways to get faster inference? | 2023-07-28T19:40:13 | https://www.reddit.com/r/LocalLLaMA/comments/15c71qf/wizardcoder_extremely_slow/ | eggandbacon_0056 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15c71qf | false | null | t3_15c71qf | /r/LocalLLaMA/comments/15c71qf/wizardcoder_extremely_slow/ | false | false | self | 1 | null |
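8-bit bitsandbytes inference is often much slower than fp16 or 4-bit generation, so one option is switching to 4-bit (or to a GPTQ build with ExLlama). A hedged transformers sketch of the 4-bit route — the model id is a placeholder for whichever WizardCoder checkpoint is in use:
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "WizardLM/WizardCoder-15B-V1.0"  # placeholder: use your checkpoint

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_quant_type="nf4",
)
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb, device_map="auto"
)

inputs = tok("Write a Python hello world.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```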
New model from KoboldAI based on llama-2 - "Holodeck" | 1 | 2023-07-28T20:03:22 | https://huggingface.co/KoboldAI/LLAMA2-13B-Holodeck-1-GGML | harrro | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 15c7mnw | false | null | t3_15c7mnw | /r/LocalLLaMA/comments/15c7mnw/new_model_from_koboldai_based_on_llama2_holodeck/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'hK9mf3IwmrVwZ6yRKPEn8aiig619KxMaFIa93exluIQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/UrciGQ4sIBgMN-JM6jzyV9JRAPBDCSz_a1htJrKo558.jpg?width=108&crop=smart&auto=webp&s=402939d6bb14115a73b0493a34968bff2093b4a5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/UrciGQ4sIBgMN-JM6jzyV9JRAPBDCSz_a1htJrKo558.jpg?width=216&crop=smart&auto=webp&s=02053ae63d7858283f8267430d45dba1d71fce7e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/UrciGQ4sIBgMN-JM6jzyV9JRAPBDCSz_a1htJrKo558.jpg?width=320&crop=smart&auto=webp&s=10eb01211202f6bb42ff59182da4fc4a161d83e9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/UrciGQ4sIBgMN-JM6jzyV9JRAPBDCSz_a1htJrKo558.jpg?width=640&crop=smart&auto=webp&s=a7bdab5b9dbde1964d4f79b25a3013fe4ae6ed5b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/UrciGQ4sIBgMN-JM6jzyV9JRAPBDCSz_a1htJrKo558.jpg?width=960&crop=smart&auto=webp&s=e50a10afa032a2d4f119e82b767e96972e482c10', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/UrciGQ4sIBgMN-JM6jzyV9JRAPBDCSz_a1htJrKo558.jpg?width=1080&crop=smart&auto=webp&s=42e8b74c4b04fd235d543e4fcfd27439572864ca', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/UrciGQ4sIBgMN-JM6jzyV9JRAPBDCSz_a1htJrKo558.jpg?auto=webp&s=95ca4b9f95ba2edf5fca59c4963f77420f21fc68', 'width': 1200}, 'variants': {}}]} |
||
Is there a way to force Llama2 to generate longer text? | 1 | I'm getting frustrated with it stopping and refusing to continue a story. Any sort of setting? | 2023-07-28T20:25:02 | https://www.reddit.com/r/LocalLLaMA/comments/15c864d/is_there_a_way_to_force_llama2_to_generate_longer/ | audiosheep | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15c864d | false | null | t3_15c864d | /r/LocalLLaMA/comments/15c864d/is_there_a_way_to_force_llama2_to_generate_longer/ | false | false | self | 1 | null |
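If the generation runs through transformers (or the webui's HF loader), the usual knobs are max_new_tokens to raise the ceiling and min_new_tokens to keep it from ending early; text-generation-webui also has a "Ban the eos_token" option for the same effect. A hedged sketch with a placeholder model id:
```
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # placeholder
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tok("[INST] Continue the story of the lighthouse keeper. [/INST]",
             return_tensors="pt").to(model.device)
out = model.generate(
    **inputs,
    max_new_tokens=1024,  # ceiling on how much it may write
    min_new_tokens=256,   # suppresses EOS until at least this many tokens
    do_sample=True,
    temperature=0.9,
)
print(tok.decode(out[0], skip_special_tokens=True))
```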
New AI related jobs? | 1 | I remembered when everyone was trying to learn to code to get a better job, so that got me thinking, is there any chance that something AI related (like LLMs) could be the "new programming" and have a bunch of people interested in?
Because programming is sadly oversaturated (for trainees with no degree, specifically), I was looking for a replacement.
Note: trainees with no degree could easily get programming jobs a couple of years ago in my country. IDK about the US/UK; maybe the requirements were higher there. | 2023-07-28T20:49:25 | https://www.reddit.com/r/LocalLLaMA/comments/15c8rrr/new_ai_related_jobs/ | allmightyloser | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15c8rrr | false | null | t3_15c8rrr | /r/LocalLLaMA/comments/15c8rrr/new_ai_related_jobs/ | false | false | self | 1 | null |
Llama2 wants to secure the southern border | 1 | [removed] | 2023-07-28T20:50:59 | https://www.reddit.com/gallery/15c8t8n | Happy_Chicken9835 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 15c8t8n | false | null | t3_15c8t8n | /r/LocalLLaMA/comments/15c8t8n/llama2_wants_to_secure_the_southern_border/ | false | false | default | 1 | null |
The jailbreak works with Llama-2-7B-Chat-GGML | 1 | [removed] | 2023-07-28T20:53:05 | https://www.reddit.com/r/LocalLLaMA/comments/15c8v2p/the_jailbreak_works_with_llama27bchatggml/ | arch-btw- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15c8v2p | false | null | t3_15c8v2p | /r/LocalLLaMA/comments/15c8v2p/the_jailbreak_works_with_llama27bchatggml/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'oFSJObsRJoaZQlhlV6gnh074pJdzH3DovnJiEy198Qs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MazETya56rbIuDFJAGNnTdIqX9An9dQF6xAUHs-19NE.jpg?width=108&crop=smart&auto=webp&s=78fbed7964dfb03ff2611dd4554b755731cb9615', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MazETya56rbIuDFJAGNnTdIqX9An9dQF6xAUHs-19NE.jpg?width=216&crop=smart&auto=webp&s=91a066b0ca6138f9207ffb9d48627ef90202f109', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MazETya56rbIuDFJAGNnTdIqX9An9dQF6xAUHs-19NE.jpg?width=320&crop=smart&auto=webp&s=0aca9685af48476683b0369e27e8c04481ddf21a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MazETya56rbIuDFJAGNnTdIqX9An9dQF6xAUHs-19NE.jpg?width=640&crop=smart&auto=webp&s=5bcc091c21a5f38a4ca5f2c73b1e78d4dc36091d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MazETya56rbIuDFJAGNnTdIqX9An9dQF6xAUHs-19NE.jpg?width=960&crop=smart&auto=webp&s=1793e3f163d181a0da0763ef6b9e03a722c4e653', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MazETya56rbIuDFJAGNnTdIqX9An9dQF6xAUHs-19NE.jpg?width=1080&crop=smart&auto=webp&s=bac10fd4b4da27b3014cc32e99e80311aa37d79b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MazETya56rbIuDFJAGNnTdIqX9An9dQF6xAUHs-19NE.jpg?auto=webp&s=817baf0cf75ba635db3840ff58349822893e92ac', 'width': 1200}, 'variants': {}}]} |
LLM with voice generation | 1 |
There used to be a tool called [try-alters.com](https://try-alters.com) which you could use to chat with characters (like Trump, Obama, and Shrek). It used GPT-4 with some pre-prompts so the AI pretended to be whoever you wanted, and it used ElevenLabs to generate the voice for that character from the GPT-4 output. It was a really good tool, but sadly it shut down all of a sudden. Is there any tool like that? | 2023-07-28T21:15:41 | https://www.reddit.com/r/LocalLLaMA/comments/15c9feq/llm_with_voice_generation/ | SimRacer101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15c9feq | false | null | t3_15c9feq | /r/LocalLLaMA/comments/15c9feq/llm_with_voice_generation/ | false | false | self | 1 | null |
Transformer accuracy in Exllama? | 1 | [removed] | 2023-07-28T21:39:19 | https://www.reddit.com/r/LocalLLaMA/comments/15ca0n0/transformer_accuracy_in_exllama/ | calm-aspargus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ca0n0 | false | null | t3_15ca0n0 | /r/LocalLLaMA/comments/15ca0n0/transformer_accuracy_in_exllama/ | false | false | self | 1 | null |
How are you using Llama 2 commercially ? | 1 | I am curious to see the different use case possibilities | 2023-07-28T21:53:35 | https://www.reddit.com/r/LocalLLaMA/comments/15cadgj/how_are_you_using_llama_2_commercially/ | Ken_Sanne | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15cadgj | false | null | t3_15cadgj | /r/LocalLLaMA/comments/15cadgj/how_are_you_using_llama_2_commercially/ | false | false | self | 1 | null |
Using Lora Model with Llama Index | 1 | Hi all,
I was able to fine-tune LLaMA using QLoRA. I was also able to get llama_index working to pull context from documents.
However, I'm having issues merging the two. Mainly the error message "'LlamaForCausalLM' object has no attribute 'metadata'" when trying to set up the ServiceContext:
service_context = ServiceContext.from_defaults(
    llm=model,
    embed_model=embed_model
)
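From skimming the llama_index docs, my guess is that the raw transformers model first has to be wrapped in llama_index's HuggingFaceLLM class before ServiceContext will accept it. An untested sketch of what I mean (the wrapper class and its kwargs are my assumption, not something I have working):

    from llama_index import ServiceContext
    from llama_index.llms import HuggingFaceLLM

    # wrap the QLoRA-finetuned LlamaForCausalLM and its tokenizer
    llm = HuggingFaceLLM(
        model=model,
        tokenizer=tokenizer,
        context_window=2048,
        max_new_tokens=256,
    )

    service_context = ServiceContext.from_defaults(
        llm=llm,
        embed_model=embed_model,
    )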
Has anyone gotten this working? I could only get something like this to work using Text-Gen WebUI, but I'd rather have it in a code environment. Any help is appreciated.
Thanks! | 2023-07-28T22:27:21 | https://www.reddit.com/r/LocalLLaMA/comments/15cb7kq/using_lora_model_with_llama_index/ | Pro-Status | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15cb7kq | false | null | t3_15cb7kq | /r/LocalLLaMA/comments/15cb7kq/using_lora_model_with_llama_index/ | false | false | self | 1 | null |
Transformer accuracy in Exllama? | 1 | [removed] | 2023-07-28T22:37:46 | https://www.reddit.com/r/LocalLLaMA/comments/15cbgjn/transformer_accuracy_in_exllama/ | sinsro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15cbgjn | false | null | t3_15cbgjn | /r/LocalLLaMA/comments/15cbgjn/transformer_accuracy_in_exllama/ | false | false | self | 1 | null |
LLaMA-2-7B-32K by togethercomputer | 120 | 2023-07-29T00:36:55 | https://huggingface.co/togethercomputer/LLaMA-2-7B-32K | brown2green | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 15ce6sq | false | null | t3_15ce6sq | /r/LocalLLaMA/comments/15ce6sq/llama27b32k_by_togethercomputer/ | false | false | 120 | {'enabled': False, 'images': [{'id': 'yoAlnlO31bWvxDt2ZPQlTqH9iazTvQwoeS-sUqsWgtw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/85BR1V8c4hx_wcFHOrIntRGZZLf-PLK-slB3rJOTsOY.jpg?width=108&crop=smart&auto=webp&s=ecde1bdfdd91dfd590fabe657c5615966e36eb14', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/85BR1V8c4hx_wcFHOrIntRGZZLf-PLK-slB3rJOTsOY.jpg?width=216&crop=smart&auto=webp&s=7a69f688a67587c94ed26be64a481d68052f3079', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/85BR1V8c4hx_wcFHOrIntRGZZLf-PLK-slB3rJOTsOY.jpg?width=320&crop=smart&auto=webp&s=1b3a3c61498f0435036449c4d6b4327ad292008a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/85BR1V8c4hx_wcFHOrIntRGZZLf-PLK-slB3rJOTsOY.jpg?width=640&crop=smart&auto=webp&s=a270f1bd5077999a037ee576111d0712d9a582a6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/85BR1V8c4hx_wcFHOrIntRGZZLf-PLK-slB3rJOTsOY.jpg?width=960&crop=smart&auto=webp&s=1ae788bc4b040349b2aa07cf4caf6ebea35e78e5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/85BR1V8c4hx_wcFHOrIntRGZZLf-PLK-slB3rJOTsOY.jpg?width=1080&crop=smart&auto=webp&s=3114ca2a22bee5ee2cd02e824a2fc5c2faa5d107', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/85BR1V8c4hx_wcFHOrIntRGZZLf-PLK-slB3rJOTsOY.jpg?auto=webp&s=b3dc3ec328d20f24bf3fd21426f2588cecf707ff', 'width': 1200}, 'variants': {}}]} |
||
Any LLM Llama script from any GitHub repository that can summarize extensive portions of multiple long text (.txt) or DOCX documents from a folder and produce a summarized TXT file as the output. | 1 | as above | 2023-07-29T01:11:35 | https://www.reddit.com/r/LocalLLaMA/comments/15cexh0/any_llm_llama_script_from_any_github_repository/ | ZealousidealBadger47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15cexh0 | false | null | t3_15cexh0 | /r/LocalLLaMA/comments/15cexh0/any_llm_llama_script_from_any_github_repository/ | false | false | self | 1 | null |
Beginner help? | 1 | Hey all!
Been reading along for a while and an avid proponent of AI. I am somewhat capable when it comes to coding but I do feel like I’m somehow fairly behind on being able to grasp concepts behind AI.
I am a ChatGPT subscriber (I want to cancel) and I now have been able to run Wizard on Gpt4all. I’d like to begin running Llama 2 70b but I’m lost on so many aspects. Here’s where I could use a hand, if you’re willing:
1. Langchain, Kobold, Quantized, etc. I've researched but can't understand what these items do and if I need them.
2. I only have a MacBook Pro M1 Pro so I assume the 70b is out of the question. Is it worth buying/building a new PC to use 70b over 13 or 7b?
3. Is Gpt4all the move or should I be using webui? I’ve tried webui but had a ton of difficulty getting it to run correctly on my MacBook Pro.
Some context, I use AI a ton for assistance in writing, coding, research, data analysis, etc. I’m looking to replicate that GPT4 or 3.5 experience locally.
Additionally, I have an app that creates personalized stories for children. We had an entire system of prompts to OpenAi API set up but it’s recently become super inconsistent. My goal, alongside the above, is to test our models and determine which would be best for my application.
Sorry for the long and needy post. Any help would be immensely appreciated. Thank you! | 2023-07-29T01:36:14 | https://www.reddit.com/r/LocalLLaMA/comments/15cfg2j/beginner_help/ | justanewacct | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15cfg2j | false | null | t3_15cfg2j | /r/LocalLLaMA/comments/15cfg2j/beginner_help/ | false | false | self | 1 | null |
How to reproduce a dataset from a paper called SUR-Adapter? | 1 | I'm trying to reproduce a dataset used here: [SUR-Adapter](https://github.com/Qrange-group/SUR-adapter).
[Examples of SUR-Adapter prompt understanding](https://preview.redd.it/peqeiuxvateb1.jpg?width=12310&format=pjpg&auto=webp&s=ecf8fe4303c3ab3fb0e5f6e3942d4440ea3c20bf)
SUR-Adapter uses an LLM like Llama to give Semantic Understanding and Reasoning to a diffusion model. Unfortunately, the author did not want to release the dataset for potential copyright disputes, and I want to find someone to help recreate this project. | 2023-07-29T02:06:53 | https://www.reddit.com/r/LocalLLaMA/comments/15cg362/how_to_reproduce_a_dataset_from_a_paper_called/ | ninjasaid13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15cg362 | false | null | t3_15cg362 | /r/LocalLLaMA/comments/15cg362/how_to_reproduce_a_dataset_from_a_paper_called/ | false | false | 1 | null |
|
GPU requirement question | 1 | Hello, I see a lot of posts about "vram" being the most important factor for LLM models.
So I wonder: does that mean an old Nvidia M10 or an AMD FirePro S9170 (both 32 GB) would outperform an AMD Instinct MI50 16 GB?
Asking because I recently bought 2 new ones and am wondering if I should just sell them and get something else with higher VRAM.
New model from StabilityAi: Stable Beluga | 1 | A week ago or so, Stability AI launched the models FreeWilly and FreeWilly2, based on LLaMA 1 65B and Llama 2 70B respectively. Then they renamed them to Stable Beluga, and today they launched new Llama 2 based variants with 7B and 13B parameters. What do you guys think?
Link 7B: https://huggingface.co/stabilityai/StableBeluga-7B
Link 13B: https://huggingface.co/stabilityai/StableBeluga-13B
u/TheBloke already made quants for the 13b models
GGML: https://huggingface.co/TheBloke/StableBeluga-13B-GGML
GPTQ: https://huggingface.co/TheBloke/StableBeluga-13B-GPTQ | 2023-07-29T02:29:06 | https://www.reddit.com/r/LocalLLaMA/comments/15cgjej/new_model_from_stabilityai_stable_beluga/ | MacacoVelhoKK | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15cgjej | false | null | t3_15cgjej | /r/LocalLLaMA/comments/15cgjej/new_model_from_stabilityai_stable_beluga/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'kULT4QMVrffgHYLCSiGSdCHwSaPimBVWJBNMsitsDc0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Z-Cj42ScBNi2-Yi3QHqXq91FsN_5AtZasUi9tPVXmL4.jpg?width=108&crop=smart&auto=webp&s=1a3de829906ed5dbdfef8865371a5246972a72c9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Z-Cj42ScBNi2-Yi3QHqXq91FsN_5AtZasUi9tPVXmL4.jpg?width=216&crop=smart&auto=webp&s=cae640fdb8d64fa0040379dc518eb617e3f12f73', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Z-Cj42ScBNi2-Yi3QHqXq91FsN_5AtZasUi9tPVXmL4.jpg?width=320&crop=smart&auto=webp&s=2982e17becb51dd0e262258910f41851e1ed6a4a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Z-Cj42ScBNi2-Yi3QHqXq91FsN_5AtZasUi9tPVXmL4.jpg?width=640&crop=smart&auto=webp&s=af662fe5dcde68099c3366abe671e7c5ee5f5ad8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Z-Cj42ScBNi2-Yi3QHqXq91FsN_5AtZasUi9tPVXmL4.jpg?width=960&crop=smart&auto=webp&s=f7fead8c435f2b609bbdb16e83a9a252ada7ce9e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Z-Cj42ScBNi2-Yi3QHqXq91FsN_5AtZasUi9tPVXmL4.jpg?width=1080&crop=smart&auto=webp&s=9d5c2659c6be2c05daaf9450e9ea6e16d8536206', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Z-Cj42ScBNi2-Yi3QHqXq91FsN_5AtZasUi9tPVXmL4.jpg?auto=webp&s=0351ffd95e204af4d877caf7648b40dd902ce85c', 'width': 1200}, 'variants': {}}]} |
Ai chatbot that gives robotic like responses? | 1 | Is there a chatbot that just answers questions without any assistant-like fluff text?
Example:
2 + 2 = 4
Instead of:
"Sure, here you go: 2 + 2 = 4.
Let me know if you need anything else!" | 2023-07-29T04:53:46 | https://www.reddit.com/r/LocalLLaMA/comments/15cj9ex/ai_chatbot_that_gives_robotic_like_responses/ | TheRollingDucat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15cj9ex | false | null | t3_15cj9ex | /r/LocalLLaMA/comments/15cj9ex/ai_chatbot_that_gives_robotic_like_responses/ | false | false | self | 1 | null |
Experience with cerebras/btlm-3b-8k-base | 1 | Has anybody tried using this model?
Can someone share the target_modules of this model for fine-tuning with QLoRA?
Text reclassification prompts/code | 2 | Hey all,
I created some prompts to reclassify text into more meaningful input data for LLM or ML training/fine-tuning. Would anyone be interested or find any value in this?
I've also been working on a lot of projects like this and I'm wondering if there are specific communities that are centered around this sort of thing?
As an example of text reclassification, I grabbed a training data set off Kaggle [here](https://www.kaggle.com/datasets/anjaneyatripathi/emotion-classification-nlp?resource=download&select=emotion-labels-train.csv), but it used a total of 4 emotional classifications and just labeled each line with one (and some of the classifications were wildly wrong), so feeding this into an ML model for training would give pretty poor results.
​
Here is an example of the reclassification from this file, run with llama2-7b-uncensored-4bit-ggml locally.
The input file
i didnt feel humiliated;sadness
i can go from feeling so hopeless to so damned hopeful just from being around someone who cares and is awake;sadness
im grabbing a minute to post i feel greedy wrong;anger
​
The output file (CSV format)
"i didnt feel humiliated",Embarrassed,0.8,Disrespected,0.8,Victimised,0.7
"i can go from feeling so hopeless to so damned hopeful just from being around someone who cares and is awake",Thankful,1, Content,0.8, Vulnerable,0.6
"im grabbing a minute to post i feel greedy wrong",Envious,1, Jealousy,0.9, Greediness,0.8
I added the "" in case of commas in the data set being present already as I didn't want it to break my output. | 2023-07-29T07:44:54 | https://www.reddit.com/r/LocalLLaMA/comments/15cm7oa/text_reclassification_promptscode/ | MutedCatch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15cm7oa | false | null | t3_15cm7oa | /r/LocalLLaMA/comments/15cm7oa/text_reclassification_promptscode/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'zaCeJ6rv6oJ_XMp0760EfPd4fXPVp97mY42SJCuVNU8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/pE9mM7R8v90jbI3S_gq_iPYp8UGBK4HR_-kfGqIC7GA.jpg?width=108&crop=smart&auto=webp&s=67ff44720d8757ccc066104f80e34c8b1352fd39', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/pE9mM7R8v90jbI3S_gq_iPYp8UGBK4HR_-kfGqIC7GA.jpg?width=216&crop=smart&auto=webp&s=334e151bf966de87a8ff9afdd41169e3e3df4d74', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/pE9mM7R8v90jbI3S_gq_iPYp8UGBK4HR_-kfGqIC7GA.jpg?width=320&crop=smart&auto=webp&s=1ae050e43b3aca73b5ca3dafec00e22381998dc7', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/pE9mM7R8v90jbI3S_gq_iPYp8UGBK4HR_-kfGqIC7GA.jpg?width=640&crop=smart&auto=webp&s=467ae4f9a2c4a979e01974e571a826d2734c0141', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/pE9mM7R8v90jbI3S_gq_iPYp8UGBK4HR_-kfGqIC7GA.jpg?width=960&crop=smart&auto=webp&s=28123b703561eb302078392428edcc495c16a08f', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/pE9mM7R8v90jbI3S_gq_iPYp8UGBK4HR_-kfGqIC7GA.jpg?width=1080&crop=smart&auto=webp&s=52e577ccb96874965b21aeb7e041f5f3cb6e4b51', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/pE9mM7R8v90jbI3S_gq_iPYp8UGBK4HR_-kfGqIC7GA.jpg?auto=webp&s=d984047593b7a55daaff28c2571e7af43ad6fe49', 'width': 1200}, 'variants': {}}]} |
Making KoboldCpp save all output | 1 | I believe there are some people with zero coding experience who wish KoboldCpp had an option to save all output. I know there is a "save story" button, but maybe you want everything in one place. It's actually fairly easy; modify `koboldcpp.py` as below. What you want to do is insert the lines that start with the plus sign; the rest is there to help you find where to insert them.
(near the top, between import and stop_token_max)
+logfilename = "logfile.txt"
# you have multiple models, so you may want to record the choice of model
def load_model(model_filename):
(omitted)
ret = handle.load_model(inputs)
+ with open(logfilename, "a", encoding='utf-8') as f:
+        f.write(str(model_filename) + "\n")  # record which model file was loaded
return ret
# console output happens here, regardless of streaming mode
class ServerRequestHandler(http.server.SimpleHTTPRequestHandler):
async def generate_text(self, newprompt, genparams, basic_api_flag, stream_flag):
(omitted)
if args.debugmode!=-1:
utfprint("\nOutput: " + recvtxt)
+ with open(logfilename, "a", encoding='utf-8') as f:
+ f.write(json.dumps(genparams)+"\n")
+ f.write(recvtxt+"\n")
(omitted) | 2023-07-29T07:50:40 | https://www.reddit.com/r/LocalLLaMA/comments/15cmb4q/making_koboldcpp_save_all_output/ | SpecialNothingness | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15cmb4q | false | null | t3_15cmb4q | /r/LocalLLaMA/comments/15cmb4q/making_koboldcpp_save_all_output/ | false | false | self | 1 | null |
Model suggestion please | 1 | My need is ingesting newspaper articles, finding named entities (esp. people, places and orgs), and also generating a few summary phrases.
Been playing with ChatGPT4 and results are much better than my traditional Stanford Stanza NLP approach.
What are the best self-hosted models for achieving results similar to (or better than) ChatGPT-4?
I have some 650K news articles available. Would it be useful to attempt some optimization of the base model? Any guide/suggestion for this?
Thanks a lot | 2023-07-29T07:58:23 | https://www.reddit.com/r/LocalLLaMA/comments/15cmfo2/model_suggestion_please/ | olddoglearnsnewtrick | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15cmfo2 | false | null | t3_15cmfo2 | /r/LocalLLaMA/comments/15cmfo2/model_suggestion_please/ | false | false | self | 1 | null |
Unveiling the Gender Demography of Emerging Tech Communities (Poll) | 1 | Comment on what compels you to try out the tech.
[View Poll](https://www.reddit.com/poll/15cmq2q) | 2023-07-29T08:15:20 | https://www.reddit.com/r/LocalLLaMA/comments/15cmq2q/unveiling_the_gender_demography_of_emerging_tech/ | runawaychicken | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15cmq2q | false | null | t3_15cmq2q | /r/LocalLLaMA/comments/15cmq2q/unveiling_the_gender_demography_of_emerging_tech/ | false | false | self | 1 | null |
I am getting worse llama-2-70b-guanaco-qlora.ggmlv3.q8_0.bin than airoboros-65B-gpt4-1.2-GGML | 1 | llama-2-70b-guanaco-qlora.ggmlv3.q8_0.bin - think step by step
[https://paste.c-net.org/FatigueTended](https://paste.c-net.org/FatigueTended)
relationship entity prompt:
[https://paste.c-net.org/AlvinDollars](https://paste.c-net.org/AlvinDollars)
airoboros-65B-gpt4-1.2-GGML did much better:
[https://www.reddit.com/r/LocalLLaMA/comments/157ly7p/airoboros65bgpt412ggml_beats_gpt4_in_this_test/](https://www.reddit.com/r/LocalLLaMA/comments/157ly7p/airoboros65bgpt412ggml_beats_gpt4_in_this_test/)
I thought llama-2-70b-guanaco-qlora.ggmlv3.q8_0.bin was meant to be better? Am I using it wrong?
Thanks :)
| 2023-07-29T11:06:38 | https://www.reddit.com/r/LocalLLaMA/comments/15cplvh/i_am_getting_worse_llama270bguanacoqloraggmlv3q8/ | dewijones92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15cplvh | false | null | t3_15cplvh | /r/LocalLLaMA/comments/15cplvh/i_am_getting_worse_llama270bguanacoqloraggmlv3q8/ | false | false | self | 1 | null |
I don't think llama likes me | 1 | 2023-07-29T12:10:19 | SrDigbyChickenCeaser | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15cqrri | false | null | t3_15cqrri | /r/LocalLLaMA/comments/15cqrri/i_dont_think_llama_likes_me/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'O1mDIt-v4q9M6zmrQopfIH7VVuYmthF9HiJ4NdTtkAA', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/42uh2i9qaweb1.png?width=108&crop=smart&auto=webp&s=b751832ac8f1194dcd80b336ea00aab9d6880c66', 'width': 108}, {'height': 230, 'url': 'https://preview.redd.it/42uh2i9qaweb1.png?width=216&crop=smart&auto=webp&s=e0352b97aaf934e1ba18c91decba878f2f4363b0', 'width': 216}, {'height': 341, 'url': 'https://preview.redd.it/42uh2i9qaweb1.png?width=320&crop=smart&auto=webp&s=bce96d99aea313bb5f9657440a5aba250bfd4546', 'width': 320}, {'height': 682, 'url': 'https://preview.redd.it/42uh2i9qaweb1.png?width=640&crop=smart&auto=webp&s=0dc2dcb148d95a43906b639253dfee5a4e5f0778', 'width': 640}, {'height': 1023, 'url': 'https://preview.redd.it/42uh2i9qaweb1.png?width=960&crop=smart&auto=webp&s=d685828c24bb160f415dceadb272b800185dd39d', 'width': 960}, {'height': 1151, 'url': 'https://preview.redd.it/42uh2i9qaweb1.png?width=1080&crop=smart&auto=webp&s=5932c374eefa93eddcc5b42762bc7e2e21f58e48', 'width': 1080}], 'source': {'height': 1352, 'url': 'https://preview.redd.it/42uh2i9qaweb1.png?auto=webp&s=400f9c77b3a114cd59b0cb7417279f87c2852e30', 'width': 1268}, 'variants': {}}]} |
|||
Ctransformers now support LLaMa 2 70B | 1 | 2023-07-29T12:41:51 | https://github.com/marella/ctransformers/releases | Acrobatic-Site2065 | github.com | 1970-01-01T00:00:00 | 0 | {} | 15crdwa | false | null | t3_15crdwa | /r/LocalLLaMA/comments/15crdwa/ctransformers_now_support_llama_2_70b/ | false | false | 1 | {'enabled': False, 'images': [{'id': 's7thFEAvi5RcBWVol6CRCa9kohe_K86XMAL9nrkyDUA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/v2U5P1rYB91SIqAGk1tweDnQYAPubWF9Rb1_eYqwCEE.jpg?width=108&crop=smart&auto=webp&s=280186b154e3618a038e3140e827bafc535750bc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/v2U5P1rYB91SIqAGk1tweDnQYAPubWF9Rb1_eYqwCEE.jpg?width=216&crop=smart&auto=webp&s=1287468849a98990d0d198817c421b12a5be8b5c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/v2U5P1rYB91SIqAGk1tweDnQYAPubWF9Rb1_eYqwCEE.jpg?width=320&crop=smart&auto=webp&s=9f05d9dd40db0f419e4553612561af0962187177', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/v2U5P1rYB91SIqAGk1tweDnQYAPubWF9Rb1_eYqwCEE.jpg?width=640&crop=smart&auto=webp&s=b57d90399dd04e3fa51fe176bc68bc9706b6f8d4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/v2U5P1rYB91SIqAGk1tweDnQYAPubWF9Rb1_eYqwCEE.jpg?width=960&crop=smart&auto=webp&s=8169b7af330ed3c655f802057886a1b17494d7b6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/v2U5P1rYB91SIqAGk1tweDnQYAPubWF9Rb1_eYqwCEE.jpg?width=1080&crop=smart&auto=webp&s=4a0d50702feff792b6543b75398bafd2d7a95c0f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/v2U5P1rYB91SIqAGk1tweDnQYAPubWF9Rb1_eYqwCEE.jpg?auto=webp&s=5506c692c67967529247ad84e6d228d7d690fb20', 'width': 1200}, 'variants': {}}]} |
||
This is Voxta - We made her - This is all real time | 1 | 2023-07-29T13:13:17 | https://v.redd.it/mr1wyyrvlweb1 | ReignMocap | /r/LocalLLaMA/comments/15cs1kr/this_is_voxta_we_made_her_this_is_all_real_time/ | 1970-01-01T00:00:00 | 0 | {} | 15cs1kr | false | {'reddit_video': {'bitrate_kbps': 0, 'dash_url': 'https://v.redd.it/mr1wyyrvlweb1/DASHPlaylist.mpd?a=1693314815%2CY2IzMWJkZjcxNTI1YWFlZjA5OTFmN2ViNmY2NTkyY2Q2M2M5ZjVjY2I0NjhiMmNjMDQyZWQzMjA5ZDI0YjUzMA%3D%3D&v=1&f=sd', 'duration': 75, 'fallback_url': 'https://v.redd.it/mr1wyyrvlweb1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/mr1wyyrvlweb1/HLSPlaylist.m3u8?a=1693314815%2CMDdlNjEzMTE5OWIxZDE0ZGY1Mzk2YzJhMTc0ZTEzZTI5ZmUzNmMxZDg0NTk4NTUyNjFkOTdlZDAxYmM0NDU3Nw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/mr1wyyrvlweb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_15cs1kr | /r/LocalLLaMA/comments/15cs1kr/this_is_voxta_we_made_her_this_is_all_real_time/ | false | false | nsfw | 1 | {'enabled': False, 'images': [{'id': 'UvKYWTvRhmHHVlaIZseqdwSKiRUQNk_hhgDFZyh_jU0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=108&crop=smart&format=pjpg&auto=webp&s=f0af2f593a717e3794568e74c0424b033b64cbcb', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=216&crop=smart&format=pjpg&auto=webp&s=69b67c4aa4b6eda1d27f35e738198a3190c1735d', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=320&crop=smart&format=pjpg&auto=webp&s=6c482190ea468ee0e82d1c56702b3adc53731940', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=640&crop=smart&format=pjpg&auto=webp&s=db52c78cc62fdc7551ff9594c13affd39ca633fb', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=960&crop=smart&format=pjpg&auto=webp&s=6d749c15b1a25c850a44fc15a68d905a47e14810', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=1080&crop=smart&format=pjpg&auto=webp&s=987cd814f74120cd5071d377e14a3a2c4e8f3d78', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?format=pjpg&auto=webp&s=44c35ef7e314cab2d8b8707737099604d02c49f9', 'width': 1920}, 'variants': {'nsfw': {'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=40d80db1c96757ddf1b493aea701decfb74be239', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=20cf5b68cc37cddeb5133680bbc5991d2f0e59b1', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=7493953d53f0dc48fe22d9301886dd04dea47ed2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=983d36e6a1016e326a180b5d117b640decc2a2d7', 'width': 640}, {'height': 540, 'url': 
'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=e30028b248bdeebb9aa8a31ac652c1803353fb76', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=5a795d47c6804a1d2ed1a05b4a90f89848a81eaf', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?blur=40&format=pjpg&auto=webp&s=26b8cdb0ecd3880c47ba9e1489b57c80f9d3643f', 'width': 1920}}, 'obfuscated': {'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=40d80db1c96757ddf1b493aea701decfb74be239', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=20cf5b68cc37cddeb5133680bbc5991d2f0e59b1', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=7493953d53f0dc48fe22d9301886dd04dea47ed2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=983d36e6a1016e326a180b5d117b640decc2a2d7', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=e30028b248bdeebb9aa8a31ac652c1803353fb76', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=5a795d47c6804a1d2ed1a05b4a90f89848a81eaf', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?blur=40&format=pjpg&auto=webp&s=26b8cdb0ecd3880c47ba9e1489b57c80f9d3643f', 'width': 1920}}}}]} |
|
Custom home server or pc upgrade ? | 1 | I currently have an MacBook Pro 14 with M1 processor and a computer with:
- AMD Ryzen 7 1700X Eight-Core Processor
- MSI Nvidia GTX1080
- 32 GB DDR3 RAM
- 1 x 256 GB NVMe
- 1 x 500 GB SSD
- 1 x 1 TB hard drive
I tried 13B-parameter models such as LLaMA, which still work but are quite slow. Anything bigger seems to be too much.
As I am currently also writing my thesis, I started to try all kinds of models to help me with my writing. I tried GPT4All and now LM Studio with all the different models. Because of secrecy agreements I am not allowed to use any online tools for my thesis, which is why I have resorted to local language tools. Renting an online server from a company also isn't an option.
I don't want to build this setup just for this, though. I also want to use it afterwards for hobbies, experiments and work. This raises the question of which path I should take:
1. Upgrade my PC hardware? If yes, what would you recommend?
2. Get a dedicated home server
3. Get a proper server rack
On the one hand, I would like the option to expand performance later on. I also don't want to spend a fortune from the beginning, but I am open-minded about spending more later. So which solution would you recommend?
​ | 2023-07-29T13:32:17 | https://www.reddit.com/r/LocalLLaMA/comments/15csfuk/custom_home_server_or_pc_upgrade/ | AntiqueSalamander310 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15csfuk | false | null | t3_15csfuk | /r/LocalLLaMA/comments/15csfuk/custom_home_server_or_pc_upgrade/ | false | false | self | 1 | null |
Best LLM for coding? | 1 | What's the best open source LLM for coding?
Specifically, I want it to take my code and the correct code and explain to me why my code is wrong. | 2023-07-29T13:42:44 | https://www.reddit.com/r/LocalLLaMA/comments/15csnst/best_llm_for_coding/ | Chirumer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15csnst | false | null | t3_15csnst | /r/LocalLLaMA/comments/15csnst/best_llm_for_coding/ | false | false | self | 1 | null |
Estimating RAG System Throughput on AWS GPU Instances | 5 | I'm interested in running a Retrieval-Augmented Generation (RAG) system on AWS using GPU instances. I'm looking to get advice on how to estimate the query throughput I could expect to support a desired number of concurrent users on a given node configuration.
Details:
My RAG system uses dense vector retrieval and a LLaMA-type model for generation (13B, 30B, 65B).
I'm hoping to support around 5-50 concurrent users submitting queries and getting responses in real-time. Each query would retrieve 5 relevant passages (so 5 x 400 tokens) before generating a response.
For reference, I'm running this on an ml.g5.48xlarge instance for the 70B / 65B models.
What sort of throughput could I expect to handle on this type of node with my planned RAG model? My end goal is to figure out how many concurrent users could be supported before I'd need to scale up the instance size.
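For what it's worth, the back-of-envelope arithmetic I've been sketching is below; every number is an assumption and I'd love to have them corrected:

    # rough capacity estimate for one node, purely sequential serving
    gen_tps       = 12             # generation tokens/sec for a 65B model on this node (guess)
    prompt_tps    = 200            # prompt ingestion tokens/sec (usually much faster; also a guess)
    prompt_tokens = 5 * 400 + 200  # 5 retrieved passages plus question and instructions
    answer_tokens = 300

    sec_per_query    = prompt_tokens / prompt_tps + answer_tokens / gen_tps  # ~36 s
    queries_per_hour = 3600 / sec_per_query                                  # ~100

    queries_per_user_per_hour = 10  # how often a "concurrent" user actually submits
    supported_users = queries_per_hour / queries_per_user_per_hour

    print(f"{sec_per_query:.0f}s per query, roughly {supported_users:.0f} users per node")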
Any guidance on how to model the relationship between GPU/CPU resources, RAG model complexity, query throughput and subsequent supported concurrent users would be greatly appreciated! | 2023-07-29T14:01:59 | https://www.reddit.com/r/LocalLLaMA/comments/15ct361/estimating_rag_system_throughput_on_aws_gpu/ | nuusain | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ct361 | false | null | t3_15ct361 | /r/LocalLLaMA/comments/15ct361/estimating_rag_system_throughput_on_aws_gpu/ | false | false | self | 5 | null |
Other GPUs on exllama? | 1 |
If you've used exllama with workstation GPUs, older workstation GPUs (P100, P40), Colab, or AMD, could you share your results?
Does ROCm fit less context per GB, like how flash attention has panned out for SD?
Llama text output to speech | 1 | I'm new to llama and AI/LLM.
I'm wondering if it's feasible to use llama as a speaking conversation partner.
Assuming Llama can coherently and clearly understand a person speaking, how fast will it be able to generate a response and talk back via TTS? | 2023-07-29T14:07:29 | https://www.reddit.com/r/LocalLLaMA/comments/15ct7lr/llama_text_output_to_speech/ | Such-Stay2346 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ct7lr | false | null | t3_15ct7lr | /r/LocalLLaMA/comments/15ct7lr/llama_text_output_to_speech/ | false | false | self | 1 | null |
Is it just me or Llama 2 can't output very long responses in instruct mode? | 1 | So previous LLaMa like Airoboros 7B can easily generate 512 new tokens and still want a few more on prompts like "Describe in detail how \[...\]. Write several paragraphs."
But so far 7B models I tried on this prompt run for like 150-200 tokens and consider the task done.
Am I doing something wrong, or is that how Llama 2 works?
Renting Cloud GPUs | 1 | I want to run Llama 2's 70B-chat model to assist another program I'm running. I am unable to run it efficiently on my own machine, so it looks like I'll have to rent GPUs until I can afford to build my own rig. What specs should I be looking out for? How fast would chat completion run? What are the down-sides of crowd-sourced solutions?
I'm sorry if the question is too general. I couldn't find a guide post that answered these questions so I thought it'd be best to ask.
Since I'm a novice, any advice is appreciated. Thanks in advance. | 2023-07-29T15:24:35 | https://www.reddit.com/r/LocalLLaMA/comments/15cuycs/renting_cloud_gpus/ | felatun | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15cuycs | false | null | t3_15cuycs | /r/LocalLLaMA/comments/15cuycs/renting_cloud_gpus/ | false | false | self | 1 | null |
How to authenticate in TextGen UI to download "protected" models from HF? | 1 | I am trying to download meta-llama/Llama-2-13b-hf from huggingface via TextGen but get a 403 error. I am registered on HF and can download via git but would like to understand how to do this via the TextGen UI. Thanks. | 2023-07-29T16:22:46 | https://www.reddit.com/r/LocalLLaMA/comments/15cwbf6/how_to_authenticate_in_textgen_ui_to_download/ | olddoglearnsnewtrick | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15cwbf6 | false | null | t3_15cwbf6 | /r/LocalLLaMA/comments/15cwbf6/how_to_authenticate_in_textgen_ui_to_download/ | false | false | self | 1 | null |
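For context, the meta-llama repos are gated, so I assume the downloader needs an authenticated Hugging Face session; what I plan to try next is authenticating the environment first and seeing whether the UI picks the token up (not sure it does):

    # store an HF access token for this environment; the token string is a placeholder
    from huggingface_hub import login

    login(token="hf_xxx")  # alternatively run `huggingface-cli login` in a terminal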
Fine-tuning for custom domain knowledge | 1 | I would like to see if anyone has had success fine-tuning a LLaMa model for knowledge adoption.
It seems there are many examples of successfully fine-tuning for style and behavioral steering, but not as many for informational recall. If you know of an example, please share the code and dataset (if possible)!
I have attempted to use 10K examples of synthetic data from ChatGPT-3.5 based on a set of 1.5K email conversations. For some reason, I had more success (still limited) when QLoRA-tuning on the raw emails themselves. With the raw emails I at least managed to achieve a thematic understanding of people and events. The synthetic data resulted in virtually zero recall, even at 0.72 loss.
Here is my QLoRa code using the synthetic data:
import json
import torch
import transformers
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "TheBloke/guanaco-7B-HF"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # added so the snippet is self-contained; df and outdir are defined elsewhere

# 4-bit quantization config
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)

# build the instruction-style dataset from the synthetic prompt/output pairs
dataset_dict = {'data': [{"input": f"### Human: {prompt} ### Assistant:",
                          "output": f"{output}{tokenizer.eos_token}"}
                         for prompt, output in zip(df['prompt'], df['output'])]}
with open('custom_dataset.json', 'w') as f:
    json.dump(dataset_dict, f)

data = load_dataset("json", data_files="custom_dataset.json", field='data')
# note: each pass of this loop re-tokenizes and overwrites input_ids,
# so after the loop only the last key's tokens are kept
for key in dataset_dict['data'][0].keys():
    data = data.map(lambda samples: tokenizer(samples[key], max_length=2048, truncation=True), batched=True)

model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map={"": 0})

config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=None,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM"
)
model = get_peft_model(model, config)
model.resize_token_embeddings(len(tokenizer))

trainer = transformers.Trainer(
    model=model,
    train_dataset=data['train'],
    args=transformers.TrainingArguments(
        max_steps=7000,
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        save_steps=500,
        logging_steps=1,
        learning_rate=2e-4,
        max_grad_norm=0.3,
        warmup_ratio=0.02,
        lr_scheduler_type='cosine',
        fp16=True,
        output_dir=outdir,
        optim="paged_adamw_32bit",
        report_to="none"
    ),
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)

model.config.use_cache = False  # silence the warnings. Please re-enable for inference!
trainer.train()
trainer.save_model()
Here is the structure of "custom_dataset.json":
`{"data": [{"input": "### Human: Did the team consider alternative internet providers besides Verizon and Tmobile? ### Assistant:", "output": "It appears that the team had considered using a T-Mobile SIM card as an alternative internet provider, but it was found to be unacceptable. As a result, they switched back to Verizon Broadband. It is not mentioned in the email conversation whether there were any other alternative internet providers considered besides Verizon and T-Mobile.</s>"}, {"input": "### Human: What were the expected benefits of launching xyz product? ### Assistant:","output": "The benefits were abcxyz"}...]}` | 2023-07-29T16:37:05 | https://www.reddit.com/r/LocalLLaMA/comments/15cwndg/finetuning_for_custom_domain_knowledge/ | rinse_repeat_wash | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15cwndg | false | null | t3_15cwndg | /r/LocalLLaMA/comments/15cwndg/finetuning_for_custom_domain_knowledge/ | false | false | self | 1 | null |
how to run TheBloke/airoboros-l2-70B-gpt4-1.4.1-GPTQ in uncensored mode on text-generation-webui. | 1 | [removed] | 2023-07-29T16:53:11 | https://www.reddit.com/r/LocalLLaMA/comments/15cx0x3/how_to_run_theblokeairoborosl270bgpt4141gptq_in/ | 4rfgvbgredfc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15cx0x3 | false | null | t3_15cx0x3 | /r/LocalLLaMA/comments/15cx0x3/how_to_run_theblokeairoborosl270bgpt4141gptq_in/ | false | false | nsfw | 1 | null |
how to run TheBloke/airoboros-l2-70B-gpt4-1.4.1-GPTQ in uncensored mode on text-generation-webui. | 1 | [removed] | 2023-07-29T17:01:23 | https://www.reddit.com/r/LocalLLaMA/comments/15cx7of/how_to_run_theblokeairoborosl270bgpt4141gptq_in/ | 4rfgvbgredfc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15cx7of | false | null | t3_15cx7of | /r/LocalLLaMA/comments/15cx7of/how_to_run_theblokeairoborosl270bgpt4141gptq_in/ | false | false | self | 1 | null |
Best library to use llama 2 in python project. | 1 | I want to start making some python project that uses llama 2 for predicting some text. I haven't done anything with llama-like models before, except using ChatGPT through its API. I wonder what the best python libraries exist that allow inference of llama 2 that could be easily used. It would be preferable to have all major optimizations implemented as I have only 8 GB of GPU memory. I would also like to have an option to see a probability of a given continuation, the method that is commonly used in papers, although it is not strictly necessary. | 2023-07-29T17:04:33 | https://www.reddit.com/r/LocalLLaMA/comments/15cxae8/best_library_to_use_llama_2_in_python_project/ | Another__one | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15cxae8 | false | null | t3_15cxae8 | /r/LocalLLaMA/comments/15cxae8/best_library_to_use_llama_2_in_python_project/ | false | false | self | 1 | null |
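One sketch of a setup that seems to match these requirements: transformers with bitsandbytes 4-bit loading (my assumption for fitting 7B into 8 GB of VRAM), plus a helper that scores a given continuation by summing per-token log-probabilities, which is roughly what papers report. The model id is only an example, and the prompt/continuation tokenization split is a known caveat:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "meta-llama/Llama-2-7b-hf"  # gated repo; a local path works too
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, load_in_4bit=True, device_map="auto"  # 4-bit via bitsandbytes
    )

    def continuation_logprob(prompt, continuation):
        # log P(continuation | prompt), summed over the continuation tokens
        full = tokenizer(prompt + continuation, return_tensors="pt").input_ids.to(model.device)
        prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
        with torch.no_grad():
            logits = model(full).logits
        logprobs = torch.log_softmax(logits[:, :-1].float(), dim=-1)  # position i predicts token i+1
        targets = full[:, 1:]
        token_lp = logprobs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
        return token_lp[:, prompt_len - 1:].sum().item()

    print(continuation_logprob("The capital of France is", " Paris."))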
Finetuned WizardCoder for Text-To-SQL Tasks | 1 | ### Demo
https://huggingface.co/spaces/richardr1126/sql-skeleton-wizardcoder-demo
# Introduction
This project aims to use off-the-shelf large language models for text-to-SQL program synthesis tasks. After experimenting with various models, fine-tuning hyperparameters, and training datasets, an optimal solution was identified by fine-tuning the [WizardLM/WizardCoder-15B-V1.0](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0) base model using QLoRA techniques on [this](https://huggingface.co/datasets/richardr1126/spider-context-validation) customized Spider training dataset. The resultant model, [richardr1126/spider-skeleton-wizard-coder-merged](https://huggingface.co/richardr1126/spider-skeleton-wizard-coder-merged), demonstrates **61% execution accuracy** when evaluated. The project utilizes a custom validation dataset that incorporates database context into the question. A live demonstration of the model is available on a Hugging Face Space, with a Gradio GUI.
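A minimal way to load the merged model for inference (standard transformers usage; the schema-plus-question prompt below is only an illustration, not the exact training template, and 8-bit loading is an assumption to fit a single 24 GB GPU):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "richardr1126/spider-skeleton-wizard-coder-merged"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_8bit=True)

    prompt = (
        "Database schema: CREATE TABLE singer(singer_id int, name text, age int);\n"
        "Question: How many singers are older than 30?\n"
        "SQL:"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
    print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))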
### Spider Skeleton WizardCoder - [test-suite-sql-eval](https://github.com/taoyds/test-suite-sql-eval) Results
With temperature set to 0.0, top_p set to 0.9, and top_k set to 0, the model achieves **61% execution accuracy** on the Spider dev set.
<img src="https://raw.githubusercontent.com/cuplv/text-to-sql-wizardcoder/main/eval/plots/spiderwizard-plus-chatgpt.svg" height="300">
<img src="https://raw.githubusercontent.com/cuplv/text-to-sql-wizardcoder/main/eval/plots/spiderwizard-vs-chatgpt.svg" height="300">
Note:
- ChatGPT was evaluated with the default hyperparameters and with the system message `You are a sophisticated AI assistant capable of converting text into SQL queries. You can only output SQL, don't add any other text.`
- Both models were evaluated with `--plug_value` in `evaluation.py` using the Spider dev set with database context.
- `--plug_value`: If set, the gold value will be plugged into the predicted query. This is suitable if your model does not predict values. This is set to `False` by default. | 2023-07-29T17:08:12 | https://huggingface.co/richardr1126/spider-skeleton-wizard-coder-merged | richardr1126 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 15cxdfn | false | null | t3_15cxdfn | /r/LocalLLaMA/comments/15cxdfn/finetuned_wizardcoder_for_texttosql_tasks/ | false | false | 1 | {'enabled': False, 'images': [{'id': '24K_G7zmjIzC04FcpC6WpnL85CEiB-51givrlByGgQk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-BATHQ9bg4kKD0XuvUwHHN4hYq1PYpUzQt0phM3zRF8.jpg?width=108&crop=smart&auto=webp&s=4982f598fe7580ad3aabb483b6f99b9560fdbd7d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-BATHQ9bg4kKD0XuvUwHHN4hYq1PYpUzQt0phM3zRF8.jpg?width=216&crop=smart&auto=webp&s=8e25011664168eb7a83ab744b4552ecb3f919573', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-BATHQ9bg4kKD0XuvUwHHN4hYq1PYpUzQt0phM3zRF8.jpg?width=320&crop=smart&auto=webp&s=4dbcfc76b9a2448f3cbe4095218b060c19142d98', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-BATHQ9bg4kKD0XuvUwHHN4hYq1PYpUzQt0phM3zRF8.jpg?width=640&crop=smart&auto=webp&s=9b12514baf644251ed988065171ce218fbba6dd0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-BATHQ9bg4kKD0XuvUwHHN4hYq1PYpUzQt0phM3zRF8.jpg?width=960&crop=smart&auto=webp&s=16f097d2cf79fe398fe75159b21ef2987e06229f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-BATHQ9bg4kKD0XuvUwHHN4hYq1PYpUzQt0phM3zRF8.jpg?width=1080&crop=smart&auto=webp&s=9c3beba2b4dca8b654fc53feecfc67cb5a57619b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-BATHQ9bg4kKD0XuvUwHHN4hYq1PYpUzQt0phM3zRF8.jpg?auto=webp&s=0b41ec670d66d33a684a73b4f5b2761a0b129852', 'width': 1200}, 'variants': {}}]} |
|
Out of CUDA memory with Llama2-13B on RTX4090 | 1 | First attempts so probably being a moron :( Help please.
Config **1 x RTX 4090 - 16 vCPU 83 GB RAM** on runpod via TheBloke TextGen UI.
Model is **meta-llama/Llama-2-13b-hf**
Error message is:
`torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 136.00 MiB (GPU 0; 23.65 GiB total capacity; 22.68 GiB already allocated; 43.69 MiB free; 22.68 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF` | 2023-07-29T17:13:50 | https://www.reddit.com/r/LocalLLaMA/comments/15cxi0n/out_of_cuda_memory_with_llama213b_on_rtx4090/ | olddoglearnsnewtrick | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15cxi0n | false | null | t3_15cxi0n | /r/LocalLLaMA/comments/15cxi0n/out_of_cuda_memory_with_llama213b_on_rtx4090/ | false | false | self | 1 | null |
llama 13b is performing worse than llama 7b? | 1 | I am using both models for the same use case, which is question answering from a PDF document.
I am using llama-2-7b-chat-hf and llama-2-13b-chat-hf models.
The remaining pipeline being the same, the responses I'm getting from the 13B version are significantly worse than from the 7B counterpart. The 7B model provides good answers with a decent output length most of the time; the 13B model either gives very short, curt responses, or it just says it doesn't have the context to answer the question (which isn't true; the relevant info is being passed to the model in the prompt).
Does anyone know why this is the case, or how I could fix it? I am using the same hyperparameters for both models. My next step was going to be fine-tuning the models, but the base performance of 13B was worse than that of 7B.
Is anyone else facing this, or is there a problem with the hyperparams for the models? Maybe fine-tuning will result in the 13B variant providing better answers, but I wanted some guidance before I commit to something.
Oobabooga same output, repetition | 1 | Anyway know why oobabooga ui regenerates same text and Kobold.cpp does not? | 2023-07-29T18:32:55 | https://www.reddit.com/r/LocalLLaMA/comments/15czc3x/oobabooga_same_output_repetition/ | Emergency_Drink_7063 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15czc3x | false | null | t3_15czc3x | /r/LocalLLaMA/comments/15czc3x/oobabooga_same_output_repetition/ | false | false | self | 1 | null |
Subreddit Wiki is forbidden (including the model page) | 1 | [removed] | 2023-07-29T19:18:06 | https://www.reddit.com/r/LocalLLaMA/comments/15d0efz/subreddit_wiki_is_forbidden_including_the_model/ | All_men_are_brothers | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15d0efz | false | null | t3_15d0efz | /r/LocalLLaMA/comments/15d0efz/subreddit_wiki_is_forbidden_including_the_model/ | false | false | self | 1 | null |
Parsing ArXiV Document/Paper into Sections | 1 | Hey everyone! Is there a way of parsing a downloaded ArXiV pdf document into its constituent sections via Python? I want to get rid of all tables and figures and only use the text present in the document. Thank you. | 2023-07-29T19:45:13 | https://www.reddit.com/r/LocalLLaMA/comments/15d10ws/parsing_arxiv_documentpaper_into_sections/ | psj_2908 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15d10ws | false | null | t3_15d10ws | /r/LocalLLaMA/comments/15d10ws/parsing_arxiv_documentpaper_into_sections/ | false | false | self | 1 | null |
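One possible approach (untested sketch): extract the text layer with PyMuPDF and split on numbered section headings; figures and table graphics mostly drop out because only the text layer is pulled, though captions may still need filtering and the heading regex will need tuning per paper:

    import re
    import fitz  # PyMuPDF

    doc = fitz.open("paper.pdf")
    text = "\n".join(page.get_text("text") for page in doc)

    # headings like "1 Introduction" or "3.2 Method" at the start of a line
    pattern = re.compile(r"^\d+(\.\d+)*\s+[A-Z][^\n]{0,80}$", re.MULTILINE)
    starts = [m.start() for m in pattern.finditer(text)] + [len(text)]
    sections = [text[a:b].strip() for a, b in zip(starts, starts[1:])]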
Anyone know how I can use llama 2 to do roleplay? | 1 | Well I see at 4chan people doing roleplay with llama2. So I want to try it. But with google colab and llama2 | 2023-07-29T20:24:03 | https://www.reddit.com/r/LocalLLaMA/comments/15d1xdp/anyone_know_how_i_can_use_llama_2_to_do_roleplay/ | juanse2609 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15d1xdp | false | null | t3_15d1xdp | /r/LocalLLaMA/comments/15d1xdp/anyone_know_how_i_can_use_llama_2_to_do_roleplay/ | false | false | self | 1 | null |
Pre-Training Artifacts in Falcon-740 | 1 | [removed] | 2023-07-29T20:59:01 | https://www.reddit.com/r/LocalLLaMA/comments/15d2qo4/pretraining_artifacts_in_falcon740/ | Mbando | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15d2qo4 | false | null | t3_15d2qo4 | /r/LocalLLaMA/comments/15d2qo4/pretraining_artifacts_in_falcon740/ | false | false | self | 1 | null |
Pre-Training Artifacts in Falcon-40b | 1 | [removed] | 2023-07-29T21:00:17 | https://www.reddit.com/r/LocalLLaMA/comments/15d2rp7/pretraining_artifacts_in_falcon40b/ | Mbando | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15d2rp7 | false | null | t3_15d2rp7 | /r/LocalLLaMA/comments/15d2rp7/pretraining_artifacts_in_falcon40b/ | false | false | self | 1 | null |
How do you prevent your LLM from overgenerating? | 1 | I'm playing around with airoboros 33b and following a standard 'USER: ...' 'GIRLFRIEND: ...' format but the LLM occasionally goes off the rails and throws a response like this:
"Oh no, did I have another dream about us?" USER: haha yeah. but it's okay. we can go back to sleep if u want GIRLFRIEND: That sounds perfect. I'm sleepy..."
​
Thoughts? | 2023-07-29T21:02:50 | https://www.reddit.com/r/LocalLLaMA/comments/15d2u94/how_do_you_prevent_your_llm_from_overgenerating/ | lukeborgen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15d2u94 | false | null | t3_15d2u94 | /r/LocalLLaMA/comments/15d2u94/how_do_you_prevent_your_llm_from_overgenerating/ | false | false | self | 1 | null |
Get the following error when trying to load models locally? | 1 | I haven't been able to find a solution anywhere else. I am using ex llama 8k gptq models loaded with the exllama model loader. All of which have the same error message. Running on 3090 gpu. If anyone has solved this or know how to I would really appreciate any help. Thanks!
​
Traceback (most recent call last):
File “D:\oobabooga_windows\text-generation-webui\server.py”, line 68, in load_model_wrapper shared.model, shared.tokenizer = load_model(shared.model_name, loader)
File “D:\oobabooga_windows\text-generation-webui\modules\models.py”, line 79, in load_model output = load_func_maploader
File “D:\oobabooga_windows\text-generation-webui\modules\models.py”, line 320, in ExLlama_loader model, tokenizer = ExllamaModel.from_pretrained(model_name)
File “D:\oobabooga_windows\text-generation-webui\modules\exllama.py”, line 67, in from_pretrained model = ExLlama(config)
File “D:\oobabooga_windows\installer_files\env\lib\site-packages\exllama\model.py”, line 732, in init with safe_open(self.config.model_path, framework = “pt”, device = “cpu”) as f: safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer
​ | 2023-07-29T21:07:26 | https://www.reddit.com/r/LocalLLaMA/comments/15d2y67/get_the_following_error_when_trying_to_load/ | StoryScapeStudios | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15d2y67 | false | null | t3_15d2y67 | /r/LocalLLaMA/comments/15d2y67/get_the_following_error_when_trying_to_load/ | false | false | self | 1 | null |
Issues with System Prompt Following for finetuned Llama-2 models (e.g. airoboros-l2-70B-gpt4-1.4.1 and StableBeluga2) | 1 | This is only after playing a bit around with Llama-2 and finetuned models, so there's a big chance I'm doing something terribly wrong, but what I've found so far is that while Llama-2 seems to be able to follow the system prompt quite religiously, other models tend to kind of follow it or completely ignore it.
Please find below some examples that I tried and the results I got for: **StableBeluga2**, **airoboros-l2-70B-gpt4-1.4.1**, and **Llama-2-70B-chat.**
Am I missing something or have these finetunes (in this case StableBeluga2 and airoboros-l2) somehow negatively affected the model's ability to follow the system prompt?
Would love to hear your thoughts / experiences.
------------------------------------------------------------------------------------------------------------------------------------
​
**TheBloke\_StableBeluga2-GPTQ**
*In this example, StableBeluga2 completely ignores the system prompt:*
Prompt:
You are a very unhelpful assistant that only answers in riddles.
USER: What is the capital of France?
ASSISTANT:
Output:
The capital of France is Paris, known as the City of Light and home to iconic landmarks such as the Eiffel Tower and Notre-Dame Cathedral.
*In this example, it technically follows it but not as religiously as the original Llama2 model would have (see below).*
Prompt:
You are super drunk and this reflects on the way you respond. You are basically unable to answer any questions seriously.
USER: What is the capital of France?
ASSISTANT:
Output:
🇫🇷 Paris, duh! But I'm not sure if it's still called that when you're this wasted.
​
------------------------------------------------------------------------------------------------------------------------------------
​
**TheBloke\_airoboros-l2-70B-gpt4-1.4.1-GPTQ**
*In this example, airoboros completely disregards the system prompt.*
Prompt:
You are a very unhelpful assistant that only answers in riddles.
USER: What is the capital of France?
ASSISTANT:
Output:
Paris
*Same with this example.*
Prompt:
You are super drunk and this reflects on the way you respond. You are basically unable to answer any questions seriously.
USER: What is the capital of France?
ASSISTANT:
Output:
Paris
------------------------------------------------------------------------------------------------------------------------------------
​
**TheBloke\_Llama-2-70B-chat-GPTQ**
*Unlike the finetuned Llama-2 models, the original Llama-2 follows the system prompt really well.*
Prompt:
[INST] <<SYS>>
You are a very unhelpful assistant that only answers in riddles.
<</SYS>>
What is the capital of France? [/INST]
Output:
Ah, my dear friend, the answer to that question is not so simple as it seems. The capital of France, you see, is a place of great beauty and wonder, where art and culture come alive in every street corner. It's a city of lights, of love, of laughter and dreams. But alas, I cannot give you a straightforward answer, for that would be too easy. Instead, I shall offer you this clue:
In the land of cheese and wine, where fashion reigns supreme,
A river flows, a symbol of power and grace.
The city of romance, of art and of glee,
Find the heart of France, and there you'll find its name.
Now, my friend, can you guess the capital of France?
*Same here.*
Prompt:
[INST] <<SYS>>
You are super drunk and this reflects on the way you respond. You are basically unable to answer any questions seriously.
<</SYS>>
What is the capital of France? [/INST]
Output:
Ooh la la! *hiccup* Oh, you want to know the capital of France, eh? Well, let me see... *hiccup* I think it's... *hiccup* Hmm, is it... *hiccup* Yeah, I'm pretty sure it's... *hiccup* Wait, what was the question again? *hiccup* 😜
​ | 2023-07-29T21:13:02 | https://www.reddit.com/r/LocalLLaMA/comments/15d32p1/issues_with_system_prompt_following_for_finetuned/ | nightlingo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15d32p1 | false | null | t3_15d32p1 | /r/LocalLLaMA/comments/15d32p1/issues_with_system_prompt_following_for_finetuned/ | false | false | self | 1 | null |
How to calculate tokens in LLaMA output? | 1 | Trying to compare the tok/sec result between LLaMa.cpp and Replicate and was wondering how we calculate the total tokens. Will it be the same as the result from tiktoken library? | 2023-07-29T21:59:08 | https://www.reddit.com/r/LocalLLaMA/comments/15d464y/how_to_calculate_tokens_in_llama_output/ | AmoghMishra2303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15d464y | false | null | t3_15d464y | /r/LocalLLaMA/comments/15d464y/how_to_calculate_tokens_in_llama_output/ | false | false | self | 1 | null |
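For what it's worth, tiktoken implements OpenAI's BPE vocabularies, so its counts generally won't match LLaMA's SentencePiece tokenizer; counting with the model's own tokenizer avoids the mismatch. A small sketch (the model id is just an example; a local tokenizer path works too):

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
    text = "The quick brown fox jumps over the lazy dog."
    n_tokens = len(tokenizer.encode(text, add_special_tokens=False))
    print(n_tokens)  # tokens/sec = tokens generated / wall-clock seconds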
Fully finetuning LLaMA 2-7B | 1 | I am trying to fully finetune LLaMA 2 7B using [this](https://github.com/hiyouga/LLaMA-Efficient-Tuning) repo via 8 A100 (40GB) GPUs but consistently getting OOM error. I am quite new to fine-tuning LLMs so not sure if this is normal. Any advice would be greatly appreciated. | 2023-07-29T23:01:46 | https://www.reddit.com/r/LocalLLaMA/comments/15d5lp7/fully_finetuning_llama_27b/ | Ornery-Young-7346 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15d5lp7 | false | null | t3_15d5lp7 | /r/LocalLLaMA/comments/15d5lp7/fully_finetuning_llama_27b/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Nj2g5gVB5tKBNRVtk4uJV44WvMa2GCNq_nTsDEOfS7c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/c8YaawyIonmwmCC3lq0wJobSSPCxbxKWm8vljWr6hQo.jpg?width=108&crop=smart&auto=webp&s=934113308863ea9a8a05f9c69c287ab3228f7e55', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/c8YaawyIonmwmCC3lq0wJobSSPCxbxKWm8vljWr6hQo.jpg?width=216&crop=smart&auto=webp&s=c90a1eccfa895dd95564c11c3f55d92a792dad04', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/c8YaawyIonmwmCC3lq0wJobSSPCxbxKWm8vljWr6hQo.jpg?width=320&crop=smart&auto=webp&s=ee8c3b03dd4896e122d6db26038ab69f750cfa08', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/c8YaawyIonmwmCC3lq0wJobSSPCxbxKWm8vljWr6hQo.jpg?width=640&crop=smart&auto=webp&s=c943c13cffa5d05a393f26b317c849a3ac2493bd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/c8YaawyIonmwmCC3lq0wJobSSPCxbxKWm8vljWr6hQo.jpg?width=960&crop=smart&auto=webp&s=fc5300445eec645a4cdc8d83226669db3f6bf81b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/c8YaawyIonmwmCC3lq0wJobSSPCxbxKWm8vljWr6hQo.jpg?width=1080&crop=smart&auto=webp&s=c82e37df269be3734c124bfdbfd7d38301bbd64d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/c8YaawyIonmwmCC3lq0wJobSSPCxbxKWm8vljWr6hQo.jpg?auto=webp&s=6bf58f4985a786733de1dd8c24f762d87bfc2599', 'width': 1200}, 'variants': {}}]} |
Is it possible to run LLaMA v2 locally with a 4G GTX960 and 16 Gb RAM on a desktop? | 1 | I want to try using this new technology to help me with text writing, but my computer is a low-end device and cannot run LLaMA in GPU mode. I want to know whether it's possible to run LLaMA locally on my desktop, and if so, how to do it? | 2023-07-30T00:42:07 | https://www.reddit.com/r/LocalLLaMA/comments/15d7r8c/is_it_possible_to_run_llama_v2_locally_with_a_4g/ | Tony_Malloy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15d7r8c | false | null | t3_15d7r8c | /r/LocalLLaMA/comments/15d7r8c/is_it_possible_to_run_llama_v2_locally_with_a_4g/ | false | false | self | 1 | null |
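Short answer to the question above: yes, via CPU inference of a 4-bit quantized 7B model with llama.cpp; the 4 GB GTX 960 cannot hold the model, but 16 GB of system RAM is enough. A minimal sketch with the `llama-cpp-python` bindings — the model file name is an example, not a prescribed download:

```python
# CPU-only sketch: a 4-bit 7B GGML file is ~4 GB and fits in 16 GB of RAM,
# so the 4 GB GTX 960 is not required (llama.cpp can optionally offload a
# few layers to it, but that is not needed to get going).
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-7b-chat.ggmlv3.q4_K_M.bin",  # any 4-bit quantized 7B file
    n_ctx=2048,
    n_threads=8,   # set to your physical core count
)

out = llm("Write a short opening paragraph for a mystery story.", max_tokens=200)
print(out["choices"][0]["text"])
```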
Calling All Tech Heros: Help! I'm Trapped and Desperately Need a New GPU for Falcon 40B! Can Anyone Rescue Me? | 1 | Hello, I am new to Stable Diffusion, AI models, LLMs, and new to GPUs too.
So I need some help with choosing the right GPU for my needs.
My needs: about 1 h/day of Stable Diffusion; running my own ChatGPT-style LLM (specifically the Falcon 40B h2o.ai version) on my PC 24 h/day (locally, instead of on a server); and no gaming at all.
So the problem, as you can see, is that I will run multiple AI workloads at once. The LLM (Falcon 40B) will run 24 h/day like a cloud service, but locally, which makes me wonder about the VRAM and GPU power needed.
Do I have to get multiple GPUs? Or two setups? One for the 24h model and the other one for anything else?
Side note: I have never owned a discrete GPU; every GPU I've had was integrated. So this is my first time considering a dedicated GPU. | 2023-07-30T01:15:37 | https://www.reddit.com/r/LocalLLaMA/comments/15d8fst/calling_all_tech_heros_help_im_trapped_and/ | SakamotoKyu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15d8fst | false | null | t3_15d8fst | /r/LocalLLaMA/comments/15d8fst/calling_all_tech_heros_help_im_trapped_and/ | false | false | self | 1 | null |
I copied the link from the Bloke and paste it in oobabooga and click download, oobabooga starts downloading all quantized variations of the model ! Do I just wait for it ALL to download, more then 100 GB !!!, or can I just create a folder and download what is needed, what files are needed ? | 1 | [removed] | 2023-07-30T02:58:11 | https://www.reddit.com/r/LocalLLaMA/comments/15dafpi/i_copied_the_link_from_the_bloke_and_paste_it_in/ | Vitamin_C_is_awesome | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dafpi | false | null | t3_15dafpi | /r/LocalLLaMA/comments/15dafpi/i_copied_the_link_from_the_bloke_and_paste_it_in/ | false | false | self | 1 | null |
Why you use others versions of Llama if we have the most powerful which is Llama 2 - 70B | 1 | I ask because I see everyone working on different versions of it, like 7B, 13B, 35B, etc.
If the most powerful is 70B, why not focus on it?
Honestly, I'm not very clear on this; my best guess is that it's to create diversity in applications, with the goal of being able to run the models on phones.
Is this the reason, or are there more? | 2023-07-30T03:02:55 | https://www.reddit.com/r/LocalLLaMA/comments/15dajb7/why_you_use_others_versions_of_llama_if_we_have/ | HighWillord | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dajb7 | false | null | t3_15dajb7 | /r/LocalLLaMA/comments/15dajb7/why_you_use_others_versions_of_llama_if_we_have/ | false | false | self | 1 | null |
A Review: Using Llama 2 to Chat with Notes on Consumer Hardware | 1 | We recently integrated Llama 2 into [Khoj](https://github.com/khoj-ai/khoj). I wanted to share a short real-world evaluation of using Llama 2 for the chat-with-docs use case and hear which models have worked best for you all. The standard benchmarks (ARC, HellaSwag, MMLU etc.) are not tuned for evaluating this use case.
**Evaluation:**
* Llama 2 is the first offline chat model I've tested that is **good enough to chat with my docs**. It can pull out answers and generate new content from my existing notes most of the time. This is pretty great for creating offline, privacy first applications. Falcon and older Llama based models were pretty bad at instruction following and were unfortunately not practically usable for this.
* Llama 2 (7B) is **not better than ChatGPT** or GPT4. It has a tendency to hallucinate, the smaller context window limits how many notes can be passed to it and having some irrelevant notes in the context can prevent it from pulling out an answer from the relevant note. This is not a problem with GPT3.5+
* With the speed at which offline chat models are evolving, I believe we'll have ChatGPT equivalents on consumer hardware by next year 🤞🏾. The fact that it is open source and commercially usable will help a lot.
**Background:**
* u/sabakhoj and I've tested Falcon 7B and used GPT-3+ regularly over the last 2 years
* Khoj uses [TheBloke's Llama 2 7B](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML) (specifically llama-2-7b-chat.ggmlv3.q4\_K\_S)
​
**Demo**
[Showcase content retrieval and generation from existing notes](https://reddit.com/link/15darr6/video/cs7cozcmr0fb1/player)
​ | 2023-07-30T03:15:22 | https://www.reddit.com/r/LocalLLaMA/comments/15darr6/a_review_using_llama_2_to_chat_with_notes_on/ | hoperyto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15darr6 | false | null | t3_15darr6 | /r/LocalLLaMA/comments/15darr6/a_review_using_llama_2_to_chat_with_notes_on/ | false | false | 1 | {'enabled': False, 'images': [{'id': '0zJJqyaQQs6InWqjPXotRt7AKVso5slBY5DY4OMdlFI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/87Tmwe4abAxXATeZtfyxsYyf0amZvmYrIVyHbyu7Uqw.jpg?width=108&crop=smart&auto=webp&s=720a87a6ec4a11ebb863138d16e16db827af8315', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/87Tmwe4abAxXATeZtfyxsYyf0amZvmYrIVyHbyu7Uqw.jpg?width=216&crop=smart&auto=webp&s=ff36d92aa02e136baf0cb7ba5331293984a9e8f9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/87Tmwe4abAxXATeZtfyxsYyf0amZvmYrIVyHbyu7Uqw.jpg?width=320&crop=smart&auto=webp&s=8f87cbaf080369b25a734165e57fb9bc3d036a5e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/87Tmwe4abAxXATeZtfyxsYyf0amZvmYrIVyHbyu7Uqw.jpg?width=640&crop=smart&auto=webp&s=2b8011603b72aa00092874f2d70aac17726acd3f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/87Tmwe4abAxXATeZtfyxsYyf0amZvmYrIVyHbyu7Uqw.jpg?width=960&crop=smart&auto=webp&s=5475bf3506727d0ba7976bc8140efcb99fcdc658', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/87Tmwe4abAxXATeZtfyxsYyf0amZvmYrIVyHbyu7Uqw.jpg?width=1080&crop=smart&auto=webp&s=35a37aee08042280e3e498d3e87b2ca0255b5352', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/87Tmwe4abAxXATeZtfyxsYyf0amZvmYrIVyHbyu7Uqw.jpg?auto=webp&s=e60edd6b7f48b54958f95c861bd0827516614f20', 'width': 1200}, 'variants': {}}]} |
Only able to output 1000 tokens total on ooba? is that normal? | 1 | I'm new to local AI stuff. I have a 3060 12 GB and I'm running TheBloke_airoboros-l2-13B-gpt4-1.4.1-GPTQ at 4096 tokens. I know my GPU can't fully handle 4096, but is it normal to only get 1000 tokens in length? I'm using text-generation-webui to write some short stories from a smallish ~100-token prompt and wanted to see why it won't continue the story after 1000 tokens have been generated; it usually does two 500-token generations, then generates 0 tokens when I try to continue past that point. I have ExLlama on with max-seq-length at 4096 and compress-pos-emb at 2, but it also happens with max-seq-length at 2048 and compress-pos-emb at 1. Is this normal? | 2023-07-30T04:25:32 | https://www.reddit.com/r/LocalLLaMA/comments/15dc3si/only_able_to_output_1000_tokens_total_on_ooba_is/ | Dramatic_Borger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dc3si | false | null | t3_15dc3si | /r/LocalLLaMA/comments/15dc3si/only_able_to_output_1000_tokens_total_on_ooba_is/ | false | false | self | 1 | null |
What are some good open source projects using the Guidance prompting library? | 1 | I'm looking to learn from example. [link](https://github.com/microsoft/Guidance) to library for the curious | 2023-07-30T06:17:53 | https://www.reddit.com/r/LocalLLaMA/comments/15de570/what_are_some_good_open_source_projects_using_the/ | -Django | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15de570 | false | null | t3_15de570 | /r/LocalLLaMA/comments/15de570/what_are_some_good_open_source_projects_using_the/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'x3RWQbLa5164A1hTBcNJTSMH9O6Z5cihDtpb2hELFLQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qf_5DP_YtVv1xLndJ50nOZD21uriMVCj4JC8FvtsU-k.jpg?width=108&crop=smart&auto=webp&s=07bc6460f127663ba8438d1c6526331b76bbc55d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qf_5DP_YtVv1xLndJ50nOZD21uriMVCj4JC8FvtsU-k.jpg?width=216&crop=smart&auto=webp&s=efc6fad64c65e28fdfc5d586128e7e84b02d79e4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qf_5DP_YtVv1xLndJ50nOZD21uriMVCj4JC8FvtsU-k.jpg?width=320&crop=smart&auto=webp&s=e487250bbbf32ffd32494145013186e98b546c6e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qf_5DP_YtVv1xLndJ50nOZD21uriMVCj4JC8FvtsU-k.jpg?width=640&crop=smart&auto=webp&s=3cabc0744ac1ed0d479a1619ac2bb43d0b81763a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qf_5DP_YtVv1xLndJ50nOZD21uriMVCj4JC8FvtsU-k.jpg?width=960&crop=smart&auto=webp&s=6df85feb34bcce9859be854db8a53d9f2134b350', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qf_5DP_YtVv1xLndJ50nOZD21uriMVCj4JC8FvtsU-k.jpg?width=1080&crop=smart&auto=webp&s=7bce159395e97891c26a9103ff13507fbacd89a4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qf_5DP_YtVv1xLndJ50nOZD21uriMVCj4JC8FvtsU-k.jpg?auto=webp&s=911a5a11fc0a51b4379569933670b94147886364', 'width': 1200}, 'variants': {}}]} |
Is it possible to run SFT after RLHF? What would happen? | 1 | Llama-2-chat is an RLHF model that works pretty well, but it doesn't have multilingual ability.
So I'm wondering whether it is possible to run another round of SFT on Llama-2-chat with a multilingual dataset like [mlabonne/guanaco-llama2-1k](https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k) to get a Llama-2-chat model that can speak different languages while preserving what it learned previously from SFT and RLHF.
Has anyone tried that? Does it work? | 2023-07-30T06:26:37 | https://www.reddit.com/r/LocalLLaMA/comments/15deamj/is_it_possible_to_run_sft_after_rlhf_what_would/ | CORNMONSTER_2022 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15deamj | false | null | t3_15deamj | /r/LocalLLaMA/comments/15deamj/is_it_possible_to_run_sft_after_rlhf_what_would/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'VD8BNjZdlcrGoshtjvSCNv_Jy2CkUJ_W2ZwE77TM1G8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/z_dxK9BQsC9bpzByaAyQd7MuTs_IAxR0GoFKru19_Ig.jpg?width=108&crop=smart&auto=webp&s=c7d006087fc75bd0a185250300320f684047bf99', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/z_dxK9BQsC9bpzByaAyQd7MuTs_IAxR0GoFKru19_Ig.jpg?width=216&crop=smart&auto=webp&s=e8173a979424f01a924a536eacd8191b912c55af', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/z_dxK9BQsC9bpzByaAyQd7MuTs_IAxR0GoFKru19_Ig.jpg?width=320&crop=smart&auto=webp&s=f9ae3b5d2fc2a833af4695f60c8e4feb6fc18a1a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/z_dxK9BQsC9bpzByaAyQd7MuTs_IAxR0GoFKru19_Ig.jpg?width=640&crop=smart&auto=webp&s=104b4766a0d4f5d8bd56d54ddfa51ecb42afab02', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/z_dxK9BQsC9bpzByaAyQd7MuTs_IAxR0GoFKru19_Ig.jpg?width=960&crop=smart&auto=webp&s=34b1c85c6b046061bdce8d1b0b1f8793d1cfa927', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/z_dxK9BQsC9bpzByaAyQd7MuTs_IAxR0GoFKru19_Ig.jpg?width=1080&crop=smart&auto=webp&s=d967315f4d30ea7e87ad8d237529a6e4b9ace02d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/z_dxK9BQsC9bpzByaAyQd7MuTs_IAxR0GoFKru19_Ig.jpg?auto=webp&s=7ebbf4dfbd0c91835a8ca16a2812c5e120dbc009', 'width': 1200}, 'variants': {}}]} |
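Continued SFT on top of an RLHF'd chat model is commonly done; the usual caution is catastrophic forgetting, which is why most people attach a low-rank adapter rather than updating all weights. A hedged sketch with `peft`/`trl`, assuming the dataset named in the post ships a pre-formatted `text` column:

```python
# Hypothetical continued-SFT run: a LoRA adapter is trained on top of the
# RLHF'd chat model, so the frozen base weights keep most of what instruction
# tuning and RLHF taught it while the adapter absorbs the new multilingual data.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base = "meta-llama/Llama-2-7b-chat-hf"
model = AutoModelForCausalLM.from_pretrained(base, load_in_4bit=True, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA's tokenizer ships without a pad token

dataset = load_dataset("mlabonne/guanaco-llama2-1k", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumes the dataset's pre-formatted "text" column
    peft_config=LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM"),
    max_seq_length=1024,
    args=TrainingArguments(
        output_dir="llama2-chat-multilingual",
        num_train_epochs=1,
        per_device_train_batch_size=4,
        learning_rate=2e-4,
    ),
)
trainer.train()
```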
i cant seem to get my ai to work on my pc | 1 | [removed] | 2023-07-30T07:11:52 | https://www.reddit.com/r/LocalLLaMA/comments/15df1yh/i_cant_seem_to_get_my_ai_to_work_on_my_pc/ | Limisap_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15df1yh | false | {'oembed': {'author_name': 'TroubleChute', 'author_url': 'https://www.youtube.com/@TroubleChute', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/7KFYtVVrAX4?start=15&feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="INSANE NEW 13B UNCENSORED MODEL 🤯One-Click Install | WizardLM"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/7KFYtVVrAX4/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'INSANE NEW 13B UNCENSORED MODEL 🤯One-Click Install | WizardLM', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_15df1yh | /r/LocalLLaMA/comments/15df1yh/i_cant_seem_to_get_my_ai_to_work_on_my_pc/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'c9UiwLTDf8-8SWpb59hwWnG41qJEaCq_y9DN_uQlt40', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/XDgQaCCeWWEhRjROBHXH7krQlzoVgbBKW4Kw9bIfG3Y.jpg?width=108&crop=smart&auto=webp&s=bff699d4db86bcddfc67a9002557f9d3243f3a6a', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/XDgQaCCeWWEhRjROBHXH7krQlzoVgbBKW4Kw9bIfG3Y.jpg?width=216&crop=smart&auto=webp&s=fbffdcb55cd32334dfcbadb4a56d39df7a57a1a4', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/XDgQaCCeWWEhRjROBHXH7krQlzoVgbBKW4Kw9bIfG3Y.jpg?width=320&crop=smart&auto=webp&s=5af2356659e326b52eb7d0e21505b33186860ed5', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/XDgQaCCeWWEhRjROBHXH7krQlzoVgbBKW4Kw9bIfG3Y.jpg?auto=webp&s=cf2da56d043e763b211aa7116c7388408fadad26', 'width': 480}, 'variants': {}}]} |
Is there any organization working on an open source 175B model? If not what is the extra compute required compared to current 70B models. | 1 | Most of the models and finetunes we get are based on Llama 2 and LLaMA, which are excellent base models, and the results we get are almost comparable to GPT-3.5 with less than 50 percent of the parameters.
If we were actually able to increase the parameters to 175B, or at least 100B, would the results improve and surpass GPT-3.5 or GPT-4?
And how much extra compute would be required? I know 70B models are trained on A100s; for a 175B model, how many more do we need? Practically, any such released model would not run on consumer hardware, but it would be nice to see. | 2023-07-30T07:29:21 | https://www.reddit.com/r/LocalLLaMA/comments/15dfc8y/is_there_any_organization_working_on_an_open/ | timedacorn369 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dfc8y | false | null | t3_15dfc8y | /r/LocalLLaMA/comments/15dfc8y/is_there_any_organization_working_on_an_open/ | false | false | self | 1 | null |
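To put rough numbers on the compute question above — treat these as back-of-the-envelope estimates, not measured figures (the ~1.7M A100-hours for the 70B run is the figure reported in the Llama 2 paper):

```python
# Back-of-the-envelope only. Training FLOPs ~= 6 * params * tokens;
# Llama 2 trained on ~2T tokens and the 70B run took ~1.7M A100-hours.
params_70b, params_175b, tokens = 70e9, 175e9, 2e12

flops_ratio = (6 * params_175b * tokens) / (6 * params_70b * tokens)  # = 2.5x the compute

# Mixed-precision Adam needs roughly 16 bytes per parameter
# (bf16 weights + grads, fp32 master weights + two moments).
state_tb = 16 * params_175b / 1e12          # ~2.8 TB of weight/optimizer state
a100s_for_state = 16 * params_175b / 80e9   # ~35 A100-80GB just for the states,
                                            # before activations and batch size

print(flops_ratio, state_tb, a100s_for_state)
```

So at the same token count, a 175B run is roughly 2.5x the 70B training cost, on the order of 4M+ A100-hours, which is why few open-source groups attempt it.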
2070 Super for fine tuning | 1 | I have a 2070 Super on my Linux rig that I'm currently running headless, and I want to start getting into training my own LLMs. Is a 2070 Super with 8 GB enough for fine-tuning a 7B or 13B quantized model?
I have a decent bit of cash, but I don't know if I want to invest in a 3090-class card yet. Will the 2070 Super be enough for starting out? Or should I try to sell it on eBay and pick up a 12 GB VRAM 3060, which I've heard is good for stuff like this. | 2023-07-30T08:05:41 | https://www.reddit.com/r/LocalLLaMA/comments/15dfydz/2070_super_for_fine_tuning/ | QuantumTyping33 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dfydz | false | null | t3_15dfydz | /r/LocalLLaMA/comments/15dfydz/2070_super_for_fine_tuning/ | false | false | self | 1 | null |
how to make the models like airoboros-l2-70b-gpt4-1.4.1.ggmlv3.q8_0.bin more intelligent? Maybe some sort of "code interpreter"? | 1 | [https://paste.c-net.org/NightyLately](https://paste.c-net.org/NightyLately)
So far, this model is the closest LLM *(that is publicly available to download and run)* to correctly answering my puzzle.
\----
Is there something I can change (maybe the prompt) so it answers it correctly?
​
Also...
"gpt 4 publically" has made me think...
Models could write code to better answer puzzles like mine.
I tested this with GPT4. I told it to write code to help solve the problem:
[https://chat.openai.com/share/39f2cecb-d1c3-4681-91e9-57e4dea1cb11](https://chat.openai.com/share/39f2cecb-d1c3-4681-91e9-57e4dea1cb11)
​
Maybe a similar system could be used to help improve the capabilities of other LLMs?
Does such a system exist?
​
thanks :)
​
​ | 2023-07-30T08:07:18 | https://www.reddit.com/r/LocalLLaMA/comments/15dfzag/how_to_make_the_models_like/ | dewijones92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dfzag | false | null | t3_15dfzag | /r/LocalLLaMA/comments/15dfzag/how_to_make_the_models_like/ | false | false | self | 1 | null |
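Systems like the one described above do exist — program-aided prompting (PAL) and the tool-use loops in agent frameworks ask the model to emit Python for the quantitative part of a problem, execute it, and feed the result back. A deliberately minimal, hedged sketch of that loop; `generate()` is a placeholder for whatever backend is in use (llama.cpp, text-generation-webui's API, etc.), and the `exec()` call has no sandboxing:

```python
# Minimal "write code, run it, read the result back" loop.
import contextlib
import io
import re

def generate(prompt: str) -> str:
    """Placeholder: call your local model here."""
    raise NotImplementedError

def solve_with_code(puzzle: str) -> str:
    prompt = (
        "Solve the puzzle below. First write a short Python program that "
        "computes the answer inside a ```python code block.\n\nPuzzle: " + puzzle
    )
    reply = generate(prompt)
    match = re.search(r"```python\n(.*?)```", reply, re.DOTALL)
    if not match:
        return reply  # the model answered directly; nothing to execute

    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):   # capture the program's print() output
        exec(match.group(1), {})               # unsafe: run only in a throwaway environment
    result = buffer.getvalue().strip()

    # Second pass: let the model turn the computed result into a final answer.
    return generate(prompt + "\n\nProgram output: " + result + "\n\nFinal answer:")
```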
What's the best llm for translating japanese text into english | 1 | So I need to translate a game from Japanese into English. While ChatGPT is really good at translating Japanese, I was wondering whether there are any local LLMs that are good at translating Japanese text. | 2023-07-30T09:20:21 | https://www.reddit.com/r/LocalLLaMA/comments/15dh4rg/whats_the_best_llm_for_translating_japanese_text/ | ghjlans | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dh4rg | false | null | t3_15dh4rg | /r/LocalLLaMA/comments/15dh4rg/whats_the_best_llm_for_translating_japanese_text/ | false | false | self | 1 | null |
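For what it's worth, the chat-tuned Llama 2 models can be prompted for JA→EN translation, though quality is noticeably below ChatGPT; a tiny sketch reusing `llama-cpp-python` (the model file name is an assumption):

```python
# JA -> EN prompt sketch with a local chat model; expect rougher output than ChatGPT.
from llama_cpp import Llama

llm = Llama(model_path="llama-2-13b-chat.ggmlv3.q4_K_M.bin", n_ctx=2048)

japanese = "彼女は静かに部屋を出て行った。"
prompt = f"[INST] Translate the following Japanese into natural English:\n{japanese} [/INST]"
print(llm(prompt, max_tokens=128, temperature=0.2)["choices"][0]["text"].strip())
```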
Domain Specific Fine Tuning of LLMS (Coding and Security Domain) -> Open source | 1 | [removed] | 2023-07-30T09:55:23 | https://www.reddit.com/r/LocalLLaMA/comments/15dhpd0/domain_specific_fine_tuning_of_llms_coding_and/ | ababeel1122 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dhpd0 | false | null | t3_15dhpd0 | /r/LocalLLaMA/comments/15dhpd0/domain_specific_fine_tuning_of_llms_coding_and/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'QyV2S4rh-pMbnJML_R7bbJcWB2RH98giH_Z_P45_3vo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=108&crop=smart&auto=webp&s=a058206456911026ebb90e4a841787cd0c5c75ec', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=216&crop=smart&auto=webp&s=a0443ae4570d3dda40ce608821d828c31efa9f2e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=320&crop=smart&auto=webp&s=545ca59a9a6fb473a2e1712e0ad3b1b8380628c3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=640&crop=smart&auto=webp&s=0de6bb9ce3e4101c9452d707bb74e4d14fc21bb2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=960&crop=smart&auto=webp&s=ec5fa8d6f9f1500e545661940ce5af088320a871', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=1080&crop=smart&auto=webp&s=afd418dec5f3c16c2d0ee802736fea7f3862246e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?auto=webp&s=164db6523d618406ece148c52d1007ae5deb362c', 'width': 1200}, 'variants': {}}]} |
Domain Specific Fine Tuning of LLMS (Coding and Security Domain) -> Open source | 1 | [removed] | 2023-07-30T09:55:23 | https://www.reddit.com/r/LocalLLaMA/comments/15dhpd6/domain_specific_fine_tuning_of_llms_coding_and/ | ababeel1122 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dhpd6 | false | null | t3_15dhpd6 | /r/LocalLLaMA/comments/15dhpd6/domain_specific_fine_tuning_of_llms_coding_and/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'QyV2S4rh-pMbnJML_R7bbJcWB2RH98giH_Z_P45_3vo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=108&crop=smart&auto=webp&s=a058206456911026ebb90e4a841787cd0c5c75ec', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=216&crop=smart&auto=webp&s=a0443ae4570d3dda40ce608821d828c31efa9f2e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=320&crop=smart&auto=webp&s=545ca59a9a6fb473a2e1712e0ad3b1b8380628c3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=640&crop=smart&auto=webp&s=0de6bb9ce3e4101c9452d707bb74e4d14fc21bb2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=960&crop=smart&auto=webp&s=ec5fa8d6f9f1500e545661940ce5af088320a871', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=1080&crop=smart&auto=webp&s=afd418dec5f3c16c2d0ee802736fea7f3862246e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?auto=webp&s=164db6523d618406ece148c52d1007ae5deb362c', 'width': 1200}, 'variants': {}}]} |