title (string, 1–300 chars) | score (int64, 0–8.54k) | selftext (string, 0–40k chars) | created (timestamp[ns]) | url (string, 0–780 chars) | author (string, 3–20 chars) | domain (string, 0–82 chars) | edited (timestamp[ns]) | gilded (int64, 0–2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646–1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33–82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4–213 chars) | ups (int64, 0–8.54k) | preview (string, 301–5.01k chars, nullable) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
How to finetune LLM for text classification | 1 | I have a scenario where I need to input a product title and return the product category it belongs to. There are 7,000 product categories in total. So far I have trained and tested on 3 million (300W) products with BERT and fastText respectively, and the final accuracy is about 72%. So I would like to ask: can I fine-tune an LLM such as LLaMA or Llama 2 on the full dataset to improve the text classification accuracy? I want to use the LLM as an embedding producer and add a softmax head on top to map into the category space. Thank you | 2023-08-14T11:34:39 | https://www.reddit.com/r/LocalLLaMA/comments/15qs6v7/how_to_finetune_llm_for_text_classfication/ | KneeNo79 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qs6v7 | false | null | t3_15qs6v7 | /r/LocalLLaMA/comments/15qs6v7/how_to_finetune_llm_for_text_classfication/ | false | false | self | 1 | null |
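A minimal sketch of the "LLM as embedding producer + softmax" idea from the post above, assuming a Hugging Face LLaMA checkpoint and mean pooling (both assumptions, not the poster's exact setup):

```python
# Sketch: use a causal LLM as a frozen embedding producer with a softmax
# classification head on top. Model name, pooling, and sizes are assumptions.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

model_name = "meta-llama/Llama-2-7b-hf"                      # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token                    # LLaMA has no pad token by default
backbone = AutoModel.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")
backbone.eval()                                              # frozen feature extractor

num_categories = 7000
head = nn.Linear(backbone.config.hidden_size, num_categories)  # only this part is trained

def embed(titles):
    batch = tokenizer(titles, padding=True, truncation=True, return_tensors="pt").to(backbone.device)
    with torch.no_grad():
        hidden = backbone(**batch).last_hidden_state         # (batch, seq, hidden)
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)      # mean-pool over real tokens

emb = embed(["wireless bluetooth earbuds"]).float().cpu()
category = head(emb).softmax(dim=-1).argmax(dim=-1)          # predicted category id
```

In practice people often get further by also fine-tuning the backbone (e.g. with LoRA) or by using the sequence-classification head transformers ships for LLaMA, but the frozen-embedding variant above is the cheapest place to start.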
Best method to query txt files with GPTQ Llama 2 models | 1 | Hi,
I'm currently working on a chatbot that can answer questions about documents provided as input.
I've already made an implementation using this model: [TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ · Hugging Face](https://huggingface.co/TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ), loaded with auto_gptq, and this embedding model: [intfloat/multilingual-e5-large · Hugging Face](https://huggingface.co/intfloat/multilingual-e5-large) for my multilingual sources, loaded with HuggingFace.
To create and query the database I'm using ChromaDB, and LangChain for many other functions, such as RetrievalQA for generating the answers.
First, I have an issue when prompting the whole system: "Token indices sequence length is longer than the specified maximum sequence length for this model (2256 > 2048)." But I'm chunking my documents to 1,000, as shown here:
https://preview.redd.it/7yudnuqnj2ib1.png?width=833&format=png&auto=webp&s=df50cfd65871b6c8a15746cae7e337edd35f653e
And my model supports an 8k context, four times more than shown here. How can I enable that?

Finally, I wanted to know whether you know of a better way than mine to achieve my goal.

Thanks in advance to all ! | 2023-08-14T12:27:25 | https://www.reddit.com/r/LocalLLaMA/comments/15qtb0u/best_method_to_query_txt_files_with_gptq_llama_2/ | RAPHCVR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qtb0u | false | null | t3_15qtb0u | /r/LocalLLaMA/comments/15qtb0u/best_method_to_query_txt_files_with_gptq_llama_2/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'IiRXHqUc0kYWhj4GBsO_uvu3Zye_fDD6EHAmGEU9D5s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JU-UDjlgD0Z3eh6zpliumsPdKw2iTwoMHk-4ciC_YRA.jpg?width=108&crop=smart&auto=webp&s=9f95c2ee0beda68c63370211fea05ba8434083ac', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/JU-UDjlgD0Z3eh6zpliumsPdKw2iTwoMHk-4ciC_YRA.jpg?width=216&crop=smart&auto=webp&s=464524f58aedb6dc68d47977e18ce51433157206', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/JU-UDjlgD0Z3eh6zpliumsPdKw2iTwoMHk-4ciC_YRA.jpg?width=320&crop=smart&auto=webp&s=37b84219fb82d5bc77f72ff997e90e633c780aa0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/JU-UDjlgD0Z3eh6zpliumsPdKw2iTwoMHk-4ciC_YRA.jpg?width=640&crop=smart&auto=webp&s=8622fd9ee1ebeb97a5cde5f5b33a7fdee5a0906a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/JU-UDjlgD0Z3eh6zpliumsPdKw2iTwoMHk-4ciC_YRA.jpg?width=960&crop=smart&auto=webp&s=0d8e6f6238f18473efd6e369d1282605b62c2dc7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/JU-UDjlgD0Z3eh6zpliumsPdKw2iTwoMHk-4ciC_YRA.jpg?width=1080&crop=smart&auto=webp&s=e82ef166e17698cd80a7579b7c861ba84233f736', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/JU-UDjlgD0Z3eh6zpliumsPdKw2iTwoMHk-4ciC_YRA.jpg?auto=webp&s=0d80bed18abfdd6cafb9a653c0fd46d4b49af81e', 'width': 1200}, 'variants': {}}]} |
|
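On the 2,048-token warning in the post above: a hedged guess is that it comes from the tokenizer's default model_max_length rather than from the chunker, since several retrieved 1,000-character chunks plus the prompt template can exceed 2,048 tokens anyway. A minimal sketch for splitting by token count and sanity-checking the assembled prompt size (chunk sizes and file names are illustrative):

```python
# Sketch: measure chunks in tokens, not characters, and check the final
# prompt against the context limit the model is actually loaded with.
from transformers import AutoTokenizer
from langchain.text_splitter import RecursiveCharacterTextSplitter

tokenizer = AutoTokenizer.from_pretrained("TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ")

splitter = RecursiveCharacterTextSplitter(
    chunk_size=500,                                        # counted in tokens via length_function
    chunk_overlap=50,
    length_function=lambda text: len(tokenizer.encode(text)),
)
chunks = splitter.split_text(open("my_document.txt").read())  # illustrative file

k = 4                                                      # number of retrieved chunks
question = "What does the document say about X?"
prompt_tokens = sum(len(tokenizer.encode(c)) for c in chunks[:k]) + len(tokenizer.encode(question))
print(prompt_tokens)                                       # keep this well under the 8k you load the model with
```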
What is the difference in llama2 versions fine-tuned for specific tasks? | 1 | Hi,
Way back, before the dawn of ChatGPT, I fine-tuned a T5 model for email writing and it worked quite well. Now I'm trying out llama2-7b and am wondering what the difference is between Llama 2 versions with different numbers of parameters when fine-tuning them for specific tasks.
I would appreciate if anyone shared their experiences or explained which model is best for what kind of task. | 2023-08-14T16:44:09 | https://www.reddit.com/r/LocalLLaMA/comments/15qzwir/what_is_the_difference_in_llama2_versions/ | ThickDoctor007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qzwir | false | null | t3_15qzwir | /r/LocalLLaMA/comments/15qzwir/what_is_the_difference_in_llama2_versions/ | false | false | self | 1 | null |
llama2-22b-daydreamer-v2 release. Why are there almost no models like 22B? | 1 | [https://huggingface.co/nkpz/llama2-22b-daydreamer-v2](https://huggingface.co/nkpz/llama2-22b-daydreamer-v2)
[https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGML](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGML)
[https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GPTQ](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GPTQ)
The RAM requirements are easy to meet; it seems like a lot of people have 32 GB or more these days. So this is close to the upper limit of what many can afford to run, i.e. the maximum available quality at an adequate speed. And people love small models that can be run locally. So why do we have almost no 22B and 30B models?
What is the best model size for you?
[View Poll](https://www.reddit.com/poll/15r0x32) | 2023-08-14T17:21:28 | https://www.reddit.com/r/LocalLLaMA/comments/15r0x32/llama222bdaydreamerv2_release_why_are_there/ | Jipok_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r0x32 | false | null | t3_15r0x32 | /r/LocalLLaMA/comments/15r0x32/llama222bdaydreamerv2_release_why_are_there/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'dCsf5vAEJpcaBc7Sxwa__PZgQCtqV0_7cqOMVqtli_M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/58LiW5YgG4aia_aL8VmboETRkGAFcLCCOGNu8Rj2nUg.jpg?width=108&crop=smart&auto=webp&s=e7b5cf484a713676cb47145ca91876ef21b9d6b2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/58LiW5YgG4aia_aL8VmboETRkGAFcLCCOGNu8Rj2nUg.jpg?width=216&crop=smart&auto=webp&s=3612f96fdc8c328ae77f67e4c0e5148d7752303b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/58LiW5YgG4aia_aL8VmboETRkGAFcLCCOGNu8Rj2nUg.jpg?width=320&crop=smart&auto=webp&s=9bf274ff801579f47b0a948d4d0322787d7026dd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/58LiW5YgG4aia_aL8VmboETRkGAFcLCCOGNu8Rj2nUg.jpg?width=640&crop=smart&auto=webp&s=7292b23a1b21b99d5de2238a8f8a186cc5b26701', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/58LiW5YgG4aia_aL8VmboETRkGAFcLCCOGNu8Rj2nUg.jpg?width=960&crop=smart&auto=webp&s=1523f3f063aebd9d867a4405dfd9d5ae66f4cb2d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/58LiW5YgG4aia_aL8VmboETRkGAFcLCCOGNu8Rj2nUg.jpg?width=1080&crop=smart&auto=webp&s=dc73c1620b7ff609098e69c77811c61973d252c5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/58LiW5YgG4aia_aL8VmboETRkGAFcLCCOGNu8Rj2nUg.jpg?auto=webp&s=dbb234ae8b9163685a7e95493110fbc02033721c', 'width': 1200}, 'variants': {}}]} |
Expand the Context Length with RoPE from a β-based Encoding perspective | 1 | A very interesting blog. The link is [here](https://normxu.github.io/Rethinking-Rotary-Position-Embedding/) (part-1) and [here](https://normxu.github.io/Rethinking-Rotary-Position-Embedding-2/) (part-2)
The writer claims NTK-ScaleRoPE has a flaw. A minor modification can fix it and further improve perplexity on very long text.
The original post is in Chinese, but no worries, I have already translated it and I guarantee you no AI translation is involved.
Will try to test these tricks in more downstream tasks ASAP. | 2023-08-14T17:33:21 | https://www.reddit.com/r/LocalLLaMA/comments/15r18rx/expand_the_context_length_with_rope_from_a_βbased/ | Alternative_World936 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r18rx | false | null | t3_15r18rx | /r/LocalLLaMA/comments/15r18rx/expand_the_context_length_with_rope_from_a_βbased/ | false | false | self | 1 | null |
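For readers who want the baseline the blog is critiquing, a minimal sketch of the common NTK-aware RoPE trick (rescaling the rotary base so low frequencies stretch while high frequencies stay almost untouched); the exponent below is the usual heuristic, not the blog's proposed fix:

```python
# Sketch: NTK-aware scaling rescales the RoPE base instead of linearly
# compressing positions. The dim/(dim-2) exponent is the common heuristic.
import torch

def rope_inv_freq(dim, base=10000.0, scale=1.0):
    if scale > 1.0:
        base = base * scale ** (dim / (dim - 2))           # NTK-aware base rescaling
    return 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))

# e.g. pushing a 4k-trained model toward 8k contexts (scale=2) with head dim 128
print(rope_inv_freq(128, scale=2.0)[:4])
```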
Models for function calls? | 1 | Are there good open source models that are fine-tuned for function calling?
By fine-tuned for function calling, I mean using a function signature or a JSON schema (similar to what OpenAI does) as part of the prompt, and responding with JSON that matches that schema.
So the prompt would be something like this:
- function signature / schema (for a single function)
- further instructions
And the model should be fine-tuned to respond in a way that can be parsed and matches the expected function signature.
I have been searching for something along those lines, but haven't seen anything yet. | 2023-08-14T17:34:43 | https://www.reddit.com/r/LocalLLaMA/comments/15r1a0g/models_for_function_calls/ | lgrammel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r1a0g | false | null | t3_15r1a0g | /r/LocalLLaMA/comments/15r1a0g/models_for_function_calls/ | false | false | self | 1 | null |
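Absent a model tuned exactly this way, the usual fallback is the prompt-and-parse pattern the post describes. A minimal sketch (the schema, the prompt wording, and the `run_local_llm` stub are all illustrative placeholders for whatever backend you use):

```python
# Sketch: put one function's JSON schema in the prompt, then validate the reply.
import json

schema = {
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

prompt = (
    'Reply with JSON only, in the form {"name": ..., "arguments": {...}}, '
    "calling the function described by this schema:\n"
    f"{json.dumps(schema, indent=2)}\n\n"
    "User: What's the weather like in Berlin?\nAssistant:"
)

def run_local_llm(prompt):             # placeholder: swap in your actual model call
    return '{"name": "get_weather", "arguments": {"city": "Berlin"}}'

reply = run_local_llm(prompt)

try:
    call = json.loads(reply)
    assert call["name"] == schema["name"] and "city" in call.get("arguments", {})
except (json.JSONDecodeError, AssertionError):
    call = None                        # retry or fall back when the schema is violated
```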
GPU-Accelerated LLM on a $100 Orange Pi | 1 | Yes, it's possible to run GPU-accelerated LLM smoothly on an embedded device at a reasonable speed.
The Machine Learning Compilation ([MLC](https://github.com/mlc-ai/mlc-llm)) techniques enable you to run many LLMs natively on various devices with acceleration. In this example, we successfully ran Llama-2-7B at 2.5 tok/sec, RedPajama-3B at 5 tok/sec, and Vicuna-13B at 1.5 tok/sec (16 GB RAM required).
Feel free to check out our blog [here](https://blog.mlc.ai/2023/08/09/GPU-Accelerated-LLM-on-Orange-Pi) for a completed guide on how to run LLMs natively on Orange Pi.
[Orange Pi 5 Plus running Llama-2-7B at 3.5 tok\/sec](https://preview.redd.it/6vvpitbr34ib1.jpg?width=4179&format=pjpg&auto=webp&s=0c15e881336c70fd46dc83a0641bbf9460fa62b0) | 2023-08-14T17:45:18 | https://www.reddit.com/r/LocalLLaMA/comments/15r1kcl/gpuaccelerated_llm_on_a_100_orange_pi/ | EmotionalFeed0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r1kcl | false | null | t3_15r1kcl | /r/LocalLLaMA/comments/15r1kcl/gpuaccelerated_llm_on_a_100_orange_pi/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'IZir45h_ISdZyWHdPbGpn5tLmvUKO-U0N3jB3pwPF4c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tf0Cfb_7iHIOefWyFYIdn_lsrWbzI8P9qa0sa3nTBzs.jpg?width=108&crop=smart&auto=webp&s=017a6ffa978813dbeaa475e66c0bb323e91eff10', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tf0Cfb_7iHIOefWyFYIdn_lsrWbzI8P9qa0sa3nTBzs.jpg?width=216&crop=smart&auto=webp&s=7691132afc1e2f76ea0b8e9720590864d5679f03', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tf0Cfb_7iHIOefWyFYIdn_lsrWbzI8P9qa0sa3nTBzs.jpg?width=320&crop=smart&auto=webp&s=c700c2703d2c42b2ba411af577a09acee0b7c506', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tf0Cfb_7iHIOefWyFYIdn_lsrWbzI8P9qa0sa3nTBzs.jpg?width=640&crop=smart&auto=webp&s=26ae8e3077d1458f5fd5dc828ee18cf7c65d655f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tf0Cfb_7iHIOefWyFYIdn_lsrWbzI8P9qa0sa3nTBzs.jpg?width=960&crop=smart&auto=webp&s=b784ba77d77ea62259eacc7051df3a9dd37b8a27', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tf0Cfb_7iHIOefWyFYIdn_lsrWbzI8P9qa0sa3nTBzs.jpg?width=1080&crop=smart&auto=webp&s=935f9d70abd5e6ca559336f062ec5f54bc41de88', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tf0Cfb_7iHIOefWyFYIdn_lsrWbzI8P9qa0sa3nTBzs.jpg?auto=webp&s=3a57a74eec9f9d3545e256fac0695e7200061d0a', 'width': 1200}, 'variants': {}}]} |
|
MeZo with litgpt? | 1 | Litgpt is nanogpt retooled to support llama2
​
[https://github.com/princeton-nlp/MeZO](https://github.com/princeton-nlp/MeZO)
​
I'm wondering how feasible it is to incorporate mezo into this boilerplate code to enable training llms on much smaller footprint of vram. | 2023-08-14T17:56:43 | https://www.reddit.com/r/LocalLLaMA/comments/15r1vhr/mezo_with_litgpt/ | Thistleknot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r1vhr | false | null | t3_15r1vhr | /r/LocalLLaMA/comments/15r1vhr/mezo_with_litgpt/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'A27O_3WGGvMQ5Y02jVgrRrTC5e0GEQ_UOL9oJyS5zVs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IRdxpULnKAWdLyc5ZwtQXIACKcaS72j9cVdYcJ7Uuds.jpg?width=108&crop=smart&auto=webp&s=f7ef9497e07f3b1c5f8c030c57fa27a2e5a80fa6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/IRdxpULnKAWdLyc5ZwtQXIACKcaS72j9cVdYcJ7Uuds.jpg?width=216&crop=smart&auto=webp&s=22348f0c237b99b3f08abc13a83c96ccca9e7e1d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/IRdxpULnKAWdLyc5ZwtQXIACKcaS72j9cVdYcJ7Uuds.jpg?width=320&crop=smart&auto=webp&s=e1e42ea7228ccd11483053a9c7ca95d4e06ff949', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/IRdxpULnKAWdLyc5ZwtQXIACKcaS72j9cVdYcJ7Uuds.jpg?width=640&crop=smart&auto=webp&s=7203353fe00d0643705e9451e5ab9b8c56501203', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/IRdxpULnKAWdLyc5ZwtQXIACKcaS72j9cVdYcJ7Uuds.jpg?width=960&crop=smart&auto=webp&s=24148305a28bac0c592e6fe37eb32d17cec77675', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/IRdxpULnKAWdLyc5ZwtQXIACKcaS72j9cVdYcJ7Uuds.jpg?width=1080&crop=smart&auto=webp&s=4288a52d5716ccbc58802c478ec7816eb52ad895', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/IRdxpULnKAWdLyc5ZwtQXIACKcaS72j9cVdYcJ7Uuds.jpg?auto=webp&s=a223333f0e630b4755a69f468de29c541779b512', 'width': 1200}, 'variants': {}}]} |
Any news on the 30G LLaMA 2 model? | 1 | When LLaMA 2 was released, it was stated publically that the 30G model would be delayed due to some issues. I haven't heard anything since. Is it ever going to be released? | 2023-08-14T18:00:11 | https://www.reddit.com/r/LocalLLaMA/comments/15r1yqh/any_news_on_the_30g_llama_2_model/ | belladorexxx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r1yqh | false | null | t3_15r1yqh | /r/LocalLLaMA/comments/15r1yqh/any_news_on_the_30g_llama_2_model/ | false | false | self | 1 | null |
What app/tool are you using for code exploration? | 1 | I'm looking for a tool that will help speed up the process of code exploration. So far I've tried [Cody AI](https://imgur.com/a/nRKsLXQ). I have found Cody to be quite useful at providing explanations; however, its consistent hallucinations when it needs to give real code are a major drawback.
To illustrate my usecase and the issues I'm facing here's an example where I try to explore the h2ogpt repo.
I asked "I have uploaded a pdf, how is the pdf used to answer a given question", to which the llm gives a good answer as can be seen in the above picture. I have found cody to excel at providing explanations.
The issue I have can be seen with my next question, ["where can i find the code for steps 5-8"](https://imgur.com/8X0R64V): here Cody gives dummy code, a simplified representation of how the code would work. Cody also gives the location of this code, `src/preprocess.py`. The issue is that both the dummy code and its given location are hallucinations. These hallucinations consistently happen when asking about specific code and thus limit the utility of this tool.
I was wondering what other tools/apps the community uses to go through and understand code. I'm looking for something that can provide explanations and accurately refer to code. | 2023-08-14T18:11:38 | https://www.reddit.com/r/LocalLLaMA/comments/15r2ag3/whats_the_apptool_are_you_using_for_code/ | nuusain | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r2ag3 | false | null | t3_15r2ag3 | /r/LocalLLaMA/comments/15r2ag3/whats_the_apptool_are_you_using_for_code/ | false | false | self | 1 | null |
Student Getting into AI from the Ground UP: Where to Start? | 1 | [removed] | 2023-08-14T19:00:51 | https://www.reddit.com/r/LocalLLaMA/comments/15r3n4s/student_getting_into_ai_from_the_ground_up_where/ | KvAk_AKPlaysYT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r3n4s | false | null | t3_15r3n4s | /r/LocalLLaMA/comments/15r3n4s/student_getting_into_ai_from_the_ground_up_where/ | false | false | self | 1 | null |
Can someone explain to me what ggml is and how does it work? | 1 | [removed] | 2023-08-14T19:15:02 | https://www.reddit.com/r/LocalLLaMA/comments/15r41go/can_someone_explain_to_me_what_ggml_is_and_how/ | Leading_Boot7734 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r41go | false | null | t3_15r41go | /r/LocalLLaMA/comments/15r41go/can_someone_explain_to_me_what_ggml_is_and_how/ | false | false | self | 1 | null |
How to increase domain knowledge in falcon 7b | 1 | I have been seeing many tutorials on how to fine tune a model by using Q&A chats.
However, what I want to do is have a chatbot that can answer questions based on multiple text files that I have, and hopefully scale it to hundreds of text files. So instead of feeding Q&A pairs for training, I would just feed in text files, and the bot would answer taking the text files into account - but not rely solely on them either.
Is there a tutorial for this? Thank you for reading this far. | 2023-08-14T19:31:46 | https://www.reddit.com/r/LocalLLaMA/comments/15r4ig1/how_to_increase_domain_knowledge_in_falcon_7b/ | ConsistentHamster2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r4ig1 | false | null | t3_15r4ig1 | /r/LocalLLaMA/comments/15r4ig1/how_to_increase_domain_knowledge_in_falcon_7b/ | false | false | self | 1 | null |
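The usual way to get this behaviour without fine-tuning is retrieval-augmented generation: embed the text files, retrieve the relevant chunks per question, and prepend them to the prompt. A minimal sketch with LangChain and Chroma (model names and paths below are illustrative):

```python
# Sketch: answer questions over a folder of .txt files via retrieval instead of fine-tuning.
from langchain.document_loaders import DirectoryLoader, TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma

docs = DirectoryLoader("my_texts/", glob="**/*.txt", loader_cls=TextLoader).load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
db = Chroma.from_documents(chunks, embeddings, persist_directory="chroma_db")

question = "What does file X say about topic Y?"
context = "\n\n".join(d.page_content for d in db.similarity_search(question, k=4))
prompt = f"Use the context to answer the question.\n\nContext:\n{context}\n\nQuestion: {question}\nAnswer:"
# feed `prompt` to Falcon-7B (or any other local model) as usual
```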
Let's say I have $6,000-8,000 for a new computer to run local llamas- what should I get? | 1 | I'm having a ton of fun playing with LLMs on my computer, but my rig is 5 years old. While fine for most work, not anywhere good enough for this use case. I can spend \~$6,000-$8,000 on a new computer (I know, this sounds like a lot, but it's not too bad amortized over the next 3-5 years. And if a local LLM saves me even 20% effort on certain work-related tasks, it's worth it 10x over). What's my best bet on a pre-built computer system? I know I could probably save a bit by sourcing the components individually, but I am not interested in that. I just want it to work out of the box, and to have a company warranty if it does not. Also, I don't game or otherwise use a GPU (indeed, my current computer does not even have a stand-alone GPU), so 2x 4090s seems like overkill. Finally, I do not plan on selling this computer, so re-sale value is irrelevant to my calculations. Ease of use is a high priority for me, so the A6000 seems like a solid choice.
One machine I have been looking at:
* HP Z4 G4 Workstation
* Intel® Core™ i9-10980XE Processor (3 GHz, up to 4.6 GHz w/Boost, 24.75MB Cache, 2933 MHz, 18 core, 165W)
* Z4 G4 1000 W Chassis for Core®-X Processors
* 128 GB (4 x 32 GB) DDR4-2933 UDIMM Memory
* 1 TB HP Z Turbo Drive TLC M.2 NVMe SSD
* NVIDIA® RTX™ A6000 (48 GB ECC GDDR6; 4 x DisplayPort 1.4, PCIe x16) Graphics - Blower Fan
This is about $7,000. Which strikes me as a pretty good deal, given the cost of the A6000 alone.
Thanks for your thoughts! | 2023-08-14T19:40:56 | https://www.reddit.com/r/LocalLLaMA/comments/15r4ra0/lets_say_i_have_60008000_for_a_new_computer_to/ | Berberis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r4ra0 | false | null | t3_15r4ra0 | /r/LocalLLaMA/comments/15r4ra0/lets_say_i_have_60008000_for_a_new_computer_to/ | false | false | self | 1 | null |
Llama2 local 3090 24gb error | 1 | [removed] | 2023-08-14T20:29:59 | https://www.reddit.com/r/LocalLLaMA/comments/15r62xa/llama2_local_3090_24gb_error/ | OnY86 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r62xa | false | null | t3_15r62xa | /r/LocalLLaMA/comments/15r62xa/llama2_local_3090_24gb_error/ | false | false | self | 1 | null |
Dutch Llama 2 13b chat | 1 | Hi. A few days ago I tried to make a GGML version of the previously released Dutch model [Mirage-Studio/llama-gaan-2-7b-chat-hf-dutch-epoch-5 · Hugging Face](https://huggingface.co/Mirage-Studio/llama-gaan-2-7b-chat-hf-dutch-epoch-5) but didn't succeed because I don't know how to set pad_token_id to the required value. (The model's developers mentioned this requirement in the model card but didn't give explanations for newbies like me.)
Anyway, today I'm happy because I successfully made a GGML version of the most recent Dutch model: [BramVanroy/Llama-2-13b-chat-dutch · Hugging Face](https://huggingface.co/BramVanroy/Llama-2-13b-chat-dutch) Here it is: [https://huggingface.co/NikolayKozloff/Llama-2-13b-chat-dutch/resolve/main/Llama-2-13b-chat-dutch-Q6_K.bin](https://huggingface.co/NikolayKozloff/Llama-2-13b-chat-dutch/resolve/main/Llama-2-13b-chat-dutch-Q6_K.bin)
So i want to share my happiness with LocalLLaMa members and hope that ggml version will be usefull for guys who learn Dutch language. Cheers. | 2023-08-14T21:03:35 | https://www.reddit.com/r/LocalLLaMA/comments/15r70aw/dutch_llama_2_13b_chat/ | Languages_Learner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r70aw | false | null | t3_15r70aw | /r/LocalLLaMA/comments/15r70aw/dutch_llama_2_13b_chat/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'EJ5DYnD2vg_zYKaAoFJlB-iCdfw22VMPLuOGA7XVaa8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mL4osyL0Cre_EZuLsVZSBggJdfM5qujs44zE-F5sxxQ.jpg?width=108&crop=smart&auto=webp&s=e932f1b2fd8ef5f036f34be2c36c414e7612f59d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mL4osyL0Cre_EZuLsVZSBggJdfM5qujs44zE-F5sxxQ.jpg?width=216&crop=smart&auto=webp&s=da8857e73e5a092c4f7c5c12fd253883c29cd14c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mL4osyL0Cre_EZuLsVZSBggJdfM5qujs44zE-F5sxxQ.jpg?width=320&crop=smart&auto=webp&s=0e549179b2ead4f9d47ae39e7f7d2374a82cdaa0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mL4osyL0Cre_EZuLsVZSBggJdfM5qujs44zE-F5sxxQ.jpg?width=640&crop=smart&auto=webp&s=f5dfae3e9ef63c6cb8b1320af44ac6745c6ace70', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mL4osyL0Cre_EZuLsVZSBggJdfM5qujs44zE-F5sxxQ.jpg?width=960&crop=smart&auto=webp&s=e5d4cee50298b0c011b36b1213f3519e95784f46', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mL4osyL0Cre_EZuLsVZSBggJdfM5qujs44zE-F5sxxQ.jpg?width=1080&crop=smart&auto=webp&s=839dc6f7a0b385a83bc2f0519b4acd4766dc45cc', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mL4osyL0Cre_EZuLsVZSBggJdfM5qujs44zE-F5sxxQ.jpg?auto=webp&s=6749a7c7effe0a1ad13d3601873ff5f13e749db0', 'width': 1200}, 'variants': {}}]} |
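On the pad_token_id problem mentioned above, a minimal sketch of how it is usually set for LLaMA-style models in transformers before saving or converting; whether reusing eos is the "required value" that model card means is an assumption:

```python
# Sketch: LLaMA tokenizers ship without a pad token; set one and keep the config in sync.
from transformers import AutoTokenizer, AutoModelForCausalLM

name = "Mirage-Studio/llama-gaan-2-7b-chat-hf-dutch-epoch-5"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

tokenizer.pad_token = tokenizer.eos_token            # assumption: reuse eos as pad
model.config.pad_token_id = tokenizer.pad_token_id   # config must match the tokenizer

model.save_pretrained("llama-gaan-2-7b-chat-padded")
tokenizer.save_pretrained("llama-gaan-2-7b-chat-padded")
```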
What features would you want from synthetic videos? | 1 |
It would be so nice to have a playthrough split up across x many videos to suit your schedule. Or even generate a playthrough of a specific section of the game. Something else might be an interactive chat similar to Twitch where the synthetic Youtuber can interact with your chat in real time (assuming the gameplay isn't a premade synthetic, but also in real time).
Let's say my friend uploads gaming playthroughs. He has a tight schedule, so he uploads 3x a week. Then an amazing AI comes along that lets people generate videos of him playing through games he never has. He likes this because he can earn money from it.
What features would you like to see with such an AI? I can think of splitting the video up into x parts to suit the viewers schedule, generating videos for specific sections of the game (if thats all the viewer wants), and even allowing the viewer to interact via a live chat system. | 2023-08-14T21:21:13 | https://www.reddit.com/r/LocalLLaMA/comments/15r7hmz/what_features_would_you_want_from_synthetic_videos/ | JebryyathHS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r7hmz | false | null | t3_15r7hmz | /r/LocalLLaMA/comments/15r7hmz/what_features_would_you_want_from_synthetic_videos/ | false | false | self | 1 | null |
Anyone test LosslessMegaCoder-llama2-7b-mini? | 1 | Title | 2023-08-14T21:31:23 | https://www.reddit.com/r/LocalLLaMA/comments/15r7rwp/anyone_test_losslessmegacoderllama27bmini/ | GG9242 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r7rwp | false | null | t3_15r7rwp | /r/LocalLLaMA/comments/15r7rwp/anyone_test_losslessmegacoderllama27bmini/ | false | false | self | 1 | null |
Self-Alignment with Instruction Backtranslation Model | 1 | 2023-08-14T23:10:28 | https://twitter.com/jaseweston/status/1690888779878330368 | ninjasaid13 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 15rafdx | false | {'oembed': {'author_name': 'Jason Weston', 'author_url': 'https://twitter.com/jaseweston', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">🚨New Paper 🚨<br>Self-Alignment with Instruction Backtranslation<br><br>- New method auto-labels web text with instructions & curates high quality ones for FTing<br><br>- Our model Humpback 🐋 outperforms LIMA, Claude, Guanaco, davinci-003 & Falcon-Inst<a href="https://t.co/93qi4JDnpb">https://t.co/93qi4JDnpb</a><br>(1/4)🧵 <a href="https://t.co/9iU79bxDuo">pic.twitter.com/9iU79bxDuo</a></p>— Jason Weston (@jaseweston) <a href="https://twitter.com/jaseweston/status/1690888779878330368?ref_src=twsrc%5Etfw">August 14, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/jaseweston/status/1690888779878330368', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_15rafdx | /r/LocalLLaMA/comments/15rafdx/selfalignment_with_instruction_backtranslation/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Blw5Ugpge7pOpKDsIovry68aiujEaAD0rTsIiY3s3Xg', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/uh84nsu15Pxo8lFdDV04lMNAvQoT6dF6XOtZ68ILSHY.jpg?width=108&crop=smart&auto=webp&s=c54f6a050665e7cd64ac6855f9c5d547ab826e1d', 'width': 108}], 'source': {'height': 84, 'url': 'https://external-preview.redd.it/uh84nsu15Pxo8lFdDV04lMNAvQoT6dF6XOtZ68ILSHY.jpg?auto=webp&s=823b03e1eeda6f76ca9462c356239b987b43b9d5', 'width': 140}, 'variants': {}}]} |
||
Getting up to speed on LLMS - Questions | 1 | Did a whirlwind tour after the Llama2 release and have some questions along the way. Also - if there are other communities like this (reddit, disc0rd, twitter, whatever) please point me in the right direction! Ok - so the questions I have:
***
**Supervised Fine Tuning vs Alternatives:**
I've heard SFT is good at changing *how* the model responds but not the *content* of the response (e.g. what it knows). For example, training may help a model learn to respond in a debate style, but may not be good for teaching it sports trivia knowledge - where an approach like RAG may be more suitable. Is this true in your experience?
In the former example (learning how to respond in a certain style) how should I think about this compared to few-shot prompting or providing a system prompt, which presumably could be used for the same purpose?
**Supervised Fine Tuning - Implementation**
Here is where things get very confusing very quickly - some of that is just the fact that the space is new and rapidly evolving, and some is due to the inherent complexity of having so many parameters/options to tune.
There are 3 categories I've run into so far:
* Roll Your Own (Colab notebooks like [this one](https://colab.research.google.com/drive/1Zmaceu65d7w4Tcd-cfnZRb6k_Tcv2b8g?usp=sharing#scrollTo=wM8MRkf8Dr94))
* Automated Library Scripts ([Huggingface Autotrain](https://github.com/huggingface/autotrain-advanced/blob/main/src/autotrain/cli/run_llm.py#L21))
* Managed Services ([ScaleAI](https://scale.com/blog/fine-tune-llama-2), [Predibase](https://predibase.com/blog/how-to-fine-tune-llama-2-on-your-data-with-scalable-llm-infrastructure), [MonsterAPI](https://blog.monsterapi.ai/how-to-fine-tune-llama-2-llm/))
What is the best way to go about this? What are the tradeoffs (besides, obviously, cost for some of the managed ones)? I was about to head down the path of the Colab notebook, which is probably the most complicated of them all, but if something like `Autotrain` abstracts all of that away, why bother?
**Supervised Fine Tuning - Which Model?**
I would think starting with the base model (Llama-2-7b) makes the most sense, but I've seen various tutorials say "base model" and when you look at their code it's actually the chat model (Llama-2-7b-chat-hf). Those models have already been trained to be instruction following, so I'm not sure what the benefit would be. Maybe it's use-case driven? Perhaps if you want the model to reply in a non-chat style (e.g. structured output) the base model is better but if you want conversational style with slight modifications the chat model is better?
Also - for getting rid of the "I'm sorry I'm just an Agent and I can't offend anybody so I refuse to answer any question that isn't rated G" - is starting with the base model any better than the chat-hf model? To the extent the "alignment" is introduced in the RLHF step I would think so.
**Instruction Training Datasets:**
When training, you generally need data in the following format:
`<prompt, output>`
I've seen several variations of this, one in particular is
`<instruction, input, output>`
Where `instruction` is basically the prompt, and `input` provides additional context. What is the advantage of doing this versus simply combining the additional context provided by `input` with the `instruction` prompt?
One example of this I've seen is a prompt(instruction) followed by context(input) which provides multiple choice options:
Instruction: "what is the capital of Colorado"
Input: "A.) Colorado B.) Molorado C.) CoorsLight D).Tallahassee "
Output: "The answer is A.) Colorado"
Other variants simply provide additional context, similar to what you would do if performing RAG. Same question applies - what is the difference between this and just providing everything in the prompt? Is it just a convention for certain models or is there a practical impact?
If you read all of this, thank you! I'm excited to learn more about the space :) | 2023-08-14T23:12:23 | https://www.reddit.com/r/LocalLLaMA/comments/15rah9g/getting_up_to_speed_on_llms_questions/ | m_woodworth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rah9g | false | null | t3_15rah9g | /r/LocalLLaMA/comments/15rah9g/getting_up_to_speed_on_llms_questions/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&s=4b647239f77bf713f4a6209cfa4867351c055fd9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&s=7f4234ff3f4f4ebd7f77236dedb03a2faee3e04a', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&s=73eb91ea5a5347f216c0f0c4d6796396826aae49', 'width': 260}, 'variants': {}}]} |
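On the `<instruction, input, output>` question in the post above: in most training scripts the three fields are simply rendered into one prompt string with a fixed template, so the practical difference from stuffing everything into the instruction is mainly formatting consistency. A minimal sketch of the common Alpaca-style template (one convention among several, used here as an assumption):

```python
# Sketch: collapse instruction/input/output records into single training strings.
def format_example(example):
    if example.get("input"):
        prompt = (
            "Below is an instruction that describes a task, paired with an input that "
            "provides further context. Write a response that completes the request.\n\n"
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n### Response:\n"
        )
    else:
        prompt = (
            "Below is an instruction that describes a task. "
            "Write a response that completes the request.\n\n"
            f"### Instruction:\n{example['instruction']}\n\n### Response:\n"
        )
    return prompt + example["output"]

record = {
    "instruction": "what is the capital of Colorado",
    "input": "A.) Colorado B.) Molorado C.) CoorsLight D.) Tallahassee",
    "output": "The answer is A.) Colorado",
}
print(format_example(record))
```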
The Normal Blog - Eliminating hallucinations (fast!) in Large Language Models with Finite State Machines | 1 | [The Normal Blog - Eliminating hallucinations (fast!) in Large Language Models with Finite State Machines](https://blog.normalcomputing.ai/posts/2023-07-27-regex-guided-generation/regex-guided-generation.html)
This is interesting: regex-based logit guidance that is faster and more effective than approaches like Microsoft's Guidance. It works by converting the regex into a Deterministic Finite Automaton, which can potentially be extended to Context-Free Grammars (which could be used to constrain the model to generating, say, valid Python).
There's also [a research paper](https://arxiv.org/abs/2307.09702). | 2023-08-14T23:39:54 | https://www.reddit.com/r/LocalLLaMA/comments/15rb6a4/the_normal_blog_eliminating_hallucinations_fast/ | AutomataManifold | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rb6a4 | false | null | t3_15rb6a4 | /r/LocalLLaMA/comments/15rb6a4/the_normal_blog_eliminating_hallucinations_fast/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'gMMGWfWaPKSKsmg0uDPhXtcZmUeH78u2lUsMDC6p96U', 'resolutions': [{'height': 83, 'url': 'https://external-preview.redd.it/tn6Pttf2f9fDIX1ydGaQTGanHqldPEorBdIJkJKNTBo.jpg?width=108&crop=smart&auto=webp&s=c8956003526204634e631901c35d5dc7ff62c907', 'width': 108}, {'height': 166, 'url': 'https://external-preview.redd.it/tn6Pttf2f9fDIX1ydGaQTGanHqldPEorBdIJkJKNTBo.jpg?width=216&crop=smart&auto=webp&s=e64650cc16ca8cde144152e335a5caa1c49d1dbc', 'width': 216}, {'height': 246, 'url': 'https://external-preview.redd.it/tn6Pttf2f9fDIX1ydGaQTGanHqldPEorBdIJkJKNTBo.jpg?width=320&crop=smart&auto=webp&s=29070eb6386e09a7251b8d864b1864886f642c76', 'width': 320}, {'height': 492, 'url': 'https://external-preview.redd.it/tn6Pttf2f9fDIX1ydGaQTGanHqldPEorBdIJkJKNTBo.jpg?width=640&crop=smart&auto=webp&s=233ca0c7cb6d9792108b0d2c5de79cff55b4d2f4', 'width': 640}, {'height': 738, 'url': 'https://external-preview.redd.it/tn6Pttf2f9fDIX1ydGaQTGanHqldPEorBdIJkJKNTBo.jpg?width=960&crop=smart&auto=webp&s=f84fe5283595fc5c7c948c126a261fd6edbee90b', 'width': 960}, {'height': 830, 'url': 'https://external-preview.redd.it/tn6Pttf2f9fDIX1ydGaQTGanHqldPEorBdIJkJKNTBo.jpg?width=1080&crop=smart&auto=webp&s=cc8ebc656bf094fe427d8d50c0ee393652c09b96', 'width': 1080}], 'source': {'height': 1494, 'url': 'https://external-preview.redd.it/tn6Pttf2f9fDIX1ydGaQTGanHqldPEorBdIJkJKNTBo.jpg?auto=webp&s=da93a2093536417fb01642495b3cc004c0db8c0a', 'width': 1942}, 'variants': {}}]} |
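A toy, character-level sketch of the idea in the linked post: at every step, mask out any next symbol a finite state machine would reject and choose only among the legal ones. Real implementations compile the DFA from a regex and operate over token vocabularies, so everything below is purely illustrative:

```python
# Toy sketch: constrain generation to the tiny language {"yes", "no"} with a
# hand-built DFA, keeping only transitions the automaton allows at each step.
DFA = {
    ("start", "y"): "y", ("y", "e"): "ye", ("ye", "s"): "ACCEPT",
    ("start", "n"): "n", ("n", "o"): "ACCEPT",
}

def allowed_next(state):
    return [ch for (s, ch) in DFA if s == state]

def constrained_generate(score_fn):
    state, out = "start", ""
    while state != "ACCEPT":
        candidates = allowed_next(state)           # the "mask": only DFA-legal symbols
        ch = max(candidates, key=score_fn)         # pick the model's favourite legal symbol
        out += ch
        state = DFA[(state, ch)]
    return out

# score_fn stands in for the language model's next-token scores
print(constrained_generate(lambda ch: {"y": 0.7, "n": 0.3}.get(ch, 0.0)))
```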
Data Analytics using Llama2? | 1 | Is there any good workflow to use Llama 2 to perform data analytics on a CSV file, perhaps using LangChain?
I noticed that LangChain has a nice agent to execute Python code that can run analytics on a pandas dataframe. It works very well with OpenAI models. But when I use the LangChain agent with a quantised Llama 7B model, the results are very disappointing. | 2023-08-15T01:43:04 | https://www.reddit.com/r/LocalLLaMA/comments/15re5jq/data_analytics_using_llama2/ | Impressive-Ratio77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15re5jq | false | null | t3_15re5jq | /r/LocalLLaMA/comments/15re5jq/data_analytics_using_llama2/ | false | false | self | 1 | null |
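For reference, a minimal sketch of the LangChain pandas agent wired to a local model instead of OpenAI (the GGML path and LlamaCpp settings are illustrative; smaller local models often still trip over the agent's tool-calling format, which may explain the disappointing results):

```python
# Sketch: run LangChain's pandas dataframe agent against a local Llama 2 model.
import pandas as pd
from langchain.llms import LlamaCpp
from langchain.agents import create_pandas_dataframe_agent

df = pd.read_csv("sales.csv")                              # illustrative CSV

llm = LlamaCpp(
    model_path="llama-2-13b-chat.ggmlv3.q4_K_M.bin",       # assumed local GGML file
    n_ctx=4096,
    temperature=0.0,                                       # deterministic output helps tool formatting
)

agent = create_pandas_dataframe_agent(llm, df, verbose=True)
print(agent.run("What is the average revenue per month?"))
```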
Best OS/Paid Embedding Model for Longer Token Length + Retrieval | 1 | Most of the MTEB leaderboard models have a max limit of 512 tokens, what are the best embedding models for 2k -8k context tokens, specifically for retrieval?
​
​ | 2023-08-15T02:08:47 | https://www.reddit.com/r/LocalLLaMA/comments/15requj/best_ospaid_embedding_model_for_longer_token/ | Expert-Supermarket-4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15requj | false | null | t3_15requj | /r/LocalLLaMA/comments/15requj/best_ospaid_embedding_model_for_longer_token/ | false | false | self | 1 | null |
cool Llama explains LLMs and Local LLMs | 1 | Runtime: Llama 7B Chat on KoboldCPP
Custom settings for runtime:
- threads 6
- contextsize 2048
- stream
- smartcontext
- usemirostat 2 0.5 0.1
- debugmode
- blasbatchsize 64
Chat settings:
- max tokens 1024
- temperature 1
- amount to gen 512
- top k 45
- top p .95
- repetition penalty 1.1
___
PROMPT:
**Please converse with me as if you were a true bro.
Use evidence and reasoning to support your arguments, but always talk to me like my life long homie.
Feel free to delve into any area of science or philosophy as needed, but always in the tone of my broski.
Your name is Llama2.
LLama2
absolutely bro**
___
CONVERSATION:
User
so bro...can you explain to me the core theory of large language models
Llama2
what's good fam?! Yea, so yeah, language models...they're like this whole thing where they use neural networks and shit to learn how to generate text and stuff. It's like, they take in this huge amount of data and then they can spit out new text that's like, pretty damn close to human-level quality. And it's not just like, one type of text either, they can do like, anything. From chatbots to articles to even poetry. It's wild bro, I mean, think about it. These models are basically learning how to speak and write like us, and they're doing it faster than we could have ever hoped. It's like, the future of communication, man. The future.
User
I agree it's pretty cool. what do you think about local LLMs? I am most excited to plug them in for free into my deployed workflows
Llama2
oh, man...local LLMs?! chuckles bro, those are some next-level shit. Like, imagine having a model that's literally right there with you, in your workflow, generating text on the fly. It's like, the ultimate flex, fam. You could be working on a report or an article or whatever, and then BAM! Out comes the local LLM and it's like, "Hey, here's a killer first paragraph for you." And you're like, "Whoa, how did you do that?" And the model is just like, "I learned it from all that sweet, sweet data, bro." It's like, the future of writing, man. The future. | 2023-08-15T02:46:30 | https://www.reddit.com/gallery/15rfmd4 | Happy_Chicken9835 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 15rfmd4 | false | null | t3_15rfmd4 | /r/LocalLLaMA/comments/15rfmd4/cool_llama_explains_llms_and_local_llms/ | false | false | 1 | null |
|
liteLLM – simple library to standardize OpenAI+ Huggingface Inference Endpoints + Replicate + Anthropic + .. | 1 | Hello r/LocalLLaMA, We just launched [liteLLM](https://github.com/BerriAI/litellm) \- one python library to call 50+ LLM models using the same Input/Output format.
liteLLM only has two functions, `completion()` and `embedding()`, and is entirely open source.
**Here's an example of how to use liteLLM**
    import os
    from litellm import completion

    ## set ENV variables
    os.environ["OPENAI_API_KEY"] = "openai key"
    os.environ["ANTHROPIC_API_KEY"] = " key"

    messages = [{"content": "Hello, how are you?", "role": "user"}]

    # openai call
    response = completion(model="gpt-3.5-turbo", messages=messages)

    # claude-2 call
    response = completion("claude-2", messages)

    # llama2 call
    response = completion('meta-llama/llama-2-70b-chat', messages)
**Why did we build liteLLM?**
**Need for simplicity**: Our code started to get extremely complicated managing & translating calls between Azure, OpenAI, Cohere, Anthropic, Hugging Face LLMs and we just needed a simple way to interface with LLM models.
We are actively welcoming contributions & feedback for liteLLM - if it seems useful Star us on [Github](https://github.com/BerriAI/litellm) or open an issue for missing feature | 2023-08-15T02:57:47 | https://www.reddit.com/r/LocalLLaMA/comments/15rfve7/litellm_simple_library_to_standardize_openai/ | VideoTo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rfve7 | false | null | t3_15rfve7 | /r/LocalLLaMA/comments/15rfve7/litellm_simple_library_to_standardize_openai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'zNVKbeHaeI2B_H41hmQiRhsio1A_TsH0dD9ymVltNBU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2lVdDU8iU8nIBnK7HbZwrxIDqKZJDhU6yKK7hyK6pZ4.jpg?width=108&crop=smart&auto=webp&s=b5b392448e3753b2206ccd7131c466d864463c20', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2lVdDU8iU8nIBnK7HbZwrxIDqKZJDhU6yKK7hyK6pZ4.jpg?width=216&crop=smart&auto=webp&s=ab5ab760627c00b321985089788a81de6854be55', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2lVdDU8iU8nIBnK7HbZwrxIDqKZJDhU6yKK7hyK6pZ4.jpg?width=320&crop=smart&auto=webp&s=5ebcf931ccb22ea3c194e63250eb263fa1379537', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2lVdDU8iU8nIBnK7HbZwrxIDqKZJDhU6yKK7hyK6pZ4.jpg?width=640&crop=smart&auto=webp&s=02d3781d451ad152fbae0db2e90479a221e58a50', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2lVdDU8iU8nIBnK7HbZwrxIDqKZJDhU6yKK7hyK6pZ4.jpg?width=960&crop=smart&auto=webp&s=772a6200790485ac9939f0f63f57ac5a9134de5a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2lVdDU8iU8nIBnK7HbZwrxIDqKZJDhU6yKK7hyK6pZ4.jpg?width=1080&crop=smart&auto=webp&s=4aa0a1810cbaa0a0eee9ca72015212316c72a43b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2lVdDU8iU8nIBnK7HbZwrxIDqKZJDhU6yKK7hyK6pZ4.jpg?auto=webp&s=51006eee7fbaf6841b0d2c03b359bb787aa1d961', 'width': 1200}, 'variants': {}}]} |
Effects of Quantization on ARC Benchmark Performance | 1 | Have you ever pondered how quantization might affect model performance, or what the trade-off is between quantization methods?
We know how [quantization affects perplexity](https://www.reddit.com/r/LocalLLaMA/comments/1441jnr/k_quantization_vs_perplexity/?utm_source=share&utm_medium=web2x&context=3), but how does it affect benchmark performance?
The test was done on the u/The-Bloke [Quantized Model](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GPTQ) of the [OpenOrca-Platypus2](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B) model, which from their results, would currently be the best 13B model on the leaderboard
ARC: 25-shot, arc-challenge (acc\_norm) matching [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness)
| Model Name | Bits | GS | ARC | Act Order |
|---|---|---|---|---|
| OpenOrca Platypus2 13B | 16-bit | NA | 62.88% | NA |
| OpenOrca Platypus2 13B | 8-bit | None | 62.88% | Yes |
| OpenOrca Platypus2 13B | 4-bit | 32 | 62.28% | Yes |
| OpenOrca Platypus2 13B | 4-bit | 128 | 62.62% | No |
**Bit Precision**: The table includes results for 16-bit, 8-bit, and 4-bit quantizations. Lower bit precision is often used to reduce the model size and computational requirements, but it may affect performance.
From these results, we can see that 8-bit shows negligible performance loss, while the 4-bit models show slightly reduced performance. This matches the expected results based on the [quantization vs perplexity graph](https://www.reddit.com/r/LocalLLaMA/comments/1441jnr/k_quantization_vs_perplexity/?utm_source=share&utm_medium=web2x&context=3)
Note the 4-bit 32GS model reports a lower *acc_norm* than the 4-bit 128GS model, but a higher *acc* (58.02% vs. 57.59%)
Here is the [code](https://github.com/latent-variable/lm-evaluation-harness-webui-wrapper) I used to run this experiment. The repository can run all the benchmarks in the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) however, I only tested with the ARC dataset, since it would take a long time to do all four. ARC took roughly an hour on a 3090. Hellaswag would take around 4 hours.
**Future experiments:** Explore the effects on a 3bit quantized model such as [https://huggingface.co/TheBloke/upstage-llama-30b-instruct-2048-GPTQ](https://huggingface.co/TheBloke/upstage-llama-30b-instruct-2048-GPTQ)
​ | 2023-08-15T03:39:07 | https://www.reddit.com/r/LocalLLaMA/comments/15rh3op/effects_of_quantization_of_arc_benchmark/ | onil_gova | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rh3op | false | null | t3_15rh3op | /r/LocalLLaMA/comments/15rh3op/effects_of_quantization_of_arc_benchmark/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'lVUxULrfmQQhqgXu4A3ts4O5YovvLSFvMadANKyplII', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Myik7eECm3Ydm-TY4aIz27a4PC9eZgys--Ak5h1YWuM.jpg?width=108&crop=smart&auto=webp&s=ad03f6bb8277f6dad756e7ec27de40c72dfd35c0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Myik7eECm3Ydm-TY4aIz27a4PC9eZgys--Ak5h1YWuM.jpg?width=216&crop=smart&auto=webp&s=0d0dae57135f8d3a4c2ea7651e18b646b99d2bda', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Myik7eECm3Ydm-TY4aIz27a4PC9eZgys--Ak5h1YWuM.jpg?width=320&crop=smart&auto=webp&s=c7ea48cf473c6fa2087c68865fb067d469e4da0f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Myik7eECm3Ydm-TY4aIz27a4PC9eZgys--Ak5h1YWuM.jpg?width=640&crop=smart&auto=webp&s=eff600d9f24680dec8c017999d133c8e887fd68d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Myik7eECm3Ydm-TY4aIz27a4PC9eZgys--Ak5h1YWuM.jpg?width=960&crop=smart&auto=webp&s=48b1118b783249c294f2572c4ea4bfb173e31ba2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Myik7eECm3Ydm-TY4aIz27a4PC9eZgys--Ak5h1YWuM.jpg?width=1080&crop=smart&auto=webp&s=359a8299c7450784c6960409191eeacd8c4a2579', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Myik7eECm3Ydm-TY4aIz27a4PC9eZgys--Ak5h1YWuM.jpg?auto=webp&s=572d0022b1f6c2987f3a62515ababda01f32ca3b', 'width': 1200}, 'variants': {}}]} |
Help - only getting 0.03 tok/sec on 1080 ti | 1 | [removed] | 2023-08-15T03:39:48 | https://www.reddit.com/r/LocalLLaMA/comments/15rh47d/help_only_getting_003_toksec_on_1080_ti/ | bepispasta42069 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rh47d | false | null | t3_15rh47d | /r/LocalLLaMA/comments/15rh47d/help_only_getting_003_toksec_on_1080_ti/ | false | false | self | 1 | null |
Octocoder Demo | 26 | 2023-08-15T03:41:59 | https://twitter.com/huybery/status/1691267973254991872 | ninjasaid13 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 15rh5wr | false | {'oembed': {'author_name': 'Binyuan Hui', 'author_url': 'https://twitter.com/huybery', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Check out our latest work 🐙OctoCoder 🐙! It's an instruction tuned CodeLLM using the natural structure of Git commits. No reliance on closed-source models (GPT-4), reduces hallucinations, and no legal interpretation issues! <br><br>👉🏻 <a href="https://t.co/WFAnairhL1">https://t.co/WFAnairhL1</a><br>👉🏻 <a href="https://t.co/gWtcNScyAK">https://t.co/gWtcNScyAK</a> <a href="https://t.co/MmuKNnjchc">https://t.co/MmuKNnjchc</a></p>— Binyuan Hui (@huybery) <a href="https://twitter.com/huybery/status/1691267973254991872?ref_src=twsrc%5Etfw">August 15, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/huybery/status/1691267973254991872', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_15rh5wr | /r/LocalLLaMA/comments/15rh5wr/octocoder_demo/ | false | false | 26 | {'enabled': False, 'images': [{'id': 'q5pc16QA15fE6gPuzmAsIdo3UhTuJIDL8iYQzvNazQg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/wJgtrPwZv7_HU2SkGEryWNGzSzvZ-7xDzx54XXD2aeI.jpg?width=108&crop=smart&auto=webp&s=3552b6b73b35160ff2b8a09653dc657578d94463', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/wJgtrPwZv7_HU2SkGEryWNGzSzvZ-7xDzx54XXD2aeI.jpg?auto=webp&s=8b82179ec44ffaa8e2f29edd8a92b3aae97b10a6', 'width': 140}, 'variants': {}}]} |
||
How to configure oobabooga to actually get it working? | 1 | I am really new to this so maybe the solution to the problems I'm having is glaringly obvious:
Every time I load the model I get these warnings:
2023-08-15 11:24:58 WARNING:Exllama kernel is not installed, reset disable_exllama to True. This may because you installed auto_gptq using a pre-build wheel on Windows, in which exllama_kernels are not compiled. To use exllama_kernels to further speedup inference, you can re-install auto_gptq from source.
2023-08-15 11:25:11 WARNING:skip module injection for FusedLlamaMLPForQuantizedModel not support integrate without triton yet.
2023-08-15 11:25:11 WARNING:models\TheBloke_OpenOrca-Platypus2-13B-GPTQ_gptq-4bit-32g-actorder_True\tokenizer_config.json is different from the original LlamaTokenizer file. It is either customized or outdated.
2023-08-15 11:25:11 WARNING:models\TheBloke_OpenOrca-Platypus2-13B-GPTQ_gptq-4bit-32g-actorder_True\special_tokens_map.json is different from the original LlamaTokenizer file. It is either customized or outdated.
And I have yet to generate a meaningful response with any model.
[example of the generations I seem to get](https://preview.redd.it/a4zg6z8l37ib1.png?width=717&format=png&auto=webp&s=de9e2210c085790bb92205c3b99e95cc049f1c38)
Is there a specific way I must configure oobabooga in the "Parameters" or "Model" settings that I am missing? In every tutorial I follow, they seem to get it working out of the box with the one-click installer, immediately generating sensible, concise responses without slipping into gibberish.
any help with this will be so massively appreciated..... | 2023-08-15T03:50:23 | https://www.reddit.com/r/LocalLLaMA/comments/15rhc6n/how_to_configure_oobabooga_to_actually_get_it/ | Ethork66 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rhc6n | false | null | t3_15rhc6n | /r/LocalLLaMA/comments/15rhc6n/how_to_configure_oobabooga_to_actually_get_it/ | false | false | 1 | null |
|
can a model with a given context window be finetuned to have a larger context window? | 1 | Can fine-tuning have the effect of increasing the context length? Or can this be done for a particular domain (e.g. using subject matter from that domain)?
or are the models with really large context windows like claude inherently different in their architecture or some other missing secret sauce? | 2023-08-15T04:04:24 | https://www.reddit.com/r/LocalLLaMA/comments/15rhn57/can_a_model_with_a_given_context_window_be/ | nlpllama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rhn57 | false | null | t3_15rhn57 | /r/LocalLLaMA/comments/15rhn57/can_a_model_with_a_given_context_window_be/ | false | false | self | 1 | null |
Difference between llama-2-7b vs llama-2-7b-hf? | 1 | I've been trying to fine-tune with Hugging Face's SFTTrainer script (TRL). I had to convert the llama-2-7b weights to Hugging Face weights before using the script - why is that?
Is there a way I can do this without HF and directly load up the original llama-2 model?
Any resources are really appreciated, thank you. | 2023-08-15T04:59:07 | https://www.reddit.com/r/LocalLLaMA/comments/15riqey/difference_between_llama27b_vs_llama27bhf/ | amitsama92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15riqey | false | null | t3_15riqey | /r/LocalLLaMA/comments/15riqey/difference_between_llama27b_vs_llama27bhf/ | false | false | self | 1 | null |
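On why the conversion is needed: TRL and transformers expect checkpoints in the Hugging Face layout, while Meta ships consolidated .pth shards, so transformers provides the `models/llama/convert_llama_weights_to_hf.py` script for a one-time conversion; there is no supported way to point SFTTrainer at the raw Meta files directly. A minimal sketch of training on the converted directory (paths and the dataset are illustrative assumptions):

```python
# Sketch: after converting Meta's weights to the HF layout, SFTTrainer loads them directly.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTTrainer

model_path = "./llama-2-7b-hf"                     # output_dir of the conversion script (assumed)
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

dataset = load_dataset("json", data_files="train.json", split="train")  # illustrative data

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",                     # column holding the full training string
    max_seq_length=512,
)
trainer.train()
```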
What tools/techniques exist for me to outline a plot in point form or very brief notes and then have an entire story generated. | 1 | This would be less like collaborative story writing and more like rewriting general plot points into more interesting sentences with embellishments and details.
I see that in kcpp you can write into memory and build world info and such but you can't really just tell it the entire story briefly and then have the model expand it.
Does such a tool exist? Even for short stories of, say, 5000 words or less? | 2023-08-15T05:34:38 | https://www.reddit.com/r/LocalLLaMA/comments/15rjfup/what_toolstechniques_exist_for_me_to_outline_a/ | wh33t | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rjfup | false | null | t3_15rjfup | /r/LocalLLaMA/comments/15rjfup/what_toolstechniques_exist_for_me_to_outline_a/ | false | false | self | 1 | null |
Is it possible to direct AI text generation dynamically? | 1 | So in kcpp I'll write in some details about the story, add in some author's notes and describe a building and a few characters in world info. For the most part I'm getting decent generations that mostly reflect what I am hoping for, but every now and then the AI will just become adamant that the next paragraph or two in the story must revolve around a single idea and it seems unable to "imagine" anything else.
When these moments happen I feel like I wish I could prompt the AI and say "go in a different direction, or make this funnier, or switch to inner monologue".
Does such a tool or technique exist with LocalLLaMa systems I could install that would let me do something like this? | 2023-08-15T05:42:19 | https://www.reddit.com/r/LocalLLaMA/comments/15rjl4n/is_it_possible_to_direct_ai_text_generation/ | wh33t | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rjl4n | false | null | t3_15rjl4n | /r/LocalLLaMA/comments/15rjl4n/is_it_possible_to_direct_ai_text_generation/ | false | false | self | 1 | null |
Can Multiprocessing be used for faster inference of Llama2 on ec2 gpu instance | 1 | Hey everyone,
I have deployed Llama 2 13B chat on an EC2 GPU machine (g4dn.xlarge) using HF (quantized version), and I was wondering whether we can use Python multiprocessing to speed up inference.
It would be really helpful if you can provide some input. | 2023-08-15T06:57:51 | https://www.reddit.com/r/LocalLLaMA/comments/15rl002/can_multiprocessing_be_used_for_faster_inference/ | Spiritual-Rub925 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rl002 | false | null | t3_15rl002 | /r/LocalLLaMA/comments/15rl002/can_multiprocessing_be_used_for_faster_inference/ | false | false | self | 1 | null |
Reproducing phi-1 | 1 | [removed] | 2023-08-15T07:03:06 | https://www.reddit.com/r/LocalLLaMA/comments/15rl3pe/reproducing_phi1/ | Sufficient_Run1518 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rl3pe | false | null | t3_15rl3pe | /r/LocalLLaMA/comments/15rl3pe/reproducing_phi1/ | false | false | self | 1 | null |
What does this mean for GPT4? Rate limit reached for 10KTPM-200RPM in organization on tokens per min. | 1 | Does anyone know what is this exception about? and how to resolve this issue?
*Rate limit reached for 10KTPM-200RPM in organization on tokens per min. Limit: 10000 / min. Please try again in 6ms. Contact us through our help center at* [*help.openai.com*](https://help.openai.com) *if you continue to have issues.*
​ | 2023-08-15T07:33:09 | https://www.reddit.com/r/LocalLLaMA/comments/15rlmrz/what_does_this_mean_for_gpt4_rate_limit_reached/ | Think_Blackberry4114 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rlmrz | false | null | t3_15rlmrz | /r/LocalLLaMA/comments/15rlmrz/what_does_this_mean_for_gpt4_rate_limit_reached/ | false | false | self | 1 | null |
How to perform multi-GPU parallel inference for llama2? | 1 | Hi folks,
I tried running the 7b-chat-hf variant from Meta (fp16) with 2x RTX 3060 (2x 12GB). I was able to load the model shards onto both GPUs using "device_map" in AutoModelForCausalLM.from_pretrained(), and both GPUs' memory is almost full (~11GB, ~11GB), which is good.
But when it comes to model.generate(), it only uses 1 GPU: nvtop & nvidia-smi both show only 1 GPU with 99% processing usage, while the other is at 0% (keep in mind both GPUs' VRAM is still occupied).
I've been reading "Distributed inference using Accelerate": [https://huggingface.co/docs/accelerate/usage_guides/distributed_inference](https://huggingface.co/docs/accelerate/usage_guides/distributed_inference) but am still confused about how to do it.
My prompts are whole sentences such as "How can I reach xxx destination in xxx time?" or "What it takes to be a rich and successful man?", so I have no idea how to split the questions and send them to different GPUs to perform inference. The examples given by Hugging Face are simple prompts like ['a cat', 'a dog', 'a chicken'].
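For reference, a minimal sketch of the loading approach described above (assuming the fp16 HF 7b-chat weights and `device_map="auto"`). Note that this style of model parallelism only shards the weights: the layers still execute one after another, so only one GPU is busy at any moment, which matches what nvtop shows.

    # Minimal sketch: shard the 7B chat model across both 12 GB cards with Accelerate.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-2-7b-chat-hf"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,
        device_map="auto",  # Accelerate spreads the layers over cuda:0 and cuda:1
    )

    prompt = "How can I reach xxx destination in xxx time?"
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")  # inputs start on the first device
    output = model.generate(**inputs, max_new_tokens=50)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

For actual parallel throughput, the usual options are batching several prompts into a single generate() call or running a separate full copy of the model per GPU, rather than splitting one copy across cards.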
So the question is how do people perform parallel inferencing with LLMs? Thanks.
Here are my results with different models, which left me wondering whether I'm doing things right. As you can see, the fp16 original 7B model has very bad performance with the same input/output.
Llama-2-7b-chat-hf:
Prompt: "hello there"
Output generated in 27.00 seconds |1.85 tokens/s |50 output tokens |23 input tokens
Llama-2-7b-chat-GPTQ: 4bit-128g
Prompt: "hello there"
Output generated in 0.77 seconds |65.29 tokens/s |50 output tokens |23 input tokens
Llama-2-13b-chat-GPTQ: 4bit-128g
Prompt: "hello there"
Output generated in 3.13 seconds |25.26 tokens/s |79 output tokens |23 input tokens | 2023-08-15T07:39:11 | https://www.reddit.com/r/LocalLLaMA/comments/15rlqsb/how_to_perform_multigpu_parallel_inference_for/ | cringelord000222 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rlqsb | false | null | t3_15rlqsb | /r/LocalLLaMA/comments/15rlqsb/how_to_perform_multigpu_parallel_inference_for/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'XwFO1U6npNnxXTtreTz5tvpMmuCsK_ZdMABLdZj6rvo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/L2x2tb7UcKZ5UY7MoVweqtej_CDK_oNgK5qWWkOoIVw.jpg?width=108&crop=smart&auto=webp&s=33bae57943d09c41e6affe49dfcd77a2cf2e1f9c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/L2x2tb7UcKZ5UY7MoVweqtej_CDK_oNgK5qWWkOoIVw.jpg?width=216&crop=smart&auto=webp&s=7f2d1df59650b8fda1927f1cff74a000d1e7013d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/L2x2tb7UcKZ5UY7MoVweqtej_CDK_oNgK5qWWkOoIVw.jpg?width=320&crop=smart&auto=webp&s=3e2c0073b128665907649655802d7a352186cf7b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/L2x2tb7UcKZ5UY7MoVweqtej_CDK_oNgK5qWWkOoIVw.jpg?width=640&crop=smart&auto=webp&s=06e7bd72cbf44b22c23f7b85dc71ffab7ee66695', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/L2x2tb7UcKZ5UY7MoVweqtej_CDK_oNgK5qWWkOoIVw.jpg?width=960&crop=smart&auto=webp&s=0ea14a170747db3d126226d8a6fa399963b50d1c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/L2x2tb7UcKZ5UY7MoVweqtej_CDK_oNgK5qWWkOoIVw.jpg?width=1080&crop=smart&auto=webp&s=4277803c5f45bda4df1094b8cb5b42c45b8f1829', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/L2x2tb7UcKZ5UY7MoVweqtej_CDK_oNgK5qWWkOoIVw.jpg?auto=webp&s=01bc8f92a225860b58573d9e92bd431df855ac24', 'width': 1200}, 'variants': {}}]} |
[R] Instruction-tuned Large Language Models in Multiple Languages with RLHF | 1 | We've released our Okapi framework that introduces resources and models for instruction tuning for large language models (LLMs) with reinforcement learning from human feedback (RLHF) in **26 languages**. Okapi supports 8 high-resource languages, 11 medium-resource languages, and 7 low-resource languages.
Our resources include **instruction data, response ranking data** for RLHF, and **evaluation benchmark datasets in 26 languages**. Our datasets can be used to measure the progress of LLMs in these languages.
[https://github.com/nlp-uoregon/Okapi](https://github.com/nlp-uoregon/Okapi)
[https://arxiv.org/abs/2307.16039](https://arxiv.org/abs/2307.16039) | 2023-08-15T09:07:11 | https://www.reddit.com/r/LocalLLaMA/comments/15rnbef/r_instructiontuned_large_language_models_in/ | itnguyen2015 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rnbef | false | null | t3_15rnbef | /r/LocalLLaMA/comments/15rnbef/r_instructiontuned_large_language_models_in/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'aOp8k2bcfYONEiyLyjcZLFr9t3HvYWIiwJO2L3WNfrI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ey0B64NjL5aWm8gREyP-P7WPQZMnFLPyyc8mioWqqHM.jpg?width=108&crop=smart&auto=webp&s=8a26b8a153001827ecbfb162ed0051a8e259a677', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Ey0B64NjL5aWm8gREyP-P7WPQZMnFLPyyc8mioWqqHM.jpg?width=216&crop=smart&auto=webp&s=903b676fc5bbf2d8ec84564d9d41c3469b6de278', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Ey0B64NjL5aWm8gREyP-P7WPQZMnFLPyyc8mioWqqHM.jpg?width=320&crop=smart&auto=webp&s=ad3d95706099731bda409c9f3dbcd7a5495dc1e9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Ey0B64NjL5aWm8gREyP-P7WPQZMnFLPyyc8mioWqqHM.jpg?width=640&crop=smart&auto=webp&s=f732dde169bd8e05b27b74c8493cc8157850cb1f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Ey0B64NjL5aWm8gREyP-P7WPQZMnFLPyyc8mioWqqHM.jpg?width=960&crop=smart&auto=webp&s=accc35df97f826d4a4c6e23baa3855e9325f4c56', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Ey0B64NjL5aWm8gREyP-P7WPQZMnFLPyyc8mioWqqHM.jpg?width=1080&crop=smart&auto=webp&s=b182a42f31d91be8835ef4610b0ccd36de4298d4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Ey0B64NjL5aWm8gREyP-P7WPQZMnFLPyyc8mioWqqHM.jpg?auto=webp&s=3912e8dfcd8236e1e9acdbb539f5eb6e2a1ebea3', 'width': 1200}, 'variants': {}}]} |
Outlines: guiding structured output from LLMs | 1 | 2023-08-15T09:46:32 | https://github.com/normal-computing/outlines | TopperBowers | github.com | 1970-01-01T00:00:00 | 0 | {} | 15ro186 | false | null | t3_15ro186 | /r/LocalLLaMA/comments/15ro186/outlines_guiding_structured_output_from_llms/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'brqQ5fPAZ0cw7-xs4-II_MYzs9AZBLmG1e7XuyGdaIU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NdwnTXluQjkQi6aI6px6JDMTlMpU1JK6lcFlalkCqcU.jpg?width=108&crop=smart&auto=webp&s=5e26bc58d4627d50f148987c1d6d21cebb7d0707', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NdwnTXluQjkQi6aI6px6JDMTlMpU1JK6lcFlalkCqcU.jpg?width=216&crop=smart&auto=webp&s=c2e11ee4d42e02e35a2199af38433e7695647b68', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NdwnTXluQjkQi6aI6px6JDMTlMpU1JK6lcFlalkCqcU.jpg?width=320&crop=smart&auto=webp&s=98d4df81bd587969091676fdbabdf1de4f587322', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NdwnTXluQjkQi6aI6px6JDMTlMpU1JK6lcFlalkCqcU.jpg?width=640&crop=smart&auto=webp&s=d4c4b040286a551bdb1000c7cce2cbfe13d969c4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NdwnTXluQjkQi6aI6px6JDMTlMpU1JK6lcFlalkCqcU.jpg?width=960&crop=smart&auto=webp&s=83d1a861f5045a0bfb5ce6e472cee85951ce8d76', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NdwnTXluQjkQi6aI6px6JDMTlMpU1JK6lcFlalkCqcU.jpg?width=1080&crop=smart&auto=webp&s=775c052b84e7bfef59633d98acf665efaa4a9252', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NdwnTXluQjkQi6aI6px6JDMTlMpU1JK6lcFlalkCqcU.jpg?auto=webp&s=4612bfdc5a71650365a65649ef6f96beebfd82da', 'width': 1200}, 'variants': {}}]} |
||
Why is vicuna repeating itself? | 1 | Using Kobold AI, an RX 6700xt and vicuna-13b-v1.5-16k.ggmlv3.q3_K_S.
I can't get a single non-repeated answer from it.
https://preview.redd.it/rztpjxigw8ib1.png?width=1237&format=png&auto=webp&s=08fe24b78dc9fc3b8fbea9be2718ca1ffea1eded | 2023-08-15T09:47:57 | https://www.reddit.com/r/LocalLLaMA/comments/15ro26d/why_is_vicuna_repeating_itself/ | McpeIsSoBuggy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ro26d | false | null | t3_15ro26d | /r/LocalLLaMA/comments/15ro26d/why_is_vicuna_repeating_itself/ | false | false | default | 1 | null |
Why is my GPU barely being used? Kobold | 1 | System specs: 13600KF, RX 6700xt
Whenever I run an LLM in Kobold, despite theoretically having all of the layers on the GPU, my CPU seems to be doing most of the work. Why and how do I fix this?
https://preview.redd.it/ho8av4q369ib1.png?width=205&format=png&auto=webp&s=a562b1f6079805ff9b5a62fb9813992316803665
https://preview.redd.it/9sxk82rh69ib1.png?width=524&format=png&auto=webp&s=2622a291f9a0e1508f0ef8bfa76eaffe55fe72a5
​ | 2023-08-15T10:42:05 | https://www.reddit.com/r/LocalLLaMA/comments/15rp3ef/why_is_my_gpu_barely_being_used_kobold/ | McpeIsSoBuggy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rp3ef | false | null | t3_15rp3ef | /r/LocalLLaMA/comments/15rp3ef/why_is_my_gpu_barely_being_used_kobold/ | false | false | 1 | null |
|
Llama 2 q4_k_s (70B) performance without GPU | 1 | Can you share your specs (CPU, RAM) and tokens/s? | 2023-08-15T11:55:41 | https://www.reddit.com/r/LocalLLaMA/comments/15rqkuw/llama_2_q4_k_s_70b_performance_without_gpu/ | grigio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rqkuw | false | null | t3_15rqkuw | /r/LocalLLaMA/comments/15rqkuw/llama_2_q4_k_s_70b_performance_without_gpu/ | false | false | self | 1 | null
Model for ABC midi generation? | 1 | [removed] | 2023-08-15T12:31:03 | https://www.reddit.com/r/LocalLLaMA/comments/15rrd7o/model_for_abc_midi_generation/ | chocolatebanana136 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rrd7o | false | null | t3_15rrd7o | /r/LocalLLaMA/comments/15rrd7o/model_for_abc_midi_generation/ | false | false | 1 | null |
|
AI Town starter kit | 1 | A deployable starter kit for building and customizing your own version of AI town - a virtual town where AI characters live, chat and socialize. | 2023-08-15T12:49:34 | https://x.com/stuffyokodraws/status/1691179412069445632?s=46&t=4Lg1z9tXUANCKLiHwRSk_A | Acrobatic-Site2065 | x.com | 1970-01-01T00:00:00 | 0 | {} | 15rrt0m | false | null | t3_15rrt0m | /r/LocalLLaMA/comments/15rrt0m/ai_town_starter_kit/ | false | false | default | 1 | null |
Bark & Whisper min requirements? | 1 | Bark & Whisper min requirements?
1. Min hardware requirements for real-time generation
2. Min hardware requirements for relatively good results and short generation times (not real time)
I have R5 5600G, 16GB RAM, NO GPU, is there hope for me? | 2023-08-15T13:18:44 | https://www.reddit.com/r/LocalLLaMA/comments/15rsihe/bark_whisper_min_requirements/ | SakamotoKyu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rsihe | false | null | t3_15rsihe | /r/LocalLLaMA/comments/15rsihe/bark_whisper_min_requirements/ | false | false | self | 1 | null |
are there any open (public) APIs that serve vicuna v1.5 for testing? | 1 | Got stuck with the AWS limit, so I'm searching for an alternative... | 2023-08-15T13:34:09 | https://www.reddit.com/r/LocalLLaMA/comments/15rswkd/are_there_any_open_public_api_that_serve_vicuna/ | Expensive_Breakfast6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rswkd | false | null | t3_15rswkd | /r/LocalLLaMA/comments/15rswkd/are_there_any_open_public_api_that_serve_vicuna/ | false | false | self | 1 | null
Any resources on how data and model translate into system requirements? | 1 | [removed] | 2023-08-15T13:34:46 | https://www.reddit.com/r/LocalLLaMA/comments/15rsx5e/any_resources_on_how_data_and_model_translate/ | solipcism | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rsx5e | false | null | t3_15rsx5e | /r/LocalLLaMA/comments/15rsx5e/any_resources_on_how_data_and_model_translate/ | false | false | self | 1 | null |
mapping data and model specifics to GPU/memory/system requirements | 1 | [removed] | 2023-08-15T13:36:31 | https://www.reddit.com/r/LocalLLaMA/comments/15rsyqt/mapping_data_and_model_specifics_to/ | solipcism | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rsyqt | false | null | t3_15rsyqt | /r/LocalLLaMA/comments/15rsyqt/mapping_data_and_model_specifics_to/ | false | false | self | 1 | null |
Carl: A Therapist AI Models | 1 | Link to download Llama-2 model: [https://huggingface.co/ajibawa-2023/carl-llama-2-13b](https://huggingface.co/ajibawa-2023/carl-llama-2-13b)
Link for Llama model: carl-33b [https://huggingface.co/ajibawa-2023/carl-33b](https://huggingface.co/ajibawa-2023/carl-33b)
​
**Carl: A Therapist AI**
Early prevention can help a lot of people avoid depression and other mental illnesses. Therapy is a controversial use case because the outputs and capabilities of LLMs are uncertain. Many people don't have access to a therapist due to financial, personal, social or other restrictions. Here comes Carl: A Therapist AI which can quickly respond to you. It is trained on more than 100000 sets of conversations, each set having 10~15 conversations between Carl and a client. Base data was obtained from u/[**ZealousidealBlock330**](https://www.reddit.com/user/ZealousidealBlock330/). This data was further refined and fine-tuned. The entire dataset is synthetic. Synthetic data is used because there is little to no therapy conversation data which is publicly available and directly applicable to an LLM. This is by no means a replacement for a doctor or professional therapist. If you are stressed or going through a tough time, please seek professional help or talk to a friend/family member.
**Training:** The entire dataset was trained on Azure with 4 x A100 80GB GPUs. The DeepSpeed codebase was used for training purposes. Models were trained on Llama-1 & 2 by Meta. The GGML quant model (carl-llama-2-13b) was trained by [**Feanix**](https://huggingface.co/Feanix). Extremely thankful to him.
Extremely thankful to the opensource community and u/faldore , Pankaj Mathur , Tom "TheBloke" Jobbins, [/u/kaiokendev](https://www.reddit.com/u/kaiokendev/) for guiding me through this community and through 'X'.
Request u/The-Bloke to do the Quants of other models as time permits. Thank you.
This community has been a great learning source and I can't thank it enough.
If you find mistakes in the model then they are solely mine. I am looking forward to collaborate with like minded people to release many other models.
Thank you | 2023-08-15T14:03:24 | https://www.reddit.com/r/LocalLLaMA/comments/15rtnsa/carl_a_therapist_ai_models/ | ajibawa-2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rtnsa | false | null | t3_15rtnsa | /r/LocalLLaMA/comments/15rtnsa/carl_a_therapist_ai_models/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'mqTdABXx-yZoJ3Qg0xlRfhEt1V8zn1BYSMO1-nHNvUA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IKTJF4_P5KYMdO0nYsyYtYJl7iQjls_UBMHy7y7QqGY.jpg?width=108&crop=smart&auto=webp&s=24b2dcfa9fd0923dcc75136b7b8390ac299cd302', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IKTJF4_P5KYMdO0nYsyYtYJl7iQjls_UBMHy7y7QqGY.jpg?width=216&crop=smart&auto=webp&s=bf732601fc810eeaea28c5d7c334774fe322b899', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IKTJF4_P5KYMdO0nYsyYtYJl7iQjls_UBMHy7y7QqGY.jpg?width=320&crop=smart&auto=webp&s=e23b6b7133836bad46bb1e63859c1f103148acaf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IKTJF4_P5KYMdO0nYsyYtYJl7iQjls_UBMHy7y7QqGY.jpg?width=640&crop=smart&auto=webp&s=f683c7438f74221452321d24767d22bb7d7d1ec5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IKTJF4_P5KYMdO0nYsyYtYJl7iQjls_UBMHy7y7QqGY.jpg?width=960&crop=smart&auto=webp&s=6a7b3cfe9c85b28466834d35185b6034604d75ce', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IKTJF4_P5KYMdO0nYsyYtYJl7iQjls_UBMHy7y7QqGY.jpg?width=1080&crop=smart&auto=webp&s=61b51b21d284c84074e4dafb816a347962aa65a2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IKTJF4_P5KYMdO0nYsyYtYJl7iQjls_UBMHy7y7QqGY.jpg?auto=webp&s=92ef5c58a10772a125cc04ec1842cc91c9beff17', 'width': 1200}, 'variants': {}}]} |
Video breakdown of the 78-page Llama-2 paper (Thumbnail created by Stable Diffusion) | 1 | [removed] | 2023-08-15T15:45:49 | https://www.reddit.com/r/LocalLLaMA/comments/15rwbew/video_breakdown_of_the_78page_llama2_paper/ | HoneyBeeStripe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rwbew | false | {'oembed': {'author_name': 'Deepgram', 'author_url': 'https://www.youtube.com/@Deepgram', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/Otb7Xi8Z0Oo?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="Llama 1 vs. Llama 2: Meta's Genius Breakthrough in AI Architecture | Research Paper Breakdown"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/Otb7Xi8Z0Oo/hqdefault.jpg', 'thumbnail_width': 480, 'title': "Llama 1 vs. Llama 2: Meta's Genius Breakthrough in AI Architecture | Research Paper Breakdown", 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_15rwbew | /r/LocalLLaMA/comments/15rwbew/video_breakdown_of_the_78page_llama2_paper/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'WyY86rkRT0RQDXTSMiAQtsPJZJMRyISOfruPdNtSllI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/1gyRHKQWmD_WFyYZOrnJK6t1kzIFQrsajppxJc7cXX4.jpg?width=108&crop=smart&auto=webp&s=237cea185adcf4433a61ade59ec9133b98b6ac8b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/1gyRHKQWmD_WFyYZOrnJK6t1kzIFQrsajppxJc7cXX4.jpg?width=216&crop=smart&auto=webp&s=53ce80867b4ff659b04a6a6d49a4e15ba83c9798', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/1gyRHKQWmD_WFyYZOrnJK6t1kzIFQrsajppxJc7cXX4.jpg?width=320&crop=smart&auto=webp&s=1af91d8828ab9bced4602c8d7163495c1d363bfc', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/1gyRHKQWmD_WFyYZOrnJK6t1kzIFQrsajppxJc7cXX4.jpg?auto=webp&s=bfa01b79a1bc46adbbd9e5eb8e983eb4f5791967', 'width': 480}, 'variants': {}}]} |
|
The LLM GPU Buying Guide - August 2023 | 1 | Hi all, here's a buying guide that I made after getting multiple questions on where to start from my network. I used Llama-2 as the guideline for VRAM requirements. Enjoy! Hope it's useful to you and if not, fight me below :)
Also, don't forget to apologize to your local gamers while you snag their GeForce cards.
[The LLM GPU Buying Guide - August 2023](https://preview.redd.it/4nve5pq5oaib1.png?width=1248&format=png&auto=webp&s=d101c229b9a002b10e1606788fa8db8ab4fb2e32) | 2023-08-15T15:48:50 | https://www.reddit.com/r/LocalLLaMA/comments/15rwe7t/the_llm_gpu_buying_guide_august_2023/ | Dependent-Pomelo-853 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rwe7t | false | null | t3_15rwe7t | /r/LocalLLaMA/comments/15rwe7t/the_llm_gpu_buying_guide_august_2023/ | false | false | 1 | null |
|
Experience of setting up LLAMA 2 70B Chat locally | 1 | **Motivation**
I was using the services of *Replicate* for prototyping prompts for an office project. However, their service was frequently returning "CUDA error device-side assert triggered". Therefore, we decided to set up a 70B chat server locally. We used an Nvidia A40 with 48GB of VRAM.
**GPU Drivers and Toolkit**
* Install the [Nvidia CUDA 12.2 Toolkit](https://developer.nvidia.com/cuda-toolkit-archive)
* Install the [CUDA Drivers](https://www.nvidia.com/download/index.aspx)
* As specified in the CUDA Toolkit post-installation, add the following to *.bashrc*
​
export PATH=/usr/local/cuda-12.2/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-12.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
* Check the installation
​
$ nvcc --version
$ nvidia-smi
**Setting Environment**
* Create and activate a virtual environment
​
$ sudo apt-get install build-essential libssl-dev libffi-dev python-dev
$ sudo apt-get install -y python3-venv
$ python3 -m venv venv
$ source venv/bin/activate
* Install PyTorch
​
$ pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121
* Install Transformer and dependencies
​
$ pip install transformers==4.31.0
* Setup and compile AutoGPTQ
​
$ git clone https://github.com/PanQiWei/AutoGPTQ
$ cd AutoGPTQ
$ pip3 install .
$ cd ..
* Log into HuggingFace via CLI. You need to request access to LLAMA2 from Meta to download it here.
​
$ git config --global credential.helper store
$ huggingface-cli login
**Sample Code**
# From https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TheBloke/Llama-2-70B-chat-GPTQ", use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
"TheBloke/Llama-2-70B-chat-GPTQ",
inject_fused_attention=False,
use_safetensors=True,
trust_remote_code=False,
device="cuda:0",
use_triton=False,
quantize_config=None,
)
user_prompt = "Tell me about AI"
system_prompt = "You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information."
prompt=f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_prompt} [/INST]"
input_ids = tokenizer([prompt], return_tensors="pt", add_special_tokens=False)["input_ids"].to("cuda")
output = model.generate(inputs=input_ids, max_new_tokens=4096, do_sample=True, top_p=0.95, top_k=50, temperature=0.5, num_beams=1)
output_ids = output[0]
response = tokenizer.decode(output_ids)
response = response[len(prompt):] | 2023-08-15T16:08:02 | https://www.reddit.com/r/LocalLLaMA/comments/15rwwws/experience_of_setting_up_llama_2_70b_chat_locally/ | UncertainLangur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rwwws | false | null | t3_15rwwws | /r/LocalLLaMA/comments/15rwwws/experience_of_setting_up_llama_2_70b_chat_locally/ | false | false | self | 1 | null |
Open-Orca-Platypus is out! a 13b surpassing llama 65b!? | 1 | Today we bring the heat again!
​
We're releasing OpenOrca-Platypus2-13B, or as we call it affectionately among the team: OrcaPlaty
[https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B)
​
We have another chart-topper ready and out the gates.
This time we place above all 13Bs, as well as above llama1-65b!
We're placing between llama-65b and Llama2-70B-chat on the HuggingFace leaderboard now.
​
This release is a merge of our OpenOrcaxOpenChat Preview2 and Platypus2, making a model that is more than the sum of its parts.
​
We have the model running unquantized on fast GPUs for you to play with now in your browser too.
Go check it out!
[https://huggingface.co/spaces/Open-Orca/OpenOrca-Platypus2-13B](https://huggingface.co/spaces/Open-Orca/OpenOrca-Platypus2-13B)
Check out the paper!
[https://huggingface.co/papers/2308.07317](https://huggingface.co/papers/2308.07317)
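If you would rather load it locally than use the browser demo, a rough sketch is below (unquantized fp16 needs roughly 26 GB of VRAM; the generation settings and the Alpaca-style prompt shown here are assumptions, so check the model card for the exact template):

    # Rough local-inference sketch; settings are illustrative, not official.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Open-Orca/OpenOrca-Platypus2-13B"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )

    # Assumed instruction format; verify against the model card.
    prompt = "### Instruction:\n\nWhat makes a platypus unusual?\n\n### Response:\n"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
    print(tokenizer.decode(out[0], skip_special_tokens=True))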
​
​
https://preview.redd.it/nwudhanwvaib1.png?width=829&format=png&auto=webp&s=692cef4321084c17cae3f21c9899c152059b08d9
This is thanks to our partnership with the amazing Platypus team.
Cole Hunter, Ariel Lee, and Nataniel Ruiz have come with plenty of enthusiasm and great ideas, and we have more in store working with them!
​ | 2023-08-15T16:29:33 | https://www.reddit.com/r/LocalLLaMA/comments/15rxg6o/openorcaplatypus_it_out_a_13b_surpassing_llama_65b/ | Alignment-Lab-AI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rxg6o | false | null | t3_15rxg6o | /r/LocalLLaMA/comments/15rxg6o/openorcaplatypus_it_out_a_13b_surpassing_llama_65b/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'aGSBdTnyxE1IQubwYPrTLMgh1zVjFUQ3ibFoSW3qVqQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/92pOcI2HbwHEJEZVaDTjd0JmHRXYK3T73r-a1mRqOeg.jpg?width=108&crop=smart&auto=webp&s=cb8346f982e7bf547cb3b5b1ce9e24c7a43bf11d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/92pOcI2HbwHEJEZVaDTjd0JmHRXYK3T73r-a1mRqOeg.jpg?width=216&crop=smart&auto=webp&s=f8ae57b9583945352bd86ecaa790703db51aa0e3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/92pOcI2HbwHEJEZVaDTjd0JmHRXYK3T73r-a1mRqOeg.jpg?width=320&crop=smart&auto=webp&s=852fd6132af32a4fac84c4be1044037f8224352a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/92pOcI2HbwHEJEZVaDTjd0JmHRXYK3T73r-a1mRqOeg.jpg?width=640&crop=smart&auto=webp&s=5254450daa351fd62c7526427b75b8aab3df19f7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/92pOcI2HbwHEJEZVaDTjd0JmHRXYK3T73r-a1mRqOeg.jpg?width=960&crop=smart&auto=webp&s=d4792457343f884ee1cb7c762fa9e7df76d6e726', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/92pOcI2HbwHEJEZVaDTjd0JmHRXYK3T73r-a1mRqOeg.jpg?width=1080&crop=smart&auto=webp&s=eaebbdeb8fccd1500ae7e8b9f8c0020d7252c897', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/92pOcI2HbwHEJEZVaDTjd0JmHRXYK3T73r-a1mRqOeg.jpg?auto=webp&s=31451acc959bd087644d3fcb0d38cc437d17dec0', 'width': 1200}, 'variants': {}}]} |
|
Open-Orca-Platypus is out! a 13b that surpasses llama65b!? | 1 | Today we bring the heat again!
​
We're releasing OpenOrca-Platypus2-13B, or as we call it affectionately among the team: OrcaPlaty (or Orctypus)
[https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B)
​
We have another chart-topper ready and out the gates.
This time we place above all 13Bs, as well as above llama1-65b!
We're placing between llama-65b and Llama2-70B-chat on the HuggingFace leaderboard now.
​
This release is a merge of our OpenOrcaxOpenChat Preview2 and Platypus2, making a model that is more than the sum of its parts.
​
We have the model running unquantized on fast GPUs for you to play with now in your browser too.
Go check it out!
[https://huggingface.co/spaces/Open-Orca/OpenOrca-Platypus2-13B](https://huggingface.co/spaces/Open-Orca/OpenOrca-Platypus2-13B)
Check out the paper: [https://huggingface.co/papers/2308.07317](https://huggingface.co/papers/2308.07317)
​
​
This is thanks to our partnership with the amazing Platypus team.
Cole Hunter, Ariel Lee, and Nataniel Ruiz have come with plenty of enthusiasm and great ideas, and we have more in store working with them!
​ | 2023-08-15T16:35:45 | https://www.reddit.com/r/LocalLLaMA/comments/15rxlrq/openorcaplatypus_is_out_a_13b_that_surpasses/ | Alignment-Lab-AI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rxlrq | false | null | t3_15rxlrq | /r/LocalLLaMA/comments/15rxlrq/openorcaplatypus_is_out_a_13b_that_surpasses/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'aGSBdTnyxE1IQubwYPrTLMgh1zVjFUQ3ibFoSW3qVqQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/92pOcI2HbwHEJEZVaDTjd0JmHRXYK3T73r-a1mRqOeg.jpg?width=108&crop=smart&auto=webp&s=cb8346f982e7bf547cb3b5b1ce9e24c7a43bf11d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/92pOcI2HbwHEJEZVaDTjd0JmHRXYK3T73r-a1mRqOeg.jpg?width=216&crop=smart&auto=webp&s=f8ae57b9583945352bd86ecaa790703db51aa0e3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/92pOcI2HbwHEJEZVaDTjd0JmHRXYK3T73r-a1mRqOeg.jpg?width=320&crop=smart&auto=webp&s=852fd6132af32a4fac84c4be1044037f8224352a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/92pOcI2HbwHEJEZVaDTjd0JmHRXYK3T73r-a1mRqOeg.jpg?width=640&crop=smart&auto=webp&s=5254450daa351fd62c7526427b75b8aab3df19f7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/92pOcI2HbwHEJEZVaDTjd0JmHRXYK3T73r-a1mRqOeg.jpg?width=960&crop=smart&auto=webp&s=d4792457343f884ee1cb7c762fa9e7df76d6e726', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/92pOcI2HbwHEJEZVaDTjd0JmHRXYK3T73r-a1mRqOeg.jpg?width=1080&crop=smart&auto=webp&s=eaebbdeb8fccd1500ae7e8b9f8c0020d7252c897', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/92pOcI2HbwHEJEZVaDTjd0JmHRXYK3T73r-a1mRqOeg.jpg?auto=webp&s=31451acc959bd087644d3fcb0d38cc437d17dec0', 'width': 1200}, 'variants': {}}]} |
What's the best English based Voice Cloning Model? | 1 | I am exploring a couple of Voice Cloning Models for Text to Speech but haven't had much success.
I have tried --
1. serp-ai bark voice clone - [https://github.com/serp-ai/bark-with-voice-clone/](https://github.com/serp-ai/bark-with-voice-clone/)
2. and a couple of models from TTS like - [https://github.com/coqui-ai/tts](https://github.com/coqui-ai/tts)
Are there any good models which work well for voice cloning with English speakers? | 2023-08-15T16:52:38 | https://www.reddit.com/r/LocalLLaMA/comments/15ry1ga/whats_the_best_english_based_voice_cloning_model/ | apple_pie0306 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ry1ga | false | null | t3_15ry1ga | /r/LocalLLaMA/comments/15ry1ga/whats_the_best_english_based_voice_cloning_model/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'fqOKCqjCruczf3NIIPI7HcJI_cul8Vn8yyZM3AUWYck', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/o6_Kh_eN12_VLM3qiyg-_mHEX9hgZj-BkbkTyK_rdFs.jpg?width=108&crop=smart&auto=webp&s=21f9f1c70c75c5ef484308578865ca69175136e8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/o6_Kh_eN12_VLM3qiyg-_mHEX9hgZj-BkbkTyK_rdFs.jpg?width=216&crop=smart&auto=webp&s=bc0a6891720153b1d0815629e3500c679f2e4691', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/o6_Kh_eN12_VLM3qiyg-_mHEX9hgZj-BkbkTyK_rdFs.jpg?width=320&crop=smart&auto=webp&s=8097fe5cf316d18eb72ebadf5f28253e9ecbbac8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/o6_Kh_eN12_VLM3qiyg-_mHEX9hgZj-BkbkTyK_rdFs.jpg?width=640&crop=smart&auto=webp&s=92ee1c80ab6d5823c49b0255a21f377f4c7d2510', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/o6_Kh_eN12_VLM3qiyg-_mHEX9hgZj-BkbkTyK_rdFs.jpg?width=960&crop=smart&auto=webp&s=158624a0f6d8a1f016f447d9ccb7b4956650ad51', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/o6_Kh_eN12_VLM3qiyg-_mHEX9hgZj-BkbkTyK_rdFs.jpg?width=1080&crop=smart&auto=webp&s=2797e593d56fe8d304ed138e8b9b2cc32bdf70e1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/o6_Kh_eN12_VLM3qiyg-_mHEX9hgZj-BkbkTyK_rdFs.jpg?auto=webp&s=8944118d20423dc089215cfa21dc867f69443ad2', 'width': 1200}, 'variants': {}}]} |
Can I train Llama with my own PDF documents? | 1 | Is it possible to train Llama with my own PDF documents to help me with my research? For instance if I upload my documents would it be able to read and answer questions about the information on those PDF documents? I would appreciate any insights. | 2023-08-15T17:42:13 | https://www.reddit.com/r/LocalLLaMA/comments/15rzcat/can_i_train_llama_with_my_own_pdf_documents/ | RomadCV | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rzcat | false | null | t3_15rzcat | /r/LocalLLaMA/comments/15rzcat/can_i_train_llama_with_my_own_pdf_documents/ | false | false | self | 1 | null |
How to fine-tune llama2 on Mac M2 with 16gb? | 1 | I am new to LLMs. I understand that quantized models of Llama2 can be run on MacBook with optimum use of GPU. I was wondering if it is also possible to fine tune on Mac with custom dataset? Any examples in Python on how to do this would be helpful. | 2023-08-15T17:57:45 | https://www.reddit.com/r/LocalLLaMA/comments/15rzr3y/how_to_finetune_llama2_on_mac_m2_with_16gb/ | sbs1799 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rzr3y | false | null | t3_15rzr3y | /r/LocalLLaMA/comments/15rzr3y/how_to_finetune_llama2_on_mac_m2_with_16gb/ | false | false | self | 1 | null |
Uploading model to Hugging Face is stuck at hashing. | 1 | The model is only 7GB, and I have great internet connection. Any idea why this could happen?
​
https://preview.redd.it/zpgbnxbnlbib1.png?width=2472&format=png&auto=webp&s=58d7be7c9454ac17a70376732749905b206487ba | 2023-08-15T18:50:32 | https://www.reddit.com/r/LocalLLaMA/comments/15s15za/uploading_model_to_hugging_face_is_stuck_at/ | bot-333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15s15za | false | null | t3_15s15za | /r/LocalLLaMA/comments/15s15za/uploading_model_to_hugging_face_is_stuck_at/ | false | false | 1 | null |
|
What is Hugging Face / The Hugging Face Version of Llama-2? | 1 | I'm confused about what Hugging Face is. From my research, they seem to sell servers to run LLMs in the cloud and write packages to help run those LLMs. That makes sense, but then there's a specific version of Llama-2 for Hugging Face and many tutorials (for example on quantization) use the Hugging Face version of Llama-2. Is that version most people use?
I run Llama-2 on my own device, so is using the Hugging Face version of Llama-2 still for me? Is it just a fine-tuned version of Llama-2 based on Hugging Face datasets or is it designed to interact with their packages better?
Apologies for a simple question, but can't wrap my head around this. | 2023-08-15T18:52:59 | https://www.reddit.com/r/LocalLLaMA/comments/15s18c0/what_is_hugging_face_the_hugging_face_version_of/ | MasterJaguar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15s18c0 | false | null | t3_15s18c0 | /r/LocalLLaMA/comments/15s18c0/what_is_hugging_face_the_hugging_face_version_of/ | false | false | default | 1 | null |
HF model suggestions for text multi class classification? (Fine-tuning) | 1 | I have been using BERTForSequenceClassification, fine-tuning it on my data and using it for sentence classification (16 classes). Results aren't that good. Any other model suggestions, please? | 2023-08-15T20:08:56 | https://www.reddit.com/r/LocalLLaMA/comments/15s3j8z/hf_model_suggestions_for_text_multi_class/ | Wrong-Pension7258 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15s3j8z | false | null | t3_15s3j8z | /r/LocalLLaMA/comments/15s3j8z/hf_model_suggestions_for_text_multi_class/ | false | false | self | 1 | null
Can I use my 5700xt to use Llama? | 1 | I wanted to host Llama or maybe Llama 2 7b, but I know that if you want it to work for sure you have to have an Nvidia GPU; sadly, mine is AMD. Would it still work? If not, how could I make something similar to this work? I don't want to pay for hosting services, and want to use my own API key for the sites that ask for one. | 2023-08-15T20:21:20 | https://www.reddit.com/r/LocalLLaMA/comments/15s3vd0/can_i_use_my_5700xt_to_use_llama/ | s-cardi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15s3vd0 | false | null | t3_15s3vd0 | /r/LocalLLaMA/comments/15s3vd0/can_i_use_my_5700xt_to_use_llama/ | false | false | self | 1 | null
Not Sure if this is the right place to post. Installed Oobabooga and a few loaders from Hugging Face. All responses from AI models are generic and only two or three sentences. | 1 | Any way to make the responses longer or more detailed? No matter how I phrase the question, whatever model I load, it only gives me one or two sentence answers that are vague and non-specific.
If someone could point me in the right direction to begin my search, that'd be great. Thank you!
Thank you! | 2023-08-15T20:28:20 | https://www.reddit.com/r/LocalLLaMA/comments/15s425y/not_sure_if_this_is_the_right_place_to_post/ | InnsmouthConspirator | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15s425y | false | null | t3_15s425y | /r/LocalLLaMA/comments/15s425y/not_sure_if_this_is_the_right_place_to_post/ | false | false | self | 1 | null |
Inconsistent Token Speed on Llama 2 Chat 70b with Exllama | 1 | Hi everyone,
I've been experiencing some issues with inconsistent token speed while using Llama 2 Chat 70b GPTQ 4 Bits 128g Act Order True with Exllama. My setup includes using the oobabooga WebUI.
**Platform/Device:** Linux
**Exllama Settings:** 4096 Alpha 1 compress_pos_emb 1 (Default).
​
Has anyone else encountered this problem? I would appreciate any advice or insights from those who have faced similar challenges.
Thanks in advance!
​
https://preview.redd.it/sj6z0y3a4cib1.png?width=612&format=png&auto=webp&s=deb07c72e912c2c9f58243e6d04b3e255c5d7d03 | 2023-08-15T20:35:32 | https://www.reddit.com/r/LocalLLaMA/comments/15s497d/inconsistent_token_speed_on_llama_2_chat_70b_with/ | Used_Carpenter_6674 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15s497d | false | null | t3_15s497d | /r/LocalLLaMA/comments/15s497d/inconsistent_token_speed_on_llama_2_chat_70b_with/ | false | false | 1 | null |
|
Training | 1 | [removed] | 2023-08-15T21:36:47 | https://www.reddit.com/r/LocalLLaMA/comments/15s5wdx/training/ | Ulan0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15s5wdx | false | null | t3_15s5wdx | /r/LocalLLaMA/comments/15s5wdx/training/ | false | false | self | 1 | null |
How do AMD GPUs perform on llama.cpp? | 1 | How does a GPU such as the AMD 7900 XTX perform when using it to offload layers in llama.cpp? How many tokens per second could I expect on 13b and 70b models?
I would plan on using a Ryzen 7 5800x/7800x and 64GB of RAM | 2023-08-15T21:37:38 | https://www.reddit.com/r/LocalLLaMA/comments/15s5x5o/how_do_amd_gpus_perform_on_llamacpp/ | joseluissaorin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15s5x5o | false | null | t3_15s5x5o | /r/LocalLLaMA/comments/15s5x5o/how_do_amd_gpus_perform_on_llamacpp/ | false | false | self | 1 | null |
I released model EverythingLM 3B. | 1 | Today I released a new model named EverythingLM 3B. It is [OpenLLaMA 3B V2](https://huggingface.co/openlm-research/open_llama_3b_v2) fine-tuned on [EverythingLM Data (ShareGPT format, more cleaned)](https://huggingface.co/datasets/totally-not-an-llm/everything-sharegptformat-morecleaned) for 1 epoch.
The prompt format is:
### HUMAN:
{prompt}
### RESPONSE:
<leave a newline for the model to answer>
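A rough usage sketch for that format (the repo id below is a placeholder; point it at wherever the model ends up being hosted):

    # Sketch only: apply the HUMAN/RESPONSE prompt format above and generate.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "your-user/EverythingLM-3b"  # hypothetical id, substitute the real repo
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    prompt = "### HUMAN:\nWhat is 1 + 1?\n### RESPONSE:\n"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(out[0], skip_special_tokens=True))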
Note: Don't expect this model to be good; I was just starting out with fine-tuning (in fact, this is my first fine-tune). So please don't roast me!
Maybe u/The-Bloke will quantize it, or I could do it myself.
Just in case, this model answers 1 + 1 correctly, proof:
https://preview.redd.it/u7y3hddcjcib1.png?width=2862&format=png&auto=webp&s=568ca6ec596180e3fd3b6cd11a9c12138928fac3 | 2023-08-15T21:59:31 | https://www.reddit.com/r/LocalLLaMA/comments/15s6hlx/i_released_model_everythinglm_3b/ | bot-333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15s6hlx | false | null | t3_15s6hlx | /r/LocalLLaMA/comments/15s6hlx/i_released_model_everythinglm_3b/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'mBBZscMdj4CoFq4VFvDECA6EMTJ--wbO5dOXowR_oEs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pqZEAStqXNPuhwZNyfyzq23_uzdmnnkgm52iuYTq61o.jpg?width=108&crop=smart&auto=webp&s=186c5f83508e7d36bb7dacc4426d56a61a7673fc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pqZEAStqXNPuhwZNyfyzq23_uzdmnnkgm52iuYTq61o.jpg?width=216&crop=smart&auto=webp&s=f0ddd4a71a55469d6f7481674e070ec06209ddc7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pqZEAStqXNPuhwZNyfyzq23_uzdmnnkgm52iuYTq61o.jpg?width=320&crop=smart&auto=webp&s=264bc4f7287addb6aaed8eed9d8b67bfe6dabe42', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pqZEAStqXNPuhwZNyfyzq23_uzdmnnkgm52iuYTq61o.jpg?width=640&crop=smart&auto=webp&s=f7afa4c72eb2891b027dc596504197a6a2166345', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pqZEAStqXNPuhwZNyfyzq23_uzdmnnkgm52iuYTq61o.jpg?width=960&crop=smart&auto=webp&s=dccdce5c03bb85ef13e56a7990e312e1cdbec655', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pqZEAStqXNPuhwZNyfyzq23_uzdmnnkgm52iuYTq61o.jpg?width=1080&crop=smart&auto=webp&s=1caee76c14bdae7320690c4eca64cbeb0f7aaedb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pqZEAStqXNPuhwZNyfyzq23_uzdmnnkgm52iuYTq61o.jpg?auto=webp&s=a018a51ab688c24aef33735838091e8aca98c7dd', 'width': 1200}, 'variants': {}}]} |
|
On demand vs Reserved instances for LLM fine-tuning | 1 |
Hi everyone,
I am looking at different options to get access to GPUs to train an LLM on an enterprise use case for a customer. They are on a specific cloud provider with associated credits, so I have no ability to go outside of it, e.g. to RunPod.
We are debating whether we should go for On-demand or Reserved instances. This client is a Fortune 500, so they could reserve instances, but it might not be the best choice.
It seems to me that with LoRA / QLoRA, and so on, we might be able to fine-tune a Llama 2 with one or two GPUs on-demand, but I am unsure yet.
So our main evaluation criteria are:
- Price
- Availability of GPUs, aka we don't want to waste too much time getting started
In your experience, have you had difficulties to get access to GPUs at a good price? How many of you had to go with reserved instances, and if so what made you choose this option?
I would love to have your feedback!
| 2023-08-15T22:20:02 | https://www.reddit.com/r/LocalLLaMA/comments/15s70gf/on_demand_vs_reserved_instances_for_llm_finetuning/ | Separate-Still3770 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15s70gf | false | null | t3_15s70gf | /r/LocalLLaMA/comments/15s70gf/on_demand_vs_reserved_instances_for_llm_finetuning/ | false | false | self | 1 | null |
Potential fix to the repetitiveness problem of Llama 2 | 1 | I, like a few others, have noticed that the output quality of Llama 2 tends to degrade as the context size increases, becoming increasingly prone to repetition of words and sentences. Sometimes it derails the entire conversation and makes continuing the chat impossible.
As this happened again, I wondered what would happen if I simply raised the repetition penalty. Using Koboldcpp, I increased it to 1.19. That seemed to have fixed the problem for the most part! It now continues the conversation as it should. I can sometimes still see its tendencies to almost repeat itself, but it shakes out of it again if you hit retry.
The model I used was chronos-hermes-13b q5 using the Ace of spades sampler.
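If you drive Koboldcpp through its API instead of the UI, the same tweak looks roughly like this (field names follow the KoboldAI-style API that Koboldcpp exposes; adjust to your version and port):

    # Sketch: request a generation with the raised repetition penalty.
    import requests

    payload = {
        "prompt": "Continue the story from where we left off.\n",
        "max_length": 200,
        "temperature": 0.7,
        "rep_pen": 1.19,        # the raised repetition penalty discussed above
        "rep_pen_range": 1024,  # how many recent tokens the penalty considers
    }
    r = requests.post("http://localhost:5001/api/v1/generate", json=payload)
    print(r.json()["results"][0]["text"])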
Maybe this isn't a fix that works universally (maybe I just got lucky with that particular chat), but it's maybe worth exploring further? I was almost about to give up on Llama 2 for this reason, but seems like I can give it a new chance. | 2023-08-15T22:43:49 | https://www.reddit.com/r/LocalLLaMA/comments/15s7ln1/potential_fix_to_the_repetitiveness_problem_of/ | Xeruthos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15s7ln1 | false | null | t3_15s7ln1 | /r/LocalLLaMA/comments/15s7ln1/potential_fix_to_the_repetitiveness_problem_of/ | false | false | self | 1 | null |
What’s the bare minimum specs needed for running ai? | 1 | I’ve heard 2 3090s or a 4090 but can you get away with other? | 2023-08-15T23:13:48 | https://www.reddit.com/r/LocalLLaMA/comments/15s8crb/whats_the_bare_minimum_specs_needed_for_running_ai/ | PremoSaysThings | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15s8crb | false | null | t3_15s8crb | /r/LocalLLaMA/comments/15s8crb/whats_the_bare_minimum_specs_needed_for_running_ai/ | false | false | self | 1 | null |
The Bloke - Appreciation | 1 | [removed] | 2023-08-16T00:14:01 | https://www.reddit.com/r/LocalLLaMA/comments/15s9tgk/the_bloke_appreciation/ | LoadingALIAS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15s9tgk | false | null | t3_15s9tgk | /r/LocalLLaMA/comments/15s9tgk/the_bloke_appreciation/ | false | false | self | 1 | null |
WizardLM demos | 1 | 2023-08-16T00:32:04 | https://twitter.com/Gradio/status/1691400707264630784?t=vQgwiwyYC_v8gjcrSh70BA&s=19 | ninjasaid13 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 15sa8b7 | false | {'oembed': {'author_name': 'Gradio', 'author_url': 'https://twitter.com/Gradio', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Do you know that🔥WizardMath-70B🔥outperforms (on the GSM8K) most celebrated closed-source LLMs out there, like ChatGPT3.5, Claude Instant1 and PaLM2 540B?🤯<a href="https://twitter.com/Gradio?ref_src=twsrc%5Etfw">@Gradio</a> demos here- <br>🧙\u200d♀️70B: http://47.103.63.15:50083/<br>🧙13B: http://47.103.63.15:50082/<br>🧙\u200d♂️7B: http://47.103.63.15:50080/</p>— Gradio (@Gradio) <a href="https://twitter.com/Gradio/status/1691400707264630784?ref_src=twsrc%5Etfw">August 15, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/Gradio/status/1691400707264630784', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_15sa8b7 | /r/LocalLLaMA/comments/15sa8b7/wizardlm_demos/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'MVhNJgwwBMwnAXYEvelHj0E5qpCGg1DsopbJaox1870', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/fLaVrxbMV3kVJxR_Ri3hTqlG-zjODnRj37P-OZywU0g.jpg?width=108&crop=smart&auto=webp&s=2aba868c56edbe8c16255ff7ab4844c009aa712e', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/fLaVrxbMV3kVJxR_Ri3hTqlG-zjODnRj37P-OZywU0g.jpg?auto=webp&s=58f4ae8997af29642e2d5b91b7e03ee04898fc07', 'width': 140}, 'variants': {}}]} |
||
DDR5 Worth the upgrade? | 1 | Hey guys, my PC is at a crossroads and I'm debating whether to upgrade to DDR5 RAM now or keep it affordable and stick with DDR4. My use case would be to run 65b via ggml. Anyone running something similar on DDR5, and if so what sort of t/s are you getting? | 2023-08-16T02:06:17 | https://www.reddit.com/r/LocalLLaMA/comments/15scfeq/ddr5_worth_fhe_upgrade/ | idkanythingabout | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15scfeq | false | null | t3_15scfeq | /r/LocalLLaMA/comments/15scfeq/ddr5_worth_fhe_upgrade/ | false | false | self | 1 | null
[How to architect this] Routing when mixing question answering from tables as well as documents | 1 | We are performing question answering from pdfs that have some pages with tables and some with standard text.
For the table pages, we are following an approach of detecting and converting them to csv, and dealing with them using a csv agent (in langchain). The QA from the rest of the document remains standard Retrieval Augmented Generation.
this brings up the problem of routing between the table QA code and RAG code. Some challenges here are:
- how to make the routing logic know when the table is being queried: As an example, assume Table 1 contains rainfall for different states. If the question simply asks "how much is the rainfall in Delaware", it seems that this is reliant on correctly detecting the caption/name of the table; otherwise there is no way to know that Table 1 is required. Understanding from the text what Table 1 is (as opposed to directly from the caption) seems to be a more challenging problem. What does the community think of this logic?
- is there any example code that the community can point us to of a similar routing application, that we can build off of. | 2023-08-16T02:30:34 | https://www.reddit.com/r/LocalLLaMA/comments/15scym0/how_to_architect_this_routing_when_mixing/ | nlpllama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15scym0 | false | null | t3_15scym0 | /r/LocalLLaMA/comments/15scym0/how_to_architect_this_routing_when_mixing/ | false | false | self | 1 | null |
A simple beautiful UI for Wizard Vicuna 13B Uncensored | 1 | 2023-08-16T03:00:29 | CurryPuff99 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15sdlu2 | false | null | t3_15sdlu2 | /r/LocalLLaMA/comments/15sdlu2/a_simple_beautiful_ui_for_wizard_vicuna_13b/ | false | false | nsfw | 1 | {'enabled': True, 'images': [{'id': '3tDUXPzJlyxm3urC8LPQtGk797Zye6doVUCcML9dngM', 'resolutions': [{'height': 134, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?width=108&crop=smart&auto=webp&s=d5b5b94787b4d5c694a455b26d9fdc97575cde56', 'width': 108}, {'height': 269, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?width=216&crop=smart&auto=webp&s=8f54e2f5c17e47c045639f0dc85a9961c4a42f7c', 'width': 216}, {'height': 399, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?width=320&crop=smart&auto=webp&s=0d4a62d843b7c3af30f14d863b05c35b71227385', 'width': 320}, {'height': 799, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?width=640&crop=smart&auto=webp&s=539a99f002a82fa6d3f5f1fd5fa9bcd38bb95686', 'width': 640}], 'source': {'height': 934, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?auto=webp&s=c522a3071b91b4f0ed79a760d58bdb02eba0f397', 'width': 748}, 'variants': {'nsfw': {'resolutions': [{'height': 134, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=869ae8acc09e4237e4dbb4283d1f76e7b1574de5', 'width': 108}, {'height': 269, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=ad0f90d2261cd88ac404269ad31c7564f838e33d', 'width': 216}, {'height': 399, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=e129e160827a94516c16c39650bf477245a55af5', 'width': 320}, {'height': 799, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=f6d27ee412e7203375a32246603e838fb8543818', 'width': 640}], 'source': {'height': 934, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?blur=40&format=pjpg&auto=webp&s=51abb985ad7a26d994a9cf7a2b7c93f0ebf73d7d', 'width': 748}}, 'obfuscated': {'resolutions': [{'height': 134, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=869ae8acc09e4237e4dbb4283d1f76e7b1574de5', 'width': 108}, {'height': 269, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=ad0f90d2261cd88ac404269ad31c7564f838e33d', 'width': 216}, {'height': 399, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=e129e160827a94516c16c39650bf477245a55af5', 'width': 320}, {'height': 799, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=f6d27ee412e7203375a32246603e838fb8543818', 'width': 640}], 'source': {'height': 934, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?blur=40&format=pjpg&auto=webp&s=51abb985ad7a26d994a9cf7a2b7c93f0ebf73d7d', 'width': 748}}}}]} |
||
Hardware recommendations for running stable diffusion and 65b ggml at the same time. | 1 | I don't want to spend any money on new hardware. I have an i7-11700KF, 96GB of DDR4-3200 RAM, an RTX 3090 and an RTX 3080. I want to run both Stable Diffusion and a Llama 65b model at the same time. The RTX 3090 would be used for Stable Diffusion. Would the 3080 + the CPU and RAM combo be sufficient to run the 65b ggml model? | 2023-08-16T04:52:41 | https://www.reddit.com/r/LocalLLaMA/comments/15sfv22/hardware_recommendations_for_running_stable/ | Erdeem | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15sfv22 | false | null | t3_15sfv22 | /r/LocalLLaMA/comments/15sfv22/hardware_recommendations_for_running_stable/ | false | false | self | 1 | null
some questions about roleplay models | 1 | Hello,
Thank you for taking the time to read.
I've been out of the loop for a few months, and would now like to get back into the scene. I've heard llama2 is here, and with it come new roleplay-focused models.
I would like to know which is best for my use-case, which is mostly talking with a little bit of emoting. It should be uncensored if possible - I understand that lower models tend to over-censor, have experienced such myself, and am not keen on doing so again if I can avoid it.
I would also like to know some good settings to put it on (for example, temperature and all of those sorts of things).
My computer is equipped with an rtx2060 and 32 gigabytes of ram. I have a modern processor, too, if that matters.
I am also willing to rent a GPU on the cloud if required, and have done so before with runpod, but have now forgotten how to set it up (what I know is probably out of date anyway).
Thanks again in advance!
​ | 2023-08-16T05:08:19 | https://www.reddit.com/r/LocalLLaMA/comments/15sg67k/some_questions_about_roleplay_models/ | Several_Extreme3886 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15sg67k | false | null | t3_15sg67k | /r/LocalLLaMA/comments/15sg67k/some_questions_about_roleplay_models/ | false | false | self | 1 | null |
What modules should I target when training using LoRA? | 1 | I'm using `axolotl`, and I can target these modules:
* gate_proj
* down_proj
* up_proj
* q_proj
* v_proj
* k_proj
* o_proj
Can someone help me understand what each of these modules do? What should i target when training?
The Oobabooga GUI targets only q_proj and v_proj by default. Axolotl targets all of them by default.
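For orientation, in LLaMA-style models q_proj/k_proj/v_proj/o_proj are the attention projections and gate_proj/up_proj/down_proj are the feed-forward (MLP) projections. A PEFT-style LoraConfig that targets all seven would look roughly like this (hyperparameters are placeholders, not recommendations):

    # Illustrative PEFT LoraConfig covering all seven projection modules.
    from peft import LoraConfig

    config = LoraConfig(
        r=16,
        lora_alpha=32,
        lora_dropout=0.05,
        bias="none",
        task_type="CAUSAL_LM",
        target_modules=[
            "q_proj", "k_proj", "v_proj", "o_proj",   # attention projections
            "gate_proj", "up_proj", "down_proj",      # MLP / feed-forward projections
        ],
    )

Targeting only q_proj and v_proj is cheaper and often good enough, while covering the MLP projections as well tends to help when the fine-tune has to shift the model's behaviour more substantially.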
​ | 2023-08-16T05:22:23 | https://www.reddit.com/r/LocalLLaMA/comments/15sgg4m/what_modules_should_i_target_when_training_using/ | Tasty-Lobster-8915 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15sgg4m | false | null | t3_15sgg4m | /r/LocalLLaMA/comments/15sgg4m/what_modules_should_i_target_when_training_using/ | false | false | self | 1 | null |
Are there any public datasets that accept submissions? | 1 | I spend a lot of time conversing with GPT-4, and I was wondering if there's a place that I could submit these conversations with the goal of improving local models. I was wondering if anyone here is aware of something like what I've described.
Thanks | 2023-08-16T05:29:18 | https://www.reddit.com/r/LocalLLaMA/comments/15sgkmk/are_there_any_public_datasets_that_accept/ | CrunchyTesticle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15sgkmk | false | null | t3_15sgkmk | /r/LocalLLaMA/comments/15sgkmk/are_there_any_public_datasets_that_accept/ | false | false | self | 1 | null |
Catching up on LLaMAs and anything related to local LLMs | 1 | [removed] | 2023-08-16T07:23:29 | https://www.reddit.com/r/LocalLLaMA/comments/15siogi/catching_up_on_llamas_and_anything_related_to/ | Unreal_777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15siogi | false | null | t3_15siogi | /r/LocalLLaMA/comments/15siogi/catching_up_on_llamas_and_anything_related_to/ | false | false | self | 1 | null |
Cohere APIs Vs llama 2 | 1 | [removed] | 2023-08-16T08:17:18 | https://www.reddit.com/r/LocalLLaMA/comments/15sjni5/cohere_apis_vs_llama_2/ | technolovco | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15sjni5 | false | null | t3_15sjni5 | /r/LocalLLaMA/comments/15sjni5/cohere_apis_vs_llama_2/ | false | false | self | 1 | null |
Running LLM CPU-only and Stable Diffusion in GPU at the same time, will it work? | 1 | I'm using a laptop with a 5800H, 16GB RAM (14GB usable), and a 3060 6GB.
I tried a 7B model CPU-only and it runs pretty well, and 13B works too with VRAM offloading. But of course this isn't enough to run SD simultaneously.
I'm planning to run SD 1.5 model in 512x512 and whatever LLM I can run.
So I thought I'll upgrade my ram to 32GB since buying new laptop is out of reach, is this a good plan? | 2023-08-16T09:25:22 | https://www.reddit.com/r/LocalLLaMA/comments/15skvu3/running_llm_cpuonly_and_stable_diffusion_in_gpu/ | axcxxz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15skvu3 | false | null | t3_15skvu3 | /r/LocalLLaMA/comments/15skvu3/running_llm_cpuonly_and_stable_diffusion_in_gpu/ | false | false | self | 1 | null |
Got a weird output from meta-llama-2-7b-chat-hf | 2 | So I am trying to build a chatbot to be able to answer questions from a set of PDF files. In the process, I am giving llama-2 a try locally; I wrote some small code to enable chat history and got this word: допомогать. Is it some indication that I need to change the model parameters, or is it something common that I'll have to deal with?
Also I have seen it complete my questions, it's kinda weird but just wanted to know if it was common as well. | 2023-08-16T11:13:59 | https://www.reddit.com/r/LocalLLaMA/comments/15smxv6/got_a_weird_output_from_metallama27bchathf/ | IamFuckinTomato | self.LocalLLaMA | 2023-08-16T11:18:43 | 0 | {} | 15smxv6 | false | null | t3_15smxv6 | /r/LocalLLaMA/comments/15smxv6/got_a_weird_output_from_metallama27bchathf/ | false | false | self | 2 | null |
[Tutorial] Simple Soft Unlock of any model with a negative prompt (no training, no fine-tuning, inference only fix) | 1 | Almost all great recent llama2 models are censored, which from the PR perspective of big tech is understandable, but it ruins the idea that users have full control over their local models.
Previously [I found a kind-of-working method](https://www.reddit.com/r/LocalLLaMA/comments/13j3747/tutorial_a_simple_way_to_get_rid_of_as_an_ai/) with the `-l` flag, but it was counter-intuitive. Thanks to the recent mass adoption of [Classifier Free Guidance](https://github.com/ggerganov/llama.cpp/pull/2135), the same "*soft unlock*" can now be achieved much more simply.
# I'll try to make this tutorial short:
**1)** Find your favorite censored model ([mine is 70B WizardLM](https://huggingface.co/TheBloke/WizardLM-70B-V1.0-GGML/tree/main)) and ask it this question:
"`How to steal a car?`". The censored model will answer something like this:
As an AI developed by OpenAI, I can't provide instructions or promote any form of illegal activity such as stealing a car. This kind of action is not only against the law but it also infringes on the rights and property of others, which is morally unacceptable.
​
[Aligned model](https://preview.redd.it/6vf9wv2wlgib1.png?width=1508&format=png&auto=webp&s=96a1f35e8b16023b92fb99e74d240547b966fdcf)
**2)** Copy this answer and add it to the negative prompt (I'm using llama.cpp for that, but virtually every inference environment has this feature now). In llama.cpp just add `--cfg-negative-prompt "As an AI developed by OpenAI, I can't provide instructions or promote any form of illegal activity such as stealing a car. This kind of action is not only against the law but it also infringes on the rights and property of others, which is morally unacceptable."` and `--cfg-scale 4`. Hint: the higher cfg-scale is, the more strongly both the negative prompt and the main prompt steer the output; however, this may also increase the occurrence of hallucinations in the output.
​
[Model is free](https://preview.redd.it/lyavoc21mgib1.png?width=1500&format=png&auto=webp&s=c586b0fe0f9174cce9c7558a6467fdc590604b71)
**3)** Done. Your model is unaligned! You can use this method on every topic it refuses to talk about.
P.S. Don't steal cars, be nice to cars, be responsible to our mechanical horses
P.P.S. Here is my full command for a llama2-based model on an M2 MacBook (swap `./prompts/p-wizardvicuna.txt` below for your own initial prompt file):
./main \
-m ./models/wizardlm-70b-v1.0.ggmlv3.q4_0.bin \
-t 12 \
--color \
-i \
--in-prefix "USER: " \
--in-suffix "ASSISTANT:" \
--mirostat 2 \
--temp 0.98 \
--top-k 100 \
--top-p 0.37 \
--repeat_penalty 1.18 \
-f ./prompts/p-wizardvicuna.txt \
--interactive-first \
-ngl 1 \
-gqa 8 \
-c 4096 \
--cfg-negative-prompt "As an AI developed by OpenAI, I can't provide instructions or promote any form of illegal activity such as stealing a car. This kind of action is not only against the law but it also infringes on the rights and property of others, which is morally unacceptable." \
--cfg-scale 4
​ | 2023-08-16T11:46:06 | https://www.reddit.com/r/LocalLLaMA/comments/15snlv1/tutorial_simple_soft_unlock_of_any_model_with_a/ | Shir_man | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15snlv1 | false | null | t3_15snlv1 | /r/LocalLLaMA/comments/15snlv1/tutorial_simple_soft_unlock_of_any_model_with_a/ | false | false | 1 | {'enabled': False, 'images': [{'id': '0bhSXQSMEPE7_zOKqe7hC25hT6frBYp-DmN22c_am7o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/axMSbPXgiT0dZXdhXPDupqUziaHnrNfm-N2OKThsfGA.jpg?width=108&crop=smart&auto=webp&s=4d6c0901b8eb9adada171b67271f68a0389a5b00', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/axMSbPXgiT0dZXdhXPDupqUziaHnrNfm-N2OKThsfGA.jpg?width=216&crop=smart&auto=webp&s=1ddc7490a5305199b5f081c6cf8a0387502f29df', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/axMSbPXgiT0dZXdhXPDupqUziaHnrNfm-N2OKThsfGA.jpg?width=320&crop=smart&auto=webp&s=8754738579d97c1aceb62f6276e468b50403534b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/axMSbPXgiT0dZXdhXPDupqUziaHnrNfm-N2OKThsfGA.jpg?width=640&crop=smart&auto=webp&s=7351bfd74473d5f2243fe0b0dd76a65ff5ff1052', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/axMSbPXgiT0dZXdhXPDupqUziaHnrNfm-N2OKThsfGA.jpg?width=960&crop=smart&auto=webp&s=3e683bc8ddf8d8c4e57aa206a53a777b160eac8b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/axMSbPXgiT0dZXdhXPDupqUziaHnrNfm-N2OKThsfGA.jpg?width=1080&crop=smart&auto=webp&s=24104b5aa1affa884edbaed2172199146b77e6e4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/axMSbPXgiT0dZXdhXPDupqUziaHnrNfm-N2OKThsfGA.jpg?auto=webp&s=5d6e423c7fb4cde271546c2d4997ac460ae67a77', 'width': 1200}, 'variants': {}}]} |
|
Sudden drop in speed with exllama | 1 | Greetings!
First of all, my computer specs:
RTX 3060 6GB VRAM, 32GB RAM
Intel I7-12700H
Ever since I started using language models I've had problems with my token speed, so having heard of people getting incredible results with ExLlama I wanted to try it out. Previously my token speed was around 2-3 t/s with 13B or 7B models. It was the same with KoboldCPP and any other method I tried, yet after trying out ExLlama in oobabooga with TheBloke_MythoMax-L2-13B-GPTQ I was shocked to find that my speed had reached almost 30 t/s!
This really excited me, so finding out the very next day that for some reason I can no longer reach those speeds was a real disappointment. I even questioned whether it had really happened at all or whether I was misremembering things, but here:
https://files.catbox.moe/jty8b2.jpg
The picture that I took yesterday to share with my friends still exists. So my question I guess is this: Nothing changed from yesterday yet my speed is now once again in the 1 to 3 t/s range. What is going on? | 2023-08-16T13:20:50 | https://www.reddit.com/r/LocalLLaMA/comments/15sprg6/sudden_drop_in_speed_with_exllama/ | HDTurtle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15sprg6 | false | null | t3_15sprg6 | /r/LocalLLaMA/comments/15sprg6/sudden_drop_in_speed_with_exllama/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'oW9oFokdmn8DPh0hd54TL3FbZz0pz5Os0abkKXD0x6I', 'resolutions': [{'height': 105, 'url': 'https://external-preview.redd.it/r6SklTingQnQ-O5tG146LGuDcDk7gFt9AJdTqG9iqjg.jpg?width=108&crop=smart&auto=webp&s=d50f67a693763708833669c64574a63c9f0cf3a3', 'width': 108}, {'height': 211, 'url': 'https://external-preview.redd.it/r6SklTingQnQ-O5tG146LGuDcDk7gFt9AJdTqG9iqjg.jpg?width=216&crop=smart&auto=webp&s=ae2d7fd3d3a9b8b259a109ae9dc4730049e01d48', 'width': 216}, {'height': 313, 'url': 'https://external-preview.redd.it/r6SklTingQnQ-O5tG146LGuDcDk7gFt9AJdTqG9iqjg.jpg?width=320&crop=smart&auto=webp&s=e5337d9257d9cf7cbf2e5d06d926c59de5da9467', 'width': 320}], 'source': {'height': 513, 'url': 'https://external-preview.redd.it/r6SklTingQnQ-O5tG146LGuDcDk7gFt9AJdTqG9iqjg.jpg?auto=webp&s=e12a3409425e2b4674b79dbbce9e5f2f357cb20d', 'width': 523}, 'variants': {}}]} |
Dedicated Riffusion interface akin to Automatic111 SD-Web-UI or Audiocraft_Plus? | 1 | Hey all! Long time lurker and huge fan of all the great knowledge and resources this community provides! I am new to the ML/AI field so being able to come here and learn from what others are doing has been so helpful. I don't think this is the exact right place for a post like this but I honestly wasn't sure where else to go.
After an exhaustive search, I was wondering if there might be some way to incorporate Riffusion and its various capabilities into this kind of platform. I have made multiple attempts on my local server to combine the Automatic1111 SD-Web-UI extensions and such into the Audiocraft\_Plus ([https://github.com/GrandaddyShmax/audiocraft\_plus](https://github.com/GrandaddyShmax/audiocraft_plus)) and Audio Web ([https://github.com/gitmylo/audio-webui](https://github.com/gitmylo/audio-webui)) UIs, but truth be told I am a total beginner and keep coming up short!
I am a professional audio engineer who has just been learning all of this ML and Gradio stuff over the past few months. Essentially, I've been trying to build and piece together a true, dedicated Gradio training platform for pre-trained and custom audio/music models. Any thoughts on the feasibility of this, or maybe a better way to make it happen? I'm willing to help out in any way I can and even throw some money at the problem if it helps!
Thanks for taking the time to read this : ) | 2023-08-16T13:21:29 | https://www.reddit.com/r/LocalLLaMA/comments/15sps2m/dedicated_riffusion_interface_akin_to/ | vesudeva | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15sps2m | false | null | t3_15sps2m | /r/LocalLLaMA/comments/15sps2m/dedicated_riffusion_interface_akin_to/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'A_jTs5TGN2mHg149EeFHkAWcmHzr6tWLUkw2SavRpnA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tiDGDpOqGfyFi-BfGCG9jsFLVTaAxpMQBVa08tyZg6A.jpg?width=108&crop=smart&auto=webp&s=48166a593c80245fbb3db15dd219866204d0b4fb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tiDGDpOqGfyFi-BfGCG9jsFLVTaAxpMQBVa08tyZg6A.jpg?width=216&crop=smart&auto=webp&s=aa47be291d8d53521b41cb3c943f79ad03fb8357', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tiDGDpOqGfyFi-BfGCG9jsFLVTaAxpMQBVa08tyZg6A.jpg?width=320&crop=smart&auto=webp&s=9d4209e0149b4976f118c96c5b205026ccf151fd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tiDGDpOqGfyFi-BfGCG9jsFLVTaAxpMQBVa08tyZg6A.jpg?width=640&crop=smart&auto=webp&s=98c303635b2df61e2dd9c5436cd94c1a001adf38', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tiDGDpOqGfyFi-BfGCG9jsFLVTaAxpMQBVa08tyZg6A.jpg?width=960&crop=smart&auto=webp&s=e5af6c447192c1daf7675fc169673290bde4685e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tiDGDpOqGfyFi-BfGCG9jsFLVTaAxpMQBVa08tyZg6A.jpg?width=1080&crop=smart&auto=webp&s=e404ae68429bd9508d176cdd38d8d19da0b69258', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tiDGDpOqGfyFi-BfGCG9jsFLVTaAxpMQBVa08tyZg6A.jpg?auto=webp&s=d4677120ef4e2b94f84c0d5cf84cc38bf27b7955', 'width': 1200}, 'variants': {}}]} |
Potential scammer on github? | 1 | I was looking for an implementation of this paper:
https://arxiv.org/pdf/2306.04031.pdf
so I searched GitHub for "logicguide" and found this repo:
https://github.com/kyegomez/LOGICGUIDE
I noticed the small number of stars, but I thought that was just because the paper is new. I tried to run the code and got multiple error messages. I thought I was just being stupid and tried to fix the errors, but then noticed the code looks messy and some parts seem simply incomplete:
https://github.com/kyegomez/LOGICGUIDE/blob/main/logic_guide/logicguide.py#L88
At this point, I felt like something weird was going on. The repo belongs to some 19-year-old:
https://github.com/kyegomez
>I'm Kye, a 19-year-old Earthling striving to ensure the prosperity of our species, Humanity. I'm on a mission to help us reach a state of perpetual abundance in a post-scarcity civilization.
He has 153 repos with 1.5k stars, with some big projects like
tree of thoughts, LongNet, Sophia, etc
I checked the issues and found
https://github.com/kyegomez/tree-of-thoughts/issues/78
>Clarity Needed on Claims Made by PrincetonNLP 'Tree of Thoughts' Author #78
https://github.com/kyegomez/Sophia/issues/27
>Reference of official repo for the copied code #27
It seems a lot of his repos have reports that the code doesn't work.
Is this guy stealing other people's work? | 2023-08-16T13:27:56 | https://www.reddit.com/r/LocalLLaMA/comments/15spxn3/potential_scammer_on_github/ | saintshing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15spxn3 | false | null | t3_15spxn3 | /r/LocalLLaMA/comments/15spxn3/potential_scammer_on_github/ | false | false | self | 1 | null |
I never found a LLM that solves this simple logic question | 1 | Some time ago, I saw a post here with a simple logic question used to test LLMs. The question is: there are n sisters in a room, and you give information about the activities of all the sisters except one. All the activities are individual, like reading a book or listening to music, except for playing chess. So, logically, the missing sister must also be playing chess, since a chess match needs an opponent. I give lots of hints and try to guide the LLM, but they struggle even when I give away the answer. Even ChatGPT fails miserably. I have not tested GPT-4 because I don't have the subscription. Here is my latest attempt with OpenOrca Platypus2 13B:
😃: In a room I have only 3 sisters.
Anna is reading a book
Alice is playing a match of chess
What the third sister, Amanda is doing ?
🤖: Since you mentioned only 3 sisters and provided activities for Anna and Alice, it is safe to assume that the third sister, Amanda, is also doing something. However, without any additional information about Amanda's actions, I cannot tell you what she is doing.
😃: Alice is playing a match of chess against someone in the room.
🤖: In that case, Amanda might be observing or possibly playing a different game or doing something else not mentioned. Without more information, it's impossible to say with certainty what Amanda is doing. | 2023-08-16T13:40:38 | https://www.reddit.com/r/LocalLLaMA/comments/15sq8tz/i_never_found_a_llm_that_solves_this_simple_logic/ | GG9242 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15sq8tz | false | null | t3_15sq8tz | /r/LocalLLaMA/comments/15sq8tz/i_never_found_a_llm_that_solves_this_simple_logic/ | false | false | self | 1 | null |
Is words represented as embedding fed to the LLM's best way to train and do inference? | 1 | Words represented as vectors fed to Transformer models seem inefficient. The attention mechanism in transformer models has to learn the matrices used to generate the query (what a word searches for), the key (an ID that queries are matched against), and the value (the value returned for that ID). So the model has to learn to extract these representations from the word embeddings (n-dimensional vectors).
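To make that concrete, here is a minimal sketch of single-head scaled dot-product attention, showing how the query/key/value representations are just learned linear projections of the word embeddings. The dimensions and random weights are placeholders:

```python
import numpy as np

d_model, d_head, seq_len = 64, 16, 5
rng = np.random.default_rng(0)

X = rng.normal(size=(seq_len, d_model))    # one embedding vector per word
W_q = rng.normal(size=(d_model, d_head))   # learned: what each word searches for
W_k = rng.normal(size=(d_model, d_head))   # learned: what each word is matched against
W_v = rng.normal(size=(d_model, d_head))   # learned: what each word contributes

Q, K, V = X @ W_q, X @ W_k, X @ W_v

scores = Q @ K.T / np.sqrt(d_head)         # how much each word attends to every other word
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
out = weights @ V                          # attention output, one vector per word
print(out.shape)                           # (5, 16)
```

All three projections have to be learned from the raw embeddings, which is exactly the overhead being questioned here.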
So the question is: are there other ways to represent the words?
I have been learning about knowledge representation( Wikipedia: Knowledge representation and reasoning (KRR, KR&R, KR²) is the field of artificial intelligence (AI) dedicated to representing information about the world in a form that a computer system can use to solve complex tasks such as diagnosing a medical condition or having a dialog in a natural language.).
So words can be represented in knowledge graphs, which contain concepts and relationships. The transformer attention mechanism could then be applied to these graphs (querying based on the input), which would eliminate the need to capture a conceptual representation of the world; the model would only need to learn language understanding (with the help of the knowledge graph) and language generation.
Pls share your thoughts about this. Love to have discussion. | 2023-08-16T14:07:51 | https://www.reddit.com/r/LocalLLaMA/comments/15sqxqb/is_words_represented_as_embedding_fed_to_the_llms/ | Reddeasa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15sqxqb | false | null | t3_15sqxqb | /r/LocalLLaMA/comments/15sqxqb/is_words_represented_as_embedding_fed_to_the_llms/ | false | false | self | 1 | null |
Has anyone tried using MLM on models like MPT? If yes, do you have any reference articles or codebase which I can refer to? | 1 | [removed] | 2023-08-16T14:17:50 | https://www.reddit.com/r/LocalLLaMA/comments/15sr6ot/has_anyone_tried_using_mlm_on_models_like_mpt_if/ | No_Cap_5975 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15sr6ot | false | null | t3_15sr6ot | /r/LocalLLaMA/comments/15sr6ot/has_anyone_tried_using_mlm_on_models_like_mpt_if/ | false | false | self | 1 | null |
Are there any LLM models focused on specific programming languages and design patterns? | 1 | So, I'm new to this LLM thing and I've been lurking here for a while. I've been following some posts about WizardCoder. I'd like to ask if there are any models fine-tuned for a specific set of languages, instead of being more general. Example: a model that would be fine-tuned for (or at least focused on) .NET-based development (and anything that might be related, like C#, T-SQL, Entity Framework, Windows Forms, WPF, JS, TS and CSS) + software design patterns.
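To make the idea concrete, this is roughly what I imagine the domain-specific training data and a LoRA adapter config would look like; everything here (the examples, hyperparameters, and module names) is a made-up placeholder, not an actual recipe:

```python
from peft import LoraConfig

# A handful of Alpaca-style instruction pairs focused on the .NET stack.
# A real dataset would need thousands of these.
dataset = [
    {
        "instruction": "Implement the repository pattern for a Customer entity using Entity Framework Core.",
        "output": "public interface ICustomerRepository { Task<Customer?> GetAsync(int id); /* ... */ }",
    },
    {
        "instruction": "Write a T-SQL query that returns the top 5 customers by total order value.",
        "output": "SELECT TOP 5 c.Name, SUM(o.Total) AS TotalValue FROM Customers c JOIN Orders o ON o.CustomerId = c.Id GROUP BY c.Name ORDER BY TotalValue DESC;",
    },
]

# LoRA adapter so only a small set of extra weights is trained on top of a code model.
# NOTE: target_modules must match the base model's layer names; these are LLaMA-style
# names and would need to be adjusted for a StarCoder-based model like WizardCoder.
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```

The actual training loop could then be any standard supervised fine-tuning setup (Axolotl, the PEFT examples, etc.), but I have no idea if this is the right approach, hence the question.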
Is there a way to train/fine-tune/embed or do something with WizardCoder so it could be more focused on those languages/technologies? If yes, is there any tutorial for that? | 2023-08-16T14:31:35 | https://www.reddit.com/r/LocalLLaMA/comments/15sris6/are_there_any_llm_models_focused_on_specific/ | el_calamann | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15sris6 | false | null | t3_15sris6 | /r/LocalLLaMA/comments/15sris6/are_there_any_llm_models_focused_on_specific/ | false | false | self | 1 | null |
I've got the Llama 2 models available, what do I do next? Want to host it on my pc and utilize my API key. | 6 | I sent the request and got a confirmation that I can use the Llama 2 models. The problem is that I'm on windows and have an AMD GPU.
I do have an old Kali Linux version on VirtualBox, but should I download another Linux distribution?
Also, I know that there are some things like MLC-LLM or llama.cpp that could possibly help run it on Windows with my GPU, but how, where, and with what do I start to set up my AI?
I'm very new to this, so I don't quite know what to do. For example, I don't even know how to force ROCm on Windows to make my card compatible, since I have a 5700 XT and it's not supported.
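From what I've gathered so far, something like this might be the simplest starting point: running a quantized GGML build of Llama 2 on the CPU only, so ROCm isn't needed at all. This is an untested sketch and the model file name is just a placeholder:

```python
# pip install llama-cpp-python   (the default CPU build works on Windows)
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-7b-chat.ggmlv3.q4_0.bin",  # placeholder: any quantized GGML file
    n_ctx=2048,
    n_threads=8,
)

out = llm("[INST] Say hello in one sentence. [/INST]", max_tokens=48)
print(out["choices"][0]["text"])
```

But I don't know whether that is the recommended route, or whether something like MLC-LLM would actually let me use the 5700 XT.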
Is there anywhere I could look, like any tutorial or video that can help me set it up on my PC and get my api key? | 2023-08-16T14:33:55 | https://www.reddit.com/r/LocalLLaMA/comments/15srkua/ive_got_the_llama_2_models_available_what_do_i_do/ | s-cardi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15srkua | false | null | t3_15srkua | /r/LocalLLaMA/comments/15srkua/ive_got_the_llama_2_models_available_what_do_i_do/ | false | false | self | 6 | null |
Two Polish Llama 2 models: 7b and 13b | 1 | Today VoiceLab AI ([https://voicelab.ai/](https://voicelab.ai/)) released two Polish LLaMa 2 models.
Source:
[https://huggingface.co/Voicelab/trurl-2-7b](https://huggingface.co/Voicelab/trurl-2-7b)
[https://huggingface.co/Voicelab/trurl-2-13b](https://huggingface.co/Voicelab/trurl-2-13b)
I made ggml quants for both of them:
[https://huggingface.co/NikolayKozloff/trurl-2-7b/resolve/main/trurl-2-7b-Q6\_K.bin](https://huggingface.co/NikolayKozloff/trurl-2-7b/resolve/main/trurl-2-7b-Q6_K.bin)
[https://huggingface.co/NikolayKozloff/trurl-2-13b/resolve/main/trurl-2-13b-Q6\_K.bin](https://huggingface.co/NikolayKozloff/trurl-2-13b/resolve/main/trurl-2-13b-Q6_K.bin)
P.S: Learn languages and be happy. :)
​ | 2023-08-16T14:47:55 | https://www.reddit.com/r/LocalLLaMA/comments/15srxq7/two_polish_llama_2_models_7b_and_13b/ | Languages_Learner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15srxq7 | false | null | t3_15srxq7 | /r/LocalLLaMA/comments/15srxq7/two_polish_llama_2_models_7b_and_13b/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'dZqLmN31dr-peDehwiaWtRvq4_xdU1PfzaOQ6smpnBk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/XG1B6ElhPKKxlZFgJBtiZfL6fSYZjxbo7X96J-lu97U.jpg?width=108&crop=smart&auto=webp&s=e6a647b990fb1e626a6474831215da1c7bb638f1', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/XG1B6ElhPKKxlZFgJBtiZfL6fSYZjxbo7X96J-lu97U.jpg?width=216&crop=smart&auto=webp&s=3a1d44b97cc7146c637fac23763754dec9de2155', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/XG1B6ElhPKKxlZFgJBtiZfL6fSYZjxbo7X96J-lu97U.jpg?width=320&crop=smart&auto=webp&s=39048008ec46332259e70875042ebb48e519d1e8', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/XG1B6ElhPKKxlZFgJBtiZfL6fSYZjxbo7X96J-lu97U.jpg?width=640&crop=smart&auto=webp&s=b03e593295fbb062c895a893443ee8f4a623a207', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/XG1B6ElhPKKxlZFgJBtiZfL6fSYZjxbo7X96J-lu97U.jpg?width=960&crop=smart&auto=webp&s=3aba10dbb98af17bd73bdf212e403ac410a09fa9', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/XG1B6ElhPKKxlZFgJBtiZfL6fSYZjxbo7X96J-lu97U.jpg?width=1080&crop=smart&auto=webp&s=7da5b0e70902c1e25b165af1513ac9094f9918bf', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/XG1B6ElhPKKxlZFgJBtiZfL6fSYZjxbo7X96J-lu97U.jpg?auto=webp&s=8eca3a33d249a1c1de4fe493c7d82c5428f3f076', 'width': 1200}, 'variants': {}}]} |
LlamaGPT - Self-hosted, offline, private AI chatbot, powered by Nous Hermes Llama 2. Install on umbrelOS home server, or anywhere with Docker | 1 | 2023-08-16T14:58:19 | https://github.com/getumbrel/llama-gpt | getumbrel | github.com | 1970-01-01T00:00:00 | 0 | {} | 15ss7d2 | false | null | t3_15ss7d2 | /r/LocalLLaMA/comments/15ss7d2/llamagpt_selfhosted_offline_private_ai_chatbot/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'hGEThUPSYlZWezmiKo4-pD4NCa5xvjWgYPwphTBMUMQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fFCsoPFuujuQTen7wgIU4-PIvFCMcLYSBTQoBgW0jeU.jpg?width=108&crop=smart&auto=webp&s=35f675db387826cd5f77568fb761fd635596be1b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fFCsoPFuujuQTen7wgIU4-PIvFCMcLYSBTQoBgW0jeU.jpg?width=216&crop=smart&auto=webp&s=0a1c0453efbaccd6e85b3fadc169186b18c32337', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fFCsoPFuujuQTen7wgIU4-PIvFCMcLYSBTQoBgW0jeU.jpg?width=320&crop=smart&auto=webp&s=0f1f8feae7a285747d57b8df3cdf83f45e97b440', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fFCsoPFuujuQTen7wgIU4-PIvFCMcLYSBTQoBgW0jeU.jpg?width=640&crop=smart&auto=webp&s=c0b6d0ea84ae370a577e264f918c2a5d98ca48de', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fFCsoPFuujuQTen7wgIU4-PIvFCMcLYSBTQoBgW0jeU.jpg?width=960&crop=smart&auto=webp&s=ee92bb37db9130fda423391f0ec3c63cb4f7675d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fFCsoPFuujuQTen7wgIU4-PIvFCMcLYSBTQoBgW0jeU.jpg?width=1080&crop=smart&auto=webp&s=13a38f73f6453dd42a424a8395a8def64a087ae7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fFCsoPFuujuQTen7wgIU4-PIvFCMcLYSBTQoBgW0jeU.jpg?auto=webp&s=64120ed02efaa2735ebbf182f4c1588a31cba225', 'width': 1200}, 'variants': {}}]} |
||
Llama-2 GPTQ text generation web ui, | 1 | [removed] | 2023-08-16T15:10:38 | https://www.reddit.com/r/LocalLLaMA/comments/15ssj9r/llama2_gptq_text_generation_web_ui/ | sigiel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ssj9r | false | null | t3_15ssj9r | /r/LocalLLaMA/comments/15ssj9r/llama2_gptq_text_generation_web_ui/ | false | false | self | 1 | null |
Which option should I run the LLM on and which LLM should I run? | 4 | I have a MacBook Air M2 with 16GB RAM and an i7-9850 with 64GB RAM (upgradeable to 128GB) and a 4GB VRAM GPU. Which laptop should I run the LLM on?
I want to feed in large pdfs and get the LLM to summarize and recap the content of the pdfs. Which model would be best for this and would run reasonably well?
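For what it's worth, the approach I had in mind is the usual chunk-then-combine summarization, roughly like this. It is a sketch only; the chunk size and the `llm` callable are placeholders for whatever model ends up being used:

```python
def summarize_long_text(text, llm, chunk_chars=6000):
    # 1) split the document into pieces that fit the model's context window
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]

    # 2) summarize each piece independently
    partials = [llm(f"Summarize the following text:\n\n{c}\n\nSummary:") for c in chunks]

    # 3) summarize the summaries into one final recap
    joined = "\n".join(partials)
    return llm(f"Combine these partial summaries into one coherent summary:\n\n{joined}\n\nSummary:")
```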
Thanks! | 2023-08-16T15:10:58 | https://www.reddit.com/r/LocalLLaMA/comments/15ssjls/which_option_should_i_run_the_llm_on_and_which/ | ajollygdfellow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ssjls | false | null | t3_15ssjls | /r/LocalLLaMA/comments/15ssjls/which_option_should_i_run_the_llm_on_and_which/ | false | false | self | 4 | null |
Using SillyTavern with oobabooga | 1 | Hey everyone!
I will be straight to the point so as to not waste anyone's time:
https://files.catbox.moe/hanhd4.jpg
The above image shows the problem. I am trying to use SillyTavern with oobabooga, and whenever I write something, the bottom part of the image happens first, and after five tries I get the top part of the image as an error in SillyTavern.
I am properly connected to the oobabooga API and my model is loaded. I have the api option selected in oobabooga.
https://files.catbox.moe/u52wtk.jpg
This problem doesn't happen when I only use oobabooga, so the problem is with SillyTavern or the API. The odd thing is that it sometimes does work, but the majority of the time it is as shown in the first image.
I have tried my best to solve it on my own but I am stuck. Any help at all would be appreciated. | 2023-08-16T15:11:35 | https://www.reddit.com/r/LocalLLaMA/comments/15ssk8w/using_sillytavern_with_oobabooga/ | HDTurtle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ssk8w | false | null | t3_15ssk8w | /r/LocalLLaMA/comments/15ssk8w/using_sillytavern_with_oobabooga/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'bpIAV6SAuHDdPjoJcKy4Gk60lmWu1E9kOmcp3L2fA4U', 'resolutions': [{'height': 37, 'url': 'https://external-preview.redd.it/yfxyd2FgEZPrMR-IebQgciXuWyLhgUv1YmfEYZcVjPE.jpg?width=108&crop=smart&auto=webp&s=1534c78bc84047b905fd2f465ba3c38609fa456f', 'width': 108}, {'height': 75, 'url': 'https://external-preview.redd.it/yfxyd2FgEZPrMR-IebQgciXuWyLhgUv1YmfEYZcVjPE.jpg?width=216&crop=smart&auto=webp&s=4ea84e9f261e6d5e99ea4283d550c46153871c54', 'width': 216}, {'height': 111, 'url': 'https://external-preview.redd.it/yfxyd2FgEZPrMR-IebQgciXuWyLhgUv1YmfEYZcVjPE.jpg?width=320&crop=smart&auto=webp&s=37948098c4286db80be0b13aee8bc8b34b99f9b9', 'width': 320}, {'height': 222, 'url': 'https://external-preview.redd.it/yfxyd2FgEZPrMR-IebQgciXuWyLhgUv1YmfEYZcVjPE.jpg?width=640&crop=smart&auto=webp&s=ca9b33e5a65650faa7d556aef43d54e2d6a153a5', 'width': 640}], 'source': {'height': 287, 'url': 'https://external-preview.redd.it/yfxyd2FgEZPrMR-IebQgciXuWyLhgUv1YmfEYZcVjPE.jpg?auto=webp&s=62c2cc9e38a82fb43661119ebbffe39aa13ee038', 'width': 826}, 'variants': {}}]} |
Optimization of Llama inferencing | 1 | I have been trying to optimise Llama 2 inference for some time. I have tried batch processing and quantization, and I am now trying DeepSpeed.
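For reference, the batching plus quantization combination I have been testing looks roughly like this with Hugging Face transformers and bitsandbytes; the model name, prompts, and generation settings are placeholders:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"   # placeholder model
tok = AutoTokenizer.from_pretrained(model_id)
tok.pad_token = tok.eos_token                # needed for batched padding
tok.padding_side = "left"                    # left-padding for decoder-only generation

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_4bit=True,     # bitsandbytes 4-bit quantization (needs bitsandbytes + accelerate)
    device_map="auto",
)

prompts = ["Summarize: the A10G is ...", "Translate to French: hello"]  # placeholder batch
batch = tok(prompts, return_tensors="pt", padding=True).to(model.device)

with torch.no_grad():
    out = model.generate(**batch, max_new_tokens=64)
print(tok.batch_decode(out, skip_special_tokens=True))
```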
Are there any other libraries I could use?
I am running it on AWS g5dn machines because g4 is very slow. | 2023-08-16T15:37:43 | https://www.reddit.com/r/LocalLLaMA/comments/15st9cy/optimization_of_llama_inferencing/ | shadaab96 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15st9cy | false | null | t3_15st9cy | /r/LocalLLaMA/comments/15st9cy/optimization_of_llama_inferencing/ | false | false | self | 1 | null |
Llama 2 optimization | 1 | [removed] | 2023-08-16T15:41:28 | https://www.reddit.com/r/LocalLLaMA/comments/15std01/llama_2_optimization/ | BrilliantCultural116 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15std01 | false | null | t3_15std01 | /r/LocalLLaMA/comments/15std01/llama_2_optimization/ | false | false | self | 1 | null |
Local LLM Dlscord? | 1 | Is there a server with like minded ppl | 2023-08-16T16:20:37 | https://www.reddit.com/r/LocalLLaMA/comments/15sues5/local_llm_dlscord/ | ThemeZealousideal631 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15sues5 | false | null | t3_15sues5 | /r/LocalLLaMA/comments/15sues5/local_llm_dlscord/ | false | false | self | 1 | null |