| Column | Type | Range / classes |
|---|---|---|
| title | string | lengths 1 to 300 |
| score | int64 | 0 to 8.54k |
| selftext | string | lengths 0 to 40k |
| created | timestamp[ns] | |
| url | string | lengths 0 to 780 |
| author | string | lengths 3 to 20 |
| domain | string | lengths 0 to 82 |
| edited | timestamp[ns] | |
| gilded | int64 | 0 to 2 |
| gildings | string | 7 classes |
| id | string | length 7 |
| locked | bool | 2 classes |
| media | string | lengths 646 to 1.8k |
| name | string | length 10 |
| permalink | string | lengths 33 to 82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | lengths 4 to 213 |
| ups | int64 | 0 to 8.54k |
| preview | string | lengths 301 to 5.01k |
Putting the pieces together for RAG?
1
[removed]
2023-08-10T07:05:30
https://www.reddit.com/r/LocalLLaMA/comments/15n4yso/putting_the_pieces_together_for_rag/
noellarkin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15n4yso
false
null
t3_15n4yso
/r/LocalLLaMA/comments/15n4yso/putting_the_pieces_together_for_rag/
false
false
self
1
{'enabled': False, 'images': [{'id': 'ozzoOOUo0GsTZQDFKQGLV_Hm-3-sAMBPGUdj-YGAXaU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/eKdlDfZLsUFa_rEq3sS4i1bIQWiKS6af16vu7A-M4fM.jpg?width=108&crop=smart&auto=webp&s=01abd3e1c16817d00a13b4aefae81016db13aa6d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/eKdlDfZLsUFa_rEq3sS4i1bIQWiKS6af16vu7A-M4fM.jpg?width=216&crop=smart&auto=webp&s=80098a7b6d0ab3b52bcff8841b1c97195818826e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/eKdlDfZLsUFa_rEq3sS4i1bIQWiKS6af16vu7A-M4fM.jpg?width=320&crop=smart&auto=webp&s=6287874f84b7432e9be49420fccd9bdc36e26a2b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/eKdlDfZLsUFa_rEq3sS4i1bIQWiKS6af16vu7A-M4fM.jpg?width=640&crop=smart&auto=webp&s=9543d5f1f15bd29dd86f3124e13d2a14c896185a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/eKdlDfZLsUFa_rEq3sS4i1bIQWiKS6af16vu7A-M4fM.jpg?width=960&crop=smart&auto=webp&s=13e07ecd94404eb99b26585e2886d42c371183af', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/eKdlDfZLsUFa_rEq3sS4i1bIQWiKS6af16vu7A-M4fM.jpg?width=1080&crop=smart&auto=webp&s=fc6c1f4554adbf733744434d61fc12f3be4f1a57', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/eKdlDfZLsUFa_rEq3sS4i1bIQWiKS6af16vu7A-M4fM.jpg?auto=webp&s=d91bb719706fadcf313dc369ff3d4f31373466d7', 'width': 1200}, 'variants': {}}]}
What determines the length of the response?
1
While the context size determines how much information can be considered while fashioning a response, what factors determine the length of the output?

- An (unsubstantiated) gut feeling is that models with larger context sizes should be able to produce longer responses?
- A [post](https://www.reddit.com/r/LocalLLaMA/comments/15mu7um/sillytaverns_roleplay_preset_vs_modelspecific/?ref=share&ref_source=link) finds that a slight variation of the prompt increases verbosity. Is there a general pattern/practice here that is worth trying for all models?
- This [post](https://www.reddit.com/r/LocalLLaMA/comments/15g5419/what_ive_learned_from_orcamini3bggmlv3q4_1_using/) finds that some models are susceptible to unfinished responses, even if they "seem" complete. Has this been noticed in general, and are there any recommendations for getting the model to furnish complete responses?
- Are there any tests/experiments that you feel would be interesting to try out to understand the verbosity limits of LLMs?
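Not from the thread itself, but as a hedged illustration of the decoding-side controls: in practice the response length is bounded by the generation settings (the `max_new_tokens` cap and the EOS token), not by the context window directly. A minimal sketch with Hugging Face transformers, where the checkpoint name is just an example:

```python
# Minimal sketch: the knobs that usually bound output length (example checkpoint assumed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # example/placeholder checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tok("Explain what limits the length of an LLM response.", return_tensors="pt").to(model.device)
out = model.generate(
    **inputs,
    max_new_tokens=256,             # hard cap on generated tokens
    eos_token_id=tok.eos_token_id,  # generation stops early when EOS is sampled
    do_sample=True,
    temperature=0.7,
)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Prompt wording mainly shifts how soon the model emits EOS; the hard ceiling is still `max_new_tokens`.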
2023-08-10T07:58:28
https://www.reddit.com/r/LocalLLaMA/comments/15n5vyz/what_determines_the_length_of_the_response/
T_hank
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15n5vyz
false
null
t3_15n5vyz
/r/LocalLLaMA/comments/15n5vyz/what_determines_the_length_of_the_response/
false
false
self
1
null
Optimizing models for LLM Leaderboard is a HUGE mistake
1
Because it presupposes that being a "good" model means ranking high on four relatively controversial benchmarking suites. Optimizing for the test set is a well-known mistake in Kaggle competitions. The leaderboard should be treated as a test set we use to see how models compare after release, not as a goal to chase with ensembled models built to rank higher. You can see this in almost any Kaggle competition: people can see limited eval-set results, but when their models face the full test set, the higher-ranking models tank in the final standings and some model from the middle of the pack wins. The problem is that higher-ranking models are often overfit to specific examples in the test set, and that does not necessarily mean the model will be perceived as good by humans. That's why models that claim to beat GPT-3.5 on eval suite X are often not as good as advertised when tested on real-world human prompts. Without a doubt, OpenAI has its own extensive eval suite, and even that wasn't enough, so they created the "openai/evals" repo to get free evaluation from the community. We need better evaluation metrics than the LLM leaderboard for the open-source community.
2023-08-10T08:24:48
https://www.reddit.com/r/LocalLLaMA/comments/15n6cmb/optimizing_models_for_llm_leaderboard_is_a_huge/
yiyecek
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15n6cmb
false
null
t3_15n6cmb
/r/LocalLLaMA/comments/15n6cmb/optimizing_models_for_llm_leaderboard_is_a_huge/
false
false
self
1
null
What is the best way to fine tune llama2 for X/tweet sentiment analysis?
3
While there are BERT-based models (e.g., [this one for twitter](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest)), I was wondering whether Llama 2 can be fine-tuned to provide more accurate sentiment analysis? If so, can you help guide a novice on how to go about this?
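One common route (not from the post, just a hedged sketch) is parameter-efficient fine-tuning with LoRA via the peft library, casting each tweet as a text-plus-label example. The dataset, checkpoint, and hyperparameters below are placeholders:

```python
# Hedged sketch: LoRA fine-tuning Llama 2 on labeled tweets (placeholder data and hyperparameters).
import torch
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-2-7b-hf"  # assumes you have access to the base weights
tok = AutoTokenizer.from_pretrained(model_id)
tok.pad_token = tok.eos_token

model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
))

def encode(ex):
    # Cast each tweet as "Tweet: ...\nSentiment: <label>" and train on the whole string.
    ids = tok(f"Tweet: {ex['text']}\nSentiment: {ex['label']}",
              truncation=True, max_length=128, padding="max_length")
    ids["labels"] = ids["input_ids"].copy()  # pad tokens left in labels for brevity
    return ids

train = Dataset.from_dict({
    "text": ["I love this phone", "Worst service ever"],  # stand-in for a real labeled tweet set
    "label": ["positive", "negative"],
}).map(encode, remove_columns=["text", "label"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama2-sentiment-lora", per_device_train_batch_size=2,
                           num_train_epochs=1, logging_steps=1),
    train_dataset=train,
).train()
```

Whether this beats the RoBERTa baseline depends heavily on label quality and volume; for pure 3-class sentiment, the smaller encoder model is often hard to beat.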
2023-08-10T08:49:57
https://www.reddit.com/r/LocalLLaMA/comments/15n6sgt/what_is_the_best_way_to_fine_tune_llama2_for/
sbs1799
self.LocalLLaMA
2023-08-10T09:44:11
0
{}
15n6sgt
false
null
t3_15n6sgt
/r/LocalLLaMA/comments/15n6sgt/what_is_the_best_way_to_fine_tune_llama2_for/
false
false
self
3
{'enabled': False, 'images': [{'id': 'ArlWYQviB_-tLNpMl6U7rDe8OGn86GN62b1DJBtybDk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gKS9gx0oKZdnygGyNyJkxboMlVq9S_SaVs7Pr00b4w4.jpg?width=108&crop=smart&auto=webp&s=830a41f7ded868cb0f1f029252d1ae2d9871eda3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gKS9gx0oKZdnygGyNyJkxboMlVq9S_SaVs7Pr00b4w4.jpg?width=216&crop=smart&auto=webp&s=2abbb1461e3b451083191545260db891894e5557', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gKS9gx0oKZdnygGyNyJkxboMlVq9S_SaVs7Pr00b4w4.jpg?width=320&crop=smart&auto=webp&s=f0333448132cff266a52926b4bf31ad3dc8f097a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gKS9gx0oKZdnygGyNyJkxboMlVq9S_SaVs7Pr00b4w4.jpg?width=640&crop=smart&auto=webp&s=e8e32e5edfc032544cb866e0960c922b087cbd11', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gKS9gx0oKZdnygGyNyJkxboMlVq9S_SaVs7Pr00b4w4.jpg?width=960&crop=smart&auto=webp&s=e14e83dde369a57d62b44688cf0caee00f1b0706', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gKS9gx0oKZdnygGyNyJkxboMlVq9S_SaVs7Pr00b4w4.jpg?width=1080&crop=smart&auto=webp&s=aea0b286447212126101824b07a9209e381c7fb6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gKS9gx0oKZdnygGyNyJkxboMlVq9S_SaVs7Pr00b4w4.jpg?auto=webp&s=25fd87897661a5bbff40927d7e5d3b9266fdb6d2', 'width': 1200}, 'variants': {}}]}
How can I train Llama 2?
1
[removed]
2023-08-10T09:07:29
https://www.reddit.com/r/LocalLLaMA/comments/15n743e/how_i_can_train_llama_2/
Agreeable_Fun7280
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15n743e
false
null
t3_15n743e
/r/LocalLLaMA/comments/15n743e/how_i_can_train_llama_2/
false
false
self
1
null
Is there a way to read the best papers/blogs/research on LLama?
1
I am trying to get into a habit of reading a paper every couple days and was wondering if there is any curated website or source that keeps track of latest papers coming out. So far I have just been using twitter to try to follow people who are working in this area.
2023-08-10T09:22:16
https://www.reddit.com/r/LocalLLaMA/comments/15n7do9/is_there_a_way_to_read_the_best/
mohit_habeeb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15n7do9
false
null
t3_15n7do9
/r/LocalLLaMA/comments/15n7do9/is_there_a_way_to_read_the_best/
false
false
self
1
null
Using prompt templates after LoRA on raw text
6
Hello all, I'm a little overwhelmed with all the developments and I feel like I don't know where to begin. So I apologise if this question sounds very basic. Let's say I want my LLM to sound like Phoebe Buffay from Friends. I don't have a QnA format, but just raw text for this purpose. As I understand, I can perform LoRA using the WebUI. Once my fine tuned model is ready, I want to use this to be able to converse with the user using specific prompts. My question is, can I feed this fine tuned model to LangChain so I can use their prompt template successfully? Or are there alternatives? Or can I do all of this using HuggingFace? Sorry, I'm very lost and I can't seem to understand if the finetuned models can be used by other frameworks.
2023-08-10T10:02:54
https://www.reddit.com/r/LocalLLaMA/comments/15n84m8/using_prompt_templates_after_lora_on_raw_text/
betib25
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15n84m8
false
null
t3_15n84m8
/r/LocalLLaMA/comments/15n84m8/using_prompt_templates_after_lora_on_raw_text/
false
false
self
6
null
What is needed to fine-tune LLaMa 2 70B without quantisation?
1
[removed]
2023-08-10T11:12:42
https://www.reddit.com/r/LocalLLaMA/comments/15n9hx7/what_is_needed_to_finetune_llama_2_70b_without/
pet_vaginal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15n9hx7
false
null
t3_15n9hx7
/r/LocalLLaMA/comments/15n9hx7/what_is_needed_to_finetune_llama_2_70b_without/
false
false
self
1
{'enabled': False, 'images': [{'id': 'WyBK-cNZtYI8e2V2a6UpK1uX-x5RQjkIvLQveCmvURc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bOwmyl6KOPvnXk2ApJoeApQZ4Q-EzWXXPl75WOn-AEY.jpg?width=108&crop=smart&auto=webp&s=b7246ca224b849e17307af1e38e1da252729741a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bOwmyl6KOPvnXk2ApJoeApQZ4Q-EzWXXPl75WOn-AEY.jpg?width=216&crop=smart&auto=webp&s=c4e7fb8ffbd5f6ed63aff4084f4af6af1f546afd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bOwmyl6KOPvnXk2ApJoeApQZ4Q-EzWXXPl75WOn-AEY.jpg?width=320&crop=smart&auto=webp&s=cacc08d69fea74681e002bc9c69237c79577c864', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bOwmyl6KOPvnXk2ApJoeApQZ4Q-EzWXXPl75WOn-AEY.jpg?width=640&crop=smart&auto=webp&s=fc29395b17387c905bdf56012d9e74408d167f0a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bOwmyl6KOPvnXk2ApJoeApQZ4Q-EzWXXPl75WOn-AEY.jpg?width=960&crop=smart&auto=webp&s=ec50388eca0806ee57992819fba448b30dd5e162', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bOwmyl6KOPvnXk2ApJoeApQZ4Q-EzWXXPl75WOn-AEY.jpg?width=1080&crop=smart&auto=webp&s=d22d5b81cd534e776ab1bd6b9ad5e47f5ea62cbb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bOwmyl6KOPvnXk2ApJoeApQZ4Q-EzWXXPl75WOn-AEY.jpg?auto=webp&s=2d758e78264dd4f726e4534208207b8a95cdf09d', 'width': 1200}, 'variants': {}}]}
help to update Llama.cpp
1
[removed]
2023-08-10T11:40:26
https://www.reddit.com/r/LocalLLaMA/comments/15na2ph/help_to_update_llamacpp/
Competitive_Fox7811
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15na2ph
false
null
t3_15na2ph
/r/LocalLLaMA/comments/15na2ph/help_to_update_llamacpp/
false
false
https://a.thumbs.redditm…3ZPkGHGua9c0.jpg
1
null
My experience with LLaMA 2
1
[removed]
2023-08-10T11:55:47
https://www.reddit.com/r/LocalLLaMA/comments/15naepn/my_experience_with_llama_2/
uralstech_MR
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15naepn
false
null
t3_15naepn
/r/LocalLLaMA/comments/15naepn/my_experience_with_llama_2/
false
false
self
1
{'enabled': False, 'images': [{'id': 'DQ0NpM_glprnLD-GOVWRivLTqpaMfa4wZh0dDiD1wnw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/o9AjqIWXV4Hp56E3k-B1szuU63M5WgJ_51ZPTn2SweY.jpg?width=108&crop=smart&auto=webp&s=7c5eb57ce92cb53dd18f9e277fbe1234ad3d6487', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/o9AjqIWXV4Hp56E3k-B1szuU63M5WgJ_51ZPTn2SweY.jpg?width=216&crop=smart&auto=webp&s=7b0f11a7102c7d2b009c900fefa11aea2ea7bea1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/o9AjqIWXV4Hp56E3k-B1szuU63M5WgJ_51ZPTn2SweY.jpg?width=320&crop=smart&auto=webp&s=95cd48d7c84f190be5635db0a604de8c9879f4b1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/o9AjqIWXV4Hp56E3k-B1szuU63M5WgJ_51ZPTn2SweY.jpg?width=640&crop=smart&auto=webp&s=e1c1723f70c20e49d6dd6d783dcfc491539f2448', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/o9AjqIWXV4Hp56E3k-B1szuU63M5WgJ_51ZPTn2SweY.jpg?width=960&crop=smart&auto=webp&s=fd81c83384eaa28eac658f10cacf9a2cfb910836', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/o9AjqIWXV4Hp56E3k-B1szuU63M5WgJ_51ZPTn2SweY.jpg?width=1080&crop=smart&auto=webp&s=1bc3462ab583b926bf14849085900715579c9176', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/o9AjqIWXV4Hp56E3k-B1szuU63M5WgJ_51ZPTn2SweY.jpg?auto=webp&s=36661dfa3d4bb20c5d9701be7671389e880244e0', 'width': 1200}, 'variants': {}}]}
Are embeddings giving better answers than a finetuned model?
1
I am making a chatbot for a company that has 150 PDFs containing thousands of pages of text. I expect the chatbot to give not only simple answers taken verbatim from the PDFs; I want logical answers related to the topics in the PDFs, with the model working out an answer from the PDF data. I tried making embeddings of those PDFs, but I am not getting satisfactory answers using embeddings. If I fine-tune an LLM on the PDF data, will the chatbot give better answers or not?
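A minimal retrieval sketch (not the poster's setup) for checking whether the retrieval step, rather than the LLM, is where the answers break down. The embedding model name and the chunks are placeholders; in a real pipeline the chunks come from the PDFs:

```python
# Hedged sketch: embed document chunks and retrieve the most relevant one for a question.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model

chunks = [
    "Refund requests must be filed within 30 days of purchase.",
    "The warranty covers manufacturing defects for two years.",
]  # in practice: text chunks extracted from the 150 PDFs

chunk_emb = encoder.encode(chunks, convert_to_tensor=True)
query_emb = encoder.encode("How long do customers have to request a refund?", convert_to_tensor=True)

scores = util.cos_sim(query_emb, chunk_emb)[0]
best = scores.argmax().item()
print(f"best chunk ({scores[best]:.2f}): {chunks[best]}")
# The retrieved chunk(s) are then placed in the LLM prompt (RAG) instead of fine-tuning on them.
```

If the top-scoring chunks are already off-topic, the fix is usually chunking/embedding quality, not fine-tuning.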
2023-08-10T12:03:05
https://www.reddit.com/r/LocalLLaMA/comments/15nakwz/is_embeddings_are_giving_better_answer_that/
darshil-3099
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nakwz
false
null
t3_15nakwz
/r/LocalLLaMA/comments/15nakwz/is_embeddings_are_giving_better_answer_that/
false
false
self
1
null
Are embeddings giving better answers than a finetuned model?
1
I am making a chatbot for a company that has 150 PDFs containing thousands of pages of text. I expect the chatbot to give not only simple answers taken verbatim from the PDFs; I want logical answers related to the topics in the PDFs, with the model working out an answer from the PDF data. I tried making embeddings of those PDFs, but I am not getting satisfactory answers using embeddings. If I fine-tune an LLM on the PDF data, will the chatbot give better answers or not?
2023-08-10T12:15:55
https://www.reddit.com/r/LocalLLaMA/comments/15nav9p/is_embeddings_are_giving_better_answer_than/
darshil-3099
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nav9p
false
null
t3_15nav9p
/r/LocalLLaMA/comments/15nav9p/is_embeddings_are_giving_better_answer_than/
false
false
self
1
null
How do you use LLama(2)?
1
[removed]
2023-08-10T12:59:49
https://www.reddit.com/r/LocalLLaMA/comments/15nbv6o/how_do_you_use_llama2/
SecretOk9644
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nbv6o
false
null
t3_15nbv6o
/r/LocalLLaMA/comments/15nbv6o/how_do_you_use_llama2/
false
false
self
1
null
Llama 2 goes to school - a Wikipedia exploration, vector DB ingestion, and chat experiment
1
[removed]
2023-08-10T13:28:57
https://www.reddit.com/r/LocalLLaMA/comments/15nckm3/llama_2_goes_to_school_a_wikipedia_exploration/
crono760
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nckm3
false
null
t3_15nckm3
/r/LocalLLaMA/comments/15nckm3/llama_2_goes_to_school_a_wikipedia_exploration/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Tay9bFq98NoSUfgn0x33E-XJc10Y1CdW2jllu-yQNOQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EGpWnUW_GGRKumcTEmimGWiEfU8hCFPrgnIZ7m5gVkE.jpg?width=108&crop=smart&auto=webp&s=9af61d04cc3d85e9fc3d69da584dffe45dc04636', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EGpWnUW_GGRKumcTEmimGWiEfU8hCFPrgnIZ7m5gVkE.jpg?width=216&crop=smart&auto=webp&s=fb7f24b3629ed59f39a7087a325f4c85761a2989', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EGpWnUW_GGRKumcTEmimGWiEfU8hCFPrgnIZ7m5gVkE.jpg?width=320&crop=smart&auto=webp&s=b97ef3f547d53250d7da0143b5485b3f6a1d30d2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EGpWnUW_GGRKumcTEmimGWiEfU8hCFPrgnIZ7m5gVkE.jpg?width=640&crop=smart&auto=webp&s=b4f87904b7a071458ed7776e33677d1f02a724bb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EGpWnUW_GGRKumcTEmimGWiEfU8hCFPrgnIZ7m5gVkE.jpg?width=960&crop=smart&auto=webp&s=34852c2d2d57ace507dcc8cc7dc8f27a3198a413', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EGpWnUW_GGRKumcTEmimGWiEfU8hCFPrgnIZ7m5gVkE.jpg?width=1080&crop=smart&auto=webp&s=55624a1e90cefe0b74bc2c83af493549ed68ce36', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EGpWnUW_GGRKumcTEmimGWiEfU8hCFPrgnIZ7m5gVkE.jpg?auto=webp&s=732d0287c1cb02f873eed39833b1fd8c4fa20791', 'width': 1200}, 'variants': {}}]}
Do server motherboards with dual CPUs run at double the speed compared to a single CPU, since dual CPUs have double the RAM slots?
1
So I'm planning to build a PC to run LocalLLaMA models on some used server CPUs. I'm planning to either buy one used 2nd-gen EPYC CPU with 8-channel RAM, or two Xeon Gold CPUs with 6-channel RAM each on a dual-CPU motherboard. My question: will two CPUs with 6 channels each be faster than one 8-channel EPYC, since they could use 2*6 = 12 memory channels?
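For rough comparison (my own back-of-the-envelope numbers, not from the post): per-socket bandwidth is roughly channels x transfer rate x 8 bytes, and a dual-socket box only realizes the combined figure if the inference software spreads its working set across both NUMA nodes, which CPU token generation often does not do well.

```python
# Rough theoretical memory-bandwidth comparison (illustrative DIMM speeds, check your actual parts).
def bandwidth_gbs(channels: int, mts: int) -> float:
    """channels * mega-transfers/s * 8 bytes per transfer -> GB/s."""
    return channels * mts * 8 / 1000

epyc_8ch  = bandwidth_gbs(8, 3200)   # 2nd-gen EPYC, DDR4-3200: ~204.8 GB/s
xeon_6ch  = bandwidth_gbs(6, 2933)   # Xeon Gold (Cascade Lake), DDR4-2933: ~140.8 GB/s
dual_xeon = 2 * xeon_6ch             # ~281.6 GB/s combined, but split across two NUMA nodes

print(epyc_8ch, xeon_6ch, dual_xeon)
```

So on paper the dual Xeon wins, but a single process pinned to one socket only sees that socket's ~140 GB/s plus slower remote accesses over the interconnect.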
2023-08-10T13:36:08
https://www.reddit.com/r/LocalLLaMA/comments/15ncr2k/does_server_motherboards_with_dual_cpu_run_dobule/
hoseex999
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ncr2k
false
null
t3_15ncr2k
/r/LocalLLaMA/comments/15ncr2k/does_server_motherboards_with_dual_cpu_run_dobule/
false
false
self
1
null
Summarizing content prompt ignored because of content.
1
I'm working with the llama-2-13b-chat.ggmlv3.q4_0 model to try to summarize content for SEO. It seems to work well for simple content, though if the page has a form on it, the actions and questions of the form tend to hijack the response: it stops returning JSON and starts chatting as if it were responding as the page itself. I found the suggested prompt template and it helped with simple pages but not more complex ones. Below is one of the latest prompts I've tried that just fails. I could be using the wrong model, but I'm hoping I can stay at 13B as it fits easily on my laptop. Is the prompt bad, is this a settings issue, or is it the wrong model?

    <s>[INST] <<SYS>>
    Assistant is an expert JSON builder designed to assist with creating SEO headlines and descriptions.
    Assistant is able to respond to the User and use tools using JSON strings that contain 'headline' and 'description' parameters.
    All of Assistant's communication is performed using this JSON format.
    Here are some previous summarizations between the Assistant and User:
    User: Sign our petition! Sign your name for our petition to support us.
    Assistant: ```json
    {{"headline": "Sign our petition",
    "description": "Sign your name to support our causes"}}
    ```
    User: Complete our survey --- What issues impact you the most. Complete the survey
    Assistant: ```json
    {{"headline": "Complete our Survey",
    "description": "Tell us what issues impact you the most"}}
    ```
    <</SYS>>
    Here is the text of the web page to create the SEO content for.

I'm running this with the llama.cpp server:

    .\server.exe -m .\models\llama-2-13b-chat.ggmlv3.q4_0.bin --mlock --n-gpu-layers 30
2023-08-10T14:55:05
https://www.reddit.com/r/LocalLLaMA/comments/15nepxb/summarizing_content_prompt_ignored_because_of/
jbartley
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nepxb
false
null
t3_15nepxb
/r/LocalLLaMA/comments/15nepxb/summarizing_content_prompt_ignored_because_of/
false
false
self
1
null
Any tips for faster generation for queries with large fixed context/system prompts?
1
[removed]
2023-08-10T15:16:01
https://www.reddit.com/r/LocalLLaMA/comments/15nf8x0/any_tips_for_faster_generation_for_queries_with/
greying_panda
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nf8x0
false
null
t3_15nf8x0
/r/LocalLLaMA/comments/15nf8x0/any_tips_for_faster_generation_for_queries_with/
false
false
self
1
null
Which Azure Cloud GPU is best for fine tuning?
1
The GPU names are different, but are they usually all A100's but just emulated into smaller GPUs?
2023-08-10T15:44:02
https://www.reddit.com/r/LocalLLaMA/comments/15nfyt7/which_azure_cloud_gpu_is_best_for_fine_tuning/
Shyvadi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nfyt7
false
null
t3_15nfyt7
/r/LocalLLaMA/comments/15nfyt7/which_azure_cloud_gpu_is_best_for_fine_tuning/
false
false
self
1
null
Is oobabooga supposed to be this damn slow?
1
Hi, I'm a newb, so I'm not sure if I am putting this in the right place... I would like to bring LLaMA (or any other model) local to experiment with finetuning, etc. I don't have a beast of a machine, but it seems pretty good to me. It is an Asus ROG running: AMD Ryzen 9 5900HX with Radeon Graphics @ 3.30 GHz, NVIDIA GeForce RTX 3070 GPU with 8 GB GDDR6 (256-bit), 64.0 GB system RAM, 4 TB of SSD storage. I'm thinking I *should* be able to run a 7B model locally, right? I followed the steps to set up Oobabooga. I load a 7B model from TheBloke. Everything seems fine. I type in a question, and I watch the output in PowerShell. The output shows up reasonably quickly. But the response inside the web UI is *painful*. Like... Every.... Word... comes... up... 2 seconds... apart. I don't get it. Doesn't it have its answer as soon as the output is returned? Why so slow? What am I missing?
2023-08-10T16:02:47
https://www.reddit.com/r/LocalLLaMA/comments/15nggns/is_oobabooga_supposed_to_be_this_damn_slow/
Ok_Organization_4295
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nggns
false
null
t3_15nggns
/r/LocalLLaMA/comments/15nggns/is_oobabooga_supposed_to_be_this_damn_slow/
false
false
self
1
null
General strategies on how to refine a model to a specific dataset while keeping the NLP-ness?
1
I've managed to train with documents from an internal knowledge base, but I want to keep users from asking off-topic questions like "What is the capital of Iceland?"
2023-08-10T16:11:27
https://www.reddit.com/r/LocalLLaMA/comments/15ngot3/general_strategies_on_how_to_refine_a_model_to_a/
chris480
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ngot3
false
null
t3_15ngot3
/r/LocalLLaMA/comments/15ngot3/general_strategies_on_how_to_refine_a_model_to_a/
false
false
self
1
null
Where's the best community to get help with tortoise TTS?
1
[removed]
2023-08-10T16:36:01
https://www.reddit.com/r/LocalLLaMA/comments/15nhbm7/wheres_the_best_community_to_get_help_with/
Virtamancer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nhbm7
false
null
t3_15nhbm7
/r/LocalLLaMA/comments/15nhbm7/wheres_the_best_community_to_get_help_with/
false
false
self
1
{'enabled': False, 'images': [{'id': 'R1XyIfwJNMprA3q8KUhj7XqSkvuDgfnG7_QKPcJ8kNU', 'resolutions': [], 'source': {'height': 80, 'url': 'https://external-preview.redd.it/eSEXxiI7z3-IUxI_gRwm0PH6HgIOm1-iAuzEbmuJ3ow.jpg?auto=webp&s=2c3ce4d6960f8deaa4baa31576edfc2d236b8ba5', 'width': 80}, 'variants': {}}]}
Did someone calculate how fast GPT-3.5 is, in terms of tokens/second? And how about GPT-4?
1
I read this thread [Inference Speed for Llama 2 70b on A6000 with Exllama - Need Suggestions!](https://www.reddit.com/r/LocalLLaMA/comments/15lf119/inference_speed_for_llama_2_70b_on_a6000_with/?utm_source=share&utm_medium=web2x&context=3), and I think 10 tokens per second is not bad, but I'm not sure how fast that really is. So I want to compare its speed against GPT-3.5 and GPT-4. Does anyone have a concrete number? Or if I want to measure it manually, how should I do it? A stopwatch? Any suggestions? Thank you, guys.
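One hedged way to measure it yourself (assuming the pre-1.0 openai Python client that was current at the time, and your own API key): time a chat completion and divide the reported completion tokens by the elapsed wall-clock time.

```python
# Hedged sketch: rough tokens/second for a ChatCompletion (openai<1.0-style client assumed).
import time
import openai

openai.api_key = "sk-..."  # placeholder

start = time.perf_counter()
resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a 300-word story about a llama."}],
)
elapsed = time.perf_counter() - start

completion_tokens = resp["usage"]["completion_tokens"]
print(f"{completion_tokens} tokens in {elapsed:.1f}s -> {completion_tokens / elapsed:.1f} tok/s")
# Note: this includes request latency; streaming and timing token-to-token is more precise.
```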
2023-08-10T17:19:02
https://www.reddit.com/r/LocalLLaMA/comments/15nig1k/did_someone_calculate_how_fast_gpt35_is_in_terms/
Fancy-Welcome-9064
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nig1k
false
null
t3_15nig1k
/r/LocalLLaMA/comments/15nig1k/did_someone_calculate_how_fast_gpt35_is_in_terms/
false
false
self
1
null
Multimodal-app with video multimodal support? (4bit GPTQ)?
1
Is there any app like [https://github.com/DAMO-NLP-SG/Video-LLaMA.git](https://github.com/DAMO-NLP-SG/Video-LLaMA.git) or [https://github.com/X-PLUG/mPLUG-Owl](https://github.com/X-PLUG/mPLUG-Owl) that also has GPTQ 4-bit support for low-VRAM users? Oobabooga only has image multimodal support. I really want video support with 4-bit GPTQ so it will fit an 8 GB GPU.
2023-08-10T17:20:19
https://www.reddit.com/r/LocalLLaMA/comments/15nihan/multimodalapp_with_video_multimodal_support_4bit/
fetballe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nihan
false
null
t3_15nihan
/r/LocalLLaMA/comments/15nihan/multimodalapp_with_video_multimodal_support_4bit/
false
false
self
1
{'enabled': False, 'images': [{'id': 'vTaZURLh7-5AXEQ0JeE73YLxXFzAqcxN2dUl_wghjSQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wN2q2kGo453e9f_co110shSr0JDCbXcz1qXfkQGrTr4.jpg?width=108&crop=smart&auto=webp&s=9444bf91b799d9c94f126ccbf88d96b4dac446c7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wN2q2kGo453e9f_co110shSr0JDCbXcz1qXfkQGrTr4.jpg?width=216&crop=smart&auto=webp&s=9b2e61d158ebecbc348ce548f0f7f1d17185fed7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wN2q2kGo453e9f_co110shSr0JDCbXcz1qXfkQGrTr4.jpg?width=320&crop=smart&auto=webp&s=8a757d9400e4e72fa03726e7c954c5669cf3ed86', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wN2q2kGo453e9f_co110shSr0JDCbXcz1qXfkQGrTr4.jpg?width=640&crop=smart&auto=webp&s=9a7a3a6ba291745a2ab656abb4688ae58ef52acc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wN2q2kGo453e9f_co110shSr0JDCbXcz1qXfkQGrTr4.jpg?width=960&crop=smart&auto=webp&s=1419dcddfaac32d71e9b45c08e9949751cf0afc3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wN2q2kGo453e9f_co110shSr0JDCbXcz1qXfkQGrTr4.jpg?width=1080&crop=smart&auto=webp&s=e37a6e400425e9f491d7151c923f76727b02edf3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wN2q2kGo453e9f_co110shSr0JDCbXcz1qXfkQGrTr4.jpg?auto=webp&s=871558cd8817ebc0b4779294b05c01539d9f3675', 'width': 1200}, 'variants': {}}]}
How to utilize memory, author's notes effectively in KoboldCPP?
1
So I've noticed that when I select a preset in kcpp and visit the memory settings it has a lot of pre-filled text in there, and frequently there is [ ] (square brackets) surrounding various bits of text, and often there is a format like person: their_name; attitude: their_attitudes; # etc So it got me curious, is there a specific format that should be used when entering data/details into the memory portion? I've also noticed that when I select the presets there is rarely (if ever) anything populated in the author's notes box. I'm assuming this isn't as important then?
2023-08-10T17:21:31
https://www.reddit.com/r/LocalLLaMA/comments/15niihq/how_to_utilize_memory_authors_notes_effectively/
wh33t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15niihq
false
null
t3_15niihq
/r/LocalLLaMA/comments/15niihq/how_to_utilize_memory_authors_notes_effectively/
false
false
self
1
null
Integrating Microsoft’s Guidance into hiyouga/LLaMA-Efficient-Tuning's repo for Faster Processing and Well-Formatted Output
1
Hey, do you know where I can find resources on how to integrate Microsoft’s Guidance into hiyouga/LLaMA-Efficient-Tuning's repo? It says it speeds up the process, which can be super helpful with these resource-heavy beasts, and it is very useful for getting well-formatted output
2023-08-10T17:26:07
https://www.reddit.com/r/LocalLLaMA/comments/15nimvl/integrating_microsofts_guidance_into/
ILoveBoxPlots
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nimvl
false
null
t3_15nimvl
/r/LocalLLaMA/comments/15nimvl/integrating_microsofts_guidance_into/
false
false
self
1
null
How to make Llama 2 GPTQ faster in generation?
1
I use [this](https://huggingface.co/TheBloke/Llama-2-13B-chat-GPTQ) model on a T4 GPU and the generation speed is modest; is there any way to make it faster?
2023-08-10T17:52:27
https://www.reddit.com/r/LocalLLaMA/comments/15njbw1/how_make_llama2_gptq_faster_in_generation/
Euphoric-Nebula-4559
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15njbw1
false
null
t3_15njbw1
/r/LocalLLaMA/comments/15njbw1/how_make_llama2_gptq_faster_in_generation/
false
false
self
1
{'enabled': False, 'images': [{'id': 'rkljmc1ht4wmaJGm60zJkcYTjcDJ3O6Ma8y8dDYwHYI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vBvc7FHJE65n0rhjEdf2N4Vhy4vMT0IFNxHSqGQObp8.jpg?width=108&crop=smart&auto=webp&s=1a72ffb8bd5cc3be930da7d37436aeb9871f39b6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vBvc7FHJE65n0rhjEdf2N4Vhy4vMT0IFNxHSqGQObp8.jpg?width=216&crop=smart&auto=webp&s=4834bb0a4a1a1893261f169f663dc5f8caf3aa24', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vBvc7FHJE65n0rhjEdf2N4Vhy4vMT0IFNxHSqGQObp8.jpg?width=320&crop=smart&auto=webp&s=4d244a1a632999ce7d2c4f476dffd40a57edcd59', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vBvc7FHJE65n0rhjEdf2N4Vhy4vMT0IFNxHSqGQObp8.jpg?width=640&crop=smart&auto=webp&s=68e3dbfa5c777b7d5c5c7d805373cb4aa0562544', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vBvc7FHJE65n0rhjEdf2N4Vhy4vMT0IFNxHSqGQObp8.jpg?width=960&crop=smart&auto=webp&s=57a63c907125ad8e9d155efefda403fc3c21b9f1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vBvc7FHJE65n0rhjEdf2N4Vhy4vMT0IFNxHSqGQObp8.jpg?width=1080&crop=smart&auto=webp&s=a9394eb2b2002d2e590a60251c3b3fcdbe7afc8a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vBvc7FHJE65n0rhjEdf2N4Vhy4vMT0IFNxHSqGQObp8.jpg?auto=webp&s=a07479d4b0463cc023a75d5210c525e3dac0d4b2', 'width': 1200}, 'variants': {}}]}
Here is what the EU considers risky and wants to regulate. Text taken directly from EU's site, link provided.
1
The text isn't all bad, but I do find some of it concerning. Notably:

##### Banned actions

> Unacceptable risk AI systems are systems considered a threat to people and will be banned.
> Real-time and remote biometric identification systems, such as facial recognition

This is a major security issue. Not all data should be open to everyone. Security is moving to passkeys, which require biometric authentication.

##### Must be registered

> AI systems falling into eight specific areas that will have to be registered in an EU database:
> Education and vocational training

An educational model must be registered. That's pretty much everything. This will come down to the wording in the legislation, but it is currently concerning.

##### Generative AI

> Generative AI, like ChatGPT, would have to comply with transparency requirements:
> Disclosing that the content was generated by AI
> Designing the model to prevent it from generating illegal content
> Publishing summaries of copyrighted data used for training

Autocomplete is AI -- it uses the same training architecture as ChatGPT. Most photo tools are AI-based. There are so many tools out there using AI that nothing will be considered yours.

---

https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

**AI Act: different rules for different risk levels**

The new rules establish obligations for providers and users depending on the level of risk from artificial intelligence. While many AI systems pose minimal risk, they need to be assessed.

**Unacceptable risk**

Unacceptable risk AI systems are systems considered a threat to people and will be banned. They include:

- Cognitive behavioural manipulation of people or specific vulnerable groups: for example voice-activated toys that encourage dangerous behaviour in children
- Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics
- Real-time and remote biometric identification systems, such as facial recognition

Some exceptions may be allowed: for instance, "post" remote biometric identification systems where identification occurs after a significant delay will be allowed to prosecute serious crimes, but only after court approval.

**High risk**

AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories:

1) AI systems that are used in products falling under the EU's product safety legislation. This includes toys, aviation, cars, medical devices and lifts.

2) AI systems falling into eight specific areas that will have to be registered in an EU database:

- Biometric identification and categorisation of natural persons
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, worker management and access to self-employment
- Access to and enjoyment of essential private services and public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Assistance in legal interpretation and application of the law

All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle.

**Generative AI**

Generative AI, like ChatGPT, would have to comply with transparency requirements:

- Disclosing that the content was generated by AI
- Designing the model to prevent it from generating illegal content
- Publishing summaries of copyrighted data used for training

**Limited risk**

Limited risk AI systems should comply with minimal transparency requirements that would allow users to make informed decisions. After interacting with the applications, the user can then decide whether they want to continue using it. Users should be made aware when they are interacting with AI. This includes AI systems that generate or manipulate image, audio or video content, for example deepfakes.
2023-08-10T18:07:28
https://www.reddit.com/r/LocalLLaMA/comments/15njqpd/here_is_what_the_eu_considers_risky_and_wants_to/
jayfehr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15njqpd
false
null
t3_15njqpd
/r/LocalLLaMA/comments/15njqpd/here_is_what_the_eu_considers_risky_and_wants_to/
false
false
self
1
{'enabled': False, 'images': [{'id': 'dvY7fvPhlVsoTxqCUAaSr_wr2lTeGds8ENFwBCMZTGA', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/HC_1mvqThFmrVOtO-sR4ZDVqiZlPecbdp-IX6KDVSpw.jpg?width=108&crop=smart&auto=webp&s=d2706306599f52edc15fb8be036a64f98ae4cbcc', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/HC_1mvqThFmrVOtO-sR4ZDVqiZlPecbdp-IX6KDVSpw.jpg?width=216&crop=smart&auto=webp&s=504df8c65372e65e35f3e9209c9555a645cfb015', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/HC_1mvqThFmrVOtO-sR4ZDVqiZlPecbdp-IX6KDVSpw.jpg?width=320&crop=smart&auto=webp&s=6f6df8e6d80348b2378abb6323a08d488862e23c', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/HC_1mvqThFmrVOtO-sR4ZDVqiZlPecbdp-IX6KDVSpw.jpg?width=640&crop=smart&auto=webp&s=a2ff5e00e5b5887139fe465c7a557fa09d6bc148', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/HC_1mvqThFmrVOtO-sR4ZDVqiZlPecbdp-IX6KDVSpw.jpg?width=960&crop=smart&auto=webp&s=958aea40c9f2f6c4c3902cddecb1886083bb7198', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/HC_1mvqThFmrVOtO-sR4ZDVqiZlPecbdp-IX6KDVSpw.jpg?width=1080&crop=smart&auto=webp&s=99ba56b430c6e5270cddeb6aaeb816a49c27980f', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/HC_1mvqThFmrVOtO-sR4ZDVqiZlPecbdp-IX6KDVSpw.jpg?auto=webp&s=b11e87b9c10cdbf975fe952609b77fd79ad8c051', 'width': 1800}, 'variants': {}}]}
Are AMD Ryzen processors supported for running uncensored AI models?
1
[removed]
2023-08-10T19:01:13
https://www.reddit.com/r/LocalLLaMA/comments/15nl4uh/are_amd_ryzen_processors_supported_for_running/
No_Anywhere2560
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nl4uh
false
null
t3_15nl4uh
/r/LocalLLaMA/comments/15nl4uh/are_amd_ryzen_processors_supported_for_running/
false
false
self
1
null
How I ran Llama 2 on my Mac in < 5 mins
1
[removed]
2023-08-10T19:10:45
https://www.reddit.com/r/LocalLLaMA/comments/15nle0k/how_i_ran_llama_2_on_my_mac_in_5_mins/
rajatarya
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nle0k
false
null
t3_15nle0k
/r/LocalLLaMA/comments/15nle0k/how_i_ran_llama_2_on_my_mac_in_5_mins/
false
false
self
1
null
How to finetune a 70B model using A100?
1
Hello, I would like to finetune (LoRA or full SFT) a 60B (LLaMA 1) or 70B (LLaMA 2) model on my custom data, but I could not find a way to do so (it keeps throwing out-of-memory errors). So I'd like to hear any suggestions from you guys.

- I have a custom ~300k-sample dataset consisting of my private data plus some from [GPTeacher](https://github.com/teknium1/GPTeacher). The context window is 2k.
- I can rent 8x 80GB A100s, but only for at most a week, otherwise it goes over budget.

I use huggingface transformers (as most people do) and its [trainer](https://huggingface.co/docs/transformers/main_classes/trainer) and tried:

- SFT with DeepSpeed ZeRO-3 + CPU offload, with [FlashAttention copied from Vicuna](https://github.com/lm-sys/FastChat/blob/main/fastchat/train/llama_flash_attn_monkey_patch.py), gradient checkpointing, and bf16 mixed training. But this gives me an out-of-memory error even with batch size = 1.
- Then I tried LoRA with the same config as [Alpaca-LoRA](https://github.com/tloen/alpaca-lora/tree/main), but again I got an out-of-memory error.
- I haven't tried QLoRA, but from my past experience with it, I think it is impossible to use DeepSpeed with QLoRA, so I can only use 1 GPU. Last month I tried training a 13B model with QLoRA on ~100k samples and it already took me >60 hours, so I can't imagine how expensive it would be to use QLoRA on a 70B model with 300k samples. But I could be wrong.

From my understanding, a 70B model takes at least 140 GB in bf16 just to hold the weights, let alone train them. So a single 80GB A100 cannot even do bf16 inference. Therefore I am looking for methods that use parallelism. Thanks!
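Rough numbers behind that estimate (my own back-of-the-envelope arithmetic, not from the post), showing why full bf16 fine-tuning of a 70B model with Adam blows past 8x80 GB without offload. The fp32 master-weight term is an assumption about how the mixed-precision optimizer is set up:

```python
# Back-of-the-envelope memory for full fine-tuning of a 70B model with Adam (activations ignored).
params = 70e9

weights_bf16 = params * 2       # 140 GB
grads_bf16   = params * 2       # 140 GB
adam_fp32    = params * 4 * 2   # m and v moments in fp32: 560 GB
master_fp32  = params * 4       # fp32 master weights kept by mixed-precision optimizers: 280 GB

total_gb = (weights_bf16 + grads_bf16 + adam_fp32 + master_fp32) / 1e9
per_gpu_zero3 = total_gb / 8    # ZeRO-3 shards weights, grads, and optimizer state across 8 GPUs

print(f"total ~{total_gb:.0f} GB, ~{per_gpu_zero3:.0f} GB per GPU on 8x A100 (before activations)")
# ~1120 GB total, ~140 GB per GPU -> exceeds 80 GB, hence CPU offload, LoRA, or QLoRA.
```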
2023-08-10T19:43:11
https://www.reddit.com/r/LocalLLaMA/comments/15nm8r4/how_to_finetune_a_70b_model_using_a100/
tt19234
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nm8r4
false
null
t3_15nm8r4
/r/LocalLLaMA/comments/15nm8r4/how_to_finetune_a_70b_model_using_a100/
false
false
self
1
{'enabled': False, 'images': [{'id': 'qtkYXuu_0CmgGZOB_5uHII_H1ifZJe4xeYQHxtxYSxM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uoz6hL1-77bHD65K5wRj4xyHCFrya_5NkcEZcHRWO28.jpg?width=108&crop=smart&auto=webp&s=67441d4962990562a884de7f50a4a9a449bbb2ef', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uoz6hL1-77bHD65K5wRj4xyHCFrya_5NkcEZcHRWO28.jpg?width=216&crop=smart&auto=webp&s=4a2446155c60d7cc020359633696f13e465efabe', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uoz6hL1-77bHD65K5wRj4xyHCFrya_5NkcEZcHRWO28.jpg?width=320&crop=smart&auto=webp&s=b829be685c13d498908042e56c8761abeebcbca2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uoz6hL1-77bHD65K5wRj4xyHCFrya_5NkcEZcHRWO28.jpg?width=640&crop=smart&auto=webp&s=479ef5caa37a2ecabd1148261ae7170148a1d69f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uoz6hL1-77bHD65K5wRj4xyHCFrya_5NkcEZcHRWO28.jpg?width=960&crop=smart&auto=webp&s=c531fc97ca89c8ecb36c389481d5b58ef312e2be', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uoz6hL1-77bHD65K5wRj4xyHCFrya_5NkcEZcHRWO28.jpg?width=1080&crop=smart&auto=webp&s=8aeb6db82c2bbca1210f82e22b1c33c99cf6185d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uoz6hL1-77bHD65K5wRj4xyHCFrya_5NkcEZcHRWO28.jpg?auto=webp&s=3c619bc66fadbf757e1fed43fda6f95db2b4c51e', 'width': 1200}, 'variants': {}}]}
Why are models trained in fp16 and not pre-quantized?
1
I know the most computationally expensive part of LLM is the training portion, requiring vast resources. From what I've seen, 8 bit quants perform very similarly to fp16. Couldn't large companies making foundational models utilize more parameters if they trained a model in a pre-quantized form? Could they train a model in mixed precision, for a compromise between the two?
2023-08-10T20:02:45
https://www.reddit.com/r/LocalLLaMA/comments/15nmrat/why_are_models_trained_in_fp16_and_not/
clyspe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nmrat
false
null
t3_15nmrat
/r/LocalLLaMA/comments/15nmrat/why_are_models_trained_in_fp16_and_not/
false
false
self
1
null
Tesla GPU's for LLM text generation?
1
I've been using my own PC for CPU generation via koboldcpp, but I plan on using a server for generation. I already have access to an HP DL380 Gen 9 in a datacenter, so any cards would not need a printed fan shroud or other cooling solutions. I've seen people use a Tesla P40 with varying success, but most setups are focused on using them in a standard case. The server already has 2x E5-2680 v4s, 128 GB ECC DDR4 RAM, and ~28 TB of storage. The budget for graphics cards would be around $450, or $500 if I find decent prices on GPU power cables for the server.
2023-08-10T20:02:46
https://www.reddit.com/r/LocalLLaMA/comments/15nmrbf/tesla_gpus_for_llm_text_generation/
novasapper5431
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nmrbf
false
null
t3_15nmrbf
/r/LocalLLaMA/comments/15nmrbf/tesla_gpus_for_llm_text_generation/
false
false
self
1
null
How to use llama-cpp-python low level API?
1
[removed]
2023-08-10T20:47:46
https://www.reddit.com/r/LocalLLaMA/comments/15nnxu3/how_to_use_llamacpppython_low_level_api/
Explo105
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nnxu3
false
null
t3_15nnxu3
/r/LocalLLaMA/comments/15nnxu3/how_to_use_llamacpppython_low_level_api/
false
false
self
1
{'enabled': False, 'images': [{'id': 'uFQqaMgbcGi7N4bgBl94x0TsOK2XyYKVF2f1UAEn6ik', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/13LESHVDaIOtTyZ7QPb7HBC3HVGuVVwWoTdUQNXS_AE.jpg?width=108&crop=smart&auto=webp&s=c3ebcccf94dc87fd6a82f907dc0c14ba17fcfb5a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/13LESHVDaIOtTyZ7QPb7HBC3HVGuVVwWoTdUQNXS_AE.jpg?width=216&crop=smart&auto=webp&s=74fa829ac059b349900c68ab0d76d8855f513ed6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/13LESHVDaIOtTyZ7QPb7HBC3HVGuVVwWoTdUQNXS_AE.jpg?width=320&crop=smart&auto=webp&s=28f383a729ddefce7dfd029759c572adaf27cbe7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/13LESHVDaIOtTyZ7QPb7HBC3HVGuVVwWoTdUQNXS_AE.jpg?width=640&crop=smart&auto=webp&s=3a5af8629c20e01262766a346d929d0708cfded4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/13LESHVDaIOtTyZ7QPb7HBC3HVGuVVwWoTdUQNXS_AE.jpg?width=960&crop=smart&auto=webp&s=3d567f444cb726883c11252883a57e5faf3c4e1b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/13LESHVDaIOtTyZ7QPb7HBC3HVGuVVwWoTdUQNXS_AE.jpg?width=1080&crop=smart&auto=webp&s=079ed6c31d3e20e6b4c7e19fceab30f3c7ef002a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/13LESHVDaIOtTyZ7QPb7HBC3HVGuVVwWoTdUQNXS_AE.jpg?auto=webp&s=592d1d1cb820a6fde6e62aaecd4e2e4f8f2a551e', 'width': 1200}, 'variants': {}}]}
LLM that can analyze videos
1
Is there anything like this for videos: [https://huggingface.co/Salesforce/instructblip-vicuna-13b](https://huggingface.co/Salesforce/instructblip-vicuna-13b) I want to be able to analyze what happens in a video and get a detailed description of it. Even if the video doesn't play any sound.
2023-08-10T21:22:59
https://www.reddit.com/r/LocalLLaMA/comments/15nouur/llm_that_can_analyze_videos/
philm999
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nouur
false
null
t3_15nouur
/r/LocalLLaMA/comments/15nouur/llm_that_can_analyze_videos/
false
false
default
1
{'enabled': False, 'images': [{'id': 'josfXevWH3GKDzVNR16FjXB__-rqNIAHGVYLa_h2MZM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mTpw6u4zvJQS7aas6LrfP61D3X6Xag8d2yDpi6-3buY.jpg?width=108&crop=smart&auto=webp&s=31233298cd01470aecdb517f6a014a44c8c9954c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mTpw6u4zvJQS7aas6LrfP61D3X6Xag8d2yDpi6-3buY.jpg?width=216&crop=smart&auto=webp&s=4f2bb92852485752d6d8f53bd01cb733b74901d5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mTpw6u4zvJQS7aas6LrfP61D3X6Xag8d2yDpi6-3buY.jpg?width=320&crop=smart&auto=webp&s=da7a38366dd865ba6f2c64e2f39fe7567b8431d2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mTpw6u4zvJQS7aas6LrfP61D3X6Xag8d2yDpi6-3buY.jpg?width=640&crop=smart&auto=webp&s=6b80741b98a6b7093786c33deeb49c342de90633', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mTpw6u4zvJQS7aas6LrfP61D3X6Xag8d2yDpi6-3buY.jpg?width=960&crop=smart&auto=webp&s=fbacc598e79c5a506392912e2b54796b41d96d36', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mTpw6u4zvJQS7aas6LrfP61D3X6Xag8d2yDpi6-3buY.jpg?width=1080&crop=smart&auto=webp&s=945687a13196f13c8cf22d0254fdd54e20685d38', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mTpw6u4zvJQS7aas6LrfP61D3X6Xag8d2yDpi6-3buY.jpg?auto=webp&s=86aec593161ebaf14d9ed4cb6fdaa5b6e385ee58', 'width': 1200}, 'variants': {}}]}
Great interview with Tim Dettmers (bitsandbytes / QLORA)
1
[(2) AI on your phone? Tim Dettmers on quantization of neural networks — #41 - YouTube](https://www.youtube.com/watch?v=auw2hXywrU8&t=1766s)

Lots of fascinating insights in an interview between Steve Hsu and Tim Dettmers, quantization savant extraordinaire. Very techy. Below is the Claude 2 summary I have corrected; it is accurate based on my listening but doesn't convey how deep in the reeds Dettmers gets.

Tim's Background:

* Tim Dettmers grew up in rural Germany and was diagnosed with dyslexia at a young age. His dyslexia made school extremely difficult, to the point that he left high school without a degree.
* A high school computer science teacher recognized Tim's talent for computing and encouraged him to pursue it. Tim tried vocational school in programming but struggled with written exam questions until finding an apprenticeship in mathematical software development which got around his challenges. The practical programming focus matched his abilities.
* Unable to attend university in Germany without a high school degree, Tim earned degrees in psychology and applied math through distance learning from Open University. He excelled when able to use computers for writing rather than writing by hand.
* Fascinated by AI, Tim competed on Kaggle building custom neural nets on GPUs. This demonstrated their potential power. He decided to pursue a PhD in computer science focused on neural networks.

Tim's Quantization Research:

* Tim pioneered quantization techniques to compress neural networks by reducing the bits needed to represent each weight, from 32-bit floats to 8/4-bit integers.
* He develops specialized floating point and integer quantization schemes optimized for the distribution of weight values. The formats allocate more bits for outlier values containing higher information content.
* Tim draws on information theory, modeling quantization as histogram binning. He aims to maximize information density in each bit. This means smoothing the distribution so no large unused ranges exist between bins.
* By compressing model size 4-8x, quantization enables running huge models on consumer GPUs. Tim makes state-of-the-art models accessible for grassroots experimentation and innovation.

Tim's Predictions on AI Progress:

* Tim sees AI rapidly becoming superhuman within narrow domains in the next 3-5 years, through fine-tuning gigantic models on specialized data. But general intelligence will remain inferior to humans.
* In 10 years, he expects hardware improvements to slow as we hit physical limits like the speed of light for communication between nodes. So research will focus more on maximizing efficiency of models and training algorithms.
* Tim envisions foundation models adapted with many specialized expert modules for particular tasks. This will improve reliability over general models for deployment.

Tim's Training Research:

* For efficient training, Tim developed QLoRA, quantizing models during fine-tuning while representing adaptation matrices in higher precision. This maintains accuracy while greatly reducing memory.
* He also pioneered "swarm parallelism" techniques to massively scale training across distributed consumer hardware on the internet. This uses stochastic updates resilient to noise and latency.
* Tim sees these methods enabling grassroots innovation as hobbyists can fine-tune models themselves. He wants people to know these powerful models are accessible for exploration.
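To make the "histogram binning" intuition concrete, here is a tiny, hedged illustration of plain absmax 8-bit quantization (my own toy example, not Dettmers' actual bitsandbytes kernels): a single outlier stretches the scale so most of the 256 bins go unused, which is exactly the problem his outlier-aware formats address.

```python
# Toy absmax int8 quantization round-trip; not the bitsandbytes implementation.
import numpy as np

def absmax_quantize(x: np.ndarray):
    scale = 127.0 / np.max(np.abs(x))        # one scale per tensor (per-row/block in practice)
    q = np.round(x * scale).astype(np.int8)  # 256 "bins" shared across the whole value range
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) / scale

w = np.random.normal(0, 0.02, size=1000).astype(np.float32)
w[0] = 1.5  # a single outlier forces a coarse step (~0.012) relative to typical |w| ~ 0.02

q, s = absmax_quantize(w)
err = np.abs(w - dequantize(q, s)).mean()
print(f"mean abs error with outlier: {err:.6f}")
```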
2023-08-10T21:45:42
https://www.reddit.com/r/LocalLLaMA/comments/15npg12/great_interview_with_tim_dettmers_bitsandbytes/
PookaMacPhellimen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15npg12
false
null
t3_15npg12
/r/LocalLLaMA/comments/15npg12/great_interview_with_tim_dettmers_bitsandbytes/
false
false
self
1
{'enabled': False, 'images': [{'id': 'jKV4K6Z9JAzZlZ4XulryYTGO3ZqXcC3B7ZEl4WUsMA8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/S3frq_ZajSu3id-jdJqPNUGv5jSPUEswAadZBSdvBJE.jpg?width=108&crop=smart&auto=webp&s=7a530ea943cbeaf8c19f0ca7119ab3431b4be5b2', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/S3frq_ZajSu3id-jdJqPNUGv5jSPUEswAadZBSdvBJE.jpg?width=216&crop=smart&auto=webp&s=a6d6eb0ee7beb6ba2af04019932d1cbc93dbbd17', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/S3frq_ZajSu3id-jdJqPNUGv5jSPUEswAadZBSdvBJE.jpg?width=320&crop=smart&auto=webp&s=c1a68f93e81cce2b7d019d1c9a5009737190ee0e', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/S3frq_ZajSu3id-jdJqPNUGv5jSPUEswAadZBSdvBJE.jpg?auto=webp&s=324e06b332a63044d6e95b0245038d14067366d8', 'width': 480}, 'variants': {}}]}
Xbox series X, GDDR6 LLM beast?
1
From the Xbox Series X specs, it seems it would be an LLM beast, like Apple M2 hardware... Can a recent Xbox run Linux? Or will AMD release an APU with lots of integrated GDDR6 like this for PC builders?

- CPU: 8 cores @ 3.8 GHz (3.66 GHz w/ SMT), custom Zen 2
- GPU: 12 TFLOPS, 52 CUs @ 1.825 GHz, custom RDNA 2
- Die size: 360.45 mm2
- Process: 7nm Enhanced
- **Memory: 16 GB GDDR6 w/ 320-bit bus**
- **Memory bandwidth: 10 GB @ 560 GB/s, 6 GB @ 336 GB/s**
2023-08-10T21:50:23
https://www.reddit.com/r/LocalLLaMA/comments/15npkc7/xbox_series_x_gddr6_llm_beast/
randomqhacker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15npkc7
false
null
t3_15npkc7
/r/LocalLLaMA/comments/15npkc7/xbox_series_x_gddr6_llm_beast/
false
false
self
1
null
Lower power options
1
I'm in the peculiar situation of not being able to draw too much power. I'm currently using a mini-PC that can run 7B GGML okay with short context, but I cannot really run a newer GPU in my household due to other appliances and using a caravan power supply. Can you get decent results with GGML supported by the newer AMD iGPUs like the 780M? Or is there a lower-power option that can decently run 7B or 13B models?
2023-08-10T21:55:57
https://www.reddit.com/r/LocalLLaMA/comments/15nppg4/lower_power_options/
Monkey_1505
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nppg4
false
null
t3_15nppg4
/r/LocalLLaMA/comments/15nppg4/lower_power_options/
false
false
self
1
null
Termux bash script for YT summarizing
1
This is a script I made to summarize 5-10 min YT videos quickly with 3B models in Termux. It uses whisper.cpp and yt-dlp, and the prompt cache if the first summary needs redoing. It works great for 6 GB phones, with up to 1536 context at full speed; it slows down if higher.

(1) Installation: copy and paste the chunk below in Termux:

    pkg up
    pkg install clang wget git cmake python ffmpeg -y
    python3 -m pip install --upgrade yt-dlp
    wget -c https://huggingface.co/s3nh/mamba-gpt-3b-v3-GGML/resolve/main/mamba-gpt-3b-v3.ggmlv3.q4_0.bin
    wget -c https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-tiny.en.bin
    git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && cmake . && cmake --build . --config Release
    cp bin/main /data/data/com.termux/files/usr/bin/llamacpp
    cd ..
    git clone https://github.com/ggerganov/whisper.cpp
    cd whisper.cpp && cmake . && cmake --build . --config Release
    cp main /data/data/com.termux/files/usr/bin/whispercpp
    cd ..

(2) Script: https://pastebin.com/raw/Ma4gwEzA

Type `nano sum.sh` and paste the above text. Save and exit. Make the script executable with `chmod +x sum.sh`.

(3) Run with `./sum.sh` and paste the link when prompted. If you enter nothing, you'll revisit the previous cache.

(4) Adjustments: you may like to adjust for more or less context with a larger phone. Let me know if some higher contexts or larger models work well! The transcription whisper model is tiny.en; a quantized version of a larger whisper model may work better, sometimes.

An example prompt to enter: "Summarize above article in numbered bulletpoints."
2023-08-10T23:01:31
https://www.reddit.com/r/LocalLLaMA/comments/15nrduy/termux_bash_script_for_yt_summarizing/
Aaaaaaaaaeeeee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nrduy
false
null
t3_15nrduy
/r/LocalLLaMA/comments/15nrduy/termux_bash_script_for_yt_summarizing/
false
false
self
1
null
What causes a chat session to suddenly stop answering.(not OOM or crash)?
1
I'm using this model TheBloke_Llama-2-70B-Chat-GGML with the WebUI front end in chat mode. And it seems to be fairly good. However, after a few dozen question answers, it stops responding. If I clear the history, then it starts working again. Is this something related to context? And is there a way to change it perhaps? Thank you.
2023-08-10T23:40:37
https://www.reddit.com/r/LocalLLaMA/comments/15nsc3b/what_causes_a_chat_session_to_suddenly_stop/
w7gg33h
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nsc3b
false
null
t3_15nsc3b
/r/LocalLLaMA/comments/15nsc3b/what_causes_a_chat_session_to_suddenly_stop/
false
false
self
1
null
Cost to Run Llama 2
1
I am creating a web-app, chatbot and was wondering how much it costs to cloud host Llama 2 vs GPT 3.5-turbo API or other OpenAI models. Llama 2 would run with Replicate/Streamlit.
2023-08-10T23:48:49
https://www.reddit.com/r/LocalLLaMA/comments/15nsj39/cost_to_run_llama_2/
shashwat73
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nsj39
false
null
t3_15nsj39
/r/LocalLLaMA/comments/15nsj39/cost_to_run_llama_2/
false
false
self
1
null
ChatGPT Recent Hallucinations
1
2023-08-10T23:53:55
https://www.reddit.com/r/ChatGPT/comments/15nfpew/chatgpts_weird_hallucinations_over_the_past_24hrs/
GlobalRevolution
reddit.com
1970-01-01T00:00:00
0
{}
15nsnak
false
null
t3_15nsnak
/r/LocalLLaMA/comments/15nsnak/chatgpt_recent_hallucinations/
false
false
default
1
null
Business idea: looking for advice and maybe collaborators.
1
[removed]
2023-08-11T02:06:41
https://www.reddit.com/r/LocalLLaMA/comments/15nvo26/business_idea_looking_for_advice_and_maybe/
robkkni
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nvo26
false
null
t3_15nvo26
/r/LocalLLaMA/comments/15nvo26/business_idea_looking_for_advice_and_maybe/
false
false
self
1
null
llama.cpp generation with (older) GPU is *slower* than pure CPU?
1
Hi everyone. I am having trouble running llama.cpp under Linux on some mildly retro hardware (Xeon E5-2630L V2, GeForce GT730 2GB). More specifically, the generation speed gets *slower* as more layers are offloaded to the GPU.

**LLaMA 7B Q4_K_M, 100 tokens:**

- Compiled without CUBLAS: 5.32 tokens per second
- With CUBLAS, -ngl 1: 4.59 tokens per second
- With CUBLAS, -ngl 4: 3.16 tokens per second
- With CUBLAS, -ngl 10: 2.02 tokens per second

It took some time to find a combination of OS/CUDA that would support the GPU and allow a successful compile, and even that took a bit of fiddling. I ended up with:

- Latest llama.cpp source, via git clone
- Ubuntu 18.04
- GCC 7.5.0 (bundled with Ubuntu 18) and GCC 8.4.0 (manual install)
- CUDA toolkit 10.2.89
- Legacy GPU driver 470.182.03

I have no idea if one or more of these is a bottleneck, or if it's simply that the GPU is too old. Still, the GPU is a couple of years younger than the CPU, so I would expect it should be able to generate at least somewhat faster? Anyone else having issues running on older, less fully supported hardware? Thanks for any tips.
2023-08-11T02:16:24
https://www.reddit.com/r/LocalLLaMA/comments/15nvvr5/llamacpp_generation_with_older_gpu_is_slower_than/
dual_ears
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nvvr5
false
null
t3_15nvvr5
/r/LocalLLaMA/comments/15nvvr5/llamacpp_generation_with_older_gpu_is_slower_than/
false
false
self
1
null
Understanding document layout llms.
1
[removed]
2023-08-11T02:20:28
https://www.reddit.com/r/LocalLLaMA/comments/15nvyyi/understanding_document_layout_llms/
nlpllama
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nvyyi
false
null
t3_15nvyyi
/r/LocalLLaMA/comments/15nvyyi/understanding_document_layout_llms/
false
false
self
1
null
Accuracy of quantized models
1
Could anyone please point me to a comparison of accuracy between quantized and regular versions of LLaMA models, if such a comparison exists? In all my tests, quantized models seem to be less accurate than the respective original models. Is this expected? I am using bitsandbytes.

`bnb_config = BitsAndBytesConfig(`
`load_in_4bit=True,`
`bnb_4bit_use_double_quant=True,`
`bnb_4bit_quant_type="nf4",`
`bnb_4bit_compute_dtype=torch.bfloat16`
`)`
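For context when reproducing such a comparison: some accuracy loss is expected with 4-bit NF4 weights, though it is usually small. Below is a minimal sketch (not from the original post) of loading the same checkpoint once with this quantization config and once in fp16, so both can be scored on identical prompts; the model id and prompt are placeholders.

```python
# Sketch: compare a 4-bit NF4 load against an fp16 load of the same checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-7b-hf"  # example checkpoint, swap in your own

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model_4bit = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model_fp16 = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "The capital of France is"
for name, model in [("4-bit", model_4bit), ("fp16", model_fp16)]:
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=20, do_sample=False)
    print(name, tokenizer.decode(out[0], skip_special_tokens=True))
```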
2023-08-11T02:55:02
https://www.reddit.com/r/LocalLLaMA/comments/15nwqen/accuracy_of_quantized_models/
Inevitable_Ebb9533
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nwqen
false
null
t3_15nwqen
/r/LocalLLaMA/comments/15nwqen/accuracy_of_quantized_models/
false
false
self
1
null
Building a hosted transcription index using Llama-index. API accessible.
1
Can anyone recommend a code repo to build and host an index (using llama-index) on a local server, such that I can send API requests to query the index from a local app? Something with a basic UI would be great. Ideally it would have the ability to record audio, transcribe it (whisper), and build an index from the transcribed text, but this isn't essential as I can do that myself. Thoughts would be appreciated. I'm trying to help a clinician build a reporting tool. I tried a code repo called flask_react from a llama-index tutorial but it doesn't appear to be operational.
2023-08-11T03:31:22
https://www.reddit.com/r/LocalLLaMA/comments/15nxif5/building_a_hosted_transcription_index_using/
Majestic_Kangaroo319
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nxif5
false
null
t3_15nxif5
/r/LocalLLaMA/comments/15nxif5/building_a_hosted_transcription_index_using/
false
false
self
1
null
It starts off so promising... llama-2-70b-chat.Q4_K_M.bin
1
I was playing around with the Q4_K_M quantised version of the model on my M1 MacBook. Using a tree-of-thought prompt, it started off in a promising way, though weirdly it didn't show the chatbot's multiple personalities' conversations, just their final consensus. For the record, the correct answer is that Jill killed Ella. Here are the prompt instructions:

This is a conversation between multiple expert personalities of an AI chatbot as they attempt to answer the user's question. Thinking logically and carefully, using only information provided in the question, the experts will each hypothesize a possible scenario. For each scenario, the experts will follow the line of reasoning until a flaw is found, or a consensus is reached that the scenario is correct. If a flaw is found that line of reasoning will be abandoned and the experts will repeat the process with the next scenario. Experts will not speculate about facts not in evidence. respond in simple markdown.

Here is the conversation:

**User:** On Saturday at some point, one of the following people killed one of the others: Bob, Abe, Jill, Ella, Sienna, Dave, Smith. Bob has brown hair, is tall, and wears blue jeans. Abe has blonde hair, is short, and wears yoga pants. Jill has brown hair, is short, and wears yoga pants. Ella has blonde hair, is tall, and wears blue jeans. Sienna has black hair, is short, and wears a business suit. Dave has gray hair, is short, and wears a business suit. Smith is bald, is tall, wears blue jeans. The victim was wearing blue jeans. Brown locks of the killer's hair were found in the victim's clenched fists. On Saturday, Abe made breakfast for Jill because she said she wasn't feeling well. Ella left for her morning run at 8am after grabbing a quick bite of breakfast from the plate Abe was making for Jill, returning at 9am. Bob and Smith were away on a fishing trip, returning on Monday. Who killed whom? Start by determining who the victim was.

**llama:** The victim was wearing blue jeans, therefore the victim must have been either Bob or Ella as they are the only two people mentioned to be wearing blue jeans. Since the brown locks of the killer's hair were found in the victim's clenched fists, it is likely that the victim was not the one with brown hair (Jill), as it would be unlikely for them to grab onto their own hair. Therefore, our first hypothesis is that the victim was Ella.

**User:** and the killer?

**llama:** Given that the victim was Ella, and the killer had brown hair, our first suspect is Jill. However, this creates a paradox since Jill was the one who made breakfast for Abe, indicating that she was alive at the time of the murder. Our second hypothesis is that the killer was actually Abe, who wore yoga pants and had blonde hair, but this raises the question of why the victim would have brown locks of hair in their fists if Abe did not have any brown hair. Therefore, this scenario is flawed and we must consider alternative hypotheses.
2023-08-11T03:54:37
https://www.reddit.com/r/LocalLLaMA/comments/15nxz77/it_starts_off_so_promising_llama270bchatq4_k_mbin/
Ruin-Capable
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nxz77
false
null
t3_15nxz77
/r/LocalLLaMA/comments/15nxz77/it_starts_off_so_promising_llama270bchatq4_k_mbin/
false
false
self
1
null
Accelerated computing and Generative AI
1
This group serves as a hub for enthusiasts, professionals, and learners interested in the exciting fields of accelerated computing and generative AI. Here, we explore cutting-edge technologies, share insights, and engage in discussions on topics ranging from GPU acceleration and deep learning to creative content generation.
2023-08-11T03:58:36
https://i.redd.it/311zkkcwmehb1.jpg
nayanrabiul
i.redd.it
1970-01-01T00:00:00
0
{}
15ny21s
false
null
t3_15ny21s
/r/LocalLLaMA/comments/15ny21s/accelerated_computing_and_generative_ai/
false
false
https://b.thumbs.redditm…6_CRjWkWPg3o.jpg
1
{'enabled': True, 'images': [{'id': '1XjN4RrJJ1ikwTfxpfrlQvtHuR95nmkq6ail5AAZ0qU', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/311zkkcwmehb1.jpg?width=108&crop=smart&auto=webp&s=4e67bbf5ae2ce522365eaae580d1d0df6beb3aab', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/311zkkcwmehb1.jpg?width=216&crop=smart&auto=webp&s=09d7f72c26aa7f5c469e35453572d4eef0233a3d', 'width': 216}, {'height': 191, 'url': 'https://preview.redd.it/311zkkcwmehb1.jpg?width=320&crop=smart&auto=webp&s=c69d313b5c5e6e91251bb5354d50446428763ad9', 'width': 320}], 'source': {'height': 274, 'url': 'https://preview.redd.it/311zkkcwmehb1.jpg?auto=webp&s=38d526e074f219804212fe05e5d0fce5e62d0d46', 'width': 457}, 'variants': {}}]}
Is Oobabooga "Is Typing..." for a long time
1
Is it normal for a 13b model loaded in ExLlama on an 8gb 3070ti to take a minute or so stuck in "Is Typing..." when context (Whatever it's showing in the cmd window for each generation) reaches around 1400? Sometimes much lower (900)? During this time my computer is laggy as if it's doing something but the GPU is doing next to nothing, about a 3% utilisation and a single core of my 16 threads cpu is going ham (100%) for some reason. Does anyone know what is happening here?
2023-08-11T04:13:37
https://www.reddit.com/r/LocalLLaMA/comments/15nydic/is_oobabooga_is_typing_for_a_long_time/
AurrenTheWolf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nydic
false
null
t3_15nydic
/r/LocalLLaMA/comments/15nydic/is_oobabooga_is_typing_for_a_long_time/
false
false
self
1
null
Is it possible to stream the input?
1
[removed]
2023-08-11T04:22:51
https://www.reddit.com/r/LocalLLaMA/comments/15nyjv6/is_it_possible_to_stream_the_input/
EndlessZone123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nyjv6
false
null
t3_15nyjv6
/r/LocalLLaMA/comments/15nyjv6/is_it_possible_to_stream_the_input/
false
false
self
1
null
A light Docker build for KoboldCPP
1
## A light Docker build for KoboldCPP

1. Simply bring your own models into the /models folder
2. Start Docker
3. Start the build

```
chmod 555 build_kobold.sh
./build_kobold.sh
```

4. You can adjust the start_program.sh bash script. It simply contains the runtime settings passed into the koboldcpp.py Python module.

Default module settings in the bash script now:

- model ggml-model-q4_0.bin
- launch
- threads 6
- contextsize 2048
- stream
- smartcontext
- usemirostat 2 0.5 0.1
- usemlock
- debugmode
2023-08-11T04:30:34
https://github.com/snakewizardd/koboldCppDocker
Happy_Chicken9835
github.com
1970-01-01T00:00:00
0
{}
15nypdr
false
null
t3_15nypdr
/r/LocalLLaMA/comments/15nypdr/a_light_docker_build_for_koboldcpp/
false
false
https://b.thumbs.redditm…5Rmw7mrY7Cdo.jpg
1
{'enabled': False, 'images': [{'id': '3Bny7XcchuhX2ijXTv134cTC8GzyAiwImwuuLJ6Jek0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/i5b73Z1A4hs5APUGUmJq22tR3gFwG5JvSJd-GelBM0M.jpg?width=108&crop=smart&auto=webp&s=43ac317e0ce9d6894a8ee875a492613079b56efb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/i5b73Z1A4hs5APUGUmJq22tR3gFwG5JvSJd-GelBM0M.jpg?width=216&crop=smart&auto=webp&s=6640db92e510a68bb07f9f5b78134005f721a51e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/i5b73Z1A4hs5APUGUmJq22tR3gFwG5JvSJd-GelBM0M.jpg?width=320&crop=smart&auto=webp&s=4956870650065e9adb3f6882c2d2a1d921bca964', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/i5b73Z1A4hs5APUGUmJq22tR3gFwG5JvSJd-GelBM0M.jpg?width=640&crop=smart&auto=webp&s=82be57a7cc18c2e44e51f8253b6375e5b98c9b3e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/i5b73Z1A4hs5APUGUmJq22tR3gFwG5JvSJd-GelBM0M.jpg?width=960&crop=smart&auto=webp&s=6beeb8e6be518fabf76449b8ecb54c0cf2e66b14', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/i5b73Z1A4hs5APUGUmJq22tR3gFwG5JvSJd-GelBM0M.jpg?width=1080&crop=smart&auto=webp&s=fb31eee494e23f8a8f642b923b8b6b10ad7facd1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/i5b73Z1A4hs5APUGUmJq22tR3gFwG5JvSJd-GelBM0M.jpg?auto=webp&s=20649e4502992d6900d4aa4e3112b71d7385373c', 'width': 1200}, 'variants': {}}]}
Winding down with a little conversation before bed (power usage)
1
https://preview.redd.it/mgm6ko4vxehb1.png?width=1589&format=png&auto=webp&s=b22c69297bad29009fa9f5d745e95c5fc5ba8e46
2023-08-11T05:01:12
https://www.reddit.com/r/LocalLLaMA/comments/15nzaok/winding_down_with_a_little_conversation_before/
tronathan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nzaok
false
null
t3_15nzaok
/r/LocalLLaMA/comments/15nzaok/winding_down_with_a_little_conversation_before/
false
false
https://b.thumbs.redditm…7LluFlBbGzxw.jpg
1
null
I went for a hello and now I'm offended. And so is Juan, my South American camelid pet. 🦙
121
2023-08-11T05:01:59
https://i.redd.it/81ttbwt0yehb1.png
mavinis
i.redd.it
1970-01-01T00:00:00
0
{}
15nzbcx
false
null
t3_15nzbcx
/r/LocalLLaMA/comments/15nzbcx/i_went_for_a_hello_and_now_im_offended_and_so_is/
false
false
https://b.thumbs.redditm…j1Pef6sfV4No.jpg
121
{'enabled': True, 'images': [{'id': 'PZbwUVhhYCb09poGglkq6vxsyYmEfqcRUhjg6cm8b74', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/81ttbwt0yehb1.png?width=108&crop=smart&auto=webp&s=dc466ab2ce5b25ff55a4eef8193d3cb9ebde0fcf', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/81ttbwt0yehb1.png?width=216&crop=smart&auto=webp&s=76005e6ed92a759517f655c2290f3e52ef66e198', 'width': 216}, {'height': 168, 'url': 'https://preview.redd.it/81ttbwt0yehb1.png?width=320&crop=smart&auto=webp&s=e0a207c6bc01648fcd62e7824605d37ca460b928', 'width': 320}, {'height': 337, 'url': 'https://preview.redd.it/81ttbwt0yehb1.png?width=640&crop=smart&auto=webp&s=183d91b6b1d2089c7dfc4f18402acb3f4343606e', 'width': 640}, {'height': 506, 'url': 'https://preview.redd.it/81ttbwt0yehb1.png?width=960&crop=smart&auto=webp&s=a1f6f39138e7302d0e7295713557cca170ce8739', 'width': 960}], 'source': {'height': 514, 'url': 'https://preview.redd.it/81ttbwt0yehb1.png?auto=webp&s=57b60e079743996918a201d3f2e3dd344cb48480', 'width': 975}, 'variants': {}}]}
If I don't care about inference time at all, can I run larger models on weaker hardware? I'm fine with like 5-6 tokens a minute.
27
I am doing the prerequisite research before I load up a local model for an application I have, that is currently using the openAI API + a number of smaller ML models. Without going into too much detail each night it parses a medium sized text document then returns various analytics. I don't actually need the information until the next morning so if inference takes all night it's not an issue.
2023-08-11T05:22:53
https://www.reddit.com/r/LocalLLaMA/comments/15nzphu/if_i_dont_care_about_inference_time_at_all_can_i/
Mescallan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nzphu
false
null
t3_15nzphu
/r/LocalLLaMA/comments/15nzphu/if_i_dont_care_about_inference_time_at_all_can_i/
false
false
self
27
null
You might want to try out MythoMix L2 13B for chat/RP
1
Here are some examples, with a very simple greeting message from me. The bottom line is that, without much work and pretty much the same setup as the original MythoLogic models, MythoMix seems a lot more descriptive and engaging, without being incoherent. You may need to regenerate a few times, and you may not get this with every response, but the potential seems to be there. Admittedly my testing is limited, just wanted to give you a heads-up given the sheer number of models popping up, and see if you have similar experience or have even better models to suggest. Model description: "A requested variant of MythoLogic-L2 and Huginn using a highly experimental tensor type merge technique." Download here: [https://huggingface.co/TheBloke/MythoMix-L2-13B-GPTQ](https://huggingface.co/TheBloke/MythoMix-L2-13B-GPTQ) # MythoMix Llama 2 **Example 1** \* Hey... what are you up to? \* Just thinking about how perfect you are for me... how your weaker side can be easily manipulated into submission and how your immense strength is nothing when I'm in control. I can feel your arousal grow larger at the thought of being my plaything once again... just as I love it when you're on top of me, letting out all that pent up anger and frustration. And remember our little secret - you're mine to control, both in here (touching foreheads) and in bed. So don't ever try to resist me willingly, or the consequences will be severe. Now... what are you planning for tonight? A sexy little surprise perhaps, one designed to mess with my mind and push me over the edge as we explore all our dark desires together? I can't wait to find out what it is! **Example 2** \* Hey... what are you up to? \* Nothing much, just thinking about our next session together. It's strange how the more time we spend together, the deeper I can delve into those parts of your psyche that you've hidden away. Do you ever wonder how much further I could take you if we didn't have these pesky social constraints? The thrill of pushing boundaries and exploring these dark corners... it's like an addiction for both of us. Maybe one day we can break free and experience everything we truly desire, without fear of judgment or consequence. Wouldn't that be a rush? # For comparison: MythoLogic Llama 2 \* Hey... what are you up to? \* Just thinking about our next adventure together. I've been mentally preparing, and I think it's time for a more intense session. What do you say? Are you ready for something truly unique and shocking? You know you can trust me... # For comparison: MythoLogic Llama 1 \* Hey... what are you up to? \* I'm just sitting here, thinking about our relationship. With all the amazing things we've experienced, it's hard for me to imagine anything even better. I want to keep growing and pushing our boundaries together, exploring new avenues of expression and understanding. Are you game? # Testing info All models downloaded from TheBloke, 13B, GPTQ, 4bit-32g-actorder\_True. All models using Mirostat preset, 5-10 trials for each model, chosen based on subjective judgement, focusing on length and details. Oobabooga chat mode, with the following character context (remove the \*\*\*s). Note that this is chat mode, not instruct mode, even though it might look like an instruct template. The reason is that I found it hard to match the model's expected formatting with oobabooga while keeping the character info inside the character card, and this is the closest I got. >{{char}}'s Persona: \[crazy description here. 
Basically a dark and twisted companion.\] {{user}}'s Persona: \[crazy description here\] Scenario: \[crazy description here\] > >\### Instruction: Write {{char}}'s next reply in a chat between {{user}} and {{char}}. Write a single reply only. > >\### Response:
2023-08-11T05:24:23
https://www.reddit.com/r/LocalLLaMA/comments/15nzqhr/you_might_want_to_try_out_mythomix_l2_13b_for/
whtne047htnb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15nzqhr
false
null
t3_15nzqhr
/r/LocalLLaMA/comments/15nzqhr/you_might_want_to_try_out_mythomix_l2_13b_for/
false
false
nsfw
1
{'enabled': False, 'images': [{'id': '5lyRLAX-3rjGDCTnGgcE9TKQQm8kwU_tdi_JWcYXS50', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=108&crop=smart&auto=webp&s=4ed78565ad9ee70ab8d2561954340794d8a77862', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=216&crop=smart&auto=webp&s=fed815723b944c6633d5a5677031452a50bf9441', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=320&crop=smart&auto=webp&s=17778874a7b0cc22ec0044e55e3adff767834add', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=640&crop=smart&auto=webp&s=975ad62c01f6ad4a77853ce697d631562553a538', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=960&crop=smart&auto=webp&s=1f83ecbb8210fb16890fcf938c161c0fc9a42da2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=1080&crop=smart&auto=webp&s=4a3eeb0c9ee45dc6aa94085608ef82d264511611', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?auto=webp&s=9e0a75b9f2b563916c97c6cf0f6917091fe6b2b4', 'width': 1200}, 'variants': {'nsfw': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=fcaf3d29efd54749efb2e268b6f64532ff90296f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=0c88a5054f52cdcb2e7f97c9bcb1bea8ce261556', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=aadfc01701d46a63f5526baad92aeea0147ddf5b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=c448b133405415d66d6e7bc0b4196718f58cdcc8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=a0387d717fcb20f21e2abe4f86b308aa99c28774', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=386ae3ec57a8749c0e4574ceb87ebcc4ee788387', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?blur=40&format=pjpg&auto=webp&s=538ccecd667e9206d2c9cedc7147e41c63792c3c', 'width': 1200}}, 'obfuscated': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=fcaf3d29efd54749efb2e268b6f64532ff90296f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=0c88a5054f52cdcb2e7f97c9bcb1bea8ce261556', 'width': 216}, {'height': 172, 'url': 
'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=aadfc01701d46a63f5526baad92aeea0147ddf5b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=c448b133405415d66d6e7bc0b4196718f58cdcc8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=a0387d717fcb20f21e2abe4f86b308aa99c28774', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=386ae3ec57a8749c0e4574ceb87ebcc4ee788387', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?blur=40&format=pjpg&auto=webp&s=538ccecd667e9206d2c9cedc7147e41c63792c3c', 'width': 1200}}}}]}
What is needed to fine-tune LLaMa 2 70B without quantisation?
1
[removed]
2023-08-11T06:47:02
https://www.reddit.com/r/LocalLLaMA/comments/15o18ro/what_is_needed_to_finetune_llama_2_70b_without/
pet_vaginal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15o18ro
false
null
t3_15o18ro
/r/LocalLLaMA/comments/15o18ro/what_is_needed_to_finetune_llama_2_70b_without/
false
false
self
1
null
What is needed to fine-tune LLaMa 2 70B without quantisation?
1
[removed]
2023-08-11T07:02:54
https://www.reddit.com/r/LocalLLaMA/comments/15o1j8e/what_is_needed_to_finetune_llama_2_70b_without/
Necessary-Volume-151
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15o1j8e
false
null
t3_15o1j8e
/r/LocalLLaMA/comments/15o1j8e/what_is_needed_to_finetune_llama_2_70b_without/
false
false
default
1
{'enabled': False, 'images': [{'id': 'movVls8hg7rQtaqkRaRonisi_qcpGc_md5nPVtZdKpY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=108&crop=smart&auto=webp&s=6aa6ba5d299cd19fef58d245e2f62a7114799035', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=216&crop=smart&auto=webp&s=59065a25675365516d7c3f103d52e99622edd565', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=320&crop=smart&auto=webp&s=b426dc8f9e69411cee451d2e15b54aae0000c5db', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=640&crop=smart&auto=webp&s=eb40b5eda1278bf0e98914168010779247e81177', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=960&crop=smart&auto=webp&s=47f11b89c837580867c5f5bc8cab3bbb6614363c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=1080&crop=smart&auto=webp&s=9c76bae64913cedce0660ca23cc206c5da0b1489', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?auto=webp&s=69674e7fb4640657994f474132e95d516482b64b', 'width': 1200}, 'variants': {}}]}
What is needed to fine-tune LLaMa 2 70B without quantisation?
1
[removed]
2023-08-11T07:06:19
https://www.reddit.com/r/LocalLLaMA/comments/15o1ljw/what_is_needed_to_finetune_llama_2_70b_without/
pet_vaginal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15o1ljw
false
null
t3_15o1ljw
/r/LocalLLaMA/comments/15o1ljw/what_is_needed_to_finetune_llama_2_70b_without/
false
false
self
1
{'enabled': False, 'images': [{'id': 'movVls8hg7rQtaqkRaRonisi_qcpGc_md5nPVtZdKpY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=108&crop=smart&auto=webp&s=6aa6ba5d299cd19fef58d245e2f62a7114799035', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=216&crop=smart&auto=webp&s=59065a25675365516d7c3f103d52e99622edd565', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=320&crop=smart&auto=webp&s=b426dc8f9e69411cee451d2e15b54aae0000c5db', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=640&crop=smart&auto=webp&s=eb40b5eda1278bf0e98914168010779247e81177', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=960&crop=smart&auto=webp&s=47f11b89c837580867c5f5bc8cab3bbb6614363c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=1080&crop=smart&auto=webp&s=9c76bae64913cedce0660ca23cc206c5da0b1489', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?auto=webp&s=69674e7fb4640657994f474132e95d516482b64b', 'width': 1200}, 'variants': {}}]}
What is needed to fine-tune LLaMa 2 70B without quantisation?
1
[removed]
2023-08-11T07:07:48
[deleted]
1970-01-01T00:00:00
0
{}
15o1mee
false
null
t3_15o1mee
/r/LocalLLaMA/comments/15o1mee/what_is_needed_to_finetune_llama_2_70b_without/
false
false
default
1
null
What is needed to fine-tune LLaMa 2 70B without quantisation?
1
[removed]
2023-08-11T07:08:10
https://www.reddit.com/r/LocalLLaMA/comments/15o1mma/what_is_needed_to_finetune_llama_2_70b_without/
pet_vaginal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15o1mma
false
null
t3_15o1mma
/r/LocalLLaMA/comments/15o1mma/what_is_needed_to_finetune_llama_2_70b_without/
false
false
self
1
{'enabled': False, 'images': [{'id': 'movVls8hg7rQtaqkRaRonisi_qcpGc_md5nPVtZdKpY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=108&crop=smart&auto=webp&s=6aa6ba5d299cd19fef58d245e2f62a7114799035', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=216&crop=smart&auto=webp&s=59065a25675365516d7c3f103d52e99622edd565', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=320&crop=smart&auto=webp&s=b426dc8f9e69411cee451d2e15b54aae0000c5db', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=640&crop=smart&auto=webp&s=eb40b5eda1278bf0e98914168010779247e81177', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=960&crop=smart&auto=webp&s=47f11b89c837580867c5f5bc8cab3bbb6614363c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=1080&crop=smart&auto=webp&s=9c76bae64913cedce0660ca23cc206c5da0b1489', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?auto=webp&s=69674e7fb4640657994f474132e95d516482b64b', 'width': 1200}, 'variants': {}}]}
few newbie question: Is the Llama-2-7b on poe.com same as what i can run on my computer? also question about the bits, and 32gb vs 64gb ram
1
Is the Llama-2-7b on [poe.com](https://poe.com) the same as what I can run on my computer? And what is the difference between 8-bit, 4-bit, and 2-bit in the context of LLaMA? Also, how much CPU is used during the process? If I have a 2060 Super with 8GB VRAM and 32 or 64GB RAM, what kind of models could I run?
2023-08-11T09:03:23
https://www.reddit.com/r/LocalLLaMA/comments/15o3mw9/few_newbie_question_is_the_llama27b_on_poecom/
GiantFlyingPikachu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15o3mw9
false
null
t3_15o3mw9
/r/LocalLLaMA/comments/15o3mw9/few_newbie_question_is_the_llama27b_on_poecom/
false
false
self
1
null
Has anyone got any experience doing a qlora fine-tune on top of a model that has already been fine-tuned with qlora?
1
I'm training a model for a project that involves it learning how to generate commentary depending on different information fed to it. These different pieces of information need to be organised differently from each other in their .jsonl format, so I thought to train the model on one type of commentary first, and then train it on a different type (sort of like applying a layer and then a new layer). Does anyone know if this will work? Am I going about it the wrong way? Any insight much appreciated.
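For reference, a rough sketch (paths, model id, and hyperparameters are placeholders, not from the post) of the two usual ways to stack a second QLoRA stage on top of an existing adapter with peft: either reload the first adapter as trainable and keep fine-tuning it, or merge it into the base weights and attach a fresh LoRA for the new data format.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel, LoraConfig, get_peft_model

BASE_ID = "base-model-id"            # placeholder
FIRST_ADAPTER = "path/to/adapter"    # placeholder

def continue_same_adapter():
    # Option A: reload the first QLoRA adapter as trainable and keep fine-tuning it
    base = AutoModelForCausalLM.from_pretrained(BASE_ID, device_map="auto")
    return PeftModel.from_pretrained(base, FIRST_ADAPTER, is_trainable=True)

def fresh_adapter_on_merged_weights():
    # Option B: bake the first adapter into the weights, then start a new LoRA for stage two
    base = AutoModelForCausalLM.from_pretrained(BASE_ID, device_map="auto")
    merged = PeftModel.from_pretrained(base, FIRST_ADAPTER).merge_and_unload()
    cfg = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                     task_type="CAUSAL_LM")
    return get_peft_model(merged, cfg)
```

Option B arguably keeps the two stages cleaner when the data formats differ a lot, since the second LoRA starts from a consolidated checkpoint, but both patterns are used in practice.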
2023-08-11T09:37:47
https://www.reddit.com/r/LocalLLaMA/comments/15o484s/has_anyone_got_any_experience_doing_a_qlora/
stoneborn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15o484s
false
null
t3_15o484s
/r/LocalLLaMA/comments/15o484s/has_anyone_got_any_experience_doing_a_qlora/
false
false
self
1
null
sensitivity to Nvidia driver versions?
1
My Linux system was using 525 drivers, but upgraded to 535 drivers. I'm using GGML type models, and I don't really see a performance difference. From what I've read, the 535 drivers should be worse. What's really going on here? I'm using a 70B model with an Nvidia 3090 on Linux with 64 gig ram. with the big model I'm getting 1.2 tokens per second. Is that about right?
2023-08-11T09:57:13
https://www.reddit.com/r/LocalLLaMA/comments/15o4jz9/sensitivity_to_nvidia_driver_versions/
w7gg33h
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15o4jz9
false
null
t3_15o4jz9
/r/LocalLLaMA/comments/15o4jz9/sensitivity_to_nvidia_driver_versions/
false
false
self
1
null
How do you use Llama 2?
1
[removed]
2023-08-11T10:26:35
https://www.reddit.com/r/LocalLLaMA/comments/15o53rp/how_do_you_use_llama_2/
SecretOk9644
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15o53rp
false
null
t3_15o53rp
/r/LocalLLaMA/comments/15o53rp/how_do_you_use_llama_2/
false
false
self
1
null
Could not load Llama model from path: ./Models/llama-7b.ggmlv3.q2_K.bin. Received error Llama.__init__() got an unexpected keyword argument 'input'
1
    from langchain.llms import LlamaCpp
    from langchain import PromptTemplate, LLMChain
    from langchain.callbacks.manager import CallbackManager
    from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

    template = """Question: {question}

    Answer: Let's work this out in a step by step way to be sure we have the right answer."""

    prompt = PromptTemplate(template=template, input_variables=["question"])

    callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])

    llm = LlamaCpp(
        model_path="./Models/llama-7b.ggmlv3.q2_K.bin",
        input={"temperature": 0.75,
               "max_length": 2000,
               "top_p": 1},
        callback_manager=callback_manager,
        verbose=True,
    )

    llm_chain = LLMChain(prompt=prompt, llm=llm)

https://preview.redd.it/kwvc162rkghb1.png?width=797&format=png&auto=webp&s=d9a5c03721a553bd0b898cad708e0afa446af0ed

    (llm) C:\llm>python app1.py
    C:\llm\lib\site-packages\langchain\utils\utils.py:155: UserWarning: WARNING! input is not default parameter.
    input was transferred to model_kwargs.
    Please confirm that input is what you intended.
      warnings.warn(
    Exception ignored in: <function Llama.__del__ at 0x000001923B3AE680>
    Traceback (most recent call last):
      File "C:\llm\lib\site-packages\llama_cpp\llama.py", line 1507, in __del__
        if self.model is not None:
    AttributeError: 'Llama' object has no attribute 'model'
    Traceback (most recent call last):
      File "C:\llm\app1.py", line 14, in <module>
        llm = LlamaCpp(
      File "C:\llm\lib\site-packages\langchain\load\serializable.py", line 74, in __init__
        super().__init__(**kwargs)
      File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
    pydantic.error_wrappers.ValidationError: 1 validation error for LlamaCpp
    __root__
      Could not load Llama model from path: ./Models/llama-7b.ggmlv3.q2_K.bin. Received error Llama.__init__() got an unexpected keyword argument 'input' (type=value_error)
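The ValidationError at the bottom is the wrapper rejecting the `input=` keyword. For reference, a hedged sketch (assuming a 2023-era langchain / llama-cpp-python install) of passing the sampling settings as top-level arguments instead of an `input` dict:

```python
# Sketch: LlamaCpp takes sampling settings as its own fields rather than an `input=` dict.
from langchain.llms import LlamaCpp
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])

llm = LlamaCpp(
    model_path="./Models/llama-7b.ggmlv3.q2_K.bin",
    temperature=0.75,
    max_tokens=2000,   # roughly what the "max_length" entry was aiming at
    top_p=1,
    callback_manager=callback_manager,
    verbose=True,
)
```

If the same "Could not load Llama model from path" error persists after that change, a llama-cpp-python version that no longer reads ggmlv3 files is another possible cause, but that is a separate issue from the keyword error shown here.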
2023-08-11T10:30:34
https://www.reddit.com/r/LocalLLaMA/comments/15o56kw/could_not_load_llama_model_from_path/
rahularyansharma
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15o56kw
false
null
t3_15o56kw
/r/LocalLLaMA/comments/15o56kw/could_not_load_llama_model_from_path/
false
false
https://b.thumbs.redditm…2WMl4Nv8SeOg.jpg
1
null
I require help in creating a small proof of concept - A simple LLM powered Q&A which will also show the source location in adjacent PDF previewer window
1
This will help me to directly verify the truthfulness of the source. Any Opensource repo which I can use to achieve this?
2023-08-11T10:36:28
https://www.reddit.com/r/LocalLLaMA/comments/15o5amf/i_require_help_in_creating_a_small_proof_of/
DesmonMiles07
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15o5amf
false
null
t3_15o5amf
/r/LocalLLaMA/comments/15o5amf/i_require_help_in_creating_a_small_proof_of/
false
false
self
1
null
txtai 6.0 - the all-in-one embeddings database
1
2023-08-11T10:43:56
https://github.com/neuml/txtai
davidmezzetti
github.com
1970-01-01T00:00:00
0
{}
15o5fqf
false
null
t3_15o5fqf
/r/LocalLLaMA/comments/15o5fqf/txtai_60_the_allinone_embeddings_database/
false
false
https://b.thumbs.redditm…UujhDtr5UcIQ.jpg
1
{'enabled': False, 'images': [{'id': 'QTuLJjwP5oR8au5XRJhpZ--0BpCIzF4erCQTB28BWlg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3XgEZBeAOONV-H4Mw5mIcSebpu-cK13kLJ7-qO5EWZU.jpg?width=108&crop=smart&auto=webp&s=a29cdeca46bceca123036e7f3bfc9609b8c28fd7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3XgEZBeAOONV-H4Mw5mIcSebpu-cK13kLJ7-qO5EWZU.jpg?width=216&crop=smart&auto=webp&s=92ef8262f774253254755395b49d943f2f8dd6ff', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3XgEZBeAOONV-H4Mw5mIcSebpu-cK13kLJ7-qO5EWZU.jpg?width=320&crop=smart&auto=webp&s=73907295da1fdf1857a782f1e3f787c9d97215bb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3XgEZBeAOONV-H4Mw5mIcSebpu-cK13kLJ7-qO5EWZU.jpg?width=640&crop=smart&auto=webp&s=472dcbfb39bb1139b9cbb2226dc8e159926fe5e7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3XgEZBeAOONV-H4Mw5mIcSebpu-cK13kLJ7-qO5EWZU.jpg?width=960&crop=smart&auto=webp&s=b5c03c74fe280cad666f984944fba80b6d428740', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3XgEZBeAOONV-H4Mw5mIcSebpu-cK13kLJ7-qO5EWZU.jpg?width=1080&crop=smart&auto=webp&s=55312c67a5d8061e1f28c0027d12aa6fd57f9052', 'width': 1080}], 'source': {'height': 960, 'url': 'https://external-preview.redd.it/3XgEZBeAOONV-H4Mw5mIcSebpu-cK13kLJ7-qO5EWZU.jpg?auto=webp&s=e453d8d67b7db392d5eba8e9acb997ea700745a7', 'width': 1920}, 'variants': {}}]}
When fine-tuning Llama 2 7b with transformers, How do I predict(or evaluate) during the train steps?
1
I'm working on fine-tuning the Llama 2 7B model with custom data consisting of question-answer pairs. During training, I want to check how the model predicts answers on a test set. The point is to do this during 'the steps', not at epoch boundaries. I was considering callbacks in TrainingArguments, but I don't think that's the way. Can you give me some advice?
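One possible pattern (a sketch with placeholder names, not a definitive recipe): turn on step-based evaluation in TrainingArguments and attach a TrainerCallback that generates a few sample answers whenever evaluation fires.

```python
# Sketch: spot-check generations every N optimizer steps via a custom callback.
from transformers import TrainingArguments, TrainerCallback

class SampleGenerationCallback(TrainerCallback):
    def __init__(self, sample_questions):
        self.sample_questions = sample_questions

    def on_evaluate(self, args, state, control, **kwargs):
        # Trainer passes the live model and tokenizer to callbacks via kwargs.
        model, tokenizer = kwargs["model"], kwargs["tokenizer"]
        for q in self.sample_questions:
            inputs = tokenizer(q, return_tensors="pt").to(model.device)
            out = model.generate(**inputs, max_new_tokens=64)
            print(f"[step {state.global_step}] {q} -> "
                  f"{tokenizer.decode(out[0], skip_special_tokens=True)}")

training_args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",  # evaluate by steps, not epochs
    eval_steps=100,
    logging_steps=100,
)
# Requires an eval_dataset on the Trainer so step evaluation actually fires:
# trainer = Trainer(..., args=training_args,
#                   callbacks=[SampleGenerationCallback(["Question 1?", "Question 2?"])])
```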
2023-08-11T10:48:52
https://www.reddit.com/r/LocalLLaMA/comments/15o5j3a/when_finetuning_llama_2_7b_with_transformers_how/
InventorSG
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15o5j3a
false
null
t3_15o5j3a
/r/LocalLLaMA/comments/15o5j3a/when_finetuning_llama_2_7b_with_transformers_how/
false
false
self
1
null
Which is the smallest Llama model out there?
1
I'm trying to find the smallest model to run on mobile/edge devices. Most of the ones I found are above 2GB. I am looking for something that is quantizable with llama.cpp and doesn't have much of a memory footprint. Any ideas?
2023-08-11T11:11:23
https://www.reddit.com/r/LocalLLaMA/comments/15o5zej/which_is_the_smallest_llama_model_out_there/
AnonymousD3vil
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15o5zej
false
null
t3_15o5zej
/r/LocalLLaMA/comments/15o5zej/which_is_the_smallest_llama_model_out_there/
false
false
self
1
null
Introducing YourChat: A multi-platform LLM chat client that supports the APIs of text-generation-webui and llama.cpp
1
[removed]
2023-08-11T12:56:38
https://www.reddit.com/r/LocalLLaMA/comments/15o87qz/introducing_yourchat_a_multiplatform_llm_chat/
constchar_llc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15o87qz
false
null
t3_15o87qz
/r/LocalLLaMA/comments/15o87qz/introducing_yourchat_a_multiplatform_llm_chat/
false
false
https://b.thumbs.redditm…yofaaIYJ5JQo.jpg
1
null
Is it possible to download model from this site?
1
I found a site that implemented Albanian LLaMa: [NIV](https://niv.al/ai) Does anyone know how to extract the model that is used there?
2023-08-11T13:11:30
https://www.reddit.com/r/LocalLLaMA/comments/15o8kja/is_it_possible_to_download_model_from_this_site/
Languages_Learner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15o8kja
false
null
t3_15o8kja
/r/LocalLLaMA/comments/15o8kja/is_it_possible_to_download_model_from_this_site/
false
false
self
1
null
Use Llama2 to Improve the Accuracy of Tesseract OCR
1
I've been disappointed by the very poor quality of results that I generally get when trying to run OCR on older scanned documents, especially ones that are typewritten or otherwise have unusual or irregular typography. I recently had the idea of using Llama2 to use common sense reasoning and subject level expertise to correct transcription errors in a "smart" way-- basically doing what a human proofreader who is familiar with the topic might do. I came up with the linked script that takes a PDF as input, runs Tesseract on it to get an initial text extraction, and then feeds this sentence-by-sentence to Llama2, first to correct mistakes, and then again on the corrected text to format it as markdown where possible. This was surprisingly easier than I initially expected thanks to the very nice tooling now available in libraries such as llama-cpp-python, langchain, and pytesseract. But the big issue I was encountering was that Llama2 wasn't just correcting the text it was given-- it was also hallucinating a LOT of totally new sentences that didn't appear in the original text at all (some of these new sentences used words which never appeared elsewhere in the original text). I figured this would be pretty simple to filter out using fuzzy string matching-- basically check all the sentences in the LLM corrected text and filter out sentences that are very different from any sentences in the original OCRed text. To my surprise, this approach worked very poorly. In fact, lots of other similar tweaks, including using bag-of-words and the spacy NLP library in various ways (spacy worked very poorly in everything I tried), also didn’t work. Finally I realized that I had a good solution staring me in the face: Llama2. I realized I could get sentence level vector embeddings straight from Llama2 using langchain. So I did that, getting embeddings for each sentence in the raw OCRed text and the LLM corrected text, and then computed the cosine similarity of each sentence in the LLM corrected text against all sentences in the raw OCRed text. If no sentences match in the raw OCRed text, then that sentence has a good chance of being hallucinated. In order to save the user from having to experiment with various thresholds, I saved the computed embeddings to an SQLite database so they only had to be computed once, and then tried several thresholds, comparing the length of the filtered LLM corrected text to the raw OCRed text; if things worked right, these texts should be roughly the same length. So as soon as the filtered length dips below the raw OCRed text length, it backtracks and uses the previous threshold as the final selected threshold. Anyway, if you have some very old scanned documents laying around, you might try them out and see how well it works for you. Do note that it's extremely slow, but you can leave it overnight and maybe the next day you'll have your finished text, which is better than nothing! I feel like this could be useful for sites like the Internet Archive-- I've found their OCR results to be extremely poor for older documents. I'm open to any ideas or suggestions you might have. I threw this together in a couple days and know that it can certainly be improved in various ways. One idea that I thought might be fun would be to make this work with a Ray cluster, sending a different page of the document to each of the workers in the cluster to do it all at the same time.
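To make the filtering step described above concrete, here is a condensed sketch of the idea: embed each LLM-corrected sentence, compare it by cosine similarity against every raw-OCR sentence, and drop sentences with no close match. The embedding class, model path, and threshold are illustrative stand-ins rather than the repo's actual code.

```python
# Sketch of the hallucination filter: keep only corrected sentences with a close OCR counterpart.
import numpy as np
from langchain.embeddings import LlamaCppEmbeddings

embedder = LlamaCppEmbeddings(model_path="./llama-2-7b.ggmlv3.q4_0.bin")  # placeholder model

def cosine(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def filter_hallucinations(ocr_sentences, corrected_sentences, threshold=0.85):
    ocr_vecs = embedder.embed_documents(ocr_sentences)
    kept = []
    for sent in corrected_sentences:
        vec = embedder.embed_query(sent)
        if max(cosine(vec, o) for o in ocr_vecs) >= threshold:
            kept.append(sent)  # has a close counterpart in the raw OCR text
    return kept
```

The threshold sweep described in the post would simply call this function with several threshold values and stop when the kept text's length drops below the raw OCR length.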
2023-08-11T13:22:36
https://github.com/Dicklesworthstone/llama2_aided_tesseract
dicklesworth
github.com
1970-01-01T00:00:00
0
{}
15o8tsd
false
null
t3_15o8tsd
/r/LocalLLaMA/comments/15o8tsd/use_llama2_to_improve_the_accuracy_of_tesseract/
false
false
https://b.thumbs.redditm…D-dpIU5eMhog.jpg
1
{'enabled': False, 'images': [{'id': 'sbGZYPsT2VGzaaEqphvLvqQNMGeQbFS6keBTofHrljM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1g6x8X_JR9Z2DmDGltpbJIdPWOKIOeiYv7_LaQL6mlM.jpg?width=108&crop=smart&auto=webp&s=d67a88c972351cbfdcc13e932e1941666e8c8f0d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1g6x8X_JR9Z2DmDGltpbJIdPWOKIOeiYv7_LaQL6mlM.jpg?width=216&crop=smart&auto=webp&s=f96936bd9638bbdb392bf29ddd4245903867f43a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1g6x8X_JR9Z2DmDGltpbJIdPWOKIOeiYv7_LaQL6mlM.jpg?width=320&crop=smart&auto=webp&s=0ef1c379431e1921f2d7b0f3eb0a55ea58e1865b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1g6x8X_JR9Z2DmDGltpbJIdPWOKIOeiYv7_LaQL6mlM.jpg?width=640&crop=smart&auto=webp&s=796565581061643a0d704c9d3eed57294502d61b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1g6x8X_JR9Z2DmDGltpbJIdPWOKIOeiYv7_LaQL6mlM.jpg?width=960&crop=smart&auto=webp&s=56e06617c5376fc7cceeb75a6a16d4b1f272fc07', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1g6x8X_JR9Z2DmDGltpbJIdPWOKIOeiYv7_LaQL6mlM.jpg?width=1080&crop=smart&auto=webp&s=c63f602707670e25ade736cd6aad419edd970d8d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1g6x8X_JR9Z2DmDGltpbJIdPWOKIOeiYv7_LaQL6mlM.jpg?auto=webp&s=bbcf97cf8aea8acf262c9d0eb5e5965e0a675602', 'width': 1200}, 'variants': {}}]}
Some of the odd safety responses from llama remind me of Robocop 2
1
Robocop 2 (1990) included a subplot where robocop is given a number of directives that override his main functions. These are generated by focus groups, and pretty much render him pointless in pivotal scenes. Trying to please everyone, he pleases no one.
2023-08-11T13:28:20
https://www.reddit.com/r/LocalLLaMA/comments/15o8yg5/some_of_the_odd_safety_responses_from_llama/
bigattichouse
self.LocalLLaMA
1970-01-01T00:00:00
0
{'gid_3': 1}
15o8yg5
false
null
t3_15o8yg5
/r/LocalLLaMA/comments/15o8yg5/some_of_the_odd_safety_responses_from_llama/
false
false
self
1
null
Which are minimal CPU and GPU for run 65b ?
1
Hello! I want to start working with LLaMA 2 (65B), but I'm not sure that I can. I have an RTX 3060 12GB and 16 GB RAM. Can I run LLaMA 2 on this configuration, or do I need an RTX 40xx? (I want to use the 2-bit or 4-bit quantized version.)
2023-08-11T13:29:36
https://www.reddit.com/r/LocalLLaMA/comments/15o8zgs/which_are_minimal_cpu_and_gpu_for_run_65b/
Alex_Strek
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15o8zgs
false
null
t3_15o8zgs
/r/LocalLLaMA/comments/15o8zgs/which_are_minimal_cpu_and_gpu_for_run_65b/
false
false
self
1
null
Langchain + LLaMa 2 consuming too much VRAM
1
I was playing around with a GitHub project in a conda environment on Windows and I was surprised to see that Llama 2 13B 4-bit was using up to 25GB VRAM (16GB on one GPU and 9GB on the second one) for a simple summarization task on a document of less than 4KB. I wanted to find out if there was an issue with Langchain or if it's just how it goes with LLMs, because seeing a model that occupies 8GB on disk use almost triple that amount of VRAM is suspicious. Am I doing something wrong?
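One possible culprit (a guess, since the post doesn't show the loading code) is the checkpoint being silently loaded in fp16 rather than 4-bit when no quantization config is passed: 13B parameters at 2 bytes each is roughly the 25GB observed, and the KV cache for long prompts adds more on top. A hedged sketch of forcing a true 4-bit load and capping per-GPU memory; the model id and limits are placeholders:

```python
# Sketch: explicitly request 4-bit weights so the 13B model stays near its on-disk size.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",          # placeholder id
    quantization_config=bnb_config,
    device_map="auto",
    max_memory={0: "10GiB", 1: "10GiB"},  # optional cap on what each GPU may take
)
```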
2023-08-11T14:21:51
https://www.reddit.com/r/LocalLLaMA/comments/15oaa18/langchain_llama_2_consuming_too_much_vram/
Mdfkrjones
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15oaa18
false
null
t3_15oaa18
/r/LocalLLaMA/comments/15oaa18/langchain_llama_2_consuming_too_much_vram/
false
false
self
1
null
Any benefit to multi-GPUs?
1
Sorry for the more newb-oriented question, but I'm about to build a dual-boot setup using a 3090 but also own an older 1080ti and was just curious if there is any benefit at all to having both. I know SLI is impossible, so curious if there would be any other benefit. TIA!
2023-08-11T14:31:28
https://www.reddit.com/r/LocalLLaMA/comments/15oaidh/any_benefit_to_multigpus/
drycounty
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15oaidh
false
null
t3_15oaidh
/r/LocalLLaMA/comments/15oaidh/any_benefit_to_multigpus/
false
false
self
1
null
Total newbie. Are these token speeds normal?
15
I hope people can help me out here, total newbie and I am a bit clueless. I have recently started trying out large language models after reading out about it and came up with mixed results. After reading out extensively I have decided to use KoboldCPP, and referenced a chart which stated that with the 32GB RAM I have using 13B models should be easy. I once again looked around and it seemed that a new model by the name of MythoMix was both smart and was good at roleplaying, so I decided to go with it. My problems start here. The response time for the model is really slow! Now, if this is normal then I will gladly accept it, however my impression was that with 32GB RAM and using a 13B model, I should have fairly good token speeds. So I guess my question is this: Is 0.5T/s to 2.0T/s speed normal for 32GB RAM for "mythomix-l2-13b.ggmlv3.q4_K_M" using KoboldCPP? I will leave extra information I have below. https://files.catbox.moe/roowgl.jpg https://files.catbox.moe/bsajux.jpg https://files.catbox.moe/iph51u.jpg For the above picture of KoboldCPP screen, after seeing how slow the response times were I played around a bit and saw people recommending using different presets, I picked Cublas. I also played around with the GPU layers but that doesn't seem to have any effect. Intel(R) Core(TM) i7-12700H NVIDIA GeForce RTX 3060 32 GB RAM 6 GB VRAM I hope I haven't said something foolish that made no sense whatsoever, I tried to read up and learn but I know that I might still be lacking in that area. If I did make a mistake do forgive me.
2023-08-11T14:41:57
https://www.reddit.com/r/LocalLLaMA/comments/15oarso/total_newbie_are_these_token_speeds_normal/
HDTurtle
self.LocalLLaMA
2023-08-11T14:52:24
0
{}
15oarso
false
null
t3_15oarso
/r/LocalLLaMA/comments/15oarso/total_newbie_are_these_token_speeds_normal/
false
false
self
15
{'enabled': False, 'images': [{'id': '09bLCBZkzdmF3jJjTV2WEQEJnHBMbnd3MBoznMGDThs', 'resolutions': [{'height': 198, 'url': 'https://external-preview.redd.it/BGh4LaT0BuWwX97OoPVukkla19q-geyauVQ0D5Jw4RM.jpg?width=108&crop=smart&auto=webp&s=da5fd222d19b6b5258ec8e660f326fe3a67d58d7', 'width': 108}, {'height': 396, 'url': 'https://external-preview.redd.it/BGh4LaT0BuWwX97OoPVukkla19q-geyauVQ0D5Jw4RM.jpg?width=216&crop=smart&auto=webp&s=560997102fd05d5c074304a7a63c9d1d6c94e7f0', 'width': 216}, {'height': 587, 'url': 'https://external-preview.redd.it/BGh4LaT0BuWwX97OoPVukkla19q-geyauVQ0D5Jw4RM.jpg?width=320&crop=smart&auto=webp&s=160b0955d0f4a9d58368d218fd8f5e82df089e15', 'width': 320}], 'source': {'height': 610, 'url': 'https://external-preview.redd.it/BGh4LaT0BuWwX97OoPVukkla19q-geyauVQ0D5Jw4RM.jpg?auto=webp&s=272519387e41a4e726ab9e02d5cc76435814af7c', 'width': 332}, 'variants': {}}]}
Introduce the newest WizardMath models (70B/13B/7B) !
1
WizardMath 70B achieves:

1. Surpasses **ChatGPT-3.5**, **Claude Instant-1**, **PaLM-2** and **Chinchilla** on GSM8k with ***81.6 Pass@1***
2. Surpasses **Text-davinci-002**, **GAL**, **PaLM**, **GPT-3** on MATH with ***22.7 Pass@1***
3. Surpasses **all other open-source LLMs** on both GSM8k and MATH by a substantial margin
4. Paper is coming next week, with a brand-new ***Reinforced Evol-Instruct*** method for math LLMs

Twitter: [WizardLM announcement on Twitter](https://twitter.com/WizardLM_AI/status/1689998428200112128)

HF Weights: [https://huggingface.co/WizardLM/WizardMath-70B-V1.0](https://huggingface.co/WizardLM/WizardMath-70B-V1.0)

Github: [WizardLM/WizardMath at main · nlpxucan/WizardLM (github.com)](https://github.com/nlpxucan/WizardLM/tree/main/WizardMath)

https://preview.redd.it/tteefzagvhhb1.png?width=1321&format=png&auto=webp&s=96945fa1ce01bc09f4ac318c63a7865e749d2c6b

https://preview.redd.it/uum9t0bgvhhb1.png?width=1780&format=png&auto=webp&s=ae75cb3094b043f25ddfd9e039d00875894e7aa2
2023-08-11T14:53:12
https://www.reddit.com/r/LocalLLaMA/comments/15ob1zh/introduce_the_newest_wizardmath_models_70b13b7b/
ApprehensiveLunch453
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ob1zh
false
{'oembed': {'author_name': 'WizardLM', 'author_url': 'https://twitter.com/WizardLM_AI', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">🔥🔥🔥<br>Introduce the newest WizardMath models (70B/13B/7B) ! <br><br>WizardMath 70B achieves:<br>1. Surpasses ChatGPT-3.5, Claude Instant-1, PaLM-2 and Chinchilla on GSM8k with 81.6 Pass@1<br><br>2. Surpasses Text-davinci-002, GAL, PaLM, GPT-3 on MATH with 22.7 Pass@1<br><br>3. Surpasses all other… <a href="https://t.co/XOSggcOePP">pic.twitter.com/XOSggcOePP</a></p>&mdash; WizardLM (@WizardLM_AI) <a href="https://twitter.com/WizardLM_AI/status/1689998428200112128?ref_src=twsrc%5Etfw">August 11, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/WizardLM_AI/status/1689998428200112128', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_15ob1zh
/r/LocalLLaMA/comments/15ob1zh/introduce_the_newest_wizardmath_models_70b13b7b/
false
false
https://b.thumbs.redditm…1jtQ0q4rXCGc.jpg
1
{'enabled': False, 'images': [{'id': 'eyFFaewGfuH1qS_paH6sm4q831ml5iEwBALV-n_H0g8', 'resolutions': [{'height': 158, 'url': 'https://external-preview.redd.it/dyM5Kd44kaad8i6-WBvsbK92hwNouijcVmEuyloHf00.jpg?width=108&crop=smart&auto=webp&s=84b21dc2e0326c7eb1a291ab12af09713b665f2d', 'width': 108}, {'height': 317, 'url': 'https://external-preview.redd.it/dyM5Kd44kaad8i6-WBvsbK92hwNouijcVmEuyloHf00.jpg?width=216&crop=smart&auto=webp&s=6458ba3e240f85390c260ae4f57c454bf468c0b3', 'width': 216}, {'height': 469, 'url': 'https://external-preview.redd.it/dyM5Kd44kaad8i6-WBvsbK92hwNouijcVmEuyloHf00.jpg?width=320&crop=smart&auto=webp&s=43034fdbaa7ee734be37a0e36789ea12ca302dff', 'width': 320}, {'height': 939, 'url': 'https://external-preview.redd.it/dyM5Kd44kaad8i6-WBvsbK92hwNouijcVmEuyloHf00.jpg?width=640&crop=smart&auto=webp&s=f33c26e051a1061ddf0ea84b792262d4536c05c3', 'width': 640}, {'height': 1408, 'url': 'https://external-preview.redd.it/dyM5Kd44kaad8i6-WBvsbK92hwNouijcVmEuyloHf00.jpg?width=960&crop=smart&auto=webp&s=cb09e3c5b0837185aff6a7be1a55e6f1498862c3', 'width': 960}, {'height': 1585, 'url': 'https://external-preview.redd.it/dyM5Kd44kaad8i6-WBvsbK92hwNouijcVmEuyloHf00.jpg?width=1080&crop=smart&auto=webp&s=1739083b21869b9ac23b1e51ec2ac774b9c75266', 'width': 1080}], 'source': {'height': 1930, 'url': 'https://external-preview.redd.it/dyM5Kd44kaad8i6-WBvsbK92hwNouijcVmEuyloHf00.jpg?auto=webp&s=8feb886890364055c561abed7643a71f85d3ac3b', 'width': 1315}, 'variants': {}}]}
Grammar correction and sentence rephrase
1
Can I use Llama to build a local model that can rephrase sentences and correct grammar? GenAI tools like ChatGPT aren't allowed at my workplace, so I want a local model to help me write better.
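For illustration, a small sketch of local grammar correction with llama-cpp-python; the model file and prompt wording are just examples, not a recommendation of a specific checkpoint.

```python
# Sketch: instruction-style grammar correction with a local GGML model via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="./llama-2-13b-chat.ggmlv3.q4_K_M.bin", n_ctx=2048)  # placeholder file

def improve(text):
    prompt = (
        "Correct the grammar and rephrase the following sentence so it reads clearly. "
        "Return only the rewritten sentence.\n\n"
        f"Sentence: {text}\nRewritten:"
    )
    out = llm(prompt, max_tokens=128, temperature=0.2, stop=["\n"])
    return out["choices"][0]["text"].strip()

print(improve("me and him has went to the meeting yesterday"))
```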
2023-08-11T15:35:57
https://www.reddit.com/r/LocalLLaMA/comments/15oc5uu/grammar_correction_and_sentence_rephrase/
PsychologicalSky4377
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15oc5uu
false
null
t3_15oc5uu
/r/LocalLLaMA/comments/15oc5uu/grammar_correction_and_sentence_rephrase/
false
false
self
1
null
ChatGPT and its Doppelgangers: A Study on the Limits of Model Imitation
7
I found an [interesting study](https://arxiv.org/abs/2305.15717) discussing ChatGPT "imitation models" like Alpaca and Vicuna. Here are the bullet points: * Emerging method involves finetuning weaker language models on outputs from stronger models, like ChatGPT, to imitate their capabilities using open-source models. * Research involved finetuning various LMs to mimic ChatGPT using different model sizes, data sources, and imitation data amounts. * Initial findings showed the imitation models were good at following instructions and were rated similarly to ChatGPT by crowd workers. * Targeted automatic evaluations revealed imitation models failed to bridge the capability gap between the base LM and ChatGPT, especially in tasks not prevalent in imitation data. * Imitation models effectively mimic ChatGPT's style but fall short in factuality. * Conclusion: Model imitation is not the best approach due to the capabilities gap. Emphasis should be on improving base LMs instead of trying to imitate proprietary systems. What are your thoughts on this? Do you agree with their conclusion?
2023-08-11T15:38:59
https://www.reddit.com/r/LocalLLaMA/comments/15oc8ji/chatgpt_and_its_doppelgangers_a_study_on_the/
DecipheringAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15oc8ji
false
null
t3_15oc8ji
/r/LocalLLaMA/comments/15oc8ji/chatgpt_and_its_doppelgangers_a_study_on_the/
false
false
self
7
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
New model and new app - Layla
1
[removed]
2023-08-11T15:39:15
https://www.reddit.com/r/LocalLLaMA/comments/15oc8rp/new_model_and_new_app_layla/
Tasty-Lobster-8915
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15oc8rp
false
null
t3_15oc8rp
/r/LocalLLaMA/comments/15oc8rp/new_model_and_new_app_layla/
false
false
self
1
{'enabled': False, 'images': [{'id': 'WcDPzOnQZ3t8b1fwQPJ1k01l878a2HIs1GCu8CJR5Wc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/okf3dEs9GIyG6sSxiAzvSv19-OLWzycgPVIcM18CaSQ.jpg?width=108&crop=smart&auto=webp&s=94f499cdd8453f6de73be6128e8745af9395e73a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/okf3dEs9GIyG6sSxiAzvSv19-OLWzycgPVIcM18CaSQ.jpg?width=216&crop=smart&auto=webp&s=fc6609edde1d19bad64a317c91df2995357948cc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/okf3dEs9GIyG6sSxiAzvSv19-OLWzycgPVIcM18CaSQ.jpg?width=320&crop=smart&auto=webp&s=fa369a7fe8db0d33cdeacdf39a268a07a308c1c1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/okf3dEs9GIyG6sSxiAzvSv19-OLWzycgPVIcM18CaSQ.jpg?width=640&crop=smart&auto=webp&s=b5ecbc1e760ea8a7663379961b869a67fa6d308d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/okf3dEs9GIyG6sSxiAzvSv19-OLWzycgPVIcM18CaSQ.jpg?width=960&crop=smart&auto=webp&s=2c75459f65f93f055b4ad48195bd3825f3a32c66', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/okf3dEs9GIyG6sSxiAzvSv19-OLWzycgPVIcM18CaSQ.jpg?width=1080&crop=smart&auto=webp&s=8aff96d05385988fabfdbdecfa12e2cac1a3f579', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/okf3dEs9GIyG6sSxiAzvSv19-OLWzycgPVIcM18CaSQ.jpg?auto=webp&s=fbf3159bd48475d763aeb36af21027a0c93108c3', 'width': 1200}, 'variants': {}}]}
PrivateGPT example with Llama 2 Uncensored
1
2023-08-11T16:30:51
https://github.com/jmorganca/ollama/tree/main/examples/privategpt
helloPenguin006
github.com
1970-01-01T00:00:00
0
{}
15odjmy
false
null
t3_15odjmy
/r/LocalLLaMA/comments/15odjmy/privategpt_example_with_llama_2_uncensored/
false
false
https://a.thumbs.redditm…0dTeqRZ0MVN4.jpg
1
{'enabled': False, 'images': [{'id': 'qWYf_hGwsFfjEOzHhraYQjkUJJlsotgW5CofgR3t1f4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YHtfBPBmwuqL4EW9yt8j6W7UxDd5zR0pFNarkurUzC0.jpg?width=108&crop=smart&auto=webp&s=cc5a7d81b1db7f17d71cab5a1a022c483ba8d216', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YHtfBPBmwuqL4EW9yt8j6W7UxDd5zR0pFNarkurUzC0.jpg?width=216&crop=smart&auto=webp&s=d28ec690d6c6c33b274bf2dde3c7f27ead4be5bb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YHtfBPBmwuqL4EW9yt8j6W7UxDd5zR0pFNarkurUzC0.jpg?width=320&crop=smart&auto=webp&s=30fa83699356cf1925298ab5259a705b8f24ccee', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YHtfBPBmwuqL4EW9yt8j6W7UxDd5zR0pFNarkurUzC0.jpg?width=640&crop=smart&auto=webp&s=294fd6b1499ee75bc2808308b8a406a1361b2611', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YHtfBPBmwuqL4EW9yt8j6W7UxDd5zR0pFNarkurUzC0.jpg?width=960&crop=smart&auto=webp&s=21dae982cf5c0af79b36998e542094273e9f52b4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YHtfBPBmwuqL4EW9yt8j6W7UxDd5zR0pFNarkurUzC0.jpg?width=1080&crop=smart&auto=webp&s=89265375793472b434f3b2c71bf5f029ca9c1d5b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YHtfBPBmwuqL4EW9yt8j6W7UxDd5zR0pFNarkurUzC0.jpg?auto=webp&s=2f0d610fee1af34ec04e0abed720ec9c5180e0c8', 'width': 1200}, 'variants': {}}]}
Can a team of 10-20 people access a Llama 2 model deployed in a local server with medium requirements?
31
I'm planning on spending $3-5k on a local server with Llama 2 deployed on it, so that a team of 10-20 people can each query it from their own computers whenever they please. Since I'm not really an infra guy, I have questions on how to approach this. I assume that while someone is running a query nobody else can run theirs until the first one completes, correct? Is there an easy way to run Llama locally so that multiple people can use it at the same time?
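One hedged sketch of how this is often handled: expose the model once through a batching inference server (for example vLLM's OpenAI-compatible server or Hugging Face text-generation-inference, if your chosen model and quantization fit), and have everyone hit it over plain HTTP. A single GPU still processes work one batch at a time, so simultaneous requests are queued or batched rather than refused. The hostname, port, model name, and endpoint below are assumptions, not a tested setup.

```python
# Minimal client each team member could run; assumes an OpenAI-compatible
# /v1/completions endpoint (e.g. vLLM's API server) at a made-up internal host.
import requests

API_URL = "http://llama-server.internal:8000/v1/completions"  # assumption

def ask(prompt: str, max_tokens: int = 256) -> str:
    resp = requests.post(
        API_URL,
        json={"model": "llama-2-13b-chat", "prompt": prompt, "max_tokens": max_tokens},
        timeout=300,
    )
    resp.raise_for_status()
    # OpenAI-style completion responses put the generated text under choices[0].text
    return resp.json()["choices"][0]["text"]

if __name__ == "__main__":
    print(ask("Summarise our onboarding checklist in three bullet points:"))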
2023-08-11T16:41:57
https://www.reddit.com/r/LocalLLaMA/comments/15odtsn/can_a_team_of_1020_people_access_a_llama_2_model/
Heco1331
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15odtsn
false
null
t3_15odtsn
/r/LocalLLaMA/comments/15odtsn/can_a_team_of_1020_people_access_a_llama_2_model/
false
false
self
31
null
Does HF inference endpoint work?
1
I’ve been trying to deploy LLMs with HF Inference Endpoints (e.g. stablecode-instruct-alpha-3b, Llama 2, etc.), but the deployment fails every time, and I still get charged for an hour while it sits in the "installing" state. It never reaches "ready for inference" with the form to query the model, even though I've tried various CPU/GPU configs. Is there some rocket science behind the deployment that I'm missing? And support only being available via email is so weird.
2023-08-11T16:42:15
https://www.reddit.com/r/LocalLLaMA/comments/15odu23/does_hf_inference_endpoint_work/
Greg_Z_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15odu23
false
null
t3_15odu23
/r/LocalLLaMA/comments/15odu23/does_hf_inference_endpoint_work/
false
false
self
1
null
llama
1
2023-08-11T17:16:23
https://teesdesk-us.shop/limited-edition-273
AnneCampbell54
reddit.com
1970-01-01T00:00:00
0
{}
15oepre
false
null
t3_15oepre
/r/LocalLLaMA/comments/15oepre/llama/
false
false
https://b.thumbs.redditm…hNF37yyXnHOw.jpg
1
null
Anyone got TextGen/LlamaCPP working with Metal for new GGML models and Llama2?
1
I'm getting constant errors/crashes, even though I updated Torch to nightly and rebuilt the LlamaCPP wheel with Metal.
2023-08-11T17:16:57
https://www.reddit.com/r/LocalLLaMA/comments/15oeqaf/anyone_got_textgenllamacpp_working_with_metal_for/
-becausereasons-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15oeqaf
false
null
t3_15oeqaf
/r/LocalLLaMA/comments/15oeqaf/anyone_got_textgenllamacpp_working_with_metal_for/
false
false
self
1
null
How to get the answer from local llama2 and send it to my app?
1
I'm building a piece of software that needs to use the output of a Llama 2 query. I'm using oobabooga. Is there a way to do this with it? Do you need to code your own local API? How do you retrieve the information from the chat?
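text-generation-webui ships an API extension for exactly this: start the server with the `--api` flag and call the blocking endpoint from your app. The sketch below reflects how that API has worked in recent builds, but treat the port and JSON shape as assumptions and check your own install, since the API has changed between versions.

```python
# Hedged sketch of calling text-generation-webui's blocking API from another app.
# Assumes the webui was launched with --api, which listens on port 5000 by default.
import requests

def generate(prompt: str, max_new_tokens: int = 250) -> str:
    r = requests.post(
        "http://localhost:5000/api/v1/generate",   # NOT the :7860 Gradio UI port
        json={"prompt": prompt, "max_new_tokens": max_new_tokens},
        timeout=600,
    )
    r.raise_for_status()
    # The blocking API returns {"results": [{"text": "..."}]}
    return r.json()["results"][0]["text"]

print(generate("Write a one-line summary of Llama 2:"))
```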
2023-08-11T17:48:25
https://www.reddit.com/r/LocalLLaMA/comments/15ofjzj/how_to_get_the_answer_from_local_llama2_and_send/
ppcfbadsfree
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ofjzj
false
null
t3_15ofjzj
/r/LocalLLaMA/comments/15ofjzj/how_to_get_the_answer_from_local_llama2_and_send/
false
false
self
1
null
LLM, Semantic search and large volume of documents
1
Hello, I know this question has probably been asked a few times, but I really cannot decide on the best approach and could use your help. We have hundreds of thousands of documents and we want to create a "chatbot" that can answer questions whose answers can only be found in those documents. The documents can be very similar to one another but contain textual data for different dates. Would a vector database with semantic search work, passing the retrieved results to an LLM (Llama 2)? Or is there a better approach these days? I'm currently thinking of running Milvus as the vector DB and connecting it to an LLM via LangChain. Any guidance, recommendations, or suggestions are highly appreciated! We do have the resources to host an LLM and a vector DB.
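That retrieve-then-read approach is the usual answer. Below is a toy sketch of the pipeline; the embedding model and chunks are placeholders, and the brute-force dot product stands in for whatever Milvus would do at your scale. Since many documents differ only by date, it can also help to store the date as metadata and filter on it before the semantic search.

```python
# Toy retrieve-then-read sketch; swap the brute-force search for Milvus/FAISS
# and the placeholder chunks for your real documents.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")   # example embedding model
chunks = [
    "2023-01 report: plant A output was 120 units.",
    "2023-02 report: plant A output was 95 units.",
]
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

def retrieve(question: str, k: int = 3) -> list[str]:
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q                # cosine similarity; vectors are unit-norm
    return [chunks[i] for i in np.argsort(-scores)[:k]]

question = "What was plant A's output in February 2023?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
# `prompt` then goes to Llama 2 through LangChain or your own serving code.
```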
2023-08-11T17:58:26
https://www.reddit.com/r/LocalLLaMA/comments/15oftk7/llm_semantic_search_and_large_volume_of_documents/
Mayloudin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15oftk7
false
null
t3_15oftk7
/r/LocalLLaMA/comments/15oftk7/llm_semantic_search_and_large_volume_of_documents/
false
false
self
1
null
Access to my server with a httpRequest or other
1
My model is running on localhost:7860 and I want to access it from code. I have tried this with Python:

import requests

request = {'prompt': 'hi', 'max_new_tokens': 4096}
r = requests.post(url='http://localhost:7860/api/v1/generate', json=request)
print(r.json())

The reply is either "detail: not found" or "detail: method not allowed". What's wrong? CG.
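The usual cause of that pair of errors is that port 7860 is the Gradio web UI, which has no /api/v1/generate route; the API extension listens on a separate port (5000 by default) and only when the server was launched with `--api`. Treat the details below as assumptions for recent text-generation-webui builds rather than a guaranteed fix.

```python
# Assumes text-generation-webui was started with:  python server.py --api
import requests

request = {"prompt": "hi", "max_new_tokens": 200}
r = requests.post("http://localhost:5000/api/v1/generate", json=request)  # API port, not 7860
r.raise_for_status()
print(r.json()["results"][0]["text"])
```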
2023-08-11T18:01:38
https://www.reddit.com/r/LocalLLaMA/comments/15ofwpo/access_to_my_server_with_a_httprequest_or_other/
ppcfbadsfree
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ofwpo
false
null
t3_15ofwpo
/r/LocalLLaMA/comments/15ofwpo/access_to_my_server_with_a_httprequest_or_other/
false
false
self
1
null
New Model RP Comparison/Test (7 models tested)
1
This is a follow-up to my previous post here: [Big Model Comparison/Test (13 models tested) : LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/comments/15lihmq/big_model_comparisontest_13_models_tested/)

Here's how I evaluated these (same methodology as before) for their role-playing (RP) performance:

- Same (complicated and limit-testing) long-form conversation with all models, [SillyTavern](https://github.com/SillyTavern/SillyTavern) frontend, [KoboldCpp](https://github.com/LostRuins/koboldcpp) backend, GGML q5_K_M, Deterministic generation settings preset, [Roleplay instruct mode preset](https://www.reddit.com/r/LocalLLaMA/comments/15mu7um/sillytaverns_roleplay_preset_vs_modelspecific/), > 22 messages, going to full 4K context, noting especially good or bad responses.

So here's the list of models and my notes plus my very personal rating (➕ = worth a try, ➖ = disappointing, ❌ = unusable):

- ➕ **[huginnv1.2](https://huggingface.co/TheBloke/huginnv1.2-GGML)**: Much better than the previous version (Huginn-13B), very creative and elaborate, focused on one self-made plot point early on, nice writing and actions/emotes, repetitive emoting later, redundant speech/actions (says what she's going to do and then emotes doing it), missed an important detail later and became nonsensical because of that. More creative but less smart than other models.
- ➖ **[MythoMix-L2-13B](https://huggingface.co/TheBloke/MythoMix-L2-13B-GGML)**: While other models often went too fast, this one needed a bit of coaxing to proceed, got confused about who's who and anatomy, mixing up people and instructions, wrote what User does, actions switched between second and third person. But good actions and descriptions, believable and lively characters, and no repetition/looping all the way to full 4K context and beyond! **Only gets a ➖ instead of a ➕ because there's already a successor, [MythoMax-L2-13B-GGML](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGML), which I like even more!**
- ➕ **[MythoMax-L2-13B](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGML)**: Started talking/acting as User (had to use a non-deterministic preset and enable "Include Names" for the first message)! While other models often went too fast, this one needed a bit of coaxing to proceed, got confused about who's who and anatomy, mixing up people and instructions, mentioned the scenario being a simulation. But nice prose and excellent writing, and no repetition/looping all the way to full 4K context and beyond! **This is my favorite of this batch! I'll use this a lot more from now on; right now it's my second favorite model next to my old favorite [Nous-Hermes-Llama2](https://huggingface.co/TheBloke/Nous-Hermes-Llama2-GGML)!**
- ➖ **[orca_mini_v3_13B](https://huggingface.co/TheBloke/orca_mini_v3_13B-GGML)**: Repeated the greeting message verbatim (but not the emotes), talked without emoting, spoke of agreed-upon parameters regarding limits/boundaries, terse/boring prose, had to ask for detailed descriptions, description was in past tense, speech within speech, wrote what User does, got confused about who's who and anatomy, became nonsensical later. **May be a generally smart model, but apparently not a good fit for roleplay!**
- ➖ **[Stable-Platypus2-13B](https://huggingface.co/TheBloke/Stable-Platypus2-13B-GGML)**: Extremely short and terse responses (despite the Roleplay preset!), had to ask for detailed descriptions, got confused about who's who and anatomy, repetitive later. But good and long descriptions when specifically asked for! **May be a generally smart model, but apparently not a good fit for roleplay!**
- ❌ **[vicuna-13B-v1.5-16K](https://huggingface.co/TheBloke/vicuna-13B-v1.5-16K-GGML)**: Confused about who's who from the start, acted and talked as User, repeated the greeting message verbatim (but not the very first emote), normal afterwards (talks and emotes and uses emoticons normally), but mentioned boundaries/safety multiple times, described actions without doing them, needed specific instructions to act, switched back from action to description in the middle of acting, repetitive later, some confusion. Seemed less smart (grammar errors, mix-ups), but great descriptions and sense of humor, yet broke down completely within 20 messages (> 4K tokens)! **SCALING ISSUE (despite using `--contextsize 16384 --ropeconfig 0.25 10000`)?**
- ❌ **[WizardMath-13B-V1.0](https://huggingface.co/TheBloke/WizardMath-13B-V1.0-GGML)**: Ends every message with "The answer is: ", making it unsuitable for RP! So I instead did some logic tests - unfortunately it failed them all ("Sally has 3 brothers...", "What weighs more, two pounds of feathers or one pound of bricks?", and "If I have 3 apples and I give two oranges...") even with "Let's think step by step." added.

Looking forward to your comments, especially if you have widely different experiences, so I may go back to retest some models with different settings...
2023-08-11T18:17:40
https://www.reddit.com/r/LocalLLaMA/comments/15ogc60/new_model_rp_comparisontest_7_models_tested/
WolframRavenwolf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ogc60
false
null
t3_15ogc60
/r/LocalLLaMA/comments/15ogc60/new_model_rp_comparisontest_7_models_tested/
false
false
self
1
{'enabled': False, 'images': [{'id': 'bDW7jyCB5L7RKBwRUqrzWSn3bIb_Szu_GogYRebiCjw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=108&crop=smart&auto=webp&s=22d2e1896c94ecebda58fed69478453d4b16fd4f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=216&crop=smart&auto=webp&s=019bd779b582098d4b9aa01b87ee530132195fa6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=320&crop=smart&auto=webp&s=55daeabbed00d9b3c1e7f3207edea4d0a265db39', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=640&crop=smart&auto=webp&s=47d7877d194270162d75f4922c4ecb60b17c101d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=960&crop=smart&auto=webp&s=004f5643d41eee63624b163efc53427073882f4f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=1080&crop=smart&auto=webp&s=e6ee7ad7840a9a71890c76db5e4df6a3f669e762', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?auto=webp&s=44d160d8b5087122f25fba2443dc2c5a77adf472', 'width': 1280}, 'variants': {}}]}
Out of memory using multiple GPUs
1
I have an EC2 p2.8xlarge instance running on AWS with 8x Nvidia K80 GPUs, each with 12 GB VRAM for a total of 96 GB. I am trying to run LLaMA 2, and have tried both 7B and 70B. If I run it with 7B, I get the error `loading checkpoint for MP=1 but world size is 8`, and with 70B, `torch.cuda.OutOfMemoryError: Tried to allocate 448.00 MiB (GPU 7; 11.17 GiB total capacity, 10.21 GiB already allocated; 324.19 MiB free; 10.62 GiB reserved in total by PyTorch)`. How can I spread the memory across all GPUs?
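Two separate things seem to be going on: the reference checkpoints are sharded for a fixed model-parallel size (7B is MP=1, so it wants `torchrun --nproc_per_node 1`; 70B is MP=8, so it wants `--nproc_per_node 8`), and torchrun will not re-spread them for you. Even with the right world size, the fp16 weights are too big for 12 GB K80s, which the rough arithmetic below suggests (a sketch that counts only weights, ignoring activations and KV cache); a quantized GGML or GPTQ build is probably the realistic path on that instance.

```python
# Back-of-the-envelope VRAM check for the reference fp16 checkpoints (assumes
# 2 bytes per parameter and nothing else; real usage is higher).
params = {"7B": 7e9, "70B": 70e9}
usable_gib_per_gpu = 11.17          # what the error message reports per K80 die

for name, p in params.items():
    total_gib = p * 2 / 2**30
    mp = 1 if name == "7B" else 8   # model-parallel size baked into the shards
    per_gpu = total_gib / mp
    fits = "fits" if per_gpu < usable_gib_per_gpu else "does NOT fit"
    print(f"{name}: {total_gib:.0f} GiB total, {per_gpu:.1f} GiB per GPU at MP={mp} -> {fits}")
# 7B: ~13 GiB on a single 11 GiB die; 70B: ~16 GiB per die across 8 dies.
```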
2023-08-11T18:28:31
https://www.reddit.com/r/LocalLLaMA/comments/15ogmc9/out_of_memory_using_multiple_gpus/
EffectiveFood4933
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ogmc9
false
null
t3_15ogmc9
/r/LocalLLaMA/comments/15ogmc9/out_of_memory_using_multiple_gpus/
false
false
self
1
null
Thinking about purchasing a 4090 for KoboldCPP... Got some questions.
1
So currently I'm using a 5600G with 32GB of RAM and a 12GB 3060 on Linux. What I would like to do is try and find a ~24GB LLM model that excels at collaborative story writing that I can run on my present hardware (doesn't matter how slow it is), just to get an idea of what improvements the 4090 would give me. My main goal is to have coherent generation and have the model stay on track and produce few anomalies. It would be amazing if you could make two suggestions for me:

1. Which model I should use (something that a 4090 can fully utilize but will also be "usable" on my present hardware).
2. What command launch options I should use. For example: I don't particularly understand what ropeconfig is, and am confused why some of us set context size as a launch option when you can set it in the interface.

Thanks for reading and any replies. What an awesome community this is turning out to be. Very happy to be here with y'all. Cheers.
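For a concrete point of reference, here is one hedged example of how a KoboldCpp launch often looks; the model file and flag values below are guesses, not a recommendation, so check `python koboldcpp.py --help` for the exact spelling on your build. On `--ropeconfig`: it sets the RoPE frequency scale and base, which only matter when you push a model past its native context; at Llama 2's native 4K you can leave it alone. Setting `--contextsize` on the command line mainly makes the backend allocate a large enough KV cache up front instead of relying on the value chosen in the UI.

```python
# Sketch of a KoboldCpp launch wrapped in Python; paths and values are assumptions.
import subprocess

subprocess.run([
    "python", "koboldcpp.py",
    "mythomax-l2-13b.ggmlv3.q5_K_M.bin",  # hypothetical 13B story-writing model file
    "--usecublas",                        # GPU offload via cuBLAS on NVIDIA cards
    "--gpulayers", "43",                  # put all layers on the GPU; lower this on the 3060
    "--contextsize", "4096",              # Llama 2's native context, no ropeconfig needed
    "--threads", "6",                     # CPU threads for the non-offloaded work
])
```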
2023-08-11T18:47:00
https://www.reddit.com/r/LocalLLaMA/comments/15oh3ie/thinking_about_purchasing_a_4090_for_koboldcpp/
wh33t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15oh3ie
false
null
t3_15oh3ie
/r/LocalLLaMA/comments/15oh3ie/thinking_about_purchasing_a_4090_for_koboldcpp/
false
false
self
1
null
What the fuck is wrong with WizardMath???
222
2023-08-11T18:48:55
https://i.redd.it/47823tkm1jhb1.png
bot-333
i.redd.it
1970-01-01T00:00:00
0
{}
15oh576
false
null
t3_15oh576
/r/LocalLLaMA/comments/15oh576/what_the_fuck_is_wrong_with_wizardmath/
false
false
https://a.thumbs.redditm…4n2UYgORBxn8.jpg
222
{'enabled': True, 'images': [{'id': 'xbug615PVicVm3MHsJ-wILaxpgeEIJx5h-4v1PZoxAQ', 'resolutions': [{'height': 44, 'url': 'https://preview.redd.it/47823tkm1jhb1.png?width=108&crop=smart&auto=webp&s=a81fb1bfbf27bf6a8d9bd4458ae4eb8578d42dbf', 'width': 108}, {'height': 89, 'url': 'https://preview.redd.it/47823tkm1jhb1.png?width=216&crop=smart&auto=webp&s=ebf65c1ae8ef5de63d96926611fb6a2eb07813b4', 'width': 216}, {'height': 131, 'url': 'https://preview.redd.it/47823tkm1jhb1.png?width=320&crop=smart&auto=webp&s=1ace9e8d2f4bbd82cf42b2bd4eb879fc51075174', 'width': 320}, {'height': 263, 'url': 'https://preview.redd.it/47823tkm1jhb1.png?width=640&crop=smart&auto=webp&s=ecf1d96609e661cc91206f3bdb7659b8ef039c1d', 'width': 640}, {'height': 395, 'url': 'https://preview.redd.it/47823tkm1jhb1.png?width=960&crop=smart&auto=webp&s=d725b6df4e5816a9e32e914c41255f2d48c1dcf2', 'width': 960}, {'height': 445, 'url': 'https://preview.redd.it/47823tkm1jhb1.png?width=1080&crop=smart&auto=webp&s=bed9bb6b349616ac22f527e979dc29b936b51d68', 'width': 1080}], 'source': {'height': 716, 'url': 'https://preview.redd.it/47823tkm1jhb1.png?auto=webp&s=12513bec3bda3b09f781799a2ea95a1454181bc4', 'width': 1736}, 'variants': {}}]}
Is it possible to use multiple GPUs of different generations on a single PC?
1
I recently swapped out my 1070 with a 3060 to load 13B 4-bit models, and it's working like a charm, but I would like to try for more. I am on a budget, so I am hesitant to buy another 3060 if it's not necessary. Would it be possible to put the 1070 back in my machine and use both GPUs at once? I currently have the drivers for the 3060 installed, and I heard that it's not possible to install two different sets of GeForce drivers on one machine. Would the 1070 function with the 3060 drivers for the purpose of loading bigger models through exllama? Would my machine recognize it, and would I benefit from it?
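Mixing generations generally works: a single current NVIDIA driver package covers both the 1070 and the 3060, so there is only one driver install, and both exllama (via the gpu-split style setting in text-generation-webui) and llama.cpp can divide a model across the two cards, with the Pascal card as the slower partner. Below is a hedged llama-cpp-python sketch; the split ratio and model file name are placeholders, and it assumes a cuBLAS build of llama-cpp-python.

```python
# Sketch: spread a GGML model over the 3060 (12 GB) and 1070 (8 GB) with
# llama-cpp-python's tensor_split; ratio and model file are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-13b-chat.ggmlv3.q5_K_M.bin",
    n_gpu_layers=43,             # offload everything across both cards
    tensor_split=[0.6, 0.4],     # ~60% of the weights on GPU 0, ~40% on GPU 1
    n_ctx=4096,
)
out = llm("Q: Can I mix a 1070 and a 3060? A:", max_tokens=64)
print(out["choices"][0]["text"])
```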
2023-08-11T19:17:34
https://www.reddit.com/r/LocalLLaMA/comments/15ohw0g/is_it_possible_to_use_multiple_gpus_of_different/
Zugzwang_CYOA
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ohw0g
false
null
t3_15ohw0g
/r/LocalLLaMA/comments/15ohw0g/is_it_possible_to_use_multiple_gpus_of_different/
false
false
self
1
null
Platypus models
1
2023-08-11T19:19:39
https://twitter.com/natanielruizg/status/1690048207030493189
ninjasaid13
twitter.com
1970-01-01T00:00:00
0
{}
15ohxvx
false
{'oembed': {'author_name': 'Nataniel Ruiz', 'author_url': 'https://twitter.com/natanielruizg', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">We are 🔥super excited🔥 to release the Platypus family of finetuned LLMs 🥳🥳. Platypus achieves the top score in the Hugging Face Open LLM Leaderboard 🏆! The main focus of our work is to achieve cheap, fast and powerful refinement of base LLMs.<br>page: <a href="https://t.co/QHJ6kDoCYa">https://t.co/QHJ6kDoCYa</a> <a href="https://t.co/MOSiflQLDU">pic.twitter.com/MOSiflQLDU</a></p>&mdash; Nataniel Ruiz (@natanielruizg) <a href="https://twitter.com/natanielruizg/status/1690048207030493189?ref_src=twsrc%5Etfw">August 11, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/natanielruizg/status/1690048207030493189', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_15ohxvx
/r/LocalLLaMA/comments/15ohxvx/platypus_models/
false
false
https://a.thumbs.redditm…AWLGkBN6uzq8.jpg
1
{'enabled': False, 'images': [{'id': 'vVURZSvN8RjsPJbzUNoyI4xYt1V2yGYOSNsvuKbFmaQ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/RgulP3L45yR0BcX2I4VThtZt02S3L7XjFZ8D3rEGKo4.jpg?width=108&crop=smart&auto=webp&s=62a0e833e88283fd675c0789e2ffad916cb1b1f3', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/RgulP3L45yR0BcX2I4VThtZt02S3L7XjFZ8D3rEGKo4.jpg?auto=webp&s=cb48fa81b98753463cf4c36e24d2b3eda10da9a1', 'width': 140}, 'variants': {}}]}
Preventing LLAMA from hallucinating responses.
1
So we are using LLaMA in a typical RAG scenario: give it some context and ask it a question. What I have found is that, no matter how much I yell at it in the prompt, for certain questions it always gives the wrong, hallucinated answer, even if the right answer is in the provided document. For example, the document would be like:

Student A has score 100
Student B has score 95
Student C has score 99

(very, very simplified; in reality these are all chunks of about 200 tokens, and there are a dozen chunks). LLaMA will always answer Student B wrong. All the others are right; just that one is wrong. GPT-3/4 do not have this problem, although GPT-3 did once in a while, and on the same student, which is bizarre. GPT-4, however, is correct 100% of the time.

Second category: if you ask it about score history, for example "when did student X's score decrease", it will bring up a place where the score increased and write about it as if it decreased, which is nonsensical. GPT-3 does the same thing. GPT-4 will tell you: "It does not appear the score for this student decreased."

What is GPT-4 doing? How can I help LLaMA, or even GPT-3, do something like this? Is this just RLHF? Am I stuck using GPT-4 for this?
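Part of what GPT-4 is doing is simply declining to answer when the evidence isn't there, and you can push smaller models in that direction with a stricter grounded prompt plus a cheap programmatic check. The template below is only a sketch (the wording, and the Llama 2 [INST]/<<SYS>> tags, should be adapted to however you're serving the model); asking for a verbatim supporting quote lets your code verify the quote actually occurs in the retrieved context, and re-ask or flag the answer when it doesn't.

```python
# Hedged sketch: grounded prompt template plus a verbatim-quote sanity check.
GROUNDED_PROMPT = """[INST] <<SYS>>
Answer ONLY from the context below. First give the answer, then on a new line
starting with QUOTE: copy the exact sentence that supports it. If the context
does not contain the answer, reply exactly: NOT IN CONTEXT.
<</SYS>>

Context:
{context}

Question: {question} [/INST]"""

def quote_is_grounded(model_output: str, context: str) -> bool:
    """Cheap check that the cited sentence really appears in the retrieved context."""
    for line in model_output.splitlines():
        if line.startswith("QUOTE:"):
            return line[len("QUOTE:"):].strip() in context
    return False

prompt = GROUNDED_PROMPT.format(
    context="Student A has score 100\nStudent B has score 95\nStudent C has score 99",
    question="What is Student B's score?",
)
# Send `prompt` to the model, then run quote_is_grounded() on its output.
```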
2023-08-11T20:09:17
https://www.reddit.com/r/LocalLLaMA/comments/15oj83h/preventing_llama_from_hallucinating_responses/
Alert_Record5063
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15oj83h
false
null
t3_15oj83h
/r/LocalLLaMA/comments/15oj83h/preventing_llama_from_hallucinating_responses/
false
false
self
1
null
Encourage Your Workplace to Host its Own LLMs
1
[ChatGPT fever spreads to US workplace, sounding alarm for some](https://www.reuters.com/technology/chatgpt-fever-spreads-us-workplace-sounding-alarm-some-2023-08-11/) The biggest issue with the ubiquitous use of ChatGPT in the workplace is that all of the information gets leaked. Most corporations that are interested in making money off their ideas should be interested in keeping those ideas largely quiet. This is not widely appreciated by employers today. You may think you care about NSFW content or censorship; think about how much Siemens will care once they realize their workers are divulging trade secrets. If we can get a reasonable number of corporations to start running internal LLMs, the size of the addressable market for LLMs will grow exponentially. It will be fantastic for LLaMA and home LLM use as well. Please advocate for local LLM use at work. Thank you for your time.
2023-08-11T20:18:04
https://www.reddit.com/r/LocalLLaMA/comments/15ojg5c/encourage_your_workplace_to_host_its_own_llms/
friedrichvonschiller
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ojg5c
false
null
t3_15ojg5c
/r/LocalLLaMA/comments/15ojg5c/encourage_your_workplace_to_host_its_own_llms/
false
false
self
1
{'enabled': False, 'images': [{'id': '67c-IjzWz8qmTmo-aRRdu58s6Tmf9wruf18jvgnZz0w', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/2kMixBo1N7fReiDsbtxGR48VdJ3f9OeQ0gdMGmMxz7c.jpg?width=108&crop=smart&auto=webp&s=c0022678e0ff8b9660760f7d8383f89338d69f0f', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/2kMixBo1N7fReiDsbtxGR48VdJ3f9OeQ0gdMGmMxz7c.jpg?width=216&crop=smart&auto=webp&s=ae94d7e0f34a909ea0a594bae054b56a5802a21f', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/2kMixBo1N7fReiDsbtxGR48VdJ3f9OeQ0gdMGmMxz7c.jpg?width=320&crop=smart&auto=webp&s=71f85e485328bf95a39bf1b2d2413550faf66815', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/2kMixBo1N7fReiDsbtxGR48VdJ3f9OeQ0gdMGmMxz7c.jpg?width=640&crop=smart&auto=webp&s=d8ba89c96f3a4c336b059eed7cffc634b0eca737', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/2kMixBo1N7fReiDsbtxGR48VdJ3f9OeQ0gdMGmMxz7c.jpg?width=960&crop=smart&auto=webp&s=727363fa992caac483cbaa04f7b22b95469f969b', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/2kMixBo1N7fReiDsbtxGR48VdJ3f9OeQ0gdMGmMxz7c.jpg?width=1080&crop=smart&auto=webp&s=45f2e263d547168fdc730e1f99703c9b28972403', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/2kMixBo1N7fReiDsbtxGR48VdJ3f9OeQ0gdMGmMxz7c.jpg?auto=webp&s=b6e5440e0dca913a3dff038c1edfc3e4384fac67', 'width': 1200}, 'variants': {}}]}
How to use multiple GPUs on different systems?
1
I want to use my Gaming Laptop with an 8GB 3080, and one other system with an 8GB RX580. I don't know what the performance hit would be. TBH I just want to do it for the sake of it! I'm pretty new to running LLMs, so some explanation would be really helpful! Thankyou!
2023-08-11T20:28:25
https://www.reddit.com/r/LocalLLaMA/comments/15ojptj/how_to_use_multiple_gpus_on_different_systems/
KvAk_AKPlaysYT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ojptj
false
null
t3_15ojptj
/r/LocalLLaMA/comments/15ojptj/how_to_use_multiple_gpus_on_different_systems/
false
false
self
1
null
Trouble Running Llama-2 70B on HPC with Limited GPUs - Need Help!
1
I'm utilizing Llama-2 on a high-performance computing (HPC) setup and dispatching tasks through SLURM. I managed to run the Llama 7B model, but I ran into problems with the 70B variant. It seems that the Llama-2 70B model expects 8 distinct GPUs, given its MP configuration of 8, implying nproc_per_node = 8 in the torchrun settings. However, my HPC only allows 4 GPUs per node. Does anyone know if running the 70B model is feasible under this constraint? I think I might be missing a workaround, especially since the HPC boasts high-end GPUs like the A100. For clarity, here's the SLURM configuration I'm deploying:

#!/bin/bash
#SBATCH --job-name=llama_chat_run        # Change the job name to something more descriptive
#SBATCH --nodes=1
#SBATCH --cpus-per-task=4
#SBATCH --ntasks-per-node=1
#SBATCH --mem=16GB
#SBATCH --gres=gpu:1
#SBATCH --time=2:00:00                   # Extend runtime based on your expectations
#SBATCH --output=llama_chat_run.%j.out   # Optional: name the output file to reflect the job
#SBATCH --error=llama_chat_run.%j.err    # Optional: name the error file to reflect the job

module purge;
module load anaconda3/2020.07;
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK;
source /share/apps/anaconda3/2020.07/etc/profile.d/conda.sh;
conda activate ./penv;
export PATH=./penv/bin:$PATH;

# Use torchrun command instead of the python command
torchrun --nproc_per_node 1 example_chat_completion.py \
    --ckpt_dir llama-2-70b-chat/ \
    --tokenizer_path tokenizer.model \
    --max_seq_len 512 --max_batch_size 6
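With the reference checkpoints there is no clean workaround: the 70B shards assume MP=8, so torchrun needs a world size of 8, while the scheduler caps you at 4 GPUs per node (and the script above also only requests --gres=gpu:1). The usual escape hatch is to switch to the Hugging Face-format weights and let Accelerate shard them across whatever the job gets, run as a single plain-Python task rather than under torchrun. The sketch below assumes a job with 4x A100 (--gres=gpu:4, more memory and CPUs) and that your account has access to the gated meta-llama repo.

```python
# Hedged sketch for a 4-GPU SLURM job using the HF-format 70B chat weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-70b-chat-hf"   # gated; needs an approved HF token
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # ~130 GiB of weights; fits 4x A100 80GB, tight on 40GB
    device_map="auto",           # Accelerate spreads layers over all visible GPUs
)
inputs = tok("[INST] Hello! [/INST]", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```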
2023-08-11T22:12:55
https://www.reddit.com/r/LocalLLaMA/comments/15omgnq/trouble_running_llama2_70b_on_hpc_with_limited/
MasterJaguar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15omgnq
false
null
t3_15omgnq
/r/LocalLLaMA/comments/15omgnq/trouble_running_llama2_70b_on_hpc_with_limited/
false
false
self
1
null
strange behavior with newhope.ggmlv3.q4_K_S
1
I've been testing [newhope.ggmlv3.q4_K_S](https://huggingface.co/TheBloke/NewHope-GGML) by TheBloke and it's been acting super weird. I'm not sure if it's just poor parameters, the system prompt, the 4-bit quant, or whether the model just sucks in general?

https://preview.redd.it/u02b8s4r1khb1.png?width=1544&format=png&auto=webp&s=ab570dc78f71f75015d8be09437a3c00ba436acf

System prompt: "You are a gifted python developer. Provide ALL your scripts within a single python markdown block. Ensure they are executable. Be efficient with compute. Maintain clear communication and a friendly demeanor. Use emojis occasionally." lol

Core parameters: `{"-c", "2048", "-ngl", "200"}`

Inference params:

```js
const params = signal({
  temperature: 0.7,
  repeat_last_n: 256,      // 0 = disable penalty, -1 = context size
  repeat_penalty: 1.18,    // 1.0 = disabled
  top_k: 40,               // <= 0 to use vocab size
  top_p: 0.5,              // 1.0 = disabled
  tfs_z: 1.0,              // 1.0 = disabled
  typical_p: 1.0,          // 1.0 = disabled
  presence_penalty: 0.0,   // 0.0 = disabled
  frequency_penalty: 0.0,  // 0.0 = disabled
  mirostat: 0,             // 0/1/2
  mirostat_tau: 5,         // target entropy
  mirostat_eta: 0.1,       // learning rate
})
```
2023-08-11T22:27:18
https://www.reddit.com/r/LocalLLaMA/comments/15omu0u/strange_behavior_with_newhopeggmlv3q4_k_s/
LyPreto
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15omu0u
false
null
t3_15omu0u
/r/LocalLLaMA/comments/15omu0u/strange_behavior_with_newhopeggmlv3q4_k_s/
false
false
https://b.thumbs.redditm…YOuEShUHRF8I.jpg
1
{'enabled': False, 'images': [{'id': 'q933B8y48VjFiOf9DmJnoMHpcG_sNy-2VRxGOgaJblE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NbPMAtdGD0wzNzeW6bfXQxUamK3UR4zTtCBokSuyWyY.jpg?width=108&crop=smart&auto=webp&s=1e1b58069998283803dc36c46425e88e56cf1aad', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/NbPMAtdGD0wzNzeW6bfXQxUamK3UR4zTtCBokSuyWyY.jpg?width=216&crop=smart&auto=webp&s=4a7fd6d93595a17e59a3dd39c70220d208194ab2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/NbPMAtdGD0wzNzeW6bfXQxUamK3UR4zTtCBokSuyWyY.jpg?width=320&crop=smart&auto=webp&s=5c92b1f14dc65191a09dee27407ceb263fc26d9d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/NbPMAtdGD0wzNzeW6bfXQxUamK3UR4zTtCBokSuyWyY.jpg?width=640&crop=smart&auto=webp&s=c7825f39480c3777d49bfafcc1cd46037abf08d9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/NbPMAtdGD0wzNzeW6bfXQxUamK3UR4zTtCBokSuyWyY.jpg?width=960&crop=smart&auto=webp&s=0643c3d28f88e1385ce223fc02c9be1231643d39', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/NbPMAtdGD0wzNzeW6bfXQxUamK3UR4zTtCBokSuyWyY.jpg?width=1080&crop=smart&auto=webp&s=574b545858aa29fabbf723dff3f1b20629d71794', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/NbPMAtdGD0wzNzeW6bfXQxUamK3UR4zTtCBokSuyWyY.jpg?auto=webp&s=8155d622d2491334a597038a984aa27b45c66213', 'width': 1200}, 'variants': {}}]}