Dataset schema (fields listed per post, in this order):
title      string (length 1–300)
score      int64 (0–8.54k)
selftext   string (length 0–40k)
created    timestamp[ns]
url        string (length 0–780)
author     string (length 3–20)
domain     string (length 0–82)
edited     timestamp[ns]
gilded     int64 (0–2)
gildings   string (7 classes)
id         string (length 7)
locked     bool (2 classes)
media      string (length 646–1.8k)
name       string (length 10)
permalink  string (length 33–82)
spoiler    bool (2 classes)
stickied   bool (2 classes)
thumbnail  string (length 4–213)
ups        int64 (0–8.54k)
preview    string (length 301–5.01k)
Tesla P4 vs P40 in AI (found this Paper from Dell, thought it'd help)
1
Writing this because, although I'm running 3x Tesla P40, it takes the space of 4 PCIe slots on an older server, plus it uses 1/3 of the power. FYI, it's also possible to unlock the full 8GB on the P4 and overclock it to run at 1500MHz instead of the stock 800MHz. Hope it helps anyone that, like me, is trying to get the best bang for the buck on bigger models: [https://downloads.dell.com/manuals/all-products/esuprt\_software/esuprt\_it\_ops\_datcentr\_mgmt/high-computing-solution-resources\_white-papers14\_en-us.pdf](https://downloads.dell.com/manuals/all-products/esuprt_software/esuprt_it_ops_datcentr_mgmt/high-computing-solution-resources_white-papers14_en-us.pdf)
2023-07-19T14:45:18
https://www.reddit.com/r/LocalLLaMA/comments/153x19p/tesla_p4_vs_p40_in_ai_found_this_paper_from_dell/
ChobPT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
153x19p
false
null
t3_153x19p
/r/LocalLLaMA/comments/153x19p/tesla_p4_vs_p40_in_ai_found_this_paper_from_dell/
false
false
self
1
null
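For anyone comparing cards the way the post above does, a quick way to sanity-check memory, power limit, and clocks on your own machine is the `pynvml` bindings. This is a minimal sketch, assuming the `nvidia-ml-py` package and a working NVIDIA driver are installed:

```python
# Sketch: query memory, power limit and SM clock for each installed NVIDIA GPU.
import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    name = pynvml.nvmlDeviceGetName(handle)
    if isinstance(name, bytes):        # older pynvml versions return bytes
        name = name.decode()
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    power = pynvml.nvmlDeviceGetPowerManagementLimit(handle)            # milliwatts
    sm_clock = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_SM)  # MHz
    print(f"{i}: {name} | {mem.total / 2**30:.1f} GiB | "
          f"{power / 1000:.0f} W limit | SM clock {sm_clock} MHz")
pynvml.nvmlShutdown()
```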
How do I increase the speed of LLaMa v2 inference?
1
Let’s say I have an unlimited cloud budget and I need to get Llama-2-70b to generate 10-15 t/s. Does anyone know how I would go about it?
2023-07-19T14:59:25
https://www.reddit.com/r/LocalLLaMA/comments/153xeg7/how_do_i_increase_the_speed_of_llama_v2_inference/
Hatter_The_Mad
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
153xeg7
false
null
t3_153xeg7
/r/LocalLLaMA/comments/153xeg7/how_do_i_increase_the_speed_of_llama_v2_inference/
false
false
self
1
null
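Whatever backend ends up answering the question above, it helps to measure tokens/sec the same way across setups. A rough sketch with Hugging Face `transformers` (the model id and generation settings below are placeholders, and a 70B model needs multiple GPUs or quantization to fit):

```python
# Sketch: rough tokens/sec measurement for a causal LM served through transformers.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder; swap in your 70B deployment
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("Explain the difference between throughput and latency.",
                   return_tensors="pt").to(model.device)
start = time.time()
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
elapsed = time.time() - start

new_tokens = output.shape[-1] - inputs["input_ids"].shape[-1]
print(f"{new_tokens / elapsed:.1f} tokens/sec")
```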
Exllama updated to support GQA and LLaMA-70B quants!
1
2023-07-19T15:06:22
https://github.com/turboderp/exllama/commit/b3aea521859b83cfd889c4c00c05a323313b7fee
panchovix
github.com
1970-01-01T00:00:00
0
{}
153xlk3
false
null
t3_153xlk3
/r/LocalLLaMA/comments/153xlk3/exllama_updated_to_support_gqa_and_llama70b_quants/
false
false
https://b.thumbs.redditm…zQk--ZKdYrMQ.jpg
1
{'enabled': False, 'images': [{'id': 'XX9udkos62Mo1y2eprWaTAPZrFU181C-yQD_vwuJStI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9gn62eUYGg1vYJv47XLDrEQLJvUNZFDB89t1TulIewA.jpg?width=108&crop=smart&auto=webp&s=03ef052884ac4db4f7c21ea72992f5e16335aed4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9gn62eUYGg1vYJv47XLDrEQLJvUNZFDB89t1TulIewA.jpg?width=216&crop=smart&auto=webp&s=69199154ad5495c4b9185da6a738d4ca1b859c73', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9gn62eUYGg1vYJv47XLDrEQLJvUNZFDB89t1TulIewA.jpg?width=320&crop=smart&auto=webp&s=90044452f2bd6dd1b999a0962470352526b00a2a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9gn62eUYGg1vYJv47XLDrEQLJvUNZFDB89t1TulIewA.jpg?width=640&crop=smart&auto=webp&s=24ef68aa89ebe254f51c97665784243296e273af', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9gn62eUYGg1vYJv47XLDrEQLJvUNZFDB89t1TulIewA.jpg?width=960&crop=smart&auto=webp&s=4e8eaca4a7737999868f3ce76fd92a024e28cd96', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9gn62eUYGg1vYJv47XLDrEQLJvUNZFDB89t1TulIewA.jpg?width=1080&crop=smart&auto=webp&s=0421b8231df096ec0075a4116170c6854c39e3ef', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9gn62eUYGg1vYJv47XLDrEQLJvUNZFDB89t1TulIewA.jpg?auto=webp&s=e90b5b1185109a5c0bd0e79cc49cf40036226f88', 'width': 1200}, 'variants': {}}]}
Locally hosted documentation with LLM support for QA?
1
Does anyone know what options there may be for wiki-like locally hosted documentation with support for question answering? Emphasis should be on human written documentation with support for a model running on CPU. Thanks!
2023-07-19T15:09:44
https://www.reddit.com/r/LocalLLaMA/comments/153xooq/locally_hosted_documentation_with_llm_support_for/
2muchnet42day
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
153xooq
false
null
t3_153xooq
/r/LocalLLaMA/comments/153xooq/locally_hosted_documentation_with_llm_support_for/
false
false
self
1
null
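One common pattern for the question above is plain embedding retrieval over the wiki pages, with a small local model used only for the final answer. A minimal retrieval sketch that runs on CPU, assuming `sentence-transformers` is installed (the documents and model name are placeholders):

```python
# Sketch: embed documentation chunks and retrieve the most relevant one for a question.
# The retrieved chunks can then be passed to whatever local LLM you choose.
from sentence_transformers import SentenceTransformer, util

docs = [
    "To reset your password, open Settings > Account and click 'Reset password'.",
    "Backups run nightly at 02:00 UTC and are kept for 30 days.",
]  # placeholder documentation chunks

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small, CPU-friendly
doc_emb = embedder.encode(docs, convert_to_tensor=True)

question = "How long are backups retained?"
q_emb = embedder.encode(question, convert_to_tensor=True)
hits = util.semantic_search(q_emb, doc_emb, top_k=1)[0]
print(docs[hits[0]["corpus_id"]])
```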
Llama2 Qualcomm partnership
1
Qualcomm will make Meta’s open-source Llama 2 models available on Qualcomm devices, which it believes will enable applications like intelligent virtual assistants. What does it mean? The weights embedded in a specialized AI chip? Full text: https://www.cnbc.com/2023/07/18/meta-and-qualcomm-team-up-to-run-big-ai-models-on-phones.htm
2023-07-19T15:10:33
https://www.reddit.com/r/LocalLLaMA/comments/153xpih/llama2_qualcom_partnership/
AstrionX
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
153xpih
false
null
t3_153xpih
/r/LocalLLaMA/comments/153xpih/llama2_qualcom_partnership/
false
false
self
1
null
Guanaco Llama 2 finetune
25
Wanted to know if anyone's tried this out already. I would test it out, but I've really just started with all this and have only used GGMLs; wouldn't even know where to start with running straight PyTorch models. https://huggingface.co/Mikael110/llama-2-7b-guanaco-fp16 From Mikael110's model card, though, this was fine-tuned with no changes whatsoever, just the Guanaco dataset, which from what I understand has no system prompt. Unsure how that would affect things, since people have gathered that system prompts are baked into Llama 2. Looking forward to people testing it.
2023-07-19T15:50:56
https://www.reddit.com/r/LocalLLaMA/comments/153yryd/guanaco_llama_2_finetune/
FoxFlashy2527
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
153yryd
false
null
t3_153yryd
/r/LocalLLaMA/comments/153yryd/guanaco_llama_2_finetune/
false
false
self
25
{'enabled': False, 'images': [{'id': 'W35txtRv3DsS13b-FbzHwkMfrFQKcedrnHzhbqg3T9k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qBqKAiVttxvdmVZILzlqAPD5qFKuZAV8C9j4EX-Ukd4.jpg?width=108&crop=smart&auto=webp&s=0ed54606b4a653b15f414cb678b7a69c17635441', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/qBqKAiVttxvdmVZILzlqAPD5qFKuZAV8C9j4EX-Ukd4.jpg?width=216&crop=smart&auto=webp&s=860fdc93f52bdd2acee9bea51c6468fce20253f5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/qBqKAiVttxvdmVZILzlqAPD5qFKuZAV8C9j4EX-Ukd4.jpg?width=320&crop=smart&auto=webp&s=59fa079508f0bd57331b061fec236ccd8c65fafd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/qBqKAiVttxvdmVZILzlqAPD5qFKuZAV8C9j4EX-Ukd4.jpg?width=640&crop=smart&auto=webp&s=79181764b31af7d3962895240e4e9baf11078113', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/qBqKAiVttxvdmVZILzlqAPD5qFKuZAV8C9j4EX-Ukd4.jpg?width=960&crop=smart&auto=webp&s=0892b8895a0ecb0997fc21bbe425d514cf71141b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/qBqKAiVttxvdmVZILzlqAPD5qFKuZAV8C9j4EX-Ukd4.jpg?width=1080&crop=smart&auto=webp&s=9116f6dc465a3dba6c85838b358d9a75bf5b3970', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/qBqKAiVttxvdmVZILzlqAPD5qFKuZAV8C9j4EX-Ukd4.jpg?auto=webp&s=06ef6768c07b0ba37b1ff3f2184a81790775fdf9', 'width': 1200}, 'variants': {}}]}
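For anyone in the same boat as the poster above (GGML-only so far), running a "straight PyTorch" fp16 checkpoint is mostly a single `transformers` call. A minimal sketch, assuming roughly 14 GB of VRAM for a 7B model in fp16; the repo id is taken from the post and the prompt format is an assumption (check the model card):

```python
# Sketch: load a fp16 Hugging Face checkpoint and generate, no quantization involved.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Mikael110/llama-2-7b-guanaco-fp16"  # repo id from the post above
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

# Guanaco-style prompt (an assumption; verify against the model card)
prompt = "### Human: What is QLoRA?\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```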
Try out Llama 70B Chat model for free in HuggingChat
1
2023-07-19T15:51:54
https://huggingface.co/chat
hackerllama
huggingface.co
1970-01-01T00:00:00
0
{}
153ysug
false
null
t3_153ysug
/r/LocalLLaMA/comments/153ysug/try_out_llama_70b_chat_model_for_free_in/
false
false
https://b.thumbs.redditm…AAGwyrjzxubU.jpg
1
{'enabled': False, 'images': [{'id': 'O4__VvuTP1zjgNXHpYgGtbNlwm8CyL1iGZRclIV-cFg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=108&crop=smart&auto=webp&s=c5c01ca386f7a26e8afeb5073e51c35d0d581de7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=216&crop=smart&auto=webp&s=0e915f82e672294c639c476433af5f1919265348', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=320&crop=smart&auto=webp&s=87643eb4a9654c3497efe7fce371db617f9ff816', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=640&crop=smart&auto=webp&s=20315fe6e900582303995761624ac0728d1703f9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=960&crop=smart&auto=webp&s=6d8bc7d3273f5290083f6668e10d5b513621bfa3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=1080&crop=smart&auto=webp&s=865cccb6b6df001aa14ef4fb2eb0f5902cb15904', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?auto=webp&s=03f4344525b6a013e0ac556cfc24b4a45d64f47e', 'width': 1200}, 'variants': {}}]}
Llama2-7b vs Falcon-7b qLoRA finetuning on Paraphrasing and Changing the tone of a sentence
1
[removed]
2023-07-19T15:58:28
https://www.reddit.com/r/LocalLLaMA/comments/153yyuo/llama27b_vs_falcon7b_qlora_finetuning_on/
krumb0y
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
153yyuo
false
null
t3_153yyuo
/r/LocalLLaMA/comments/153yyuo/llama27b_vs_falcon7b_qlora_finetuning_on/
false
false
https://a.thumbs.redditm…wfO_bOWlJ9X4.jpg
1
{'enabled': False, 'images': [{'id': 'm9jmQQ11SdwqjO3XUjnEn1ZPpbqhFYTu8x6Z_FiQPQ0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PVUVuJ8E8RT_2CKxSFjC90JBBrk8jFGYj_Va3TanazQ.jpg?width=108&crop=smart&auto=webp&s=b5c9a699f07abf448fdcde5857c81f989852a3ae', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PVUVuJ8E8RT_2CKxSFjC90JBBrk8jFGYj_Va3TanazQ.jpg?width=216&crop=smart&auto=webp&s=36fd912fae3c4940954e6b932078fab37fbc1474', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PVUVuJ8E8RT_2CKxSFjC90JBBrk8jFGYj_Va3TanazQ.jpg?width=320&crop=smart&auto=webp&s=d740ff4872c1cfffa680f0bc25bca7194ff2aee9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PVUVuJ8E8RT_2CKxSFjC90JBBrk8jFGYj_Va3TanazQ.jpg?width=640&crop=smart&auto=webp&s=8b54594fafe72bc2d4ab7d02158eb1eb7933a5c3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PVUVuJ8E8RT_2CKxSFjC90JBBrk8jFGYj_Va3TanazQ.jpg?width=960&crop=smart&auto=webp&s=043933583fdf8e0d2eaea903f226292723604789', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PVUVuJ8E8RT_2CKxSFjC90JBBrk8jFGYj_Va3TanazQ.jpg?width=1080&crop=smart&auto=webp&s=ec626dfddf277cc0bd58fe53a63e0a5b98f2055d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PVUVuJ8E8RT_2CKxSFjC90JBBrk8jFGYj_Va3TanazQ.jpg?auto=webp&s=84c25f943d463c7c97570f636c0ba766163084b0', 'width': 1200}, 'variants': {}}]}
UpstageAI's LLaMA model has reached the top of the HuggingFace leaderboard.
1
2023-07-19T16:11:42
https://www.reddit.com/r/MachineLearning/comments/153yfry/n_upstage_ais_30m_llama_1_outshines_70b_llama2/
yynnoot
reddit.com
1970-01-01T00:00:00
0
{}
153zc5u
false
null
t3_153zc5u
/r/LocalLLaMA/comments/153zc5u/upstageais_llama_model_has_reached_the_top_of_the/
false
false
https://b.thumbs.redditm…_NG1NG9MtGEw.jpg
1
null
upstrage
1
[removed]
2023-07-19T16:14:17
https://www.reddit.com/r/LocalLLaMA/comments/153zeix/upstrage/
yynnoot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
153zeix
false
null
t3_153zeix
/r/LocalLLaMA/comments/153zeix/upstrage/
false
false
self
1
null
My local instance of Llama2 30B 16f has a flair for the dramatic
1
2023-07-19T16:14:27
https://i.redd.it/azmvpsr45ycb1.png
frownGuy12
i.redd.it
1970-01-01T00:00:00
0
{}
153zeoa
false
null
t3_153zeoa
/r/LocalLLaMA/comments/153zeoa/my_local_instance_of_llama2_30b_16f_has_a_flair/
false
false
https://b.thumbs.redditm…O1IcbD2B4FaI.jpg
1
{'enabled': True, 'images': [{'id': 'IvJU0lqL3mhpPrpLxNgyQ0t9cm00glnLD3UyxjXDA8g', 'resolutions': [{'height': 40, 'url': 'https://preview.redd.it/azmvpsr45ycb1.png?width=108&crop=smart&auto=webp&s=e0e2a24b9ec19575affcb02e389a157fedd965b5', 'width': 108}, {'height': 80, 'url': 'https://preview.redd.it/azmvpsr45ycb1.png?width=216&crop=smart&auto=webp&s=31c986ce53c558825940e255f0c84fb7895f6852', 'width': 216}, {'height': 119, 'url': 'https://preview.redd.it/azmvpsr45ycb1.png?width=320&crop=smart&auto=webp&s=5daa04fc402a439cbc9f6763b64da702468786ba', 'width': 320}, {'height': 239, 'url': 'https://preview.redd.it/azmvpsr45ycb1.png?width=640&crop=smart&auto=webp&s=d786dbb54b3b532ad0b1b1a1071268677f644e1b', 'width': 640}], 'source': {'height': 303, 'url': 'https://preview.redd.it/azmvpsr45ycb1.png?auto=webp&s=a2e774462936d6afbe9051f7ab1598781e228551', 'width': 811}, 'variants': {}}]}
Are you running Linux on Windows through WSL2 or using a dual-boot setup to switch between Windows and Linux?
1
I'm trying to decide which setup is better for running a deep learning program on Linux. I've heard that there are GPU issues when running Linux on Windows through WSL2. (**It may not be true**)
2023-07-19T16:22:33
https://www.reddit.com/r/LocalLLaMA/comments/153zm9t/are_you_running_linux_on_windows_through_wsl2_or/
HolidayRadio8477
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
153zm9t
false
null
t3_153zm9t
/r/LocalLLaMA/comments/153zm9t/are_you_running_linux_on_windows_through_wsl2_or/
false
false
self
1
null
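For what it's worth, CUDA generally does work inside WSL2 on recent Windows and driver versions. A quick check that can be run in either environment, assuming a PyTorch build with CUDA support:

```python
# Sketch: verify that the GPU is visible to PyTorch inside WSL2 (or native Linux).
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    x = torch.randn(1024, 1024, device="cuda")
    print("Test matmul OK:", (x @ x).shape)
```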
It didn't occur to me that "uncensored model" means it makes no attempt to sound professional (Llama-2-13B-GPTQ)
1
2023-07-19T16:46:05
https://i.redd.it/naizd1toaycb1.png
NLTPanaIyst
i.redd.it
1970-01-01T00:00:00
0
{}
15408ei
false
null
t3_15408ei
/r/LocalLLaMA/comments/15408ei/it_didnt_occur_to_me_that_uncensored_model_means/
false
false
https://a.thumbs.redditm…v5dAiITs5KD4.jpg
1
{'enabled': True, 'images': [{'id': '5_-s2URQgjmY6rajGEG2ayQZDHulESNJ5i0l6Sj8gWY', 'resolutions': [{'height': 74, 'url': 'https://preview.redd.it/naizd1toaycb1.png?width=108&crop=smart&auto=webp&s=06c831cf51e6a18e33bd2c84a8e3a5f43be718f6', 'width': 108}, {'height': 148, 'url': 'https://preview.redd.it/naizd1toaycb1.png?width=216&crop=smart&auto=webp&s=bc725b9893862bd22ac7084b4427eda6b848b0d2', 'width': 216}, {'height': 219, 'url': 'https://preview.redd.it/naizd1toaycb1.png?width=320&crop=smart&auto=webp&s=73d4cf1c5ab719e718e52d8ff32589ba802c4cc8', 'width': 320}, {'height': 438, 'url': 'https://preview.redd.it/naizd1toaycb1.png?width=640&crop=smart&auto=webp&s=f50337052fa522291f15d37574b77880a91e37dc', 'width': 640}, {'height': 657, 'url': 'https://preview.redd.it/naizd1toaycb1.png?width=960&crop=smart&auto=webp&s=b1dfea150e7ccba0de5bfe88df2f4b3e9e4b1417', 'width': 960}, {'height': 740, 'url': 'https://preview.redd.it/naizd1toaycb1.png?width=1080&crop=smart&auto=webp&s=b16a71dacf6ec94db78dca1f84924a85d2a54e34', 'width': 1080}], 'source': {'height': 1106, 'url': 'https://preview.redd.it/naizd1toaycb1.png?auto=webp&s=3789297442f59aa40172195c47c2164ff27e2819', 'width': 1614}, 'variants': {}}]}
Llama 2 try out free
1
[removed]
2023-07-19T16:54:25
https://www.reddit.com/r/LocalLLaMA/comments/1540g2p/llama_2_try_out_free/
prdmgmt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1540g2p
false
null
t3_1540g2p
/r/LocalLLaMA/comments/1540g2p/llama_2_try_out_free/
false
false
self
1
{'enabled': False, 'images': [{'id': 'YBiNYapi6D4Z8bwm8d0GQuZwncMj-OfTeKftpAxiFbk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/8jH1uVFeySLxRDUKSxy7bHlE4dONrQ59pQjW3ZtvjWA.jpg?width=108&crop=smart&auto=webp&s=26978bf5041b150520d3aa6c259d2f036ef2d4de', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/8jH1uVFeySLxRDUKSxy7bHlE4dONrQ59pQjW3ZtvjWA.jpg?width=216&crop=smart&auto=webp&s=03b5e74e7deefb66c8346a6f02e5fb0ed2fe4a30', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/8jH1uVFeySLxRDUKSxy7bHlE4dONrQ59pQjW3ZtvjWA.jpg?width=320&crop=smart&auto=webp&s=d423a3ed080ef488245937af29646a9615a68f90', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/8jH1uVFeySLxRDUKSxy7bHlE4dONrQ59pQjW3ZtvjWA.jpg?width=640&crop=smart&auto=webp&s=c1928795977881aaa604e22c6e9c8728cb40900b', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/8jH1uVFeySLxRDUKSxy7bHlE4dONrQ59pQjW3ZtvjWA.jpg?width=960&crop=smart&auto=webp&s=f23a870d00b1a645afda779d2d10a16169cb7e7f', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/8jH1uVFeySLxRDUKSxy7bHlE4dONrQ59pQjW3ZtvjWA.jpg?width=1080&crop=smart&auto=webp&s=2fe153488fe007c47080e3e82fb136c3fc0f94a9', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/8jH1uVFeySLxRDUKSxy7bHlE4dONrQ59pQjW3ZtvjWA.jpg?auto=webp&s=3ee3b9d832e50ea92fba5a5dfc6caf12cfa00ae7', 'width': 1200}, 'variants': {}}]}
Luna AI 7B Chat Uncensored (LLama 2 finetune)
1
After testing Llama 2 yesterday and getting a moral refusal from it to kill a JS function :) I decided to do something about it and provide a less censored model. Today we're releasing a new Llama 2 7B chat model. "Luna AI Llama2-7b Uncensored" is a Llama 2 based model fine-tuned on over 40,000 multi-round chats between Human & AI. This model was fine-tuned by [Tap](https://tap.pm/). The result is an enhanced Llama 2 7B chat model that rivals ChatGPT in performance across a variety of tasks. This model stands out for its long responses, low hallucination rate, and absence of censorship mechanisms. The fine-tuning process was performed on an 8x A100 80GB machine. Link: [https://huggingface.co/Tap-M/Luna-AI-Llama2-Uncensored](https://huggingface.co/Tap-M/Luna-AI-Llama2-Uncensored)
2023-07-19T16:57:37
https://www.reddit.com/r/LocalLLaMA/comments/1540j3i/luna_ai_7b_chat_uncensored_llama_2_finetune/
yanjb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1540j3i
false
null
t3_1540j3i
/r/LocalLLaMA/comments/1540j3i/luna_ai_7b_chat_uncensored_llama_2_finetune/
false
false
self
1
null
Guide: Running Llama 2 on Runpod with Oobabooga's text-generation-webui
1
2023-07-19T17:19:21
https://gpus.llm-utils.org/running-llama-2-on-runpod-with-oobaboogas-text-generation-webui/
TikkunCreation
gpus.llm-utils.org
1970-01-01T00:00:00
0
{}
15413lq
false
null
t3_15413lq
/r/LocalLLaMA/comments/15413lq/guide_running_llama_2_on_runpod_with_oobaboogas/
false
false
default
1
null
If LLAMA2 is my neighbor, I am a dead person now !
1
[removed]
2023-07-19T17:24:29
https://www.reddit.com/r/LocalLLaMA/comments/15418fr/if_llama2_is_my_neighbor_i_am_a_dead_person_now/
RuwDz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15418fr
false
null
t3_15418fr
/r/LocalLLaMA/comments/15418fr/if_llama2_is_my_neighbor_i_am_a_dead_person_now/
false
false
https://b.thumbs.redditm…iK4nZgWq-8gw.jpg
1
null
Llama2-7b vs Falcon-7b qLoRA finetuning on Paraphrasing and Changing the tone of a sentence
1
[removed]
2023-07-19T17:36:57
https://www.reddit.com/r/LocalLLaMA/comments/1541k1w/llama27b_vs_falcon7b_qlora_finetuning_on/
krumb0y
self.LocalLLaMA
2023-07-19T17:46:29
0
{}
1541k1w
false
null
t3_1541k1w
/r/LocalLLaMA/comments/1541k1w/llama27b_vs_falcon7b_qlora_finetuning_on/
false
false
default
1
{'enabled': False, 'images': [{'id': 'm9jmQQ11SdwqjO3XUjnEn1ZPpbqhFYTu8x6Z_FiQPQ0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PVUVuJ8E8RT_2CKxSFjC90JBBrk8jFGYj_Va3TanazQ.jpg?width=108&crop=smart&auto=webp&s=b5c9a699f07abf448fdcde5857c81f989852a3ae', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PVUVuJ8E8RT_2CKxSFjC90JBBrk8jFGYj_Va3TanazQ.jpg?width=216&crop=smart&auto=webp&s=36fd912fae3c4940954e6b932078fab37fbc1474', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PVUVuJ8E8RT_2CKxSFjC90JBBrk8jFGYj_Va3TanazQ.jpg?width=320&crop=smart&auto=webp&s=d740ff4872c1cfffa680f0bc25bca7194ff2aee9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PVUVuJ8E8RT_2CKxSFjC90JBBrk8jFGYj_Va3TanazQ.jpg?width=640&crop=smart&auto=webp&s=8b54594fafe72bc2d4ab7d02158eb1eb7933a5c3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PVUVuJ8E8RT_2CKxSFjC90JBBrk8jFGYj_Va3TanazQ.jpg?width=960&crop=smart&auto=webp&s=043933583fdf8e0d2eaea903f226292723604789', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PVUVuJ8E8RT_2CKxSFjC90JBBrk8jFGYj_Va3TanazQ.jpg?width=1080&crop=smart&auto=webp&s=ec626dfddf277cc0bd58fe53a63e0a5b98f2055d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PVUVuJ8E8RT_2CKxSFjC90JBBrk8jFGYj_Va3TanazQ.jpg?auto=webp&s=84c25f943d463c7c97570f636c0ba766163084b0', 'width': 1200}, 'variants': {}}]}
Quick guide to deploying LLaMA 2 in your cloud
1
2023-07-19T17:41:08
https://github.com/skypilot-org/skypilot/tree/master/llm/llama-2
skypilotucb
github.com
1970-01-01T00:00:00
0
{}
1541nt0
false
null
t3_1541nt0
/r/LocalLLaMA/comments/1541nt0/quick_guide_to_deploying_llama_2_in_your_cloud/
false
false
https://b.thumbs.redditm…_LTDvC6QyxPg.jpg
1
{'enabled': False, 'images': [{'id': 'IfiiogqnjlImjMfTf6AshIbJBDAZcW_1mlwci4zlVSE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mijR_zR7ca9D_2wl75W0LgdMPdXxQtWeCh618UK_uB4.jpg?width=108&crop=smart&auto=webp&s=1d90fdf93c8f8458d4e364bf8f659cf8e3f8da2a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mijR_zR7ca9D_2wl75W0LgdMPdXxQtWeCh618UK_uB4.jpg?width=216&crop=smart&auto=webp&s=da49573fadcaa3f8a2775361d3af5e8b1a1333f8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mijR_zR7ca9D_2wl75W0LgdMPdXxQtWeCh618UK_uB4.jpg?width=320&crop=smart&auto=webp&s=587426f63bafbb40cc10de46b9daa71679b7b7c8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mijR_zR7ca9D_2wl75W0LgdMPdXxQtWeCh618UK_uB4.jpg?width=640&crop=smart&auto=webp&s=fce76ae0c83e5048c15fb11e686a682480be7662', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mijR_zR7ca9D_2wl75W0LgdMPdXxQtWeCh618UK_uB4.jpg?width=960&crop=smart&auto=webp&s=a56b9c8794c546e5b2950d42bd590fe78f301433', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mijR_zR7ca9D_2wl75W0LgdMPdXxQtWeCh618UK_uB4.jpg?width=1080&crop=smart&auto=webp&s=844f69109b55c3ccb22eac7816a4a872c0077e4a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mijR_zR7ca9D_2wl75W0LgdMPdXxQtWeCh618UK_uB4.jpg?auto=webp&s=9e3bd71387d925bb747f7d773ba0ac53934eb75a', 'width': 1200}, 'variants': {}}]}
Newbie help
1
Hey guys, I wanted to try out local LLMs but I don't know where to start. I've seen llama.cpp, koboldcpp, and even the oobabooga GUI... which one should I use for the best performance and ease of use? I have 12GB of VRAM as well.
2023-07-19T18:19:06
https://www.reddit.com/r/LocalLLaMA/comments/1542mw4/newbie_help/
ipechman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1542mw4
false
null
t3_1542mw4
/r/LocalLLaMA/comments/1542mw4/newbie_help/
false
false
self
1
null
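Since the poster above has 12 GB of VRAM, one low-friction route is a quantized GGML model through `llama-cpp-python` with partial GPU offload. A minimal sketch, assuming the package was built with GPU (cuBLAS) support; the model path and layer count are placeholders:

```python
# Sketch: run a local GGML quant with part of the layers offloaded to the GPU.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-13b-chat.ggmlv3.q4_K_M.bin",  # placeholder path
    n_ctx=2048,
    n_gpu_layers=35,  # tune to what fits in 12 GB of VRAM
)
out = llm("Q: What should a beginner try first? A:", max_tokens=128)
print(out["choices"][0]["text"])
```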
Llama-2-70b-chat-hf went totally off the rails after a simple prompt... my goodness
1
2023-07-19T18:26:07
https://i.redd.it/nv2dkcldsycb1.png
SmashShock
i.redd.it
1970-01-01T00:00:00
0
{}
1542tgo
false
null
t3_1542tgo
/r/LocalLLaMA/comments/1542tgo/llama270bchathf_went_totally_off_the_rails_after/
false
false
https://a.thumbs.redditm…hHPnpiJCyb70.jpg
1
{'enabled': True, 'images': [{'id': 'xZXqMvaYRFZRTx0Pf4m3gKq5SNfY_BxDtq6xKzttvvY', 'resolutions': [{'height': 117, 'url': 'https://preview.redd.it/nv2dkcldsycb1.png?width=108&crop=smart&auto=webp&s=8ebca654c4c2c8ad0fbab41a7b91321879b0ed0d', 'width': 108}, {'height': 234, 'url': 'https://preview.redd.it/nv2dkcldsycb1.png?width=216&crop=smart&auto=webp&s=d07ed78e1b1cd1057c3bf1774e61841e4fa4ca6b', 'width': 216}, {'height': 348, 'url': 'https://preview.redd.it/nv2dkcldsycb1.png?width=320&crop=smart&auto=webp&s=c8a8eb45309fe009217a261f13de64d3416c8207', 'width': 320}, {'height': 696, 'url': 'https://preview.redd.it/nv2dkcldsycb1.png?width=640&crop=smart&auto=webp&s=4d81a7afaaba255bd6322dd2c4b5e6a86e5e8470', 'width': 640}, {'height': 1044, 'url': 'https://preview.redd.it/nv2dkcldsycb1.png?width=960&crop=smart&auto=webp&s=ffbf6e1e1939415e2f0729152c1a6bd8e4628ded', 'width': 960}, {'height': 1174, 'url': 'https://preview.redd.it/nv2dkcldsycb1.png?width=1080&crop=smart&auto=webp&s=ab2a57319bf6117df4e4f336b1c06309a1313901', 'width': 1080}], 'source': {'height': 2088, 'url': 'https://preview.redd.it/nv2dkcldsycb1.png?auto=webp&s=bb368797cc8a2eaf491f4cb589d9bc9f7732af4c', 'width': 1920}, 'variants': {}}]}
A fork of Llama 2 that runs on the CPU
1
Hi everyone, I've made a quick adaptation of Llama 2 to run on the CPU so I could run a few tests. In case anyone is interested, here is the link: [https://github.com/krychu/llama](https://github.com/krychu/llama) The instructions for installation and usage are the same. I get 1 word per ~1.5 secs on the MacBook Pro M1, plus the warm-up time. Unfortunately, I don't have other machines to test it on, so if you happen to run into any issues it'd be great to hear the feedback.
2023-07-19T18:34:11
https://www.reddit.com/r/LocalLLaMA/comments/15430u8/a_fork_of_llama_2_that_runs_on_the_cpu/
krychu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15430u8
false
null
t3_15430u8
/r/LocalLLaMA/comments/15430u8/a_fork_of_llama_2_that_runs_on_the_cpu/
false
false
self
1
{'enabled': False, 'images': [{'id': 'K2X9aA2F0ECkbT1Go5wn7EgiC2ntheAcqsMmIggaS_M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dvJev1bY_DzSLbyjuiNkjmziRHoVCxqHyw7njyawLfo.jpg?width=108&crop=smart&auto=webp&s=f27078557554b58d4f848f7224d3e4e12a2c3782', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dvJev1bY_DzSLbyjuiNkjmziRHoVCxqHyw7njyawLfo.jpg?width=216&crop=smart&auto=webp&s=08ef0a197902e3a3bf1c809bc5f5a9067e50ab4c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dvJev1bY_DzSLbyjuiNkjmziRHoVCxqHyw7njyawLfo.jpg?width=320&crop=smart&auto=webp&s=45cfb32bb3337e417f17095ee8d48be86e02de9b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dvJev1bY_DzSLbyjuiNkjmziRHoVCxqHyw7njyawLfo.jpg?width=640&crop=smart&auto=webp&s=af9e0defd5106036e2c0d69683218023ff49a2e4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dvJev1bY_DzSLbyjuiNkjmziRHoVCxqHyw7njyawLfo.jpg?width=960&crop=smart&auto=webp&s=d04bbffe131e1a65588c8141130202746498a978', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dvJev1bY_DzSLbyjuiNkjmziRHoVCxqHyw7njyawLfo.jpg?width=1080&crop=smart&auto=webp&s=261ec9a03754bdad157fe250da9617ff0e6631c7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dvJev1bY_DzSLbyjuiNkjmziRHoVCxqHyw7njyawLfo.jpg?auto=webp&s=958ce1c2ea85378a38dfbba362446497fa36b7d7', 'width': 1200}, 'variants': {}}]}
Totally useless, llama 70b refuses to kill a process
1
They have over-lobotomized it; this is Llama 70B https://preview.redd.it/m8j3xaa31zcb1.png?width=981&format=png&auto=webp&s=63f49b58068333af4ba096e3a67695b9d997a05c
2023-07-19T19:15:04
https://www.reddit.com/r/LocalLLaMA/comments/15442iy/totally_useless_llama_70b_refuses_to_kill_a/
Killerx7c
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15442iy
false
null
t3_15442iy
/r/LocalLLaMA/comments/15442iy/totally_useless_llama_70b_refuses_to_kill_a/
false
false
https://b.thumbs.redditm…LJipFNJAKVrg.jpg
1
null
LLM like Akinator?
1
[removed]
2023-07-19T19:18:23
https://www.reddit.com/r/LocalLLaMA/comments/15445lu/llm_like_akinator/
chocolatebanana136
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15445lu
false
null
t3_15445lu
/r/LocalLLaMA/comments/15445lu/llm_like_akinator/
false
false
self
1
null
Poor thing having trauma from SFT/RLHF (Llama-2-chat)
1
2023-07-19T20:09:02
https://i.redd.it/1xj72jzp9zcb1.png
m_m_m_j
i.redd.it
1970-01-01T00:00:00
0
{}
1545h1a
false
null
t3_1545h1a
/r/LocalLLaMA/comments/1545h1a/poor_thing_having_trauma_from_sftrlhf_llama2chat/
false
false
https://b.thumbs.redditm…gOd8uP5kodzY.jpg
1
{'enabled': True, 'images': [{'id': '8SVdWrQrs6kQzPYp5SCI7V0M4NNHl3V0mi5_o1E8YYE', 'resolutions': [{'height': 13, 'url': 'https://preview.redd.it/1xj72jzp9zcb1.png?width=108&crop=smart&auto=webp&s=1536585dced36bfda2960f0cb202f290c384681f', 'width': 108}, {'height': 26, 'url': 'https://preview.redd.it/1xj72jzp9zcb1.png?width=216&crop=smart&auto=webp&s=98e76a5cd95f26143e8d4304426b7ef007b16812', 'width': 216}, {'height': 38, 'url': 'https://preview.redd.it/1xj72jzp9zcb1.png?width=320&crop=smart&auto=webp&s=50467c626a428a2326fc45a53c83c4b6d0dcca63', 'width': 320}, {'height': 77, 'url': 'https://preview.redd.it/1xj72jzp9zcb1.png?width=640&crop=smart&auto=webp&s=5d9978aa9d7acf39fc02111c75076cc2b9d47ddc', 'width': 640}, {'height': 116, 'url': 'https://preview.redd.it/1xj72jzp9zcb1.png?width=960&crop=smart&auto=webp&s=9b91a49c5af8e4cd7f6760b740d5f0df7ac6b533', 'width': 960}, {'height': 131, 'url': 'https://preview.redd.it/1xj72jzp9zcb1.png?width=1080&crop=smart&auto=webp&s=e69365dfb4bfaff6461ace4dc9904b71546c2e49', 'width': 1080}], 'source': {'height': 358, 'url': 'https://preview.redd.it/1xj72jzp9zcb1.png?auto=webp&s=3226e24013607ecd1e85a4fc80c35bc1be694bc9', 'width': 2946}, 'variants': {}}]}
24GB vram on a budget
1
Recently I felt an urge for a GPU that allows training of modestly sized models and inference of pretty big ones while still staying on a reasonable budget. Got myself an old Tesla P40 datacenter GPU (GP102, the same silicon as the GTX 1080 but with 24GB ECC VRAM, 2016) for 200€ from eBay. It's the best of the affordable: terribly slow compared to today's RTX 3xxx/4xxx but big. The K80 (Kepler, 2014) and M40 (Maxwell, 2015) are far slower, while the P100 is a bit better for training but still more expensive and only has 16GB, and the Volta-class V100 (RTX 2xxx era) is far above my price point. Tesla datacenter GPUs don't have their own fans because they get cooled by the case, so you have to print an adapter and mount a radial blower, which cools more than enough. Take care that you buy one that doesn't sound like an airplane. Also, it's a bit tricky to get it up and running because it has no display connector (HDMI etc.): it is technically a GPU, but it is not intended as a desktop graphics card; it's meant either as a vGPU for vServers (one physical system, up to 8 virtual servers) or as a pure CUDA accelerator (TCC mode). So you need a second card or just a CPU with onboard graphics. For those of you running Windows: really, don't run Windows when doing ML stuff. But OK, if you do anyway, there is a nice [hack](https://github.com/toAlice/NvidiaTeslaP40forGaming) for setting a P40 from TCC to WDDM mode so you can use it as an actual graphics card. Hope this helps!
2023-07-19T20:44:45
https://www.reddit.com/gallery/1546dvc
philipgutjahr
reddit.com
1970-01-01T00:00:00
0
{}
1546dvc
false
null
t3_1546dvc
/r/LocalLLaMA/comments/1546dvc/24gb_vram_on_a_budget/
false
false
https://b.thumbs.redditm…3tvxvlo2T_bE.jpg
1
null
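One practical caveat with the P40 mentioned above is that its fp16 throughput is crippled, so most backends run it in fp32. A quick way to see this on any CUDA card is to time the same matmul in both precisions; a rough sketch, not a rigorous benchmark:

```python
# Sketch: compare fp32 vs fp16 matmul throughput on the installed CUDA GPU.
import time
import torch

def bench(dtype, n=4096, iters=20):
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()
    return 2 * n**3 * iters / (time.time() - start) / 1e12  # TFLOPS

print(f"fp32: {bench(torch.float32):.2f} TFLOPS")
print(f"fp16: {bench(torch.float16):.2f} TFLOPS")
```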
xgen 7B 8k context finetuned on Guanaco
16
2023-07-19T20:51:57
https://huggingface.co/ethanhs/xgen-7b-8k-guanaco
ethanhs
huggingface.co
1970-01-01T00:00:00
0
{}
1546kiv
false
null
t3_1546kiv
/r/LocalLLaMA/comments/1546kiv/xgen_7b_8k_context_finetuned_on_guanaco/
false
false
https://b.thumbs.redditm…-ZSmc7jehPbw.jpg
16
{'enabled': False, 'images': [{'id': 'TQ8OV14Pyxm9Njqs1vWE68B8YcmZyV6fmyaECDc61C4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/62u_qmesEDfhfyQQ8wNhegWUB99SAo6d6nr1GZSPGow.jpg?width=108&crop=smart&auto=webp&s=c659e2f32bbe2ee3314a1247263b602975e183c5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/62u_qmesEDfhfyQQ8wNhegWUB99SAo6d6nr1GZSPGow.jpg?width=216&crop=smart&auto=webp&s=6ca10f05dc9c20fcee1a79dfe834b7567436f257', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/62u_qmesEDfhfyQQ8wNhegWUB99SAo6d6nr1GZSPGow.jpg?width=320&crop=smart&auto=webp&s=130a7b545cca1a490ad217da8993fb9d9a9bf3bc', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/62u_qmesEDfhfyQQ8wNhegWUB99SAo6d6nr1GZSPGow.jpg?width=640&crop=smart&auto=webp&s=666f8428462721d2ac4e2b1652ca9070cc4d04f1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/62u_qmesEDfhfyQQ8wNhegWUB99SAo6d6nr1GZSPGow.jpg?width=960&crop=smart&auto=webp&s=1944fd6affd5f506e4940e8d3ed912fdea2ab4cd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/62u_qmesEDfhfyQQ8wNhegWUB99SAo6d6nr1GZSPGow.jpg?width=1080&crop=smart&auto=webp&s=64353db1d49f8d3a7804c155f5f969e63a04977e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/62u_qmesEDfhfyQQ8wNhegWUB99SAo6d6nr1GZSPGow.jpg?auto=webp&s=60cd5053a61cff46a8c578643ece0aec5a3119e4', 'width': 1200}, 'variants': {}}]}
Any thoughts? 13B q2_K vs 7b q6_k
1
Consider that I have the same model in two variants: a = 13B q2_K and b = 7B q6_K. They weigh about the same on disk. How do they differ in their answers? Which would you prefer, and why?
2023-07-19T20:56:56
https://www.reddit.com/r/LocalLLaMA/comments/1546p11/any_thoughts_13b_q2_k_vs_7b_q6_k/
domrique
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1546p11
false
null
t3_1546p11
/r/LocalLLaMA/comments/1546p11/any_thoughts_13b_q2_k_vs_7b_q6_k/
false
false
self
1
null
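The "they weigh the same" observation in the post above is easy to check with back-of-the-envelope arithmetic: file size is roughly parameters × bits-per-weight / 8, plus some overhead for the k-quant block scales. The effective bits-per-weight figures below are assumptions for illustration, since k-quants mix block types:

```python
# Sketch: rough on-disk size of a quantized model, ignoring per-block scale overhead.
def approx_size_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Assumed effective bits per weight (illustrative, not exact)
print(f"13B at ~3.4 bpw (q2_K): {approx_size_gb(13, 3.4):.1f} GB")
print(f" 7B at ~6.6 bpw (q6_K): {approx_size_gb(7, 6.6):.1f} GB")
```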
Model size relevance when finetuning to learn specific data?
1
Let's say I want to finetune a model to answer questions based on a very specific set of information, e.g. corporate data. Everything else, like who is the president of France, is completely irrelevant. It would be nice if it would refuse to answer those questions, but it's not necessary. How relevant is the model size then? Can I just pick a 3B model for finetuning, or do bigger models somehow perform better at answering questions about that fine-tuned data?
2023-07-19T21:05:07
https://www.reddit.com/r/LocalLLaMA/comments/1546x33/model_size_relevance_when_finetuning_to_learn/
Koliham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1546x33
false
null
t3_1546x33
/r/LocalLLaMA/comments/1546x33/model_size_relevance_when_finetuning_to_learn/
false
false
self
1
null
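On the mechanics side, the usual way to teach a model a narrow corpus (whatever the base size) is a LoRA/QLoRA fine-tune rather than full training. A minimal configuration sketch with `peft`; the base model id and hyperparameters are placeholder assumptions:

```python
# Sketch: attach LoRA adapters to a small causal LM for a narrow, domain-specific fine-tune.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_3b")  # placeholder 3B base
config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # typical for Llama-style attention
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the small adapter weights are trained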
Why are references to u/The-Bloke getting deleted here?
1
[removed]
2023-07-19T21:24:07
https://www.reddit.com/r/LocalLLaMA/comments/1547eyd/why_are_references_to_uthebloke_getting_deleted/
InvalidCharacters
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1547eyd
false
null
t3_1547eyd
/r/LocalLLaMA/comments/1547eyd/why_are_references_to_uthebloke_getting_deleted/
false
false
self
1
null
Why aren't we using highly efficient int8 calculations in quants? (maybe eli14?)
1
**TL;DR:** There's 8 bit precision operations (dp4a/dp2a) that are highly efficient on cards as far back as the P40, and across most every current graphics card as well. The drawback seems to be "precision is limited to 8 bits", but we're already quantizing the underlying data we're operating on to a lower precision than this. Is there a reason DP4A or DP2A isn't commonly used for calculations on quantized transformer based LLMs today, given what appears to be substantial potential for performance improvement across the board? **Note:** This post specifically relates to int8 *\_calculation precision\_* (vs current 16bit/32bit, even on quantized models) and not int8 *\_storage precision\_* (vs current 4bit/etc. in quantized models) ​ **Full Post:** There was recently another post on this subreddit that reraised the spector of this question for me, so I decided to finally put it to the community and see if you can help me sleep easier that it's been checked out and discarded by someone with more experience. For reference, I have 2xP40, 1x3090, and some other small cards I'm running inference on just for fun at the moment (hope to make it useful, but just trying to keep up with advancements for the moment). I'll admit I wish there were significantly more performance friendly mechanisms for running transformers on the P40 cards, so I did quite a bit of digging into why these cards originally slated for ML work are just \_bad\_ at much of what we're doing currently. Basically, it seems to boil down to "we're nearly always doing fp16 and/or fp32 operations rather than int8 operations, and p40s are objectively bad at the fp16 side, and only somewhat okay at fp32". Thing is, it \_seems\_ like, from an outside observer, we're quantizing these models to pack them into smaller space (VRAM/etc.) vs fp16 or fp32, but then doing very similar fp16 (or even fp32) based calculations, rather than a significantly a more efficient int8 operation which could quite likely be used to perform far better not only on p40 cards, but also on more recent cards. So far as I can tell, the int8 operations still leverage cores that are highly efficient on newer cards, so I'd also see a performance increase on my 3090 as well. ​ **What I came up with digging into int8 operations in a nutshell:** * P40 cards are built to be \_very\_ fast at performing INT8 calculations, particularly using the DP4A call, from cuda code among other libraries. (at least 3x faster than fp32 on Pascal, seems to be 2-3x faster vs fp16 on a recent card in volta/turing and later, though my testing here is shakier because I'm not starting with code optimized for the card). * Support for performing this operation efficiently isn't a market differentiator for NVidia anymore, though it i*s* still built in, well supported and increasingly fast on newer NVidia cards. (this is a guess why it's missing from the CUDA math operation documentation I found, even though it compiles and works fine in tensorrt, cudnn, cublas and explicitly included since Cuda 8: [https://developer.nvidia.com/blog/mixed-precision-programming-cuda-8/](https://developer.nvidia.com/blog/mixed-precision-programming-cuda-8/)) * This operation \_seems\_ to be a foundation for at least some portion of "AI upscaling" work that's highly efficient for the CV realm (i.e. DLSS/FSA/XeSS). 
It seems established how to efficiently quantize a CNN to do this rather than fp32 for example ([https://on-demand.gputechconf.com/gtc/2017/presentation/s7310-8-bit-inference-with-tensorrt.pdf](https://on-demand.gputechconf.com/gtc/2017/presentation/s7310-8-bit-inference-with-tensorrt.pdf)) * Given the output of the DP4A function is an int32, future input into the DP4A function may require requantizing (using scaling factors that presumably include floating point values), but this seems to have been resolved in arXiv:2207.01405v3 ([https://arxiv.org/pdf/2207.01405.pdf](https://arxiv.org/pdf/2207.01405.pdf)) section 3.2 by converting to a dyadic number and performing integer only math and bit shifting. * Note, this paper might actually solve a number of issues involved with transformers using int math, a not insignificant amount of it was above my head. Its focus is on vision transformers, but the math involved appears to be the same. * It looks like this optimization improves far more than p40 cards. It'd mean running highly efficient inference on AMD cards and Intel cards also, including cards prior to the latest generation. These operations have been optimized across those as well, and it doesn't appear this would be a significant lift to make work across architectures given it's a commonly implemented pattern. For example, it's built into the following: * AMD: Vega VII and later as V\_DOT4\_I32\_I8/V\_DOT4\_U32\_U8 * Intel: Gfx12, DG2, Xe and later like ARC cards, including integrated GPUs since gen 11 * ARM has hardware support in Mali and Valhall as well * Nvidia: Anything Pascal and later, especially the P40 * There's even an implementation under discussion to have the dp4a instruction added to WebGPU ([https://github.com/gpuweb/gpuweb/issues/2677](https://github.com/gpuweb/gpuweb/issues/2677)) * See [https://www.reddit.com/r/intel/comments/p7ikre/dp4a\_support\_across\_gpu\_archs/](https://www.reddit.com/r/intel/comments/p7ikre/dp4a_support_across_gpu_archs/) and [https://www.reddit.com/r/Amd/comments/xqak61/testing\_xesss\_dp4a\_path\_on\_a\_6700xt/](https://www.reddit.com/r/Amd/comments/xqak61/testing_xesss_dp4a_path_on_a_6700xt/) for more discussion. * A paper I hoped might have context I understood on this (arXiv:2101.11748v1) actually talks about int4 and smaller type quants, though it references DP4A/DP2A instructions with "Since the Pascal architecture, Nvidia GPUs implement spatial decomposition via DP4A and DP2A instructions, where INT32 units are decomposed into 4-input INT8 or 2-input INT16 inner products". That said, some areas of the paper ([https://arxiv.org/pdf/2101.11748.pdf](https://arxiv.org/pdf/2101.11748.pdf)) may still be relevant, given a similar problem space, for someone who understands more of the math involved. ​ At first, I assumed we were using this mechanism and I was missing it somehow. For example, bitsandbytes reduces the dtype to int8, but it then performs fp32 calculations (not int8 calculations). I'm also seeing only fp16 and/or fp32 calculations throughout llama.cpp, koboldcpp, exllama, etc. Especially for quant forms like GGML, it seems like this should be pretty straightforward, though for GPTQ I understand we may be working with full 16 bit floating point values for some calculations. I'd think it may well still be applicable if these are mostly the bias (added in after the linear multiplication is performed, which also results in a 32bit int). 
It may involve some clever math like in the paper referenced above, since the bias is a 32bit int, but from what's admittedly a nieve perspective this seems like a surmountable problem. There's also dp2a which can do similar with int16 and int8, if greater precision is neccesary. I'd expect other quant methods, especially those using any variation of fixed fp4 or fixed fp8 would be able to leverage the same operations. Even without understanding the math involved in some things here, I do understand fixed precision floating point is easily translated to/from integer math. ​ **Additional notes:** * I'm \_not\_ from a mathematical background by any stretch. I'm a self taught coder who's written and maintained some pretty complicated systems in C/C++/Python/etc. over the course of several decades now, but I'm basically out of my depth with many of the conversations getting into math heavy side of papers I've read. I can grasp the general idea, and I know the hardware/firmware/microcode side of things plenty well enough to understand concepts like certain architectures having an efficient computational path for specific operations, as well as how we can use this in unintended ways by proving (mathmatically especially) that a separate problem can be solved via other means. * I've dug into the source of the major current libraries and tools mentioned here, rewriting bits of the code (cuda code especially) to try and understand them better (llama.cpp, koboldcpp, exllama, bitsandbytes/transformers, ctranslate, etc.) to one extent or another, at least teasing apart enough of it to make sure I'm not just completely wrong on the face of my question. I'm hoping our community can help me understand, and of course also silently crossing my fingers that maybe it's an area I'll get an answer like "the whole problem space has moved really fast, we need to put some blinders on so we aren't really paying close attention to work in other ML realms, no one at the level to implement it has looked hard enough at using that specific mechanism to solve our problems". * I've written and re-written this same post numerous times, discarding it every time because as I dig into it there's a lot of "optimize memory access processes" type conversation going on as I read into optimization work, so I assume much of this has to have been dug into by Occam/etc. That said, I saw another post on the p4/p40 come up on this subreddit, so at least I'm not the \_only\_ one wondering around about something similar. I'm hoping I can get an answer to the question here, at least to an extent I can have peace of mind there's not just some blind spot being missed that could drastically improve inference speed and open up really using the p40 cards for quantized transformers (also I can maybe stop writing and rewriting the same PoC code inside koboldcpp that breaks everything, but shows a dramatically increased efficiency with int8 operations vs fp16 operations by tracing) * Occ4m is certainly aware of the operation, not only mentioning it in a previous comment ([https://github.com/ggerganov/llama.cpp/discussions/915#discussioncomment-5769295](https://github.com/ggerganov/llama.cpp/discussions/915#discussioncomment-5769295)) but also in reference talking about pascal cards ([https://github.com/TimDettmers/bitsandbytes/issues/165#issuecomment-1465114943](https://github.com/TimDettmers/bitsandbytes/issues/165#issuecomment-1465114943)). 
That reference also mentions "when 4bit gets released", but I haven't investigated 4bit bitsandbytes enough to see if it uses int8 (this was on my list to investigate before posting, but that list hasn't been empty since I started, and I know I'm retreading ground others are deeply involved in at this point). I'm pretty sure digging through the codebase for bitsandbytes didn't turn up any references to dp4a/dp2a however. * This is the first time I've \_ever\_ read someone else's homework notes to understand part of anything, so I have to include a reference: [https://github.com/huangrt01/CS-Notes/blob/master/Notes/Output/nvidia.md](https://github.com/huangrt01/CS-Notes/blob/master/Notes/Output/nvidia.md)
2023-07-19T21:35:03
https://www.reddit.com/r/LocalLLaMA/comments/1547ox6/why_arent_we_using_highly_efficient_int8/
dragonfyre13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1547ox6
false
null
t3_1547ox6
/r/LocalLLaMA/comments/1547ox6/why_arent_we_using_highly_efficient_int8/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Mt24qlEQixxXdzuDxTY-IWgWUxfAXLBBwXjLsxkpDE0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/6YHTzda_-QmQ0jfyA5yhgS61epntT6sRXeMUT9VaJqw.jpg?width=108&crop=smart&auto=webp&s=ea8dcfa00d872afa74e581d6d94308e0cf2d9591', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/6YHTzda_-QmQ0jfyA5yhgS61epntT6sRXeMUT9VaJqw.jpg?width=216&crop=smart&auto=webp&s=016cebcac7a9c76088b59cd64101dfc62841df1d', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/6YHTzda_-QmQ0jfyA5yhgS61epntT6sRXeMUT9VaJqw.jpg?width=320&crop=smart&auto=webp&s=c5820725dd61129c341a237c5b55b5030e8aa814', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/6YHTzda_-QmQ0jfyA5yhgS61epntT6sRXeMUT9VaJqw.jpg?width=640&crop=smart&auto=webp&s=178d4b4bff63cde1f7fb223d65b8f77189568323', 'width': 640}], 'source': {'height': 920, 'url': 'https://external-preview.redd.it/6YHTzda_-QmQ0jfyA5yhgS61epntT6sRXeMUT9VaJqw.jpg?auto=webp&s=49320394e9208627a024b75494ae11892244c302', 'width': 920}, 'variants': {}}]}
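To make the question above concrete, here is what a DP4A-style step looks like when emulated in NumPy: int8 inputs, exact int32 accumulation, then a requantize back to int8 using floating-point scales. This only illustrates the arithmetic the post is asking about; the scale values are made up and this is not how any particular backend implements it:

```python
# Sketch: emulate an int8 dot product with int32 accumulation (what DP4A does in hardware),
# followed by a simple requantization step. Scales are illustrative, not from a real model.
import numpy as np

rng = np.random.default_rng(0)
a = rng.integers(-128, 128, size=1024, dtype=np.int8)
w = rng.integers(-128, 128, size=1024, dtype=np.int8)

acc = np.dot(a.astype(np.int32), w.astype(np.int32))  # exact int32 accumulator

# Requantize: map the int32 result back into int8 range using per-tensor scales.
scale_a, scale_w, scale_out = 0.02, 0.01, 0.5  # assumed quantization scales
y = np.clip(np.round(acc * scale_a * scale_w / scale_out), -128, 127).astype(np.int8)
print(acc, y)
```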
Running Llama 2 locally in <10 min using XetHub
1
I wanted to play with Llama 2 right after its release yesterday, but it took me \~4 hours to download all 331GB of the 6 models. So I brought them into XetHub, where it’s now available for anyone to use: [https://xethub.com/XetHub/Llama2](https://xethub.com/XetHub/Llama2). By using xet mount you can get started in seconds, and within a few minutes, you’ll have the model generating text without needing to download everything or make an inference API call. `# From a g4dn.8xlarge instance in us-west-2:` `Mount complete in 8.629213s` `# install model requirements, and then ...` `(venv-test) ubuntu@ip-10-0-30-1:~/Llama2/code$ torchrun --nproc_per_node 1 example_chat_completion.py \` `--ckpt_dir ../models/llama-2-7b-chat/ \` `--tokenizer_path ../models/tokenizer.model \` `--max_seq_len 512 --max_batch_size 4` `> initializing model parallel with size 1` `> initializing ddp with size 1` `> initializing pipeline with size 1` `Loaded in 306.17 seconds` `User: what is the recipe of mayonnaise?` `> Assistant: Thank you for asking! Mayonnaise is a popular condiment made from a mixture of egg yolks, oil, vinegar or lemon juice, and seasonings. Here is a basic recipe for homemade mayonnaise:` `...` Detailed instructions here: [https://xethub.com/XetHub/Llama2](https://xethub.com/XetHub/Llama2). I'll also add the -GGML variants next for the folks using llama.cpp. Don’t forget to register with Meta to accept the license and acceptable use policy for these models!
2023-07-19T21:40:26
https://www.reddit.com/r/LocalLLaMA/comments/1547u2x/running_llama_2_locally_in_10_min_using_xethub/
rajatarya
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1547u2x
false
null
t3_1547u2x
/r/LocalLLaMA/comments/1547u2x/running_llama_2_locally_in_10_min_using_xethub/
false
false
self
1
{'enabled': False, 'images': [{'id': 'mSR0ayVuJYefoqO8_vAKaoNpzQ2ZUo0p8N23InnJohE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/KhGGztR0__MrPTnQetOBKzrPlRNPRNo_lHxgzUNZlCs.jpg?width=108&crop=smart&auto=webp&s=f69244d8b613c1437c97abe71449c2c396acb634', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/KhGGztR0__MrPTnQetOBKzrPlRNPRNo_lHxgzUNZlCs.jpg?width=216&crop=smart&auto=webp&s=5abf4234534fdd3ad42d0e5ec5b5dfe3f06cfa81', 'width': 216}], 'source': {'height': 290, 'url': 'https://external-preview.redd.it/KhGGztR0__MrPTnQetOBKzrPlRNPRNo_lHxgzUNZlCs.jpg?auto=webp&s=db0c1876efc332e13e498f96008fafa736b8bdaf', 'width': 290}, 'variants': {}}]}
Re-uploaded models from other user + removed posts
1
[removed]
2023-07-19T21:43:48
https://www.reddit.com/r/LocalLLaMA/comments/1547xd5/reuploaded_models_from_other_user_removed_posts/
RemarkableAd66
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1547xd5
false
null
t3_1547xd5
/r/LocalLLaMA/comments/1547xd5/reuploaded_models_from_other_user_removed_posts/
false
false
self
1
null
Ahem Ahem
1
2023-07-19T21:44:17
https://i.redd.it/blshu5x1szcb1.png
gijeri4793
i.redd.it
1970-01-01T00:00:00
0
{}
1547xs4
false
null
t3_1547xs4
/r/LocalLLaMA/comments/1547xs4/ahem_ahem/
false
false
https://b.thumbs.redditm…GW-9yI0v03cE.jpg
1
{'enabled': True, 'images': [{'id': 'GPMjbOkLy8VB_9C_BDIPArBsV8Dwy8VMLjeDUWjc-Jg', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/blshu5x1szcb1.png?width=108&crop=smart&auto=webp&s=64adf6f1220d96a5875c2b916f73b26867120afd', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/blshu5x1szcb1.png?width=216&crop=smart&auto=webp&s=8d1abc45eabaa3dabefb64369c3bacf24cdbc1f0', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/blshu5x1szcb1.png?width=320&crop=smart&auto=webp&s=743369f6993b765f4a476348e2f8106b63351808', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/blshu5x1szcb1.png?width=640&crop=smart&auto=webp&s=fb03b8fe2f3144c1764b97ff191bcc24f9055abb', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/blshu5x1szcb1.png?width=960&crop=smart&auto=webp&s=c989b0c328b4abbb773c4f49192534e6ec8801bb', 'width': 960}, {'height': 608, 'url': 'https://preview.redd.it/blshu5x1szcb1.png?width=1080&crop=smart&auto=webp&s=1949b0458d374721918c53ea07294bf17b1b81cd', 'width': 1080}], 'source': {'height': 608, 'url': 'https://preview.redd.it/blshu5x1szcb1.png?auto=webp&s=4899f53a70bc4e30a16d7c5b64e17590c5b78be2', 'width': 1080}, 'variants': {}}]}
Meta's alignment makes even a 70B sized model look ridiculous
1
I mean, okay, I would have expected this from a 7B *aligned* model... but 70B, and it can’t understand the meaning of killing a process?
2023-07-19T21:51:09
https://www.reddit.com/gallery/1548435
Evening_Ad6637
reddit.com
1970-01-01T00:00:00
0
{}
1548435
false
null
t3_1548435
/r/LocalLLaMA/comments/1548435/metas_alignment_makes_even_a_70b_sized_model_look/
false
false
https://b.thumbs.redditm…ovzwbzfkmJ5Q.jpg
1
null
One of my GGML based models initializes in less than a second, all the others take minutes (despite being the same size and type.)
1
Why would this be? pygmalion-7b-superhot-8k.ggmlv3.q3_K_S.bin loads in 0.14 seconds while all the others are slow. I'm using oobabooga + llama.cpp
2023-07-19T21:52:20
https://www.reddit.com/r/LocalLLaMA/comments/154856e/one_of_my_ggml_based_models_initializes_in_less/
Ai_is_unethical
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154856e
false
null
t3_154856e
/r/LocalLLaMA/comments/154856e/one_of_my_ggml_based_models_initializes_in_less/
false
false
self
1
null
Upstage AI's 30M Llama 1 Outshines 70B Llama2, Dominates #1 Spot in OpenLLM Leaderboard!
1
2023-07-19T21:57:03
https://www.reddit.com/r/MachineLearning/comments/153yfry/n_upstage_ais_30m_llama_1_outshines_70b_llama2/
jd_3d
reddit.com
1970-01-01T00:00:00
0
{}
1548989
false
null
t3_1548989
/r/LocalLLaMA/comments/1548989/upstage_ais_30m_llama_1_outshines_70b_llama2/
false
false
https://b.thumbs.redditm…_NG1NG9MtGEw.jpg
1
null
Upstage AI's 30B Llama 1 Reaches top of OpenLLM leaderboard (beating 70B Llama2)
1
2023-07-19T22:01:04
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
jd_3d
huggingface.co
1970-01-01T00:00:00
0
{}
1548cuw
false
null
t3_1548cuw
/r/LocalLLaMA/comments/1548cuw/upstage_ais_30b_llama_1_reaches_top_of_openllm/
false
false
https://a.thumbs.redditm…e3IovQf0l8F4.jpg
1
{'enabled': False, 'images': [{'id': 'EN0-abblERL52DxeoNzcxdkhvXEwLdZMJTS58Umjs64', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=108&crop=smart&auto=webp&s=6fbb309f983333cbaf528bd40f8d6ffb39877704', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=216&crop=smart&auto=webp&s=1ae10c5a53638209dee07b017628d2a1fadc8d05', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=320&crop=smart&auto=webp&s=cf36565d3bac3086aaea4458c31609ff1b2c00b3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=640&crop=smart&auto=webp&s=8e182cefcf8da97d7b4369734149986feca334e5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=960&crop=smart&auto=webp&s=7699d0ad09185e2f560115cae5cb71e907073327', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=1080&crop=smart&auto=webp&s=7b11f6f2294899964ec8ed081777f4b6e19723b6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?auto=webp&s=81db4d9e1dd01a76f499e499f78aed3478ae6658', 'width': 1200}, 'variants': {}}]}
Petals 2.0 runs Llama 2 (70B) and Guanaco-65B from Colab at 4-6 tokens/sec
1
**TL;DR:** Petals is a "BitTorrent for LLMs". Our today's release adds support for **Llama 2 (70B, 70B-Chat)** and **Guanaco-65B** in 4-bit. You can inference/fine-tune them [**right from Google Colab**](https://colab.research.google.com/drive/1uCphNY7gfAUkdDrTx21dZZwCOUDCMPw8?usp=sharing) or try our [**chatbot web app**](https://chat.petals.dev). Inference runs at **4-6 tokens/sec** (depending on the number of users). **Update:** We've fixed the domain issues with the chat app, now you can use it at [https://chat.petals.dev](https://chat.petals.dev) Hi everyone! Petals is a system for running LLMs collaboratively - you load a part of the model to your consumer-grade GPU, then team up with people serving the other parts to run inference or fine-tuning with a decent speed (much faster than offloading or running on CPU locally). It was discussed on this subreddit just a few days ago. [People build chains through servers like this and can run inference\/fine-tuning for different tasks at the same time.](https://preview.redd.it/ptmxlj4gozcb1.png?width=2600&format=png&auto=webp&s=c6514a23051b7d656a891401d705ccfd9bba3aab) Today we've released a huge [2.0.0 update](https://github.com/bigscience-workshop/petals/releases/tag/v2.0.0.post1) with several exciting features: * **🦙 Support for Llama 2.** The [public swarm](https://health.petals.dev) now hosts Llama 2 (70B, 70B-Chat) and Llama-65B out of the box, but you can also load any other model with Llama architecture. The inference speed depends on the number of users and distance to servers, reaches **6 tokens/sec** in the best case. * **🔌 Pre-loading LoRA adapters (e.g. Guanaco).** We added an option to pre-load large LoRA adapters, which will be activated on a client's request. Thanks to this, we host instruction-finetuned [Guanaco-65B](https://huggingface.co/timdettmers/guanaco-65b) on the same set of machines that host standard Llama-65B. Also, you can do your own fine-tuning since servers support backward passes (see the [Colab tutorial](https://colab.research.google.com/drive/1uCphNY7gfAUkdDrTx21dZZwCOUDCMPw8?usp=sharing)). * 🛣️ **Shortest-path routing for inference.** Now the inference client builds an actual graph with all client-server and server-server latencies and compute times, then looks for the fastest path - this makes the system pretty fast even when we have many servers spread across countries/continents. * **🗜️ 4-bit quantization.** We load models in 4-bit (NF4) using `bitsandbytes` \- so that you don't need too many GPUs and get a decent inference speed with relatively small quality loss. Here's a GitHub repo with all documentation: [https://github.com/bigscience-workshop/petals](https://github.com/bigscience-workshop/petals) Chatbot web app: [https://chat.petals.dev](https://chat.petals.dev) I hope this would be useful for people interested in running the largest variants of LLaMA! What do you think?
2023-07-19T22:12:36
https://www.reddit.com/r/LocalLLaMA/comments/1548npz/petals_20_runs_llama_2_70b_and_guanaco65b_from/
hx-zero
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1548npz
false
null
t3_1548npz
/r/LocalLLaMA/comments/1548npz/petals_20_runs_llama_2_70b_and_guanaco65b_from/
false
false
https://b.thumbs.redditm…AB2NttA9poVM.jpg
1
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&s=4b647239f77bf713f4a6209cfa4867351c055fd9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&s=7f4234ff3f4f4ebd7f77236dedb03a2faee3e04a', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&s=73eb91ea5a5347f216c0f0c4d6796396826aae49', 'width': 260}, 'variants': {}}]}
Step-by-Step Guide: Installing and Using Llama 2 Locally
1
**Master Llama 2: Local Installation Guide!**
2023-07-19T22:53:31
https://www.reddit.com/r/LocalLLaMA/comments/1549no7/stepbystep_guide_installing_and_using_llama_2/
Small_Championship_2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1549no7
false
null
t3_1549no7
/r/LocalLLaMA/comments/1549no7/stepbystep_guide_installing_and_using_llama_2/
false
false
self
1
null
How to install llama 2 locally
1
[removed]
2023-07-19T22:57:10
https://www.reddit.com/r/LocalLLaMA/comments/1549qpx/how_to_install_llama_2_locally/
Small_Championship_2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1549qpx
false
null
t3_1549qpx
/r/LocalLLaMA/comments/1549qpx/how_to_install_llama_2_locally/
false
false
self
1
{'enabled': False, 'images': [{'id': 'p-v-r7A9-uKeXHZM6TTnj21Ox8BLGnFZjVJsQkqBPYs', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/_TO7zXPvGOgi3dMg0meY3u33EGd0v8gZzvYpnLSiTGc.jpg?width=108&crop=smart&auto=webp&s=a613f61fefc74241ad6243740c7a0b28ba79ab4c', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/_TO7zXPvGOgi3dMg0meY3u33EGd0v8gZzvYpnLSiTGc.jpg?width=216&crop=smart&auto=webp&s=d5acbd7706e421adedf2ef99bfff7b2b7d688358', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/_TO7zXPvGOgi3dMg0meY3u33EGd0v8gZzvYpnLSiTGc.jpg?width=320&crop=smart&auto=webp&s=ed177bdda7a0e73da6a4896a056a84ffdb65a58a', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/_TO7zXPvGOgi3dMg0meY3u33EGd0v8gZzvYpnLSiTGc.jpg?auto=webp&s=48428758c971dd2ec38d7559755aa8ddc5e24d72', 'width': 480}, 'variants': {}}]}
Anyone know the nuances of running a model with Metal Performance Shaders?
1
Mac computer: 2.3 GHz 8-core Intel i9, 16 GB RAM. GPU: AMD Radeon Pro 5600M 8 GB, and a derpy 1536 MB Intel graphics card. The only way to find out whether a model will work is to set the device to mps. No CUDA, obviously, because there's no NVIDIA GPU. It isn't clear which bit widths are more likely to work, and it's common for a model to fail because some operation isn't implemented in MPS. Mixed results using os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1" *** Heck, I wouldn't even mind a suite of <1 billion parameter text generation models with relevant, testable parameters (i.e. bit count, other included operations) to show what works and what doesn't. Any recommendations?
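Not part of the original question, but a small sketch of the kind of probe worth running before loading anything big. It only uses standard PyTorch calls (`torch.backends.mps.is_available()` and a tiny op on the `mps` device); the fallback env var is the same one mentioned above and must be set before torch is imported.

```python
import os
# Must be set before torch is imported so missing MPS ops fall back to CPU.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch

if torch.backends.mps.is_available():
    device = torch.device("mps")
    # Tiny smoke test: an unimplemented op will either fall back to CPU
    # (thanks to the env var above) or raise NotImplementedError.
    x = torch.randn(2, 3, device=device)
    print("MPS ok:", (x @ x.T).device)
else:
    print("MPS not available; PyTorch built without MPS or macOS too old")
    device = torch.device("cpu")
```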
2023-07-19T23:11:34
https://www.reddit.com/r/LocalLLaMA/comments/154a2yx/anyone_know_the_nuances_of_running_a_model_metal/
HatLover91
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154a2yx
false
null
t3_154a2yx
/r/LocalLLaMA/comments/154a2yx/anyone_know_the_nuances_of_running_a_model_metal/
false
false
self
1
null
Video which goes over the basics of Machine Learning. In this case, it's specifically NLP (Natural Language Processing), which is the basis of all the text-based models we discuss here. TRIGGER WARNING: Apple's 2019 WWDC Video
1
2023-07-19T23:14:45
https://developer.apple.com/videos/play/wwdc2019/232/
jayfehr
developer.apple.com
1970-01-01T00:00:00
0
{}
154a5kv
false
null
t3_154a5kv
/r/LocalLLaMA/comments/154a5kv/video_which_goes_over_the_basics_of_machine/
false
false
default
1
null
Is more VRam or Ram better?
1
As the title says, I am confused about what I need more of. I know image generation requires VRAM, but what about LLMs? Is 64 GB of RAM better than 16 GB, or is it not worth the investment? Any help would be greatly appreciated!
2023-07-19T23:43:45
https://www.reddit.com/r/LocalLLaMA/comments/154atsc/is_more_vram_or_ram_better/
MrNimbuss
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154atsc
false
null
t3_154atsc
/r/LocalLLaMA/comments/154atsc/is_more_vram_or_ram_better/
false
false
self
1
null
Help troubleshooting oobabooga Mac Silicon performance
1
When I use llama.cpp I compiled from scratch, I get full Apple Metal support at around 10 tokens per second on my MacBook Air. When I use oobabooga, I see around 3 tokens per second. It is much slower, even with the threads configured the same. What common performance settings should I look at to optimize oobabooga's Python bindings for llama.cpp? How do I configure it to support Metal?
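A guess at the usual culprit rather than a confirmed fix: the webui talks to llama.cpp through the `llama-cpp-python` bindings, which often get installed as a CPU-only wheel, so Metal never gets used. A minimal sketch of driving the bindings directly to compare (the model path is a placeholder; `n_gpu_layers` is llama-cpp-python's knob for Metal/GPU offload):

```python
# Sketch: drive llama.cpp through the same Python bindings the webui uses.
# If this is also slow, the bindings were likely built without Metal and need
# reinstalling with Metal enabled (see llama-cpp-python's install docs).
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-2-13b.ggmlv3.q4_0.bin",  # placeholder path
    n_gpu_layers=1,   # any value > 0 enables Metal offload on Apple Silicon builds
    n_threads=8,      # match whatever you pass to plain llama.cpp
    n_ctx=2048,
)
out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(out["choices"][0]["text"])
```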
2023-07-19T23:44:47
https://www.reddit.com/r/LocalLLaMA/comments/154aum0/help_troubleshooting_oobabooga_mac_silicon/
crashj
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154aum0
false
null
t3_154aum0
/r/LocalLLaMA/comments/154aum0/help_troubleshooting_oobabooga_mac_silicon/
false
false
self
1
null
Easily run Llama 2 on an A100
1
We made a template to run Llama 2 on a cloud GPU. Brev provisions a GPU from AWS, GCP, and Lambda cloud (whichever is cheapest), sets up the environment and loads the model. You can connect your AWS or GCP account if you have credits you want to use.
2023-07-19T23:53:08
https://www.reddit.com/r/LocalLLaMA/comments/154b1fb/easily_run_llama_2_on_an_a100/
nlikeladder
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154b1fb
false
null
t3_154b1fb
/r/LocalLLaMA/comments/154b1fb/easily_run_llama_2_on_an_a100/
false
false
self
1
null
Model as good as gpt-4 (used to be) on code problems?
1
What is the best code-only-oriented open-source model that I can run, for example, if I set up a cloud GPU server for that express purpose and am willing to pay for it on an hourly basis? Alternatively, what is the best code-oriented model in any medium, including other paid competitors to OpenAI?
2023-07-20T00:16:35
https://www.reddit.com/r/LocalLLaMA/comments/154bkin/model_as_good_as_gpt4_used_to_be_on_code_problems/
SkyTemple77
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154bkin
false
null
t3_154bkin
/r/LocalLLaMA/comments/154bkin/model_as_good_as_gpt4_used_to_be_on_code_problems/
false
false
self
1
null
Model Hosting System Recommendations
1
I'm just someone trying to mess around with LLaMA GPTQ Models with Langchain, LlamaIndex, and Guidance, and I was wondering -- although there are shown GPU requirements, I'm still hesitant to download a large model (I have poor experience downloading large files for hobby-coding with my previous computer) and run it on my computer locally. Instead, I've been using Google Colab for everything. However, I've found Colab to be a hassle since I have to reinstall a lot of libraries and experiment each new time my runtime times out (and also be wary of overall GPU limits). So, I have a few questions 1. Would it be reasonable to just install a model locally on my MacBook Pro? Since I'm not too experienced with all of this, I don't want to accidentally download any big unnecessary files and mess up my computer. 2. Could there be issues with high storage overheads since the model would have to cache many tokens? 3. Or is there a more optimal way for setting up the Colab environment that I'm missing that doesn't require manual runtime resets, constant rerunning of the model-loading cell (which takes a while), etc.? An example notebook would help a lot!
2023-07-20T00:30:25
https://www.reddit.com/r/LocalLLaMA/comments/154bvha/model_hosting_system_recommendations/
nzha_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154bvha
false
null
t3_154bvha
/r/LocalLLaMA/comments/154bvha/model_hosting_system_recommendations/
false
false
self
1
null
Ways and types of loading LLMs?
1
Hi, I'm trying to figure out how many types/kinds of LLM models there are that can be loaded in Python. So far I found the following: 1. llama.cpp 2. ctransformers 3. .... Once I have the list, I want to figure out the exact code to load and interact with them. Is there a resource/tutorial that enumerates them?
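Not from the original post, just a hedged sketch of what loading looks like for the two libraries named above, assuming `llama-cpp-python` and `ctransformers` are installed; the model paths are placeholders.

```python
# 1. llama.cpp via the llama-cpp-python bindings (GGML model files)
from llama_cpp import Llama

llm = Llama(model_path="models/llama-2-7b.ggmlv3.q4_0.bin")  # placeholder path
print(llm("Hello, my name is", max_tokens=32)["choices"][0]["text"])

# 2. ctransformers (also loads GGML files, with a transformers-like interface)
from ctransformers import AutoModelForCausalLM

ct_llm = AutoModelForCausalLM.from_pretrained(
    "models/llama-2-7b.ggmlv3.q4_0.bin",  # placeholder path
    model_type="llama",
)
print(ct_llm("Hello, my name is", max_new_tokens=32))
```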
2023-07-20T00:50:31
https://www.reddit.com/r/LocalLLaMA/comments/154cbfl/ways_and_types_of_loading_llms_i/
Double-Lavishness-77
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154cbfl
false
null
t3_154cbfl
/r/LocalLLaMA/comments/154cbfl/ways_and_types_of_loading_llms_i/
false
false
self
1
null
Llama 2 Scaling Laws
82
The Llama 2 paper gives us good data about how models scale in performance at different model sizes and training duration. [The road to hell is paved with inappropriate extrapolation.](https://preview.redd.it/xgopjloc90db1.png?width=1166&format=png&auto=webp&s=e5dc1af41a00769e74275076c2d384a0233b1966) Small models scale better in performance with respect to training compute, up to a point that has not yet been reached in the LLM literature. The Chinchilla paper underestimated the optimal ratio of tokens seen to model parameters. This is good news for us: Since smaller models seeing more tokens is the cheapest established way for a company to train a model that reaches a given level of performance, those companies are incentivized to train models that require less compute at inference time. &#x200B; Long version: I took the Llama 2 loss curves from the paper, and traced the curves with a [this tool](http://www.graphreader.com/v2): (4) [For a given performance level \(loss\), how many tokens have each of the models seen?](https://preview.redd.it/65xr3cygszcb1.png?width=832&format=png&auto=webp&s=75496ac49f8da22f3c8beea2452f9c470151672b) Training compute cost is proportional to model\_size X tokens\_seen. We know how big the models are. The loss curves tell us how well each model performed over the course of its training. Other nerds (5) have already worked out how much compute costs on A100s. So, we can estimate the compute cost required to train each model to different levels of performance: [Training cost for each Llama 2 model at a given PPL](https://preview.redd.it/0xghtzxdvzcb1.png?width=652&format=png&auto=webp&s=da8c9e8025c60b8e846cdb47a8f92d64d39534ff) Smaller models are cheaper to train to a given level of performance! (5) [The road to hell is paved with inappropriate extrapolation.](https://preview.redd.it/xgopjloc90db1.png?width=1166&format=png&auto=webp&s=e5dc1af41a00769e74275076c2d384a0233b1966) At some point the small models will presumably saturate --take the trendlines with all due salt!-- and there are only so many not-totally-garbage tokens readily available, maybe around 8-10 trillion (3)(7), . But the takeaway here is we don't know what that point will be from presently public data, the authors of the Llama 2 paper didn't seem to either, and the trends I see point to "moar tokens pls" on medium-sized models for optimal training (6). &#x200B; &#x200B; Footnotes: 1. Technically, 20 T/P optimum is what Chinchilla paper is widely construed to have claimed. In actuality, the Chinchilla paper presented three methods for estimating this optima, and per [Susan Zhang](https://twitter.com/suchenzang)'s [careful read](https://twitter.com/suchenzang/status/1616752494608007171?s=20) of the paper, these ranged from \~1 to \~100 tokens/parameter. Even given this unhelpfully broad 'optimal range', Llama 2 loss curves provide strong evidence that the Chinchilla paper is wrong. 2. One could guild the lily here and look at A100 vs. H100 costs, or factor in the small non-linearity with training at scale, interconnect costs, w/ DeepSpeed n or no, etc. but imo this is a reasonable first approximation for looking at scaling laws. 3. The RefinedWeb (/Falcon) folks found they could get 5TT from CommonCrawl, after filtering and de-duplication. Anna's Archive is the leading shadow library, which, on the back of my napkin, looked like 3TT in books and papers (my napkin ignored the periodicals and comic books sorry), so on the order of 8TT in 'text you can just f'in download'. 
The Stack is another \~1TT of code, which is after filtering copyleft and unlicensed github code. There are more sources, but my point is we're talking at least \~8 Trillion tokens --4x what Meta used on Llama 2-- readily available to train models before doing anything super computationally intensive like transcribing podcasts and whatnot. 4. I'm omitting values for losses above 1.9 because curve tracing is imprecise where the lines in the chart overlap. 5. I took my scalar for cost from [semianalysis](https://www.semianalysis.com/p/the-ai-brick-wall-a-practical-limit), and rounded it off to the nearest dollar ($14 per billion parameters \* billion tokens seen). Putting a finer point on just how wrong 'chinchilla optimal' is: ['Chinchilla Optimal' training cost vs. achieving the same loss w\/ the next smaller model.](https://preview.redd.it/k2tn61c710db1.png?width=1054&format=png&auto=webp&s=53a5b61e5d082b7c73dfb0127f66cfa4712f6197) A couple notes: * I extrapolated out the 34B model another 100B tokens to make the cost comparison; none of this is super precise (I'm tracing curves after all) but I think it's close enough. * 13B @ 260BT vs. 7B @ 700BT is an exception that proves the rule: 13B is actually cheaper here at its 'Chinchilla Optimal' point than the next smaller model by a significant margin, BUT the 7B model catches up (becomes cheaper than 13B) again at 1.75 PPL. * Similarly, the 34B model is the cheapest model of the family to train to 1.825 - 1.725 PPL, but then the 13B overtakes it again from 1.7-1.675 PPL. 6. Incidentally, word around the AI researcher campfire is gpt-3.5-turbo model is around 20B parameters, trained on a boatload of tokens; idk if this true, but it feels more true to me in light of the Llama 2 scaling laws. 7. Or a lot less as one's threshold for garbage goes up. My view is that Phi-1 validated the data pruning hypothesis for text, and it's highly likely we'll see better smaller models come out of smaller better datasets trained on more epochs.
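A tiny sketch of the cost arithmetic the post relies on, so readers can plug in their own numbers. The $14 per (billion params × billion tokens) scalar is the post's rounded figure from semianalysis, and the token counts below are the illustrative ones from footnote 5.

```python
# Training cost ~ model_size * tokens_seen, using the post's rounded scalar.
COST_PER_B_PARAMS_B_TOKENS = 14.0  # USD per (billion params * billion tokens)

def training_cost_usd(params_b: float, tokens_b: float) -> float:
    return COST_PER_B_PARAMS_B_TOKENS * params_b * tokens_b

# Illustrative comparison: 'Chinchilla optimal' (~20 tokens/param) vs.
# over-training the next smaller model to chase the same loss.
print(f"13B @  260B tokens: ${training_cost_usd(13, 260):,.0f}")
print(f" 7B @  700B tokens: ${training_cost_usd(7, 700):,.0f}")
print(f" 7B @ 2000B tokens: ${training_cost_usd(7, 2000):,.0f}")
```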
2023-07-20T01:05:59
https://www.reddit.com/r/LocalLLaMA/comments/154cnvf/llama_2_scaling_laws/
georgejrjrjr
self.LocalLLaMA
1970-01-01T00:00:00
1
{'gid_2': 1}
154cnvf
false
null
t3_154cnvf
/r/LocalLLaMA/comments/154cnvf/llama_2_scaling_laws/
false
false
https://b.thumbs.redditm…5pkxS9bZSEHs.jpg
82
null
I got a great epitaph!
1
Playing with the 'Search Web' on https://huggingface.co/chat/ I tried a question that would need some juggling of information, but source material for which I'm pretty sure is available online: "When did [my name] first post to the [name of list] mailing list?". The response included : "...The archives of the list are publicly available, but they only go back to 2004, and [my name] passed away in 2003." Eek! (I'm fairly sure the archives will go back to the early days of the web). It then went on to flood me with glowing praise for my technical achievements. All kind-of plausible, all factually incorrect. Totally coherent but with no basis in reality (unless, woooo...). It's winding up included : "Despite his untimely passing, [my name]'s' work continues to have a lasting impact on the field of artificial intelligence...". !!!!! The LLMs do seem to get out of being clueless by hallucinating. The session I just had seemed like it thought it knew more than it actually did, so slipped into the canyon of nonsense very easily. (Kinda reminds me of being in Sri Lanka some years ago. Many times when I asked a local for directions I was pointed the wrong way. The best explanation I could come up with was that it was a cultural thing. Their aim was to please you *in that interaction*, regardless of the 'truth' or whatever happened later.) So anyway I'm now contemplating a memorial fund to pay for ChatGPT 4.
2023-07-20T01:33:56
https://www.reddit.com/r/LocalLLaMA/comments/154d95g/i_got_a_great_epitaph/
danja
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154d95g
false
null
t3_154d95g
/r/LocalLLaMA/comments/154d95g/i_got_a_great_epitaph/
false
false
self
1
{'enabled': False, 'images': [{'id': 'O4__VvuTP1zjgNXHpYgGtbNlwm8CyL1iGZRclIV-cFg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=108&crop=smart&auto=webp&s=c5c01ca386f7a26e8afeb5073e51c35d0d581de7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=216&crop=smart&auto=webp&s=0e915f82e672294c639c476433af5f1919265348', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=320&crop=smart&auto=webp&s=87643eb4a9654c3497efe7fce371db617f9ff816', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=640&crop=smart&auto=webp&s=20315fe6e900582303995761624ac0728d1703f9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=960&crop=smart&auto=webp&s=6d8bc7d3273f5290083f6668e10d5b513621bfa3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=1080&crop=smart&auto=webp&s=865cccb6b6df001aa14ef4fb2eb0f5902cb15904', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?auto=webp&s=03f4344525b6a013e0ac556cfc24b4a45d64f47e', 'width': 1200}, 'variants': {}}]}
[MLC] Running 70B Llama-2 on M2 Max at 6-10 token/sec
1
https://twitter.com/junrushao/status/1681828325923389440
2023-07-20T02:58:37
https://www.reddit.com/r/LocalLLaMA/comments/154f1xw/mlc_running_70b_llama2_on_m2_max_at_610_tokensec/
yzgysjr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154f1xw
false
null
t3_154f1xw
/r/LocalLLaMA/comments/154f1xw/mlc_running_70b_llama2_on_m2_max_at_610_tokensec/
false
false
self
1
{'enabled': False, 'images': [{'id': 'm-x-8xLyv-IFEgP8nmxRw6kiPmk41dNiNO04khY94kY', 'resolutions': [{'height': 75, 'url': 'https://external-preview.redd.it/mzY8AzetVFWnNzaCqcN91JvkFEvlF9On3S6ZI65aZ70.jpg?width=108&crop=smart&auto=webp&s=ab1750844f640ed5bea44b2c24e46368b566977c', 'width': 108}], 'source': {'height': 98, 'url': 'https://external-preview.redd.it/mzY8AzetVFWnNzaCqcN91JvkFEvlF9On3S6ZI65aZ70.jpg?auto=webp&s=463ec434438cafaa3e683c2e4a247617ddcf4031', 'width': 140}, 'variants': {}}]}
I just found Llama 2's system prompt on Hugging face (anyone else seen this?)
1
I just discovered the system prompt for the new Llama 2 model that Hugging Face is hosting for everyone to try for free: https://huggingface.co/chat Found this because I noticed this [tiny button](https://i.imgur.com/zbjRvUI.jpg) under the chat response that took me here and [there was the system prompt](https://i.imgur.com/PeUopFQ.jpg)! Here it is: > Below are a series of dialogues between various people and an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer. That said, the assistant is practical and really does its best, and doesn't let caution get too much in the way of being useful.
2023-07-20T03:37:26
https://www.reddit.com/r/LocalLLaMA/comments/154fuch/i_just_found_llama_2s_system_prompt_on_hugging/
IversusAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154fuch
false
null
t3_154fuch
/r/LocalLLaMA/comments/154fuch/i_just_found_llama_2s_system_prompt_on_hugging/
false
false
self
1
{'enabled': False, 'images': [{'id': 'ZeAdSxxH5bjoWZmdqa44cBYQ_PLEmgQiG654Y1HM8C0', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/SakdgBUdllle0-3Xgh2DTbwABdjqL9Ck11tjE6eByzo.jpg?width=108&crop=smart&auto=webp&s=741d71e53496ee4b9ce2f9616c065ba4d80c06c0', 'width': 108}, {'height': 111, 'url': 'https://external-preview.redd.it/SakdgBUdllle0-3Xgh2DTbwABdjqL9Ck11tjE6eByzo.jpg?width=216&crop=smart&auto=webp&s=9862d974aed56e3b38d573721fb7936ef51779a2', 'width': 216}, {'height': 164, 'url': 'https://external-preview.redd.it/SakdgBUdllle0-3Xgh2DTbwABdjqL9Ck11tjE6eByzo.jpg?width=320&crop=smart&auto=webp&s=92096a1851f1aac62a4e09cf1f66ddf776956406', 'width': 320}], 'source': {'height': 217, 'url': 'https://external-preview.redd.it/SakdgBUdllle0-3Xgh2DTbwABdjqL9Ck11tjE6eByzo.jpg?auto=webp&s=66e0ce92fc64c3d7aa333aa6bbdbada52dfcdf12', 'width': 421}, 'variants': {}}]}
Good local model for summarization
1
Can someone recommend a good uncensored model for summarizing text? My specs: laptop with 32 GB RAM, a 3070 8 GB, and an AMD 5800. I want to summarize novels. Usually I would prefer Claude 100k, but it refuses to work at even the mention of certain topics (like gladiator novels - it goes off saying it's not comfortable with slavery).
2023-07-20T04:03:31
https://www.reddit.com/r/LocalLLaMA/comments/154gcq6/good_local_model_for_summarization/
hihajab
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154gcq6
false
null
t3_154gcq6
/r/LocalLLaMA/comments/154gcq6/good_local_model_for_summarization/
false
false
self
1
null
A Llama 2 fine-tune called Puffin! Possibly the first Llama 2 based model fine-tuned on GPT-4 conversations.
1
This is one of the world's first Llama-2-based fine-tuned language models, afaik. Training went for multiple epochs on a dataset of 3,000 carefully curated GPT-4 examples, most of which are long-context conversations between a real human and GPT-4. Additional data came from carefully curated examples from STEM-related datasets such as CamelAI's Physics, Chemistry, Biology and Math. This includes advanced questions and responses in subjects such as particle physics, quantum mechanics, neurobiology, differential geometry, calculus, logic, optimization problems and more. In the near future we plan on leveraging the help of domain-specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from such curations. If you have at least a bachelor's in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please check the model card details for who to reach out to! How to use: The method I most recommend is using LM Studio for a code-free, seamless way of inferencing reliably! Link to that here: https://lmstudio.ai/ - After LM Studio installs, just go to the search, type puffin, click the first result and then just click download on the right. - Once downloaded you can open the chat tab and select the model in the top model dropdown, now just type and enter. Notable Features: - The first Llama-2-based fine-tuned model released by Nous Research (same org that released Hermes) - Ability to recall information up to 2023 without internet (ChatGPT's cut-off date is in 2021) - Pretrained on 2 trillion tokens of text. (This is double the amount of most open LLMs) - Pretrained with a context length of 4096 tokens, and fine-tuned on a significant amount of multi-turn conversations reaching that full token limit. - The first commercially available language model released by Nous Research. Please give feedback on what can be improved and ask any questions!
2023-07-20T04:45:33
https://huggingface.co/NousResearch/Redmond-Puffin-13B-V1.3
dogesator
huggingface.co
1970-01-01T00:00:00
0
{}
154h5fu
false
null
t3_154h5fu
/r/LocalLLaMA/comments/154h5fu/a_llama_2_finetune_called_puffin_possibly_the/
false
false
https://b.thumbs.redditm…ZmJC31hOHRgs.jpg
1
{'enabled': False, 'images': [{'id': 'rNBjlmDOA7Vo6WpLGhUeJqcrPt4c4WG6bcYXKWB7FNE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YXc5nPaKvvsS1FQ4MKjxRbjEX55bNmfPpJOXZ9Od6zk.jpg?width=108&crop=smart&auto=webp&s=ca4a116c8613a348da506c5d6ee848a9b1d50eb8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/YXc5nPaKvvsS1FQ4MKjxRbjEX55bNmfPpJOXZ9Od6zk.jpg?width=216&crop=smart&auto=webp&s=069135bb6a1e21a973f989f665a3ea39210a7592', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/YXc5nPaKvvsS1FQ4MKjxRbjEX55bNmfPpJOXZ9Od6zk.jpg?width=320&crop=smart&auto=webp&s=522a9d6c2bd26a6bc480163489823c2ef7a9740b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/YXc5nPaKvvsS1FQ4MKjxRbjEX55bNmfPpJOXZ9Od6zk.jpg?width=640&crop=smart&auto=webp&s=07cc8d0758ccf18c2edb13b4071e910663214a88', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/YXc5nPaKvvsS1FQ4MKjxRbjEX55bNmfPpJOXZ9Od6zk.jpg?width=960&crop=smart&auto=webp&s=c2bf77f4a0a740c841e39290dd808ae20d6d0215', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/YXc5nPaKvvsS1FQ4MKjxRbjEX55bNmfPpJOXZ9Od6zk.jpg?width=1080&crop=smart&auto=webp&s=ed09b982359f3595aafcc1adc4cfe338c6fb42bd', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/YXc5nPaKvvsS1FQ4MKjxRbjEX55bNmfPpJOXZ9Od6zk.jpg?auto=webp&s=61aed5c36295e14bcef556a3557c266538195146', 'width': 1200}, 'variants': {}}]}
MiniGPT4.cpp
1
2023-07-20T04:57:51
https://github.com/Maknee/minigpt4.cpp
makneeee
github.com
1970-01-01T00:00:00
0
{}
154hdbm
false
null
t3_154hdbm
/r/LocalLLaMA/comments/154hdbm/minigpt4cpp/
false
false
https://b.thumbs.redditm…DXNY7Zm1Lq8s.jpg
1
{'enabled': False, 'images': [{'id': 'XzjEU3RFE9LSRhgIfYHQdns1CsX7uiq4iGcUSutBZdA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yXyPzVlFk6LOZ5fvdbYjZ5kUhrCzPtDdHd_SHmVi3n0.jpg?width=108&crop=smart&auto=webp&s=f8acd11265ffc21ea91c9b738c942599df19b74a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yXyPzVlFk6LOZ5fvdbYjZ5kUhrCzPtDdHd_SHmVi3n0.jpg?width=216&crop=smart&auto=webp&s=61dcc9aea82a33be5f979a0a1ce7aefa210fce68', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yXyPzVlFk6LOZ5fvdbYjZ5kUhrCzPtDdHd_SHmVi3n0.jpg?width=320&crop=smart&auto=webp&s=2af7df8ba02035e54efb2e837c05bc8fe5558fe1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yXyPzVlFk6LOZ5fvdbYjZ5kUhrCzPtDdHd_SHmVi3n0.jpg?width=640&crop=smart&auto=webp&s=98b4afe644ae9f249016306d6ff7153fa7ae4234', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yXyPzVlFk6LOZ5fvdbYjZ5kUhrCzPtDdHd_SHmVi3n0.jpg?width=960&crop=smart&auto=webp&s=8f42949ce4a921241767c5ac0861bcba1e4ae936', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yXyPzVlFk6LOZ5fvdbYjZ5kUhrCzPtDdHd_SHmVi3n0.jpg?width=1080&crop=smart&auto=webp&s=01f5d062db26a3a5869d684f3b0025f7ce1b30f0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yXyPzVlFk6LOZ5fvdbYjZ5kUhrCzPtDdHd_SHmVi3n0.jpg?auto=webp&s=d8566ca9b8719ed0ae506f74a46df3da12660339', 'width': 1200}, 'variants': {}}]}
Mixture of LoRA
1
2023-07-20T05:27:30
https://twitter.com/aicrumb/status/1681846805959528448
Spare_Side_5907
twitter.com
1970-01-01T00:00:00
0
{}
154hwpu
false
{'oembed': {'author_name': 'RuntimeError: CUDA out of memory. Tried to allo', 'author_url': 'https://twitter.com/aicrumb', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">I have the first proof of concept MoLora (Mixture of Experts / LoRA) done and working! Here&#39;s a colab notebook to inference it (keep in mind, it&#39;s not fully trained, but it is working!) Details below..<a href="https://t.co/CMthoRm9h0">https://t.co/CMthoRm9h0</a> <a href="https://t.co/3dumc9LOon">https://t.co/3dumc9LOon</a> <a href="https://t.co/OmeSQDA91W">pic.twitter.com/OmeSQDA91W</a></p>&mdash; RuntimeError: CUDA out of memory. Tried to allo (@aicrumb) <a href="https://twitter.com/aicrumb/status/1681846805959528448?ref_src=twsrc%5Etfw">July 20, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/aicrumb/status/1681846805959528448', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_154hwpu
/r/LocalLLaMA/comments/154hwpu/mixture_of_lora/
false
false
https://a.thumbs.redditm…5H47wg-fClp8.jpg
1
{'enabled': False, 'images': [{'id': '1vV6qZmSWBgKwW86b2JnzcGmUbnJNBzb01mBvPEyxkQ', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/_8ga7ITWrQlfWwIlyRn3AFy3l2P1qax_Xi4j3a_X5FE.jpg?width=108&crop=smart&auto=webp&s=5b184053af1e4625f7c2a5c16e64244eeb94b5cd', 'width': 108}], 'source': {'height': 85, 'url': 'https://external-preview.redd.it/_8ga7ITWrQlfWwIlyRn3AFy3l2P1qax_Xi4j3a_X5FE.jpg?auto=webp&s=581c0a6178b849d2f4633c49d3ca518f6969c279', 'width': 140}, 'variants': {}}]}
Is Llama 2 Worth Using Before Fine-tuning?
1
[removed]
2023-07-20T06:11:09
https://www.reddit.com/r/LocalLLaMA/comments/154inx9/is_llama_2_worth_using_before_finetuning/
Fantastic-Air8513
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154inx9
false
null
t3_154inx9
/r/LocalLLaMA/comments/154inx9/is_llama_2_worth_using_before_finetuning/
false
false
self
1
null
[Project] Prompt-Promptor: An Autonomous Agent for Prompt Engineering
1
Hi, folks. I am happy to debut my open-source project Prompt-Promptor. Prompt-Promptor (ppromptor for short) is a Python library with a web UI designed to automatically generate and improve prompts for LLMs. It was inspired by autonomous agents like AutoGPT and consists of three agents: Proposer, Evaluator, and Analyzer. These agents work together with human experts to continuously improve the generated prompts. One of the goals of this project is to simplify prompt tuning for open-source models. Please give it a try/look. Any feedback is welcome! GitHub: [https://github.com/pikho/ppromptor](https://github.com/pikho/ppromptor)
2023-07-20T06:20:48
https://www.reddit.com/r/LocalLLaMA/comments/154itph/project_promptpromptor_an_autonomous_agent_for/
pikhotan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154itph
false
null
t3_154itph
/r/LocalLLaMA/comments/154itph/project_promptpromptor_an_autonomous_agent_for/
false
false
self
1
{'enabled': False, 'images': [{'id': 'f44BHl2nQJnf6bmjzLkxPVABak256fYAeYIq3yJ_EdM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/h-EjDwvzDhHhX2KZHBb6hZH3lac65wrRg6nISbvIHUg.jpg?width=108&crop=smart&auto=webp&s=39d3e384f5a6fac942a3929bc75e9361383456c8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/h-EjDwvzDhHhX2KZHBb6hZH3lac65wrRg6nISbvIHUg.jpg?width=216&crop=smart&auto=webp&s=7f313e5a59c3b13e1f571a6b849ff714850f4f2f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/h-EjDwvzDhHhX2KZHBb6hZH3lac65wrRg6nISbvIHUg.jpg?width=320&crop=smart&auto=webp&s=084c538312d90a1f06c32e6739e4b1d642872483', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/h-EjDwvzDhHhX2KZHBb6hZH3lac65wrRg6nISbvIHUg.jpg?width=640&crop=smart&auto=webp&s=3c24609714864be1fe416146025d4e9a5af2ac97', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/h-EjDwvzDhHhX2KZHBb6hZH3lac65wrRg6nISbvIHUg.jpg?width=960&crop=smart&auto=webp&s=86e9db6441385dff49436ccbca554eae780b4d3c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/h-EjDwvzDhHhX2KZHBb6hZH3lac65wrRg6nISbvIHUg.jpg?width=1080&crop=smart&auto=webp&s=9970608318326e88876568addcac20fab303af6a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/h-EjDwvzDhHhX2KZHBb6hZH3lac65wrRg6nISbvIHUg.jpg?auto=webp&s=4b60b9f65ebce616fdd747ae72fe9fe6f9f5aecc', 'width': 1200}, 'variants': {}}]}
How to install the new Llama 2?
1
I am a complete newbie. I got the download link, but the instructions are so unclear. I hoped it would just be a click-and-download program, but they are giving me these command lines and everything is so confusing.
2023-07-20T06:51:17
https://www.reddit.com/r/LocalLLaMA/comments/154jcio/how_to_install_the_new_llamma_2/
190cm_Lithuanian
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154jcio
false
null
t3_154jcio
/r/LocalLLaMA/comments/154jcio/how_to_install_the_new_llamma_2/
false
false
self
1
null
experiments on llama2
1
I tried Llama 2 on Hugging Face chat, which uses the 70B model, but the results were disappointing. I tried two cases: answering normal questions in Arabic, where it fails badly, and asking the model to generate results about a lesson in English and in JSON format, where it failed to even understand the objective. Has anyone tried things out and gotten better results?
2023-07-20T07:20:36
https://www.reddit.com/r/LocalLLaMA/comments/154juvf/experiments_on_llama2/
Difficult_Head_5441
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154juvf
false
null
t3_154juvf
/r/LocalLLaMA/comments/154juvf/experiments_on_llama2/
false
false
self
1
null
What would cause a single word to be repeated over and over and over again?
1
[removed]
2023-07-20T07:30:17
https://www.reddit.com/r/LocalLLaMA/comments/154k0kd/what_would_cause_a_single_word_to_be_repeated/
Parogarr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154k0kd
false
null
t3_154k0kd
/r/LocalLLaMA/comments/154k0kd/what_would_cause_a_single_word_to_be_repeated/
false
false
self
1
null
Meta teams up with Microsoft to unveil its latest AI model Llama 2
1
2023-07-20T07:45:13
https://www.ibtimes.co.uk/meta-teams-microsoft-unveil-its-latest-ai-model-llama-2-1717819
vinaylovestotravel
ibtimes.co.uk
1970-01-01T00:00:00
0
{}
154k9f0
false
null
t3_154k9f0
/r/LocalLLaMA/comments/154k9f0/meta_teams_up_with_microsoft_to_unveil_its_latest/
false
false
https://b.thumbs.redditm…Y1lqVfyRDNCQ.jpg
1
{'enabled': False, 'images': [{'id': 'QgFrYIlj5mAJRPy6HbNsDEIvQUv5XUUTrH2atcJsOnE', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/pabMlpkDgPw2THXbOY6a62aPFw9kuafUMw6_C2pzIw8.jpg?width=108&crop=smart&auto=webp&s=f8ca9c38846f3fcb02339469010427d207719568', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/pabMlpkDgPw2THXbOY6a62aPFw9kuafUMw6_C2pzIw8.jpg?width=216&crop=smart&auto=webp&s=dd877df94492f60b41b75accf66706cc8268398b', 'width': 216}, {'height': 201, 'url': 'https://external-preview.redd.it/pabMlpkDgPw2THXbOY6a62aPFw9kuafUMw6_C2pzIw8.jpg?width=320&crop=smart&auto=webp&s=fa68d1face5b2c5af723ce2c8b739ac7e7dc2c0c', 'width': 320}, {'height': 402, 'url': 'https://external-preview.redd.it/pabMlpkDgPw2THXbOY6a62aPFw9kuafUMw6_C2pzIw8.jpg?width=640&crop=smart&auto=webp&s=df746ffbff1122194a68324329f02834cd68e83c', 'width': 640}, {'height': 603, 'url': 'https://external-preview.redd.it/pabMlpkDgPw2THXbOY6a62aPFw9kuafUMw6_C2pzIw8.jpg?width=960&crop=smart&auto=webp&s=88b6e8afc2e39bb6c6de8e92d93761b537f31698', 'width': 960}, {'height': 678, 'url': 'https://external-preview.redd.it/pabMlpkDgPw2THXbOY6a62aPFw9kuafUMw6_C2pzIw8.jpg?width=1080&crop=smart&auto=webp&s=538a9c5c152ca03de90e229e70a1c8dc9f02c789', 'width': 1080}], 'source': {'height': 754, 'url': 'https://external-preview.redd.it/pabMlpkDgPw2THXbOY6a62aPFw9kuafUMw6_C2pzIw8.jpg?auto=webp&s=49da70e6f014d1f631583f80289672c44aef4831', 'width': 1200}, 'variants': {}}]}
I like we finally have a glance of real AI and the first thing we do is to teach it political correctness, and censorship
1
Title
2023-07-20T08:05:44
https://www.reddit.com/r/LocalLLaMA/comments/154kmm0/i_like_we_finally_have_a_glance_of_real_ai_and/
skillerpsychobunny
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154kmm0
false
null
t3_154kmm0
/r/LocalLLaMA/comments/154kmm0/i_like_we_finally_have_a_glance_of_real_ai_and/
false
false
self
1
null
LLM for Engineering and Physics chat?
1
I am currently doing some personal engineering and electronics projects, including a deep-diving ROV. I wanted to use a local LLM to chat with to help me figure out stuff, but have found all the ones I tried are terrible at physics, getting even the most basic stuff wrong. For example, I tried asking what the pressure at a certain depth would be, and not a single one has managed to figure out a decent answer; many are totally wrong, suggesting the pressure decreases as we get deeper! So my question is, are there any LLMs that have been trained on physics and engineering specifically? I checked through HuggingFace but didn't find anything specific.
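For reference (not in the original post), the calculation the models keep fumbling is just hydrostatic pressure; a quick sketch with rough seawater constants rather than exact oceanographic values.

```python
# Hydrostatic pressure at depth: P = P_atm + rho * g * h (pressure increases with depth).
RHO_SEAWATER = 1025.0   # kg/m^3, approximate
G = 9.81                # m/s^2
P_ATM = 101_325.0       # Pa at the surface

def pressure_at_depth_pa(depth_m: float) -> float:
    return P_ATM + RHO_SEAWATER * G * depth_m

for depth in (10, 100, 300):
    print(f"{depth:>4} m: {pressure_at_depth_pa(depth) / 1e5:.1f} bar")
# roughly 2.0 bar at 10 m, 11.1 bar at 100 m, 31.2 bar at 300 m
```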
2023-07-20T08:12:39
https://www.reddit.com/r/LocalLLaMA/comments/154kqus/llm_for_engineering_and_physics_chat/
BohemianCyberpunk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154kqus
false
null
t3_154kqus
/r/LocalLLaMA/comments/154kqus/llm_for_engineering_and_physics_chat/
false
false
self
1
null
What is the best way for finetuning llama 2?
1
For example, I have a text summarization dataset and I want to fine-tune a Llama 2 model with this dataset. I am wondering what the best way is for fine-tuning. 1. Llama-2 base or Llama-2-chat? It seems that Llama-2-chat has better performance, but I am not sure if it is more suitable for instruction fine-tuning than the base model. 2. If Llama-2-chat is better, do I need to delete the DEFAULT_SYSTEM_PROMPT? I am afraid that after deleting this, the performance will drop a lot.
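For context on question 2 (added here, not in the original post): the chat variants were trained with a specific prompt template, so rather than deleting the default system prompt outright you can usually swap in your own task-specific one. A sketch of the Llama-2-chat template, assuming the commonly documented [INST]/<<SYS>> markers:

```python
# Llama-2-chat style prompt: system prompt wrapped in <<SYS>>, user turn in [INST].
def build_chat_prompt(system_prompt: str, user_message: str) -> str:
    return (
        f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

prompt = build_chat_prompt(
    "You are a helpful assistant that writes concise summaries.",
    "Summarize the following article:\n...",
)
print(prompt)
```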
2023-07-20T08:16:41
https://www.reddit.com/r/LocalLLaMA/comments/154ktdm/what_is_the_best_way_for_finetuning_llama_2/
Financial_Stranger52
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154ktdm
false
null
t3_154ktdm
/r/LocalLLaMA/comments/154ktdm/what_is_the_best_way_for_finetuning_llama_2/
false
false
self
1
null
Puffin! Might be the first Llama 2 derived model fine-tuned on GPT-4 conversations.
1
This is one of the world's first Llama-2-based fine-tuned language models, afaik. Training went for multiple epochs on a dataset of 3,000 carefully curated GPT-4 examples, most of which are long-context conversations between a real human and GPT-4. Additional data came from carefully curated examples from STEM-related datasets such as CamelAI's Physics, Chemistry, Biology and Math. This includes advanced questions and responses in subjects such as particle physics, quantum mechanics, neurobiology, differential geometry, calculus, logic, optimization problems and more. In the near future we plan on leveraging the help of domain-specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from such curations. If you have at least a bachelor's in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please check the model card details for who to reach out to! How to use: The method I most recommend is using LM Studio for a code-free, seamless way of inferencing reliably! Link to that here: https://lmstudio.ai/ - After LM Studio installs, just go to the search, type puffin, click the first result and then just click download on the right. - Once downloaded you can open the chat tab and select the model in the top model dropdown, now just type and enter. Notable Features: - The first Llama-2-based fine-tuned model released by Nous Research (same org that released Hermes) - Ability to recall information up to 2023 without internet (ChatGPT's cut-off date is in 2021) - Pretrained on 2 trillion tokens of text. (This is double the amount of most open LLMs) - Pretrained with a context length of 4096 tokens, and fine-tuned on a significant amount of multi-turn conversations reaching that full token limit. - The first commercially available language model released by Nous Research. Please give feedback on what can be improved and ask any questions!
2023-07-20T08:32:35
https://huggingface.co/NousResearch/Redmond-Puffin-13B
dogesator
huggingface.co
1970-01-01T00:00:00
0
{}
154l2wt
false
null
t3_154l2wt
/r/LocalLLaMA/comments/154l2wt/puffin_might_be_the_first_llama_2_derived_model/
false
false
https://b.thumbs.redditm…WQaQRiqskuhA.jpg
1
{'enabled': False, 'images': [{'id': '1IrQkLG1sye0uaDQ867TB6BdNrVVIW0EdV6HjPByqTo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/89Wvm1yQQ7uWCTKVBYREXVA8FbhdxWzc2U-i-TLXdjQ.jpg?width=108&crop=smart&auto=webp&s=68d33150ff3e585a90055f967049a4c60288e0d2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/89Wvm1yQQ7uWCTKVBYREXVA8FbhdxWzc2U-i-TLXdjQ.jpg?width=216&crop=smart&auto=webp&s=a2be19b078439167112d8258dfc123f173664e05', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/89Wvm1yQQ7uWCTKVBYREXVA8FbhdxWzc2U-i-TLXdjQ.jpg?width=320&crop=smart&auto=webp&s=c873c21de35ea6378d7f3c6bc08bf2cc212ea278', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/89Wvm1yQQ7uWCTKVBYREXVA8FbhdxWzc2U-i-TLXdjQ.jpg?width=640&crop=smart&auto=webp&s=6aff8a79a3d5bcad15af65bc634dd962ce4e88a8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/89Wvm1yQQ7uWCTKVBYREXVA8FbhdxWzc2U-i-TLXdjQ.jpg?width=960&crop=smart&auto=webp&s=fd2a3f2fa2276743a706fe2b96321aab6fbe9eee', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/89Wvm1yQQ7uWCTKVBYREXVA8FbhdxWzc2U-i-TLXdjQ.jpg?width=1080&crop=smart&auto=webp&s=04d2084799981ea935654e4f92550b2aa087e7ae', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/89Wvm1yQQ7uWCTKVBYREXVA8FbhdxWzc2U-i-TLXdjQ.jpg?auto=webp&s=a1d724e10a21e03c1b4054bccf86b5562af2229f', 'width': 1200}, 'variants': {}}]}
Trying to install llama 2. I got the authorization from meta, but it says I don't. Using oogabooga webui
1
2023-07-20T08:40:37
https://i.redd.it/aufplid013db1.png
190cm_Lithuanian
i.redd.it
1970-01-01T00:00:00
0
{}
154l7pt
false
null
t3_154l7pt
/r/LocalLLaMA/comments/154l7pt/trying_to_install_llama_2_i_got_the_authorization/
false
false
https://a.thumbs.redditm…uWDxxm_0xEi0.jpg
1
{'enabled': True, 'images': [{'id': 'oizY7nsB48YsMd3PMoFuW8OPnA14UwfqOxWMjHWOyMM', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/aufplid013db1.png?width=108&crop=smart&auto=webp&s=e7b137385ab50a6bae5725caa441bdc31113d923', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/aufplid013db1.png?width=216&crop=smart&auto=webp&s=3621255c7271b7c90160a7c8439784166a494c30', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/aufplid013db1.png?width=320&crop=smart&auto=webp&s=c41c31e503d9c2cab2081b8a99e28ab9863057d1', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/aufplid013db1.png?width=640&crop=smart&auto=webp&s=20978b2165f97aef1dc9cf13c050b44322b6e96a', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/aufplid013db1.png?width=960&crop=smart&auto=webp&s=62f036f81f8ed6a8f380dd659d7764785109fddf', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/aufplid013db1.png?width=1080&crop=smart&auto=webp&s=75012d3db63f6ee3921757cc42094130b8fabda9', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/aufplid013db1.png?auto=webp&s=32ccc728cfba45a1cf25826698fc4baaab700f55', 'width': 1920}, 'variants': {}}]}
RuGPT 3.5 13B - a new model for Russian language with MIT license.
1
A [new release](https://huggingface.co/ai-forever/ruGPT-3.5-13B) of model tuned for Russian language. The samples from the developer look very good. Russian language features a lot of grammar rules influenced by the meaning of the words, which had been a pain ever since I tried making games with TADS 2. This model gets them right. Another big problem is that Russian uses non-ASCII letters, which means that most of the time 1 letter = 1 token. u/The-Bloke could you please add this to your queue to make GGML? It is a rather unique model. There are very few open models for Russian that don't fail at grammar miserably. Would be cool if people can run them on their potatoes. Maybe then our kids would stop failing at grammar miserably :).
2023-07-20T08:48:05
https://www.reddit.com/r/LocalLLaMA/comments/154lcbg/rugpt_35_13b_a_new_model_for_russian_language/
Barafu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154lcbg
false
null
t3_154lcbg
/r/LocalLLaMA/comments/154lcbg/rugpt_35_13b_a_new_model_for_russian_language/
false
false
self
1
{'enabled': False, 'images': [{'id': 'M_2rrfNA3cOEkLLsHvkoQHR3OwIGFEOxQzY1gjISmPY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jIAUzEwJfiXhU-B5w06oB22daknFHaathToDpWPbi8g.jpg?width=108&crop=smart&auto=webp&s=b40d585f2203e9cb1af9b06f50e004b6dd960517', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/jIAUzEwJfiXhU-B5w06oB22daknFHaathToDpWPbi8g.jpg?width=216&crop=smart&auto=webp&s=30343d0e2d258095e98dfd0831cf4e793f8c019c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/jIAUzEwJfiXhU-B5w06oB22daknFHaathToDpWPbi8g.jpg?width=320&crop=smart&auto=webp&s=fbd6a11d572bc45f0f77f1e1d88c8e8bcdd641b0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/jIAUzEwJfiXhU-B5w06oB22daknFHaathToDpWPbi8g.jpg?width=640&crop=smart&auto=webp&s=7ca709e5d2c574476bf68b6fd685bbb719cc430e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/jIAUzEwJfiXhU-B5w06oB22daknFHaathToDpWPbi8g.jpg?width=960&crop=smart&auto=webp&s=11065516d809fe3c9b7c95f6401bbeb624d0abe7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/jIAUzEwJfiXhU-B5w06oB22daknFHaathToDpWPbi8g.jpg?width=1080&crop=smart&auto=webp&s=21776a4a4abfe607f19e7358e25492c188fe45e7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/jIAUzEwJfiXhU-B5w06oB22daknFHaathToDpWPbi8g.jpg?auto=webp&s=10bf8c64e0e6d323841425f0d455ae4e200405ba', 'width': 1200}, 'variants': {}}]}
Target Modules for Llama-2-7B
1
What target_modules are used in PEFT config for llama models?
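Not an authoritative answer, but the commonly used setup: LLaMA's attention projections are named q_proj/k_proj/v_proj/o_proj (and the MLP uses gate_proj/up_proj/down_proj), so a typical PEFT config targets some subset of those. A minimal sketch with the peft library; the rank and dropout values are just illustrative defaults.

```python
from peft import LoraConfig, TaskType

# Typical LoRA config for LLaMA-family models; many recipes use just
# ["q_proj", "v_proj"], others add k_proj/o_proj or the MLP projections too.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
# Then: model = get_peft_model(base_model, lora_config)
```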
2023-07-20T09:09:06
https://www.reddit.com/r/LocalLLaMA/comments/154lpxs/target_modules_for_llama27b/
Sufficient_Run1518
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154lpxs
false
null
t3_154lpxs
/r/LocalLLaMA/comments/154lpxs/target_modules_for_llama27b/
false
false
self
1
null
How are you people running Llama 2 70B in text-generation-webui?
1
I have been trying again and again for more than 10 hours. I am using the latest pull of text-generation-webui on RunPod (exllama bumped to 0.0.7). I downloaded Llama-2-70B-GPTQ and used ExLlama. When I click generate, nothing happens. So I checked the logs and found this stack trace: RuntimeError: shape '[1, 5, 64, 128]' is invalid for input of size 5120. I followed all settings such as "inject_fused_attention". Used AutoGPTQ and ExLlama, tried different models including fp16, [Panchovix/LLaMA-2-70B-GPTQ-transformers4.32.0.dev0](https://huggingface.co/Panchovix/LLaMA-2-70B-GPTQ-transformers4.32.0.dev0), etc., but all 70B models throw the same errors. --- But I am seeing people running Llama 2 70B? How are you able to run 70B? Please help me.
2023-07-20T09:11:06
https://www.reddit.com/r/LocalLLaMA/comments/154lr9j/how_you_people_running_llama_2_70b_in/
RageshAntony
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154lr9j
false
null
t3_154lr9j
/r/LocalLLaMA/comments/154lr9j/how_you_people_running_llama_2_70b_in/
false
false
self
1
{'enabled': False, 'images': [{'id': '9SChMy77DY8uX1j6uPNCVG94VDqvd5nSNweBCfsX0Ow', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jaZFnbNRzqcWGq_z3P5wTpznBrUnwJzraM_WDsGQx1o.jpg?width=108&crop=smart&auto=webp&s=382be2aac44ea31e81dd4929b1bb417651250989', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/jaZFnbNRzqcWGq_z3P5wTpznBrUnwJzraM_WDsGQx1o.jpg?width=216&crop=smart&auto=webp&s=5686bdc316c728b06acc8daf2907f58ed7300a14', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/jaZFnbNRzqcWGq_z3P5wTpznBrUnwJzraM_WDsGQx1o.jpg?width=320&crop=smart&auto=webp&s=4dacb3c03b650948814a17ffd4c6f8ffb8c744f1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/jaZFnbNRzqcWGq_z3P5wTpznBrUnwJzraM_WDsGQx1o.jpg?width=640&crop=smart&auto=webp&s=dd9166c0540dbf9915c4f0ed375ffbc1fb6cbdd7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/jaZFnbNRzqcWGq_z3P5wTpznBrUnwJzraM_WDsGQx1o.jpg?width=960&crop=smart&auto=webp&s=48c486a9ae3d5fbe50a02f0d529718ddae647ddc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/jaZFnbNRzqcWGq_z3P5wTpznBrUnwJzraM_WDsGQx1o.jpg?width=1080&crop=smart&auto=webp&s=1b122be311017726a2f8289aa871734b05ff306c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/jaZFnbNRzqcWGq_z3P5wTpznBrUnwJzraM_WDsGQx1o.jpg?auto=webp&s=8e55c78aa7a330b649f0ec12e14fec476fd79d3f', 'width': 1200}, 'variants': {}}]}
LLaMA debates ethics of synthetic meat
1
2023-07-20T09:14:50
https://www.reddit.com/gallery/154ltky
gijeri4793
reddit.com
1970-01-01T00:00:00
0
{}
154ltky
false
null
t3_154ltky
/r/LocalLLaMA/comments/154ltky/llama_debates_ethics_of_synthetic_meat/
false
false
https://b.thumbs.redditm…4V87qfROhuVI.jpg
1
null
Llama2 70B GPTQ full context on 2 3090s
1
Settings used are: split 14,20; max_seq_len 16384; alpha_value 4. It loads entirely! Remember to pull the latest ExLlama version for compatibility :D
2023-07-20T10:52:47
https://www.reddit.com/r/LocalLLaMA/comments/154nmj9/llama2_70b_gptq_full_context_on_2_3090s/
ElBigoteDeMacri
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154nmj9
false
null
t3_154nmj9
/r/LocalLLaMA/comments/154nmj9/llama2_70b_gptq_full_context_on_2_3090s/
false
false
self
1
{'enabled': False, 'images': [{'id': 'awWwBtkl3zbFCjVk9WKRpAn8SczIR_pvUshM77kEhy0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FP7mDimV_FA6AelpG7p_ROrx7ZujOBISzt51PnTsRc8.jpg?width=108&crop=smart&auto=webp&s=bfbdb977c50938bcdcd4346bcb84b546aceab358', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/FP7mDimV_FA6AelpG7p_ROrx7ZujOBISzt51PnTsRc8.jpg?width=216&crop=smart&auto=webp&s=b2f5aeb4d71e78ceedd6396b40d7ab7967f11efb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/FP7mDimV_FA6AelpG7p_ROrx7ZujOBISzt51PnTsRc8.jpg?width=320&crop=smart&auto=webp&s=69a5601f77df8cf4252ed364131c3204b958193c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/FP7mDimV_FA6AelpG7p_ROrx7ZujOBISzt51PnTsRc8.jpg?width=640&crop=smart&auto=webp&s=b397eba5fec4c2d4c84919be9e62f5371e359e22', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/FP7mDimV_FA6AelpG7p_ROrx7ZujOBISzt51PnTsRc8.jpg?width=960&crop=smart&auto=webp&s=a029524c814365d33705ebf66280cd2673a095c6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/FP7mDimV_FA6AelpG7p_ROrx7ZujOBISzt51PnTsRc8.jpg?width=1080&crop=smart&auto=webp&s=e598edd7c3523a69de59028413bf0e7fddfee1a4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/FP7mDimV_FA6AelpG7p_ROrx7ZujOBISzt51PnTsRc8.jpg?auto=webp&s=6ceef43adf24ff7baac32b4741ba3cf90f826816', 'width': 1200}, 'variants': {}}]}
Puffin V1.3, a Llama 2 model fine-tuned on curated GPT-4 conversations.
1
https://huggingface.co/NousResearch/Redmond-Puffin-13B Trained for multiple epochs on a dataset of 3,000 carefully curated GPT-4 examples, most of which are long-context conversations between a real human and GPT-4. Additional data came from carefully curated examples from STEM-related datasets such as CamelAI's Physics, Chemistry, Biology and Math. This includes advanced questions and responses in subjects such as particle physics, quantum mechanics, neurobiology, differential geometry, calculus, logic, optimization problems and more. In the near future we plan on leveraging the help of domain-specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from such curations. If you have at least a bachelor's in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please check the model card details for who to reach out to! How to use: The method I most recommend is using LM Studio for a code-free, seamless way of inferencing reliably! Link to that here: https://lmstudio.ai/ - After LM Studio installs, just go to the search, type puffin, click the first result and then just click download on the right. - Once downloaded you can open the chat tab and select the model in the top model dropdown, now just add in the system prompt and prefix/suffix settings on the right (see model card for recommended settings), then start chatting! Notable Features: - The first Llama-2-based fine-tuned model released by Nous Research (same org that released Hermes) - Ability to recall information up to 2023 without internet (ChatGPT's cut-off date is in 2021) - Pretrained on 2 trillion tokens of text. (This is double the amount of most open LLMs) - Pretrained with a context length of 4096 tokens, and fine-tuned on a significant amount of multi-turn conversations reaching that full token limit. - The first commercially available language model released by Nous Research. Please give feedback on what can be improved and ask any questions!
2023-07-20T11:09:21
https://www.reddit.com/r/LocalLLaMA/comments/154nxwh/puffin_v13_a_llama_2_model_finetuned_on_curated/
dogesator
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154nxwh
false
null
t3_154nxwh
/r/LocalLLaMA/comments/154nxwh/puffin_v13_a_llama_2_model_finetuned_on_curated/
false
false
self
1
{'enabled': False, 'images': [{'id': '1IrQkLG1sye0uaDQ867TB6BdNrVVIW0EdV6HjPByqTo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/89Wvm1yQQ7uWCTKVBYREXVA8FbhdxWzc2U-i-TLXdjQ.jpg?width=108&crop=smart&auto=webp&s=68d33150ff3e585a90055f967049a4c60288e0d2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/89Wvm1yQQ7uWCTKVBYREXVA8FbhdxWzc2U-i-TLXdjQ.jpg?width=216&crop=smart&auto=webp&s=a2be19b078439167112d8258dfc123f173664e05', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/89Wvm1yQQ7uWCTKVBYREXVA8FbhdxWzc2U-i-TLXdjQ.jpg?width=320&crop=smart&auto=webp&s=c873c21de35ea6378d7f3c6bc08bf2cc212ea278', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/89Wvm1yQQ7uWCTKVBYREXVA8FbhdxWzc2U-i-TLXdjQ.jpg?width=640&crop=smart&auto=webp&s=6aff8a79a3d5bcad15af65bc634dd962ce4e88a8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/89Wvm1yQQ7uWCTKVBYREXVA8FbhdxWzc2U-i-TLXdjQ.jpg?width=960&crop=smart&auto=webp&s=fd2a3f2fa2276743a706fe2b96321aab6fbe9eee', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/89Wvm1yQQ7uWCTKVBYREXVA8FbhdxWzc2U-i-TLXdjQ.jpg?width=1080&crop=smart&auto=webp&s=04d2084799981ea935654e4f92550b2aa087e7ae', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/89Wvm1yQQ7uWCTKVBYREXVA8FbhdxWzc2U-i-TLXdjQ.jpg?auto=webp&s=a1d724e10a21e03c1b4054bccf86b5562af2229f', 'width': 1200}, 'variants': {}}]}
Llama-2-7B inference
1
I'm using Llama-2-7B and it's producing outputs like this. When I ask a question, it answers it in one line and then just repeats it until it reaches the token limit, or it produces its own sets of questions and answers. Has anyone else encountered the same problem?
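A few generation-side knobs often help here (added as a hedged suggestion, not a confirmed fix): the base 7B model just continues text, so it will happily invent its own Q&A; sampling instead of greedy decoding, a repetition penalty, and stopping on the EOS token usually reduce the looping. A sketch using standard transformers generate() arguments, with model and tokenizer assumed to be already loaded:

```python
# Assumes `model` and `tokenizer` are an already-loaded Llama-2-7B (transformers).
inputs = tokenizer("Q: What is the capital of France?\nA:", return_tensors="pt").to(model.device)

output = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,            # greedy decoding tends to loop on base models
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.15,   # discourages repeating the same line
    eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```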
2023-07-20T12:03:24
https://i.redd.it/67tmop4c14db1.jpg
Alive_Effective9516
i.redd.it
1970-01-01T00:00:00
0
{}
154p1ae
false
null
t3_154p1ae
/r/LocalLLaMA/comments/154p1ae/llama27b_inference/
false
false
https://b.thumbs.redditm…rRZYM-SRd3eU.jpg
1
{'enabled': True, 'images': [{'id': 'ob5ClT_Q6nNEbY1TxfMm2QwqZjOkLCulSc3579mOaBw', 'resolutions': [{'height': 9, 'url': 'https://preview.redd.it/67tmop4c14db1.jpg?width=108&crop=smart&auto=webp&s=ce37f9c9fd48ac2367d9e72e8b3d63df290a6ed9', 'width': 108}, {'height': 18, 'url': 'https://preview.redd.it/67tmop4c14db1.jpg?width=216&crop=smart&auto=webp&s=123325a3fac97ca9bb4eadcabeadb173beafb3fa', 'width': 216}, {'height': 27, 'url': 'https://preview.redd.it/67tmop4c14db1.jpg?width=320&crop=smart&auto=webp&s=037428f8b212de9d22c805738bda5ad13afac55b', 'width': 320}, {'height': 54, 'url': 'https://preview.redd.it/67tmop4c14db1.jpg?width=640&crop=smart&auto=webp&s=ed738bec178b1976387b7c35816c661b6bb5ec5e', 'width': 640}, {'height': 81, 'url': 'https://preview.redd.it/67tmop4c14db1.jpg?width=960&crop=smart&auto=webp&s=f4754255ccc7e6276ac508559b9113ab604b0ba3', 'width': 960}, {'height': 91, 'url': 'https://preview.redd.it/67tmop4c14db1.jpg?width=1080&crop=smart&auto=webp&s=99b933b127d23f40645134d2ab05d04ec79b790c', 'width': 1080}], 'source': {'height': 136, 'url': 'https://preview.redd.it/67tmop4c14db1.jpg?auto=webp&s=92499f7c8115d4f56c368a91d8b68a28af78976d', 'width': 1600}, 'variants': {}}]}
Llama-2-7B inference
1
I'm using Llama-2-7B and it's producing outputs like this. When I ask a question, it answers it in one line and then just repeats it until it reaches the token limit, or it produces its own sets of questions and answers. Has anyone else encountered the same problem?
2023-07-20T12:04:13
https://i.redd.it/ofhspqfh14db1.jpg
Alive_Effective9516
i.redd.it
1970-01-01T00:00:00
0
{}
154p1zp
false
null
t3_154p1zp
/r/LocalLLaMA/comments/154p1zp/llama27b_inference/
false
false
https://b.thumbs.redditm…OtPFPvjgb7ZE.jpg
1
{'enabled': True, 'images': [{'id': 'qwOea8k7BgiNcBiIKI22TyYVAV7WHRHR80kufS1nYG0', 'resolutions': [{'height': 9, 'url': 'https://preview.redd.it/ofhspqfh14db1.jpg?width=108&crop=smart&auto=webp&s=4af410b599185c556bc98caf7b0e57f3537db09e', 'width': 108}, {'height': 18, 'url': 'https://preview.redd.it/ofhspqfh14db1.jpg?width=216&crop=smart&auto=webp&s=90f5a0093ae9a815669b19443907e1e334885d52', 'width': 216}, {'height': 27, 'url': 'https://preview.redd.it/ofhspqfh14db1.jpg?width=320&crop=smart&auto=webp&s=60a30d7b50b58684a12d2528d909af0b13b59f01', 'width': 320}, {'height': 54, 'url': 'https://preview.redd.it/ofhspqfh14db1.jpg?width=640&crop=smart&auto=webp&s=cbb7df481fb2b903244d9e27790e51ca91418bd1', 'width': 640}, {'height': 81, 'url': 'https://preview.redd.it/ofhspqfh14db1.jpg?width=960&crop=smart&auto=webp&s=cd44a8a75243db412710630396296b1349b79b7a', 'width': 960}, {'height': 91, 'url': 'https://preview.redd.it/ofhspqfh14db1.jpg?width=1080&crop=smart&auto=webp&s=baac360f35b4cbb73ca5cc14e4dd92fe46c0aaef', 'width': 1080}], 'source': {'height': 136, 'url': 'https://preview.redd.it/ofhspqfh14db1.jpg?auto=webp&s=827c0a5e7f1bb3ffafc5e0a724b925657da8ccc3', 'width': 1600}, 'variants': {}}]}
Llama-2-7B inference
1
I'm using Llama-2-7B and it's producing outputs like this. When I ask a question, it answers it in one line and then just repeats it until it reaches the token limit, or produces its own sets of questions and answers. Has anyone else encountered the same problem?
2023-07-20T12:05:26
https://i.redd.it/6ngvbt9p14db1.jpg
Alive_Effective9516
i.redd.it
1970-01-01T00:00:00
0
{}
154p2we
false
null
t3_154p2we
/r/LocalLLaMA/comments/154p2we/llama27b_inference/
false
false
https://b.thumbs.redditm…fC-m6WJ-6R7M.jpg
1
{'enabled': True, 'images': [{'id': 't11svI3a2_nASLBEANweUgLZCnkmZ6em7k92xyQoC2s', 'resolutions': [{'height': 9, 'url': 'https://preview.redd.it/6ngvbt9p14db1.jpg?width=108&crop=smart&auto=webp&s=6e5b7f9e1da2af39455c195ded59bdc74e3385f7', 'width': 108}, {'height': 18, 'url': 'https://preview.redd.it/6ngvbt9p14db1.jpg?width=216&crop=smart&auto=webp&s=c6c91bb68f5b2fe678d050c62b52dabe092d8f09', 'width': 216}, {'height': 27, 'url': 'https://preview.redd.it/6ngvbt9p14db1.jpg?width=320&crop=smart&auto=webp&s=bfe1ca2776126612d268906ebedd9f31c1bed57c', 'width': 320}, {'height': 54, 'url': 'https://preview.redd.it/6ngvbt9p14db1.jpg?width=640&crop=smart&auto=webp&s=c5d504bf4d072393b67dd42594fed52876e3f3ff', 'width': 640}, {'height': 81, 'url': 'https://preview.redd.it/6ngvbt9p14db1.jpg?width=960&crop=smart&auto=webp&s=dd6b2d01bfb4998b7473270ed5574e10cc5d0142', 'width': 960}, {'height': 91, 'url': 'https://preview.redd.it/6ngvbt9p14db1.jpg?width=1080&crop=smart&auto=webp&s=9934d2c3d3a959c16045aed1933a333dab77fd7d', 'width': 1080}], 'source': {'height': 136, 'url': 'https://preview.redd.it/6ngvbt9p14db1.jpg?auto=webp&s=fc58a6c02ef8f21d67e7d2f67896f551980221c7', 'width': 1600}, 'variants': {}}]}
Is it possible to apply SuperHot on Llama 2 and get 16K max context length?
1
SuperHot increased the max context length for the original Llama from 2048 to 8192. Can the same technique be applied to Llama 2 to increase its max context length from 4096 to 16384?
2023-07-20T12:23:22
https://www.reddit.com/r/LocalLLaMA/comments/154pgrv/is_it_possible_to_apply_superhot_on_llama_2_get/
jl303
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154pgrv
false
null
t3_154pgrv
/r/LocalLLaMA/comments/154pgrv/is_it_possible_to_apply_superhot_on_llama_2_get/
false
false
self
1
null
SillyTavern Main release 1.9
1
## API

- **Poe - removed and no longer supported.**
- Updated KAI presets
- Add k_euler_a sampler for StableHorde
- Scale API support
- OpenAI davinci model support
- Randomization button of API generation settings
- OpenRouter can be used independently of WindowAI
- Claude 2 models via Chat Completion API
- oobabooga mirostat support.

## UI

- Improved moving UI (smoother, no more window overflowing)
- Moving UI presets to save and load
- Toggle to 'avoid character card spoilers' (hides the character defs from view)
- Smooth fade transition when character sprites change
- Optimized Extensions manager display
- Unicode icons for colorblind users
- i18n translations (Japanese WIP, Korean), and improved Chinese
- New background to celebrate 10-thousand server members! by <@444454171249213461>
- Group chat member list can be popped out for easy mute/force talk
- Character list toggle to display it as a grid instead of a list
- Chat width is now a slider

## FIXES

- ChromaDB optimization
- Better prompt token itemization
- Fix chat window resize on Mac Safari
- Author's Note is now a built-in function, not an extension.
- Prompt bias is no longer used when Impersonating

## Slash Commands

- /go slash command to open any character by name
- /help is easier to read
- /bgcol to auto-select UI colors based on the background
- /sysgen command to prompt the AI to generate a response as the 'system' entity
- /impersonate (/imp) to call an impersonation request
- /delchat - deletes the current chat.
- /cancel - deletes all messages from a certain character

## New Features

- Statistic tracking for the user and characters (only local, not shared or tracked anywhere else)
- Restyled World Info entry display and logic
  - probability is always on
  - the memo is always visible
  - selective is always on (but only active if the box has contents)
- RegEx auto-substitute for almost anything in the chat/prompt
- Retroactive bookmarking (create a bookmark from past messages)
- API and model are now saved in the metadata of each chat message
  - each swipe now gets its own metadata
- StableDiffusion prompt and Caption results refinement
- customizable AI response stopping strings
- Tokenizers can now use the API you are connected to
- Option to keep chats when you delete a character
- New character list sorting order: Random
- Backgrounds can be renamed inside the UI
- External extension installation via git download

## Macros

- {{random}} macro to select a random item from a list (numbers, text, anything)
- {{idle_duration}} shows the amount of time elapsed since the last message
- {{input}} macro to add in whatever exists in the chat bar
- {{roll}} macro to simulate dice rolling, which is sent to the prompt.

https://github.com/SillyTavern/SillyTavern/releases/tag/1.9.0

## How to update:

https://docs.sillytavern.app/usage/update/#how-to-update-sillytavern
2023-07-20T12:27:22
https://www.reddit.com/r/LocalLLaMA/comments/154pjyc/sillytavern_main_release_19/
RossAscends
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154pjyc
false
null
t3_154pjyc
/r/LocalLLaMA/comments/154pjyc/sillytavern_main_release_19/
false
false
self
1
{'enabled': False, 'images': [{'id': '8GESQJza63Y9ELmkd08222Fs98-eYupCaUhjgCjVypc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ncg70FICPVhuMbyGdas1aZqVcXS1mnLd6ttfmXy7598.jpg?width=108&crop=smart&auto=webp&s=5e52c7997fb2ab170181e3d1c724dd163b01c3b4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Ncg70FICPVhuMbyGdas1aZqVcXS1mnLd6ttfmXy7598.jpg?width=216&crop=smart&auto=webp&s=0cda59eb9fc4176bce5496008d2c2745d7c91784', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Ncg70FICPVhuMbyGdas1aZqVcXS1mnLd6ttfmXy7598.jpg?width=320&crop=smart&auto=webp&s=eb9a8aeaef1a95228cced7a12200b81c2a86700c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Ncg70FICPVhuMbyGdas1aZqVcXS1mnLd6ttfmXy7598.jpg?width=640&crop=smart&auto=webp&s=a78c729710e65df420d490b3f58ee8529d31bbd4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Ncg70FICPVhuMbyGdas1aZqVcXS1mnLd6ttfmXy7598.jpg?width=960&crop=smart&auto=webp&s=c4530442d0c12557cf1e6c43c3aee384cdc42ba7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Ncg70FICPVhuMbyGdas1aZqVcXS1mnLd6ttfmXy7598.jpg?width=1080&crop=smart&auto=webp&s=e6e598a94e9afdec214b6b13e5ff31d96364ac76', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Ncg70FICPVhuMbyGdas1aZqVcXS1mnLd6ttfmXy7598.jpg?auto=webp&s=805fb3c7e4882a2ba70d285b9cc4ab7209199fc4', 'width': 1200}, 'variants': {}}]}
Run Vicuna model without using webui
1
I know how to install models using text generation webui, but is there a way to create my own UI interface, for example? Thank you
2023-07-20T12:44:14
https://www.reddit.com/r/LocalLLaMA/comments/154px9f/run_vicuna_model_without_using_webui/
Alcali
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154px9f
false
null
t3_154px9f
/r/LocalLLaMA/comments/154px9f/run_vicuna_model_without_using_webui/
false
false
self
1
null
Llama-2-13B-chat generates emojis
1
I have been testing Llama-2-13B-chat using the oobabooga API, and I'm very happy with the performance so far. However, I'm getting emojis between text outputs. How can I get rid of that? I use Python.
2023-07-20T13:45:27
https://www.reddit.com/r/LocalLLaMA/comments/154rdj3/llama213bchat_generates_emojis/
mashimaroxc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154rdj3
false
null
t3_154rdj3
/r/LocalLLaMA/comments/154rdj3/llama213bchat_generates_emojis/
false
false
self
1
null
Any example scripts for a full fine tune?
1
I would like to try my hand at doing a full fine tune (where I change all the weights), not LoRA. I'm planning on renting an 8xA100 server on RunPod. Does anyone have an example script I can get started with and learn from? I've been using the oobabooga GUI to train before, so I'm very new to using scripts. I have good programming knowledge, so I'm confident I can learn.
2023-07-20T13:49:38
https://www.reddit.com/r/LocalLLaMA/comments/154rh2s/any_example_scripts_for_a_full_fine_tune/
Tasty-Lobster-8915
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154rh2s
false
null
t3_154rh2s
/r/LocalLLaMA/comments/154rh2s/any_example_scripts_for_a_full_fine_tune/
false
false
self
1
null
What models do you use the most?
1
[removed]
2023-07-20T13:58:06
https://www.reddit.com/r/LocalLLaMA/comments/154rol3/what_models_do_you_use_the_most/
Fantastic-Air8513
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154rol3
false
null
t3_154rol3
/r/LocalLLaMA/comments/154rol3/what_models_do_you_use_the_most/
false
false
self
1
null
Llama-2 7B uncensored - QLoRA fine-tune on wizard_vicuna_70k_unfiltered
1
Just ran a QLoRA fine-tune on Llama-2 with an uncensored conversation dataset: [georgesung/llama2\_7b\_chat\_uncensored · Hugging Face](https://huggingface.co/georgesung/llama2_7b_chat_uncensored) &#x200B; * The dataset used was [ehartford/wizard\_vicuna\_70k\_unfiltered · Datasets at Hugging Face](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered) * I ran QLoRA with these settings [llm\_qlora/configs/llama2\_7b\_chat\_uncensored.yaml at main · georgesung/llm\_qlora (github.com)](https://github.com/georgesung/llm_qlora/blob/main/configs/llama2_7b_chat_uncensored.yaml) * I fine-tuned for 1 epoch (\~35k conversations), and it took 19 hours on a single A10G GPU (24 GB VRAM) * The model card includes instructions on how to reproduce the fine-tuning I set up a simple HuggingFace space to test it out, running on an A10G (will pause this space after a day or so to save $$): [https://huggingface.co/spaces/georgesung/llama2\_7b\_uncensored\_chat](https://huggingface.co/spaces/georgesung/llama2_7b_uncensored_chat) Note this doesn't use any inference optimizations (e.g. vllm) so the responses are pretty slow. From the results though, if you make some "controversial" requests, the model will respond, but it will also add a bit of moral lecturing at the end. Have fun!
2023-07-20T14:00:09
https://www.reddit.com/r/LocalLLaMA/comments/154rqay/llama2_7b_uncensored_qlora_finetune_on_wizard/
georgesung
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154rqay
false
null
t3_154rqay
/r/LocalLLaMA/comments/154rqay/llama2_7b_uncensored_qlora_finetune_on_wizard/
false
false
self
1
{'enabled': False, 'images': [{'id': 'N3sSmDs0mOl8uSIztYWPwXoZGxCMEGVBxdTd4yBBE5Q', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/fm6xTZx1efBH_26urtL9TH5zuzKcceC-VlJ50FNTeS8.jpg?width=108&crop=smart&auto=webp&s=4029d080a665570e05e2ad423c961f5d4ca0f581', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/fm6xTZx1efBH_26urtL9TH5zuzKcceC-VlJ50FNTeS8.jpg?width=216&crop=smart&auto=webp&s=1c1597d998fa0348b60928965ceb16b6b49fe84b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/fm6xTZx1efBH_26urtL9TH5zuzKcceC-VlJ50FNTeS8.jpg?width=320&crop=smart&auto=webp&s=0d12b5e787316dae95acfe08786c098fde0b86c2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/fm6xTZx1efBH_26urtL9TH5zuzKcceC-VlJ50FNTeS8.jpg?width=640&crop=smart&auto=webp&s=f2f84b1c39d63788ce215196b0e23bb3b162d40a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/fm6xTZx1efBH_26urtL9TH5zuzKcceC-VlJ50FNTeS8.jpg?width=960&crop=smart&auto=webp&s=8fff1f2a4926123d0e68822aa9bf79351606c7a9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/fm6xTZx1efBH_26urtL9TH5zuzKcceC-VlJ50FNTeS8.jpg?width=1080&crop=smart&auto=webp&s=5c22889bd18c819310778fd38d1e88c024b3886c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/fm6xTZx1efBH_26urtL9TH5zuzKcceC-VlJ50FNTeS8.jpg?auto=webp&s=aea3487962d5ae7cc70d548c82453d3642fc540b', 'width': 1200}, 'variants': {}}]}
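For readers who want a concrete starting point for the QLoRA fine-tune described in the post above, here is a minimal sketch using the Hugging Face stack (transformers, peft, bitsandbytes, trl). It is not the author's exact training script — the linked `llm_qlora` configs are the authoritative reference — and the dataset field names and prompt tags below are assumptions about the ShareGPT-style schema of `ehartford/wizard_vicuna_70k_unfiltered`, so adjust them to the real data.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

base_model = "meta-llama/Llama-2-7b-hf"

# 4-bit NF4 quantization is what makes this a QLoRA run (fits on a single ~24 GB GPU).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

# Low-rank adapters are trained on top of the frozen 4-bit base model.
peft_config = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")

dataset = load_dataset("ehartford/wizard_vicuna_70k_unfiltered", split="train")

def to_text(example):
    # Assumed ShareGPT-style schema: a "conversations" list of {"from", "value"} turns.
    parts = []
    for turn in example["conversations"]:
        tag = "### HUMAN:" if turn["from"] == "human" else "### RESPONSE:"
        parts.append(f"{tag}\n{turn['value']}")
    return {"text": "\n\n".join(parts)}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=2048,
    tokenizer=tokenizer,
    args=TrainingArguments(
        output_dir="llama2_7b_chat_uncensored_qlora",
        num_train_epochs=1,              # one pass over the data, as in the post
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        learning_rate=2e-4,
        bf16=True,
        logging_steps=20,
    ),
)
trainer.train()
```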
Weird Behavior
1
[removed]
2023-07-20T14:03:28
https://www.reddit.com/r/LocalLLaMA/comments/154rtsg/wierd_behavior/
7ozzam
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154rtsg
false
null
t3_154rtsg
/r/LocalLLaMA/comments/154rtsg/wierd_behavior/
false
false
self
1
null
Tips for personal AI coach with ability to hold me accountable?
1
In short, I have many plans, but due to my ADHD and inherent laziness, I accomplish far less than I aim for, often only 10%. I am tired of this, and the bar for improvement is set low, as literally any extra project completed is a major victory for me at this point. I want an AI that can act as a boss/coach, with the ability to motivate and follow up on my progress. It would be great if it had a sense of time and could message me on Telegram to stay on track. It could run 24/7, as I have spare computer parts and an Nvidia RTX 3080 10GB GPU available. What are your thoughts on this? I'm looking for something that can hold me accountable and keep me focused on my current project. I wish there was a system, even just a drone, that could be around to make sure I stay on track.
2023-07-20T14:11:03
https://www.reddit.com/r/LocalLLaMA/comments/154s0sw/tips_for_personal_ai_coach_with_ability_to_hold/
nodating
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154s0sw
false
null
t3_154s0sw
/r/LocalLLaMA/comments/154s0sw/tips_for_personal_ai_coach_with_ability_to_hold/
false
false
self
1
null
2 bits for your thoughts
1
[removed]
2023-07-20T14:41:22
https://www.reddit.com/r/LocalLLaMA/comments/154stdy/2_bits_for_your_thoughts/
The_Hardcard
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154stdy
false
null
t3_154stdy
/r/LocalLLaMA/comments/154stdy/2_bits_for_your_thoughts/
false
false
self
1
null
I trained the 65b model on my texts so I can talk to myself. It's pretty useless as an assistant, and will only do stuff you convince it to, but I guess it's technically uncensored? I'll leave it up for a bit if you want to chat with it.
1
2023-07-20T15:13:07
https://airic.serveo.net/
LetMeGuessYourAlts
airic.serveo.net
1970-01-01T00:00:00
0
{}
154to1w
false
null
t3_154to1w
/r/LocalLLaMA/comments/154to1w/i_trained_the_65b_model_on_my_texts_so_i_can_talk/
false
false
default
1
null
Does quantization harm results?
1
(noob here) almost every "run llama locally" tut I see uses quantization, but is that used when results are reported in the original papers? do we know how much of a negative impact there is, if any? like, should I prefer a 7B non-quantized model to a 13B quantized model? ty in advance
2023-07-20T15:17:49
https://www.reddit.com/r/LocalLLaMA/comments/154tsux/does_quantization_harm_results/
knight_of_mintz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154tsux
false
null
t3_154tsux
/r/LocalLLaMA/comments/154tsux/does_quantization_harm_results/
false
false
self
1
null
Can we expect a Llama 2.5?
1
2023-07-20T15:23:32
https://www.threads.net/@yannlecun/post/Cu6O_89O9ku?igshid=NTc4MTIwNjQ2YQ%3D%3D
eunumseioquescrever
threads.net
1970-01-01T00:00:00
0
{}
154tyf1
false
null
t3_154tyf1
/r/LocalLLaMA/comments/154tyf1/can_we_expect_a_llama_25/
false
false
default
1
null
PC Shopping to Run Llama 2 in July 2023
1
I need to buy a new Windows PC and I want to be able to run Llama 2. I'm trying to understand my price points and options. I'm a total noob with respect to hardware stuff, plz help. I'll brain dump my thoughts below and would appreciate an education from yall :D

I have an M2 that can run 13B, so in my mental model a Windows machine with equivalent processing should be cheaper. I'm also thinking Nvidia, GPU, and CUDA things work better on Windows compared to Mac, but I don't know:

1. The cheapest Nvidia equipment that will support 13B locally
2. Should I be thinking about non-Nvidia stuff?
3. What about the cheapest setup to run 70B? Specifically, is there a reasonable setup to run 70B within \~$500 USD of running 13B?

That's kind of my current budget thinking; I'll stick with the 13B tier for the new PC unless I can bump up to 70B within that extra spend.
2023-07-20T15:25:20
https://www.reddit.com/r/LocalLLaMA/comments/154u036/pc_shopping_to_run_llama_2_in_july_2023/
knight_of_mintz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154u036
false
null
t3_154u036
/r/LocalLLaMA/comments/154u036/pc_shopping_to_run_llama_2_in_july_2023/
false
false
self
1
null
LLongMA 2: A Llama-2 8k model
1
Releasing LLongMA-2, a suite of Llama-2 models trained at 8k context length using linear positional interpolation scaling. The model was trained in collaboration with u/emozilla of NousResearch and u/kaiokendev. [https://huggingface.co/conceptofmind/LLongMA-2-7b](https://huggingface.co/conceptofmind/LLongMA-2-7b)

We worked directly with u/kaiokendev to extend the context length of the Llama-2 7b model through fine-tuning. The models pass all our evaluations and maintain the same perplexity at 8k extrapolation, surpassing the performance of other recent methodologies.

https://preview.redd.it/medk4ic905db1.png?width=1060&format=png&auto=webp&s=3698ebae2385bf02d5e163b9de40c2810c2fca87

The model has identical performance to LLaMA 2 under 4k context length, performance scales directly to 8k, and it works out-of-the-box with the new version of transformers (4.31) or with \`trust\_remote\_code\` for <= 4.30. A Llama-2 13b model trained at 8k will release soon on huggingface here: [https://huggingface.co/conceptofmind/LLongMA-2-13b](https://huggingface.co/conceptofmind/LLongMA-2-13b)

Applying the method to the rotary position embedding requires only slight changes to the model's code: dividing the positional index, t, by a scaling factor. The repository containing u/emozilla’s implementation of scaled rotary embeddings can be found here: [https://github.com/jquesnelle/scaled-rope](https://github.com/jquesnelle/scaled-rope)

https://preview.redd.it/1akp3u1b05db1.png?width=4176&format=png&auto=webp&s=95010b9c5cb9ffc65798b7a739b581e3f195915e

If you would like to learn more about scaling rotary embeddings, I would strongly recommend reading u/kaiokendev's blog posts on his findings: [https://kaiokendev.github.io/](https://kaiokendev.github.io/)

A PR to add scaled rotary embeddings to u/huggingface transformers has been added by u/joao_gante and merged: [https://github.com/huggingface/transformers/pull/24653](https://github.com/huggingface/transformers/pull/24653)

The model was trained for \~1 billion tokens on u/togethercompute's Red Pajama dataset. The context length of the examples varies: [https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)

The pre-tokenized dataset will be available here for you to use soon: [https://huggingface.co/datasets/conceptofmind/rp-llama-2-7b-tokenized-chunked](https://huggingface.co/datasets/conceptofmind/rp-llama-2-7b-tokenized-chunked)

I would also recommend checking out the phenomenal research by Ofir Press on ALiBi, which laid the foundation for many of these scaling techniques: [https://arxiv.org/abs/2108.12409](https://arxiv.org/abs/2108.12409)

It is also worth reviewing the paper A Length-Extrapolatable Transformer and the xPos technique, which also applies scaling to rotary embeddings: [https://arxiv.org/pdf/2212.10554.pdf](https://arxiv.org/pdf/2212.10554.pdf)

We previously trained the first publicly available model with rotary embedding scaling here: [https://twitter.com/EnricoShippole/status/1655599301454594049?s=20](https://twitter.com/EnricoShippole/status/1655599301454594049?s=20)

A Llama-2 13b model trained at 8k will be released soon, as well as a suite of Llama-2 models trained at 16k context lengths.
You can find out more about the NousResearch organization here: [https://huggingface.co/NousResearch](https://huggingface.co/NousResearch) The compute for this model release is all thanks to the generous sponsorship by CarperAI, Emad Mostaque, and StabilityAI. This is not an official StabilityAI product. If you have any questions about the data or model be sure to reach out and ask! I will try to respond promptly. The previous suite of LLongMA model releases can be found here: [https://twitter.com/EnricoShippole/status/1677346578720256000?s=20](https://twitter.com/EnricoShippole/status/1677346578720256000?s=20) All of the models can be found on Huggingface: [https://huggingface.co/conceptofmind](https://huggingface.co/conceptofmind)
2023-07-20T15:53:33
https://www.reddit.com/r/LocalLLaMA/comments/154us99/llongma_2_a_llama2_8k_model/
EnricoShippole
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154us99
false
null
t3_154us99
/r/LocalLLaMA/comments/154us99/llongma_2_a_llama2_8k_model/
false
false
https://b.thumbs.redditm…jyHYm381pTFA.jpg
1
{'enabled': False, 'images': [{'id': '9-rI0cvPZ4eqrPyEsuLEOmTi0ZvsqJnItMESdMdmoUg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/b9l3-GOe4QcJUtbzpUqUa-2gbN1qiwESKh9rN1ZCLwo.jpg?width=108&crop=smart&auto=webp&s=9ed099640027ab49faa11c0bf5fec503cb3c5f58', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/b9l3-GOe4QcJUtbzpUqUa-2gbN1qiwESKh9rN1ZCLwo.jpg?width=216&crop=smart&auto=webp&s=db07787a1704813eb39856e2f21ab75ab0615ef9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/b9l3-GOe4QcJUtbzpUqUa-2gbN1qiwESKh9rN1ZCLwo.jpg?width=320&crop=smart&auto=webp&s=554e3e6c4f956fc5b5c85d178dca5caa2bea3b81', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/b9l3-GOe4QcJUtbzpUqUa-2gbN1qiwESKh9rN1ZCLwo.jpg?width=640&crop=smart&auto=webp&s=f9d1400e3d356300f9b041a3095d3683ee8a7570', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/b9l3-GOe4QcJUtbzpUqUa-2gbN1qiwESKh9rN1ZCLwo.jpg?width=960&crop=smart&auto=webp&s=52c861d15452ec1e9ab43c45ebfbd9f24a94f29c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/b9l3-GOe4QcJUtbzpUqUa-2gbN1qiwESKh9rN1ZCLwo.jpg?width=1080&crop=smart&auto=webp&s=e517a54ebe934c2bd8fed28dfd16801a0261d29e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/b9l3-GOe4QcJUtbzpUqUa-2gbN1qiwESKh9rN1ZCLwo.jpg?auto=webp&s=7d120e9837310354d43476102689399dede43a33', 'width': 1200}, 'variants': {}}]}
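To make the "divide the positional index t by a scaling factor" idea from the LLongMA-2 post above concrete, here is a minimal, self-contained sketch of a rotary embedding with linear position interpolation. It illustrates the technique only; it is not the scaled-rope repository's actual code, and the class name, signature, and example scale are placeholders.

```python
import torch


class InterpolatedRotaryEmbedding(torch.nn.Module):
    """Rotary position embedding with linear position interpolation.

    `scale` is the context-extension ratio (e.g. 2.0 when a model trained at
    4k positions is fine-tuned/evaluated at 8k). Dividing the position index t
    by this factor squeezes the longer sequence back into the position range
    the model saw during pre-training.
    """

    def __init__(self, dim: int, base: float = 10000.0, scale: float = 2.0):
        super().__init__()
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
        self.register_buffer("inv_freq", inv_freq)
        self.scale = scale

    def forward(self, seq_len: int, device=None):
        t = torch.arange(seq_len, device=device, dtype=torch.float32)
        t = t / self.scale                      # the one-line change behind linear interpolation
        freqs = torch.einsum("i,j->ij", t, self.inv_freq)
        emb = torch.cat((freqs, freqs), dim=-1)
        return emb.cos(), emb.sin()             # applied to the query/key vectors as usual


# Example: a 128-dim head evaluated at 8192 tokens with a model trained at 4096.
cos, sin = InterpolatedRotaryEmbedding(dim=128, scale=8192 / 4096)(seq_len=8192)
print(cos.shape)  # torch.Size([8192, 128])
```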
Question about Llama 2
1
Do I understand correctly that Llama 2 (perhaps 1 also) uses the strings "User:\\n" and "Assistant:\\n" for what some systems call "<|USER|>" and "<|ASSISTANT|>"? Isn't that incredibly error-prone? What if it is writing a movie script and a character called "User:" needs to say something?
2023-07-20T15:56:37
https://www.reddit.com/r/LocalLLaMA/comments/154uvbv/question_about_llama_2/
Smallpaul
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154uvbv
false
null
t3_154uvbv
/r/LocalLLaMA/comments/154uvbv/question_about_llama_2/
false
false
self
1
null
API for trending topics
1
Hey, I'm working on a project that lets me get trending topics from different niches along with a small description. For example, if the niche is Machine Learning, then you could have a bunch of trending topics like AppleGPT or LLaMA2. The best part about this imo is that you could also narrow it down to topics trending on certain platforms like reddit or twitter. Let me know if you'd be interested in such a thing!
2023-07-20T17:02:59
https://www.reddit.com/r/LocalLLaMA/comments/154wp2d/api_for_trending_topics/
04RR
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154wp2d
false
null
t3_154wp2d
/r/LocalLLaMA/comments/154wp2d/api_for_trending_topics/
false
false
self
1
null
Local LLMs on Windows 2022 Server
1
I just recently got myself a server running Windows Server 2022 with 96 GB of RAM. My first thought was to use the RAM to run Hugging Face models locally, but I can't seem to get it to work (currently I'm just trying to get my network card to function properly). Are there any examples of this being done elsewhere? I tried installing normal Windows 11 and running GPT4All, but it kept crashing out with errors, and running Hugging Face models normally through Python never actually did anything. Has anyone else ever done this before?
2023-07-20T17:07:33
https://www.reddit.com/r/LocalLLaMA/comments/154wtn0/local_llms_on_windows_2022_server/
EternalDuskGaming
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154wtn0
false
null
t3_154wtn0
/r/LocalLLaMA/comments/154wtn0/local_llms_on_windows_2022_server/
false
false
self
1
null
Run Llama 2 Locally in 7 Lines! (Apple Silicon Mac)
1
[removed]
2023-07-20T18:23:56
https://www.reddit.com/r/LocalLLaMA/comments/154ywsl/run_llama_2_locally_in_7_lines_apple_silicon_mac/
InevitableSky2801
self.LocalLLaMA
2023-07-20T19:07:14
0
{}
154ywsl
false
null
t3_154ywsl
/r/LocalLLaMA/comments/154ywsl/run_llama_2_locally_in_7_lines_apple_silicon_mac/
false
false
default
1
{'enabled': False, 'images': [{'id': 'mq0RdVEsfsBVZDgG1ml22AnyLbr11b15p8kZL09YjjI', 'resolutions': [{'height': 27, 'url': 'https://external-preview.redd.it/ZNis6dUUjN_8bdqb0icFPwNc13ad1s5UoY4gpBL9at8.jpg?width=108&crop=smart&auto=webp&s=9d58a6e538c82d2ad192ec7664c1c57dc3d74c71', 'width': 108}, {'height': 55, 'url': 'https://external-preview.redd.it/ZNis6dUUjN_8bdqb0icFPwNc13ad1s5UoY4gpBL9at8.jpg?width=216&crop=smart&auto=webp&s=e55c855f48a7d41f400443d73699a3f3a3983628', 'width': 216}, {'height': 81, 'url': 'https://external-preview.redd.it/ZNis6dUUjN_8bdqb0icFPwNc13ad1s5UoY4gpBL9at8.jpg?width=320&crop=smart&auto=webp&s=90bae43f1f2f0338e86c67f2e9a514eca01af55b', 'width': 320}, {'height': 163, 'url': 'https://external-preview.redd.it/ZNis6dUUjN_8bdqb0icFPwNc13ad1s5UoY4gpBL9at8.jpg?width=640&crop=smart&auto=webp&s=28d367a47d4b31723a5445b31fc674a5455cb01f', 'width': 640}, {'height': 244, 'url': 'https://external-preview.redd.it/ZNis6dUUjN_8bdqb0icFPwNc13ad1s5UoY4gpBL9at8.jpg?width=960&crop=smart&auto=webp&s=9d706424b0da05af718cd3255721d99e1d0d9d6c', 'width': 960}, {'height': 275, 'url': 'https://external-preview.redd.it/ZNis6dUUjN_8bdqb0icFPwNc13ad1s5UoY4gpBL9at8.jpg?width=1080&crop=smart&auto=webp&s=f8320f9c46956b56b7b268673b5fdab98e8a299e', 'width': 1080}], 'source': {'height': 306, 'url': 'https://external-preview.redd.it/ZNis6dUUjN_8bdqb0icFPwNc13ad1s5UoY4gpBL9at8.jpg?auto=webp&s=8f758a2780563b13f0b0f0e7d0f9a3f53a28ddb2', 'width': 1200}, 'variants': {}}]}
Introducing starcoder.js: Web Browser port of starcoder.cpp
1
Hi guys, I've been exploring how to run ML models in the browser and came across some great work in the community like [transformers.js](https://github.com/rahuldshetty/starcoder.js). Taking inspiration from this, and after a few hours of research into WASM & web documentation, I was able to port the starcoder.cpp project and run it in the browser.

**starcoder.js**

You can now port and run any of the StarCoder series models in the browser with the starcoder.js framework. The framework uses the Emscripten project to build starcoder.cpp into WASM/HTML formats, generating a bundle that can be executed in the browser. starcoder.js uses Web Workers to initialize and run the model for inference.

[Demo](https://preview.redd.it/ewi6s4btx5db1.png?width=973&format=png&auto=webp&s=63f4a2430caea81778ae46bb593de9c24a9308b0) [Example Generation](https://preview.redd.it/2domo3btx5db1.png?width=947&format=png&auto=webp&s=667bf570e37dca30c2d7c5baa7bb8ccd2168ca4b) [Browser Performance](https://preview.redd.it/xpjey4btx5db1.png?width=544&format=png&auto=webp&s=986793ae7d62188b273d8032e27573d26e88ece3)

Source Code: [https://github.com/rahuldshetty/starcoder.js](https://github.com/rahuldshetty/starcoder.js)

Demo: [https://rahuldshetty.github.io/starcoder.js/](https://rahuldshetty.github.io/starcoder.js/)
2023-07-20T18:35:05
https://www.reddit.com/r/LocalLLaMA/comments/154z7rh/introducing_starcoderjs_web_browser_port_of/
AnonymousD3vil
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154z7rh
false
null
t3_154z7rh
/r/LocalLLaMA/comments/154z7rh/introducing_starcoderjs_web_browser_port_of/
false
false
https://b.thumbs.redditm…e7qS5NxSMM7U.jpg
1
{'enabled': False, 'images': [{'id': 'HlUVyzuzjbcDxEIBFj0YGgHpLmxJeOyTlaKm4Hx6aXE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GWlvKGkZtoBxCyfmzxx_uu44lWzUDkBorVXpoqouc5U.jpg?width=108&crop=smart&auto=webp&s=4357c438c2e56303aca6d6f5d28f59236e6b893d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GWlvKGkZtoBxCyfmzxx_uu44lWzUDkBorVXpoqouc5U.jpg?width=216&crop=smart&auto=webp&s=d9bd74d3eb613c465b429b6eddb7892641004abf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GWlvKGkZtoBxCyfmzxx_uu44lWzUDkBorVXpoqouc5U.jpg?width=320&crop=smart&auto=webp&s=6ef11cbb451a5cd174f9d8df9947e762c668077f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GWlvKGkZtoBxCyfmzxx_uu44lWzUDkBorVXpoqouc5U.jpg?width=640&crop=smart&auto=webp&s=a840b2256c2d03fbd64c39fd67cad8178d3b917b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GWlvKGkZtoBxCyfmzxx_uu44lWzUDkBorVXpoqouc5U.jpg?width=960&crop=smart&auto=webp&s=91f2a5bd8020bb20f2da83f000128d45127f4420', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GWlvKGkZtoBxCyfmzxx_uu44lWzUDkBorVXpoqouc5U.jpg?width=1080&crop=smart&auto=webp&s=942451026c6507c0ef31f6d67a3f80bbd47427fb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GWlvKGkZtoBxCyfmzxx_uu44lWzUDkBorVXpoqouc5U.jpg?auto=webp&s=f4ff3f20fa087610fca4442894e1198a30655e80', 'width': 1200}, 'variants': {}}]}
OpenKlyde - A Self Hosted AI Bot for a popular chat app
1
OpenKlyde is an AI bot that connects to a koboldcpp instance by API calls. Have a more intelligent Clyde Bot of your own making! OpenKlyde incorporates an AI Large Language Model (LLM) into a bot by making API calls to a Koboldcpp instance. It can also work with Oobabooga (Oobabooga support is still a work in progress). You will need an instance of Koboldcpp running on your machine. In theory, you should also be able to connect it to the Horde, but I haven't tested that implementation yet. As of now this bot is only a chat bot, but it can also generate images with Automatic1111 Stable Diffusion. [https://github.com/badgids/OpenKlyde.git](https://github.com/badgids/OpenKlyde.git) Cheers!
2023-07-20T18:48:34
https://www.reddit.com/r/LocalLLaMA/comments/154zkry/openklyde_a_self_hosted_ai_bot_for_a_popular_chat/
Slight-Living-8098
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154zkry
false
null
t3_154zkry
/r/LocalLLaMA/comments/154zkry/openklyde_a_self_hosted_ai_bot_for_a_popular_chat/
false
false
self
1
{'enabled': False, 'images': [{'id': 'OlwJ0bILthTShjzDmasJ_EcbNCFoKAvQvCT7PC1KUIg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fs39xl53PEgFCkkoi4D2LDVwYhMki47w6FGidcZK25Q.jpg?width=108&crop=smart&auto=webp&s=daf13c142fb80d64cb71e2f7e0ec42048ddfc8a5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fs39xl53PEgFCkkoi4D2LDVwYhMki47w6FGidcZK25Q.jpg?width=216&crop=smart&auto=webp&s=de5cf65d0411880ff25e53b8c8717e4f9f5fdea6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fs39xl53PEgFCkkoi4D2LDVwYhMki47w6FGidcZK25Q.jpg?width=320&crop=smart&auto=webp&s=1e4c8c6dfcb48e6ac2d2c7e885a68e4a8573bddb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fs39xl53PEgFCkkoi4D2LDVwYhMki47w6FGidcZK25Q.jpg?width=640&crop=smart&auto=webp&s=f2edab1e97620e3f9c2ab9209a240292a9c0914a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fs39xl53PEgFCkkoi4D2LDVwYhMki47w6FGidcZK25Q.jpg?width=960&crop=smart&auto=webp&s=81ace3b6be19e12645bf442b23b00079112cd2f3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fs39xl53PEgFCkkoi4D2LDVwYhMki47w6FGidcZK25Q.jpg?width=1080&crop=smart&auto=webp&s=cab82af21c7c182f1319aa30c31e73ee70fcea97', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fs39xl53PEgFCkkoi4D2LDVwYhMki47w6FGidcZK25Q.jpg?auto=webp&s=182bea657b7767b73d541e7c7a23a738812fa9b3', 'width': 1200}, 'variants': {}}]}
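For anyone curious what "connects to a koboldcpp instance by API calls" looks like in practice, below is a rough Python sketch of the kind of request such a bot would make. It assumes koboldcpp is running locally on its default port (5001) and exposing the standard KoboldAI-compatible `/api/v1/generate` endpoint; it is not taken from the OpenKlyde codebase.

```python
import requests

KOBOLD_URL = "http://localhost:5001/api/v1/generate"  # default koboldcpp address (assumed)


def generate(prompt: str, max_length: int = 200) -> str:
    """Send a prompt to a local koboldcpp instance and return the generated text."""
    payload = {
        "prompt": prompt,
        "max_length": max_length,
        "temperature": 0.7,
        "rep_pen": 1.1,
    }
    resp = requests.post(KOBOLD_URL, json=payload, timeout=300)
    resp.raise_for_status()
    # The KoboldAI-style API wraps generations in {"results": [{"text": ...}]}.
    return resp.json()["results"][0]["text"]


if __name__ == "__main__":
    reply = generate("You are Klyde, a friendly chat bot.\nUser: Hello there!\nKlyde:")
    print(reply.strip())
```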
KoboldCPP v1.3.6 - 8k context for GGML models.
1
KoboldCPP is a roleplaying program that allows you to use GGML AI models, which are largely dependent on your CPU+RAM. The current version of KoboldCPP now supports 8k context, but setting it up isn't intuitive. Take the following steps for basic 8k context usage.

0 - Get the program: https://github.com/LostRuins/koboldcpp/releases

1 - Download an 8k context model, or a 16k edition. Kobold can't unlock the full potential of 16k yet.

2 - Place KoboldCPP in a folder somewhere.

3 - Move your 8k GGML model into the folder.

4 - Create a shortcut of KoboldCPP.

5a - Edit your shortcut with the configuration below. At the model section of the example below, replace the model name.

5b - koboldcpp.exe --ropeconfig 0.125 10000 --launch --unbantokens --contextsize 8192 --smartcontext --usemlock --model airoboros-33b-gpt4-1.4.1-lxctx-PI-16384.ggmlv3.q6_K

6 - Those with useful GPUs will have to add further arguments to use your GPU effectively. You can't change the RopeConfig with the launcher yet, which is why the edited shortcut is used. You will have to research the GPU options yourself, because I don't have a GPU that works well for AI.

6a - Pick your preset, then replace the sequence order with 6,0,1,3,4,2,5

6b - You will have to change the order every time you change to a different preset.

By doing the above, your copy of Kobold can use 8k context effectively for models that are built with it in mind. Advanced users should look into a pipeline consisting of Kobold --> SimpleProxyTavern --> SillyTavern for the greatest roleplaying freedom.
2023-07-20T18:52:25
https://www.reddit.com/r/LocalLLaMA/comments/154zon3/koboldcpp_v136_8k_context_for_ggml_models/
Sabin_Stargem
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
154zon3
false
null
t3_154zon3
/r/LocalLLaMA/comments/154zon3/koboldcpp_v136_8k_context_for_ggml_models/
false
false
self
1
{'enabled': False, 'images': [{'id': 'FjVltwS3zYsMg9BJs0UuMBpgfZE-asPvjSr6AvInQ6w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QqgrSFcMWF8yxHWUt_OZGdGgcyb8Liq7ZcOTfzK5gu8.jpg?width=108&crop=smart&auto=webp&s=30dd4c488eb3544f4c92efe1c245bb5e943eb2c9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/QqgrSFcMWF8yxHWUt_OZGdGgcyb8Liq7ZcOTfzK5gu8.jpg?width=216&crop=smart&auto=webp&s=3c3ac388b8e67cbd621de8dc086f23416dfb2afa', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/QqgrSFcMWF8yxHWUt_OZGdGgcyb8Liq7ZcOTfzK5gu8.jpg?width=320&crop=smart&auto=webp&s=73ab7454ebff8488877d640d6e2b4c19f23dd837', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/QqgrSFcMWF8yxHWUt_OZGdGgcyb8Liq7ZcOTfzK5gu8.jpg?width=640&crop=smart&auto=webp&s=6f111357519caf7b0865bb99226d2d8d89ae8f1d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/QqgrSFcMWF8yxHWUt_OZGdGgcyb8Liq7ZcOTfzK5gu8.jpg?width=960&crop=smart&auto=webp&s=101521c1cb6ff0a3e018c5d715cf4be5f09df27c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/QqgrSFcMWF8yxHWUt_OZGdGgcyb8Liq7ZcOTfzK5gu8.jpg?width=1080&crop=smart&auto=webp&s=7dab33cced7f6d8e948beef0db537aa278e9632f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/QqgrSFcMWF8yxHWUt_OZGdGgcyb8Liq7ZcOTfzK5gu8.jpg?auto=webp&s=cb6771ba5d43696b14b3611cecf472868276c8ed', 'width': 1200}, 'variants': {}}]}
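A short note on where the `--ropeconfig 0.125 10000` values in the KoboldCPP post above come from (this is my own reading of the flags, so verify against the KoboldCPP release notes): with linear RoPE scaling, the first argument is the position scale factor and the second is the rotary base. The example model was extended from LLaMA's native 2048 positions to 16384, so the scale is 2048 / 16384 = 0.125, while 10000 is simply the stock rotary base. A model extended only to 8192 would instead use roughly 2048 / 8192 = 0.25.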
Amusing myself - fine-tuning LLama 2 on old Bing Sydney conversions....
1
2023-07-20T19:23:15
https://www.reddit.com/gallery/1550iid
FPham
reddit.com
1970-01-01T00:00:00
0
{}
1550iid
false
null
t3_1550iid
/r/LocalLLaMA/comments/1550iid/amusing_myself_finetuning_llama_2_on_old_bing/
false
false
https://b.thumbs.redditm…uQ0-RLWFo99o.jpg
1
null