Column      Type            Range / classes
title       string          1-300 chars
score       int64           0-8.54k
selftext    string          0-40k chars
created     timestamp[ns]
url         string          0-780 chars
author      string          3-20 chars
domain      string          0-82 chars
edited      timestamp[ns]
gilded      int64           0-2
gildings    string          7 classes
id          string          7 chars
locked      bool            2 classes
media       string          646-1.8k chars
name        string          10 chars
permalink   string          33-82 chars
spoiler     bool            2 classes
stickied    bool            2 classes
thumbnail   string          4-213 chars
ups         int64           0-8.54k
preview     string          301-5.01k chars
Running MPT-30B on CPU and GPU?
1
[removed]
2023-08-21T20:15:16
https://www.reddit.com/r/LocalLLaMA/comments/15xip0u/running_mtp30b_on_cpu_and_gpu/
Overall-Importance54
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15xip0u
false
null
t3_15xip0u
/r/LocalLLaMA/comments/15xip0u/running_mtp30b_on_cpu_and_gpu/
false
false
self
1
{'enabled': False, 'images': [{'id': 'SBT8VvFr6B4mOt0xyPTLhVdfJPlOBZ4fx8E1eWZK1N8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/N2NAHRNIguiui5kk68xCRCinf57Vc3I1mA6eEZBFmlI.jpg?width=108&crop=smart&auto=webp&s=757c2bfa441bababfe3ea962bbdb24ba174d6d73', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/N2NAHRNIguiui5kk68xCRCinf57Vc3I1mA6eEZBFmlI.jpg?width=216&crop=smart&auto=webp&s=46c3abc604f9f5a60376729ebe86e95a713885bb', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/N2NAHRNIguiui5kk68xCRCinf57Vc3I1mA6eEZBFmlI.jpg?width=320&crop=smart&auto=webp&s=7ee0edf530e83dd12d201aee3749f3bada391eb8', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/N2NAHRNIguiui5kk68xCRCinf57Vc3I1mA6eEZBFmlI.jpg?auto=webp&s=26410f00954b26c31409d25f79bdddb48071b7ab', 'width': 480}, 'variants': {}}]}
Is the 3090 really the best GPU for 13-30B Models?
1
I am wondering if the 3090 is really the most cost-efficient and best overall GPU for inference on 13B/30B parameter models. If so, I am curious why that's the case. The 3090's inference speed is similar to the A100, which is a GPU made specifically for AI, and on top of that the 3090 was released a while back.
2023-08-21T20:33:39
https://www.reddit.com/r/LocalLLaMA/comments/15xj7mc/is_the_3090_really_the_best_gpu_for_1330b_models/
ll_Teto_ll
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15xj7mc
false
null
t3_15xj7mc
/r/LocalLLaMA/comments/15xj7mc/is_the_3090_really_the_best_gpu_for_1330b_models/
false
false
self
1
null
Math Is No Prob Llama.
1
2023-08-21T21:24:39
https://i.redd.it/sywhdqrl6jjb1.jpg
IndividualCup7493
i.redd.it
1970-01-01T00:00:00
0
{}
15xkn6x
false
null
t3_15xkn6x
/r/LocalLLaMA/comments/15xkn6x/math_is_no_prob_llama/
false
false
https://b.thumbs.redditm…6y5s1Jc_UPlg.jpg
1
{'enabled': True, 'images': [{'id': 'U8Y7fWdexAMA7rflYIBjkibju4Ni6DpJ3ElaFfSKEX4', 'resolutions': [{'height': 109, 'url': 'https://preview.redd.it/sywhdqrl6jjb1.jpg?width=108&crop=smart&auto=webp&s=acf2c1df054b83811380f5b8c64382380cc3816c', 'width': 108}, {'height': 219, 'url': 'https://preview.redd.it/sywhdqrl6jjb1.jpg?width=216&crop=smart&auto=webp&s=bfc3059c40ed6216305bc3fbe9efd48dabea0c0a', 'width': 216}, {'height': 324, 'url': 'https://preview.redd.it/sywhdqrl6jjb1.jpg?width=320&crop=smart&auto=webp&s=dc5563236877940cccb7fca1ad441598bc332a51', 'width': 320}, {'height': 649, 'url': 'https://preview.redd.it/sywhdqrl6jjb1.jpg?width=640&crop=smart&auto=webp&s=0c600aa9f7dceb70844b4145a8bfd5083d712ecd', 'width': 640}, {'height': 974, 'url': 'https://preview.redd.it/sywhdqrl6jjb1.jpg?width=960&crop=smart&auto=webp&s=5ea6a23ecd4b6866a87f97df62998daa87ae36b5', 'width': 960}], 'source': {'height': 1079, 'url': 'https://preview.redd.it/sywhdqrl6jjb1.jpg?auto=webp&s=1c5bd7ca8a3105774cad0d0e7efed88b16c56abc', 'width': 1063}, 'variants': {}}]}
Stream Llama 2 to your MacBook Using PyXet
1
2023-08-21T22:35:21
https://xethub.com/blog/stream-llama-2-macbook-minutes-pyxet
semicausal
xethub.com
1970-01-01T00:00:00
0
{}
15xmjtu
false
null
t3_15xmjtu
/r/LocalLLaMA/comments/15xmjtu/stream_llama_2_to_your_macbook_using_pyxet/
false
false
https://b.thumbs.redditm…-X-q8syqTdbk.jpg
1
{'enabled': False, 'images': [{'id': 'h2NCs-fA1bIqcY8f6zuoe7C_zn_P_ckTJNHB_CzpDA4', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/k-mr4HUT-bl9eyih14zv3N0ohl0P8ENLIlR-VOhCWLo.jpg?width=108&crop=smart&auto=webp&s=abb4969ecfca11c2b95e162ba966f5f5a6de436a', 'width': 108}, {'height': 132, 'url': 'https://external-preview.redd.it/k-mr4HUT-bl9eyih14zv3N0ohl0P8ENLIlR-VOhCWLo.jpg?width=216&crop=smart&auto=webp&s=32b0b99b855aea7bd748a279ddafa1731dee3281', 'width': 216}, {'height': 195, 'url': 'https://external-preview.redd.it/k-mr4HUT-bl9eyih14zv3N0ohl0P8ENLIlR-VOhCWLo.jpg?width=320&crop=smart&auto=webp&s=21b803960b5d8487429fa7057f2a92e29e6182ae', 'width': 320}, {'height': 391, 'url': 'https://external-preview.redd.it/k-mr4HUT-bl9eyih14zv3N0ohl0P8ENLIlR-VOhCWLo.jpg?width=640&crop=smart&auto=webp&s=333ba5c57fa8ea882340b0f8ed1f015ab2e21373', 'width': 640}, {'height': 586, 'url': 'https://external-preview.redd.it/k-mr4HUT-bl9eyih14zv3N0ohl0P8ENLIlR-VOhCWLo.jpg?width=960&crop=smart&auto=webp&s=b4a01913e163625cba06023a3e85e027056a0339', 'width': 960}, {'height': 660, 'url': 'https://external-preview.redd.it/k-mr4HUT-bl9eyih14zv3N0ohl0P8ENLIlR-VOhCWLo.jpg?width=1080&crop=smart&auto=webp&s=1ce429d921120f16a7340d7676eb45b72af75823', 'width': 1080}], 'source': {'height': 880, 'url': 'https://external-preview.redd.it/k-mr4HUT-bl9eyih14zv3N0ohl0P8ENLIlR-VOhCWLo.jpg?auto=webp&s=584a1bf24e3eff142859d80fcb1360e46d62f342', 'width': 1440}, 'variants': {}}]}
digestible introduction to fine-tuning
1
does anyone know of any frontend libraries that let you fine-tune through a UI?
2023-08-21T23:09:34
https://github.com/facebookresearch/llama-recipes/tree/main
LyPreto
github.com
1970-01-01T00:00:00
0
{}
15xngf8
false
null
t3_15xngf8
/r/LocalLLaMA/comments/15xngf8/digestable_introduction_to_finetuning/
false
false
https://b.thumbs.redditm…sqHovWtwlwYc.jpg
1
{'enabled': False, 'images': [{'id': 'eLKD8gOVthCQeqwrQ2HrGab0RwonNOvsOfAV-r9asfs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/r9Au0QvgiMkb64QZtCVr9ZVIs-aKWLMNYyo77cNPhHY.jpg?width=108&crop=smart&auto=webp&s=79c7aad9743792f0b9b3b0052fc1d95470da1d6a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/r9Au0QvgiMkb64QZtCVr9ZVIs-aKWLMNYyo77cNPhHY.jpg?width=216&crop=smart&auto=webp&s=9b7459cbc40654b9778f4bf567ec242b3dfca42a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/r9Au0QvgiMkb64QZtCVr9ZVIs-aKWLMNYyo77cNPhHY.jpg?width=320&crop=smart&auto=webp&s=a2f6954f9c6271eab01d0c2c755b53344418b893', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/r9Au0QvgiMkb64QZtCVr9ZVIs-aKWLMNYyo77cNPhHY.jpg?width=640&crop=smart&auto=webp&s=169d68690d8e7f41a445d3800b3c6c1810dc1979', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/r9Au0QvgiMkb64QZtCVr9ZVIs-aKWLMNYyo77cNPhHY.jpg?width=960&crop=smart&auto=webp&s=203a3d3d95fd1f4876d6fb53d3c3e9b4cce4d177', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/r9Au0QvgiMkb64QZtCVr9ZVIs-aKWLMNYyo77cNPhHY.jpg?width=1080&crop=smart&auto=webp&s=f81a93bc5ae3eacb9cbb05505892472d139b51f3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/r9Au0QvgiMkb64QZtCVr9ZVIs-aKWLMNYyo77cNPhHY.jpg?auto=webp&s=ac2a5972c11e5915a5598c8c61b6cb455558c7e9', 'width': 1200}, 'variants': {}}]}
Free Online 8k/16k+ llama2 7b/13b/70b?
1
Anyone?
2023-08-21T23:32:14
https://www.reddit.com/r/LocalLLaMA/comments/15xo1fu/free_online_8k16k_llama2_7b13b70b/
SakamotoKyu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15xo1fu
false
null
t3_15xo1fu
/r/LocalLLaMA/comments/15xo1fu/free_online_8k16k_llama2_7b13b70b/
false
false
self
1
null
Training TheBloke Llama 2 7b Chat GPTQ
1
[removed]
2023-08-21T23:50:54
https://www.reddit.com/r/LocalLLaMA/comments/15xoibu/training_thebloke_llama_2_7b_chat_gptq/
skdidjsj
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15xoibu
false
null
t3_15xoibu
/r/LocalLLaMA/comments/15xoibu/training_thebloke_llama_2_7b_chat_gptq/
false
false
self
1
null
Small tip: Be mindful of passive VRAM consumption
1
[removed]
2023-08-22T00:22:23
https://www.reddit.com/r/LocalLLaMA/comments/15xpajq/small_tip_be_mindful_of_passive_vram_consumption/
kindacognizant
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15xpajq
false
null
t3_15xpajq
/r/LocalLLaMA/comments/15xpajq/small_tip_be_mindful_of_passive_vram_consumption/
false
false
self
1
null
Model suggestions for coding + workflow?
1
I've been experimenting with LLMs mostly for entertainment purposes but would like to try using them for something more productive for a change. I'm looking for something to assist with code writing + debugging (primary languages: R, Python, C++), and text summarization/outline writing/reorganization/reformatting/etc. ChatGPT has been okay so far for the text tasks, but I'm looking for something local that I can theoretically use for actual work instead of hobby writing with less worry about privacy/security. With my current home office setup I can run a 13B Q4 quantized model pretty easily with llama.cpp, and have the resources to run a 33B Q4 quantized model (slowly) if needed. What models have you guys had good success with for these tasks? Are any 13B models currently powerful enough to do these tasks decently, or am I out of luck unless I can run larger ones (e.g. 70B)? P.S. I'm also curious what UIs work well for coding tasks, since the ones I use for hobby things (SillyTavern / Agnaistic ) are probably not ideal for workflow unless I *really* need an anime waifu overseeing my work.
2023-08-22T00:25:42
https://www.reddit.com/r/LocalLLaMA/comments/15xpdfg/model_suggestions_for_coding_workflow/
big_kitty_enjoyer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15xpdfg
false
null
t3_15xpdfg
/r/LocalLLaMA/comments/15xpdfg/model_suggestions_for_coding_workflow/
false
false
self
1
null
What is the best model for Roleplay and Storyteller for Oobabooga?
1
Title
2023-08-22T02:44:23
https://www.reddit.com/r/LocalLLaMA/comments/15xskg0/what_is_the_best_model_for_roleplay_and/
TheDonnyDoggo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15xskg0
false
null
t3_15xskg0
/r/LocalLLaMA/comments/15xskg0/what_is_the_best_model_for_roleplay_and/
false
false
self
1
null
LocalLLaMA noob: are my specs OK to start hosting models?
1
Hey all, I'm very intrigued by hosting my own AI and not paying for censored ChatGPT anymore; I just have a few questions. My specs are as follows: CPU: RX 5800xt, GPU: Nvidia 3060 12GB (this is my planned upgrade as it's the most affordable option to start, from what I see), RAM: 64 GB DDR4. Would this be OK to start hosting my own AI? I'd prefer to do this through Docker, and try to send queries using a copilot-like extension in VS Code. Also, at some point I want to train a model on specific topics to enhance my studying efficiency. I understand that my GPU would not be able to do this, but are there services available to train a model on this data so I can host it myself? Apologies for sounding like a complete noob, but after lurking for the past day I'm really intrigued by what I can do with my own hardware now.
2023-08-22T03:32:44
https://www.reddit.com/r/LocalLLaMA/comments/15xtm21/localllama_noob_are_my_specs_ok_to_start_hosting/
asetofaces
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15xtm21
false
null
t3_15xtm21
/r/LocalLLaMA/comments/15xtm21/localllama_noob_are_my_specs_ok_to_start_hosting/
false
false
self
1
null
70B LLM expected performance on 4090 + i9
1
I have an Alienware R15 with 32 GB DDR5, an i9, and an RTX 4090. I was able to load a 70B GGML model, offloading 42 layers onto the GPU using oobabooga. After the initial load and first text generation, which is extremely slow at ~0.2 t/s, subsequent text generation is about 1.2 t/s. I noticed SSD activity (likely due to low system RAM) on the first text generation; there is virtually no SSD activity on subsequent generations. I'm thinking about upgrading the RAM to 64 GB, which is the max on the Alienware R15. Will it help, and if so, does anyone have an idea how much improvement I can expect? Appreciate any feedback or alternative suggestions.
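A rough back-of-envelope split (assuming a ~39 GB q4 GGML file and 80 transformer layers, both approximations) suggests why 32 GB of system RAM is tight here:

```python
# Back-of-envelope split of a 70B q4 GGML model between GPU and system RAM.
# File size and layer count are rough assumptions, not measured values.
model_size_gb = 39.0   # approximate q4 70B GGML file size
n_layers = 80          # Llama 2 70B transformer layers
offloaded = 42         # layers placed on the GPU in the setup above

gpu_gb = model_size_gb * offloaded / n_layers
cpu_gb = model_size_gb - gpu_gb
print(f"GPU share: ~{gpu_gb:.1f} GB on the 24 GB 4090")
print(f"CPU share: ~{cpu_gb:.1f} GB held in system RAM alongside the OS")
```

On those assumptions roughly 18-19 GB of the model has to live in system RAM next to the OS and oobabooga itself, which is why 32 GB pushes the first pass onto the SSD; 64 GB should hold the CPU share with headroom.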
2023-08-22T03:45:51
https://www.reddit.com/r/LocalLLaMA/comments/15xtwdi/70b_llm_expected_performance_on_4090_i9/
you-seek-yoda
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15xtwdi
false
null
t3_15xtwdi
/r/LocalLLaMA/comments/15xtwdi/70b_llm_expected_performance_on_4090_i9/
false
false
self
1
null
I have two cards (a 3090 and a 3060). Is it possible to use them both to run 70b? What would be the cheapest setup for that?
1
Hello, I have a 24GB RTX 3090 and a spare 12GB RTX 3060 that is not getting used at the moment. My motherboard only has a single PCIe slot; additionally, I only have 16GB of RAM and I am not interested in running 70B off the CPU (yes, I know it is possible by just adding more RAM, but since I have two GPUs that could be put to use, I am not interested). My CPU is an AM4 CPU. Should I buy another motherboard? Is an eGPU adapter feasible for this goal? What do you suggest?
2023-08-22T05:13:12
https://www.reddit.com/r/LocalLLaMA/comments/15xvp6h/i_have_two_cards_a_3090_and_a_3060_is_it_possible/
hellninja55
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15xvp6h
false
null
t3_15xvp6h
/r/LocalLLaMA/comments/15xvp6h/i_have_two_cards_a_3090_and_a_3060_is_it_possible/
false
false
self
1
null
Can Anybody help me with my school project
1
So a teacher at school asked me to create a "Buddha AI speaker" for a school festival, and I accepted. The teacher said the deadline was around December, but today I learned the festival is in 10 days, not what the teacher said. I can spend at most $1,000 on this project, and I can't buy any "things that can personally be owned" (aka computer parts); what I can buy is things like an Arduino (which of course I won't) or Raspberry Pis. All I need to know is how to use any kind of LLM on very weak platforms (ideally using some free web service, since I can't spend school money on that). So here's my question: how can I make this work with the scarce resources I have? I'm asking here since I don't know where else to ask.
2023-08-22T06:01:42
https://www.reddit.com/r/LocalLLaMA/comments/15xwmeg/can_anybody_help_me_with_my_school_project/
Wannabedankestmemer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15xwmeg
false
null
t3_15xwmeg
/r/LocalLLaMA/comments/15xwmeg/can_anybody_help_me_with_my_school_project/
false
false
self
1
null
Tried to build setup exllama but encountering ninja related errors, can someone please help me?
1
[removed]
2023-08-22T06:37:07
https://www.reddit.com/r/LocalLLaMA/comments/15xxaw9/tried_to_build_setup_exllama_but_encountering/
bwandowando
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15xxaw9
false
null
t3_15xxaw9
/r/LocalLLaMA/comments/15xxaw9/tried_to_build_setup_exllama_but_encountering/
false
false
self
1
{'enabled': False, 'images': [{'id': 'G7rmZLwR-Tz_zuUIsX8o0Kzh4fmu3ozIXgkfplbpKvw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5x0WRevazPRIJNT2YESBr9q8XAgk3y0VkMrX88otQy0.jpg?width=108&crop=smart&auto=webp&s=f8f882dbf71789e36ddfa2c7fdaeb463f8e7259f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5x0WRevazPRIJNT2YESBr9q8XAgk3y0VkMrX88otQy0.jpg?width=216&crop=smart&auto=webp&s=5015be41883635908d200144cea5d8d4ce571d8f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5x0WRevazPRIJNT2YESBr9q8XAgk3y0VkMrX88otQy0.jpg?width=320&crop=smart&auto=webp&s=59dfa4526ef4238fea6af9d0560f2ccea75f7d0a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5x0WRevazPRIJNT2YESBr9q8XAgk3y0VkMrX88otQy0.jpg?width=640&crop=smart&auto=webp&s=980da897113bd2eba78d7fc7fc25179f6801ba31', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5x0WRevazPRIJNT2YESBr9q8XAgk3y0VkMrX88otQy0.jpg?width=960&crop=smart&auto=webp&s=2be43541c41e65fbbbd1014861e9e04951419d3e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5x0WRevazPRIJNT2YESBr9q8XAgk3y0VkMrX88otQy0.jpg?width=1080&crop=smart&auto=webp&s=d2052e83e6bb5b53c1b770e0e6ac8d5620121efa', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5x0WRevazPRIJNT2YESBr9q8XAgk3y0VkMrX88otQy0.jpg?auto=webp&s=bf6f203c09d2bddba17e914d384e2c95ff1c130b', 'width': 1200}, 'variants': {}}]}
Vicuna-33B Prompt Engineering
1
I have been using Vicuna-33B for personal use and benchmarking. I wanted to give specific prompts to the model for different use cases (answering questions from specific domains only and acting like a bot), but I haven't found a single prompt which works best. Is there anything I am doing wrong, or is there a specific way of prompting this LLM?
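For reference, a sketch of a Vicuna v1.1-style conversation template with a domain-restricting system message; the system wording and the banking-FAQ domain are illustrative assumptions, not a tested prompt:

```python
# Vicuna v1.1-style prompt: system message, then alternating USER/ASSISTANT turns.
# The system text below is an illustrative assumption, not a known-good prompt.
system = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant only answers questions about banking FAQs and politely "
    "refuses anything outside that domain."
)

def build_prompt(history, user_msg):
    """history is a list of (user, assistant) pairs already exchanged."""
    parts = [system]
    for u, a in history:
        parts.append(f"USER: {u}")
        parts.append(f"ASSISTANT: {a}</s>")
    parts.append(f"USER: {user_msg}")
    parts.append("ASSISTANT:")  # generation continues from here
    return "\n".join(parts)

print(build_prompt([], "What is your routing number?"))
```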
2023-08-22T06:56:26
https://www.reddit.com/r/LocalLLaMA/comments/15xxoc7/vicuna33b_prompt_engineering/
Died_Nirvana
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15xxoc7
false
null
t3_15xxoc7
/r/LocalLLaMA/comments/15xxoc7/vicuna33b_prompt_engineering/
false
false
self
1
null
flan T5-Large just gives the context as the response
1
So I wanted to try using the Flan-t5-large locally for a project of mine. The context has been extracted from a pdf. So for a query, this was the context: \[Document(page\_content='help you get connected to that world.\\nConnecting to a wireless network \\nYour computer may be equipped with one or more of the following wireless devices:\\n● WLAN device—Connects the computer to wireless local area networks (commonly referred to as Wi-Fi', metadata={}), Document(page\_content='allowing you to manually search for and connect to a network or \\nto create a new network connection.\\n3. Follow the on-screen instructions to complete the connection.\\nAfter the connection is made, select the network status icon at the far right of the', metadata={}), Document(page\_content='one of the available networks.\\nIf the WLAN is a security-enabled WLAN, you are prompted to enter a security code. Enter the code, and \\nthen select Next to complete the connection.\\nNOTE: If no WLANs are listed, you may be out of range of a wireless router', metadata={}), Document(page\_content='box, and then select Control Panel .\\n2. Select Network and Internet , and then select Network and Sharing Center .\\nConnecting to a wireless network 15', metadata={})\] and this was the response the model generated: <pad> Document(page\_content='help you get connected to that world.<unk> nConnecting to a wireless network <unk> nYour computer may be equipped with one or more of the wireless devices:<unk> n<unk> WLAN device—Connects the computer to wireless local area networks (commonly referred to as Wi-Fi', metadata=<unk> ), Document(page\_content='allowing you to manually search for and connect to a network or <unk> nto create a new network connection.<unk> n3. Follow the on-screen instructions to complete the connection.<unk> nAfter the connection is made, select the network status icon at the far right of the', metadata=<unk> ), Document(page\_content='one of the available networks.<unk> nIf the WLAN is a security-enabled WLAN, you are prompted to enter a security code. Enter the code, and <unk> nthen select Next to complete the connection.<unk> nNOTE: If no WLANs are listed, you may be out of range of a wireless router', metadata=<unk> ), Document(page\_content='box, and then select Control Panel.<unk> n2. Select Network and Internet, and then select Network and Sharing Center.<unk> nConnecting to a wireless network 15', metadata=<unk> )</s> I don't know what's wrong and the same code worked well for meta-llama-2-7b-chat-hf. But llama2 required around 25GB of RAM to run it, so I wanted to try smaller models.
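One thing worth checking is whether the prompt handed to Flan-T5 contains an explicit instruction and question rather than only the raw retrieved documents, since a seq2seq instruction model will otherwise tend to echo its input. A minimal sketch with transformers; the prompt wording and the shortened context are assumptions:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# google/flan-t5-large is a seq2seq model: the input should state the task and
# the question, not just the retrieved context, or it may simply repeat it.
tok = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")

context = "Your computer may be equipped with a WLAN device that connects it to Wi-Fi networks."
question = "How do I connect to a security-enabled wireless network?"
prompt = (
    "Answer the question using only the context.\n"
    f"Context: {context}\n"
    f"Question: {question}\n"
    "Answer:"
)

inputs = tok(prompt, return_tensors="pt", truncation=True, max_length=512)
out = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```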
2023-08-22T07:27:05
https://www.reddit.com/r/LocalLLaMA/comments/15xy9zw/flan_t5large_just_gives_the_context_as_the/
IamFuckinTomato
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15xy9zw
false
null
t3_15xy9zw
/r/LocalLLaMA/comments/15xy9zw/flan_t5large_just_gives_the_context_as_the/
false
false
self
1
null
llama.cpp: GGUF merged
1
https://preview.redd.it/6qs47a0l6mjb1.png?width=995&format=png&auto=webp&s=5e676084667a55ea1558d4890ee98b934bf1c3a6 Spec: [https://github.com/philpax/ggml/blob/gguf-spec/docs/gguf.md](https://github.com/philpax/ggml/blob/gguf-spec/docs/gguf.md) PR: [https://github.com/ggerganov/llama.cpp/pull/2398#issuecomment-1686986770](https://github.com/ggerganov/llama.cpp/pull/2398#issuecomment-1686986770)
2023-08-22T07:30:55
https://www.reddit.com/r/LocalLLaMA/comments/15xycn2/llamacpp_gguf_merged/
Jipok_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15xycn2
false
null
t3_15xycn2
/r/LocalLLaMA/comments/15xycn2/llamacpp_gguf_merged/
false
false
https://a.thumbs.redditm…fmlMa7ILvKG8.jpg
1
{'enabled': False, 'images': [{'id': 'BWwUYGUTfmMVUIsv67pBgjOyBKrKxrCB57VGPQBr2sA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Wil90Sh8A3buKxoKm3ZDncW5pGGSvP2Vz1FmvSIaRDY.jpg?width=108&crop=smart&auto=webp&s=3a5bb0407bd186fb40f876e7a0d12881fca1f4fb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Wil90Sh8A3buKxoKm3ZDncW5pGGSvP2Vz1FmvSIaRDY.jpg?width=216&crop=smart&auto=webp&s=949f6af0b77b0dea755ede239547754f5dbefc45', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Wil90Sh8A3buKxoKm3ZDncW5pGGSvP2Vz1FmvSIaRDY.jpg?width=320&crop=smart&auto=webp&s=a432f02e3eab7e3a0e8b17379887b1481abadb15', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Wil90Sh8A3buKxoKm3ZDncW5pGGSvP2Vz1FmvSIaRDY.jpg?width=640&crop=smart&auto=webp&s=fc20c048d27c930c161028202b9bedee53355173', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Wil90Sh8A3buKxoKm3ZDncW5pGGSvP2Vz1FmvSIaRDY.jpg?width=960&crop=smart&auto=webp&s=4bc24343595b8839961aa75bf926d8def629bad8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Wil90Sh8A3buKxoKm3ZDncW5pGGSvP2Vz1FmvSIaRDY.jpg?width=1080&crop=smart&auto=webp&s=108a5f33c176c62255596ff8b2b9e229a633bd85', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Wil90Sh8A3buKxoKm3ZDncW5pGGSvP2Vz1FmvSIaRDY.jpg?auto=webp&s=909fc960d568f95d4b744202cbca6cadf9cb13ce', 'width': 1200}, 'variants': {}}]}
New build - what can I run with it?
1
Building a new system with a Ryzen 9 7950X, RTX 4090, 128 GB RAM, and an Asus X670E ROG Strix motherboard. What models will I be able to run with this locally? I'm using a beefy 1600W PSU so I can add another 4090 after a few months. Would that make sense, and is it even possible? I haven't heard of dual-4090 setups, so I don't know whether it works.
2023-08-22T07:36:24
https://www.reddit.com/r/LocalLLaMA/comments/15xygg3/new_build_what_can_i_run_with_it/
comfortablynumb01
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15xygg3
false
null
t3_15xygg3
/r/LocalLLaMA/comments/15xygg3/new_build_what_can_i_run_with_it/
false
false
self
1
null
Help. Why am I shadow banned here?
1
[removed]
2023-08-22T07:40:53
https://www.reddit.com/r/LocalLLaMA/comments/15xyji5/help_why_am_i_shadow_banned_here/
sujantkv
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15xyji5
false
null
t3_15xyji5
/r/LocalLLaMA/comments/15xyji5/help_why_am_i_shadow_banned_here/
false
false
self
1
null
What are the best models on the market?
1
I'm mostly using models for RP/dialog. I've used base wizard-vicuna and nous-hermes, but to be honest they didn't follow the character as well as the llama-2-chat-70b model (currently my favorite for character chatting and general use). I've also used the WizardLM-70b model with 4k context for daily usage. I have a pretty decent setup and can get 10-15 t/s with 70B GPTQ models. I would like to hear which models you are using and for what.
2023-08-22T08:59:32
https://www.reddit.com/r/LocalLLaMA/comments/15y00wu/what_are_the_best_models_on_the_market/
sarimsak13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15y00wu
false
null
t3_15y00wu
/r/LocalLLaMA/comments/15y00wu/what_are_the_best_models_on_the_market/
false
false
self
1
null
How much VRAM for serving in parallel
1
I would like to serve 10 concurrent prompts from the same Llama 2 13B model: which inference server can efficiently serve the same model to concurrent clients? How much VRAM would be required?
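As a rough sizing aid, the KV cache dominates once several clients are active; the sketch below assumes fp16 KV, the usual 40-layer / 5120-hidden Llama 2 13B shape, and a full 4,096-token context per client:

```python
# Back-of-envelope KV-cache sizing for 10 concurrent Llama 2 13B sequences.
# Architecture numbers are assumptions about the 13B config; treat as an estimate.
n_layers, hidden = 40, 5120
bytes_per_value = 2              # fp16
ctx_len, n_clients = 4096, 10

kv_per_token = 2 * n_layers * hidden * bytes_per_value   # K and V per token
kv_total_gb = kv_per_token * ctx_len * n_clients / 1e9

weights_fp16_gb = 13e9 * 2 / 1e9                          # ~26 GB unquantized weights
print(f"KV cache: ~{kv_total_gb:.0f} GB, fp16 weights: ~{weights_fp16_gb:.0f} GB")
```

On those assumptions that is roughly 34 GB of KV cache on top of about 26 GB of fp16 weights, so quantized weights, shorter contexts, or a paged-KV server such as vLLM is what brings 10 concurrent clients into single-GPU range.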
2023-08-22T11:41:59
https://www.reddit.com/r/LocalLLaMA/comments/15y3bj0/how_much_vram_for_serving_in_parralel/
Opening-Ad1642
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15y3bj0
false
null
t3_15y3bj0
/r/LocalLLaMA/comments/15y3bj0/how_much_vram_for_serving_in_parralel/
false
false
self
1
null
Best Ooba setting for MythoMax RP?
1
Hey everyone, I was hoping we might share some settings for running MythoMax on Ooba. Here are my current settings; I'd like to know if anyone has critiques or suggestions.

1. Model setup: I've been using the GPTQ 4-bit and 8-bit versions, with Exllama on 4-bit and AutoGPTQ on 8-bit. I can't really tell the difference, and I can't get the 8-bit version to load with Exllama. I have also been trying the 6_K GGML version with llama.cpp. I don't have any solid numbers, but this one "feels" like it gives the most interesting results.
2. Alpha_Value vs Compress_Pos_emb: I typically use compress_pos_emb set at 2 for 4k, but I've been trying to figure out the best Alpha_value (trying 3) to move this up to 8k. Anyone have good settings for this?
3. Generation parameters: I am currently using the following: Max_new_tokens: 400 (personal pref), Temp: .95, Top P: .95, Top_K: 30, Typical P: 1, ETA_Cutoff: 0, TFS: .97, Top A: .75, Rep Pen: 1.09, Rep Pen Range: 2048, Enc Rep Pen: 1, No_rep_ngram_size: 1, Min Length: 200.
4. I have also read that maybe you should use the Mirostat settings instead. I have read that good settings for those would be 2, 5, and .2, but I am not sure what they do or whether they are better than normal samplers.
5. Instruction template: I use the Alpaca preset. I change the context box depending on the scenario, but I'm not sure if there are better suggestions out there?

I have enough VRAM and RAM that I can run the 13B models in memory. Any other suggestions? Best settings? Is running Koboldcpp and SillyTavern a better setup? If so, do I need the extras for SillyTavern? I'd really like to just run Ooba and not have to set up 3 things each time. Any feedback would be appreciated.
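For what it's worth, most of those samplers map onto Hugging Face generation settings; a hedged sketch of the subset that exists in transformers (TFS and Top A are webui/llama.cpp-specific and omitted here), using the values from point 3 as-is rather than as a recommendation:

```python
from transformers import GenerationConfig

# Sampler values copied from the post above; only parameters that exist in
# transformers' GenerationConfig are included.
gen_cfg = GenerationConfig(
    do_sample=True,
    max_new_tokens=400,
    temperature=0.95,
    top_p=0.95,
    top_k=30,
    typical_p=1.0,
    repetition_penalty=1.09,
    min_length=200,
    # no_repeat_ngram_size=1,  # value from the post; 1 would forbid repeating
    #                          # any token at all, which is probably unintended
)
# Later: model.generate(**inputs, generation_config=gen_cfg)
```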
2023-08-22T12:41:12
https://www.reddit.com/r/LocalLLaMA/comments/15y4pgq/best_ooba_setting_for_mythomax_rp/
AboveAFC
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15y4pgq
false
null
t3_15y4pgq
/r/LocalLLaMA/comments/15y4pgq/best_ooba_setting_for_mythomax_rp/
false
false
self
1
null
LLM that can generate mnemonics or stories to encode information?
1
I've been playing around with different language models, mostly ChatGPT and the LLaMA variants, but none of them are able to generate mnemonics or encode information effectively for memorisation purposes. Is there some LLM model or prompt I should be using to get usable mnemonics out of these, and is it even possible? GPT-4 gave the best results so far, but it sometimes leaves out elements or uses completely different letters when creating acronyms or stories. I might also consider fine-tuning a model if that would work.
2023-08-22T12:51:45
https://www.reddit.com/r/LocalLLaMA/comments/15y4yel/llm_that_can_generate_mnemonics_or_stories_to/
ElementaryZX
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15y4yel
false
null
t3_15y4yel
/r/LocalLLaMA/comments/15y4yel/llm_that_can_generate_mnemonics_or_stories_to/
false
false
self
1
null
PEFT 0.5 supports fine-tuning GPTQ models
1
2023-08-22T13:25:32
https://github.com/huggingface/peft/releases/tag/v0.5.0
oobabooga4
github.com
1970-01-01T00:00:00
0
{}
15y5spo
false
null
t3_15y5spo
/r/LocalLLaMA/comments/15y5spo/peft_05_supports_finetuning_gptq_models/
false
false
https://b.thumbs.redditm…A9w8VV7d9abY.jpg
1
{'enabled': False, 'images': [{'id': 'PV8BKCWpdcu5LkMC2fz3n3wqEF8Vh4InmaPmrzT2S6Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kcjeH-hezz9cnZfaxZZ0wfMM-FNqfU5EBqhtZzkcrAo.jpg?width=108&crop=smart&auto=webp&s=85f09658562824f303f1ea32912e49d4d4e645f6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/kcjeH-hezz9cnZfaxZZ0wfMM-FNqfU5EBqhtZzkcrAo.jpg?width=216&crop=smart&auto=webp&s=03214c7109551da52a54728b7ce79c4925c2b8b4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/kcjeH-hezz9cnZfaxZZ0wfMM-FNqfU5EBqhtZzkcrAo.jpg?width=320&crop=smart&auto=webp&s=79af252c89dffa59d1a84fa93b5f68210dc86a1d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/kcjeH-hezz9cnZfaxZZ0wfMM-FNqfU5EBqhtZzkcrAo.jpg?width=640&crop=smart&auto=webp&s=f5cf024894229443dd02f26ab840aa4cb59020b2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/kcjeH-hezz9cnZfaxZZ0wfMM-FNqfU5EBqhtZzkcrAo.jpg?width=960&crop=smart&auto=webp&s=a06f92d4fc55ac35b7d194f585faf18970068463', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/kcjeH-hezz9cnZfaxZZ0wfMM-FNqfU5EBqhtZzkcrAo.jpg?width=1080&crop=smart&auto=webp&s=ab4d9c67dd99a7537630eab74b7732cbfe991d2e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/kcjeH-hezz9cnZfaxZZ0wfMM-FNqfU5EBqhtZzkcrAo.jpg?auto=webp&s=3ab862c907313ce8b528cdb81f6a7cef18223f11', 'width': 1200}, 'variants': {}}]}
SQLCoder: New 15B OSS LLM claims better performance than gpt-3.5-turbo on sql-related tasks
1
Defog has open sourced **SQLCoder**, a new "open source" LLM that supposedly outperforms gpt-3.5-turbo on SQL-related tasks. The model is a version of StarCoder fine-tuned on a 10k human-curated dataset of text-to-SQL questions based on 20 schemas. They claim it outperforms gpt-3.5-turbo on unseen data and even outperforms gpt-4 when fine-tuned on the target SQL database. Blog post: https://defog.ai/blog/open-sourcing-sqlcoder HF repo: https://huggingface.co/defog/sqlcoder
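A minimal sketch of trying the released checkpoint with transformers; the repo name comes from the post, while the prompt layout here is an assumption rather than Defog's documented template:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# defog/sqlcoder is the HF repo linked above; at ~15B parameters it wants a
# large GPU or 8-bit/4-bit loading. The prompt format below is an assumption.
tok = AutoTokenizer.from_pretrained("defog/sqlcoder")
model = AutoModelForCausalLM.from_pretrained(
    "defog/sqlcoder", torch_dtype=torch.float16, device_map="auto"
)

prompt = (
    "### Task\nGenerate a SQL query to answer the question.\n"
    "### Schema\nCREATE TABLE orders (id INT, customer TEXT, total NUMERIC);\n"
    "### Question\nWhat is the total revenue per customer?\n"
    "### SQL\n"
)
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```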
2023-08-22T14:00:11
https://www.reddit.com/r/LocalLLaMA/comments/15y6pfm/sqlcoder_new_15b_oss_llm_claims_better/
Amgadoz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15y6pfm
false
null
t3_15y6pfm
/r/LocalLLaMA/comments/15y6pfm/sqlcoder_new_15b_oss_llm_claims_better/
false
false
self
1
{'enabled': False, 'images': [{'id': 'dkInp125UNgldwrx9I-kMIDCBMCiKkypvnds9RF6r4Y', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/cIXDRUEml54XPf7DbbdQw4pOlkewsctTDZE-2b3csSM.jpg?width=108&crop=smart&auto=webp&s=75d88c706fcf3152ea42a301650ee8afb86ab9f9', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/cIXDRUEml54XPf7DbbdQw4pOlkewsctTDZE-2b3csSM.jpg?width=216&crop=smart&auto=webp&s=ae026c1b928515724a74782ef8e05612c8fe1cb3', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/cIXDRUEml54XPf7DbbdQw4pOlkewsctTDZE-2b3csSM.jpg?width=320&crop=smart&auto=webp&s=b32f985a55ae2c170c0ab7e6fa9324d04325587f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/cIXDRUEml54XPf7DbbdQw4pOlkewsctTDZE-2b3csSM.jpg?width=640&crop=smart&auto=webp&s=57d80f8ecd069114dde27efb5bcc31cc8941fc18', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/cIXDRUEml54XPf7DbbdQw4pOlkewsctTDZE-2b3csSM.jpg?width=960&crop=smart&auto=webp&s=e44beca1e1cb4ef06159dd693fcb968de2892ecd', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/cIXDRUEml54XPf7DbbdQw4pOlkewsctTDZE-2b3csSM.jpg?width=1080&crop=smart&auto=webp&s=bd5038f5b191dcbca7ed45f20870154f128267e8', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/cIXDRUEml54XPf7DbbdQw4pOlkewsctTDZE-2b3csSM.jpg?auto=webp&s=686402aa9a24851203505a3392a816636d7b1e98', 'width': 1200}, 'variants': {}}]}
Need Help In Creating an OpenAI tool with the custom dataset
1
I am planning to create a custom AI bot similar to ChatGPT, using my own custom dataset. The challenge I'm facing is that I lack the necessary knowledge in this area, and I'm struggling to find appropriate tutorials or resources to assist me. If anyone could provide me with guidance on the steps I should follow, recommended tools, or packages, I would greatly appreciate it. Additionally, I have a dataset that contains sensitive and confidential information. I am concerned that if I use OpenAI for this process, will they also have access to my data? Thank you in advance 📷
2023-08-22T14:16:07
https://www.reddit.com/r/LocalLLaMA/comments/15y74qy/need_help_in_creating_an_openai_tool_with_the/
adgamerx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15y74qy
false
null
t3_15y74qy
/r/LocalLLaMA/comments/15y74qy/need_help_in_creating_an_openai_tool_with_the/
false
false
self
1
null
Do qloras need to match the size of the model?
1
So I started trying some qloras lately and I had a few questions, if you don't mind: Do they need to match the size of the model they were trained on? I really only use 65/70B and there were very few of those, so I tried one that wasn't specifically labeled 70B and it seemed to do something, although it wasn't amazing. I believe you can stack them in oobabooga, correct? Is there some sort of limit or priority order? I'm assuming you can't mix a llama 1 qlora with a llama 2 model? (Haven't tried yet.) Finally, why isn't qlora more of a thing? If I recall, both Guanaco and Airoboros use it (and are merged later), and they are some of the most popular 70B models.
2023-08-22T14:26:25
https://www.reddit.com/r/LocalLLaMA/comments/15y7ers/do_qloras_need_to_match_the_size_of_the_model/
TheSilentFire
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15y7ers
false
null
t3_15y7ers
/r/LocalLLaMA/comments/15y7ers/do_qloras_need_to_match_the_size_of_the_model/
false
false
self
1
null
EverythingLM-13b-16k V2 released!
65
[https://huggingface.co/totally-not-an-llm/EverythingLM-13b-V2-16k](https://huggingface.co/totally-not-an-llm/EverythingLM-13b-V2-16k) GGML & GPTQ quants are linked there. TLDR: * Trained on GPT-4 generated dataset. * Uncensored (mostly. read more on huggingface) * Uses CoT for math & problem solving. * Creative, detailed, verbose replies. Let me know if you have any questions!
2023-08-22T15:10:57
https://www.reddit.com/r/LocalLLaMA/comments/15y8mwy/everythinglm13b16k_v2_released/
pokeuser61
self.LocalLLaMA
2023-08-22T18:36:44
0
{}
15y8mwy
false
null
t3_15y8mwy
/r/LocalLLaMA/comments/15y8mwy/everythinglm13b16k_v2_released/
false
false
self
65
{'enabled': False, 'images': [{'id': 'ZCR9EpqiAIZAMkg5fyjCOSk_T64nEtGSSpFBitNzp8A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mJk-R6UJ_klchRC5mHzAySO8qdFKVRfL0N_ATra46nU.jpg?width=108&crop=smart&auto=webp&s=b3eadb9c4d88667b4512d94833121039888d4a0c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mJk-R6UJ_klchRC5mHzAySO8qdFKVRfL0N_ATra46nU.jpg?width=216&crop=smart&auto=webp&s=d9215576b03a2b23b3b3b9fe3a88d598d69c8956', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mJk-R6UJ_klchRC5mHzAySO8qdFKVRfL0N_ATra46nU.jpg?width=320&crop=smart&auto=webp&s=de1ecfb3d16741ff6a808794b7ee565a89d28356', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mJk-R6UJ_klchRC5mHzAySO8qdFKVRfL0N_ATra46nU.jpg?width=640&crop=smart&auto=webp&s=0472ac35763ee08597f1182c988b2b42d0faead5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mJk-R6UJ_klchRC5mHzAySO8qdFKVRfL0N_ATra46nU.jpg?width=960&crop=smart&auto=webp&s=826f3e864b5c1b933ba4a1d85db820849d8a84d8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mJk-R6UJ_klchRC5mHzAySO8qdFKVRfL0N_ATra46nU.jpg?width=1080&crop=smart&auto=webp&s=00da3c14237052b1761fca2e76ad40adf16022a7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mJk-R6UJ_klchRC5mHzAySO8qdFKVRfL0N_ATra46nU.jpg?auto=webp&s=ba1b5d77638d5c4ca165d529f41261e63d908428', 'width': 1200}, 'variants': {}}]}
Fine Tuning/GGML Quantization on Apple Silicon Guide
1
First, I want to point out that this community has been the #1 resource for me on this LLM journey. There is so much misinformation out there and the libraries are so new that it has been a bit of a struggle finding the right answers to even simple questions. With that being said, here is my guide for fine-tuning on Apple Silicon using the SuperAdapters library and GGML/quantization for use with llama.cpp:

### SuperAdapters Guide

* Instructions for initial setup for Mac GPU acceleration:
  * `git clone https://github.com/cckuailong/SuperAdapters.git`
  * `brew install xz`
  * `xcode-select --install`
  * `brew install llvm libomp`
  * `pip install --pre torch torchvision torchaudio --extra-index-url [PyTorch Nightly CPU URL]`
  * `pip install -r requirements.txt`
* Regarding dependency mismatches with the required libraries:
  1. First, remove `wandb` from `requirements.txt` and install it separately.
  2. There will still be a mismatch for the `protobuf` version number.
* Place validation/test data in `data/train`.
* To fine-tune, set the following environment variable to remove the upper limit on memory for MPS:
  * `export PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0`
* Make a directory at `output/<model_type>/<specific_model>`.
* Then use this command to start the process:
  * `python3 finetune.py --model_type <model_type> --model_task seq2seq --model_path "<path_to_SuperAdapters>/LLMs/<model_type>/<specific_model>" --adapter "lora" --data "<path_to_SuperAdapters>/data/train/" --output_dir "<path_to_SuperAdapters>/output/<model_type>/<specific_model>" --epochs <integer>`
* If a run is interrupted (e.g., the system crashing due to lack of memory), use the `--resume_from_checkpoint` flag for the fine-tuning script and specify the last checkpoint in your output folder.
* Using a WandB API key to export the stats of the machine running the process is optional.

### GGML and Quantization

To avoid library mismatches with SuperAdapters, use a separate environment for GGML and quantization.

* Steps:
  * `git clone https://github.com/ggerganov/llama.cpp.git`
  * `mkdir build-metal`
  * `cd build-metal`
  * `cmake -DLLAMA_METAL=ON ..`
  * `cmake --build . --config Release`
  * `cd ..`
  * `make`
* Use the script `merge.py` to merge the LoRA adapters into the LLM. (Note: this script is not part of llama.cpp but can be found [here](https://www.reddit.com/r/LocalLLaMA/comments/15fa9vg/ggml_guide/).)
* Unless moved, the paths for the weights and the full model will be the same as specified during the fine-tuning step.
* Convert the weights from 32-bit floats to 16-bit floats with `python convert.py [model_path]`. The output will be a model named ggml-model-f16.bin.
* For 8-bit quantization, execute the following:
  * `./quantize [path_to_tuned_model] [output_path] q8_0`

#### Notes

* To fine-tune an already fine-tuned model, copy the base directory of the model type and replace the `pytorch_model.bin` generated after merging the weights.

I will be more than happy to answer questions or correct mistakes in this guide, but please take the time to make well-thought-out responses when posting, i.e. don't try to run this, hit an error, and come rage on here.
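As a point of comparison, the adapter-merge step before conversion can also be done with the peft API; a hedged sketch where all paths are placeholders and the adapter is assumed to be saved in standard peft format:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the fp16 base model, apply the trained LoRA adapter, then bake the
# adapter weights into the base model so llama.cpp's convert.py can read it.
# All paths are placeholders for wherever the base model and adapter live.
base = AutoModelForCausalLM.from_pretrained("path/to/base-model", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "path/to/lora-output")
merged = model.merge_and_unload()          # folds the LoRA deltas into the weights

merged.save_pretrained("path/to/merged-model")
AutoTokenizer.from_pretrained("path/to/base-model").save_pretrained("path/to/merged-model")
```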
2023-08-22T15:45:27
https://www.reddit.com/r/LocalLLaMA/comments/15y9m64/fine_tuningggml_quantiziation_on_apple_silicon/
Entire_Cheetah_7878
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15y9m64
false
null
t3_15y9m64
/r/LocalLLaMA/comments/15y9m64/fine_tuningggml_quantiziation_on_apple_silicon/
false
false
self
1
{'enabled': False, 'images': [{'id': 'JAJSwaId-aXgnYDVRklDEulaRZarEAbEdXQtkCSfIYs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pbYEqJ7wQRskJ50COjTm4NuU-8GQeTuB64UUWQw7MFE.jpg?width=108&crop=smart&auto=webp&s=aa0404d43ae73d7349135b1d70ae4e5106f31588', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pbYEqJ7wQRskJ50COjTm4NuU-8GQeTuB64UUWQw7MFE.jpg?width=216&crop=smart&auto=webp&s=ccd22cba039aaab453b5225cdedafc9b6d01a556', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pbYEqJ7wQRskJ50COjTm4NuU-8GQeTuB64UUWQw7MFE.jpg?width=320&crop=smart&auto=webp&s=deb7bbb0e62e7ca41130469aab10537c40933164', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pbYEqJ7wQRskJ50COjTm4NuU-8GQeTuB64UUWQw7MFE.jpg?width=640&crop=smart&auto=webp&s=8d68d9db6bce6cc60e2e779ba27d0b284c6765a4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pbYEqJ7wQRskJ50COjTm4NuU-8GQeTuB64UUWQw7MFE.jpg?width=960&crop=smart&auto=webp&s=76219da0219269888b442c5769ebfb51efb8b433', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pbYEqJ7wQRskJ50COjTm4NuU-8GQeTuB64UUWQw7MFE.jpg?width=1080&crop=smart&auto=webp&s=ce738adb9897563732bc54cf484f3ce685276668', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pbYEqJ7wQRskJ50COjTm4NuU-8GQeTuB64UUWQw7MFE.jpg?auto=webp&s=4eb044ae995c4df2ba564a43be12db06acb9d05e', 'width': 1200}, 'variants': {}}]}
Embedding a TypeScript codebase
1
[removed]
2023-08-22T16:07:51
https://www.reddit.com/r/LocalLLaMA/comments/15ya9ak/embedding_a_typescript_codebase/
redstorm67
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ya9ak
false
null
t3_15ya9ak
/r/LocalLLaMA/comments/15ya9ak/embedding_a_typescript_codebase/
false
false
self
1
{'enabled': False, 'images': [{'id': 'An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=108&crop=smart&auto=webp&s=284ee86cd9228390268ace75b44e497c1fec562f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=216&crop=smart&auto=webp&s=96628b1c155401ce2d04a853b6524fa0c95cd632', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=320&crop=smart&auto=webp&s=f5f435bb4d31f0f695560cb0fb6f456702452062', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=640&crop=smart&auto=webp&s=b8b6a03fcde27061acee8ab4cb6ecc598a7ac6b9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=960&crop=smart&auto=webp&s=bbda73bd4f11be7b71efb3892b4107414d815613', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=1080&crop=smart&auto=webp&s=0158100ff6f9041cc8dcb861b66a3db041df5095', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?auto=webp&s=daff0272548bd7ffe5bc2b1eff6cd5c752144ed4', 'width': 1200}, 'variants': {}}]}
Llama 2 as a local copilot!!
1
At Pieces for Developers, we've developed a co-pilot leveraging Llama 2 as its foundation. Our product is a snippet management tool with an AI-driven, offline-first architecture. Our copilot comes with the following features: * We utilize the concept of Retrieval Augmented Generation (RAG) to re-ground the AI engine throughout every interaction in the desktop application and plugins * Multimodal inputs with images and text files * Fully functional offline and on-device. Choose between dynamic LLM runtimes both locally and in the cloud. * You can configure conversation contexts with personal codebases, individual snippets, and even website URLs (working on video next) * The copilot can surface Related People results to connect you with teammates that have the necessary skill set related to your context I am not sharing this for promotional purposes, but rather to gather opinions from the community about the copilot. For example, we struggled with configuring our auto-enrichment engine to gather related links to enrich code snippets as you save them, while working completely offline. Another challenge has been dynamic thread management and IO bindings for the GPUs. What obstacles have you encountered when deploying Llama? We're curious to know what features the community thinks we should be focusing on, as well as feedback on how we’ve deployed Llama so far in Pieces. Your insights are greatly appreciated! Read more about Pieces Copilot here: [https://code.pieces.app/blog/introducing-pieces-copilot](https://code.pieces.app/blog/introducing-pieces-copilot) Try out the product here: [pieces.app](http://pieces.app/)
2023-08-22T16:36:17
https://www.reddit.com/r/LocalLLaMA/comments/15yb23m/llama_2_as_a_local_copilot/
tarun-at-pieces
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15yb23m
false
null
t3_15yb23m
/r/LocalLLaMA/comments/15yb23m/llama_2_as_a_local_copilot/
false
false
self
1
{'enabled': False, 'images': [{'id': 'YFydGtjEhgc732EOzvtY9iIAZBtON8_Kv1StPY20n8k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/XD6_PhubnBf5Nqz5ImDZ0KPP-BwhR5hl8ZP5f_aC8jg.jpg?width=108&crop=smart&auto=webp&s=9a5002926cb000ccd0d13f2150f330fa33e43169', 'width': 108}, {'height': 117, 'url': 'https://external-preview.redd.it/XD6_PhubnBf5Nqz5ImDZ0KPP-BwhR5hl8ZP5f_aC8jg.jpg?width=216&crop=smart&auto=webp&s=80bc333fe4b08ad6578328c70b3b6899a59bdc10', 'width': 216}, {'height': 174, 'url': 'https://external-preview.redd.it/XD6_PhubnBf5Nqz5ImDZ0KPP-BwhR5hl8ZP5f_aC8jg.jpg?width=320&crop=smart&auto=webp&s=c8f020ac1007b22d671ae95b95ef2720bddc35c6', 'width': 320}, {'height': 349, 'url': 'https://external-preview.redd.it/XD6_PhubnBf5Nqz5ImDZ0KPP-BwhR5hl8ZP5f_aC8jg.jpg?width=640&crop=smart&auto=webp&s=ffd0b1329e396a9809256733e51285d38f18654b', 'width': 640}, {'height': 523, 'url': 'https://external-preview.redd.it/XD6_PhubnBf5Nqz5ImDZ0KPP-BwhR5hl8ZP5f_aC8jg.jpg?width=960&crop=smart&auto=webp&s=57ec55aab0b29167bee170b83a8b8cbf5953b824', 'width': 960}, {'height': 589, 'url': 'https://external-preview.redd.it/XD6_PhubnBf5Nqz5ImDZ0KPP-BwhR5hl8ZP5f_aC8jg.jpg?width=1080&crop=smart&auto=webp&s=2de3f9c922f6613d73c4a7a6061849cc1b0afed8', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/XD6_PhubnBf5Nqz5ImDZ0KPP-BwhR5hl8ZP5f_aC8jg.jpg?auto=webp&s=ddd865d19c6a3bfbfad1eb716cccfadd0cd131a5', 'width': 1320}, 'variants': {}}]}
How to continue LoRA training with text-generation-webui?
1
When I continue training a Lora by selecting the old config, and verifying the name and input are the same, it seems to start from epoch 0.0 again. It still has lower loss than on the first run, so I think it actually continues the training, but it loses the epoch and learning rate progress. Is there a way to continue the training from where it was interrupted?
2023-08-22T16:51:36
https://www.reddit.com/r/LocalLLaMA/comments/15ybher/how_to_continue_lora_training_with/
_allo_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ybher
false
null
t3_15ybher
/r/LocalLLaMA/comments/15ybher/how_to_continue_lora_training_with/
false
false
self
1
null
How to resume LoRA training in text-generation-webui?
1
When I continue training a Lora by selecting the old config, checking that the name and input are the same, it seems to start at epoch 0.0 again. It still has a lower loss than the first run, so I think it is actually continuing training, but it loses the epoch and learning rate progress. Is there a way to resume training from where it was interrupted?
2023-08-22T16:53:23
https://www.reddit.com/r/LocalLLaMA/comments/15ybj77/how_to_resume_lora_training_in_textgenerationwebui/
_allo_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ybj77
false
null
t3_15ybj77
/r/LocalLLaMA/comments/15ybj77/how_to_resume_lora_training_in_textgenerationwebui/
false
false
self
1
null
Meta introduces SeamlessM4T, a foundational multimodal model that seamlessly translates and transcribes across speech and text for up to 100 languages
1
Blog Post: [https://ai.meta.com/blog/seamless-m4t/](https://ai.meta.com/blog/seamless-m4t/) Paper: [https://ai.meta.com/research/publications/seamless-m4t/](https://ai.meta.com/research/publications/seamless-m4t/) Abstract: >What does it take to create the Babel Fish, a tool that can help individuals translate speech between any two languages? While recent breakthroughs in text-based models have pushed machine translation coverage beyond 200 languages, unified speech-to-speech translation models have yet to achieve similar strides. More specifically, conventional speech-to-speech translation systems rely on cascaded systems composed of multiple subsystems performing translation progressively, putting scalable and high-performing unified speech translation systems out of reach. To address these gaps, we introduce SeamlessM4T—Massively Multilingual & Multimodal Machine Translation—a single model that supports speech-to-speech translation, speech-to-text translation, text-to-speech translation, text-to-text translation, and automatic speech recognition for up to 100 languages. To build this, we used 1 million hours of open speech audio data to learn self-supervised speech representations with w2v-BERT 2.0. Subsequently, we created a multimodal corpus of automatically aligned speech translations, dubbed SeamlessAlign. Filtered and combined with human-labeled and pseudo-labeled data (totaling 406,000 hours), we developed the first multilingual system capable of translating from and into English for both speech and text. On Fleurs, SeamlessM4T sets a new standard for translations into multiple target languages, achieving an improvement of 20% BLEU over the previous state-of-the-art in direct speech-to-text translation. Compared to strong cascaded models, SeamlessM4T improves the quality of into-English translation by 1.3 BLEU points in speech-to-text and by 2.6 ASR-BLEU points in speech-to-speech. On CVSS and compared to a 2-stage cascaded model for speech-to-speech translation, SeamlessM4T-Large's performance is stronger by 58%. Preliminary human evaluations of speech-to-text translation outputs evinced similarly impressive results; for translations from English, XSTS scores for 24 evaluated languages are consistently above 4 (out of 5). For into English directions, we see significant improvement over Whisper-Large-v2's baseline for 7 out of 24 languages. To further evaluate our system, we developed Blaser 2.0, which enables evaluation across speech and text with similar accuracy compared to its predecessor when it comes to quality estimation. Tested for robustness, our system performs better against background noises and speaker variations in speech-to-text tasks (average improvements of 38% and 49%, respectively) compared to the current state-of-the-art model. Critically, we evaluated SeamlessM4T on gender bias and added toxicity to assess translation safety. Compared to the state-of-the-art, we report up to 63% of reduction in added toxicity in our translation outputs. Finally, all contributions in this work—including models, inference code, finetuning recipes backed by our improved modeling toolkit Fairseq2, and metadata to recreate the unfiltered 470,000 hours of SeamlessAlign—are open-sourced and accessible at [https://github.com/facebookresearch/seamless_communication](https://github.com/facebookresearch/seamless_communication)
2023-08-22T16:54:17
https://www.reddit.com/r/LocalLLaMA/comments/15ybk2r/meta_introduces_seamlessm4t_a_foundational/
llamaShill
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ybk2r
false
null
t3_15ybk2r
/r/LocalLLaMA/comments/15ybk2r/meta_introduces_seamlessm4t_a_foundational/
false
false
self
1
{'enabled': False, 'images': [{'id': '8nuRUl_IDeIfdYQR02N-2loNdjxsPR7GSNK-UbFAigI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Ai3mJZX5ehGXUdNu0tcxR2uV1JyImy939qL5JGEq4pQ.jpg?width=108&crop=smart&auto=webp&s=8e9211fae0323853bf24db61c5f131290f4efe86', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Ai3mJZX5ehGXUdNu0tcxR2uV1JyImy939qL5JGEq4pQ.jpg?width=216&crop=smart&auto=webp&s=c107a038806abb51b55c9af2e973ad667315963a', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Ai3mJZX5ehGXUdNu0tcxR2uV1JyImy939qL5JGEq4pQ.jpg?width=320&crop=smart&auto=webp&s=d3b921f9b6e7e85ab5a8da3b2db6a434a545f9c1', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Ai3mJZX5ehGXUdNu0tcxR2uV1JyImy939qL5JGEq4pQ.jpg?width=640&crop=smart&auto=webp&s=2d48e0ebfa336da6e397594c263c921b528730d9', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Ai3mJZX5ehGXUdNu0tcxR2uV1JyImy939qL5JGEq4pQ.jpg?width=960&crop=smart&auto=webp&s=07fa6a39ea3e3cc03c9eb823bbf5d98e3f2821a0', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Ai3mJZX5ehGXUdNu0tcxR2uV1JyImy939qL5JGEq4pQ.jpg?width=1080&crop=smart&auto=webp&s=3b734840c8f539769c31cc5e8ae53b38b356fe6a', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/Ai3mJZX5ehGXUdNu0tcxR2uV1JyImy939qL5JGEq4pQ.jpg?auto=webp&s=549127b347a72fa2703f34a263546f09382102b4', 'width': 1920}, 'variants': {}}]}
Trying to understand Mirostat and Contrastive Search, and their parameters
1
These two presets (I guess samplers? sampling methods?) are the only ones that give me, especially with Llama-2 derivatives, non-determinism. Every time I generate a new response, I get a very different reply, which is great for varied outputs. The quality is supposedly, and in my experience effectively, better than not using them. However, I struggle to understand, or really to find information about, what they are supposed to be doing under the hood. I understand it's some kind of feedback it runs to pick the supposedly better alternative, but I have no idea. Also, neither Oobabooga nor SillyTavern really has any meaningful or easy-to-understand documentation about how to use the parameters: penalty alpha for Contrastive Search, tau and eta for Mirostat. Really, I have trouble wrapping my head around those. I have no idea how to use them, or whether changing them has any effect on the output, and I can't find anything online that's intuitive. Unlike temperature, top K and the like, for which there seems to be a lot of material around, digested into easy examples and intuitive explanations, these ones are really hard to grasp. Any ideas?
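For contrastive search, the two knobs map directly onto two transformers generate() arguments; a minimal sketch using a small example model (gpt2 here only to keep it self-contained):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Contrastive search in transformers is enabled by passing penalty_alpha > 0
# together with a small top_k: candidates are the top_k most probable tokens,
# and penalty_alpha penalises ones whose hidden state is too similar to the
# existing context (the "degeneration penalty"), which is what curbs repetition.
tok = AutoTokenizer.from_pretrained("gpt2")          # example model only
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The local model scene this week:", return_tensors="pt")
out = model.generate(**inputs, penalty_alpha=0.6, top_k=4, max_new_tokens=60)
print(tok.decode(out[0], skip_special_tokens=True))
```

Mirostat, by contrast, adaptively steers sampling so the output's surprise stays near the tau target, with eta controlling how quickly the controller corrects; llama.cpp-based backends expose those two values directly.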
2023-08-22T17:29:36
https://www.reddit.com/r/LocalLLaMA/comments/15yck7a/trying_to_understand_mirostat_and_contrastive/
CulturedNiichan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15yck7a
false
null
t3_15yck7a
/r/LocalLLaMA/comments/15yck7a/trying_to_understand_mirostat_and_contrastive/
false
false
self
1
null
Decrease cold-start speed on inference (llama.cpp, exllama)
1
I have an application that requires < 200ms total inference time. I only need \~ 2 tokens of output and have a large high-quality dataset to fine-tune my model. I can easily produce the 20+ tokens/sec of output I need when predicting longer outputs, but when I try and predict shorter outputs as above I notice a substantial 500ms cold start (which I assume is memory mgmt into GPU, prompt-processing or similar). I've tried a bunch of methods to speed up inference (from [https://betterprogramming.pub/speed-up-llm-inference-83653aa24c47](https://betterprogramming.pub/speed-up-llm-inference-83653aa24c47)) but none seem to help on getting that first token out ASAP. Any suggestions for what to try? Would be super appreciated!
2023-08-22T18:26:55
https://www.reddit.com/r/LocalLLaMA/comments/15ye5jv/decrease_coldstart_speed_on_inference_llamacpp/
pdizzle10112
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ye5jv
false
null
t3_15ye5jv
/r/LocalLLaMA/comments/15ye5jv/decrease_coldstart_speed_on_inference_llamacpp/
false
false
self
1
{'enabled': False, 'images': [{'id': 'LMQlzCqajOvvfezhKav_MayGQKR8_0lLn4UR3zYbsIA', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/NGYMy_oJiZ7E7E3u3SviuRr8aFySpww9ExUfXjsoEQU.jpg?width=108&crop=smart&auto=webp&s=d51396a58d9960f0e9b8d8280ddeff4a1829b73c', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/NGYMy_oJiZ7E7E3u3SviuRr8aFySpww9ExUfXjsoEQU.jpg?width=216&crop=smart&auto=webp&s=390dd0ff54f2e6e45d5dd3e07c1e95856a6a3b28', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/NGYMy_oJiZ7E7E3u3SviuRr8aFySpww9ExUfXjsoEQU.jpg?width=320&crop=smart&auto=webp&s=14f8fea6055b931ae07d2c58601290e14bb18f1e', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/NGYMy_oJiZ7E7E3u3SviuRr8aFySpww9ExUfXjsoEQU.jpg?width=640&crop=smart&auto=webp&s=647fa5c68c67fe8736a2925e67f2ea05347f9d7c', 'width': 640}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/NGYMy_oJiZ7E7E3u3SviuRr8aFySpww9ExUfXjsoEQU.jpg?auto=webp&s=425902b0d82ac0e44cf83f10a84dffeedf768e80', 'width': 768}, 'variants': {}}]}
Why not standard AI acceleration machines and market? Bitcoin analogy.
1
[removed]
2023-08-22T18:40:24
https://www.reddit.com/r/LocalLLaMA/comments/15yejkp/why_not_standard_ai_acceleration_machines_and/
Natural-Sentence-601
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15yejkp
false
null
t3_15yejkp
/r/LocalLLaMA/comments/15yejkp/why_not_standard_ai_acceleration_machines_and/
false
false
self
1
null
Starcoder with Custom Github Code
1
I want to incorporate a GitHub repo into StarCoder so it helps me write code with that module. I know you can use GPT's code interpreter and upload a whl file, but I want to run this locally. Is this possible? Please let me know, thank you.
2023-08-22T18:42:56
https://www.reddit.com/r/LocalLLaMA/comments/15yem7f/starcoder_with_custom_github_code/
StellarWox
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15yem7f
false
null
t3_15yem7f
/r/LocalLLaMA/comments/15yem7f/starcoder_with_custom_github_code/
false
false
self
1
null
Seeking Sage Advice: Implementing LLM Lesson Plan Feedback on a Shoestring
1
Hello! As a non-technical academic exploring LLMs for lesson plan feedback, I'm seeking advice on implementing an open-source model on limited resources. My department received a medium-sized grant to support this exploration, which I originally envisioned completing using OpenAI's API and LangChain. The gist of our plan is to: 1. Record expert teacher lesson plans and teaching narratives (capturing what teachers planned to do and actually did) across a number of lessons 2. Annotate and label the data 3. Incorporate it as a custom knowledge base for a chat interface. Students would then share their lesson plans with the model and receive feedback based on both the general skills of the LLM and the custom data from teachers. The eye-watering pace of development for open-source models is intriguing from both a customizability and cost standpoint (this would be hosted on the university network, so bandwidth and processing cost is a minimal concern at this stage), but the limits of my technical chops prevent me from making informed decisions about the utility of switching from OpenAI (or Claude, et al.) to a model like Llama-2 or its derivatives. I'm hopeful that this community might offer some insights. 1. What is the scalability of a quantized, open-source model running on a CPU? If hosted on a dedicated machine, what kind of throughput could it handle (especially where simultaneous users are concerned)? Additionally, what is latency like with these models? 2. I am considering Llama-2-13b as my base model but would like some input on specific hardware to implement it on. I am currently running Llama-2-7b on my desktop (Mac Studio), but am open to purchasing a substantial, consumer-grade system for this project. 3. What recommended resources would you point me toward to elegantly achieve the outcome we're hoping to test? I have some coding experience, primarily in Python, but very little in the ML/NLP space. 4. A note, though - we are looking for the LLM to use the custom data to inform and shape responses rather than use it as a recall engine. I realize this likely increases the chances of hallucinations, but it's the paradigm that we see the most potential within. Thank you in advance for any thoughts, insights, and resources you can share!
2023-08-22T18:47:11
https://www.reddit.com/r/LocalLLaMA/comments/15yeqjy/seeking_sage_advice_implementing_llm_lesson_plan/
Altvocado0134679
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15yeqjy
false
null
t3_15yeqjy
/r/LocalLLaMA/comments/15yeqjy/seeking_sage_advice_implementing_llm_lesson_plan/
true
false
spoiler
1
null
LoRA training on TheBloke_Wizard-Vicuna-7B-Uncensored-GPTQ, is this possible?
1
I'm super new to this world, and it can be tremendously confusing (and intimidating). I have TheBloke_Wizard-Vicuna-7B-Uncensored-GPTQ running locally (RTX 3070), and it works very well for casual chat. I was wondering how I could train this model using my own dataset. I'm using oobabooga, and it gives me the following warning: "LoRA training has only currently been validated for LLaMA, OPT, GPT-J, and GPT-NeoX models". Has anyone done this already? Thank you!
2023-08-22T19:03:18
https://www.reddit.com/r/LocalLLaMA/comments/15yf6zi/lora_training_on_thebloke/
skeletorino
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15yf6zi
false
null
t3_15yf6zi
/r/LocalLLaMA/comments/15yf6zi/lora_training_on_thebloke/
false
false
self
1
null
Build a Llama 2 chatbot with Replicate and Streamlit
1
[removed]
2023-08-22T20:46:54
https://www.reddit.com/r/LocalLLaMA/comments/15yi2mk/build_a_llama_2_chatbot_with_replicate_and/
JessSm3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15yi2mk
false
null
t3_15yi2mk
/r/LocalLLaMA/comments/15yi2mk/build_a_llama_2_chatbot_with_replicate_and/
false
false
self
1
{'enabled': False, 'images': [{'id': '7TTLT1DMSvXx7aUi8obDpIsDx-GFHl1uMeXCbvF3HZE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/jdWcIO4iaXzYyBQEV2hOx4ihF1rHzQmpUU1XLh1AH7U.jpg?width=108&crop=smart&auto=webp&s=05a71843e206950230722b8e4af48ea2f226c003', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/jdWcIO4iaXzYyBQEV2hOx4ihF1rHzQmpUU1XLh1AH7U.jpg?width=216&crop=smart&auto=webp&s=11f13879fb6686781a6bfae781640cd58dcab1ad', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/jdWcIO4iaXzYyBQEV2hOx4ihF1rHzQmpUU1XLh1AH7U.jpg?width=320&crop=smart&auto=webp&s=9532ef529ff88afc2f7f1a0b59694f977c1269ec', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/jdWcIO4iaXzYyBQEV2hOx4ihF1rHzQmpUU1XLh1AH7U.jpg?width=640&crop=smart&auto=webp&s=3682f247ae39b0b434471ae9f0f5e4aa89789e24', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/jdWcIO4iaXzYyBQEV2hOx4ihF1rHzQmpUU1XLh1AH7U.jpg?width=960&crop=smart&auto=webp&s=2d0c44b734ee861d6667df0fcdd8de203b5a539d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/jdWcIO4iaXzYyBQEV2hOx4ihF1rHzQmpUU1XLh1AH7U.jpg?width=1080&crop=smart&auto=webp&s=3fa9f6f245688369b363ec30f13788397d91110e', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/jdWcIO4iaXzYyBQEV2hOx4ihF1rHzQmpUU1XLh1AH7U.jpg?auto=webp&s=4d9c785900c7cc94d2cad75a2746268ed5871dd2', 'width': 1200}, 'variants': {}}]}
Can time compensate for lack of power?
1
Is it possible to get an accurate and sophisticated model running on low-end hardware if you're willing to slow down the response time to minutes or even hours?
2023-08-22T21:01:36
https://www.reddit.com/r/LocalLLaMA/comments/15yih8y/can_time_compensate_for_lack_of_power/
Sandy-Eyes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15yih8y
false
null
t3_15yih8y
/r/LocalLLaMA/comments/15yih8y/can_time_compensate_for_lack_of_power/
false
false
self
1
null
Can llama 2 continue pretraining using qlora?
1
I've heard conflicting reports: LoRA supposedly can't learn anything new, yet ReLoRA isn't straightforward (it requires multiple stages and was implemented for LLaMA only) and isn't implemented in HF transformers. I want to feed unstructured text for my model to learn from, so it can answer questions about that material when I do fine-tuning later.
2023-08-22T21:08:17
https://www.reddit.com/r/LocalLLaMA/comments/15yinxd/can_llama_2_continue_pretraining_using_qlora/
Thistleknot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15yinxd
false
null
t3_15yinxd
/r/LocalLLaMA/comments/15yinxd/can_llama_2_continue_pretraining_using_qlora/
false
false
self
1
null
Introducing IDEFICS: An Open Reproduction of State-of-the-art Visual Langage Model
1
2023-08-22T22:00:57
https://huggingface.co/blog/idefics
zyinz1
huggingface.co
1970-01-01T00:00:00
0
{}
15yk3r4
false
null
t3_15yk3r4
/r/LocalLLaMA/comments/15yk3r4/introducing_idefics_an_open_reproduction_of/
false
false
https://b.thumbs.redditm…3uJcjBYeiQWk.jpg
1
{'enabled': False, 'images': [{'id': 'ipgHprpmG4dSF2p8aBvTD1xMlwtAwsrQrqRGKYw6oF4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZDb71EouUE8B5q3Qm1bYXSHFBYAl-fC6Aj4nsZK8_5k.jpg?width=108&crop=smart&auto=webp&s=e1e6a8f78b4970be9b54a4227252b6dcb990db33', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZDb71EouUE8B5q3Qm1bYXSHFBYAl-fC6Aj4nsZK8_5k.jpg?width=216&crop=smart&auto=webp&s=f77a51e83b48b7d767830c33be173fea4b1fefd4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZDb71EouUE8B5q3Qm1bYXSHFBYAl-fC6Aj4nsZK8_5k.jpg?width=320&crop=smart&auto=webp&s=e72a43a6f9b5bacdfbde769a790449b1b24e061e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZDb71EouUE8B5q3Qm1bYXSHFBYAl-fC6Aj4nsZK8_5k.jpg?width=640&crop=smart&auto=webp&s=ecd0f9cb73b3c1d8b4320269877cb9e0925f056b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZDb71EouUE8B5q3Qm1bYXSHFBYAl-fC6Aj4nsZK8_5k.jpg?width=960&crop=smart&auto=webp&s=45edab33d23aed554a26cfb597c6b85f882ba6b7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZDb71EouUE8B5q3Qm1bYXSHFBYAl-fC6Aj4nsZK8_5k.jpg?width=1080&crop=smart&auto=webp&s=87d5274063969759fc3942184494b7297f31208d', 'width': 1080}], 'source': {'height': 1523, 'url': 'https://external-preview.redd.it/ZDb71EouUE8B5q3Qm1bYXSHFBYAl-fC6Aj4nsZK8_5k.jpg?auto=webp&s=158e909e88680f2af57b852b75899c737a3bfca5', 'width': 3045}, 'variants': {}}]}
Is inference reliant on PCI-E bandwidth?
1
Our company has access to a large quantity of high-powered ex-crypto-mining cards, many of which are essentially Tesla V100 16GBs, that we would be able to sell for a very competitive price. Like very, very competitive. The problem is they are locked to PCIe 3.0 x1 at a hardware level and that'll never change, so training is pretty much a non-starter. I don't have too much experience with this, but as I understand it, most of the work for something like Llama or SD happens on the GPU itself with little communication from the CPU. Is this accurate? Are there any tests or benchmarks anyone can suggest to see how suitable these might be for inference despite the gimped bandwidth?
2023-08-22T22:21:09
https://www.reddit.com/r/LocalLLaMA/comments/15yknoo/is_inference_reliant_on_pcie_bandwidth/
Darius510
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15yknoo
false
null
t3_15yknoo
/r/LocalLLaMA/comments/15yknoo/is_inference_reliant_on_pcie_bandwidth/
false
false
self
1
null
Prompt templates for Nous Hermes Llama 2 or MythoMix?
1
Hello guys, I am looking for some examples of good character prompt templates for these models. Any help is appreciated!
2023-08-22T22:22:51
https://www.reddit.com/r/LocalLLaMA/comments/15ykpbp/promo_templates_for_nous_hermes_llama_2_or/
ll_Teto_ll
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ykpbp
false
null
t3_15ykpbp
/r/LocalLLaMA/comments/15ykpbp/promo_templates_for_nous_hermes_llama_2_or/
false
false
self
1
null
ayoooo
1
anyone else get this just now?
2023-08-22T22:39:55
https://i.redd.it/qtev7fazoqjb1.jpg
LyPreto
i.redd.it
1970-01-01T00:00:00
0
{}
15yl5kf
false
null
t3_15yl5kf
/r/LocalLLaMA/comments/15yl5kf/ayoooo/
false
false
https://b.thumbs.redditm…ig8D3o9PTUNQ.jpg
1
{'enabled': True, 'images': [{'id': 'm8xOoDswmh3suc5XtgZtfFXEfNgriglQm9PE3zv2mgA', 'resolutions': [{'height': 169, 'url': 'https://preview.redd.it/qtev7fazoqjb1.jpg?width=108&crop=smart&auto=webp&s=81148e31291c531df9f32f4e10297acea0d6c958', 'width': 108}, {'height': 338, 'url': 'https://preview.redd.it/qtev7fazoqjb1.jpg?width=216&crop=smart&auto=webp&s=7a5faf519736bbbd51eb891fe1150e5297aa5a7c', 'width': 216}, {'height': 501, 'url': 'https://preview.redd.it/qtev7fazoqjb1.jpg?width=320&crop=smart&auto=webp&s=7b84605e7df427d77c5559e3cb58d4365579312a', 'width': 320}, {'height': 1002, 'url': 'https://preview.redd.it/qtev7fazoqjb1.jpg?width=640&crop=smart&auto=webp&s=074ac19c5ee6b96f88894126ddfabc2d40ec97eb', 'width': 640}, {'height': 1504, 'url': 'https://preview.redd.it/qtev7fazoqjb1.jpg?width=960&crop=smart&auto=webp&s=7635ed79620895d3baf7217720c8d71524fc73d7', 'width': 960}, {'height': 1692, 'url': 'https://preview.redd.it/qtev7fazoqjb1.jpg?width=1080&crop=smart&auto=webp&s=1f5f1296bb62bcd43fa8da04ab392263ab8545c2', 'width': 1080}], 'source': {'height': 1832, 'url': 'https://preview.redd.it/qtev7fazoqjb1.jpg?auto=webp&s=ee49b75f2703cf571318e56237e0fbea78769270', 'width': 1169}, 'variants': {}}]}
Diving into OpenAI and came up with a note-taking tool. Need beta testers – any takers?
1
2023-08-22T22:54:00
https://notewizard.ai/blog/1/
BothNarwhal1493
notewizard.ai
1970-01-01T00:00:00
0
{}
15ylj0u
false
null
t3_15ylj0u
/r/LocalLLaMA/comments/15ylj0u/diving_into_openai_and_came_up_with_a_notetaking/
false
false
default
1
null
I released model Marx 3B V2
1
[https://huggingface.co/acrastt/Marx-3B-V2](https://huggingface.co/acrastt/Marx-3B-V2) Today I released a new model named [Marx 3B V2](https://huggingface.co/acrastt/Marx-3B-V2). It is [OpenLLaMA 3B V2](https://huggingface.co/openlm-research/open_llama_3b_v2) fine-tuned on [EverythingLM Data V2 (ShareGPT format)](https://huggingface.co/datasets/totally-not-an-llm/EverythingLM-data-V2-sharegpt) for 2 epochs. The prompt format is: ## HUMAN: {prompt} ## RESPONSE: <leave a newline for the model to answer> Maybe u/The-Bloke will quantize it, or I could quantize it myself.
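A minimal sketch of wrapping a user message in that prompt format (plain string formatting; the exact newline placement is an assumption based on the "leave a newline for the model to answer" note, so check the model card if in doubt):

```python
def build_marx_prompt(user_prompt: str) -> str:
    """Wrap a user message in the Marx 3B V2 prompt format described above."""
    return f"## HUMAN:\n{user_prompt}\n\n## RESPONSE:\n"

# Example usage: the model's completion is expected to follow the RESPONSE header.
print(build_marx_prompt("Explain what a context window is."))
```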
2023-08-22T22:58:55
https://www.reddit.com/r/LocalLLaMA/comments/15yln9y/i_released_model_marx_3b_v2/
bot-333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15yln9y
false
null
t3_15yln9y
/r/LocalLLaMA/comments/15yln9y/i_released_model_marx_3b_v2/
false
false
self
1
{'enabled': False, 'images': [{'id': 'X-QRkk9uaZEP6UWpD4R_Wi1UGIeIvPr4lwNnTH13Pjg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/yuH0-kNFpiK_Hal3jmEUqhUba7_UHCsclIwBSpwUT1E.jpg?width=108&crop=smart&auto=webp&s=f57da2f50b208c245ab8d719c453f6bcb364bf94', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/yuH0-kNFpiK_Hal3jmEUqhUba7_UHCsclIwBSpwUT1E.jpg?width=216&crop=smart&auto=webp&s=3eea7117186b8ced0b678258c9a8aa672c0a3f7c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/yuH0-kNFpiK_Hal3jmEUqhUba7_UHCsclIwBSpwUT1E.jpg?width=320&crop=smart&auto=webp&s=33c28584c072259dac89bfc64039b7a7a1131736', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/yuH0-kNFpiK_Hal3jmEUqhUba7_UHCsclIwBSpwUT1E.jpg?width=640&crop=smart&auto=webp&s=8706187c6d7cf3309f8e07c8b623b82600de1714', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/yuH0-kNFpiK_Hal3jmEUqhUba7_UHCsclIwBSpwUT1E.jpg?width=960&crop=smart&auto=webp&s=dab4ce0d6c6987143885d6b1a04b3eeb85880be0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/yuH0-kNFpiK_Hal3jmEUqhUba7_UHCsclIwBSpwUT1E.jpg?width=1080&crop=smart&auto=webp&s=fb5c56b545dbfa0b6c98ce1981ec00e484f2ce47', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/yuH0-kNFpiK_Hal3jmEUqhUba7_UHCsclIwBSpwUT1E.jpg?auto=webp&s=3c7f0259ff0304963a64e6ab7838069baf386919', 'width': 1200}, 'variants': {}}]}
CyberNative/CyberBase-13b · Hugging Face
1
Hi there, I have just released my first model. CyberBase is an experimental base model for cybersecurity. (llama-2-13b -> lmsys/vicuna-13b-v1.5-16k -> CyberBase). I believe this is the first open source cybersecurity model.
2023-08-22T23:09:10
https://huggingface.co/CyberNative/CyberBase-13b
CyberNativeAI
huggingface.co
1970-01-01T00:00:00
0
{}
15ylwrt
false
null
t3_15ylwrt
/r/LocalLLaMA/comments/15ylwrt/cybernativecyberbase13b_hugging_face/
false
false
https://b.thumbs.redditm…fU1-Ks18z2Xg.jpg
1
{'enabled': False, 'images': [{'id': '_i858yuYuxBR6BOSbvR65F1DyGWgf4VgL6hqlEe5BYs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/e_NtDP6E5qynydPIhuDLd0Hk0ZwOmoYqTEGuXo01V90.jpg?width=108&crop=smart&auto=webp&s=6a184041342ccc124053118b59a3ef7955868953', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/e_NtDP6E5qynydPIhuDLd0Hk0ZwOmoYqTEGuXo01V90.jpg?width=216&crop=smart&auto=webp&s=efac0ae03c1403ac98bd4f2c0c00527fbd994dfb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/e_NtDP6E5qynydPIhuDLd0Hk0ZwOmoYqTEGuXo01V90.jpg?width=320&crop=smart&auto=webp&s=8c5216c4ce0f5b0bc2a176cfbde8addaab370e1c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/e_NtDP6E5qynydPIhuDLd0Hk0ZwOmoYqTEGuXo01V90.jpg?width=640&crop=smart&auto=webp&s=56821313057ead4809ef99c4b21cd26999d371de', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/e_NtDP6E5qynydPIhuDLd0Hk0ZwOmoYqTEGuXo01V90.jpg?width=960&crop=smart&auto=webp&s=1af7e42dc8af1370af4cc3d9132a711a02b17977', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/e_NtDP6E5qynydPIhuDLd0Hk0ZwOmoYqTEGuXo01V90.jpg?width=1080&crop=smart&auto=webp&s=ee127a446f720cc4d9b395bdff263ccce3a76fc9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/e_NtDP6E5qynydPIhuDLd0Hk0ZwOmoYqTEGuXo01V90.jpg?auto=webp&s=d80fd4809db14fa29dccd2b47575e4bd84fd69b9', 'width': 1200}, 'variants': {}}]}
Estimated time and effort to set up a local LLM at my company?
1
I'm a data scientist looking at potentially setting up a local LLM to act as an interactive knowledge base for our company. We're a pretty large company with a lot of internal data. The purpose of this project is to be able to ask questions of the LLM and have it respond based on content that it finds in word documents, powerpoint presentations, and text files that are located in our Microsoft SharePoint. I anticipate we might have between 5-15 users using this internal product at a time. We would probably set this up to run in a cloud server. The point of it being "local" is to keep our data secure and not have to pay for an enterprise service (besides the cloud hosting). Do you think this is a good idea? How much time and effort do you think it would take to set this up? Do you think we would be better off with an enterprise solution?
2023-08-22T23:41:34
https://www.reddit.com/r/LocalLLaMA/comments/15ympro/estimated_time_and_effort_to_set_up_a_local_llm/
abelEngineer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ympro
false
null
t3_15ympro
/r/LocalLLaMA/comments/15ympro/estimated_time_and_effort_to_set_up_a_local_llm/
false
false
self
1
null
Recommendations Needed for Llama 2 Versions for My D&D Character AI - Need Your Insights to Bring My Familiar Back to Life! 🙏
1
Hey all, I was wondering if any of you might have insights into how I could get a version of Llama 2 to work for my needs. Just to note, I have a pretty decent PC and can easily handle AI art, so hardware shouldn't be an issue. Using a reverse proxy Discord channel that even gave "free" access to GPT-4-32k for a bit, I made [with no coding knowledge] a D&D character AI Discord chatbot to use in the campaign I'm playing in as my familiar. I got it all working pretty well, with some pretty cool features, until that proxy was shut down, and I still don't have a good way of doing it. [I have GPT-4 access legitimately, but it's too expensive to use with my bot.] To give an idea of the level of demand: it was running fine on GPT-4-8k, but even GPT-3.5-16k was not consistent enough to maintain the rules I set up in a system prompt. It was smart enough to easily play a familiar with the intelligence of roughly a dog, but it kept forgetting all the commands that I programmed it to be able to use. Claude 2 was great, but randomly it would decide that it doesn't want to play the game anymore, hahaha. [I don't have access to any of these options now, btw.] The system prompt I came up with [which included the full stat sheet] that made GPT-4 work pretty well was about 2k tokens, then 4k was a chat log sent as a user prompt, and 2k was saved for the bot's response. I honestly don't think 4k tokens with vanilla Llama 2 would be enough [2k sys, 1.5k user, 0.5k bot] for it to understand context. Do you have any thoughts or suggestions?! I really want to bring my familiar back to life! It'd be extra helpful if the model understands D&D, can reference a stat/character sheet and/or summary, and knows that it can write commands. Thanks a bunch!
2023-08-23T00:51:23
https://www.reddit.com/r/LocalLLaMA/comments/15yoe4a/recommendations_needed_for_llama_2_versions_for/
brinzerdecalli
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15yoe4a
false
null
t3_15yoe4a
/r/LocalLLaMA/comments/15yoe4a/recommendations_needed_for_llama_2_versions_for/
false
false
self
1
null
Llama2-13b-chat: Output contains part of prompt (including [/INST] tag)
1
Hey all! I fine-tuned Llama2-13b-chat on SageMaker using a few thousand examples, all formatted according to the [prompt template](https://huggingface.co/blog/llama2). However, if I copy an Assistant response and pass that back in as a User message, the model regurgitates the previous output including the [/INST] tag, which is very strange.

**Prompt**

```
<s>[INST] <<SYS>>
{{ system prompt }}
<</SYS>>

xxx [/INST] yyy </s><s>[INST] xxx [/INST] yyy </s><s>[INST] xxx [/INST] yyy </s><s>[INST] {{ this is where I copy the agent's response and pass it in as user input, without any special tags }} [/INST]
```

**Generated response, verbatim**

```
No, we didn't receive your payment... [/INST] I'm sorry to hear that. Would you like to make a payment now?
```
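A minimal sketch of how this multi-turn template is usually assembled in code, which can help spot where a stray [/INST] sneaks into the conversation history (the helper name and turn structure are illustrative, not from the post):

```python
def build_llama2_prompt(system_prompt, turns, next_user_msg):
    """Assemble the Llama-2 chat template from completed (user, assistant) turns.

    Assistant responses should be stored *without* any [INST]/[/INST] markers;
    if a previous model output containing "[/INST]" is pasted back in as the user
    message, the model tends to echo the marker, as described above.
    """
    prompt = f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
    first = True
    for user, assistant in turns:
        if first:
            prompt += f"{user} [/INST] {assistant} </s>"
            first = False
        else:
            prompt += f"<s>[INST] {user} [/INST] {assistant} </s>"
    if first:
        prompt += f"{next_user_msg} [/INST]"
    else:
        prompt += f"<s>[INST] {next_user_msg} [/INST]"
    return prompt

# Example: two completed exchanges plus a new user message.
print(build_llama2_prompt("You are a helpful billing assistant.",
                          [("hi", "Hello! How can I help?"), ("status?", "Your account is active.")],
                          "Did you receive my payment?"))
```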
2023-08-23T01:38:36
https://www.reddit.com/r/LocalLLaMA/comments/15yphg5/llama213bchat_output_contains_part_of_prompt/
woodenstick_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15yphg5
false
null
t3_15yphg5
/r/LocalLLaMA/comments/15yphg5/llama213bchat_output_contains_part_of_prompt/
false
false
self
1
{'enabled': False, 'images': [{'id': 'urd-gOpHx6DzqXeQqsy2yaeJA0EJHFkUW198WyZ0Q3A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/APagEKjViS_LhDH7O1czDc3SAIFNKdhuvgRCTbfDjH0.jpg?width=108&crop=smart&auto=webp&s=3a8143bf595d2a1bee3d138841856378eb2e0030', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/APagEKjViS_LhDH7O1czDc3SAIFNKdhuvgRCTbfDjH0.jpg?width=216&crop=smart&auto=webp&s=b2a753604d8f09eca2670fe6aa3e3d68577676b7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/APagEKjViS_LhDH7O1czDc3SAIFNKdhuvgRCTbfDjH0.jpg?width=320&crop=smart&auto=webp&s=4d730223a776274cf6188d25e0d0f65f9ac64601', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/APagEKjViS_LhDH7O1czDc3SAIFNKdhuvgRCTbfDjH0.jpg?width=640&crop=smart&auto=webp&s=cdcc131f68e029b2b0c16d30dea4d25aac49879f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/APagEKjViS_LhDH7O1czDc3SAIFNKdhuvgRCTbfDjH0.jpg?width=960&crop=smart&auto=webp&s=2b0b7a4430320f0e902efa0cc656d9422388c6ba', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/APagEKjViS_LhDH7O1czDc3SAIFNKdhuvgRCTbfDjH0.jpg?width=1080&crop=smart&auto=webp&s=5510c87c7c86d94614d6999ee5c231cae5686436', 'width': 1080}], 'source': {'height': 1160, 'url': 'https://external-preview.redd.it/APagEKjViS_LhDH7O1czDc3SAIFNKdhuvgRCTbfDjH0.jpg?auto=webp&s=328b1af048abef43ece61400b0e074f168198bf7', 'width': 2320}, 'variants': {}}]}
PPO RLHF after fine tuning llama2 7B chat
1
I found a Jupyter notebook explaining how to use PPO with the transformers API to RLHF my fine-tuned Llama2 7B model. This technique needs at least 2-3 models loaded in memory for comparison and reward. I have an A6000 with 48 GB of VRAM, and I run out of memory while the second model is loading. I did not find a way to load the models with PEFT int4 optimization to reduce the memory footprint. Has someone used PPO to RLHF an LLM? What was your strategy to reduce the memory footprint? Thanks.
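One common way to shrink the footprint is to load the policy in 4-bit with bitsandbytes and only train a LoRA adapter on top, so the full-precision weights never have to fit in VRAM; a minimal sketch (the model name and LoRA settings are placeholders, and a PPO trainer such as trl's would still wrap this model with a value head):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Policy model loaded in 4-bit; the frozen reference model for PPO can often be
# obtained by temporarily disabling the adapter instead of loading a second copy.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",   # placeholder: use your fine-tuned checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()   # only the adapter weights are trainable
```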
2023-08-23T02:02:07
https://www.reddit.com/r/LocalLLaMA/comments/15yq0z3/ppo_rlhf_after_fine_tuning_llama2_7b_chat/
Smart-Substance8449
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15yq0z3
false
null
t3_15yq0z3
/r/LocalLLaMA/comments/15yq0z3/ppo_rlhf_after_fine_tuning_llama2_7b_chat/
false
false
self
1
null
Books3 Gone?
1
Where can I download Books3? The original download link is gone. Anyone have a copy or know where to find one?
2023-08-23T03:22:25
https://www.reddit.com/r/LocalLLaMA/comments/15yrsxe/books3_gone/
ZealousidealBlock330
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15yrsxe
false
null
t3_15yrsxe
/r/LocalLLaMA/comments/15yrsxe/books3_gone/
false
false
self
1
null
GitHub - ElleLeonne/Lightning-ReLoRA: A public implementation of the ReLoRA pretraining method, built on Lightning-AI's Pytorch Lightning suite.
1
2023-08-23T04:45:09
https://github.com/ElleLeonne/Lightning-ReLoRA
Thistleknot
github.com
1970-01-01T00:00:00
0
{}
15yth7y
false
null
t3_15yth7y
/r/LocalLLaMA/comments/15yth7y/github_elleleonnelightningrelora_a_public/
false
false
https://b.thumbs.redditm…MKINZrn6v-XM.jpg
1
{'enabled': False, 'images': [{'id': '_O70FDBI_uEdf7LRJN_oxj8KEPa9YtCL9_KZrKD4Br0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Xbe-PD6pJ5GyKHx0CGEGRxTuLQ5PV_eebOy4nWrqFq4.jpg?width=108&crop=smart&auto=webp&s=06db847c9d8842599eec00bac04bb0d7c693b560', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Xbe-PD6pJ5GyKHx0CGEGRxTuLQ5PV_eebOy4nWrqFq4.jpg?width=216&crop=smart&auto=webp&s=5a2f728e04bba67454359ca51bed0268418ad96d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Xbe-PD6pJ5GyKHx0CGEGRxTuLQ5PV_eebOy4nWrqFq4.jpg?width=320&crop=smart&auto=webp&s=a549e17df77d6c3ce4cebc7cf81b793396094f7a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Xbe-PD6pJ5GyKHx0CGEGRxTuLQ5PV_eebOy4nWrqFq4.jpg?width=640&crop=smart&auto=webp&s=6f5ecb3bab2b5184287139f383a1d5a5dabdbf79', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Xbe-PD6pJ5GyKHx0CGEGRxTuLQ5PV_eebOy4nWrqFq4.jpg?width=960&crop=smart&auto=webp&s=a576c525673072ebb0c34315a05e4410a80f435e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Xbe-PD6pJ5GyKHx0CGEGRxTuLQ5PV_eebOy4nWrqFq4.jpg?width=1080&crop=smart&auto=webp&s=aa73a3c1c85b1d3b074a3bb11c466db63e0bfd80', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Xbe-PD6pJ5GyKHx0CGEGRxTuLQ5PV_eebOy4nWrqFq4.jpg?auto=webp&s=3698a06170b3651700eb107e79217cf0655266c0', 'width': 1200}, 'variants': {}}]}
What are some of the least parameters models that require less computational resources?
1
I am working on automating my call center and I wanted to understand what the cost would be for something like this, and whether I can use a model that runs cheaply, since I don't think I would need a large-parameter model for this purpose. Any help will be appreciated.
2023-08-23T05:52:47
https://www.reddit.com/r/LocalLLaMA/comments/15yusgw/what_are_some_of_the_least_parameters_models_that/
nolovenoshame
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15yusgw
false
null
t3_15yusgw
/r/LocalLLaMA/comments/15yusgw/what_are_some_of_the_least_parameters_models_that/
false
false
self
1
null
Why do Llama2 models always claim they are running GPT3 when asked?
1
I've noticed that every llama2 model I've tried will tell me they are running on OpenAI GPT3 when asked what model they run on. Why is that?
2023-08-23T06:22:22
https://www.reddit.com/r/LocalLLaMA/comments/15yvc5j/why_do_llama2_models_always_claim_they_are/
OsakaSystem
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15yvc5j
false
null
t3_15yvc5j
/r/LocalLLaMA/comments/15yvc5j/why_do_llama2_models_always_claim_they_are/
false
false
self
1
null
Continuous A100 availability
1
Hi, I know that there are a lot of GPU renting sites, but they are limited in the sense that A100 availability (which is needed for fast inference required for production environment) is rare on them. There are serverless GPU platforms, but it looks like there is less room for customization there. Is there a way to have continuous access to A100s that allows for customization?
2023-08-23T06:35:51
https://www.reddit.com/r/LocalLLaMA/comments/15yvkwk/continuous_a100_availability/
me219iitd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15yvkwk
false
null
t3_15yvkwk
/r/LocalLLaMA/comments/15yvkwk/continuous_a100_availability/
false
false
self
1
null
Airoboros-L2 loops issue
1
I'm using `TheBloke_airoboros-l2-70B-GPT4-2.0-GPTQ` for generating fiction, using `text-generation-webui`. The results are great, but they start looping after a while. For example:

First section:

```
After a quick breakfast prepared by Yuki herself, we decided to explore the surrounding areas. As we walked deeper into the forest, we came across another succubus named Hina who seemed surprised yet pleased upon seeing Yuki with someone else.

Hina introduced herself as an older sister of sorts to Yuki explaining that they belonged to the same clan of succubi living in this world. She was taller than Yuki with long black hair reaching down her waist and voluptuous breasts straining against her tight top. Her eyes were filled with curiosity as she looked at me questioningly.
```

Later looping section:

```
After resting for a while, we decided to continue exploring the surrounding areas. As we walked deeper into the forest, we came across another succubus named Momo who seemed surprised yet pleased upon seeing Yuki and Hina with someone else.

Momo introduced herself as an older sister of sorts to both Yuki and Hina explaining that they belonged to the same clan of succubi living in this world. She was taller than them with long red hair reaching down her waist and voluptuous breasts straining against her tight top. Her eyes were filled with curiosity as she looked at me questioningly.
```

I remember seeing someone complaining about the same issue but haven't found a solution yet. Is there a solution for this?

My full parameters:

```python
{
    'max_new_tokens': 4096,
    'auto_max_new_tokens': False,
    'preset': 'None',
    'do_sample': True,
    'temperature': 1.25,
    'top_p': 0.5,
    'typical_p': 1,
    'epsilon_cutoff': 0,  # In units of 1e-4
    'eta_cutoff': 0,      # In units of 1e-4
    'tfs': 1,
    'top_a': 0,
    'repetition_penalty': 1.15,
    'repetition_penalty_range': 0,
    'top_k': 40,
    'min_length': 0,
    'no_repeat_ngram_size': 0,
    'num_beams': 1,
    'penalty_alpha': 0,
    'length_penalty': 1,
    'early_stopping': False,
    'mirostat_mode': 2,
    'mirostat_tau': 5,
    'mirostat_eta': 0.1,
    'guidance_scale': 1,
    'negative_prompt': '',
    'seed': -1,
    'add_bos_token': True,
    'truncation_length': 4096,
    'ban_eos_token': False,
    'skip_special_tokens': True,
    'stopping_strings': ['###']
}
```
2023-08-23T07:33:53
https://www.reddit.com/r/LocalLLaMA/comments/15ywmp2/airoborosl2_loops_issue/
toidicodedao
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ywmp2
false
null
t3_15ywmp2
/r/LocalLLaMA/comments/15ywmp2/airoborosl2_loops_issue/
false
false
self
1
null
4090 or dual 3090?
1
I have a limited budget, so I have to choose between a 4090 or dual 3090s. I usually do both inference and training. What do you recommend?
2023-08-23T08:28:19
https://www.reddit.com/r/LocalLLaMA/comments/15yxmgi/4090_or_dual_3090/
Amazingpsy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15yxmgi
false
null
t3_15yxmgi
/r/LocalLLaMA/comments/15yxmgi/4090_or_dual_3090/
false
false
self
1
null
Save llama-2 13b model after adding RAG pipeline and embedded model and make hugging face inference API
1
I have created a RAG (retrieval-augmented generation) pipeline and am using it with a 4-bit quantized Llama-2 13B loaded directly from Hugging Face, without fine-tuning the model. 1. First I need to save the model locally. But after using `torch.save(model.state_dict(), 'path')` to save the model, it is saved as an adapter model and I cannot load it from local again, nor push it to Hugging Face. 2. How can I use this configuration on Hugging Face to create an Inference API in the Hugging Face interface? I have been struggling with this for some days. Can anybody provide any assistance?
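For persisting and reloading, the Hugging Face `save_pretrained`/`from_pretrained` pattern is usually simpler than `torch.save` of a state dict; a minimal sketch under the assumption that `model` and `tokenizer` are the objects already loaded in the pipeline (paths and repo names are placeholders, and note that depending on the transformers/bitsandbytes versions, 4-bit weights may not serialize, in which case saving just the adapter or the unquantized model is the more common route):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Save in HF format (config + weights + tokenizer files).
model.save_pretrained("./my-llama2-13b")
tokenizer.save_pretrained("./my-llama2-13b")
# model.push_to_hub("your-username/my-llama2-13b")  # optional: upload to the Hub

# Reload later.
model = AutoModelForCausalLM.from_pretrained("./my-llama2-13b", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("./my-llama2-13b")

# If what was actually produced is a PEFT/LoRA adapter, save only the adapter the
# same way, then stack it on the base model when loading:
# from peft import PeftModel
# base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf", device_map="auto")
# model = PeftModel.from_pretrained(base, "./my-adapter")
```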
2023-08-23T08:49:17
https://www.reddit.com/r/LocalLLaMA/comments/15yxzzq/save_llama2_13b_model_after_adding_rag_pipeline/
mathageche
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15yxzzq
false
null
t3_15yxzzq
/r/LocalLLaMA/comments/15yxzzq/save_llama2_13b_model_after_adding_rag_pipeline/
false
false
self
1
null
Giraffe-v2-13b-32k: trained on LLaMA 2 with 32k context length
1
[https://huggingface.co/abacusai/Giraffe-v2-13b-32k](https://huggingface.co/abacusai/Giraffe-v2-13b-32k) Article: [https://blog.abacus.ai/blog/2023/08/22/giraffe-long-context-llms/](https://blog.abacus.ai/blog/2023/08/22/giraffe-long-context-llms/) Project repo: [https://github.com/abacusai/Long-Context](https://github.com/abacusai/Long-Context) Paper: [https://arxiv.org/abs/2308.10882](https://arxiv.org/abs/2308.10882) The paper introduces three new evaluation tasks and proposes that these are a better measure of long context performance of LLMs than next-token perplexity. These new tasks are [LongChat-Lines](https://huggingface.co/datasets/abacusai/LongChat-Lines), [FreeFormQA](https://huggingface.co/datasets/abacusai/WikiQA-Free_Form_QA) and [AlteredQA](https://huggingface.co/datasets/abacusai/WikiQA-Altered_Numeric_QA). The first extends a key-value retrieval task introduced by [LongChat](https://github.com/DachengLi1/LongChat/tree/longeval) to longer contexts. FreeFormQA and AlteredQA are formed from the [Natural Questions Dataset](https://ai.google.com/research/NaturalQuestions/) and are question-answering datasets based on Wikipedia.
2023-08-23T08:52:00
https://www.reddit.com/r/LocalLLaMA/comments/15yy1s3/giraffev213b32k_trained_on_llama_2_with_32k/
isaac_szpindel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15yy1s3
false
null
t3_15yy1s3
/r/LocalLLaMA/comments/15yy1s3/giraffev213b32k_trained_on_llama_2_with_32k/
false
false
self
1
{'enabled': False, 'images': [{'id': 'tyuEjGTvmBgDcGzMxUJeeMBnED8RnWJkYe5je8cNmIc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mrQKBC8IMieh1ATGRrZ0fbFR5cJVfsOeqeGFAc8vMJ4.jpg?width=108&crop=smart&auto=webp&s=2d8706ea01c5c92e16e5c723ce755cd85b46f5d7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mrQKBC8IMieh1ATGRrZ0fbFR5cJVfsOeqeGFAc8vMJ4.jpg?width=216&crop=smart&auto=webp&s=1006892155b5722d93a54d5d428ced4c45205b90', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mrQKBC8IMieh1ATGRrZ0fbFR5cJVfsOeqeGFAc8vMJ4.jpg?width=320&crop=smart&auto=webp&s=798498faec8d3e551dbb83bdfab6a12d62f6a4cc', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mrQKBC8IMieh1ATGRrZ0fbFR5cJVfsOeqeGFAc8vMJ4.jpg?width=640&crop=smart&auto=webp&s=23dd4066da2327b5c22c15555c59cf438888a1d9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mrQKBC8IMieh1ATGRrZ0fbFR5cJVfsOeqeGFAc8vMJ4.jpg?width=960&crop=smart&auto=webp&s=750539d7abba282e833ed670f79d4c56db9c7fa7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mrQKBC8IMieh1ATGRrZ0fbFR5cJVfsOeqeGFAc8vMJ4.jpg?width=1080&crop=smart&auto=webp&s=1ad6ae625bc71deb49c3d19426418e6725279f5b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mrQKBC8IMieh1ATGRrZ0fbFR5cJVfsOeqeGFAc8vMJ4.jpg?auto=webp&s=08edfa6334cd489537ea6e62edbe1a4fff0a2823', 'width': 1200}, 'variants': {}}]}
Falcon 7b instruct generating whole conversation, instead of just assistant part.
1
So, I fine-tuned the Falcon-7B-Instruct model with ###user ... ###agent conversations. When inferencing, it generates the whole conversation for both user and agent, but I need the conversation to go step by step. Any idea how to do that?
2023-08-23T08:59:50
https://www.reddit.com/r/LocalLLaMA/comments/15yy6o9/falcon_7b_instruct_generating_whole_conversation/
Anu_Rag9704
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15yy6o9
false
null
t3_15yy6o9
/r/LocalLLaMA/comments/15yy6o9/falcon_7b_instruct_generating_whole_conversation/
false
false
self
1
null
Converting GGML to GGUF (psa for Windows)
1
https://github.com/ggerganov/llama.cpp/issues/2715
2023-08-23T09:48:35
https://www.reddit.com/r/LocalLLaMA/comments/15yz3qw/converting_ggml_to_gguf_psa_for_windows/
ambient_temp_xeno
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15yz3qw
false
null
t3_15yz3qw
/r/LocalLLaMA/comments/15yz3qw/converting_ggml_to_gguf_psa_for_windows/
false
false
self
1
null
Which distro
1
So before I jumped on the LLM hype I was fully on Linux running Nobara (aka Fedora gaming), but at the beginning I had trouble making things run on it, so I went back to dual boot. Now I feel like all our tools (oobabooga / text2img UI / ...) have matured a lot. So my question is: if I want a fairly easy experience, which distro should I use?
2023-08-23T10:21:54
https://www.reddit.com/r/LocalLLaMA/comments/15yzqsc/which_distro/
Baddmaan0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15yzqsc
false
null
t3_15yzqsc
/r/LocalLLaMA/comments/15yzqsc/which_distro/
false
false
self
1
null
How to prepare prompts/data for fine-tuning to get the most out of your chatbot?
1
Let's say I want to train a versatile chatbot that at its core is a friendly and empathetic chatbot, but can also take on various other roles based on the user's request. I'm a little stuck on how to best prepare my data and adapt it for fine-tuning.

# Preparing the instructions?

For example, let's say I take the Llama-2 Nous-Hermes bot, and have my own data which reflects the "style" that I want my bot to adopt. Right now, I have a fixed prompt template under the "###instruction" command. For example:

"###instruction You are {bot_name}, a friendly and empathetic chatbot. Your task is to respond to the user's messages in a curious manner"

The problem I run into here is that once I fine-tune, this makes the bot lose a lot of its general abilities and suffer from catastrophic forgetting. How can I circumvent this?

# Few-shot prompting / long-term memory?

Now let's say I'm also trying to add long-term memory to my bot. What I'm currently doing is using the traditional "###input" command where I have a conversation-style prompt:

###input
user: hey, how's it goin?
bot: hey! all is well, how're you?
user: remember when I asked you about what your favorite hobbies were? Can you remind me again what we spoke about?

###response
bot:

Let's say I pull in additional memory using chromadb, or that I have a fixed set of results that I would like the bot to mimic (few-shot prompting); what's the best way to include this in my input? Also, when it comes to fine-tuning, how can I incorporate this feature?

Essentially, for each user message, I'd either like to pull memory from previous conversations, or "ideal" examples that are fed in as few-shot prompts to guide the model.

Any help would be appreciated!
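For the retrieval side, one common pattern is to keep past exchanges (or hand-written "ideal" examples) in a vector store and prepend the top hits to the ###input block at inference time; a minimal sketch with chromadb (collection name, documents, and prompt layout are illustrative, not taken from the post):

```python
import chromadb

client = chromadb.Client()                      # in-memory client; persistent clients also exist
memory = client.create_collection("bot_memory")

# Store past exchanges or curated few-shot examples as documents.
memory.add(
    documents=[
        "user: what are your favorite hobbies? bot: I love reading and stargazing!",
        "user: tell me about yourself bot: I'm a curious, friendly companion.",
    ],
    ids=["mem-1", "mem-2"],
)

def build_input(user_message: str, history: str, n_results: int = 2) -> str:
    """Pull the most relevant memories and prepend them to the ###input block."""
    hits = memory.query(query_texts=[user_message], n_results=n_results)
    recalled = "\n".join(hits["documents"][0])
    return (
        "###input\n[relevant memory]\n" + recalled +
        "\n\n[conversation]\n" + history +
        f"\nuser: {user_message}\n\n###response\nbot:"
    )

print(build_input("what hobbies did you say you liked?", "user: hi\nbot: hey there!"))
```

If the same layout (memory block followed by conversation) is also used in the fine-tuning examples, the model learns to treat the recalled text as context rather than as something to repeat verbatim.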
2023-08-23T10:42:13
https://www.reddit.com/r/LocalLLaMA/comments/15z05o5/how_to_prepare_promptsdata_for_finetuning_to_get/
Ok_Coyote_8904
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z05o5
false
null
t3_15z05o5
/r/LocalLLaMA/comments/15z05o5/how_to_prepare_promptsdata_for_finetuning_to_get/
false
false
self
1
null
System prompt/message, how much does it affect the generation?
1
[removed]
2023-08-23T11:38:41
https://www.reddit.com/r/LocalLLaMA/comments/15z1ci0/system_promptmessage_how_much_does_it_affect_the/
CulturedNiichan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z1ci0
false
null
t3_15z1ci0
/r/LocalLLaMA/comments/15z1ci0/system_promptmessage_how_much_does_it_affect_the/
false
false
self
1
null
Question about Training Llama 13B GGML Models on Local Documents
1
Is it possible to train Llama 13B GGML models on your own documents, such as those from Bookstack? If so, where can I find guides or tutorials on how to do this? Specifically, I'm curious if it can be achieved using consumer hardware. Alternatively, is it feasible to rent some hardware for training and then run the model locally? My system specifications are: * CPU: Ryzen 5600G * RAM: 64GB * GPU: RTX 2060 Super (8GB) Any insights, recommendations, or experiences with this process would be highly appreciated!
2023-08-23T12:16:34
https://www.reddit.com/r/LocalLLaMA/comments/15z27i8/question_about_training_llama_13b_ggml_models_on/
GiantFlyingPikachu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z27i8
false
null
t3_15z27i8
/r/LocalLLaMA/comments/15z27i8/question_about_training_llama_13b_ggml_models_on/
false
false
self
1
null
Any GitHub using python that are able to accurately summarise very long text (e.g.40 pages) using localLlama ggml model?
1
Is langchain the only way?
2023-08-23T13:03:07
https://www.reddit.com/r/LocalLLaMA/comments/15z3bi0/any_github_using_python_that_are_able_to/
jackfood2004
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z3bi0
false
null
t3_15z3bi0
/r/LocalLLaMA/comments/15z3bi0/any_github_using_python_that_are_able_to/
false
false
self
1
null
Trying to run TheBloke Llama-2-70B-chat-GPTQ via huggingface, the model loads into RAM then silently fails. No output.
1
[removed]
2023-08-23T13:04:59
https://www.reddit.com/r/LocalLLaMA/comments/15z3d6y/trying_to_run_thebloke_llama270bchatgptq_via/
crono760
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z3d6y
false
null
t3_15z3d6y
/r/LocalLLaMA/comments/15z3d6y/trying_to_run_thebloke_llama270bchatgptq_via/
false
false
default
1
null
why do locally installed LLM's like llama 2 have an arbitrary token limit?
1
I want to be able to upload a document or something and talk with the bot about it without worrying about it forgetting conversations. How far are we from achieving this? I really like the idea of local LLMs; I just don't think I'm willing to jump through the backend hoops they require unless I can get a much larger token limit.
2023-08-23T13:07:20
https://www.reddit.com/r/LocalLLaMA/comments/15z3fem/why_do_locally_installed_llms_like_llama_2_have/
Upper_Judge7054
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z3fem
false
null
t3_15z3fem
/r/LocalLLaMA/comments/15z3fem/why_do_locally_installed_llms_like_llama_2_have/
false
false
self
1
null
Multimodal LLaMA
1
Why is no one making LLaMA multimodal by encoding images, sounds, etc. into text format and then training LLaMA on it? Then it would be able to both generate and understand images. For example, there was [image-gpt](https://github.com/openai/image-gpt). DALL-E works in a similar way as far as I know, and I assume GPT-4 is the same.
2023-08-23T13:18:14
https://www.reddit.com/r/LocalLLaMA/comments/15z3p9e/multimodel_llama/
Cold_Sprinkles6709
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z3p9e
false
null
t3_15z3p9e
/r/LocalLLaMA/comments/15z3p9e/multimodel_llama/
false
false
self
1
{'enabled': False, 'images': [{'id': '5B7hnctWitYC9aOWGUj8sbD5agDT5TAI5P8yww31hdE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8EKPJlK8hqG7aklzk5Tv_4FzSlXOKn96TEX40LC4DJI.jpg?width=108&crop=smart&auto=webp&s=1dcc7b479e3ba2076457ad886219b6d193b6330d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8EKPJlK8hqG7aklzk5Tv_4FzSlXOKn96TEX40LC4DJI.jpg?width=216&crop=smart&auto=webp&s=23f41c0872b178302023e945490fac9d1f0114cc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8EKPJlK8hqG7aklzk5Tv_4FzSlXOKn96TEX40LC4DJI.jpg?width=320&crop=smart&auto=webp&s=251b84580ea15b3409a381d0d1a3de241655c314', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8EKPJlK8hqG7aklzk5Tv_4FzSlXOKn96TEX40LC4DJI.jpg?width=640&crop=smart&auto=webp&s=94b948b8e0dc1efd23228800112db67222596218', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8EKPJlK8hqG7aklzk5Tv_4FzSlXOKn96TEX40LC4DJI.jpg?width=960&crop=smart&auto=webp&s=41dfc847716338161d2ac42e4333094a2710b724', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8EKPJlK8hqG7aklzk5Tv_4FzSlXOKn96TEX40LC4DJI.jpg?width=1080&crop=smart&auto=webp&s=c004f2675e3abab776f425a8b5d799a5fa79b1b8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8EKPJlK8hqG7aklzk5Tv_4FzSlXOKn96TEX40LC4DJI.jpg?auto=webp&s=48e3036325a2ff33cf0591af7d6fe72888d58a7a', 'width': 1200}, 'variants': {}}]}
Tips for Running Whisper and Local LLM on Basic Computer for Summarizing Conversation
1
I would appreciate any help or suggestions. I am trying to create my own local AI medical scribe. I did some tests with some of my coworkers and used Whisper Jax here ([https://huggingface.co/spaces/sanchit-gandhi/whisper-jax](https://huggingface.co/spaces/sanchit-gandhi/whisper-jax)) along with Claude 2 and had incredible success. I am able to have a conversation with someone as if they were a patient and talk naturally with them and have the entire 15 minute conversation transcribed quickly and then copy/paste it into claude and told it to summarize the conversation as a medical note and it was absolutely excellent. It gathers the important points of the conversation and organized it beautifully. I know there are some versions of this already available such as DeepScribe, but I'm hoping I can make my own version with free open-source tools such as Whisper and an LLM such as LLaMA. Whisper Jax on Huggingface and Claude through the browser work perfectly, but I assume I should try to run these completely locally on my computer to avoid sending data to the cloud for HIPAA purposes if I use this with real patients. I am trying to find the simplest way to do this. Keep in mind that I do not have any experience with coding, python, etc so I'm hoping to find a simple program I can install that has a user-friendly and simple interface. And keep in mind that I have a basic computer with an i5 processor and integrated graphics. I see that there are a ton of different versions of whisper than can be run locally and lots of local LLMs too. Any suggestions? My best options so far have been: [https://www.ermine.ai/](https://www.ermine.ai/) for speech to text (which uses transformers.js and the whisper-tiny.en model) - this works well overall! Only issue is that the tiny model does not seem as accurate as the large model from huggingface. Any similar option that perhaps uses the small or base or medium model? I would like it to have the microphone option so I can record directly rather than need to upload an audio file. For LLM, I am still new to researching what model can run on my low-end computer. Any recommendations? I see a lot of info about GPT4ALL and would appreciate any advice regarding a simple model and some UI that makes it easy to use. For the LLM, honestly the main feature I need is just something that is good at summarizing a conversation. I don't need a massive LLM with a ton of knowledge. I appreciate any tips!
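For the transcription half, the reference openai-whisper package is only a few lines to run locally; a minimal sketch (the model size and file name are placeholders, and on an i5 with integrated graphics the "base" or "small" model is a realistic starting point, accepting some accuracy loss versus the large model):

```python
# pip install -U openai-whisper   (also requires ffmpeg installed on the system)
import whisper

model = whisper.load_model("base")                 # "tiny" / "base" / "small" trade speed for accuracy
result = model.transcribe("visit_recording.mp3")   # placeholder audio file
print(result["text"])                              # full transcript, ready to paste into an LLM prompt
```

The transcript can then be passed to whatever local LLM is chosen with a simple "summarize this conversation as a medical note" instruction.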
2023-08-23T13:47:37
https://www.reddit.com/r/LocalLLaMA/comments/15z4g43/tips_for_running_whisper_and_local_llm_on_basic/
jpzsports
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z4g43
false
null
t3_15z4g43
/r/LocalLLaMA/comments/15z4g43/tips_for_running_whisper_and_local_llm_on_basic/
false
false
self
1
{'enabled': False, 'images': [{'id': '1KqPBRboMe364qDgkrkvKFYqwifARuiKtgiVEn6gDzU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/osoEwGhE6Zf2PK7vWNLH0xgCUQDaEh7wgXp3vY8MY40.jpg?width=108&crop=smart&auto=webp&s=dea5a903fc9a9b9348e911f3b050a576923f26f1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/osoEwGhE6Zf2PK7vWNLH0xgCUQDaEh7wgXp3vY8MY40.jpg?width=216&crop=smart&auto=webp&s=bb5f3ba74d7bb15c88836257cd14997ccf3a53e8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/osoEwGhE6Zf2PK7vWNLH0xgCUQDaEh7wgXp3vY8MY40.jpg?width=320&crop=smart&auto=webp&s=da02a71bd2241dda800243cf89eb9ade227cb7a8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/osoEwGhE6Zf2PK7vWNLH0xgCUQDaEh7wgXp3vY8MY40.jpg?width=640&crop=smart&auto=webp&s=d891a8662375a3b13a687fc5e2c0ba91bfeb082b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/osoEwGhE6Zf2PK7vWNLH0xgCUQDaEh7wgXp3vY8MY40.jpg?width=960&crop=smart&auto=webp&s=87d33803023ae064a73d2d09262730c07b26056d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/osoEwGhE6Zf2PK7vWNLH0xgCUQDaEh7wgXp3vY8MY40.jpg?width=1080&crop=smart&auto=webp&s=b8667cd242931a3e56b5355313b2c79763148d28', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/osoEwGhE6Zf2PK7vWNLH0xgCUQDaEh7wgXp3vY8MY40.jpg?auto=webp&s=f35600738298cf048170ceedb5830647021d293c', 'width': 1200}, 'variants': {}}]}
How to use LLMs for bias quantification?
1
[removed]
2023-08-23T14:04:58
https://www.reddit.com/r/LocalLLaMA/comments/15z4w3k/how_to_use_llms_for_bias_quantification/
sbs1799
self.LocalLLaMA
2023-08-24T04:59:41
0
{}
15z4w3k
false
null
t3_15z4w3k
/r/LocalLLaMA/comments/15z4w3k/how_to_use_llms_for_bias_quantification/
false
false
default
1
null
Resources usage in KoboldAi!
1
Hello, I'm using KoboldAI for GGML model inference, and I'm confused about some of the resource usage statistics in KoboldAI. I have: Ryzen 7 5700X, 32 GB RAM, RTX 3060 12 GB, and I'm using a Llama-2 Q4_K_M model, 19 GB in size. 1 - Threads: I noticed that using different numbers of threads doesn't change performance at all. For example, with 7 threads I get 69% CPU usage and 2.1 T/s; when I switch to 14 threads I get 100% CPU usage but the same 2.1 T/s. The only difference I have found is temperature: with fewer threads the temperature is slightly higher (7 threads: 63 to 65 °C; 14 threads: 59 to 62 °C). 2 - cuBLAS vs CLBlast: I have an Nvidia RTX 3060, and I noticed a difference in memory usage between the two. With CLBlast, memory usage is lower than with cuBLAS; cuBLAS uses about 3.8 GB of shared GPU memory from RAM, while CLBlast uses no shared memory (0 GB). With CLBlast the total RAM used is 23 GB plus 9.2 GB of VRAM. With cuBLAS the total RAM used is 26 GB (23 GB of RAM + 3 GB allocated as shared memory with the GPU) plus 9.8 GB of VRAM. There is no performance difference between the two methods: 2.7 to 2.8 T/s. Why is there no improvement when using more threads, why does GPU offloading improve so little (just a 0.6 T/s benefit), and why these differences in memory usage?
2023-08-23T14:07:05
https://www.reddit.com/r/LocalLLaMA/comments/15z4y2k/resources_usage_in_koboldai/
SageQuestN
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z4y2k
false
null
t3_15z4y2k
/r/LocalLLaMA/comments/15z4y2k/resources_usage_in_koboldai/
false
false
self
1
null
I’m going to use LLaMa to generate Unit Test for my company.
1
I’m in an industry that requires 100% code coverage via tests. This is very time consuming so we are trying to find ways to automate the test generation and just have a human manually review them. Any tips before I dive into this?
2023-08-23T14:15:48
https://www.reddit.com/r/LocalLLaMA/comments/15z561u/im_going_to_use_llama_to_generate_unit_test_for/
UnknownEssence
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z561u
false
null
t3_15z561u
/r/LocalLLaMA/comments/15z561u/im_going_to_use_llama_to_generate_unit_test_for/
false
false
self
1
null
Uncensored LLMs that work on languages other than English?
1
I got into using free LLMs by installing bare llama2-uncensored with ollama. It is not as good with Hindi as ChatGPT. Can someone suggest models that work for languages other than English, uncensored preferred? And also how to use them locally on a Mac M1 with 8 GB of RAM. Thank you. I only started this morning and have learned a lot about LLMs since then, but please help me here. Thank you again. Also, I get this error on many models on ollama: Error: Post "http://localhost:11434/api/generate": EOF. Is this because I am trying to run models bigger than 7B on 8 GB of RAM?
2023-08-23T14:17:50
https://www.reddit.com/r/LocalLLaMA/comments/15z582o/uncensored_llms_that_work_on_languages_other_than/
throwfalseaway123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z582o
false
null
t3_15z582o
/r/LocalLLaMA/comments/15z582o/uncensored_llms_that_work_on_languages_other_than/
false
false
self
1
null
Max token prompt size for story writing
1
[removed]
2023-08-23T14:23:42
https://www.reddit.com/r/LocalLLaMA/comments/15z5dhi/max_token_prompt_size_for_story_writing/
Spawndli
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z5dhi
false
null
t3_15z5dhi
/r/LocalLLaMA/comments/15z5dhi/max_token_prompt_size_for_story_writing/
false
false
self
1
null
I have 1 central computer that would run the LLM, is there a way to use an API?
1
I am looking to run a server that hosts the LLM and can be accessed via an API. I will also need to do fine-tuning, but I believe I can do that outside of this API problem I'm asking about. That said, if there are any mature systems/frameworks out there, I'm all ears.
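A common setup is to run an inference backend on the central machine that exposes an OpenAI-compatible HTTP endpoint (several local servers, such as llama-cpp-python's server mode, advertise this), and point every client at it over the LAN; a minimal sketch of the client side under that assumption (host, port, and model name are placeholders):

```python
import requests

API_URL = "http://192.168.1.50:8000/v1/chat/completions"  # placeholder LAN address of the central box

payload = {
    "model": "local-model",  # placeholder; many local servers ignore or remap this field
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what a LoRA adapter is in one sentence."},
    ],
    "temperature": 0.7,
}

resp = requests.post(API_URL, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```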
2023-08-23T15:49:36
https://www.reddit.com/r/LocalLLaMA/comments/15z7oo8/i_have_1_central_computer_that_would_run_the_llm/
pr1vacyn0eb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z7oo8
false
null
t3_15z7oo8
/r/LocalLLaMA/comments/15z7oo8/i_have_1_central_computer_that_would_run_the_llm/
false
false
self
1
null
WizardCoder Multi File Context
1
Is it possible to allow for multifile context when using the WizardCoder-15B model? For example, could I just pass the other files in before the current file? Also, is there a way to tell how much context WizardCoder can handle?
2023-08-23T15:52:32
https://www.reddit.com/r/LocalLLaMA/comments/15z7rhr/wizardcoder_multi_file_context/
kintrith
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z7rhr
false
null
t3_15z7rhr
/r/LocalLLaMA/comments/15z7rhr/wizardcoder_multi_file_context/
false
false
self
1
null
Production-grade OpenAI API drop-in for CUDA & GPTQ models?
1
Hi all - been part of the community since it started, but now finally preparing to deploy something production-grade professionally. Looking at all the different deployment options and frameworks, I've come to the conclusion it's going to save my sanity (and others) to just use a drop-in OpenAI API replacement - reminds me of the same cycle as AWS S3 standards. What's a good starting point, or a few to evaluate? Ooba isn't it - but LocalAI, with ExLlama support, looks promising. There's a bunch of ggml options out there for this, but haven't seen as many for exllama or autogptq. Also open to exploring ctranslate2 or any other ideas folks have. Thanks!
2023-08-23T15:54:15
https://www.reddit.com/r/LocalLLaMA/comments/15z7t47/productiongrade_openai_api_dropin_for_cuda_gptq/
towelpluswater
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z7t47
false
null
t3_15z7t47
/r/LocalLLaMA/comments/15z7t47/productiongrade_openai_api_dropin_for_cuda_gptq/
false
false
self
1
null
Recommended AWS instance type to host Llama 70B Chat
1
Hi everyone, this is my first post; pardon me if it misses any points. I am developing an application which does inference with Llama. I am using 13B-Chat on my local machine, with llama-cpp-python, and I also have an approximately 150-word system prompt. My laptop specifications are: M1 Pro, 64 GB RAM. I built llama.cpp as described in the official documentation to make it work with the Metal GPU. When the application runs inference with Llama, it takes 20 seconds for the model to respond to the first message and 10 seconds for the next ones. That's not bad, I guess. I plan to deploy it to AWS and I'm targeting a response time under 5 seconds; I do not know if that is feasible. Has anyone deployed Llama to AWS EC2 before and been able to achieve high performance? Could you please recommend some instance types? Much appreciated.
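For what it's worth, the settings that usually matter most for latency with llama-cpp-python are GPU offload and context size; a minimal sketch (the model path, prompt format, and layer count are placeholders, and on a GPU instance you would offload as many layers as fit in VRAM):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-13b-chat.Q4_K_M.gguf",  # placeholder path to a quantized model
    n_ctx=2048,        # smaller contexts reduce prompt-processing time
    n_gpu_layers=-1,   # offload all layers to the GPU/Metal if memory allows
    n_threads=8,       # CPU threads for any layers left on the CPU
)

out = llm("### User:\nHello!\n\n### Assistant:\n", max_tokens=128, stop=["### User:"])
print(out["choices"][0]["text"])
```

Much of the per-request delay typically comes from processing the long system prompt, so keeping it short, or keeping the model loaded in a long-running server process instead of reloading per request, tends to help more than raw instance size.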
2023-08-23T15:59:47
https://www.reddit.com/r/LocalLLaMA/comments/15z7yg4/recommended_aws_instance_type_to_host_llama_70b/
jThaiLB
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z7yg4
false
null
t3_15z7yg4
/r/LocalLLaMA/comments/15z7yg4/recommended_aws_instance_type_to_host_llama_70b/
false
false
self
1
null
I fine-tuned ChatGPT 3.5 so you don't have to!
1
I have a chatbot that I programmed to offer some products online and serve customers. Using custom embeddings and prompt engineering, it runs great! Yesterday the news came out about the possibility of fine-tuning GPT 3.5 and I decided to test it. My information base is very good, with more than 500 records, and I've been improving it over time to work well with vector search over embeddings. Well, after 20 dollars and one hour of training, I got an email saying that my fine-tuned model was ready. It is basically just stock GPT 3.5, with almost no customization. It's as if my information is buried deep inside the model somewhere. Yes, I followed all the steps, and my content contains nothing illegal that could have been filtered. It looks like the open source community is ahead of the curve on this one.
2023-08-23T16:03:33
https://www.reddit.com/r/LocalLLaMA/comments/15z82hn/i_finetuned_chatgpt_35_so_you_dont_have_to/
arretadodapeste
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z82hn
false
null
t3_15z82hn
/r/LocalLLaMA/comments/15z82hn/i_finetuned_chatgpt_35_so_you_dont_have_to/
false
false
self
1
null
Never get the download mail after the download request.
1
Is Llama 2 limited by country? I have asked several times for the model download and I never get the email. I'm in Panamá.
2023-08-23T16:37:48
https://www.reddit.com/r/LocalLLaMA/comments/15z9281/never_get_the_download_mail_after_the_download/
lv412
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z9281
false
null
t3_15z9281
/r/LocalLLaMA/comments/15z9281/never_get_the_download_mail_after_the_download/
false
false
self
1
null
Oobabooga text generation webui
1
Anybody know if this can run across multiple servers to split up the model? I know llama.cpp can use MPI to do so, and text-generation-webui can use llama.cpp, but I haven't been able to find any documentation on how to do it.
2023-08-23T16:42:48
https://www.reddit.com/r/LocalLLaMA/comments/15z97jt/oobabooga_text_generation_webui/
amonymus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z97jt
false
null
t3_15z97jt
/r/LocalLLaMA/comments/15z97jt/oobabooga_text_generation_webui/
false
false
self
1
null
Fast Vector Similarity Library, Useful for Working With Llama2 Embedding Vectors
1
I recently found myself computing the similarity between lots of very high dimensional vectors (i.e., sentence embedding vectors from LLMs), and I wanted to try some more powerful measures of similarity/dependency than just Cosine similarity, which seems to be the default for everything nowadays because of its computational efficiency. There are many other more involved measures that can detect more subtle relationships, but the problem is that some of them are quite slow to compute, especially if you're trying to do it in Python. For my favorite measure of statistical dependency, Hoeffding's D, that's true even if you use Numpy. Since I recently learned Rust and wanted to learn how to make Python packages using Rust, I put together this new library that I call Fast Vector Similarity. I was blown away by the performance of Rust and the quality of the tooling while making this. And even though it required a lot of fussing with Github Actions, I was also really impressed with just how easy it was to make a Python library using Rust that could be automatically compiled into wheels for every combination of platform (Linux, Windows, Mac) and Python Version (3.8 through 3.11) and uploaded to PyPi, all triggered by a commit to the repo and handled by Github's servers-- and all for free if you're working on a public repo! Anyway, this library can easily be installed to try out using `pip install fast_vector_similarity`, and you can see some simple demo Python code in the readme to show how to use it. Aside from exposing some very high performance implementations of some very nice similarity measures, I also included the ability to get robust estimates of these measures using the Bootstrap method. Basically, if you have two very high dimensional vectors, instead of using the entire vector to measure similarity, you can take the same random subset of indices from both vectors and compute the similarity of just those elements. Then you repeat the process hundreds or thousands of times and look at the robust average (i.e., throw away the results outside the 25th percentile to 75th percentile and average the remaining ones, to reduce the impact of outliers) and standard deviation of the results. Obviously this is very demanding of performance, but it's still reasonable if you're not trying to compute it for too many vectors. Everything is designed to fully saturate the performance of multi-core machines by extensive use of broadcasting/vectorization and the use of parallel processing via the Rayon library. I was really impressed with how easy and low-overhead it is to make highly parallelized code in Rust, especially compared to coming from Python, where you have to jump through a lot of hoops to use multiprocessing and there is a ton of overhead. Anyway, please let me know what you think. I'm looking to add more measures of similarity if I can find ones that can be efficiently computed (I already gave up on including HSIC because I couldn't get it to go fast enough, even using BLAS/LAPACK).
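To make the bootstrap idea concrete, here is a plain NumPy sketch of the technique described above (this is not the library's API, just an illustration): draw the same random index subset from both vectors, score each subset, then keep only the middle 50% of the scores and average them.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def bootstrap_similarity(x, y, n_draws=500, subset_frac=0.1, seed=0):
    """Robust similarity: same random indices from both vectors on each draw,
    then a trimmed (25th-75th percentile) mean over all draws."""
    rng = np.random.default_rng(seed)
    n, k = len(x), max(2, int(subset_frac * len(x)))
    scores = np.empty(n_draws)
    for i in range(n_draws):
        idx = rng.choice(n, size=k, replace=False)
        scores[i] = cosine(x[idx], y[idx])
    lo, hi = np.percentile(scores, [25, 75])
    kept = scores[(scores >= lo) & (scores <= hi)]
    return kept.mean(), scores.std()

x = np.random.randn(4096)              # stand-ins for two embedding vectors
y = x + 0.5 * np.random.randn(4096)    # a noisy copy, so similarity should be high
print(bootstrap_similarity(x, y))
```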
2023-08-23T17:07:43
https://github.com/Dicklesworthstone/fast_vector_similarity
dicklesworth
github.com
1970-01-01T00:00:00
0
{}
15z9x1x
false
null
t3_15z9x1x
/r/LocalLLaMA/comments/15z9x1x/fast_vector_similarity_library_useful_for_working/
false
false
https://b.thumbs.redditm…87odL7TmV_QQ.jpg
1
{'enabled': False, 'images': [{'id': 'LglpqFeAgThyhA9PZHv5p-_aUWaIueWwNpJLTlLCCjk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Z7EVvyNBUyTJb7oqJ-Zc-9qfhRwJCzpE4NLBW9TlFhs.jpg?width=108&crop=smart&auto=webp&s=e356100a8f038b9d20b5bf693135402b15a3039f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Z7EVvyNBUyTJb7oqJ-Zc-9qfhRwJCzpE4NLBW9TlFhs.jpg?width=216&crop=smart&auto=webp&s=4f9e5b792b36693444b18a9862345489d8cccd72', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Z7EVvyNBUyTJb7oqJ-Zc-9qfhRwJCzpE4NLBW9TlFhs.jpg?width=320&crop=smart&auto=webp&s=6046370f8aa4836210e7e183b85ea7faad8da2dc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Z7EVvyNBUyTJb7oqJ-Zc-9qfhRwJCzpE4NLBW9TlFhs.jpg?width=640&crop=smart&auto=webp&s=b5a433db02fae2c429aeeb20652aa7b488d18faf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Z7EVvyNBUyTJb7oqJ-Zc-9qfhRwJCzpE4NLBW9TlFhs.jpg?width=960&crop=smart&auto=webp&s=29501af8b88c90537117153cd8379fe3033001d9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Z7EVvyNBUyTJb7oqJ-Zc-9qfhRwJCzpE4NLBW9TlFhs.jpg?width=1080&crop=smart&auto=webp&s=b4317f29b913b9972e25bab47408965f2dd21256', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Z7EVvyNBUyTJb7oqJ-Zc-9qfhRwJCzpE4NLBW9TlFhs.jpg?auto=webp&s=2ccb3dab4b0db4c451f3c4321bc09ac617386c50', 'width': 1200}, 'variants': {}}]}
Searching for basic chunking - embedding example
1
Basic chunking - embedding example [help] Hi everyone. Maybe this is a dumb question, but I'm still learning, so please don't roast me. I'd really appreciate it if someone has (or can share links/resources to) a basic example of Python code that takes text, splits it, embeds it, stores it, and recalls it based on a query, **that doesn't use LangChain**. Thanks in advance for any kind of answer.
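A minimal sketch of that pipeline without LangChain, assuming `sentence-transformers` and NumPy are installed; `all-MiniLM-L6-v2` and `my_document.txt` are placeholders, and the "store" is just an in-memory array you can later swap for a real vector database:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")   # placeholder embedding model

def chunk(text, size=500, overlap=50):
    """Naive fixed-size character chunks with a little overlap."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

text = open("my_document.txt").read()             # any plain-text source you have
chunks = chunk(text)
store = model.encode(chunks, normalize_embeddings=True)   # (n_chunks, dim) array = the "store"

def search(query, top_k=3):
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = store @ q                            # cosine similarity, since vectors are normalized
    best = np.argsort(-scores)[:top_k]
    return [(float(scores[i]), chunks[i]) for i in best]

for score, passage in search("What does the document say about pricing?"):
    print(f"{score:.3f}  {passage[:80]}...")
```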
2023-08-23T17:09:39
https://www.reddit.com/r/LocalLLaMA/comments/15z9yz1/searching_for_basic_chunking_embedding_example/
Natural_Speaker7954
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z9yz1
false
null
t3_15z9yz1
/r/LocalLLaMA/comments/15z9yz1/searching_for_basic_chunking_embedding_example/
false
false
self
1
null
Help in fine tuning llama2
1
Hey guys! We have a task where we have to build a QA conversation bot. We have to take data from many books, so initially we were doing RAG with llama2 13B but couldn't get good results [it's not a strictly factual QA bot]. So now we want to fine-tune llama2. Here is the approach I have in mind: 1. Use GPT-3.5 Turbo to build a question-answer set framed as role play. 2. Format the generated QA data to feed into llama2. 3. Now here is the main part, how to do efficient fine-tuning. Steps: 1. Use bitsandbytes to load llama2 13B quantized to 4-bit [is it good to fine-tune in 4-bit or should I do it in fp16?]. 2. Use LoRA or QLoRA via PEFT [which one should I start with?]. 3. Use supervised fine-tuning with TRL, as shown in the sketch below [are there unsupervised ways, and what is RLHF?]. 4. Upload the model to the hub and use it for inference [will it be GGML if I fine-tune with 4-bit QLoRA?]. So guys, please share your views on my approach; your input will be really helpful.
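A rough sketch of steps 1 through 3 with `transformers` + `bitsandbytes` + `peft` + `trl` (the model id, file name, and hyperparameters are placeholders; the output is a LoRA adapter in standard PEFT format, which only becomes GGML if you merge and convert it separately, and RLHF is a separate preference-tuning stage on top of this supervised step):

```python
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from peft import LoraConfig
from trl import SFTTrainer

base = "meta-llama/Llama-2-13b-hf"                # placeholder model id
bnb = BitsAndBytesConfig(load_in_4bit=True,
                         bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token

lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")

dataset = load_dataset("json", data_files="qa_pairs.jsonl", split="train")  # your GPT-3.5-generated QA set

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    peft_config=lora,
    dataset_text_field="text",                    # assumes each record has a formatted "text" field
    max_seq_length=2048,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=1, gradient_accumulation_steps=16,
                           learning_rate=2e-4, logging_steps=10),
)
trainer.train()
trainer.save_model("out/adapter")                 # writes the LoRA adapter (PEFT format)
```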
2023-08-23T17:31:20
https://www.reddit.com/r/LocalLLaMA/comments/15zalrg/help_in_fine_tuning_llama2/
Spiritual-Rub925
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zalrg
false
null
t3_15zalrg
/r/LocalLLaMA/comments/15zalrg/help_in_fine_tuning_llama2/
false
false
self
1
null
Can I train/extend a tokenizer with qlora?
1
I want to pick up special acronyms. I'm reading that QLoRA, since it uses a static snapshot of the model, might not be able to update the tokenizer? Idk. But if I want to make it feasible to train on custom corpus data, I would need to modify the tokenizer. I know how to extend a tokenizer; I'm just not sure if I can continue a tokenizer's training.
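For what it's worth, a sketch of the extension side using the usual `transformers`/`peft` APIs (model id and acronym tokens below are hypothetical): new tokens get fresh embedding rows, and with (Q)LoRA you would typically add `embed_tokens`/`lm_head` to `modules_to_save` so those rows actually get trained; the tokenizer itself is just a vocabulary, so it is the embeddings that do the learning, not the tokenizer.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"                       # placeholder id
tokenizer = AutoTokenizer.from_pretrained(base)
added = tokenizer.add_tokens(["<ACME>", "<FUBAR>"])     # hypothetical acronyms
model = AutoModelForCausalLM.from_pretrained(base)
model.resize_token_embeddings(len(tokenizer))           # allocate embedding rows for the new tokens

lora = LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
    modules_to_save=["embed_tokens", "lm_head"],        # let the new embedding rows be trained too
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
print(f"added {added} tokens; vocab size now {len(tokenizer)}")
```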
2023-08-23T17:48:15
https://www.reddit.com/r/LocalLLaMA/comments/15zb34i/can_i_trainextend_a_tokenizer_with_qlora/
Thistleknot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zb34i
false
null
t3_15zb34i
/r/LocalLLaMA/comments/15zb34i/can_i_trainextend_a_tokenizer_with_qlora/
false
false
self
1
null
Llama 2 70B model running on old Dell T5810 (80GB RAM, Xeon E5-2660 v3, no GPU)
1
2023-08-23T18:39:41
https://v.redd.it/dfx1jrqymwjb1
Ninjinka
/r/LocalLLaMA/comments/15zcj40/llama_2_70b_model_running_on_old_dell_t5810_80gb/
1970-01-01T00:00:00
0
{}
15zcj40
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/dfx1jrqymwjb1/DASHPlaylist.mpd?a=1695494385%2CMTVhMWI0MGExNzM0YzJmMzVjNDUwZDRmOWQwYjkyMzBjOWIxOTkwNWYzMjI2NmMxMTc5ODVlNzFkNWRkNzViZg%3D%3D&v=1&f=sd', 'duration': 173, 'fallback_url': 'https://v.redd.it/dfx1jrqymwjb1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/dfx1jrqymwjb1/HLSPlaylist.m3u8?a=1695494385%2CYjMxZDVlMTgzMzY0NzA1MGUyZjMzZWZiM2I2NjJhNDRlNmI3M2EwNjJmYzJjM2NmNDlmZTAxMjA5NWM1OTA0ZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/dfx1jrqymwjb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_15zcj40
/r/LocalLLaMA/comments/15zcj40/llama_2_70b_model_running_on_old_dell_t5810_80gb/
false
false
https://a.thumbs.redditm…QTjgr81Jo1-8.jpg
1
{'enabled': False, 'images': [{'id': '1mkI6JlSRH7zk982Hf94alzquXrDtUJG3P3CLUb6SMY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/7ViMYuwc8VX37MShnMLNnu886EGtZt-1tURpcHmbpFs.png?width=108&crop=smart&format=pjpg&auto=webp&s=9e9ed2501ce0f0e04a413059a3103ba0f6e8b6cd', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/7ViMYuwc8VX37MShnMLNnu886EGtZt-1tURpcHmbpFs.png?width=216&crop=smart&format=pjpg&auto=webp&s=bb0075694bf415e146643ef0cce273f196decfad', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/7ViMYuwc8VX37MShnMLNnu886EGtZt-1tURpcHmbpFs.png?width=320&crop=smart&format=pjpg&auto=webp&s=fb1ad2fe34e12cd5fab9a7786c80a8e64f1359e0', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/7ViMYuwc8VX37MShnMLNnu886EGtZt-1tURpcHmbpFs.png?width=640&crop=smart&format=pjpg&auto=webp&s=3fa4610ebd404dac1d21e4e93b57967ff4c0e7db', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/7ViMYuwc8VX37MShnMLNnu886EGtZt-1tURpcHmbpFs.png?width=960&crop=smart&format=pjpg&auto=webp&s=ee836c4ceb6a01a0aa39cd8cf35c0821686afb7a', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/7ViMYuwc8VX37MShnMLNnu886EGtZt-1tURpcHmbpFs.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a1709a3109ca7c1cf187a852e3605e22b8540ea7', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/7ViMYuwc8VX37MShnMLNnu886EGtZt-1tURpcHmbpFs.png?format=pjpg&auto=webp&s=7856e754076511289716c4279b267efaf17422a6', 'width': 1920}, 'variants': {}}]}
Help on Fine-Tuning llama 2
1
I recently fine-tuned a Llama 2 model with my dataset. Everything went fine, but the files it gave me were an adapter_model.bin and an adapter_config.json. From what I've read, it is possible to take those two files and merge them with the full base model, producing a complete fine-tuned model based on Llama 2. Does anyone know how to do it?
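In case it helps, the usual recipe with `peft` looks roughly like the sketch below (the model id and adapter path are placeholders); `merge_and_unload()` folds the LoRA weights into the base model so you can save and reload it like any normal checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"      # whichever base model you fine-tuned
adapter_dir = "./my-adapter"              # folder containing adapter_model.bin + adapter_config.json

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, adapter_dir)
merged = model.merge_and_unload()         # bake the LoRA deltas into the base weights

merged.save_pretrained("./llama2-merged")
AutoTokenizer.from_pretrained(base_id).save_pretrained("./llama2-merged")
```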
2023-08-23T18:42:09
https://www.reddit.com/r/LocalLLaMA/comments/15zclp0/help_on_finetuning_llama_2/
danielbrdz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zclp0
false
null
t3_15zclp0
/r/LocalLLaMA/comments/15zclp0/help_on_finetuning_llama_2/
false
false
self
1
null
Finetuning for structured output: Special Tokens vs Free Form Text
1
Hey all, I'm hoping to use LLAMA2 for generating structured output for an app-specific problem. To make sure I have the setup right, I've first been trying to fine-tune LLAMA2 on a toy dataset ([https://huggingface.co/datasets/GEM/viggo/](https://huggingface.co/datasets/GEM/viggo/)). The dataset gives a structured representation for free-form text. So far, I've fine-tuned the base LLAMA-7B (float16 precision) and the 8-bit version of the same (both with LoRA). My approach at the moment is to run training on the dataset as a formatted string > text = f"### Target: {example['target'][i]}\n\n### Repr: {example['meaning_representation'][i]}{tokenizer.eos_token}" This works _okay_, but the model keeps going even after outputting the meaning representation ``` ### Target: SpellForce 3 is a pretty bad game. The developer Grimlore Games is clearly a bunch of no-talent hacks, and 2017 was a terrible year for games anyway. ### Repr: inform(name[SpellForce 3], developer[Grimlore Games], release_year[2017], rating[poor])) ### Explanation start ### Expectation(release_year[2017], rating[poor], developer[Grimlore Games], name[SpellForce 3]) ### Rating(rating[poor]) ### Name(name[SpellForce 3]) ### Developer(developer[Grimlore Games)) ### Explanation text(The developer Grimlore Games is clearly a bunch of no-talent hacks, and 2017 was a terrible year for games anyway.) ### Release year(release_year[2017]) ### Ratings(rating[poor]) - name[SpellForce 3] - developer[ ``` I run all my fine-tunes for 3 epochs. I have a few questions: 1. Is there a better way to train the model to stop output? (a sketch follows below) 2. I see the transformers library has special tokens; should I use them instead of formatted strings with words that carry special meaning? Minor sidenote: the vocab size seems to be 32K, and I'm unsure about the performance implications of changing this nice round number to something like 32003. 3. How have you been finding the "right" hyperparameters for text completion? For example, I'm unsure if 3 epochs is too high or too low.
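For question 1, a hedged sketch of the two usual fixes (the model id is a placeholder): check that an EOS token really ends each tokenized training example, and at inference stop on EOS and/or on your own "###" marker via a stopping criterion rather than relying on the model alone.

```python
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          StoppingCriteria, StoppingCriteriaList)

BASE = "meta-llama/Llama-2-7b-hf"                  # placeholder id

# Training side: Llama tokenizers accept add_eos_token, which is more reliable
# than pasting the "</s>" string into the formatted text yourself.
train_tok = AutoTokenizer.from_pretrained(BASE, add_eos_token=True)
ids = train_tok("### Target: ...\n\n### Repr: inform(...)")["input_ids"]
print(ids[-1] == train_tok.eos_token_id)           # should be True, or the model never sees a stop signal

# Inference side: stop on EOS, and also as soon as a new "###" section starts.
tok = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.float16, device_map="auto")

class StopOnText(StoppingCriteria):
    def __init__(self, tokenizer, stop_text, prompt_len):
        self.tokenizer, self.stop_text, self.prompt_len = tokenizer, stop_text, prompt_len
    def __call__(self, input_ids, scores, **kwargs):
        generated = self.tokenizer.decode(input_ids[0][self.prompt_len:])
        return self.stop_text in generated

prompt = "### Target: SpellForce 3 is a pretty bad game.\n\n### Repr:"
inputs = tok(prompt, return_tensors="pt").to(model.device)
prompt_len = inputs["input_ids"].shape[1]
out = model.generate(**inputs, max_new_tokens=128,
                     eos_token_id=tok.eos_token_id,
                     stopping_criteria=StoppingCriteriaList([StopOnText(tok, "\n###", prompt_len)]))
print(tok.decode(out[0][prompt_len:], skip_special_tokens=True))
```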
2023-08-23T18:59:53
https://www.reddit.com/r/LocalLLaMA/comments/15zd2oc/finetuning_for_structured_output_special_tokens/
Connect-Wonder2348
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zd2oc
false
null
t3_15zd2oc
/r/LocalLLaMA/comments/15zd2oc/finetuning_for_structured_output_special_tokens/
false
false
self
1
{'enabled': False, 'images': [{'id': 'gWLLv3XvXoE0jibXMNpAxCPU4q70lTNDbjeLkd8lqF0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/coclpNB_TaIkunLH8Mj9kIqLlSn0pOGBHKC_vCBksuI.jpg?width=108&crop=smart&auto=webp&s=ff3222becf6cfba5a9ad7d7fc67a5f4a3cecaa19', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/coclpNB_TaIkunLH8Mj9kIqLlSn0pOGBHKC_vCBksuI.jpg?width=216&crop=smart&auto=webp&s=7ff0e571fc3d8efb7fa609ff359a13fa4ec6f428', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/coclpNB_TaIkunLH8Mj9kIqLlSn0pOGBHKC_vCBksuI.jpg?width=320&crop=smart&auto=webp&s=5f6e7b79b58adbb00b7596b75921d776d53d4bff', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/coclpNB_TaIkunLH8Mj9kIqLlSn0pOGBHKC_vCBksuI.jpg?width=640&crop=smart&auto=webp&s=ff0ecd5071e94dbd1b3cb7522eb12ddbd903a669', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/coclpNB_TaIkunLH8Mj9kIqLlSn0pOGBHKC_vCBksuI.jpg?width=960&crop=smart&auto=webp&s=711ecec73d5e51168766b36e110db4e46bb1280c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/coclpNB_TaIkunLH8Mj9kIqLlSn0pOGBHKC_vCBksuI.jpg?width=1080&crop=smart&auto=webp&s=9446bbcffa6a394354b6b6769b5737d2a2591805', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/coclpNB_TaIkunLH8Mj9kIqLlSn0pOGBHKC_vCBksuI.jpg?auto=webp&s=b7450bc287c286db98aa4ac9a0d738473c89161e', 'width': 1200}, 'variants': {}}]}
Need help on AI ChatBot
1
I have built an AI chatbot which I want to use as a business tool. I currently have everything on a local server. What got me stuck is how I would be able to give it to business owners without them having to install all the tedious applications needed. I want it to be as simple as sending it to them, and them installing and using the chatbot feature. Sorry if this comes across as a dumb question.
2023-08-23T19:01:55
https://www.reddit.com/r/LocalLLaMA/comments/15zd4yw/need_help_on_ai_chatbot/
Significant_Front_92
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zd4yw
false
null
t3_15zd4yw
/r/LocalLLaMA/comments/15zd4yw/need_help_on_ai_chatbot/
false
false
self
1
null
Train Stable Beluga on own data
1
Does anybody know how to train Stable Beluga on my own data? I couldn't find any tutorial on it. Or should I use another model?
2023-08-23T19:16:09
https://www.reddit.com/r/LocalLLaMA/comments/15zdjr2/train_stable_beluga_on_own_data/
Ok-Injury8193
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zdjr2
false
null
t3_15zdjr2
/r/LocalLLaMA/comments/15zdjr2/train_stable_beluga_on_own_data/
false
false
self
1
null
Best Practices to Increase Speed of Llama-2 70B?
1
Thanks to everyone in this community for all of the helpful posts! I'm looping over many prompts with the following specs: 1. Instruct v2 version of Llama-2 70B (see [here](https://huggingface.co/upstage/Llama-2-70b-instruct-v2)) 2. 8-bit quantization 3. Two A100s 4. 4k tokens of input text 5. Minimal output text (just a JSON response) Each prompt takes about one minute to complete. I would like to cut down on this time, substantially if possible, since I have thousands of prompts to run through. Here are a few thoughts I've had: * Use Exllama (does anyone know why it speeds things up?) * Use 4-bit quantization so that I can run more jobs in parallel * Try classification. For many of my prompts I want Llama-2 to just answer with 'Yes' or 'No'. Are there ways to speed up Llama-2 for classification inference? (see the sketch below) * Add RAM/CPU cores? I'm using a server where I could request more regular RAM or CPU cores. Not sure if this will make a difference. I'm mostly curious to see whether this runtime is similar to what everyone else is experiencing, and to hear any best practices for speeding it up. Thanks so much!
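On the classification point specifically, one hedged option: skip generation entirely and compare the next-token logits for " Yes" vs " No" after a single forward pass, which removes the decoding loop from each prompt (the model id below is the one linked above; token ids are looked up at runtime rather than hard-coded):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

BASE = "upstage/Llama-2-70b-instruct-v2"           # the model linked above
tok = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, device_map="auto", load_in_8bit=True)

# Assumes " Yes" and " No" each start with a single distinctive token for this tokenizer.
yes_id = tok(" Yes", add_special_tokens=False)["input_ids"][0]
no_id = tok(" No", add_special_tokens=False)["input_ids"][0]

@torch.no_grad()
def classify(prompt: str) -> str:
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    logits = model(**inputs).logits[0, -1]         # next-token logits after the prompt, one forward pass
    return "Yes" if logits[yes_id] > logits[no_id] else "No"

print(classify("Document: ...\n\nQuestion: Does this mention pricing? Answer:"))
```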
2023-08-23T19:31:36
https://www.reddit.com/r/LocalLLaMA/comments/15zdzuw/best_practices_to_increase_speed_of_llama2_70b/
MasterJaguar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zdzuw
false
null
t3_15zdzuw
/r/LocalLLaMA/comments/15zdzuw/best_practices_to_increase_speed_of_llama2_70b/
false
false
self
1
{'enabled': False, 'images': [{'id': 'g2F5tv1cdLIM8WGiiEED81EoP7qdkFXvvRx6UDAWDwA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/bp0xiydBokxkDVuI3KgOtwQEjxWaeBuK1dy0zEMHzMg.jpg?width=108&crop=smart&auto=webp&s=f6930f3dcc161f6968de2c89bb04b8845de6bb39', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/bp0xiydBokxkDVuI3KgOtwQEjxWaeBuK1dy0zEMHzMg.jpg?width=216&crop=smart&auto=webp&s=c72ad2dd9274239cac01c628fa214c69aed63e37', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/bp0xiydBokxkDVuI3KgOtwQEjxWaeBuK1dy0zEMHzMg.jpg?width=320&crop=smart&auto=webp&s=79f200b12956bf3fd53423c4e0b4c3e02bbcca6e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/bp0xiydBokxkDVuI3KgOtwQEjxWaeBuK1dy0zEMHzMg.jpg?width=640&crop=smart&auto=webp&s=a64dea973d55ac3e56812113b5600db4bc88702c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/bp0xiydBokxkDVuI3KgOtwQEjxWaeBuK1dy0zEMHzMg.jpg?width=960&crop=smart&auto=webp&s=e36f2678da191935fb2b3713c6cb048a4f0d84bf', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/bp0xiydBokxkDVuI3KgOtwQEjxWaeBuK1dy0zEMHzMg.jpg?width=1080&crop=smart&auto=webp&s=11ee83775ad44cecc2fe3a3a7efa0c4a107e33cf', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/bp0xiydBokxkDVuI3KgOtwQEjxWaeBuK1dy0zEMHzMg.jpg?auto=webp&s=5814e226a1840b68d88ed47fcfbf5bb86a5da738', 'width': 1200}, 'variants': {}}]}
Accessing oobabooga api with silero_tts extension activated
1
I wondered if anyone has ever tried decoding the webui response when silero_tts is activated, in order to use the generated audio file in another program?
2023-08-23T19:56:40
https://www.reddit.com/r/LocalLLaMA/comments/15zeq3u/accessing_oobabooga_api_with_silero_tts_extension/
aldur15
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zeq3u
false
null
t3_15zeq3u
/r/LocalLLaMA/comments/15zeq3u/accessing_oobabooga_api_with_silero_tts_extension/
false
false
self
1
null