title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns]) | url (string, 0-780 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns]) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool) | media (string, 646-1.8k chars, nullable ⌀) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool) | stickied (bool) | thumbnail (string, 4-213 chars) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, nullable ⌀) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Can I locally train and run a model for my docs? | 1 | I am a noob, so bear with me. I have been using GPT-4, but I have many ideas that it isn't satisfying.
I started tinkering with open source models recently and realized what I'm sure most of you already have: locally run open source models are better suited for this.
My question is, what model would you run locally and fine-tune on private docs? Is anything good enough now, or is waiting for Meta's next open-source release the answer?
Is there any good source of info on this?
Much much appreciated and excited :) | 2023-07-14T07:38:22 | https://www.reddit.com/r/LocalLLaMA/comments/14z9qqi/can_i_locally_train_and_run_a_model_for_my_docs/ | staladine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14z9qqi | false | null | t3_14z9qqi | /r/LocalLLaMA/comments/14z9qqi/can_i_locally_train_and_run_a_model_for_my_docs/ | false | false | self | 1 | null |
Subreddit wiki page for models | 1 | This subreddit has grown quickly, and we're nearing 40,000 members. This great community wouldn't be possible without every single one of you, and thank you all for being here as we push the progress of locally run LLMs.
One of the most common suggestions I see is a sub wiki linking to models. For anyone who doesn't know, this sub has had one since nearly the beginning, and it's been recently updated to streamline it so it's easier to follow: [https://www.reddit.com/r/LocalLLaMA/wiki/models](https://www.reddit.com/r/LocalLLaMA/wiki/models).
The emphasis for the model page this time is on simplicity:
* A selection of the top models will be listed instead of trying to list all of them. The previous page became a little bloated, which was one of the main critiques received.
* Extra clarification has been added for the different models so new members to this sub can more easily understand what to download.
* Current best choices has been changed to current popular choices, and models will now be listed alphabetically to encourage everyone to test what they like best. The distinction between unrestricted and restricted has been removed since this caused some confusion.
The goal is to make it very simple for someone new to find good choices to start with instead of having to sift through the many models themselves, and I'm planning to add example generations for the models to make it even easier to choose.
I also made a few other quick changes to this sub's wiki:
* The Community Projects wiki page has been disabled. Projects will no longer be tracked, but the list of datasets has been moved to the models page and will continue to be updated.
* The Getting Started wiki page has been disabled. Some basic info like the prompt templates will stay in the models page. The FAQ and most of the other information will be moved to the install guide page as soon as possible.
In updating the install guide page, the plan is to add more important info, like koboldcpp use, and include an expanded FAQ which should help reduce the amount of posts for questions.
One last thing to mention is that I've seen a lot of confusion over 30B/33B naming. These both refer to the same thing, the 32.5 billion parameter LLaMA model. I don't think it matters whichever one is used, but most people say 30B and Meta has officially called it LLaMA 30B in some recent papers, like the [QAT paper](https://arxiv.org/abs/2305.17888), so that's why it's used in the sub wiki. | 2023-07-14T09:50:21 | https://www.reddit.com/r/LocalLLaMA/comments/14zc22d/subreddit_wiki_page_for_models/ | Civil_Collection7267 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14zc22d | false | null | t3_14zc22d | /r/LocalLLaMA/comments/14zc22d/subreddit_wiki_page_for_models/ | false | true | self | 1 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
Which model to choose? | 1 | When I use Vicuna-7B-8K-GGML, responses are generated in 15-30 seconds, but they are poor and don't stay on script. When I use 13B, the response is generated in 100+ seconds, but it's more or less acceptable. Is there some way to balance response speed and accuracy? Which model should I choose, what settings, etc.? I have used various launchers such as oobabooga, kobold.cpp, and SillyTavern. Each has its pros and cons, and I don't know which is better to choose.
Sorry for the stupid questions, but I'm just starting to get into this and I don't speak English well, so it's hard for me to search for information.
My computer: CPU - ryzen 3 3200
GPU - Integrated Radeon Vega 8 2GB
RAM - 16GB | 2023-07-14T10:31:22 | https://www.reddit.com/r/LocalLLaMA/comments/14zcsvy/which_model_to_choose/ | roman1338sf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14zcsvy | false | null | t3_14zcsvy | /r/LocalLLaMA/comments/14zcsvy/which_model_to_choose/ | false | false | self | 1 | null |
New embedding models from Jina AI | 4 | 3 new embedding models from Jina AI | 2023-07-14T10:34:57 | https://twitter.com/bo_wangbo/status/1678742625887592448?s=46&t=4Lg1z9tXUANCKLiHwRSk_A | Acrobatic-Site2065 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 14zcv4q | false | {'oembed': {'author_name': 'Bo', 'author_url': 'https://twitter.com/bo_wangbo', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">We have finished 3 embedding models: small/base/large. We’re satisfied with the results as it is v1. Another 1.2 billion parameter model is ongoing. Our next objective at <a href="https://twitter.com/JinaAI_?ref_src=twsrc%5Etfw">@JinaAI_</a> is bridge the performance gap and further expanding context to 2k. <a href="https://t.co/tn7QxrN0oL">pic.twitter.com/tn7QxrN0oL</a></p>— Bo (@bo_wangbo) <a href="https://twitter.com/bo_wangbo/status/1678742625887592448?ref_src=twsrc%5Etfw">July 11, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/bo_wangbo/status/1678742625887592448', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_14zcv4q | /r/LocalLLaMA/comments/14zcv4q/new_embedding_models_from_jina_ai/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'iG1i1dfmtwNI1jUs_bCEAiKN5_q-57ETNoeq7FuZF5E', 'resolutions': [{'height': 86, 'url': 'https://external-preview.redd.it/uISJmidbLGxog7juGGGsFiDw2YEt6CmO3Unu8lHkEbg.jpg?width=108&crop=smart&auto=webp&s=38f9752bfaedf21c6c0220ca6603e4643712d5c2', 'width': 108}], 'source': {'height': 112, 'url': 'https://external-preview.redd.it/uISJmidbLGxog7juGGGsFiDw2YEt6CmO3Unu8lHkEbg.jpg?auto=webp&s=899902a546857e15eb841b14c13e0942c2208a38', 'width': 140}, 'variants': {}}]} |
|
Can anyone recommend 30b or smaller models for any of the following use cases? | 1 | [removed] | 2023-07-14T10:50:04 | https://www.reddit.com/r/LocalLLaMA/comments/14zd4z5/can_anyone_recommend_30b_or_smaller_models_for/ | ricketpipe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14zd4z5 | false | null | t3_14zd4z5 | /r/LocalLLaMA/comments/14zd4z5/can_anyone_recommend_30b_or_smaller_models_for/ | false | false | self | 1 | null |
How are they not screwed | 1 | Meta leaking LLaMA, OpenAI trying to make a computer science student stop development, that weird guy threatening the GPT4All creator's job... it all keeps adding up. (cue Green Day music)
Does anyone else think they're freakin' screwed and the cat is out of the bag? Now they'll forever have to compete with the entire world. As large as these organizations are, they're small compared to the rest of the world. Tiny even. Some of their own employees may be part of the unpaid workforce that helps to advance the local models. Not that they'll ever tell their corporate overlords.
I know this is all still playing out, but I'd just like to know how they're not screwed. And why would Facebook blow their own foot off by "leaking" LLaMA? Someone will say "It's so they can get free work done on it," but that sounds like a very bad idea. Now they're gonna use that free work you wanted done... to kill ya.
Let's say I invented a new and unconventional weapon to use against my enemies. I wouldn't release it to them and let them develop it further for me. *Cause they'll use it on me!* And that's exactly what's going to happen to Facebook. Excuse me, "Meta". Whatever those goofballs are calling themselves today. It seems like they're making the classic blunder, the same one that Microsoft made against Linux and lost. Linux used FOSS to take over the world. Now everything runs Linux. Richard Stallman will have the last laugh.
Any insight here would be greatly appreciated! | 2023-07-14T11:12:14 | https://www.reddit.com/r/LocalLLaMA/comments/14zdkvx/how_are_they_not_screwed/ | rondonjohnald | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14zdkvx | false | null | t3_14zdkvx | /r/LocalLLaMA/comments/14zdkvx/how_are_they_not_screwed/ | false | false | self | 1 | null |
LLaMA-65B on Google colab | 1 | Has anybody tried using the petals library to run llama models?
Is that good? | 2023-07-14T11:20:34 | https://github.com/bigscience-workshop/petals#check-out-tutorials-examples-and-more | Sufficient_Run1518 | github.com | 1970-01-01T00:00:00 | 0 | {} | 14zdqni | false | null | t3_14zdqni | /r/LocalLLaMA/comments/14zdqni/llama65b_on_google_colab/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'XIwpBe6nMtfoIxWSYONUqsCDSyQt6vhYCgugSrElz-Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oPN2VWN7qxnpLDY0M1Vv42xhcWL2zQmAeaX1ZFwS8cw.jpg?width=108&crop=smart&auto=webp&s=4b0267b5be0c53502ff4de484df1785d80711c7a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/oPN2VWN7qxnpLDY0M1Vv42xhcWL2zQmAeaX1ZFwS8cw.jpg?width=216&crop=smart&auto=webp&s=a4fc481693b6f44305b7b940323f8a609375ce13', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/oPN2VWN7qxnpLDY0M1Vv42xhcWL2zQmAeaX1ZFwS8cw.jpg?width=320&crop=smart&auto=webp&s=18f1ec38d6a342a717c1102db8d8a34da30f8bd8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/oPN2VWN7qxnpLDY0M1Vv42xhcWL2zQmAeaX1ZFwS8cw.jpg?width=640&crop=smart&auto=webp&s=d334595271965854394a7a74b4c976895d5b8a47', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/oPN2VWN7qxnpLDY0M1Vv42xhcWL2zQmAeaX1ZFwS8cw.jpg?width=960&crop=smart&auto=webp&s=79d08b4e21b24f740a2b3dfd3e5d8ef20437118e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/oPN2VWN7qxnpLDY0M1Vv42xhcWL2zQmAeaX1ZFwS8cw.jpg?width=1080&crop=smart&auto=webp&s=5dd92b803f1b9db1f44eebc5e42f077e6bc76ecd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/oPN2VWN7qxnpLDY0M1Vv42xhcWL2zQmAeaX1ZFwS8cw.jpg?auto=webp&s=6232cf8fba866b90dafcc5a0a0f6d88446f38407', 'width': 1200}, 'variants': {}}]} |
|
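For reference, the Petals client is meant to be a near drop-in for the usual transformers generate() call; a minimal sketch, assuming the petals package exposes AutoDistributedModelForCausalLM (as its README describes) and that a LLaMA-65B checkpoint such as "enoch/llama-65b-hf" is being served on the public swarm; both of those are assumptions worth verifying before relying on this:

    from transformers import AutoTokenizer
    from petals import AutoDistributedModelForCausalLM  # assumption: class name per the Petals README

    model_name = "enoch/llama-65b-hf"  # hypothetical repo; check which models the swarm currently serves
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

    inputs = tokenizer("What is the capital of France?", return_tensors="pt")["input_ids"]
    outputs = model.generate(inputs, max_new_tokens=32)
    print(tokenizer.decode(outputs[0]))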
Prompt Engineering: How to get open source LLMs to just return a single value or JSON output? | 1 | I'm trying the airoboros 33B model and other similar models, but given a task they always give lengthy explanations, which makes it harder to use them behind an API that extracts the result. Even after telling them to only return a value, they either stop working or don't listen. Does anyone have any insight? | 2023-07-14T11:58:42 | https://www.reddit.com/r/LocalLLaMA/comments/14zei4q/prompt_engineering_how_to_get_open_source_llms_to/ | RepresentativeOdd276 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14zei4q | false | null | t3_14zei4q | /r/LocalLLaMA/comments/14zei4q/prompt_engineering_how_to_get_open_source_llms_to/ | false | false | self | 1 | null |
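One way to get single-value or JSON answers out of a local model is to constrain the decoding to a schema rather than asking nicely in the prompt; a sketch using the Jsonformer package's documented constructor (the model name here is just a placeholder for whatever local checkpoint you load):

    from transformers import AutoModelForCausalLM, AutoTokenizer
    from jsonformer import Jsonformer

    model = AutoModelForCausalLM.from_pretrained("your-local-model")   # placeholder path
    tokenizer = AutoTokenizer.from_pretrained("your-local-model")

    # Only these fields, with these types, can appear in the output.
    schema = {
        "type": "object",
        "properties": {
            "sentiment": {"type": "string"},
            "score": {"type": "number"},
        },
    }

    prompt = "Classify the sentiment of: 'I love this phone.'"
    result = Jsonformer(model, tokenizer, schema, prompt)()
    print(result)  # e.g. {"sentiment": "positive", "score": 0.9}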
Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners | 1 | 2023-07-14T12:28:21 | https://github.com/seonghyeonye/Flipped-Learning | kryptkpr | github.com | 1970-01-01T00:00:00 | 0 | {} | 14zf510 | false | null | t3_14zf510 | /r/LocalLLaMA/comments/14zf510/guess_the_instruction_flipped_learning_makes/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'vYihP8dS4gLkSfPIzSzG_X_1k5KMOPucBJcRXuItcEc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PPrTdZrtnRanFZwZ7iTmJaJDSxiJWwWUod7XCxm7rOU.jpg?width=108&crop=smart&auto=webp&s=5ee0b37241e482409c94b8f50e6a406a2b8babe6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PPrTdZrtnRanFZwZ7iTmJaJDSxiJWwWUod7XCxm7rOU.jpg?width=216&crop=smart&auto=webp&s=5dba42b4a9e883fc77698616bf0f9c62a8eb765d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PPrTdZrtnRanFZwZ7iTmJaJDSxiJWwWUod7XCxm7rOU.jpg?width=320&crop=smart&auto=webp&s=e040acf363e5cc04c6ec92bdfa749a2dcd4fac89', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PPrTdZrtnRanFZwZ7iTmJaJDSxiJWwWUod7XCxm7rOU.jpg?width=640&crop=smart&auto=webp&s=e9d7b1b9c4f4915580b4dd5c636dd9258cd76f62', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PPrTdZrtnRanFZwZ7iTmJaJDSxiJWwWUod7XCxm7rOU.jpg?width=960&crop=smart&auto=webp&s=1e3ec3b6d30ffcdee3cdf7a930f4192eec75931d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PPrTdZrtnRanFZwZ7iTmJaJDSxiJWwWUod7XCxm7rOU.jpg?width=1080&crop=smart&auto=webp&s=12cf34e416f5325313863a7f06ac5c65d463bc01', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PPrTdZrtnRanFZwZ7iTmJaJDSxiJWwWUod7XCxm7rOU.jpg?auto=webp&s=93afc54a0d73ca65d74ee0bd53f46519e0dd9177', 'width': 1200}, 'variants': {}}]} |
||
AI's Are Theft. Let's Fix Them With Generosity. | 1 | 2023-07-14T13:05:54 | https://gethisword.com/tech/makinglegalais.html | heswithjesus | gethisword.com | 1970-01-01T00:00:00 | 0 | {} | 14zfzb2 | false | null | t3_14zfzb2 | /r/LocalLLaMA/comments/14zfzb2/ais_are_theft_lets_fix_them_with_generosity/ | false | false | default | 1 | null |
|
LLaMa Tokenizer Running Live in Javascript (you can type and it'll tokenize it instantly) | 1 | 2023-07-14T14:02:57 | https://bot.co/tokenmonster/?a=llama&b=englishcode-32000-strict-v1&text=alice | Pan000 | bot.co | 1970-01-01T00:00:00 | 0 | {} | 14zhaxx | false | null | t3_14zhaxx | /r/LocalLLaMA/comments/14zhaxx/llama_tokenizer_running_live_in_javascript_you/ | false | false | default | 1 | null |
|
What cards do you use? (new to local LLMs) | 1 | Hello
Are all these 3 options viable?
- An NVIDIA 4090?
- Two 3090s?
- Borrowing A100 computational power (is that possible)?
Thanks | 2023-07-14T14:32:49 | https://www.reddit.com/r/LocalLLaMA/comments/14zi1cx/what_cards_do_you_use_new_to_local_llms/ | Unreal_777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14zi1cx | false | null | t3_14zi1cx | /r/LocalLLaMA/comments/14zi1cx/what_cards_do_you_use_new_to_local_llms/ | false | false | self | 1 | null |
People who run 65B models, what hardware are you using? | 1 | [removed] | 2023-07-14T15:11:04 | https://www.reddit.com/r/LocalLLaMA/comments/14zizwq/people_who_run_65b_models_what_hardware_are_you/ | Necessary_Ad_9800 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14zizwq | false | null | t3_14zizwq | /r/LocalLLaMA/comments/14zizwq/people_who_run_65b_models_what_hardware_are_you/ | false | false | self | 1 | null |
Anyone else think OpenAI's GPT-4.5 will be their last? | 1 | Right now, GPT-4 from OpenAI is totally killing it in the AI game. But guess what? Big tech companies are swooping in like hungry hawks, throwing crazy money at OpenAI's brightest minds. If this keeps up, OpenAI's gonna be left with just their CEO and alignment team. Not exactly a winning lineup.
So, GPT-4 and GPT-4.5 will be their last models. I mean, if all the smart folks jump ship, who's gonna keep OpenAI on top? It's a rough thought, but the way things are going, it's not looking too good for OpenAI's future. | 2023-07-14T15:14:47 | https://www.reddit.com/r/LocalLLaMA/comments/14zj3bm/anyone_else_think_openais_gpt45_will_be_their_last/ | Classic-Dependent517 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14zj3bm | false | null | t3_14zj3bm | /r/LocalLLaMA/comments/14zj3bm/anyone_else_think_openais_gpt45_will_be_their_last/ | false | false | self | 1 | null |
Fine tuning llm model with tabular dataset | 1 | Can someone point me in the right direction for fine-tuning a llama model with tabular data? I want to test it using the Iris dataset. I tried QLoRa, but bitsandbytes is not working. | 2023-07-14T15:15:12 | https://www.reddit.com/r/LocalLLaMA/comments/14zj3pe/fine_tuning_llm_model_with_tabular_dataset/ | sisiwnsjhsjajzjxjs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14zj3pe | false | null | t3_14zj3pe | /r/LocalLLaMA/comments/14zj3pe/fine_tuning_llm_model_with_tabular_dataset/ | false | false | self | 1 | null |
I've gotten allocation on an enterprise server. Which model type has fastest inference on pure CPU/RAM ? | 1 | As stated in the title, my team has approved our test server for my experimentation. The server is actually a hadoop cluster with 64 cpu threads and 125 gb of ram. We have no cuda cores. What kind of model should I be using for the quickest inference? What I really want to know from you guys is if I should be using quantized or unquantized models? My first thought is 4-bit ggml 13 or 30b, but could I possibly run an unquantized model at lower precision on just ram and CPU? Is that faster? Will it fit in the ram? I have no idea. I've been using GPTQ models on my home hardware for the last few months, I'm not sure what's best for this production hardware. | 2023-07-14T15:54:52 | https://www.reddit.com/r/LocalLLaMA/comments/14zk4ly/ive_gotten_allocation_on_an_enterprise_server/ | gentlecucumber | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14zk4ly | false | null | t3_14zk4ly | /r/LocalLLaMA/comments/14zk4ly/ive_gotten_allocation_on_an_enterprise_server/ | false | false | self | 1 | null |
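For a CPU-only box like that, a quantized GGML model through llama-cpp-python is the usual starting point: an unquantized 30B in fp16 needs roughly 65 GB of RAM and runs far slower on CPU, while a 4-bit quant fits easily and is much faster per token. A sketch that pins the thread count to the hardware (the model path is a placeholder):

    from llama_cpp import Llama

    # A 4-bit GGML 30B fits comfortably in 125 GB of RAM; use all 64 threads.
    llm = Llama(model_path="./models/30b.ggmlv3.q4_K_M.bin", n_threads=64, n_ctx=2048)

    out = llm("Summarize the main idea of transfer learning in two sentences.", max_tokens=128)
    print(out["choices"][0]["text"])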
Qlora finetuning loss goes down then up | 1 | [removed] | 2023-07-14T16:02:34 | https://www.reddit.com/r/LocalLLaMA/comments/14zkc67/qlora_finetuning_loss_goes_down_then_up/ | gptzerozero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14zkc67 | false | null | t3_14zkc67 | /r/LocalLLaMA/comments/14zkc67/qlora_finetuning_loss_goes_down_then_up/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'cwypaW59DjJSa-KBKsrZOZLM9j_X-7q4niA6gIOGBk8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/fxtT37yPRjz2htsvKnrQ-DPJshMi3ZcifA3T7oYVXE8.jpg?width=108&crop=smart&auto=webp&s=309899434daa150269d8aaa4f8149a1c5633d123', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/fxtT37yPRjz2htsvKnrQ-DPJshMi3ZcifA3T7oYVXE8.jpg?width=216&crop=smart&auto=webp&s=5ff0e3aef71aa697fd20fff8167a1f9907bea07b', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/fxtT37yPRjz2htsvKnrQ-DPJshMi3ZcifA3T7oYVXE8.jpg?width=320&crop=smart&auto=webp&s=1d84887a6c9560a50fac397d10094ee778b340d7', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/fxtT37yPRjz2htsvKnrQ-DPJshMi3ZcifA3T7oYVXE8.jpg?width=640&crop=smart&auto=webp&s=97f9fe7a94d4938096d98cd14cd5f69e2f51aa85', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/fxtT37yPRjz2htsvKnrQ-DPJshMi3ZcifA3T7oYVXE8.jpg?width=960&crop=smart&auto=webp&s=9fd2754a6e8b44cbb6316052743763925d0d30c9', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/fxtT37yPRjz2htsvKnrQ-DPJshMi3ZcifA3T7oYVXE8.jpg?width=1080&crop=smart&auto=webp&s=f308934778f41a38a5a83cabe488dc58c2081e0a', 'width': 1080}], 'source': {'height': 1328, 'url': 'https://external-preview.redd.it/fxtT37yPRjz2htsvKnrQ-DPJshMi3ZcifA3T7oYVXE8.jpg?auto=webp&s=37582ebb34f5dfa8ca37d8ce59d942ab646ea318', 'width': 2528}, 'variants': {}}]} |
Best alternative to attention so far? | 1 | With the recent paper "Lost in the Middle" casting more doubt on attention, what architectures do you think are a good alternative?
For me, Hyena Hierarchy and RWKV look promising, and hopefully they will get enough attention.
RWKV with its RNN architecture gives me hope that "memory" could be something learned by a model instead of engineered. | 2023-07-14T16:35:32 | https://www.reddit.com/r/LocalLLaMA/comments/14zl6in/best_alternative_to_attention_so_far/ | KillerX629 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14zl6in | false | null | t3_14zl6in | /r/LocalLLaMA/comments/14zl6in/best_alternative_to_attention_so_far/ | false | false | self | 1 | null |
Best model for writing lyrics. | 1 | Hey guys, what are the best uncensored models for helping to write song lyrics? | 2023-07-14T16:57:47 | https://www.reddit.com/r/LocalLLaMA/comments/14zlqty/best_model_for_writing_lyrics/ | Brarblaze | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14zlqty | false | null | t3_14zlqty | /r/LocalLLaMA/comments/14zlqty/best_model_for_writing_lyrics/ | false | false | self | 1 | null |
QLoRA for Pretraining Coming | 1 | 2023-07-14T16:59:42 | https://twitter.com/tim_dettmers/status/1679637452758355968?s=46&t=kra5MqBsEM_kbG-sZiyMJw | caesarten | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 14zlsvc | false | {'oembed': {'author_name': 'Tim Dettmers', 'author_url': 'https://twitter.com/Tim_Dettmers', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Continued pretraining with QLoRA is just around the corner! A second pretraining of models like Falcon-40B in 4-bit would be super-efficient. <a href="https://t.co/wC86JsjZGD">https://t.co/wC86JsjZGD</a></p>— Tim Dettmers (@Tim_Dettmers) <a href="https://twitter.com/Tim_Dettmers/status/1679637452758355968?ref_src=twsrc%5Etfw">July 13, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/Tim_Dettmers/status/1679637452758355968', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_14zlsvc | /r/LocalLLaMA/comments/14zlsvc/qlora_for_pretraining_coming/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'EpcCrB5c_ymPagW1k3nntGnbJWvH6gTAHK2mT-AAYhQ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/GmQlDN0h6qYchuR03YMliQv8abrv7qqGPzUlu7RymSU.jpg?width=108&crop=smart&auto=webp&s=d3a3d4d56e0ee1846e9ed5a45b49815ebf55a2a4', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/GmQlDN0h6qYchuR03YMliQv8abrv7qqGPzUlu7RymSU.jpg?auto=webp&s=124d4f0980f44718b0dd1a13fd9677d835a077fb', 'width': 140}, 'variants': {}}]} |
||
Qlora finetuning loss goes down then up! | 1 | [removed] | 2023-07-14T17:59:59 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14znelj | false | null | t3_14znelj | /r/LocalLLaMA/comments/14znelj/qlora_finetuning_loss_goes_down_then_up/ | false | false | default | 1 | null |
||
Qlora finetuning loss goes down then up | 1 | [removed] | 2023-07-14T18:05:44 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14znk9l | false | null | t3_14znk9l | /r/LocalLLaMA/comments/14znk9l/qlora_finetuning_loss_goes_down_then_up/ | false | false | default | 1 | null |
||
A direct comparison between llama.cpp, AutoGPTQ, ExLlama, and transformers perplexities | 1 | 2023-07-14T18:12:52 | https://oobabooga.github.io/blog/posts/perplexities/ | oobabooga4 | oobabooga.github.io | 1970-01-01T00:00:00 | 0 | {} | 14znqen | false | null | t3_14znqen | /r/LocalLLaMA/comments/14znqen/a_direct_comparison_between_llamacpp_autogptq/ | false | false | default | 1 | null |
|
Training Foundational models with the Speed of Qlora | 1 | [https://arxiv.org/abs/2307.05695](https://arxiv.org/abs/2307.05695)
​
A Claude-generated summary of the paper:
Paragraph 1: The paper explores low-rank training techniques as an alternative approach to training large neural networks. It introduces a novel method called ReLoRA that utilizes low-rank updates to train high-rank networks. ReLoRA starts by doing some initial full-rank training, then switches to low-rank training with LoRA, and periodically restarts/reinitializes the low-rank factors to increase the effective rank of the total update. It also uses a jagged learning rate schedule and partial optimizer resets to stabilize training after restarts.
Paragraph 2: ReLoRA was evaluated on transformer language models up to 350M parameters trained on the C4 dataset. It achieved comparable performance to regular full-rank training, while only training a small fraction of parameters at a time. The efficiency and performance gap compared to full training improved with larger model sizes. This suggests ReLoRA could enable efficient training of multi-billion parameter models.
Paragraph 3: Ablation studies demonstrated the importance of the different components of ReLoRA. The restarts and jagged learning rate schedule were critical for good performance and training stability. The warm start phase was also very beneficial, drawing similarities to lottery ticket hypothesis. Analysis of the singular values showed ReLoRA results in weight updates that better resemble full-rank training compared to standard low-rank methods.
Paragraph 4: ReLoRA reduces the number of trainable parameters at any point, which enables larger batch sizes, lower memory usage, and faster training. The frozen parameters can also be quantized to further reduce costs. The efficiency gains are expected to significantly increase for models over 1B parameters based on initial experiments.
Paragraph 5: The development of efficient low-rank training techniques like ReLoRA could provide insights into why overparametrization is needed and the trainability of large neural nets. The results suggest low-rank methods are a promising approach to improve training efficiency, especially for massive multi-billion parameter models. | 2023-07-14T19:35:36 | https://www.reddit.com/r/LocalLLaMA/comments/14zpu7m/training_foundational_models_with_the_speed_of/ | FreezeproofViola | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14zpu7m | false | null | t3_14zpu7m | /r/LocalLLaMA/comments/14zpu7m/training_foundational_models_with_the_speed_of/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
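A schematic of the restart loop described above (train a low-rank adapter for a while, merge it into the base weights, re-initialize a fresh adapter, and reset the optimizer with a short warmup after each restart), written against the peft API. This is an illustrative sketch of the ReLoRA idea, not the authors' code, and the hyperparameters are arbitrary:

    import torch
    from peft import LoraConfig, get_peft_model

    def relora_train(base_model, dataloader, n_restarts=5, steps_per_restart=1000, lr=3e-4):
        model = base_model
        for restart in range(n_restarts):
            # Fresh low-rank adapter for this training segment.
            model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))
            optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
            # "Jagged" schedule: short warmup after every restart.
            scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lambda s: min(1.0, s / 50))
            for step, batch in zip(range(steps_per_restart), dataloader):
                loss = model(**batch).loss          # assumes batches contain labels
                loss.backward()
                optimizer.step(); scheduler.step(); optimizer.zero_grad()
            # Merge the adapter into the base weights so the next adapter adds new effective rank.
            model = model.merge_and_unload()
        return model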
Decent Local web crawler? | 1 | I've tried a few localGPT frameworks like LocalAI, Text-UI, and AutoGPT and none of them seem to have a decent web-crawler, as far as i could tell. Does anyone have a good recommendation for a localGPT setup that does that? Thanks! | 2023-07-14T20:20:08 | https://www.reddit.com/r/LocalLLaMA/comments/14zqyo6/decent_local_web_crawler/ | basemaly | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14zqyo6 | false | null | t3_14zqyo6 | /r/LocalLLaMA/comments/14zqyo6/decent_local_web_crawler/ | false | false | self | 1 | null |
Model for business intelligence, data analysis, querying, schema awareness | 1 | I'm looking for a local or cloud hosted base model + tuning recommendations for use in an application that can take a plain English analytical prompt like "Top 100 page URLs by scroll depth", and with context of my data warehouse schema, can determine what dataset/tables and query to perform. I may also add additional features for parameterizing fields like date ranges and filters in the WHERE clause. I think ChatGPT could be pretty good here but this application needs to be local or privately hosted.
If you could drop me some tools, tech, or similar apps to look into I'd appreciate it. Thanks! | 2023-07-14T20:53:16 | https://www.reddit.com/r/LocalLLaMA/comments/14zrt6k/model_for_business_intelligence_data_analysis/ | Crypty | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14zrt6k | false | null | t3_14zrt6k | /r/LocalLLaMA/comments/14zrt6k/model_for_business_intelligence_data_analysis/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'PUYR1RJqWPAYo-JUUAriIlDT7iq05e52MA3tic-2M8w', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/5axMISfCkzw4wyBR553ISL3rA9MWqyHNtEqIN18tmpQ.jpg?width=108&crop=smart&auto=webp&s=af544f2fea9be3a675bd254f0e3d0e172f8d534f', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/5axMISfCkzw4wyBR553ISL3rA9MWqyHNtEqIN18tmpQ.jpg?width=216&crop=smart&auto=webp&s=22f6a78d54a0bc6d17d0b13d3984429ed7af8d22', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/5axMISfCkzw4wyBR553ISL3rA9MWqyHNtEqIN18tmpQ.jpg?width=320&crop=smart&auto=webp&s=561cce950fa17949bed6d52eae8ed92a5c792ce6', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/5axMISfCkzw4wyBR553ISL3rA9MWqyHNtEqIN18tmpQ.jpg?width=640&crop=smart&auto=webp&s=536212d55a2367b3ae438f8fcbed79e5c3d42b49', 'width': 640}, {'height': 501, 'url': 'https://external-preview.redd.it/5axMISfCkzw4wyBR553ISL3rA9MWqyHNtEqIN18tmpQ.jpg?width=960&crop=smart&auto=webp&s=c24357e8923a38ad916a956701e53b10139a034b', 'width': 960}, {'height': 564, 'url': 'https://external-preview.redd.it/5axMISfCkzw4wyBR553ISL3rA9MWqyHNtEqIN18tmpQ.jpg?width=1080&crop=smart&auto=webp&s=c66d053be81c30af7d0d1573e12fe261575e35f3', 'width': 1080}], 'source': {'height': 627, 'url': 'https://external-preview.redd.it/5axMISfCkzw4wyBR553ISL3rA9MWqyHNtEqIN18tmpQ.jpg?auto=webp&s=4107ebc844bc6350befdc70499132c1c9bf2b0b8', 'width': 1200}, 'variants': {}}]} |
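Locally, the usual pattern for this is to put the warehouse schema into the prompt and ask for SQL only, then run the returned query yourself; a minimal sketch with llama-cpp-python (the model path and schema are placeholders):

    from llama_cpp import Llama

    llm = Llama(model_path="./models/your-sql-capable-model.ggmlv3.q4_K_M.bin", n_ctx=2048)

    schema = "TABLE page_events(page_url TEXT, scroll_depth REAL, event_date DATE)"
    question = "Top 100 page URLs by scroll depth"

    prompt = (
        "You are a SQL generator. Given the schema below, answer with a single SQL query "
        "and nothing else.\n"
        f"Schema:\n{schema}\n"
        f"Question: {question}\nSQL:"
    )

    sql = llm(prompt, max_tokens=200, stop=["\n\n"])["choices"][0]["text"]
    print(sql)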
Experience with structured responses using local llamas (jsonformer, guidance, gorilla, etc?) | 1 | I was about to post a reply to [this thread](https://www.reddit.com/r/LocalLLaMA/comments/14ywnmh/gpt_code_interpreter_is_just_a_toolformer_with/), but it got me thinking that perhaps this topic deserves a thread of its own.
On the local llama front, getting LLM's to output structured responses is, at least for me, the next frontier. My projects can only get so far with ad-hoc natural language responses. So, what are the main technologies for getting structured responses? And which has the best ergonomics? Are different tools right for different situations? As far as I know, we've got:
- JSONFormer
- Microsoft Guidance
- Gorilla 7B (model)
... What else?
I'm not including Toolformer in this list because as far as I know, it isn't a tool that we can reliably use in apps. Langchain probably fits in here somewhere as an adapter or part of an orchestration system, but it isn't useful in getting an LLM to output structured responses, as far as I know.
Guidance looks to be the most complete, and is slowly being integrated with text-generation-webui and exllama (afaik). I've been hesitant to get started with Guidance since as far as I know, you have to use their wrapper, which means no exllama. I've been totally spoiled by Exllama's performance, but I should probably get over that and just start learning Guidance since it seems to be the most robust solution out there. I don't especially love that guidance relies on passing around strings of mustache templates, but maybe the existing tooling for parsing mustache makes it semi-tolerable in an IDE. Would be curious to hear others' experience.
While I'm ranting - I really wish something like Guidance existed that could accept something more like an AST - or some sort of data structure - instead of a string. This seems like the most natural way to provide structured response templates for an LLM. Based on my limited knowledge, this whole area seems like something that would benefit from some lexer/parser wisdom.
Anyhoo, i would love to spend the weekend hacking on getting my local llamas to speak structured output. Any guidance (pun incidental) would be appreciated.
I'm mainly interested in how people have actually used these technologies, whether successful or not, as opposed to hand-waves about what is theoretically possible.
Also, it's worth mentioning, for anyone who isn't familiar with this general approach, that these technologies generally work by constraining what the LLM is allowed to return on a token-by-token basis. Normally, our prompts start with a specified string, then the LLM is allowed to continue the entire prompt to completion. Instead, (as i understand it), Guidance works by only allowing certain tokens/sequences to be generated at particular parts of the prompt, and then once part of the generation is done, it fills in more of the generation with pre-determined tokens, and then repeats the process. It can be thought of as a "fill in the middle" prompt with several "holes" to fill, and structural constraints on those holes (number, string, list, etc). If anyone can explain this better, please do!
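For anyone who wants to see the token-by-token constraint idea concretely: Hugging Face transformers already exposes a hook for it, since generate() accepts a prefix_allowed_tokens_fn that is called at every step and returns the token ids allowed next. A toy sketch that only lets the model emit digit tokens (the model name is just a small example, any causal LM works):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Precompute the ids of tokens that decode to digits only.
    digit_ids = [i for i in range(len(tok)) if tok.decode([i]).strip().isdigit()]

    def only_digits(batch_id, input_ids):
        # Called once per generation step; whatever it returns is the allowed vocabulary.
        return digit_ids

    inputs = tok("The answer is:", return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=5,
                         prefix_allowed_tokens_fn=only_digits)
    print(tok.decode(out[0]))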
So, questions:
- **Has anyone used any of these technologies successfully in either hobby projects or research?**
**- Has anyone run into limitations/considerations for how the LLM behaves different when its output is constrained?** Does it work less well at tasks where it would otherwise perform better when its output is constrained? Does it require special prompting? | 2023-07-14T21:08:12 | https://www.reddit.com/r/LocalLLaMA/comments/14zs7jp/experience_with_structured_responses_using_local/ | tronathan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14zs7jp | false | null | t3_14zs7jp | /r/LocalLLaMA/comments/14zs7jp/experience_with_structured_responses_using_local/ | false | false | self | 1 | null |
Best Models for Chat/Companion | 1 | Hi, I'm just getting into using llama-cpp and checking out ggml models like theblokes Samantha and Wizardlm etc... I'm looking to create a personalized chatbot, one that I can create a stable persona for and give long-term memory to. I'd love to hear people's experience chatting for various llama like models and what sort of "personalities" each model has. Thanks! | 2023-07-14T22:34:51 | https://www.reddit.com/r/LocalLLaMA/comments/14zudes/best_models_for_chatcompanion/ | jacobgolden | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14zudes | false | null | t3_14zudes | /r/LocalLLaMA/comments/14zudes/best_models_for_chatcompanion/ | false | false | self | 1 | null |
Qlora finetuning loss goes down then up | 1 | Hi, I am doing QLoRA finetunes on WizardLM 30B with an alpaca-style dataset, and the eval loss goes down to about 1.0 at 1 epoch, then starts going back up. I am running a slightly modified version of the qlora finetune script.
https://preview.redd.it/4vo5iuhpg0cb1.png?width=2528&format=png&auto=webp&s=583c296a9c8af0d9a6dba9f4b56bbab2d35bcc0c
Using default qlora finetune values like 3e-4 lr, dropout 0.05, rank 8 alpha 16, cutoff len 256. Training dataset has 11,000 rows. Train test split uses test size of 15%.
What do you think has gone wrong with my finetuning? Shouldn't the loss keep going down till about 3 epochs? | 2023-07-14T22:58:32 | https://www.reddit.com/r/LocalLLaMA/comments/14zux8y/qlora_finetuning_loss_goes_down_then_up/ | gptzerozero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14zux8y | false | null | t3_14zux8y | /r/LocalLLaMA/comments/14zux8y/qlora_finetuning_loss_goes_down_then_up/ | false | false | 1 | null |
|
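Eval loss turning back up after roughly one epoch on an 11k-row set usually just means the adapter has started to overfit; the practical fix is to evaluate often and keep the best checkpoint rather than training to a fixed 3 epochs. A sketch of that setup with peft + transformers, mirroring the hyperparameters in the post (base_model, train_ds, and eval_ds are assumed to be loaded elsewhere):

    from peft import LoraConfig, get_peft_model
    from transformers import TrainingArguments, Trainer, EarlyStoppingCallback

    peft_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
    model = get_peft_model(base_model, peft_config)   # base_model loaded elsewhere

    args = TrainingArguments(
        output_dir="qlora-out",
        learning_rate=3e-4,
        num_train_epochs=3,
        evaluation_strategy="steps",
        eval_steps=200,
        save_strategy="steps",
        save_steps=200,
        load_best_model_at_end=True,        # roll back to the lowest eval loss
        metric_for_best_model="eval_loss",
        greater_is_better=False,
    )

    trainer = Trainer(model=model, args=args,
                      train_dataset=train_ds, eval_dataset=eval_ds,   # assumed datasets
                      callbacks=[EarlyStoppingCallback(early_stopping_patience=3)])
    trainer.train()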
Orca-Mini V2 13B is now the 5th highest scoring 13B on Open LLM Leaderboard, with only 0.9 points behind the highest scoring, Wizard Vicuna Uncensored. Now it is the 21th highest scoring model in Open LLM Leaderboard. | 1 | 2023-07-15T00:02:30 | https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard | bot-333 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 14zwf8w | false | null | t3_14zwf8w | /r/LocalLLaMA/comments/14zwf8w/orcamini_v2_13b_is_now_the_5th_highest_scoring/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'EN0-abblERL52DxeoNzcxdkhvXEwLdZMJTS58Umjs64', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=108&crop=smart&auto=webp&s=6fbb309f983333cbaf528bd40f8d6ffb39877704', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=216&crop=smart&auto=webp&s=1ae10c5a53638209dee07b017628d2a1fadc8d05', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=320&crop=smart&auto=webp&s=cf36565d3bac3086aaea4458c31609ff1b2c00b3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=640&crop=smart&auto=webp&s=8e182cefcf8da97d7b4369734149986feca334e5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=960&crop=smart&auto=webp&s=7699d0ad09185e2f560115cae5cb71e907073327', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=1080&crop=smart&auto=webp&s=7b11f6f2294899964ec8ed081777f4b6e19723b6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?auto=webp&s=81db4d9e1dd01a76f499e499f78aed3478ae6658', 'width': 1200}, 'variants': {}}]} |
||
Best way to extract text from PDF docs for finetuning models? | 1 | I have a bunch of (non-English) large PDF documents, and I want to extract the text out of them so I can then do finetuning of some models (not decided on which one yet), and iterate over the data.
What's the best way to take a PDF doc and convert it into Unicode, while maintaining some semblance of formatting? | 2023-07-15T05:17:00 | https://www.reddit.com/r/LocalLLaMA/comments/1502uc3/best_way_to_extract_text_from_pdf_docs_for/ | ispeakdatruf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1502uc3 | false | null | t3_1502uc3 | /r/LocalLLaMA/comments/1502uc3/best_way_to_extract_text_from_pdf_docs_for/ | false | false | self | 1 | null |
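PyMuPDF handles Unicode text (including most non-Latin scripts) and keeps a rough sense of layout if you extract by blocks; a minimal sketch:

    import fitz  # PyMuPDF

    def pdf_to_text(path):
        doc = fitz.open(path)
        pages = []
        for page in doc:
            # "blocks" keeps paragraphs/columns grouped; use "text" for a plain dump.
            blocks = page.get_text("blocks")
            pages.append("\n".join(b[4] for b in blocks))  # b[4] is the block's text
        return "\n\n".join(pages)

    print(pdf_to_text("report.pdf")[:1000])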
I've uploaded some 33B popular models merged with bhenrym14 16K LoRA! | 1 | 2023-07-15T06:45:54 | https://huggingface.co/Panchovix | panchovix | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1504h41 | false | null | t3_1504h41 | /r/LocalLLaMA/comments/1504h41/ive_uploaded_some_33b_popular_models_merged_with/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'OmJ2YIgaV9Z4EA8790ooSFw3MeB_MqqU_mgScdu7Oi4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wCC4k8kik-r1Oe_hkAmZt8ha_zvstLjkjCbhaN1xmxA.jpg?width=108&crop=smart&auto=webp&s=b9a9640fcab472b3e61358def747d9f36f05f24b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/wCC4k8kik-r1Oe_hkAmZt8ha_zvstLjkjCbhaN1xmxA.jpg?width=216&crop=smart&auto=webp&s=ecd2ef5b7cf34caf05c9cad390a6b91d1d854d75', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/wCC4k8kik-r1Oe_hkAmZt8ha_zvstLjkjCbhaN1xmxA.jpg?width=320&crop=smart&auto=webp&s=c8baa9b96fefacebb3c4f2ad4712b02cf66fa8d9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/wCC4k8kik-r1Oe_hkAmZt8ha_zvstLjkjCbhaN1xmxA.jpg?width=640&crop=smart&auto=webp&s=5fabe2a0cd2717e236e6ddf7780ccfe29fc18933', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/wCC4k8kik-r1Oe_hkAmZt8ha_zvstLjkjCbhaN1xmxA.jpg?width=960&crop=smart&auto=webp&s=3ff32e7933f6c78ef18938b581e6b5f28a1874f2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/wCC4k8kik-r1Oe_hkAmZt8ha_zvstLjkjCbhaN1xmxA.jpg?width=1080&crop=smart&auto=webp&s=2502ab7883bda5b97808b6abf753335d5e947eb1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/wCC4k8kik-r1Oe_hkAmZt8ha_zvstLjkjCbhaN1xmxA.jpg?auto=webp&s=576b608d0ff5b4c3d2d32899b7fcdd5030adf9f9', 'width': 1200}, 'variants': {}}]} |
||
Run GGML models in Google Colab with gpt4all | 1 | It's slow, but it works.
    !pip -q install gpt4all

Use this code to download the model.

    import requests
    from pathlib import Path
    from tqdm import tqdm

    Path('./models/ggml-model.bin').parent.mkdir(parents=True, exist_ok=True)

    # Example model. Check https://github.com/nomic-ai/gpt4all for the latest models.
    url = 'https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GGML/resolve/main/wizardlm-13b-v1.1.ggmlv3.q2_K.bin'

    # Send a GET request to the URL to download the file. Stream since it's large.
    response = requests.get(url, stream=True)

    # Open the file in binary mode and write the contents of the response to it in chunks.
    # This is a large file, so be prepared to wait.
    with open('./models/ggml-model.bin', 'wb') as f:
        for chunk in tqdm(response.iter_content(chunk_size=8192)):
            if chunk:
                f.write(chunk)

Load the model.

    from gpt4all import GPT4All

    local_path = './models/'
    model = GPT4All(model_name="ggml-model.bin", model_path=local_path)

Build the prompt (replace PROMPT with your actual question).

    prompt = '''
    A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
    USER: PROMPT
    ASSISTANT:
    '''

Generate.

    output = model.generate(prompt,
                            max_tokens=50,
                            temp=0.7,
                            top_k=40,
                            top_p=0.1,
                            repeat_penalty=1.18,
                            repeat_last_n=64,
                            n_batch=8,
                            n_predict=None,
                            streaming=False)
    print(output)
Is there a way to stream the output? | 2023-07-15T07:55:00 | https://www.reddit.com/r/LocalLLaMA/comments/1505qcw/run_ggml_models_in_google_colab_with_gpr4all/ | Sufficient_Run1518 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1505qcw | false | null | t3_1505qcw | /r/LocalLLaMA/comments/1505qcw/run_ggml_models_in_google_colab_with_gpr4all/ | false | false | self | 1 | null |
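On the streaming question: the same generate() call can yield tokens as they are produced when streaming=True (assuming the Python bindings return a generator in that mode, as their docs describe):

    # Stream tokens instead of waiting for the full completion.
    for token in model.generate(prompt, max_tokens=200, temp=0.7, streaming=True):
        print(token, end="", flush=True)
    print()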
📢Excited to announce https://github.com/intel/intel-extension-for-transformers v1.1 released. Congrats team! 🔥Supported efficient fine-tuning and inference on Xeon SPR and Habana Gaudi 🎯Enabled 4-bits LLM inference on Xeon (better than llama.cpp); improved lm-eval-harness for multiple frameworks | 65 | 2023-07-15T08:35:58 | https://github.com/intel/intel-extension-for-transformers | FHSenpai | github.com | 1970-01-01T00:00:00 | 0 | {} | 1506gl4 | false | null | t3_1506gl4 | /r/LocalLLaMA/comments/1506gl4/excited_to_announce/ | false | false | 65 | {'enabled': False, 'images': [{'id': 'fyx1wvrTIYvxXp8XNLlDTh0Kv2PFAaGOPRq0ajk16OY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xO7SfF40bkPdefBQIkfXZNYaTMzTYphmAv_iW81HA28.jpg?width=108&crop=smart&auto=webp&s=f74b438b0c0cbad3302d192d68ab1ad18da1d4e6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xO7SfF40bkPdefBQIkfXZNYaTMzTYphmAv_iW81HA28.jpg?width=216&crop=smart&auto=webp&s=0073238d3a976e89c01ff481538cbb341f692728', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xO7SfF40bkPdefBQIkfXZNYaTMzTYphmAv_iW81HA28.jpg?width=320&crop=smart&auto=webp&s=dcaa9f89007d1884c481d8bbe2f3ff0f1268ba7e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xO7SfF40bkPdefBQIkfXZNYaTMzTYphmAv_iW81HA28.jpg?width=640&crop=smart&auto=webp&s=e76c51552aaacc9771c6007849ce1840e32cad9f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xO7SfF40bkPdefBQIkfXZNYaTMzTYphmAv_iW81HA28.jpg?width=960&crop=smart&auto=webp&s=c84dd59d6ce5638f26c6128eea7c1efc906bb6cb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xO7SfF40bkPdefBQIkfXZNYaTMzTYphmAv_iW81HA28.jpg?width=1080&crop=smart&auto=webp&s=89013d8bf4e4b61467dc67ce654eae7aca0e9ad2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xO7SfF40bkPdefBQIkfXZNYaTMzTYphmAv_iW81HA28.jpg?auto=webp&s=646740a9f4157121d168bc6f7e837d7708dd95e7', 'width': 1200}, 'variants': {}}]} |
||
Have any of you found decent laptops for this use case? | 1 | If you care about my use case, read below the break, but I’m primarily interested in what you guys went with.
——————
I need to buy a new laptop primarily for show control and travel at work, but I also want to upgrade to something that is performant with respect to local LLM experiments. Software support and in-bed usability for work skews me slightly toward the high-end MacBook Pro side of the fence, but I still could go either way if there are better PC laptops for the money.
I’m currently having trouble weighing the fact that I can get more system memory and GPU compute for the money with a PC against the fat unified memory that the GPU can access on an M2 Max system. I’ve also had *horrible* experiences with the long-term stability and build quality of high-end PC laptops. | 2023-07-15T09:48:25 | https://www.reddit.com/r/LocalLLaMA/comments/1507qjx/have_any_of_you_found_decent_laptops_for_this_use/ | E_Snap | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1507qjx | false | null | t3_1507qjx | /r/LocalLLaMA/comments/1507qjx/have_any_of_you_found_decent_laptops_for_this_use/ | false | false | self | 1 | null |
How do I change the prompt template on the GPT4All Python Bindings? (+ generic beginner fine-tuning question, pls help.) | 1 | I see on the GitHub that they're discussing changes to make this easier, but I don't understand how to do it now. I can see mentions of the "default header" in GPT4All.py, but I don't understand what I'm supposed to modify.
Additionally, I need some clarification on something. Most of these models are trained to be "assistants", so they give very "assistant-like", robotic answers to a lot of things. Do I have to fine-tune in order to get more human-like responses? Or can I simply change the temperature and give a clear prompt template/header explaining that I want human-like, opinionated responses? *Additionally*, say I wanted my AI to be a certain character with traits and a name; again, do I need to fine-tune for this, or is a prompt template/header sufficient?
If I do need to fine-tune (or at least **should**), my next question is… how? I don't even know where to start with that, especially with GPT4All. Can I use my own custom fine-tuned models? My knowledge is extremely limited when it comes to that specific part.
​
Any help appreciated! | 2023-07-15T10:35:33 | https://www.reddit.com/r/LocalLLaMA/comments/1508lgy/how_do_i_change_the_prompt_template_on_the/ | RadioRats | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1508lgy | false | null | t3_1508lgy | /r/LocalLLaMA/comments/1508lgy/how_do_i_change_the_prompt_template_on_the/ | false | false | self | 1 | null |
Is the OpenAI moat shrinking against Open Source? | 1 | When I joined this subreddit, it was the time when Vicuna, WizardLM, WizardVicuna, etc. came out, and I was able to run them locally with little effort. Then we got Falcon/RedPajama/OpenLLM as models which can be used commercially. Every week felt ground-breaking.
But the last month felt like there was not much of that progress.
And OpenAI wasn't sleeping: they released Code Interpreter, which looks promising for strengthening their moat.
My question: Are we still reducing the gap between Open Source models and OpenAI? | 2023-07-15T10:44:00 | https://www.reddit.com/r/LocalLLaMA/comments/1508r48/is_the_openai_moat_shrinking_against_open_source/ | Koliham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1508r48 | false | null | t3_1508r48 | /r/LocalLLaMA/comments/1508r48/is_the_openai_moat_shrinking_against_open_source/ | false | false | self | 1 | null |
ComfyUI type of interface for pipeline development with LLaMA? | 8 | When trying to use local models, I almost always need to chain prompts in order to get the final result I’m looking for.
Lately I’ve started experimenting with chaining prompts across different models fine-tuned for different specialized tasks, incorporating other NLP for logic-based decisions, etc.
All this is possible with Python, but it’s tiresome to write, difficult to explain what is happening to someone else, and hard to pick back up once you leave it for a while unless I’m careful to create a flowchart as I go—which I never do.
I love the ComfyUI approach to designing pipelines for stable diffusion. Does anything like this exist for LLaMA? Please tell me it does! | 2023-07-15T11:09:52 | https://www.reddit.com/r/LocalLLaMA/comments/15098jd/comfyui_type_of_interface_for_pipeline/ | curlmytail | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15098jd | false | null | t3_15098jd | /r/LocalLLaMA/comments/15098jd/comfyui_type_of_interface_for_pipeline/ | false | false | self | 8 | null |
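Until a node-graph UI like that exists for LLaMA, the hand-rolled version of the chaining described above is usually just a list of named steps where each step's output feeds the next prompt; a tiny sketch (the llm callable is a placeholder for whatever backend you use):

    def run_pipeline(steps, llm, user_input):
        """steps: list of (name, prompt_template) pairs; each template sees the previous output."""
        text = user_input
        trace = {}
        for name, template in steps:
            prompt = template.format(input=text)
            text = llm(prompt)            # placeholder: any "prompt in, text out" function
            trace[name] = text            # keep intermediate outputs for debugging/flowcharts
        return text, trace

    steps = [
        ("summarize", "Summarize the following:\n{input}"),
        ("critique",  "List weaknesses of this summary:\n{input}"),
        ("rewrite",   "Rewrite the summary fixing these weaknesses:\n{input}"),
    ]
    # final, trace = run_pipeline(steps, llm=my_model_call, user_input=document_text)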
How will you do it? | 1 | Hi fellow curious minds. I am thinking about this project where I fine-tune a base model to answer questions about a company which is stock-listed and offer this service to them. But it should be able to take in new data as well for when there are new company announcements, new stock data at the end of each market, etc.
Right now, I think it is possible by:
- fine tune with available data from XBRL, press releases, company website, financial reports, analyst reports
- when there are new financial report, use LoRA
- use API for chatbot to answer questions about share data until there is one month of data where I can do LoRA
- vector database on webcasts and audiocast transcript since those has to be verbatim
I can probably use Rasa as the chatbot framework, then connect to an LLM trained as above for most of the hard questions.
Is this feasible? I do not have experience in this domain and most of my knowledge is from this subreddit and youtube.
What would you do if you were me?
Aside from setting up the dataset pipeline, what would be the other challenges I might face?
Thank you for your patience in reading this post. | 2023-07-15T12:19:28 | https://www.reddit.com/r/LocalLLaMA/comments/150am2p/how_will_you_do_it/ | leo-the-great | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 150am2p | false | null | t3_150am2p | /r/LocalLLaMA/comments/150am2p/how_will_you_do_it/ | false | false | self | 1 | null |
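For the vector-database piece on the webcast/audiocast transcripts, a common minimal setup is sentence-transformers for embeddings plus a FAISS index; a sketch (the embedding model and chunking are just examples):

    import faiss
    from sentence_transformers import SentenceTransformer

    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # example embedding model

    chunks = ["...transcript chunk 1...", "...transcript chunk 2..."]  # pre-split verbatim text
    vectors = embedder.encode(chunks, normalize_embeddings=True)

    index = faiss.IndexFlatIP(vectors.shape[1])   # inner product == cosine on normalized vectors
    index.add(vectors)

    query = embedder.encode(["What did the CFO say about guidance?"], normalize_embeddings=True)
    scores, ids = index.search(query, 3)
    print([chunks[i] for i in ids[0]])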
Iterative finetuning with Lora | 1 | I have been finetuning using QLoRA and got a model that gives some decent results after 2,400 steps. Now I want to resume the training with some new training samples. Can you iteratively add more examples in this fashion without losing the knowledge of previously learnt examples? | 2023-07-15T13:53:26 | https://www.reddit.com/r/LocalLLaMA/comments/150cn02/iterative_finetuning_with_lora/ | False-Victory9602 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 150cn02 | false | null | t3_150cn02 | /r/LocalLLaMA/comments/150cn02/iterative_finetuning_with_lora/ | false | false | self | 1 | null |
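Resuming is generally done by reloading the saved adapter in trainable mode and continuing on a dataset that mixes old and new samples (replaying some old data is the usual guard against forgetting); a sketch with peft, where the paths and dataset objects are placeholders:

    from peft import PeftModel

    # Reload the adapter you already trained, in trainable mode.
    model = PeftModel.from_pretrained(base_model, "qlora-checkpoint-2400", is_trainable=True)

    # Mix old and new examples so the adapter keeps seeing what it already learned.
    combined = old_examples + new_examples   # assumed lists/datasets from your pipeline
    # ...then hand `model` and `combined` to the same training loop or Trainer as before.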
In Linux, how to check if GPU VRAM is overheating? | 1 | [removed] | 2023-07-15T15:25:26 | https://www.reddit.com/r/LocalLLaMA/comments/150et0p/in_linux_how_to_check_if_gpu_vram_is_overheating/ | gptzerozero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 150et0p | false | null | t3_150et0p | /r/LocalLLaMA/comments/150et0p/in_linux_how_to_check_if_gpu_vram_is_overheating/ | false | false | self | 1 | null |
Group Size / Act Order might matter less than you think: Some benchmarks | 1 | [removed] | 2023-07-15T15:30:37 | https://www.reddit.com/r/LocalLLaMA/comments/150exc3/group_size_act_order_might_matter_less_than_you/ | sequoia_42 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 150exc3 | false | null | t3_150exc3 | /r/LocalLLaMA/comments/150exc3/group_size_act_order_might_matter_less_than_you/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'EohbWG78rsS2NG3TokVdjzciCM6oyI80U2VUPvV9xtA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/wrHXedKlASslRCM01x282vFnxtZYNQUL-ajA5P0kVXU.jpg?width=108&crop=smart&auto=webp&s=a601bf47dec35617c2fd72ec2c7a1e95dcc5ea79', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/wrHXedKlASslRCM01x282vFnxtZYNQUL-ajA5P0kVXU.jpg?width=216&crop=smart&auto=webp&s=ea2fe53f59077434b3f122dd34a1e0994ea6e03e', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/wrHXedKlASslRCM01x282vFnxtZYNQUL-ajA5P0kVXU.jpg?width=320&crop=smart&auto=webp&s=360c80f88c66244151705c135f0ecb929aa6e345', 'width': 320}], 'source': {'height': 315, 'url': 'https://external-preview.redd.it/wrHXedKlASslRCM01x282vFnxtZYNQUL-ajA5P0kVXU.jpg?auto=webp&s=26ac942846e18e2e0d3eaf78993cccda16bee49e', 'width': 600}, 'variants': {}}]} |
VS Code extension for code completion | 20 | Does anyone know an interesting VS Code extension project for code completion using local open source language models? I saw this open source project https://github.com/morph-labs/rift but I’m looking for something that is closer to GitHub Copilot in terms of functionality. Thanks in advance! | 2023-07-15T15:40:58 | https://www.reddit.com/r/LocalLLaMA/comments/150f6cz/vs_code_extension_for_code_completion/ | Acrobatic-Site2065 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 150f6cz | false | null | t3_150f6cz | /r/LocalLLaMA/comments/150f6cz/vs_code_extension_for_code_completion/ | false | false | self | 20 | {'enabled': False, 'images': [{'id': 'ppWMtayU_grnUec9OGkWvO-pPw0tUPbLww7P5Ak6S34', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/x3q-RbWnuS_2Gud6a0uQVxx-j_nBlHt7-qYYQwNXjHI.jpg?width=108&crop=smart&auto=webp&s=3647f8b21e99e60e6433c4b6a42af9f7590b3cb8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/x3q-RbWnuS_2Gud6a0uQVxx-j_nBlHt7-qYYQwNXjHI.jpg?width=216&crop=smart&auto=webp&s=d43a3eb3b863ab3393a66c70d05b25fd6f8952c9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/x3q-RbWnuS_2Gud6a0uQVxx-j_nBlHt7-qYYQwNXjHI.jpg?width=320&crop=smart&auto=webp&s=e85510b53bb2291adfcc02dcde6ef95f222bc346', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/x3q-RbWnuS_2Gud6a0uQVxx-j_nBlHt7-qYYQwNXjHI.jpg?width=640&crop=smart&auto=webp&s=8ced1977f8692621ff4b4c6ad207107e924c34a0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/x3q-RbWnuS_2Gud6a0uQVxx-j_nBlHt7-qYYQwNXjHI.jpg?width=960&crop=smart&auto=webp&s=dca65e2775ca6440f54e33bd7613c6f5bb3bb3f3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/x3q-RbWnuS_2Gud6a0uQVxx-j_nBlHt7-qYYQwNXjHI.jpg?width=1080&crop=smart&auto=webp&s=d2a14be78697cadb7f0899cbc743438045ffc749', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/x3q-RbWnuS_2Gud6a0uQVxx-j_nBlHt7-qYYQwNXjHI.jpg?auto=webp&s=88f47618741838b22a5659389e8b2fa48fe9c6fb', 'width': 1200}, 'variants': {}}]} |
Petals: decentralized inference and finetuning of LLMs | 1 | 2023-07-15T16:07:30 | https://research.yandex.com/blog/petals-decentralized-inference-and-finetuning-of-large-language-models | kryptkpr | research.yandex.com | 1970-01-01T00:00:00 | 0 | {} | 150ftob | false | null | t3_150ftob | /r/LocalLLaMA/comments/150ftob/petals_decentralized_inference_and_finetuning_of/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'G64zeyahckO_jQcluqdAcB68GJYPnzmWHLRF-dnbOr8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/XVIxj2xJw4yf4DUF35OJG5NIU1BCW1NQdNteKftW6H8.jpg?width=108&crop=smart&auto=webp&s=528d787620ef0100167e7f1f19aa356054d43448', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/XVIxj2xJw4yf4DUF35OJG5NIU1BCW1NQdNteKftW6H8.jpg?width=216&crop=smart&auto=webp&s=58dc35ca0562f15bc8bfb6026ec5057bbd347fb4', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/XVIxj2xJw4yf4DUF35OJG5NIU1BCW1NQdNteKftW6H8.jpg?width=320&crop=smart&auto=webp&s=06f80263860b413382972e866b29432d68b15692', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/XVIxj2xJw4yf4DUF35OJG5NIU1BCW1NQdNteKftW6H8.jpg?width=640&crop=smart&auto=webp&s=936d29b2040ca95c51238edaef7b3345679a4dc8', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/XVIxj2xJw4yf4DUF35OJG5NIU1BCW1NQdNteKftW6H8.jpg?width=960&crop=smart&auto=webp&s=80ca700787d4797e0853bf1282a20bbf708b86ca', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/XVIxj2xJw4yf4DUF35OJG5NIU1BCW1NQdNteKftW6H8.jpg?width=1080&crop=smart&auto=webp&s=9fd85dfe91d1bd437a9c2c1f9120c249753279a0', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/XVIxj2xJw4yf4DUF35OJG5NIU1BCW1NQdNteKftW6H8.jpg?auto=webp&s=6e091c5c8ec21414d99fff51d5385e43d5b2036b', 'width': 1200}, 'variants': {}}]} |
||
Llama.cpp now supports 8K context scaling after the latest merged pull request. | 1 | 2023-07-15T16:30:46 | https://github.com/ggerganov/llama.cpp/commit/6e7cca404748dd4b1a3affd0d1296e37f4ac0a6f | HalfBurntToast | github.com | 1970-01-01T00:00:00 | 0 | {} | 150gdkw | false | null | t3_150gdkw | /r/LocalLLaMA/comments/150gdkw/llamacpp_now_supports_8k_context_scaling_after/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'bFDGouLmD7zm6K_uJJ6V5IFCNeU1EHqXyKuxMCiXuso', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WNj_Gpset6ewwdXmABx-QEPknfOLK98L51kcyzvcblc.jpg?width=108&crop=smart&auto=webp&s=50066b2114c122819f8b9c4332b260cfdb028f5f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WNj_Gpset6ewwdXmABx-QEPknfOLK98L51kcyzvcblc.jpg?width=216&crop=smart&auto=webp&s=94f4074ec0aa14a0e18361396312e8aa7cb1db17', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WNj_Gpset6ewwdXmABx-QEPknfOLK98L51kcyzvcblc.jpg?width=320&crop=smart&auto=webp&s=e0dd2ef07c8d1d3e20a905b6c2c1cd4fc3484497', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WNj_Gpset6ewwdXmABx-QEPknfOLK98L51kcyzvcblc.jpg?width=640&crop=smart&auto=webp&s=51a36c88bce30d9575b7d5517a7f6f15554ec8a2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WNj_Gpset6ewwdXmABx-QEPknfOLK98L51kcyzvcblc.jpg?width=960&crop=smart&auto=webp&s=30efb255b25a421499a76246bcbddd1655faeed2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WNj_Gpset6ewwdXmABx-QEPknfOLK98L51kcyzvcblc.jpg?width=1080&crop=smart&auto=webp&s=3bd85f9b28e5d0fbdd89798ee11b0e3093641a20', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WNj_Gpset6ewwdXmABx-QEPknfOLK98L51kcyzvcblc.jpg?auto=webp&s=5d890cea00d665b57f4c6091bfbe04f1e0680eaa', 'width': 1200}, 'variants': {}}]} |
||
Are there any good text-to-speech tools for use with LocalLLMs? | 10 | Q in the title. Some of these tools are so good. I'd love to take it to the next level, JARVIS style. But I want to keep it local so I'm not pushing all of my info out to some company.
I know whisper.cpp does speech to text which is cool, but searching the internet and this sub, I’m not seeing anything for the other way around where the output gets run through a voice generation tool. Anyone using anything reliable and with a good reputation?
TIA! | 2023-07-15T17:07:59 | https://www.reddit.com/r/LocalLLaMA/comments/150hahg/are_there_any_good_texttospeech_tools_for_use/ | Ok-Training-7587 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 150hahg | false | null | t3_150hahg | /r/LocalLLaMA/comments/150hahg/are_there_any_good_texttospeech_tools_for_use/ | false | false | self | 10 | null |
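The post above asks for speech output that stays local. One minimal, fully offline option is pyttsx3, which wraps the operating system's built-in speech engines (SAPI5 on Windows, NSSpeechSynthesizer on macOS, espeak on Linux); more natural-sounding local options such as Coqui TTS exist, but as a rough sketch of wiring LLM output into speech (the reply string here is just a stand-in for whatever your model returns):

```python
import pyttsx3

def speak(text: str) -> None:
    engine = pyttsx3.init()          # picks the platform's offline speech backend
    engine.setProperty("rate", 170)  # words per minute
    engine.say(text)
    engine.runAndWait()              # blocks until playback finishes

llm_reply = "All systems nominal. What would you like me to do next?"  # placeholder LLM output
speak(llm_reply)
```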
Best open LLM for writing Python code? | 1 | Title says it all, but with a preference for smaller models. | 2023-07-15T17:22:32 | https://www.reddit.com/r/LocalLLaMA/comments/150hn5j/best_open_llm_model_for_write_codes_in_python/ | GG9242 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 150hn5j | false | null | t3_150hn5j | /r/LocalLLaMA/comments/150hn5j/best_open_llm_model_for_write_codes_in_python/ | false | false | self | 1 | null
Best small fine tuning dataset | 1 |
Thanks to the great effort of the open source community, we have a plethora of instruction and chat datasets, and their fine-tuned models are very capable while being open source and consumer-hardware friendly.
However, most models and datasets are English-only or support a small number of languages.
If we truly want to democratize LLMs, we need to step up our game and release powerful models that support more languages.
Now I am looking for an instruction (or chat) dataset that can be translated from English in order to fine-tune a multilingual base model for a specific language.
Which datasets are good while having a small number of tokens?
I was considering WizardLM 1.1, but AFAIK its dataset has not been released yet.
I need it to be small because it still needs to be cleaned and translated, which requires significant time and money.
Please suggest some good small datasets.
Also, if there are people willing to sponsor this project or there are organizations that can provide compute, please let me know. | 2023-07-15T18:33:05 | https://www.reddit.com/r/LocalLLaMA/comments/150jd9g/best_small_fine_tuning_dataset/ | Amgadoz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 150jd9g | false | null | t3_150jd9g | /r/LocalLLaMA/comments/150jd9g/best_small_fine_tuning_dataset/ | false | false | self | 1 | null |
Therapy Model Trained on 100k Synthetic Conversations | 1 | With OpenAI neutering ChatGPT's ability to provide therapy, I decided to try building a therapy LLM. Please give me feedback! Here is llama-7b trained on 100k synthetic conversations generated by gpt-3.5-turbo: [https://2a9eb68f775430e50b.gradio.live/](https://2a9eb68f775430e50b.gradio.live/)
[https://huggingface.co/jerryjalapeno/nart-100k-7b](https://huggingface.co/jerryjalapeno/nart-100k-7b)
​
Keep in mind this is a research demonstration. There is no crisis intervention training, no safety alignment, and it is not ready for "real" use. Send criticism and comments please. | 2023-07-15T18:42:37 | https://www.reddit.com/r/LocalLLaMA/comments/150jlrk/therapy_model_trained_on_100k_synthetic/ | ZealousidealBlock330 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 150jlrk | false | null | t3_150jlrk | /r/LocalLLaMA/comments/150jlrk/therapy_model_trained_on_100k_synthetic/ | false | false | self | 1 | null |
They used stochastic sparse attention to improve transformer efficiency | 1 | 2023-07-15T21:44:25 | https://medium.com/@m.h.nakif.bd.0/transformers-just-got-a-lot-more-efficient-and-smarter-92e3e3e4bcfa | InspectorOpening7828 | medium.com | 1970-01-01T00:00:00 | 0 | {} | 150nyw4 | false | null | t3_150nyw4 | /r/LocalLLaMA/comments/150nyw4/they_used_stochastic_sparse_attention_to_improve/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Qst3dMARWiWZyemxG04CSTvaXACh47WsjagIoVhFM6Y', 'resolutions': [{'height': 41, 'url': 'https://external-preview.redd.it/c_ysxkjl8SZElNLU-GWSYNuxjGG3newRhfmF6vFMJL4.jpg?width=108&crop=smart&auto=webp&s=eae8f937e20cc26176bdda7be71eb3fa60ae855a', 'width': 108}, {'height': 83, 'url': 'https://external-preview.redd.it/c_ysxkjl8SZElNLU-GWSYNuxjGG3newRhfmF6vFMJL4.jpg?width=216&crop=smart&auto=webp&s=e5a8cead74b5775ae2822391a1441d271a5a8df8', 'width': 216}, {'height': 123, 'url': 'https://external-preview.redd.it/c_ysxkjl8SZElNLU-GWSYNuxjGG3newRhfmF6vFMJL4.jpg?width=320&crop=smart&auto=webp&s=9f7a50181424fced9c74046496c0f8ba6a06b3a1', 'width': 320}, {'height': 246, 'url': 'https://external-preview.redd.it/c_ysxkjl8SZElNLU-GWSYNuxjGG3newRhfmF6vFMJL4.jpg?width=640&crop=smart&auto=webp&s=329431a27200594e4b6fada88e51d1b18f0ef2d3', 'width': 640}, {'height': 370, 'url': 'https://external-preview.redd.it/c_ysxkjl8SZElNLU-GWSYNuxjGG3newRhfmF6vFMJL4.jpg?width=960&crop=smart&auto=webp&s=da4fafdc9f85aef7e9d36df1d96fedf8f5c8fac5', 'width': 960}, {'height': 416, 'url': 'https://external-preview.redd.it/c_ysxkjl8SZElNLU-GWSYNuxjGG3newRhfmF6vFMJL4.jpg?width=1080&crop=smart&auto=webp&s=0a686c9a7271990d7bd54b119437d283b944c4cb', 'width': 1080}], 'source': {'height': 463, 'url': 'https://external-preview.redd.it/c_ysxkjl8SZElNLU-GWSYNuxjGG3newRhfmF6vFMJL4.jpg?auto=webp&s=0534450c3913bdd8d67803104090c6dc38c5e390', 'width': 1200}, 'variants': {}}]} |
||
Why does attention need to be fully quadratic? | 1 | We know that classic transformer-style attention requires comparing every token with every other token, which results in quadratic memory and compute growth as the context size increases.
But is it truly necessary to compare every token to every other token? Surely smarter people than me have wondered whether distant tokens are really just as likely to need to attend to each other as tokens that are close together. I could see the case for things like code, but for natural language, can't we apply some ol' fashioned NLP to extract the important parts of the sentences and apply quadratic attention only to them, while giving less important "glue words" less attention?
I've got to be missing something, because these types of "hacks" seem way less sophisticated than the types of optimizations that have gone into GPTQ, exllama, SuperHOT, Rope, etc.
Please, someone school me on this - Perhaps there are papers, or perhaps there are different attempts to do this that have specific names? | 2023-07-15T22:23:06 | https://www.reddit.com/r/LocalLLaMA/comments/150owmj/why_does_attention_need_to_be_fully_quadratic/ | tronathan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 150owmj | false | null | t3_150owmj | /r/LocalLLaMA/comments/150owmj/why_does_attention_need_to_be_fully_quadratic/ | false | false | self | 1 | null |
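To make the "fully quadratic" part of the question above concrete: vanilla attention materializes an n-by-n score matrix per head, so memory and compute grow with the square of the sequence length. A toy sketch (single head, no batching):

```python
import torch

n, d = 4096, 64                          # sequence length, head dimension
q, k, v = (torch.randn(n, d) for _ in range(3))

scores = (q @ k.T) / d ** 0.5            # shape (n, n): every token compared with every other token
out = torch.softmax(scores, dim=-1) @ v  # weighted mix of values, shape (n, d)

# Doubling n quadruples this matrix; at n = 32k it is ~4 GiB per head in fp32.
print(scores.shape, f"{scores.numel() * 4 / 2**30:.2f} GiB in fp32")
```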
What do you all use these open source models for? | 1 | Hey, so the title says It all basically. I've seen those awesome AI models that can answer questions very well, so i was wondering how all of you have implemented them in your life. Like do you use them on a daily basis like chatGPT or do you integrate It with nextcloud or home assistant. I really like them but I don't see myself asking them questions like I do with chatGPT whenever something strange crosses my mind. | 2023-07-15T22:48:41 | https://www.reddit.com/r/LocalLLaMA/comments/150picw/what_do_you_all_use_these_open_source_models_for/ | ManuXD32 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 150picw | false | null | t3_150picw | /r/LocalLLaMA/comments/150picw/what_do_you_all_use_these_open_source_models_for/ | false | false | self | 1 | null |
Any other fun local AI tools other than ooba and automatic1111? | 1 | I've been having a blast recently using ooba for text generation/chatbots and automatic1111 for stable diffusion, experimenting with LORAs and training my own embeddings.
What I like about those tools is that they mostly work out of the box with reasonable defaults. And I love the fact that they work locally so I don't have to worry about privacy.
Are there any other tools that work like that which I should try to use? I am mostly interested in seeing what AI has to offer at the moment, rather than doing any specific work. | 2023-07-15T23:40:17 | https://www.reddit.com/r/LocalLLaMA/comments/150qprj/any_other_fun_local_ai_tools_other_than_ooba_and/ | skocznymroczny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 150qprj | false | null | t3_150qprj | /r/LocalLLaMA/comments/150qprj/any_other_fun_local_ai_tools_other_than_ooba_and/ | false | false | self | 1 | null |
Approach for answers from a QA dataset | 1 | Hi everyone, I need help with an approach: I have a dataset of question-answer pairs in a CSV, and I want an LLM to generate well-written answers that are grounded in that dataset.
I used Haystack for this, but it was failing on answers that are too short or that require some reasoning.
The only condition is that the answer must come from the dataset, not from the LLM's memory. | 2023-07-16T02:38:45 | https://www.reddit.com/r/LocalLLaMA/comments/150ufbs/approach_for_answer_from_qa_dataset/ | Effective_Twist6995 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 150ufbs | false | null | t3_150ufbs | /r/LocalLLaMA/comments/150ufbs/approach_for_answer_from_qa_dataset/ | false | false | self | 1 | null
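One common way to meet the grounding constraint described above is to retrieve the closest stored Q&A pairs and let the LLM rephrase only that retrieved text. A minimal sketch with sentence-transformers; the CSV path and the "question"/"answer" column names are assumptions:

```python
import pandas as pd
from sentence_transformers import SentenceTransformer, util

df = pd.read_csv("qa.csv")                                    # hypothetical file with question/answer columns
encoder = SentenceTransformer("all-MiniLM-L6-v2")
corpus = encoder.encode(df["question"].tolist(), convert_to_tensor=True)

def grounded_prompt(query: str, k: int = 3) -> str:
    hits = util.semantic_search(encoder.encode(query, convert_to_tensor=True), corpus, top_k=k)[0]
    context = "\n".join(df["answer"].iloc[h["corpus_id"]] for h in hits)
    # The LLM only sees retrieved dataset answers, so it cannot answer from its own memory.
    return (
        "Rewrite an answer using ONLY the context below. If the context is not relevant, say you don't know.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
```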
WizardCoder with Extended Context? | 1 | Lately with the new developments in achieving extended context (RoPE, NTK aware RoPE, Focused Transformer etc), is anyone actively trying to apply these to WizardCoder?
Amongst all the programming focused models I've tried, it's the one that comes the closest to understanding programming queries, and getting the closest to the right answers consistently. However, the 2048 context size hurts.
The base model that WizardCoder uses, StarCoder, supports context sizes up to 8k. I remember the WizardLM team mentioning they had to limit it to 2k context because they were limited by the GPUs they had access to. | 2023-07-16T05:47:41 | https://www.reddit.com/r/LocalLLaMA/comments/150xzdt/wizardcoder_with_extended_context/ | shrikrishna_holla | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 150xzdt | false | null | t3_150xzdt | /r/LocalLLaMA/comments/150xzdt/wizardcoder_with_extended_context/ | false | false | self | 1 | null
Do you use Windows, Ubuntu, or the Linux subsystem in Windows for your LLM work? | 1 | I'm going to set up a brand new host. Right now I am inclined to dual-boot Windows + Ubuntu, but people have told me Windows with WSL can meet all Linux dev needs. Is it good for LLMs? I'd appreciate input based on your experiences, especially regarding 4090 drivers and performance. | 2023-07-16T05:51:31 | https://www.reddit.com/r/LocalLLaMA/comments/150y1ug/do_you_use_windows_ubuntu_or_linux_subsystems_in/ | why_not_zoidberg_82 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 150y1ug | false | null | t3_150y1ug | /r/LocalLLaMA/comments/150y1ug/do_you_use_windows_ubuntu_or_linux_subsystems_in/ | false | false | self | 1 | null
Guanaco 65B vs Llama 65B | 1 | Courtesy of http://chat.petals.ml | 2023-07-16T07:11:41 | https://www.reddit.com/gallery/150zh1x | Basic_Description_56 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 150zh1x | false | null | t3_150zh1x | /r/LocalLLaMA/comments/150zh1x/guanaco_65b_vs_llama_65b/ | false | false | 1 | null |
|
Trouble while using wizardLM-7b-uncensored prompt engineering | 1 | While developing a local chat AI, the model (wizardLM-7b-uncensored) sometimes mimics the app's prompt format and generates the next user turn itself, in the following way (a kind of in-context imitation).
My query is as follows
```
USER: Are you AI? Say yes or no
ASSISTANT: Yes, I am an artificial intelligence assistant. How can I assist you today?
USER: Tell me about general relativity as at most simple and short sentence
ASSISTANT: General relativity is a theory developed by Albert Einstein that explains the behavior of gravity in terms of the geometry of spacetime.
USER: Are you AI? Say yes or no
ASSISTANT:
```
In response, the AI provides the following (excluding the query)
```
USER: What is your name?
ASSISTANT: My name is Mia.
```
I'm curious to know whether others have experienced a similar problem, and whether this is caused by my prompt or is a common issue that cannot be fixed by prompt engineering alone. | 2023-07-16T08:00:14 | https://www.reddit.com/r/LocalLLaMA/comments/1510b9z/trouble_while_using_wizardlm7buncensored_prompt/ | Ok-Dust-5283 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1510b9z | false | null | t3_1510b9z | /r/LocalLLaMA/comments/1510b9z/trouble_while_using_wizardlm7buncensored_prompt/ | false | false | self | 1 | null
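A common mitigation for the behaviour described above is to pass the user-turn prefix as a stop sequence, so generation is cut off before the model invents the next USER line. A sketch with llama-cpp-python; the model path is a placeholder:

```python
from llama_cpp import Llama

llm = Llama(model_path="./wizardlm-7b-uncensored.ggmlv3.q4_0.bin")  # hypothetical local path

prompt = (
    "USER: Are you AI? Say yes or no\n"
    "ASSISTANT: Yes, I am an artificial intelligence assistant. How can I assist you today?\n"
    "USER: Are you AI? Say yes or no\n"
    "ASSISTANT:"
)
out = llm(prompt, max_tokens=128, stop=["USER:"])  # stop before a fabricated user turn begins
print(out["choices"][0]["text"].strip())
```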
ggml of Jerry Jalapeno's Therapy model | 1 | I assume The Bloke will upload all ggml quantization variants, but for those who, like me, can't wait, here is at least one ggml I've converted and uploaded:
​
[nart-100k-7b-ggml](https://huggingface.co/phi0112358/nart-100k-7b-ggml)
​
I've only converted and uploaded this file. All the thanks for this great work go to:
​
[https://www.reddit.com/r/LocalLLaMA/comments/150jlrk/therapy\_model\_trained\_on\_100k\_synthetic/](https://www.reddit.com/r/LocalLLaMA/comments/150jlrk/therapy_model_trained_on_100k_synthetic/)
​
[blog-article](https://medium.com/@jerryjalapeno/training-ai-therapists-ca4b0454672c)
​
PS: I only uploaded a q4KM version yet. I am trying to upload a q5km version too, but my internet connection keeps dropping. I hope it works now.. | 2023-07-16T08:57:19 | https://www.reddit.com/r/LocalLLaMA/comments/1511a46/ggml_of_jerry_jalapenos_therapy_model/ | Evening_Ad6637 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1511a46 | false | null | t3_1511a46 | /r/LocalLLaMA/comments/1511a46/ggml_of_jerry_jalapenos_therapy_model/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '_o94_aovtlz_ImNEx1DF5RqIFsU6hQMY7CrTDXMkuMA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TjPPxU5z2Hbcp-R_jpuzBfKVdbWQOkc7C4fPlExK3sY.jpg?width=108&crop=smart&auto=webp&s=2e5450ddcee996cacfc24b127071786d7a41b300', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/TjPPxU5z2Hbcp-R_jpuzBfKVdbWQOkc7C4fPlExK3sY.jpg?width=216&crop=smart&auto=webp&s=f977dedab577e1965604fcc027736dc224877c86', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/TjPPxU5z2Hbcp-R_jpuzBfKVdbWQOkc7C4fPlExK3sY.jpg?width=320&crop=smart&auto=webp&s=739dc0924265e9eea1c4d26d8d1a24c474659de6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/TjPPxU5z2Hbcp-R_jpuzBfKVdbWQOkc7C4fPlExK3sY.jpg?width=640&crop=smart&auto=webp&s=579d9cc00ed9255b83b52b7b56db5b5e69a9e077', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/TjPPxU5z2Hbcp-R_jpuzBfKVdbWQOkc7C4fPlExK3sY.jpg?width=960&crop=smart&auto=webp&s=4b94dc6df0480140c953cffd7714b1f46e608633', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/TjPPxU5z2Hbcp-R_jpuzBfKVdbWQOkc7C4fPlExK3sY.jpg?width=1080&crop=smart&auto=webp&s=afb7b268cc64e0f65842e795720a079d108fe21b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/TjPPxU5z2Hbcp-R_jpuzBfKVdbWQOkc7C4fPlExK3sY.jpg?auto=webp&s=70509a77189ee4c2a8a3e9d1b1745ce77fda7251', 'width': 1200}, 'variants': {}}]} |
Let's say I want to build a PC for Falcon-40B-Instruct inference and fine-tuning. What specification does it need in terms of CPU, RAM, VRAM, and GPU? | 1 | My guess is:
* CPU: a regular top-of-the-line CPU, e.g. 13900K (No need threadripper level CPU)
* RAM: 128GB
* VRAM: 96GB
* GPU: 2 \* RTX A6000
Is this sufficient? Also, do you think a future variant of the model would require a higher or lower specification? Another question: given that the inference speed would be quite slow, is this even a good idea? | 2023-07-16T09:20:20 | https://www.reddit.com/r/LocalLLaMA/comments/1511ogm/lets_say_if_i_want_to_build_a_pc_for_falcon_40b/ | PrestigiousPancake | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1511ogm | false | null | t3_1511ogm | /r/LocalLLaMA/comments/1511ogm/lets_say_if_i_want_to_build_a_pc_for_falcon_40b/ | false | false | self | 1 | null
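A quick back-of-the-envelope check on the 96 GB figure above: weight memory scales with parameter count times bytes per parameter, before counting the KV cache and activations, and full fine-tuning with optimizer states needs several times more, which is why QLoRA-style approaches are popular. A rough sketch:

```python
params = 40e9  # Falcon-40B
for precision, bytes_per_param in {"fp16": 2, "int8": 1, "int4": 0.5}.items():
    gib = params * bytes_per_param / 2**30
    print(f"{precision}: ~{gib:.0f} GiB just for the weights")
# fp16: ~75 GiB (fits across 2x A6000), int8: ~37 GiB, int4: ~19 GiB (fits on one A6000)
```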
Multi model LLMs or chatbots | 1 | Recently I implemented embedding-based search in a GPT-3 tool at my company for development assistance, and it made a huge difference. Adding inline verification passes, using GPT-3 itself with a small context, made the dev assistant even more predictable and efficient. After reading the recent news about GPT-4, this basic experience gave me an obvious and simple idea: part suggestion, part question about whether someone has already done it (I found nothing after some searching).
The idea is to use a response-layout generator model as the entry point. It writes the overall response but leaves gaps for specialized models to fill. For example, if the user asks for HTML code, the first model starts writing the answer, and when it emits a tag like <CODING_MODEL>, the model runner automatically switches to the coding model; when that finishes, control switches back to the main model, which continues the completion as if it had written the code itself (see the sketch after this post).
The layout model doesn't need many parameters; I think a simple 100M+ parameter model could be fine-tuned for that task. From this simple implementation it could grow to support nested calls, and eventually a whole catalog of open source expert models that the main model can switch to, with the community fine-tuning experts per subject. I really see no good near-term future for us against those giant companies unless we take an optimistic road that respects our strengths. | 2023-07-16T11:30:22 | https://www.reddit.com/r/LocalLLaMA/comments/1513xwi/multi_model_llms_or_chatbots/ | khalil_ben_zineb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1513xwi | false | null | t3_1513xwi | /r/LocalLLaMA/comments/1513xwi/multi_model_llms_or_chatbots/ | false | false | self | 1 | null
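A toy sketch of the tag-and-switch idea described in the post above. The two model functions are placeholders: in practice each would wrap a separate local LLM call, and the <CODING_MODEL> tag convention is purely illustrative:

```python
import re

def layout_model(prompt: str) -> str:
    # Stand-in for a small "response layout" model that leaves gaps for specialists.
    return "Sure, here is a minimal page:\n<CODING_MODEL>\nLet me know if you want changes."

def coding_model(prompt: str) -> str:
    # Stand-in for a code-specialist model (e.g. a WizardCoder-style LLM).
    return "<html><body><h1>Hello</h1></body></html>"

EXPERTS = {"CODING_MODEL": coding_model}

def respond(user_prompt: str) -> str:
    draft = layout_model(user_prompt)                  # main model writes the skeleton
    fill = lambda m: EXPERTS[m.group(1)](user_prompt)  # each tag routed to its expert
    return re.sub(r"<([A-Z_]+)>", fill, draft)

print(respond("Write me an HTML page that says Hello"))
```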
LLM Generating SQL based on detailed schema descriptions | 1 | I've been experimenting with ChatGPT 3.5 to generate SQL statements. I first feed in the database structure, give it an idea of the fields and relationships and then some instructions to use X method instead of Y method when I found certain syntax didn't work.
I gave it some instructions, for example: "Write me a Google BigQuery query to fetch the most common day the living room sensor is triggered."
It does a decent job; sometimes it gets a little confused and mixes things up, but I'm prepared to spend a decent amount of time to see if I can help the LLM get to the right results. I guess this is in the same "domain" as Natural Query Language. I have hundreds of powerful queries already that go quite deep; lots of data warehousing and analytics exist already.
My question is: has anyone done this and had some success? I'm looking at the WizardCoder LLMs now. With oobabooga, should I just add a character with some context? Is that enough, or should I do some LoRA training?
I'm very new to this, but this is the direction I'm going in, so any advice would be great! | 2023-07-16T11:37:04 | https://www.reddit.com/r/LocalLLaMA/comments/15142dc/llm_generating_sql_based_on_detailed_schema/ | lumponmygroin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15142dc | false | null | t3_15142dc | /r/LocalLLaMA/comments/15142dc/llm_generating_sql_based_on_detailed_schema/ | false | false | self | 1 | null
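One way to frame the task above for a local model is to put the schema description and dialect rules in the prompt and ask for a single statement back. A minimal sketch; the table and column names are made up:

```python
SCHEMA = """\
-- table: sensor_events  (hypothetical schema)
-- columns: sensor_name STRING, room STRING, triggered_at TIMESTAMP
"""

def sql_prompt(question: str) -> str:
    return (
        "You are a Google BigQuery assistant. Use Standard SQL only.\n"
        "Prefer EXTRACT(DAYOFWEEK FROM ...) over non-BigQuery date functions.\n"
        f"Schema:\n{SCHEMA}\n"
        f"Question: {question}\n"
        "Return a single SQL statement and nothing else.\nSQL:"
    )

print(sql_prompt("Which day of the week is the living room sensor triggered most often?"))
```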
Q: Best Model/etc to install for either coding (Python Pytorch) or general usage. RTX3090. | 1 | Title says it all. Maybe the answer is two models: one for coding and one for general Q&A about various topics. For coding, I hear Microsoft will be releasing phi-1, a smaller model designed specifically for Python coding, but it's not out yet? I tried an 8-bit quantized Vicuna, but it was horrible, and strangely much worse than the online (unquantized) version. | 2023-07-16T13:21:09 | https://www.reddit.com/r/LocalLLaMA/comments/15165tg/q_best_modeletc_to_install_for_either_coding/ | w7gg33h | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15165tg | false | null | t3_15165tg | /r/LocalLLaMA/comments/15165tg/q_best_modeletc_to_install_for_either_coding/ | false | false | self | 1 | null
How to properly generate state of the art results with LoRa/QloRa fine-tune | 1 | I've been trying to QLoRa fine-tune several of the 7B and 13B models, and unfortunately, the results have been somewhat underwhelming. The models seem to retain some of the training data information and can make loose connections between topics, which is promising, but the overall performance doesn't quite hit the 'state of the art' benchmark that I was hoping for.
The model appears to be making "close but not quite" responses. It suggests ideas that are topically related but often misses the mark, aka "hallucinations", or wrong conclusions that are only tangentially related to the input.
Several others on this subreddit have mentioned achieving SOTA results on small datasets, and I was wondering if you could share any tips or suggestions to help improve the performance of my model.
I am thinking there can only be one of three issues at play here:
1. A problem with the quality or the diversity of the training data. This could be the issue, as I am just using 1.5K examples and there is little diversity. I am using an email thread as the inputs, the response as the output, and "Please respond to this email conversation" as the instruction (last fine-tuned on orca-mini-v2-13B).
2. Need for tweaking the fine-tuning parameters (I doubt this)
3. My expectation for these smaller models is too high. | 2023-07-16T14:48:56 | https://www.reddit.com/r/LocalLLaMA/comments/15185ua/how_to_properly_generate_state_of_the_art_results/ | rinse_repeat_wash | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15185ua | false | null | t3_15185ua | /r/LocalLLaMA/comments/15185ua/how_to_properly_generate_state_of_the_art_results/ | false | false | self | 1 | null |
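For reference on point 1 above, a single training example in the common Alpaca-style instruction format would look roughly like this (the email content here is invented):

```python
import json

record = {
    "instruction": "Please respond to this email conversation.",
    "input": "From: Alice\nSubject: Q3 report\n\nHi, could you send over the latest Q3 numbers?",
    "output": "Hi Alice, sure. The Q3 numbers are attached. Let me know if anything looks off.",
}
print(json.dumps(record, indent=2))
```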
Some questions about training LoRAs in more effective way | 1 | Hello, everyone.
In my previous posts, I've been training LoRAs with RTX A6000 in cloud service. I'm using **TheBloke/Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-fp16** as my base model.
A couple of days ago, I prepared new raw training data weighing about **12MB**. Training took about twenty (20!) hours to finish. When I started using the result, I noticed that I had botched my training data, and now my new LoRA outputs very bad results. Twenty hours of paid cloud time wasted!
Now, because of that, I have some questions about training new LoRAs:
1. I'm currently using **fp16** models in **8-bit mode** to train my LoRAs. Would switching to **GPTQ** or something else make training faster?
2. Does the speed of training depend on the size of the model? Does training a LoRA on a **30B** model take more time than training on a **13B** model?
3. I'm training the LoRA to output stories in a movie-script-like structure. Does increasing the size of the training data keep improving results, or do I hit diminishing returns at some point? Or am I just overthinking this?
4. During training with 3 epochs, the loss oscillated between **1.3 - 1.4** on the last epoch and never went lower. Does that mean I should use fewer epochs for training?
5. What kind of storytelling base model can be used without any legal issues when used for commercial purposes?
Some questions may sound very obvious, but I'm very overwhelmed by information about LLMs and other generative technologies. Every week something new arrives while something is already obsolete. | 2023-07-16T16:50:16 | https://www.reddit.com/r/LocalLLaMA/comments/151b3b9/some_questions_about_training_loras_in_more/ | DaniyarQQQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 151b3b9 | false | null | t3_151b3b9 | /r/LocalLLaMA/comments/151b3b9/some_questions_about_training_loras_in_more/ | false | false | self | 1 | null |
Can't compile llama-cpp-python with CLBLAST | 1 | I'm trying to get [GPU-Acceleration](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md) to work with oobabooga's webui; the docs there say I just have to reinstall llama-cpp-python in the environment and have it compile with CLBlast.
So I have [CLBLAST](https://github.com/CNugteren/CLBlast/tree/master) downloaded and unzipped, but when I try to do it with:
`pip uninstall -y llama-cpp-python`
`set CMAKE_ARGS="-DLLAMA_CUBLAS=on" && set FORCE_CMAKE=1 && set LLAMA_CUBLAS=1 && pip install llama-cpp-python --no-cache-dir`
It says it can't find CLBlast, even when I point CLBlast\_DIR at the CLBlastConfig.cmake file or set CMAKE\_PREFIX\_PATH.
Does anyone have a clue what I'm doing wrong? I have an RX 5700 so I could try ROCm, but I failed at it in the past as well.
| 2023-07-16T17:12:25 | https://www.reddit.com/r/LocalLLaMA/comments/151bnko/cant_compile_llamacpppython_with_clblast/ | KazaflowLM | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 151bnko | false | null | t3_151bnko | /r/LocalLLaMA/comments/151bnko/cant_compile_llamacpppython_with_clblast/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'tDPmuBn5VEHrZwkUmVYXt8r9rIPUwToqUkwRggOmjUM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DOnSftqj4LGWEYBOAoB-3aYRkCauo5-VTetLwZyb5KI.jpg?width=108&crop=smart&auto=webp&s=3fd649d03b12e4b8ea5b16a92ec18b12c632b98a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DOnSftqj4LGWEYBOAoB-3aYRkCauo5-VTetLwZyb5KI.jpg?width=216&crop=smart&auto=webp&s=6ecf89cba680e469fb80621294206f9145eab6c7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DOnSftqj4LGWEYBOAoB-3aYRkCauo5-VTetLwZyb5KI.jpg?width=320&crop=smart&auto=webp&s=68ee3151913711570ad14ef26c68932979838d05', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DOnSftqj4LGWEYBOAoB-3aYRkCauo5-VTetLwZyb5KI.jpg?width=640&crop=smart&auto=webp&s=d8fa08a69d4f6e9e09895875ed340a3192c35397', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DOnSftqj4LGWEYBOAoB-3aYRkCauo5-VTetLwZyb5KI.jpg?width=960&crop=smart&auto=webp&s=c6319e87d50a5d9ff663486b6c7e47fddf50fef9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DOnSftqj4LGWEYBOAoB-3aYRkCauo5-VTetLwZyb5KI.jpg?width=1080&crop=smart&auto=webp&s=13d29cb41df7c840e0c89bc962eb99eda2b6ecb4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DOnSftqj4LGWEYBOAoB-3aYRkCauo5-VTetLwZyb5KI.jpg?auto=webp&s=e0b53fcad135da77b145fe62f5f31724ee0dbd31', 'width': 1200}, 'variants': {}}]} |
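For what it's worth, the command quoted in the post sets the cuBLAS flags rather than the CLBlast one. A sketch of the reinstall with the CLBlast CMake option instead, driven from Python so the environment variables reach the build; the CLBlast path is a placeholder for wherever CLBlastConfig.cmake actually lives:

```python
import os
import subprocess
import sys

env = dict(
    os.environ,
    CMAKE_ARGS="-DLLAMA_CLBLAST=on",                   # build llama.cpp with CLBlast, not cuBLAS
    FORCE_CMAKE="1",
    CLBlast_DIR=r"C:\libs\CLBlast\lib\cmake\CLBlast",  # hypothetical: folder containing CLBlastConfig.cmake
)
subprocess.run(
    [sys.executable, "-m", "pip", "install", "llama-cpp-python", "--force-reinstall", "--no-cache-dir"],
    env=env,
    check=True,
)
```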
Did anyone try fine-tuning LLaMA using the Reddit dataset? | 1 | I remember playing around with the Reddit dataset a couple of years ago and it was huge. It is also somewhat conversational in nature so wouldn’t it make sense to use it? Did someone already try this? | 2023-07-16T18:20:49 | https://www.reddit.com/r/LocalLLaMA/comments/151ddst/did_anyone_try_finetuning_llama_using_the_reddit/ | Soli__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 151ddst | false | null | t3_151ddst | /r/LocalLLaMA/comments/151ddst/did_anyone_try_finetuning_llama_using_the_reddit/ | false | false | self | 1 | null |
Does Claude 2 have a message limit now? | 1 | I have been using Claude+ and recently Claude 2, and unfortunately Claude 2 seem to have a message limit now.
I have been using Claude AIs for story writing and RPGs. They are better than GPT 3.5 and in my experience, pretty close to GPT 4 in terms of logic deduction, and definitely excels GPT 4 in terms of context window.
Too bad there is a message limit. Didn’t see this info anywhere else though. Does anyone know?
I have used Vicuna 13b locally for story writing. My experience is that the context window is a huge bottleneck. | 2023-07-16T18:21:55 | SwimmingSpeed3577 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 151dev9 | false | null | t3_151dev9 | /r/LocalLLaMA/comments/151dev9/does_claude_2_have_a_message_limit_now/ | false | false | 1 | {'enabled': True, 'images': [{'id': '0Fb_TerjRnAwWAwv5mQw8XSWZOppHXxsV-0PJQ7ERU8', 'resolutions': [{'height': 28, 'url': 'https://preview.redd.it/gd0tns08ddcb1.jpg?width=108&crop=smart&auto=webp&s=24a4401e2313cb53424e7b48099e6f69c4eabd9b', 'width': 108}, {'height': 56, 'url': 'https://preview.redd.it/gd0tns08ddcb1.jpg?width=216&crop=smart&auto=webp&s=eccd542e8848d441b3eb43f2249d3cd3780c868d', 'width': 216}, {'height': 83, 'url': 'https://preview.redd.it/gd0tns08ddcb1.jpg?width=320&crop=smart&auto=webp&s=6122805900660604950b3a3be7b12f594970d11c', 'width': 320}, {'height': 166, 'url': 'https://preview.redd.it/gd0tns08ddcb1.jpg?width=640&crop=smart&auto=webp&s=2fa2630b5b6137ba2680b1adab85baa44ef3685e', 'width': 640}, {'height': 250, 'url': 'https://preview.redd.it/gd0tns08ddcb1.jpg?width=960&crop=smart&auto=webp&s=17be4e21d6322648a80acba130e9a4f184646e35', 'width': 960}, {'height': 281, 'url': 'https://preview.redd.it/gd0tns08ddcb1.jpg?width=1080&crop=smart&auto=webp&s=366b55d1b7f2db560d0dc388600ad2f82e2e4b4b', 'width': 1080}], 'source': {'height': 305, 'url': 'https://preview.redd.it/gd0tns08ddcb1.jpg?auto=webp&s=8ea30ff0801a1002088bb50f6e2062bd5a3e7c9f', 'width': 1170}, 'variants': {}}]} |
||
wtf? Bard still sucks | 1 | 2023-07-16T19:39:48 | limpoko | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 151fe3g | false | null | t3_151fe3g | /r/LocalLLaMA/comments/151fe3g/wtf_bard_still_sucks/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'lI-2__VCfyzUqrC9Zzu4NjRGV-xJpiCjx8yOZkjlR7s', 'resolutions': [{'height': 132, 'url': 'https://preview.redd.it/ici1oil2rdcb1.png?width=108&crop=smart&auto=webp&s=d4a72b24d6982328366dccdbeb92249628faad48', 'width': 108}, {'height': 264, 'url': 'https://preview.redd.it/ici1oil2rdcb1.png?width=216&crop=smart&auto=webp&s=fb1bce4187ba1b0d18d1ec6926ab14806356c8cf', 'width': 216}, {'height': 392, 'url': 'https://preview.redd.it/ici1oil2rdcb1.png?width=320&crop=smart&auto=webp&s=457c1534e11fdd06ef40726c101a4fb1d1e98803', 'width': 320}, {'height': 784, 'url': 'https://preview.redd.it/ici1oil2rdcb1.png?width=640&crop=smart&auto=webp&s=e53e885909d491e3c5d241b573c01e2b006dc542', 'width': 640}, {'height': 1176, 'url': 'https://preview.redd.it/ici1oil2rdcb1.png?width=960&crop=smart&auto=webp&s=da5e9d4ee443c2acd6588d920cb1e98aeeeaafb6', 'width': 960}, {'height': 1323, 'url': 'https://preview.redd.it/ici1oil2rdcb1.png?width=1080&crop=smart&auto=webp&s=5cdc5e77319d14dd957d04c3397a2aaab93d1fe9', 'width': 1080}], 'source': {'height': 1600, 'url': 'https://preview.redd.it/ici1oil2rdcb1.png?auto=webp&s=74e2de2b516ead451b67940fec004210802dc0b4', 'width': 1306}, 'variants': {}}]} |
|||
What do y'all think is a minimum build to run 40B and 65B models locally? | 1 | I just spent around $7000 on a Dell 7865 workstation. It's got a Threadripper Pro with 12 cores, a single A6000 (48GB VRAM), 128 GB system memory, and 4TB storage. I spent twice my budget and ended up with around half of what I was hoping for spec-wise.
I initially wanted to be able to tune and run 40B models locally, but I have dropped that expectation: I'll tune in the cloud, run locally, and teach myself LangChain. Even though I already pulled the trigger on this, I'd appreciate both critiques and advice.
I'd also be interested in hearing about your builds and how they're working for you | 2023-07-16T20:12:16 | https://www.reddit.com/r/LocalLLaMA/comments/151g8cd/what_do_yall_think_is_a_minimum_build_to_run_40b/ | robkkni | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 151g8cd | false | null | t3_151g8cd | /r/LocalLLaMA/comments/151g8cd/what_do_yall_think_is_a_minimum_build_to_run_40b/ | false | false | self | 1 | null |
Summarization of long transcriptions | 1 | Apologies, this is a long post. TL;DR: I want to produce summaries of RPG sessions. Please help.
I’d like to get a workflow going to summarize recordings of RPG sessions, and probably eventually do some other things with that data. I prefer to keep most of the data local, and I’m not concerned about speed (i.e. this doesn’t need to be real time or anything).
These will be 3-5 hour recordings of 4-5 people. I plan to use https://github.com/yinruiqing/pyannote-whisper to generate the transcript from the recording.
From there, though, I’m not quite sure how to proceed. First thought is that there’s probably a simple NLP library that I can pass it through to remove filler words and probably other non relevant information just to reduce the overall number of tokens, and also that would be an opportunity to catch some “manual commands” (we often get side tracked and as such keep a running list of topics to come back to later on, so one example would be “add this to the list: <statement>”).
Then I suppose I’d want to break the data down in to smaller chunks that would fit in to the context window of a model I can run locally (10th gen i7 with 80gb ram, and a 2070S 8gb GPU, on which I have ooba booga running) to produce a bunch of segmented summaries.
I'm guessing/intuiting that an 8k context model would work, feeding it about 6k tokens at a time and asking for a ~1k-token summary. I think I'd probably want around 500 tokens of overlap at the start and end of each segment so there's some context that helps keep things coherent.
Then I’ll have probably 6-12k tokens worth of summaries that will need to get consolidated in to a single larger summarization.
So my questions:
What’s the best kind of model to use for summarizing spoken words?
Are there any existing projects to do similar things?
Any advice about prompts to help a model understand what’s important vs not?
Would I benefit from a fine tuning of a model for this task (I’d use a by the hour rental for the training, obviously)?
Are there any data sets out there that are using transcripts like this that might be helpful for that?
If I was to use my own transcripts, any advice about how to format and structure that data to be useful for training?
What else should I be aware of that I haven’t mentioned? | 2023-07-16T20:22:13 | https://www.reddit.com/r/LocalLLaMA/comments/151ghjg/summarization_of_long_transcriptions/ | mrgreen4242 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 151ghjg | false | null | t3_151ghjg | /r/LocalLLaMA/comments/151ghjg/summarization_of_long_transcriptions/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '8vC3QlcRukNhHTEUGSgAlZj9tezGDi9FosZUa3iiiyc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/R3gZOHhS9Cfe-E0iMP8Oae6rOUjx1x3DHFsCb-_hClE.jpg?width=108&crop=smart&auto=webp&s=66a0cf97f56b5869c05d91e625de6278fc1df5a8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/R3gZOHhS9Cfe-E0iMP8Oae6rOUjx1x3DHFsCb-_hClE.jpg?width=216&crop=smart&auto=webp&s=015a9a4efb22c447d51a917a02904b99a30f3194', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/R3gZOHhS9Cfe-E0iMP8Oae6rOUjx1x3DHFsCb-_hClE.jpg?width=320&crop=smart&auto=webp&s=0c92a5d1d380f4eed6f1fe0911afdfb35158ab92', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/R3gZOHhS9Cfe-E0iMP8Oae6rOUjx1x3DHFsCb-_hClE.jpg?width=640&crop=smart&auto=webp&s=ac5b533b04193d87344131bff8cfaea67122777f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/R3gZOHhS9Cfe-E0iMP8Oae6rOUjx1x3DHFsCb-_hClE.jpg?width=960&crop=smart&auto=webp&s=40f072f2db72aaf18201d3b37cdd76748c2c28ee', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/R3gZOHhS9Cfe-E0iMP8Oae6rOUjx1x3DHFsCb-_hClE.jpg?width=1080&crop=smart&auto=webp&s=1d8b282e170735f032c6726b0c2ac81c7ae3e84e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/R3gZOHhS9Cfe-E0iMP8Oae6rOUjx1x3DHFsCb-_hClE.jpg?auto=webp&s=26aa6b5febb8b3dc211815fa599edec4e4b4fbaa', 'width': 1200}, 'variants': {}}]} |
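A rough sketch of the chunk-then-consolidate step described in the post above, using whitespace-separated words as a stand-in for tokens and a placeholder summarize() that should be replaced by a call to whatever local model is used; the file name is hypothetical:

```python
def chunks(words, size=6000, overlap=500):
    step = size - overlap
    return [words[i:i + size] for i in range(0, max(len(words) - overlap, 1), step)]

def summarize(text: str, budget: int = 1000) -> str:
    # Placeholder: send `text` to the local LLM and ask for a summary of roughly `budget` tokens.
    return f"[~{budget}-token summary of a {len(text.split())}-word chunk]"

words = open("session_transcript.txt", encoding="utf-8").read().split()
partials = [summarize(" ".join(c)) for c in chunks(words)]  # per-segment summaries
final = summarize(" ".join(partials), budget=2000)          # consolidate into one overall summary
print(final)
```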
Stochastically Subsampled Self-Attention (SSA) | 1 | 2023-07-16T20:34:58 | https://medium.com/@m.h.nakif.bd.0/transformers-just-got-a-lot-more-efficient-and-smarter-92e3e3e4bcfa | Balance- | medium.com | 1970-01-01T00:00:00 | 0 | {} | 151gt6v | false | null | t3_151gt6v | /r/LocalLLaMA/comments/151gt6v/stochastically_subsampled_selfattention_ssa/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Qst3dMARWiWZyemxG04CSTvaXACh47WsjagIoVhFM6Y', 'resolutions': [{'height': 41, 'url': 'https://external-preview.redd.it/c_ysxkjl8SZElNLU-GWSYNuxjGG3newRhfmF6vFMJL4.jpg?width=108&crop=smart&auto=webp&s=eae8f937e20cc26176bdda7be71eb3fa60ae855a', 'width': 108}, {'height': 83, 'url': 'https://external-preview.redd.it/c_ysxkjl8SZElNLU-GWSYNuxjGG3newRhfmF6vFMJL4.jpg?width=216&crop=smart&auto=webp&s=e5a8cead74b5775ae2822391a1441d271a5a8df8', 'width': 216}, {'height': 123, 'url': 'https://external-preview.redd.it/c_ysxkjl8SZElNLU-GWSYNuxjGG3newRhfmF6vFMJL4.jpg?width=320&crop=smart&auto=webp&s=9f7a50181424fced9c74046496c0f8ba6a06b3a1', 'width': 320}, {'height': 246, 'url': 'https://external-preview.redd.it/c_ysxkjl8SZElNLU-GWSYNuxjGG3newRhfmF6vFMJL4.jpg?width=640&crop=smart&auto=webp&s=329431a27200594e4b6fada88e51d1b18f0ef2d3', 'width': 640}, {'height': 370, 'url': 'https://external-preview.redd.it/c_ysxkjl8SZElNLU-GWSYNuxjGG3newRhfmF6vFMJL4.jpg?width=960&crop=smart&auto=webp&s=da4fafdc9f85aef7e9d36df1d96fedf8f5c8fac5', 'width': 960}, {'height': 416, 'url': 'https://external-preview.redd.it/c_ysxkjl8SZElNLU-GWSYNuxjGG3newRhfmF6vFMJL4.jpg?width=1080&crop=smart&auto=webp&s=0a686c9a7271990d7bd54b119437d283b944c4cb', 'width': 1080}], 'source': {'height': 463, 'url': 'https://external-preview.redd.it/c_ysxkjl8SZElNLU-GWSYNuxjGG3newRhfmF6vFMJL4.jpg?auto=webp&s=0534450c3913bdd8d67803104090c6dc38c5e390', 'width': 1200}, 'variants': {}}]} |
||
An assistant that thinks he is in a call center - then forgets to switch off his phone... | 1 | 2023-07-17T01:00:05 | FPham | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 151n8kg | false | null | t3_151n8kg | /r/LocalLLaMA/comments/151n8kg/an_assistant_that_thinks_he_is_in_a_call_center/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'AqZwvNPMglYTfm8OBV-I5EBZ2ms9fgALsRHqeTGeBJo', 'resolutions': [{'height': 126, 'url': 'https://preview.redd.it/e29bpod2cfcb1.jpg?width=108&crop=smart&auto=webp&s=b49aea1cb4ce504ad4ad28f8d2a2da8403e924fa', 'width': 108}, {'height': 253, 'url': 'https://preview.redd.it/e29bpod2cfcb1.jpg?width=216&crop=smart&auto=webp&s=00506e28a29dc581de5ffad98e0b7d267052f527', 'width': 216}, {'height': 374, 'url': 'https://preview.redd.it/e29bpod2cfcb1.jpg?width=320&crop=smart&auto=webp&s=3f635b7effe57cd5bbe6d9248e0b5ddb932e1ffb', 'width': 320}, {'height': 749, 'url': 'https://preview.redd.it/e29bpod2cfcb1.jpg?width=640&crop=smart&auto=webp&s=37a780d4104d8db732639109d13bf1dd75291714', 'width': 640}], 'source': {'height': 834, 'url': 'https://preview.redd.it/e29bpod2cfcb1.jpg?auto=webp&s=ca5828545eb6308dbbe4cdcedb01adff7de81761', 'width': 712}, 'variants': {}}]} |
|||
What determines the loading speed of a model? | 1 | Hello guys,
These days I am playing around with MetaIX/OpenAssistant-Llama-30b-4bit and TheBloke/wizardLM-13B-1.0-GPTQ in [**text-generation-webui**](https://github.com/oobabooga/text-generation-webui).
Loading the 13B model takes a few minutes, which is acceptable, but loading the 30B 4-bit model is extremely slow; it took around 20 minutes.
Yes, the model sits on a 5-year-old disk, but neither my RAM nor my disk is fully utilized. Or does loading a 30B 4-bit model really take that long?
​ | 2023-07-17T01:23:12 | https://www.reddit.com/r/LocalLLaMA/comments/151nr90/what_will_decide_the_loading_speed_of_a_model/ | JohnSmith004 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 151nr90 | false | null | t3_151nr90 | /r/LocalLLaMA/comments/151nr90/what_will_decide_the_loading_speed_of_a_model/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'dq44VpF7VC1fiqYGfRl7WdR8cl3rgxGf0qmOz7_-ioI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JzBT69J3hdSlsyjuum-qFnhdxt8zmK1R4fnwc4OOaho.jpg?width=108&crop=smart&auto=webp&s=3982d4e5053900afd007800efd82613f97257654', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/JzBT69J3hdSlsyjuum-qFnhdxt8zmK1R4fnwc4OOaho.jpg?width=216&crop=smart&auto=webp&s=72830b46c481d4d69fb829ecef65feb91446d6f2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/JzBT69J3hdSlsyjuum-qFnhdxt8zmK1R4fnwc4OOaho.jpg?width=320&crop=smart&auto=webp&s=0d4f165baa74246b072a003df69a98458ac87b58', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/JzBT69J3hdSlsyjuum-qFnhdxt8zmK1R4fnwc4OOaho.jpg?width=640&crop=smart&auto=webp&s=8ac6045cbd4e67fe743a6a0d9b9c56b1190d9f39', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/JzBT69J3hdSlsyjuum-qFnhdxt8zmK1R4fnwc4OOaho.jpg?width=960&crop=smart&auto=webp&s=ea96076a97c8cd2b8d50d1b4423a3404c44303ee', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/JzBT69J3hdSlsyjuum-qFnhdxt8zmK1R4fnwc4OOaho.jpg?width=1080&crop=smart&auto=webp&s=a6d6599852d177cd8986c6fc0bddb3966b9833c7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/JzBT69J3hdSlsyjuum-qFnhdxt8zmK1R4fnwc4OOaho.jpg?auto=webp&s=9ae8db779dd4ca50d628b810f1e198c543d6d13e', 'width': 1200}, 'variants': {}}]} |
Llama 8k context length on V100 | 1 | I checked out the blog [Extending Context is Hard | kaiokendev.github.io](https://kaiokendev.github.io/context) and the paper from Meta [2306.15595.pdf (arxiv.org)](https://arxiv.org/pdf/2306.15595.pdf), but I was wondering if we also have code for position interpolation for Llama models. They say it's just adding a line (t = t/4) in the LlamaRotaryEmbedding class, but my question is: don't we need to change max_position_embeddings to 8192 and max_model_length to 8192 as well?
Also, I only have V100 GPUs (multiple nodes, each node with 8 GPU), how can I use local attention or any other trick to fix the memory issue? | 2023-07-17T01:32:41 | https://www.reddit.com/r/LocalLLaMA/comments/151nykw/llama_8k_context_length_on_v100/ | HopeElephant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 151nykw | false | null | t3_151nykw | /r/LocalLLaMA/comments/151nykw/llama_8k_context_length_on_v100/ | false | false | self | 1 | null |
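On the first question above: the commonly shared approach is to monkey-patch the rotary embedding before the model is instantiated, shrinking the position indices by the interpolation factor and pre-computing the cos/sin cache for the longer window. A rough sketch, assuming the HuggingFace transformers LlamaRotaryEmbedding interface from mid-2023 (the exact internals may differ between versions); max_position_embeddings in the model config should also be raised to 8192 so nothing truncates inputs earlier:

```python
import torch
import transformers.models.llama.modeling_llama as llama_mod

class InterpolatedRotaryEmbedding(torch.nn.Module):
    # Linear position interpolation: 8192 positions squeezed into the original 2048 range (scale 1/4).
    def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None):
        super().__init__()
        scale = 0.25
        self.max_seq_len_cached = 8192
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float().to(device) / dim))
        t = torch.arange(self.max_seq_len_cached, device=device, dtype=inv_freq.dtype) * scale
        freqs = torch.einsum("i,j->ij", t, inv_freq)
        emb = torch.cat((freqs, freqs), dim=-1)
        self.register_buffer("cos_cached", emb.cos()[None, None, :, :], persistent=False)
        self.register_buffer("sin_cached", emb.sin()[None, None, :, :], persistent=False)

    def forward(self, x, seq_len=None):
        return (
            self.cos_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
            self.sin_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
        )

# Apply the patch before loading the model with AutoModelForCausalLM.from_pretrained(...).
llama_mod.LlamaRotaryEmbedding = InterpolatedRotaryEmbedding
```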
MoE locally, is it possible? | 72 | Regarding the leaked info about the GPT-4 architecture, where it reportedly uses a Mixture of Experts: would it be possible to have small experts (13B, for example) for multiple subjects using LLaMA, so we could take advantage of multiple 13B models, each one an expert in some area? | 2023-07-17T02:08:55 | https://www.reddit.com/r/LocalLLaMA/comments/151oq99/moe_locally_is_it_possible/ | JKaique2501 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 151oq99 | false | null | t3_151oq99 | /r/LocalLLaMA/comments/151oq99/moe_locally_is_it_possible/ | false | false | self | 72 | null
IA3 - New LoRA-like training is out, promises full fine-tuning performance | 1 | This has already been added to PEFT, so implementation should be trivial, although whether it can be combined with bitsandbytes (B&B) 4-bit is still in question.
# IA3
This conceptual guide gives a brief overview of [IA3](https://arxiv.org/abs/2205.05638), a parameter-efficient fine tuning technique that is intended to improve over [LoRA](https://huggingface.co/docs/peft/conceptual_guides/lora).
To make fine-tuning more efficient, IA3 (Infused Adapter by Inhibiting and Amplifying Inner Activations) rescales inner activations with learned vectors. These learned vectors are injected in the attention and feedforward modules in a typical transformer-based architecture. These learned vectors are the only trainable parameters during fine-tuning, and thus the original weights remain frozen. Dealing with learned vectors (as opposed to learned low-rank updates to a weight matrix like LoRA) keeps the number of trainable parameters much smaller.
Being similar to LoRA, IA3 carries many of the same advantages:
* IA3 makes fine-tuning more efficient by drastically reducing the number of trainable parameters. (For T0, an IA3 model only has about 0.01% trainable parameters, while even LoRA has > 0.1%)
* The original pre-trained weights are kept frozen, which means you can have multiple lightweight and portable IA3 models for various downstream tasks built on top of them.
* Performance of models fine-tuned using IA3 is comparable to the performance of fully fine-tuned models.
* IA3 does not add any inference latency because adapter weights can be merged with the base model.
In principle, IA3 can be applied to any subset of weight matrices in a neural network to reduce the number of trainable parameters. Following the authors’ implementation, IA3 weights are added to the key, value and feedforward layers of a Transformer model. Given the target layers for injecting IA3 parameters, the number of trainable parameters can be determined based on the size of the weight matrices. | 2023-07-17T02:18:02 | https://www.reddit.com/r/LocalLLaMA/comments/151ox4v/ia3_new_loralike_training_is_out_promises_full/ | FPham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 151ox4v | false | null | t3_151ox4v | /r/LocalLLaMA/comments/151ox4v/ia3_new_loralike_training_is_out_promises_full/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
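Since IA3 is already in PEFT, wiring it up looks almost identical to a LoRA run. A rough sketch, assuming the peft IA3Config API; the checkpoint name and the LLaMA module names targeted here are illustrative choices, not the only valid ones:

```python
from transformers import AutoModelForCausalLM
from peft import IA3Config, get_peft_model

model = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_7b")  # example base model

ia3_config = IA3Config(
    task_type="CAUSAL_LM",
    target_modules=["k_proj", "v_proj", "down_proj"],  # attention K/V plus the MLP down-projection
    feedforward_modules=["down_proj"],                 # tells PEFT which targets are feedforward layers
)
model = get_peft_model(model, ia3_config)
model.print_trainable_parameters()  # expect a tiny fraction of the base parameter count
# From here, train with the usual Trainer / SFT loop exactly as you would with a LoRA model.
```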
Optimizing the context abilities of LLMs by using a rolling summary | 1 | I've been messing around with KoboldCPP on my laptop. I don't know very much about the technology, and I had only used one of the original llama 7B quants so I was really impressed to see the conversational ability of the new models and frontends. I ran into a problem, though, where the model would update the prompts with the entire conversation to keep track of context, and this would eventually cause it to take a while to generate responses.
This gave me an idea: What if, instead of resubmitting the entire conversation, you just had a summary of the conversation which was revised after each response? You could use another LLM to read the summary, read the last prompt and response, and then update the summary. It might have to be trained specifically for the task, so that it's good enough at being both comprehensive and concise.
Here's an example: You're playing a D&D type roleplay game with the AI. The prompt so far is something like "There's a party of five adventurers, you're in a forest, the first character is a \[...\], you just encountered \[...\]". You tell the AI that you pull out your spell book and cast some spell that you just made up. It amends the summary after that plays out: "\[...\], the party encountered \[...\] in the forest, the player used a spell which \[...\]".
Because the second model is always rewriting the summary, it would basically be keeping track of the conversation for a minimum number of tokens. It's much easier to process a one-sentence summary of the last prompt and response than it is to process the whole prompt and response. If it was good enough at being concise, it would conceivably consolidate the past summaries once a part of the conversation had passed.
I don't know how the memory on these models works. Once it hits 2k tokens, does it just cut off the beginning of the conversation? If so, this system could maybe be used to keep track of a much larger context. As long as you can ask an LLM to write something under a certain character limit, it could rewrite the summary to make certain parts more concise and you'd just gradually lose resolution to your context from the AI being more sparse with its recollection of earlier parts of the conversation, instead of losing those parts altogether. I don't know if LLMs are particularly good at writing a summary under a certain character limit, though, or if they just cut at a certain point. You would basically have to tell it to end the summary with a description of the most recent events, but still keep it under a certain character limit, which might not work because (AFAIK) LLMs generate from the beginning to the end. I don't really know how any of this works, so I'd love to hear anyone's thoughts about this. | 2023-07-17T04:48:32 | https://www.reddit.com/r/LocalLLaMA/comments/151rxol/optimizing_the_context_abilities_of_llms_by_using/ | RustRedditAlt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 151rxol | false | null | t3_151rxol | /r/LocalLLaMA/comments/151rxol/optimizing_the_context_abilities_of_llms_by_using/ | false | false | self | 1 | null |
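A minimal sketch of the rolling-summary loop described in the post above, assuming koboldcpp's KoboldAI-compatible HTTP API on its default port (the endpoint, field names, and word budget are assumptions to check against your local setup):

```python
import requests

API = "http://localhost:5001/api/v1/generate"

def generate(prompt: str, max_length: int = 240) -> str:
    r = requests.post(API, json={"prompt": prompt, "max_length": max_length}, timeout=300)
    return r.json()["results"][0]["text"].strip()

summary = "The adventure has just begun; nothing has happened yet."
while True:
    user_msg = input("You: ")
    reply = generate(f"Story summary so far: {summary}\n\nPlayer: {user_msg}\nGame master:")
    print("GM:", reply)
    # Second pass: fold the newest exchange into the summary instead of resending the whole chat.
    summary = generate(
        "Rewrite the running summary in under 200 words, keeping every plot-relevant fact.\n"
        f"Current summary: {summary}\n"
        f"Latest exchange:\nPlayer: {user_msg}\nGame master: {reply}\nNew summary:"
    )
```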
Any WIP or available projects for web plugins for models like WizardCoder? (I searched) | 1 | Like ChatGPT's web-search plugin, where the model searches the internet to ground its responses, is there any work in progress, planned, or any local models/projects available that I can use?
I know AgentGPT runs on a similar premise; however, it uses the ChatGPT API, which is not local.
I think this work would push LLaMAs into another stratosphere entirely!
Coupling WizardCoder with BeautifulSoup, and obviously a larger context size (for those with the means to run a compatible model, obviously, lol), could be insanely useful!
While I understand that current work is rightly focused on making these models usable and useful, I think some work on incorporating BS4 and Selenium could be really valuable.
*^(Or, I have missed the work that already has been done!)* | 2023-07-17T05:24:51 | https://www.reddit.com/r/LocalLLaMA/comments/151sm9m/any_wip_or_available_projects_for_web_plugins_for/ | card_chase | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 151sm9m | false | null | t3_151sm9m | /r/LocalLLaMA/comments/151sm9m/any_wip_or_available_projects_for_web_plugins_for/ | false | false | self | 1 | null |
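A bare-bones version of the BeautifulSoup coupling suggested above: fetch a page, flatten it to text, and paste an excerpt into the prompt that goes to the local model (the URL is just an example, and real use would add a search step and smarter truncation):

```python
import requests
from bs4 import BeautifulSoup

def page_excerpt(url: str, max_chars: int = 4000) -> str:
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
    return text[:max_chars]

question = "What is a large language model?"
prompt = (
    "Answer the question using only the page excerpt below.\n\n"
    f"Excerpt:\n{page_excerpt('https://en.wikipedia.org/wiki/Large_language_model')}\n\n"
    f"Question: {question}\nAnswer:"
)
print(prompt[:500])
```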
I put together a video for anyone interested in running their own LLM’s on the cloud with oobabooga | 1 | [removed] | 2023-07-17T07:15:13 | https://www.reddit.com/r/LocalLLaMA/comments/151uoo6/i_put_together_a_video_for_anyone_interested_in/ | sbalani | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 151uoo6 | false | null | t3_151uoo6 | /r/LocalLLaMA/comments/151uoo6/i_put_together_a_video_for_anyone_interested_in/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'BkEiQtm_ZShCHQrow8sLNC_Rva0eImmbf1g6Apv881A', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/eJb3gIPxXIE_e8vANBOZ9_aKDrTWetn-xBof4Q_SeF4.jpg?width=108&crop=smart&auto=webp&s=29cb9bada4b41cc16b8f0206f7fd9844a531baf9', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/eJb3gIPxXIE_e8vANBOZ9_aKDrTWetn-xBof4Q_SeF4.jpg?width=216&crop=smart&auto=webp&s=9a9903425dd1f495eba1cd38d6fa705a5af55e40', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/eJb3gIPxXIE_e8vANBOZ9_aKDrTWetn-xBof4Q_SeF4.jpg?width=320&crop=smart&auto=webp&s=80bc64ef856098c8d58ac29f35cd1b2d71ee27ad', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/eJb3gIPxXIE_e8vANBOZ9_aKDrTWetn-xBof4Q_SeF4.jpg?auto=webp&s=91aef52085065ae08e858a1f65c044cab30ef306', 'width': 480}, 'variants': {}}]} |
Will a commercial LLaMA mean less publicly released models? | 1 | With the news that a new and commercial version of LLaMA will be released soon, there'll be a mad race to capitalize on this if the next model will be as good as ChatGPT like [Yann LeCun claims](https://www.engadget.com/meta-could-soon-make-its-ai-model-available-for-commercial-projects-114021749.html). A dozen websites and startups will pop up overnight with their finetuned models and marketing.
These models can require spending a lot of time and money on gathering datasets and training, and not everyone is going to be satisfied with internet points alone. If someone trains an excellent coding model, or a luscious ERP model, there's huge demand for that and a lot of profit to be made.
I'm a little concerned about the precedent this may set. If the new LLaMA is as good as they're hyping it to be, do you think there will be a shift from publicly released models toward a trend of closed releases behind subscriptions? | 2023-07-17T07:37:09 | https://www.reddit.com/r/LocalLLaMA/comments/151v2gr/will_a_commercial_llama_mean_less_publicly/ | WorldlyJob3111 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 151v2gr | false | null | t3_151v2gr | /r/LocalLLaMA/comments/151v2gr/will_a_commercial_llama_mean_less_publicly/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'CshXXrv8exPcHnz-sGxZ23q4wQtcF0A6lFPz2CJbz98', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/odCzyg6WvK3Uf8MCzIXFOYi3ErgN2vOs2mnkP1Crfmc.jpg?width=108&crop=smart&auto=webp&s=e65739e39cf7a49faab68d560a5bf54161fa9579', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/odCzyg6WvK3Uf8MCzIXFOYi3ErgN2vOs2mnkP1Crfmc.jpg?width=216&crop=smart&auto=webp&s=ebb9e9e7c45baabe9d3f179933d0fa0077408e99', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/odCzyg6WvK3Uf8MCzIXFOYi3ErgN2vOs2mnkP1Crfmc.jpg?width=320&crop=smart&auto=webp&s=256d4c1d431f8260594653614b7c4f718da24a10', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/odCzyg6WvK3Uf8MCzIXFOYi3ErgN2vOs2mnkP1Crfmc.jpg?width=640&crop=smart&auto=webp&s=5482ac3572d036752b2423c3a826570baf49c7be', 'width': 640}], 'source': {'height': 420, 'url': 'https://external-preview.redd.it/odCzyg6WvK3Uf8MCzIXFOYi3ErgN2vOs2mnkP1Crfmc.jpg?auto=webp&s=50de8c62f53d0e6e7e7dfbd2fd83e50df4f8ad14', 'width': 800}, 'variants': {}}]} |
My self-trained LoRA doesn't affect the model at all it seems :( | 1 | Engine: latest llama.cpp
Model: airoboros-7b (16bit)
For training: oobabooga, and a ~ 1MB text file
llama.cpp says it can only apply LoRAs to 16-bit models, so I wasn't allowed to quantize the model to 8-bit; fortunately I have enough VRAM.
I trained 10 epochs (oobabooga suggested 3 as default setting), which took several hours.
Then I converted the airoboros-7b 16bit model to ggml, and I converted the LoRA to ggml, and ran llama.cpp's 'main' binary with both, -m <model>, --lora <lora>.
It says that it loads both correctly, but whenever I ask anything related to the training text, the output shows absolutely zero relation or change. It's as if I didn't use the LoRA at all. | 2023-07-17T07:37:23 | https://www.reddit.com/r/LocalLLaMA/comments/151v2mb/my_selftrained_lora_doesnt_affect_the_model_at/ | redzorino | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 151v2mb | false | null | t3_151v2mb | /r/LocalLLaMA/comments/151v2mb/my_selftrained_lora_doesnt_affect_the_model_at/ | false | false | self | 1 | null
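One way to narrow this down (a general sanity check, not something from the post): load the 16-bit model plus the LoRA in transformers/peft and compare outputs before any ggml conversion. If the two outputs are already identical, the adapter itself never took effect and llama.cpp isn't the culprit. Paths below are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_dir = "path/to/airoboros-7b"   # 16-bit HF checkpoint
lora_dir = "path/to/trained-lora"   # output folder of the LoRA training run

tok = AutoTokenizer.from_pretrained(base_dir)
base = AutoModelForCausalLM.from_pretrained(base_dir, device_map="auto")

prompt = "A question taken from the training text"
inputs = tok(prompt, return_tensors="pt").to(base.device)

plain = tok.decode(base.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True)

tuned_model = PeftModel.from_pretrained(base, lora_dir)  # attach the adapter
tuned = tok.decode(tuned_model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True)

print("BASE:", plain)
print("LORA:", tuned)  # identical outputs mean the LoRA weights had no effect
```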
lmsys/vicuna-13b-v1.3 is slow relative to lmsys/vicuna-13b-v1.1? Or am I tripping? | 1 | Hi people, when doing inference with lmsys/vicuna-13b-v1.3, the below question takes about 1+ minute, while lmsys/vicuna-13b-v1.1 takes 11 seconds. lmsys/vicuna-13b-v1.3 seems to be much slower the longer the question gets. Can anyone confirm whether it's a me problem or a them problem? Thanks!

The long question I used, generated by ChatGPT:
How might the world have transformed over the past two years, from 2021 to 2023, in terms of technological advancements, global political landscapes, scientific breakthroughs, climate change responses, socioeconomic disparities, and cultural shifts, and how have these changes interplayed to shape the current state of human civilization and pave the way for the future, considering the challenges faced and lessons learned from the ongoing pandemic, and how have individuals, governments, and international organizations collaborated to address and mitigate these complex issues, fostering international cooperation, safeguarding human rights, and promoting sustainable development, while also examining the potential risks and ethical implications arising from the rapid pace of innovation and the ever-increasing interconnectedness of our modern world? | 2023-07-17T08:40:16 | https://www.reddit.com/r/LocalLLaMA/comments/151w5ro/lmsysvicuna13bv13_is_relatively_slow_to/ | ToeAdministrative493 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 151w5ro | false | null | t3_151w5ro | /r/LocalLLaMA/comments/151w5ro/lmsysvicuna13bv13_is_relatively_slow_to/ | false | false | self | 1 | null |
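For anyone who wants to reproduce the comparison on their own hardware, a rough tokens-per-second sketch using plain transformers (the model ids are the two named above; everything else is generic and not tied to any particular serving stack):

```python
import time
from transformers import AutoModelForCausalLM, AutoTokenizer


def tokens_per_second(model_id: str, prompt: str, new_tokens: int = 128) -> float:
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    start = time.time()
    out = model.generate(**inputs, max_new_tokens=new_tokens)
    elapsed = time.time() - start
    generated = out.shape[-1] - inputs["input_ids"].shape[-1]
    return generated / elapsed


# Run both versions with the same long question and compare:
# tokens_per_second("lmsys/vicuna-13b-v1.1", long_question)
# tokens_per_second("lmsys/vicuna-13b-v1.3", long_question)
```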
Is it not allowed to talk about LoRAs here? | 1 | [removed] | 2023-07-17T09:20:59 | https://www.reddit.com/r/LocalLLaMA/comments/151wv4s/is_it_not_allowed_to_talk_about_loras_here/ | redzorino | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 151wv4s | false | null | t3_151wv4s | /r/LocalLLaMA/comments/151wv4s/is_it_not_allowed_to_talk_about_loras_here/ | false | false | default | 1 | null |
How to contribute to datasets | 1 | Hi,
I have a ChatGPT premium subscription. I'd like to know how I can contribute good-quality chats to existing datasets, and which dataset I should contribute to.
Any suggestions?
Cheers! | 2023-07-17T09:23:07 | https://www.reddit.com/r/LocalLLaMA/comments/151wwit/how_to_contribute_to_datasets/ | drr21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 151wwit | false | null | t3_151wwit | /r/LocalLLaMA/comments/151wwit/how_to_contribute_to_datasets/ | false | false | self | 1 | null |
Grab some popcorn, OpenOrca is around the corner! | 1 | Yesterday a preview of OpenOrca was published on HF (OpenChat V2 x OpenOrca dataset). OpenChat V2 is a leading open-source model and last night it was fine-tuned on Open-Orca's data.
Here is a [preview](https://huggingface.co/openchat/openchat_v2_openorca_preview) of the model (trained on 10% of the data), and it's already breaking records in some benchmarks. It still lags significantly on MMLU, though; let's see how the fully trained model performs.
[Source](https://twitter.com/Yampeleg/status/1680567135293014016?t=TgF2QKML3t40rPDVkrGteA&s=19) | 2023-07-17T09:54:34 | Kujamara | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 151xgor | false | null | t3_151xgor | /r/LocalLLaMA/comments/151xgor/grab_some_popcorn_openorca_is_around_the_corner/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'F9v9S4z0DWnt8BcB1ub5JwsvRLq9D1MAYFpeED-c2RE', 'resolutions': [{'height': 31, 'url': 'https://preview.redd.it/am165eamzhcb1.png?width=108&crop=smart&auto=webp&s=a1ed5e89afdf46da0ab868f30c9450681fddb17f', 'width': 108}, {'height': 63, 'url': 'https://preview.redd.it/am165eamzhcb1.png?width=216&crop=smart&auto=webp&s=82712e731511e4bc898586ac744b1512532d4cd5', 'width': 216}, {'height': 94, 'url': 'https://preview.redd.it/am165eamzhcb1.png?width=320&crop=smart&auto=webp&s=963cbf0f4b8ffa391ddbd139fee6f5f32f609331', 'width': 320}, {'height': 189, 'url': 'https://preview.redd.it/am165eamzhcb1.png?width=640&crop=smart&auto=webp&s=71003ae09a62726b799556a85b7fe22a356e8362', 'width': 640}], 'source': {'height': 228, 'url': 'https://preview.redd.it/am165eamzhcb1.png?auto=webp&s=100e011a6fe90968b2c4421bd47b7aa9b4d1932a', 'width': 772}, 'variants': {}}]} |
OpenOrca is around the corner! | 1 | Yesterday a [preview](https://huggingface.co/openchat/openchat_v2_openorca_preview) of the model was published on HF (trained on 10% of data).
It's already breaking records in some benchmarks but still lags significantly on Massive Multitask Language Understanding (MMLU). Let's see how the fully trained model performs.
[Source](https://twitter.com/Yampeleg/status/1680567135293014016?t=YIzA4Fgfu1NwSTgPGy-3mw&s=19) | 2023-07-17T10:05:09 | https://www.reddit.com/r/LocalLLaMA/comments/151xo2r/openorca_is_around_the_corner/ | Kujamara | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 151xo2r | false | null | t3_151xo2r | /r/LocalLLaMA/comments/151xo2r/openorca_is_around_the_corner/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'raNRLaNtmM7-4ECR7dmO-Se0SStGCP71rDMl2S85or8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/B6L7VEvp71cr1qeeqFxahafzpHmFF2OEu-3kQWv3BYc.jpg?width=108&crop=smart&auto=webp&s=50c342a8628109b005c723c671ea661d50f258b2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/B6L7VEvp71cr1qeeqFxahafzpHmFF2OEu-3kQWv3BYc.jpg?width=216&crop=smart&auto=webp&s=e21882a25172f074ec19ac50798ba604b302e910', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/B6L7VEvp71cr1qeeqFxahafzpHmFF2OEu-3kQWv3BYc.jpg?width=320&crop=smart&auto=webp&s=dd9b2d97b4c9f013a3a4ac5e8c753ccda1db2302', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/B6L7VEvp71cr1qeeqFxahafzpHmFF2OEu-3kQWv3BYc.jpg?width=640&crop=smart&auto=webp&s=9cc66a4388ac9b875fceae0befb82b035f764d33', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/B6L7VEvp71cr1qeeqFxahafzpHmFF2OEu-3kQWv3BYc.jpg?width=960&crop=smart&auto=webp&s=fa8d30df4eec9c2ee42b6ae06b19e3b4f21f0a6f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/B6L7VEvp71cr1qeeqFxahafzpHmFF2OEu-3kQWv3BYc.jpg?width=1080&crop=smart&auto=webp&s=afa6aa510868fa875b1de36514a785c178678b2b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/B6L7VEvp71cr1qeeqFxahafzpHmFF2OEu-3kQWv3BYc.jpg?auto=webp&s=9ad4221835617c51cf8f225d19127971c06a8a07', 'width': 1200}, 'variants': {}}]} |
testing llama on raspberry pi for various zombie apocalypse style situations. | 1 | 2023-07-17T10:11:46 | Purple_Session_6230 | i.redd.it | 1970-01-01T00:00:00 | 0 | {'gid_3': 1} | 151xsio | false | null | t3_151xsio | /r/LocalLLaMA/comments/151xsio/testing_llama_on_raspberry_pi_for_various_zombie/ | false | false | 1 | {'enabled': True, 'images': [{'id': '54N-ft1IPTZHsVEE8CRLtsvDcFeLZ-we8Q_vgKkZfKQ', 'resolutions': [{'height': 172, 'url': 'https://preview.redd.it/gryad6oo2icb1.png?width=108&crop=smart&auto=webp&s=c4cd7580dcc8a01b138b043003db43e156b37bb7', 'width': 108}, {'height': 345, 'url': 'https://preview.redd.it/gryad6oo2icb1.png?width=216&crop=smart&auto=webp&s=17b495784937c50ea582821615ece44dcbe42c5d', 'width': 216}, {'height': 512, 'url': 'https://preview.redd.it/gryad6oo2icb1.png?width=320&crop=smart&auto=webp&s=4eed6c8525f27eea7daca2e54771ce94027a63f3', 'width': 320}, {'height': 1024, 'url': 'https://preview.redd.it/gryad6oo2icb1.png?width=640&crop=smart&auto=webp&s=9746c3978be2f17b8e0db92550aed0fc441b660e', 'width': 640}], 'source': {'height': 1280, 'url': 'https://preview.redd.it/gryad6oo2icb1.png?auto=webp&s=422ecbfc44e7c11d813578b2c013c664fd0a8ce1', 'width': 800}, 'variants': {}}]} |
After loading the LLM model, how to set the current (today's) date in files and folders ? | 1 | [removed] | 2023-07-17T11:51:59 | https://www.reddit.com/r/LocalLLaMA/comments/151zrq0/after_loading_the_llm_model_how_to_set_the/ | TurbulentDelivery799 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 151zrq0 | false | null | t3_151zrq0 | /r/LocalLLaMA/comments/151zrq0/after_loading_the_llm_model_how_to_set_the/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'G1nl_IUI_4T90MWS7hPfvajkGrGVtVlBe7-hikDbCJE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=108&crop=smart&auto=webp&s=3723e81c3dda45706b3275533d688762ed693e74', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=216&crop=smart&auto=webp&s=aa30800fed77ed23fa00ad0117127ddab537da13', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=320&crop=smart&auto=webp&s=8648f8481c1a71b34628337380bbd5ab61ae4889', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=640&crop=smart&auto=webp&s=054a654f2e90b527e2a0e5c2c3fc47ead397dc54', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=960&crop=smart&auto=webp&s=a370540936d82b5eaf105c12a79a90e8ab63a611', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=1080&crop=smart&auto=webp&s=58723b62d389654b8095985808adaacd4beacb29', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?auto=webp&s=9ab2642fcca96ebdd40b5775ff2ea4403da23752', 'width': 1200}, 'variants': {}}]} |
Finetuning qLoRAs for production use cases | 1 | Hello,
I've been curious how far we can take small (7B and under) models for production use cases with small amounts of training data for each task.
So far I've been able to fine-tune LoRAs for paraphrasing, changing the tone of a sentence, dialogue summarization and topic generation. The results look promising, especially the fact that all this can run on very modest hardware.
I've used an AMD Ryzen 9 3900XT + 3080 (10GB) + 32GB RAM for all the training and inference here. On my system I get 12-15 tokens/sec during inference.
All the details can be found here: [https://github.com/kuutsav/llm-toys](https://github.com/kuutsav/llm-toys).
- Data used for training

- Training params and the training/eval losses are present in the huggingface model cards

- Evaluation (wherever possible atm)
Models: [https://huggingface.co/llm-toys](https://huggingface.co/llm-toys)
Why do all this?
Mostly to answer the question: can we move away from OpenAI and other players for very particular use cases, how much data does it take, where does it break, etc. So far I've not been able to find a pre-trained model (7B or smaller) that did well on these tasks. Even larger models (around 40B) failed to give consistent results. The fine-tuned models on huggingface were also not good enough in my trials. For paraphrasing, I could not find even a single fully tuned model that was able to correct basic typos.
Do give it a shot, there is a colab notebook available as well try it directly. Will really appreciate some feedback on these model's performace. | 2023-07-17T13:06:00 | https://www.reddit.com/r/LocalLLaMA/comments/1521gni/finetuning_qloras_for_production_use_cases/ | krumb0y | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1521gni | false | null | t3_1521gni | /r/LocalLLaMA/comments/1521gni/finetuning_qloras_for_production_use_cases/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'oglLX9PkWSrDLE76yJxhrPXVEyRZETSBm_uigmhrZ1Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FyCWVuxqWixscrQTT1nUYq-3aw1chFkTp3BPWR4EfDI.jpg?width=108&crop=smart&auto=webp&s=f01028422b87716d1f4b72515829bbf580a6fdba', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/FyCWVuxqWixscrQTT1nUYq-3aw1chFkTp3BPWR4EfDI.jpg?width=216&crop=smart&auto=webp&s=3b1859a962f7848ecccf6bc7963d4f96631a24b8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/FyCWVuxqWixscrQTT1nUYq-3aw1chFkTp3BPWR4EfDI.jpg?width=320&crop=smart&auto=webp&s=0fa926712bc99036784510fa30d9ee557538d29c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/FyCWVuxqWixscrQTT1nUYq-3aw1chFkTp3BPWR4EfDI.jpg?width=640&crop=smart&auto=webp&s=783eb225d38287cc709ac5144d91e8308643a0be', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/FyCWVuxqWixscrQTT1nUYq-3aw1chFkTp3BPWR4EfDI.jpg?width=960&crop=smart&auto=webp&s=400a67c711c60160adc87bef0610a4cb63181672', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/FyCWVuxqWixscrQTT1nUYq-3aw1chFkTp3BPWR4EfDI.jpg?width=1080&crop=smart&auto=webp&s=84c68f169e6ae9374a8f126d1d99f48eeb04f016', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/FyCWVuxqWixscrQTT1nUYq-3aw1chFkTp3BPWR4EfDI.jpg?auto=webp&s=28202ee881e522e38ee3ed344b165d7ce953f600', 'width': 1200}, 'variants': {}}]} |
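For context on what "small amounts of training data for each task" can look like, here is the rough shape of one instruction-style record such a paraphrasing LoRA might be tuned on. The field names and prompt layout are illustrative, not the repo's actual schema:

```python
# One illustrative training record for a paraphrasing task; a usable dataset
# is often just a few thousand of these per task.
record = {
    "instruction": "Paraphrase the following sentence and fix any typos.",
    "input": "teh meeting was moved to thrusday because of a schedule conflict",
    "output": "The meeting was moved to Thursday because of a scheduling conflict.",
}

# Typical prompt assembly at train and inference time:
prompt = f"{record['instruction']}\n\nInput: {record['input']}\n\nResponse:"
```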
open LLm | 1 | [removed] | 2023-07-17T13:22:32 | https://www.reddit.com/r/LocalLLaMA/comments/1521uqw/open_llm/ | KKSpro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1521uqw | false | null | t3_1521uqw | /r/LocalLLaMA/comments/1521uqw/open_llm/ | false | false | self | 1 | null |
Hardware requirements to build a personalized assistant using LLaMa | 1 | My group was thinking of creating a personalized assistant using an open-source LLM model (as GPT will be expensive).
The features will be something like: QnA from local documents, interact with internet apps using zapier, set deadlines and reminders, etc.
I searched online and found out that I will be needing a capable system. My groupmates and I have fairly average laptops with integrated graphics.
So, I wanted to know the actual hardware requirements to build such an assistant, and whether there are any alternatives that will give the same results.
Any help will be appreciated. | 2023-07-17T13:31:11 | https://www.reddit.com/r/LocalLLaMA/comments/15222en/hardware_requirements_to_build_a_personalized/ | Ibrahim2714 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15222en | false | null | t3_15222en | /r/LocalLLaMA/comments/15222en/hardware_requirements_to_build_a_personalized/ | false | false | self | 1 | null |
Hardware requirements to build a personalized assistant using LLaMa | 1 | My group was thinking of creating a personalized assistant using an open-source LLM model (as GPT will be expensive).
The features will be something like: QnA from local documents, interact with internet apps using zapier, set deadlines and reminders, etc.
I searched online and found out that I will be needing a capable system. My groupmates and I have fairly average laptops with integrated graphics.
So, I wanted to know the actual hardware requirements to build such an assistant, and whether there are any alternatives that will give the same results.
Any help will be appreciated. | 2023-07-17T13:31:13 | https://www.reddit.com/r/LocalLLaMA/comments/15222fi/hardware_requirements_to_build_a_personalized/ | Ibrahim2714 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15222fi | false | null | t3_15222fi | /r/LocalLLaMA/comments/15222fi/hardware_requirements_to_build_a_personalized/ | false | false | self | 1 | null |
Orca Mini V2 vs Open Orca - Which One Is Better? | 1 | [removed] | 2023-07-17T14:24:17 | https://www.reddit.com/r/LocalLLaMA/comments/1523ew2/orca_mini_v2_vs_open_orca_which_one_is_better/ | mattybee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1523ew2 | false | null | t3_1523ew2 | /r/LocalLLaMA/comments/1523ew2/orca_mini_v2_vs_open_orca_which_one_is_better/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'vE92SzNE77FHa1b8PBv34LYve0DAE9FQEqJ42NUz61Y', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/rS8OGPCNynZNSSDDVofm17KYQk9rfwCyTlszBJdQ_x4.jpg?width=108&crop=smart&auto=webp&s=be3cc58f3bd3214cdbf0cb5739a175f00ebcdc74', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/rS8OGPCNynZNSSDDVofm17KYQk9rfwCyTlszBJdQ_x4.jpg?width=216&crop=smart&auto=webp&s=69a5f9c0d0def65754322d9f4fe6f23b911a7a6d', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/rS8OGPCNynZNSSDDVofm17KYQk9rfwCyTlszBJdQ_x4.jpg?width=320&crop=smart&auto=webp&s=dcdce292083d2ad20d417d0edb0a8631b5002e0b', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/rS8OGPCNynZNSSDDVofm17KYQk9rfwCyTlszBJdQ_x4.jpg?auto=webp&s=452753b65707a086eef0c035da21d029abe8dfd0', 'width': 480}, 'variants': {}}]} |
3060, 3090 desktops or 3080 laptop? | 1 | I can buy a 3060 desktop for $400, or a 3080 laptop for $800, or a 3090 desktop for $1200.
The issue with the 3090 option is that electricity at my home is expensive; sometimes training locally costs more than just renting 3090s/A100s online.

I might want to prototype something, then move the training to the cloud. In that case the laptop may be nice, but setting up cloud servers properly, uploading the data, etc. also takes time.
Any recommendations? | 2023-07-17T14:54:54 | https://www.reddit.com/r/LocalLLaMA/comments/15247hg/3060_3090_desktops_or_3080_laptop/ | bjiwkls23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15247hg | false | null | t3_15247hg | /r/LocalLLaMA/comments/15247hg/3060_3090_desktops_or_3080_laptop/ | false | false | self | 1 | null |
Claude downgraded ? | 1 | Last week using Claude, it read diagrams I made in a pdf file and commented on them. Corrected itself using documentation from a web URL, and referenced the URL as a data source in subsequent conversations.
Today it failed at a simple task to explain something. I asked it to use a URL like it did last week.
It is now claiming it never had the ability to access the web, and that the answers it gave me last week were false because it was not designed to get data from any source other than its official training data.

I know it read the URL last week because that product has a bit of a disconnect between how it is used in the wild and the way its rapidly updated documentation says to do things. Claude started using the conventions and structure the URL gives, and providing sources and quotes from the URL when answering.

This is what it says now when I ask it to look at something online:
>Thank you for bringing this example to my attention. To clarify:
>
>I do not actually have the capability to autonomously read or summarize content from the Microsoft Learn website or any other online resource.
>
>As an AI system created by Anthropic, I do not have technical skills to browse the web or interpret documentation pages on my own.
>
>All of my knowledge comes exclusively from the training provided to me by Anthropic engineers.
>
>I cannot independently teach installation methods or caveats by extracting information from websites I cannot actually look at or comprehend.
>
>While an AI assistant with more advanced natural language processing skills may someday be able to analyze documentation and provide customized explanations, I currently lack both the agency and technical skills to do so.
WTF ? | 2023-07-17T15:47:59 | https://www.reddit.com/r/LocalLLaMA/comments/1525ma7/claude_downgraded/ | Poise_and_Grace | self.LocalLLaMA | 2023-07-17T15:55:25 | 0 | {} | 1525ma7 | false | null | t3_1525ma7 | /r/LocalLLaMA/comments/1525ma7/claude_downgraded/ | false | false | default | 1 | null |
Best option to turn rough notes into a full report/letter in a set format? | 1 | [removed] | 2023-07-17T16:36:45 | https://www.reddit.com/r/LocalLLaMA/comments/1526x4c/best_option_to_turn_rough_notes_into_a_full/ | Solid-Ambition6018 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1526x4c | false | null | t3_1526x4c | /r/LocalLLaMA/comments/1526x4c/best_option_to_turn_rough_notes_into_a_full/ | false | false | self | 1 | null |
Personal Riddle/Reasoning Airoboros Test Results | 1 | Full credit to [u/YearZero](https://www.reddit.com/user/YearZero/)'s logic test from the [thread](https://www.reddit.com/r/LocalLLaMA/comments/13k8h0r/riddlereasoning_ggml_model_tests_update_koboldcpp/) I stumbled upon while dipping my toes into the world of Locally run LLMs. I started sometime last week after discovering that [/u/The-Bloke](https://www.reddit.com/u/The-Bloke/) was helping the community quantize an ungodly amount of models on HF, and I could run 4bit Quantized 33B models and 3bit Quentized 65B models (at an acceptable speed) on my hardware.
After which I began my search for a model that could excel in the task of a narrative writing assistant. Because, believe it or not, industry leading models, AKA GPT4, ChatGPT, Claude, Claude 2, couldn't replicate what I was looking for. Might go into what I mean another time, with some examples.
After poking around various community leaderboards, I tried out and shortlisted the best ones and more or less settled on Airoboros, which then begged the question: **Which** Airoboros? Which merge? How much perplexity loss would I notice between the different quantized models? Could I live with 0.9 tokens/s??
That's where u/YearZero's test from 2 months ago came in.
Following their method and testing procedure, using the same Kobold.cpp settings and questions, I retained any results from the other models I was personally using and decided to fill in the ones that were missing. You can find the results [here](https://docs.google.com/spreadsheets/d/1My2IJq4ucbn5fTQ4AwqRdynAT6KSHq98/edit?usp=sharing&rtpof=true&sd=true).

Note that I can barely code, so I've just assumed the answers to questions 13 & 14 about SQL databases were wrong. Let me know if they're actually correct, or if I mis-evaluated any answers, and I'll adjust my scores.
[LLM Logic Test Results](https://preview.redd.it/b5hyb1dxrjcb1.png?width=632&format=png&auto=webp&s=cfcab9b23722f3b6107c3c2ab6c00c8965857695)
After testing out the Airoboros-33B-GPT4-1.4-GPTQ model, which I knew from experience was quite good, I was perplexed that it had performed so poorly. (As in, it would ignore half a question, or provide inadequate answers. You can see for yourself in the sheet.)

At which point I recalled the [Simulator Theory](https://www.lesswrong.com/posts/D7PumeYTDPfBTp3i7/the-waluigi-effect-mega-post), AKA the idea that giving the model personas/simulacra to emulate often produces better responses. Anecdotally, I knew that using SillyTavern with the addition of Simple-Proxy did provide more coherency than what I was seeing.

Which brings me to the **Proxy** tests, where you can see that, generally speaking, scores greatly improved across the board.
[LLM Logic Test Question 37, through Proxy.](https://preview.redd.it/78lbll75yjcb1.png?width=991&format=png&auto=webp&s=632dd45b75994a106ed7f39b3b112f06581a047a)
Where possible, I kept the testing procedure settings close to the original (Temperature, Max Tokens, Top-K, etc.), but interfaced with the model through a character called LLM Assistant with the description:
*A chat between a curious user and an all knowing artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions and explains how they came to that conclusion.*
Additionally, if I'm not misunderstanding the *replyAttributes:* line in the config.mjs, Simple-Proxy also adds in the following line as a prefix:
*(2 paragraphs, engaging, natural, authentic, descriptive, creative)*
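To make that setup concrete, here is roughly how those two pieces combine into the final prompt. This is a sketch of the general pattern, not simple-proxy's exact template:

```python
PERSONA = (
    "A chat between a curious user and an all knowing artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions "
    "and explains how they came to that conclusion."
)
REPLY_ATTRIBUTES = "(2 paragraphs, engaging, natural, authentic, descriptive, creative)"


def build_proxy_prompt(question: str) -> str:
    # Persona first, then the question, then the reply-attribute prefix that
    # nudges the model toward longer, more deliberate answers.
    return f"{PERSONA}\n\nUSER: {question}\nASSISTANT: {REPLY_ATTRIBUTES}\n"
```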
Interestingly, I found that the [Simulator Theory](https://www.lesswrong.com/posts/D7PumeYTDPfBTp3i7/the-waluigi-effect-mega-post) worked well for some models and not as well for others. In particular, there didn't seem to be much change in the scores of the 65B model, but they almost doubled for Airoboros-33B-GPT4-1.4-GPTQ.

In either case, at this point I realized that this entire rabbithole I fell into over the weekend turned out to be a futile effort, because I was **supposed to be evaluating it as a narrative writing assistant, not doing logic, riddle, and reasoning tests.**
Even so, I thought to share my findings anyway, in case someone finds them useful. I may have burned a weekend, but maybe someone will get a kick from an arbitrary test by a stranger on the internet. | 2023-07-17T16:47:26 | https://www.reddit.com/r/LocalLLaMA/comments/15276yq/personal_riddlereasoning_airoboros_test_results/ | Brainfeed9000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15276yq | false | null | t3_15276yq | /r/LocalLLaMA/comments/15276yq/personal_riddlereasoning_airoboros_test_results/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'MCvJSFkyOOtSMyxl8kts4DmJcAB22F9nJHAWxMk8iYE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Fj2NBxqjBB60DuDl6KXgYY73xwF3S53YAmqSVKLONr8.jpg?width=108&crop=smart&auto=webp&s=23cf125a327f2d8bd5c239300e935deb11afa8e2', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Fj2NBxqjBB60DuDl6KXgYY73xwF3S53YAmqSVKLONr8.jpg?width=216&crop=smart&auto=webp&s=296cf9447b21c3137ec7daf37f18967ab6703eb4', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/Fj2NBxqjBB60DuDl6KXgYY73xwF3S53YAmqSVKLONr8.jpg?width=320&crop=smart&auto=webp&s=5c6a3a3c9bb8cf2db712f47293e1b09cfcd21d37', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/Fj2NBxqjBB60DuDl6KXgYY73xwF3S53YAmqSVKLONr8.jpg?width=640&crop=smart&auto=webp&s=25261a889eb63f4775c551415257df42ed28cf64', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/Fj2NBxqjBB60DuDl6KXgYY73xwF3S53YAmqSVKLONr8.jpg?width=960&crop=smart&auto=webp&s=4d4653b19ca9799b900f6322a7950fc407b0c1e4', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/Fj2NBxqjBB60DuDl6KXgYY73xwF3S53YAmqSVKLONr8.jpg?width=1080&crop=smart&auto=webp&s=2db9e75f48cb89d6372eac032f2a75606eee8414', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/Fj2NBxqjBB60DuDl6KXgYY73xwF3S53YAmqSVKLONr8.jpg?auto=webp&s=c56b1a5233dff9055b7c0b879ce18d37a0e3206a', 'width': 1200}, 'variants': {}}]} |
Fabrice Bellard's LLM Benchmark | 1 | Fabrice Bellard, whose claim to fame is being the front-runner in the long text compression leaderboard ([http://www.mattmahoney.net/dc/text.html](http://www.mattmahoney.net/dc/text.html)), wrote an inference server, and he publishes a benchmark of many LLMs as part of the website for it.
It's yet another adaptation of the EleutherAI lm-eval-harness, but it is a source of rankings and data-points I hadn't seen before, with some differences in coverage from the others I've seen.
Perhaps interesting: unlike HF's lm-eval-harness, this puts llama-65b @ q4 ahead of falcon-40b @ q8. | 2023-07-17T17:13:46 | https://www.reddit.com/r/LocalLLaMA/comments/1527vxx/fabrice_bellards_llm_benchmark/ | georgejrjrjr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1527vxx | false | null | t3_1527vxx | /r/LocalLLaMA/comments/1527vxx/fabrice_bellards_llm_benchmark/ | false | false | self | 1 | null |
Semantic Vector Search w/out Vector Database? | 1 | For my use case, I only need to process a document once, then delete it. I am currently doing this:
Document -> Open AI Embeddings -> Pinecone Upsert -> Pinecone Query -> Process Answer -> Delete pinecone data.
Since I don't actually need to save the vectors, I am wondering if there is a way to perform semantic search on them locally without needing to put them in a vectorDB (even a local vectorDB). And if there is, can I achieve similar quality? | 2023-07-17T17:30:24 | https://www.reddit.com/r/LocalLLaMA/comments/1528brv/semantic_vector_search_wout_vector_database/ | gthing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1528brv | false | null | t3_1528brv | /r/LocalLLaMA/comments/1528brv/semantic_vector_search_wout_vector_database/ | false | false | self | 1 | null |
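For a one-shot workflow like this, plain in-memory cosine similarity over the embedding matrix gives the same ranking a vector database would, since the database mainly adds persistence and approximate-search indexing on top. A minimal numpy sketch, assuming you already have the query and chunk embeddings:

```python
import numpy as np


def top_k(query_vec: np.ndarray, chunk_vecs: np.ndarray, k: int = 5):
    """Exact cosine-similarity search over an in-memory matrix.

    query_vec:  (d,)   embedding of the question
    chunk_vecs: (n, d) embeddings of the document chunks
    """
    q = query_vec / np.linalg.norm(query_vec)
    m = chunk_vecs / np.linalg.norm(chunk_vecs, axis=1, keepdims=True)
    scores = m @ q                        # cosine similarity per chunk
    best = np.argsort(scores)[::-1][:k]   # indices of the k most similar chunks
    return best, scores[best]
```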