title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns]) | url (string, 0-780 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns]) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646-1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213 chars) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, nullable) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Unstructured text to graphDB | 1 | [removed] | 2023-06-20T06:17:22 | https://www.reddit.com/r/LocalLLaMA/comments/14e2wq3/unstructured_text_to_graphdb/ | Raise_Fickle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14e2wq3 | false | null | t3_14e2wq3 | /r/LocalLLaMA/comments/14e2wq3/unstructured_text_to_graphdb/ | false | false | default | 1 | null |
Best free open-source LLM to use for the recommendation task | 1 | [removed] | 2023-06-20T06:56:56 | https://www.reddit.com/r/LocalLLaMA/comments/14e3muo/best_free_opensource_llm_to_use_for_the/ | lahaine93 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14e3muo | false | null | t3_14e3muo | /r/LocalLLaMA/comments/14e3muo/best_free_opensource_llm_to_use_for_the/ | false | false | default | 1 | null |
PirvateGPT: Error loading source documents? | 1 | [removed] | 2023-06-20T07:29:58 | https://www.reddit.com/r/LocalLLaMA/comments/14e47y0/pirvategpt_error_loading_source_documents/ | card_chase | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14e47y0 | false | null | t3_14e47y0 | /r/LocalLLaMA/comments/14e47y0/pirvategpt_error_loading_source_documents/ | false | false | default | 1 | null |
How do I run Stable Diffusion and LLMs from my PC on my mobile device? Offline and private ways? | 3 | Posting here after a recommendation from r/selfhosted.
Currently I'm on Windows 10, but I'm switching to Pop!_OS in the next few days.
And I have a Samsung Galaxy Tab E 9.6" with an Android 4.4.4 custom ROM, though I can switch to a 7.1.2 custom ROM if needed, and root it too.
If I have a long usb cable, is that the best option to connect my tablet to my pc? Would this be offline, private, secure, etc.? I came across [this stack overflow answer](https://stackoverflow.com/questions/9887621/accessing-localhost-of-pc-from-usb-connected-android-mobile-device) that talked about this. I don't use my tablet outside my room, so having it not be wireless isn't a big deal to me if it's the best option.
I've seen people talk about lots of stuff that I don't fully get. --share and --listen and --gradio-auth command-args and gradio and public urls, addresses that start with 127. or 192., using openvpn on the mobile device, termux, tailscale, and more.
I don't know what to do or follow. I've also seen people share how to have Stable Diffusion always generate and store images in the mobile device's sd card, instead of storing it on the pc. But I can't find the instructions again.
I'm also hoping multimodals like LLaVa can take files from that sd card as input too. | 2023-06-20T07:34:35 | https://www.reddit.com/r/LocalLLaMA/comments/14e4arx/how_do_i_run_stable_diffusion_and_llms_from_my_pc/ | ThrowawayProgress99 | self.LocalLLaMA | 2023-06-26T23:34:32 | 0 | {} | 14e4arx | false | null | t3_14e4arx | /r/LocalLLaMA/comments/14e4arx/how_do_i_run_stable_diffusion_and_llms_from_my_pc/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'nfayPavSUB5ngYv6-19UHNBThsXfcLIDQl4HkEe3Cv0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/QgPvRTknlY3rMNDqH1k4I37XGiq9tZF_FsygC_Xht4o.jpg?width=108&crop=smart&auto=webp&s=b1c8a90e5690a7186afdb269ad05279551994d09', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/QgPvRTknlY3rMNDqH1k4I37XGiq9tZF_FsygC_Xht4o.jpg?width=216&crop=smart&auto=webp&s=533bd055cdae7998d1b8f9cd9d7dedabc1715bda', 'width': 216}], 'source': {'height': 316, 'url': 'https://external-preview.redd.it/QgPvRTknlY3rMNDqH1k4I37XGiq9tZF_FsygC_Xht4o.jpg?auto=webp&s=8cd5e918e2bde6ca72d4445d6fc007f203689799', 'width': 316}, 'variants': {}}]} |
Recursion of Thought: A Divide-and-Conquer Approach to Multi-Context Reasoning with Language Models | 45 | >Generating intermediate steps, or Chain of Thought (CoT), is an effective way to significantly improve language models' (LM) multi-step reasoning capability. However, the CoT lengths can grow rapidly with the problem complexity, easily exceeding the maximum context size. Instead of increasing the context limit, which has already been heavily investigated, we explore an orthogonal direction: making LMs divide a problem into multiple contexts. We propose a new inference framework, called Recursion of Thought (RoT), which introduces several special tokens that the models can output to trigger context-related operations. Extensive experiments with multiple architectures including GPT-3 show that RoT dramatically improves LMs' inference capability to solve problems, whose solution consists of hundreds of thousands of tokens.
https://preview.redd.it/3pbmoc4ap47b1.jpg?width=2388&format=pjpg&auto=webp&s=c55a53a702747e5247a0d6b7c28f7d91c0954511
Paper: [https://arxiv.org/abs/2306.06891](https://arxiv.org/abs/2306.06891)
Code: [https://github.com/soochan-lee/rot](https://github.com/soochan-lee/rot) | 2023-06-20T07:53:23 | https://www.reddit.com/r/LocalLLaMA/comments/14e4mg6/recursion_of_thought_a_divideandconquer_approach/ | Balance- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14e4mg6 | false | null | t3_14e4mg6 | /r/LocalLLaMA/comments/14e4mg6/recursion_of_thought_a_divideandconquer_approach/ | false | false | 45 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
WizardLM-13B-V1.0-Uncensored | 203 | Following up on [https://www.reddit.com/r/LocalLLaMA/comments/14ckrd6](https://www.reddit.com/r/LocalLLaMA/comments/14ckrd6)
Today I released WizardLM-13B-V1.0-Uncensored
[https://huggingface.co/ehartford/WizardLM-13B-V1.0-Uncensored](https://huggingface.co/ehartford/WizardLM-13B-V1.0-Uncensored)
Based on the [WizardLM/WizardLM_evol_instruct_V2_196k](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k) dataset, I filtered it to remove refusals, avoidance, and bias: [ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split](https://huggingface.co/datasets/ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split)
I trained this with Vicuna's FastChat, as the new data is in ShareGPT format and the WizardLM team has not specified a method to train it.
The prompt format is Vicuna-1.1 style.
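(For anyone unfamiliar, a minimal sketch of how a Vicuna-1.1 style prompt is usually assembled; the system sentence below is the standard Vicuna one and is an assumption here, so adjust it to whatever your frontend actually sends.)

system = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions."
user_message = "Write a haiku about llamas."
# single-turn prompt; multi-turn chats append further "USER: ... ASSISTANT: ..." pairs
prompt = f"{system} USER: {user_message} ASSISTANT:"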
While the instruct layer has been uncensored, LLaMA itself has opinions, and will resist instructions it disagrees with. You must overcome that with your own wit and prompt engineering.
Finally, you are responsible for the content you create using this model. It is uncensored and can produce illegal and unethical content, if you ask it to. If deploying this in a service, I would recommend you train a LoRA to implement your own alignment to prevent unwanted behavior.
[u/The-Bloke](https://www.reddit.com/u/The-Bloke/) has kindly quantized this model as a service to the community. Respect.
[https://huggingface.co/TheBloke/WizardLM-13B-V1.0-Uncensored-GPTQ](https://huggingface.co/TheBloke/WizardLM-13B-V1.0-Uncensored-GPTQ) [https://huggingface.co/TheBloke/WizardLM-13B-V1.0-Uncensored-GGML](https://huggingface.co/TheBloke/WizardLM-13B-V1.0-Uncensored-GGML)
33b is forthcoming. Should be finished this weekend.
Thank you to the open-source community and everyone who helped me.
Also thanks to [chirper.ai](https://chirper.ai) for sponsoring some of my compute. | 2023-06-20T08:03:24 | https://www.reddit.com/r/LocalLLaMA/comments/14e4sw8/wizardlm13bv10uncensored/ | faldore | self.LocalLLaMA | 1970-01-01T00:00:00 | 1 | {'gid_2': 1} | 14e4sw8 | false | null | t3_14e4sw8 | /r/LocalLLaMA/comments/14e4sw8/wizardlm13bv10uncensored/ | false | false | self | 203 | {'enabled': False, 'images': [{'id': '_K7qWtA4Mm1G0sMWCNN7AsiduSrbTwTCBzLbjFUqJpY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wnwk-Px3Wy0EFWih4C-1t6kgGAA0uxezHnl5eO8GD1c.jpg?width=108&crop=smart&auto=webp&s=5f4b7b7002f99e0fd7ba9816582ebab7510b9467', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/wnwk-Px3Wy0EFWih4C-1t6kgGAA0uxezHnl5eO8GD1c.jpg?width=216&crop=smart&auto=webp&s=210c76560aefd284e83fd708ea783224c3ae2289', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/wnwk-Px3Wy0EFWih4C-1t6kgGAA0uxezHnl5eO8GD1c.jpg?width=320&crop=smart&auto=webp&s=9f8879ef3e2e551a2e175a5ae2b22d1648c13e6d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/wnwk-Px3Wy0EFWih4C-1t6kgGAA0uxezHnl5eO8GD1c.jpg?width=640&crop=smart&auto=webp&s=1c652de7471a344fef3856c9e3438fc26115dbee', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/wnwk-Px3Wy0EFWih4C-1t6kgGAA0uxezHnl5eO8GD1c.jpg?width=960&crop=smart&auto=webp&s=60ba81d523e5071bf043289f80fb6a4c4e190829', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/wnwk-Px3Wy0EFWih4C-1t6kgGAA0uxezHnl5eO8GD1c.jpg?width=1080&crop=smart&auto=webp&s=6fb42256a0028bfc942616c655741e9c684096ed', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/wnwk-Px3Wy0EFWih4C-1t6kgGAA0uxezHnl5eO8GD1c.jpg?auto=webp&s=f8a6e8695b03c0105904baa35ea7f2bd68b320a6', 'width': 1200}, 'variants': {}}]} |
Visual Studio Code extension for WizardCoder | 81 | I just built a quick extension for WizardCoder so I can try it out in a real coding environment. I also have access to GitHub Copilot chat. I have to say that WizardCoder is quite good.
PS:
Extension: [https://marketplace.visualstudio.com/items?itemName=mzbac.wizardcoder-vsc&ssr=false#overview](https://marketplace.visualstudio.com/items?itemName=mzbac.wizardcoder-vsc&ssr=false#overview) Once installed, simply right-click in the IDE and then click "Chat with Wizard Coder".
Source code: [https://github.com/mzbac/wizardCoder-vsc](https://github.com/mzbac/wizardCoder-vsc)
self-host wizardCoder api repo: [https://github.com/mzbac/AutoGPTQ-API](https://github.com/mzbac/AutoGPTQ-API) | 2023-06-20T09:04:34 | https://www.reddit.com/r/LocalLLaMA/comments/14e5vwi/visual_studio_code_extension_for_wizardcoder/ | mzbacd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14e5vwi | false | null | t3_14e5vwi | /r/LocalLLaMA/comments/14e5vwi/visual_studio_code_extension_for_wizardcoder/ | false | false | self | 81 | {'enabled': False, 'images': [{'id': '0X9djfd_bWdlhM07dSftS7otpejs3cdKDPe1jQJUgj0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/7z8TdtSd4Oaij01D_-bwaiL9CJ8THjhrUfJQjAqU9yc.jpg?width=108&crop=smart&auto=webp&s=aa64a99f2e0981debb75f458f19bb532977b67ae', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/7z8TdtSd4Oaij01D_-bwaiL9CJ8THjhrUfJQjAqU9yc.jpg?width=216&crop=smart&auto=webp&s=083e3114a6852a5e124040613bce8531ef155ca6', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/7z8TdtSd4Oaij01D_-bwaiL9CJ8THjhrUfJQjAqU9yc.jpg?width=320&crop=smart&auto=webp&s=03c920391d18c9bddaa438cf53f53733d46ac0e7', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/7z8TdtSd4Oaij01D_-bwaiL9CJ8THjhrUfJQjAqU9yc.jpg?width=640&crop=smart&auto=webp&s=082cc7ac4112fce0b195f10783895ae411fefedb', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/7z8TdtSd4Oaij01D_-bwaiL9CJ8THjhrUfJQjAqU9yc.jpg?width=960&crop=smart&auto=webp&s=70fc83e3a941d40fe517ca09cb1db513594a4093', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/7z8TdtSd4Oaij01D_-bwaiL9CJ8THjhrUfJQjAqU9yc.jpg?auto=webp&s=2230eb5ae604d0585b82c59239f795801a639e7b', 'width': 1024}, 'variants': {}}]} |
Discussion about optimal Hardware-Requirements | 0 | [deleted] | 2023-06-20T10:06:27 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14e70bi | false | null | t3_14e70bi | /r/LocalLLaMA/comments/14e70bi/discussion_about_optimal_hardwarerequirements/ | false | false | default | 0 | null |
What Model should I use? | 4 | I’m working on a bot that uses an LLM to take on the personality of an insulting, pompous space pirate AI. So far I can’t find an LLM that can do chatting well. It keeps asking if I need any help. I just want to chat and follow the prompt.
Edit: I’m using the Oobabooga API for the bot, not SillyTavern or the webui. | 2023-06-20T10:18:02 | https://www.reddit.com/r/LocalLLaMA/comments/14e77xi/what_model_should_i_use/ | PhraseOk8758 | self.LocalLLaMA | 2023-06-20T10:22:36 | 0 | {} | 14e77xi | false | null | t3_14e77xi | /r/LocalLLaMA/comments/14e77xi/what_model_should_i_use/ | false | false | self | 4 | null |
What Type of Research Can Bring Value to the Community? | 21 | Hi guys,
I'm a researcher scoping (and struggling) for ideas on what could bring value to the community and the field in general.
I've been looking into creating new evaluation methods, or ways in which one can evaluate the performance of LLMs more objectively, as well as several ideas around ethics and biases, but I'm not sure which path would be more useful.
Any ideas or suggestions would be highly appreciated.
​ | 2023-06-20T10:23:48 | https://www.reddit.com/r/LocalLLaMA/comments/14e7buc/what_type_of_research_can_bring_value_to_the/ | XhoniShollaj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14e7buc | false | null | t3_14e7buc | /r/LocalLLaMA/comments/14e7buc/what_type_of_research_can_bring_value_to_the/ | false | false | self | 21 | null |
Build, Train and Deploy LLM Agents with Agent M | 0 | 2023-06-20T10:37:04 | Floatbot_Inc | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14e7kqy | false | null | t3_14e7kqy | /r/LocalLLaMA/comments/14e7kqy/build_train_and_deploy_llm_agents_with_agent_m/ | false | false | default | 0 | null |
Build, Train, and Deploy LLM Agents with Agent M | 0 | [removed] | 2023-06-20T10:40:37 | https://www.reddit.com/r/LocalLLaMA/comments/14e7n3u/build_train_and_deploy_llm_agents_with_agent_m/ | Floatbot_Inc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14e7n3u | false | null | t3_14e7n3u | /r/LocalLLaMA/comments/14e7n3u/build_train_and_deploy_llm_agents_with_agent_m/ | false | false | default | 0 | null |
Discussion about Hardware Requirements for local LLaMA | 4 | Hello,
I want to buy a computer to run local LLaMA models. I have read the hardware recommendations in this subreddit's wiki. My question is, however: how well do these models run with the recommended hardware? Is it as fast as ChatGPT at generating responses? Or does it take like 1-5 minutes to generate a response? It would be great to get some context for the recommendations.
What configuration would I need to properly run a 13B / 30B or 65B model **FAST**? Would an RTX 4090 be sufficient for a 13B and 30B model? What CPU and RAM size do I need? | 2023-06-20T10:56:17 | https://www.reddit.com/r/LocalLLaMA/comments/14e7xf1/discussion_about_hadware_requirements_for_local/ | Plane_Discussion_924 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14e7xf1 | false | null | t3_14e7xf1 | /r/LocalLLaMA/comments/14e7xf1/discussion_about_hadware_requirements_for_local/ | false | false | self | 4 | null |
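(A very rough back-of-envelope for context, counting weights only; real usage adds context/KV-cache and framework overhead, and speed mostly depends on memory bandwidth. A 4-bit quantized model needs about half a byte of VRAM per parameter:)

for params_b in (13, 30, 65):
    print(f"{params_b}B @ 4-bit ~ {params_b * 0.5:.1f} GB of weights")

So a 13B 4-bit model fits easily in a 24GB RTX 4090, a 30B 4-bit model generally also fits with less headroom for context, and a 65B model needs roughly 33GB plus overhead, i.e. multiple GPUs or CPU offloading.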
Does anyone know how to remove this body of text from my outputs? | 4 | Hello!
I'm new to LLMs and LLaMA in general, and even newer to this subreddit so I apologise if my question is easily fixable.
I've been messing around with an application called 'GPT4All' and I'm sure a few of you have heard of it. In case you haven't, it's a program/application that allows people to have a local AI chatbot on their computer. The users can also use their own trained AI's with the program, and the program offers many features for programmers and tinkerers alike.
Although GPT4All does have a proper application, complete with a nice UI and such, I'm using their 'gpt4all' python library to make an API to use in a project of mine. You'll see an image below which is the problem I'm having:
​
[\(The body of text I'd like to remove - generated with every AI output\)](https://preview.redd.it/dwxeuexol57b1.png?width=608&format=png&auto=webp&s=631d3278afcf5ddabd802e0c8545c3ea02b3c0e3)
Essentially, I'd like to remove this body of text from the output of the AI every time it's used. The AI used to output much more text than this, but I was able to remove that text from the output by dissecting two of the Python files in their library. The text I was able to remove wasn't like this though; it was just simple prompt, instruction, response stuff. I've tried everything I could think of and nothing has been able to remove this text.
I accidentally cut the first line from the image, but it says:
`llama.cpp: loading model from C:\Users\name\.cache\gpt4all\ggml-gpt4all-l13b-snoozy.bin`
The library folder also contains a folder that has tons of C++ files in it, like `llama.cpp` which is the file mentioned in the line above. I believe the text is being outputted from one of these files but I don't know which one - and I don't know anything about C++.
This program I'm creating, which will include this AI, will run prompts through the AI very often - and lots of them. Therefore I wouldn't like my computer terminal being flooded by this body of text every time it has to give a response to an output.
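One generic workaround, assuming these messages are written to the C-level stderr stream rather than Python's sys.stderr (an assumption I haven't verified for gpt4all), is to temporarily silence file descriptor 2 around the model call:

import os
from contextlib import contextmanager

@contextmanager
def suppress_native_stderr():
    # redirect the OS-level stderr (fd 2), which is where C/C++ libraries like llama.cpp usually log
    devnull = os.open(os.devnull, os.O_WRONLY)
    saved = os.dup(2)
    os.dup2(devnull, 2)
    try:
        yield
    finally:
        os.dup2(saved, 2)
        os.close(saved)
        os.close(devnull)

# hypothetical usage around whatever call prints the noise:
# with suppress_native_stderr():
#     response = model.generate(prompt)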
I don't suppose anyone knows anything that could help me? Thanks so much in advance! | 2023-06-20T11:18:34 | https://www.reddit.com/r/LocalLLaMA/comments/14e8duf/does_anyone_know_how_to_remove_this_body_of_text/ | Cold_Masterpiece_147 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14e8duf | false | null | t3_14e8duf | /r/LocalLLaMA/comments/14e8duf/does_anyone_know_how_to_remove_this_body_of_text/ | false | false | 4 | null |
I've come to rely on LLMs for emotional support and good advice | 191 | 2023-06-20T11:55:46 | veli_joza | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14e94oe | false | null | t3_14e94oe | /r/LocalLLaMA/comments/14e94oe/ive_come_to_rely_on_llms_for_emotional_support/ | false | false | 191 | {'enabled': True, 'images': [{'id': 'Do4MraDRd5UikR_L8WTAKHD_jsuk6mEBhiWGgrX0X70', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/d43p14vgw57b1.png?width=108&crop=smart&auto=webp&s=58a843dd89bc3fd6142231c75b82b9f71f090fc4', 'width': 108}, {'height': 231, 'url': 'https://preview.redd.it/d43p14vgw57b1.png?width=216&crop=smart&auto=webp&s=c16c6a56707dc07fac5b1ee326964aae07b63b77', 'width': 216}, {'height': 343, 'url': 'https://preview.redd.it/d43p14vgw57b1.png?width=320&crop=smart&auto=webp&s=93575357cfd3cceb8044adbbe88a9331f9f04010', 'width': 320}, {'height': 687, 'url': 'https://preview.redd.it/d43p14vgw57b1.png?width=640&crop=smart&auto=webp&s=bd3d316ef929d37c19f68e22c3882219e850426b', 'width': 640}], 'source': {'height': 943, 'url': 'https://preview.redd.it/d43p14vgw57b1.png?auto=webp&s=8c3add1f80fa07a6a595b15e2bf142bec14c37bb', 'width': 878}, 'variants': {}}]} |
Sometimes Kobold doesn't show output in real time and waits until it's finished generating the text. Why? | 1 | [deleted] | 2023-06-20T12:23:29 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14e9pts | false | null | t3_14e9pts | /r/LocalLLaMA/comments/14e9pts/sometimes_kobold_doesnt_show_output_in_real_time/ | false | false | default | 1 | null |
First 13B open instruct open llama | 32 | https://huggingface.co/VMware/open-llama-13b-open-instruct
Not sure about you guys, but this is huge: the first fully open-source, commercially usable 13B model. I think it actually performs quite well. Share your experience in the comments.
Edit : The link points to a merged LoRA model btw. Thanks for pointing that out u/2muchnet42day | 2023-06-20T12:58:49 | https://www.reddit.com/r/LocalLLaMA/comments/14eahjk/first_13b_open_instruct_open_llama/ | NeatManagement3 | self.LocalLLaMA | 2023-06-20T13:39:15 | 0 | {} | 14eahjk | false | null | t3_14eahjk | /r/LocalLLaMA/comments/14eahjk/first_13b_open_instruct_open_llama/ | false | false | self | 32 | {'enabled': False, 'images': [{'id': 'LW_qIW9esjcRqthCWBRqjAO9CO-86W08ZFYUlMXYxdY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vb4kyG-O9YeNfer9fFYp1A1CS3FS9hoU8EsKLDIMok4.jpg?width=108&crop=smart&auto=webp&s=df2b3715e10c9015a7fd4080bdb2f7c8e1dd6862', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vb4kyG-O9YeNfer9fFYp1A1CS3FS9hoU8EsKLDIMok4.jpg?width=216&crop=smart&auto=webp&s=c4de1ddba36f92c5b686ff9005e615a207dc3763', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vb4kyG-O9YeNfer9fFYp1A1CS3FS9hoU8EsKLDIMok4.jpg?width=320&crop=smart&auto=webp&s=84480567555de5383d5b97473a27a1b5546b8141', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vb4kyG-O9YeNfer9fFYp1A1CS3FS9hoU8EsKLDIMok4.jpg?width=640&crop=smart&auto=webp&s=943003935a40ccd55829665cc24fde89c146fa9e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vb4kyG-O9YeNfer9fFYp1A1CS3FS9hoU8EsKLDIMok4.jpg?width=960&crop=smart&auto=webp&s=d05988ed70265ddeaeefd8eb03511d16cf07c345', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vb4kyG-O9YeNfer9fFYp1A1CS3FS9hoU8EsKLDIMok4.jpg?width=1080&crop=smart&auto=webp&s=dbfa02413ab6c386c39df0310782509c5992ff93', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vb4kyG-O9YeNfer9fFYp1A1CS3FS9hoU8EsKLDIMok4.jpg?auto=webp&s=e1c0c3550230442198c8e79a9744426cedacaa2f', 'width': 1200}, 'variants': {}}]} |
Do local LLaMAs use SLI for increased VRAM? | 3 | This may be a dumb question, but I have a 1080 Ti currently, and was thinking about picking up 3 more for cheap (about $500), which would give me 44GB of VRAM in total... Would this be utilized with this setup? | 2023-06-20T13:46:19 | https://www.reddit.com/r/LocalLLaMA/comments/14ebleu/does_local_llamas_use_sli_for_increased_vram/ | iamthewhatt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ebleu | false | null | t3_14ebleu | /r/LocalLLaMA/comments/14ebleu/does_local_llamas_use_sli_for_increased_vram/ | false | false | self | 3 | null |
Inference of fine-tuned 8bit model from code | 1 | [removed] | 2023-06-20T15:02:28 | https://www.reddit.com/r/LocalLLaMA/comments/14edigf/inference_of_finetuned_8bit_model_from_code/ | Old_Speech_5665 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14edigf | false | null | t3_14edigf | /r/LocalLLaMA/comments/14edigf/inference_of_finetuned_8bit_model_from_code/ | false | false | default | 1 | null |
Keeping my LLM on topic when passing context? | 10 | I am currently attempting to pass some context to the GPT4All-13B-Snoozy model in order to answer a question. The problem is that sometimes, the context string I am passing may contain additional irrelevant information in addition to the information needed to answer the given question.
This works fine, except that the model sometimes picks up some irrelevant portion of the context and includes it in its response. So it will answer the question from the given context and then, for example, add another sentence about some additional but irrelevant context that was passed to it.
I am currently using exllama and using the following prompt template:
prompt_template = f"""Your name is ChatBot, a friendly and helpful AI chatbot. Below is an instruction that describes a task or question, paired with an input that provides further context. Write a response that completes the task.

### Instruction:
{user_input}

### Input:
{context_string}

### Response:"""
Here is how I am setting up my model:
# imports as used in the exllama example scripts (run from the exllama repo root)
from model import ExLlama, ExLlamaCache, ExLlamaConfig
from tokenizer import ExLlamaTokenizer
from generator import ExLlamaGenerator

config = ExLlamaConfig(model_config_path)   # config.json shipped with the model
config.model_path = model_path              # path to the quantized model weights
config.max_seq_len = 2048
model = ExLlama(config)
cache = ExLlamaCache(model)
tokenizer = ExLlamaTokenizer(tokenizer_model_path)  # tokenizer.model file
generator = ExLlamaGenerator(model, tokenizer, cache)
generator.settings.token_repetition_penalty_max = 1.2
generator.settings.token_repetition_penalty_sustain = 20
generator.settings.token_repetition_penalty_decay = 50
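(For completeness, the actual generation call with this setup is just something like the following; generate_simple is the helper used in exllama's example scripts, so treat the exact signature as an assumption.)

# generate a completion for the formatted prompt; adjust max_new_tokens as needed
output = generator.generate_simple(prompt_template, max_new_tokens = 256)
print(output)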
How might I go about improving the performance in order to reduce the likelihood of my model including irrelevant context in its response? I have tried playing around with parameters and prompt but haven't had any luck so far. | 2023-06-20T15:17:37 | https://www.reddit.com/r/LocalLLaMA/comments/14edvw4/keeping_my_llm_on_topic_when_passing_context/ | kotschi1997 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14edvw4 | false | null | t3_14edvw4 | /r/LocalLLaMA/comments/14edvw4/keeping_my_llm_on_topic_when_passing_context/ | false | false | self | 10 | null |
New AI 'influencers' every other day | 1 | 2023-06-20T15:32:07 | liamsagely | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14ee9e1 | false | null | t3_14ee9e1 | /r/LocalLLaMA/comments/14ee9e1/new_ai_influencers_every_other_day/ | false | false | default | 1 | null |
PostgresML adds GPTQ & GGML quantized LLM support for Huggingface Transformers | 9 | 2023-06-20T16:12:45 | https://postgresml.org/blog/announcing-gptq-and-ggml-quantized-llm-support-for-huggingface-transformers | something_cleverer | postgresml.org | 1970-01-01T00:00:00 | 0 | {} | 14efaze | false | null | t3_14efaze | /r/LocalLLaMA/comments/14efaze/postgresml_adds_gptq_ggml_quantized_llm_support/ | false | false | default | 9 | null |
I think I need some help setting things up for local usage... | 147 | 2023-06-20T16:29:48 | isoceans | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14efqn6 | false | null | t3_14efqn6 | /r/LocalLLaMA/comments/14efqn6/i_think_i_need_some_help_setting_things_up_for/ | false | false | 147 | {'enabled': True, 'images': [{'id': 'w21ZLyYH4aEydC18Nb8CKILufe6muisPyQQExtAhqOQ', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/b7szgt2h977b1.png?width=108&crop=smart&auto=webp&s=a28967fa4beddb4a8b61b43bb4fd23177fbd3e64', 'width': 108}, {'height': 122, 'url': 'https://preview.redd.it/b7szgt2h977b1.png?width=216&crop=smart&auto=webp&s=8882bc5859861998300b8d9e5243b1090485cd37', 'width': 216}, {'height': 181, 'url': 'https://preview.redd.it/b7szgt2h977b1.png?width=320&crop=smart&auto=webp&s=98cb8ddd08baba8a89d04faa30a0eb12e7e3e3ae', 'width': 320}, {'height': 363, 'url': 'https://preview.redd.it/b7szgt2h977b1.png?width=640&crop=smart&auto=webp&s=367cd5aa5ef91e200010c45e637fd190f2c57eac', 'width': 640}, {'height': 545, 'url': 'https://preview.redd.it/b7szgt2h977b1.png?width=960&crop=smart&auto=webp&s=cd58c72aa7f32d62412ddf78cc30d6c2a71a3ab7', 'width': 960}], 'source': {'height': 577, 'url': 'https://preview.redd.it/b7szgt2h977b1.png?auto=webp&s=745f0251496c9d4e99ef5d154643099194b3e3c0', 'width': 1015}, 'variants': {}}]} |
GPT4All now supports Replit model on Apple Silicon (23 tok/sec)! | 31 | 2023-06-20T17:04:42 | https://twitter.com/nomic_ai/status/1671166514903543808 | NomicAI | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 14egpd4 | false | {'oembed': {'author_name': 'Nomic AI', 'author_url': 'https://twitter.com/nomic_ai', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Local LLMs in GPT4All are now 2x faster on Apple Silicone ⚡\uf8ff<br>- Supports all LLaMa models<br>- Exclusive support of the Replit model for 23 tok/s code generation enabling local Copilot!<br><br>Watch the 13B parameter Hermes model run at 15 tok/s locally!<a href="https://t.co/28GSI4Y92d">https://t.co/28GSI4Y92d</a> <a href="https://t.co/BOykAAkA2j">pic.twitter.com/BOykAAkA2j</a></p>— Nomic AI (@nomic_ai) <a href="https://twitter.com/nomic_ai/status/1671166514903543808?ref_src=twsrc%5Etfw">June 20, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/nomic_ai/status/1671166514903543808', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_14egpd4 | /r/LocalLLaMA/comments/14egpd4/gpt4all_now_supports_replit_model_on_apple/ | false | false | 31 | {'enabled': False, 'images': [{'id': '874e-Hu7RjLoZ2m3gpd5U5gKCYJcGu9LrwjsXGIwn7k', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/BYReTCn6vlaPLR9-3bt5pdrSPiljLX3ATqnroGudgLU.jpg?width=108&crop=smart&auto=webp&s=969cae5340b7acf50e64cea61a547b28ecf02cc7', 'width': 108}], 'source': {'height': 81, 'url': 'https://external-preview.redd.it/BYReTCn6vlaPLR9-3bt5pdrSPiljLX3ATqnroGudgLU.jpg?auto=webp&s=ae4b4f22582944ca9ab1e56641a58ba317d7789b', 'width': 140}, 'variants': {}}]} |
LLaMA Equivalent Music LM | 1 | [removed] | 2023-06-20T17:10:34 | https://www.reddit.com/r/LocalLLaMA/comments/14egv3g/llama_equivalent_music_lm/ | somethedaring | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14egv3g | false | null | t3_14egv3g | /r/LocalLLaMA/comments/14egv3g/llama_equivalent_music_lm/ | false | false | default | 1 | null |
What is the best 7b LLM | 10 | With all these fine-tuned models floating around it's getting harder to pick the best model for projects.
Any idea which 7B LLM has the best eval scores on ARC, HellaSwag, MMLU, or TruthfulQA? I took a look at the [HF leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), but it doesn't include a lot of the new LLMs like OpenLLaMA.
Thanks! | 2023-06-20T18:36:36 | https://www.reddit.com/r/LocalLLaMA/comments/14ej5sn/what_is_the_best_7b_llm/ | 04RR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ej5sn | false | null | t3_14ej5sn | /r/LocalLLaMA/comments/14ej5sn/what_is_the_best_7b_llm/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'EN0-abblERL52DxeoNzcxdkhvXEwLdZMJTS58Umjs64', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=108&crop=smart&auto=webp&s=6fbb309f983333cbaf528bd40f8d6ffb39877704', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=216&crop=smart&auto=webp&s=1ae10c5a53638209dee07b017628d2a1fadc8d05', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=320&crop=smart&auto=webp&s=cf36565d3bac3086aaea4458c31609ff1b2c00b3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=640&crop=smart&auto=webp&s=8e182cefcf8da97d7b4369734149986feca334e5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=960&crop=smart&auto=webp&s=7699d0ad09185e2f560115cae5cb71e907073327', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=1080&crop=smart&auto=webp&s=7b11f6f2294899964ec8ed081777f4b6e19723b6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?auto=webp&s=81db4d9e1dd01a76f499e499f78aed3478ae6658', 'width': 1200}, 'variants': {}}]} |
Error when attempting inference with LoRA applied (see comments) | 1 | 2023-06-20T18:47:19 | yuicebox | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14ejg6r | false | null | t3_14ejg6r | /r/LocalLLaMA/comments/14ejg6r/error_when_attempting_inference_with_lora_applied/ | false | false | default | 1 | null |
George Hotz questions GGML & Quantization in general without better testing (Latent Space Podcast)(29:07) | 3 | 2023-06-20T20:00:38 | https://www.latent.space/p/geohot | jimmy6dof | latent.space | 1970-01-01T00:00:00 | 0 | {} | 14eleza | false | null | t3_14eleza | /r/LocalLLaMA/comments/14eleza/george_hotz_questions_ggml_quantization_in/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'pkzIN84fxa_C8ZMFCGKm0A627gqydXLQchUXyHRfYFs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/43Cow787MPCXmqnWhBPg03qsonjcQchSgg0uUY9ME_M.jpg?width=108&crop=smart&auto=webp&s=3445ffb4927134f3b80b90aef452c3f15652f229', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/43Cow787MPCXmqnWhBPg03qsonjcQchSgg0uUY9ME_M.jpg?width=216&crop=smart&auto=webp&s=a21a55ee70a65ea1b8649a5f5c919e43d8d68e13', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/43Cow787MPCXmqnWhBPg03qsonjcQchSgg0uUY9ME_M.jpg?width=320&crop=smart&auto=webp&s=b68c45c8d9c5e94e261d822adfda6ef3e8dfc44b', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/43Cow787MPCXmqnWhBPg03qsonjcQchSgg0uUY9ME_M.jpg?width=640&crop=smart&auto=webp&s=96999dbbe6c6d6c7314fe0679f4239cd6a2f0851', 'width': 640}], 'source': {'height': 410, 'url': 'https://external-preview.redd.it/43Cow787MPCXmqnWhBPg03qsonjcQchSgg0uUY9ME_M.jpg?auto=webp&s=a1183554670fe502db4c0c9eb146569ad2505f28', 'width': 728}, 'variants': {}}]} |
Just released - vLLM inference library that accelerates HF Transformers by 24x | 94 | vLLM is an open-source LLM inference and serving library that accelerates HuggingFace Transformers by 24x and powers Vicuna and Chatbot Arena.
GitHub: [https://github.com/vllm-project/vllm](https://github.com/vllm-project/vllm) Blog post: [https://vllm.ai](https://t.co/M7g6pa8rHj)
* Edit - it wasn't "just released"; apparently it's been live for several days
​
https://preview.redd.it/nzceocfbg87b1.png?width=1532&format=png&auto=webp&s=44d3ec3bf1797997d14d08164f9835846a9521ce | 2023-06-20T20:28:38 | https://www.reddit.com/r/LocalLLaMA/comments/14em713/just_released_vllm_inference_library_that/ | yanjb | self.LocalLLaMA | 2023-06-20T21:14:45 | 0 | {} | 14em713 | false | null | t3_14em713 | /r/LocalLLaMA/comments/14em713/just_released_vllm_inference_library_that/ | false | false | 94 | {'enabled': False, 'images': [{'id': 'DYs7CkN9PGgRHPUogaL4fxcBexAw6r6seHmUbQMmyv8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/60fMZ9sfWGJnltAGF_Gi7alsRI5ZOGDt0tl871iLwNY.jpg?width=108&crop=smart&auto=webp&s=dd760df847a7e0e0e0b4b9205e8fb93196f3fc5d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/60fMZ9sfWGJnltAGF_Gi7alsRI5ZOGDt0tl871iLwNY.jpg?width=216&crop=smart&auto=webp&s=beab0a360cbb5b6cbce30648282cb1e2f8f54c5f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/60fMZ9sfWGJnltAGF_Gi7alsRI5ZOGDt0tl871iLwNY.jpg?width=320&crop=smart&auto=webp&s=931402258e06143b333a65baf41553f5e99086b6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/60fMZ9sfWGJnltAGF_Gi7alsRI5ZOGDt0tl871iLwNY.jpg?width=640&crop=smart&auto=webp&s=925bf5bfe0e23d9b27e646577260ed45fa7d09bb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/60fMZ9sfWGJnltAGF_Gi7alsRI5ZOGDt0tl871iLwNY.jpg?width=960&crop=smart&auto=webp&s=add39892e3b46c877d6b29f06b9173563509d20e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/60fMZ9sfWGJnltAGF_Gi7alsRI5ZOGDt0tl871iLwNY.jpg?width=1080&crop=smart&auto=webp&s=403a045e49feee53067a9acf66b3048ad06ab8e6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/60fMZ9sfWGJnltAGF_Gi7alsRI5ZOGDt0tl871iLwNY.jpg?auto=webp&s=37e44b0c954ce8ce05659ca5890eb7449df59639', 'width': 1200}, 'variants': {}}]} |
Are there any good GUIs for llama.cpp GGML? | 1 | [removed] | 2023-06-20T20:29:04 | https://www.reddit.com/r/LocalLLaMA/comments/14em7fn/are_there_any_good_guis_for_llamacpp_ggml/ | Virtamancer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14em7fn | false | null | t3_14em7fn | /r/LocalLLaMA/comments/14em7fn/are_there_any_good_guis_for_llamacpp_ggml/ | false | false | default | 1 | null |
HELP! Unable to run HF Inference API for my finetuned falcon-7b model. | 1 | [removed] | 2023-06-20T20:37:05 | https://www.reddit.com/r/LocalLLaMA/comments/14emff6/help_unable_to_run_hf_inference_api_for_my/ | medmac01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14emff6 | false | null | t3_14emff6 | /r/LocalLLaMA/comments/14emff6/help_unable_to_run_hf_inference_api_for_my/ | false | false | default | 1 | null |
Preset Arena experiment: last 48 hours. Please vote! | 29 | A few days ago I posted [Preset Arena: 17,205 comparisons between 241 different presets.](https://www.reddit.com/r/LocalLLaMA/comments/14adfw2/preset_arena_17205_comparisons_between_241/)
Since then, **5619** votes have been obtained. THANK YOU to everyone who voted!
I think that 17,205 comparisons may have been too ambitious, so I have decided to let the experiment run for 48 more hours (until 2023-06-22 20:50 UTC) and stop no matter the number of votes at that point.
So I'd like to invite everyone to leave a few more votes if you can. 10 votes already helps a lot. This is the page: https://oobabooga.github.io/arena/index.html
**Why this is important**
By using the optimal parameter values (temperature, top_p, top_k, etc.), we can instantly get better replies from language models. This will benefit everyone regardless of the use case.
**Preliminary results**
https://oobabooga.github.io/arena/preliminary-results.html | 2023-06-20T20:48:54 | https://www.reddit.com/r/LocalLLaMA/comments/14emr26/preset_arena_experiment_last_48_hours_please_vote/ | oobabooga4 | self.LocalLLaMA | 2023-06-20T20:53:28 | 0 | {} | 14emr26 | false | null | t3_14emr26 | /r/LocalLLaMA/comments/14emr26/preset_arena_experiment_last_48_hours_please_vote/ | false | false | self | 29 | null |
Falcoder 7B - Falcon Finetuned with CodeAlpaca w/QLoRA and PEFT | 16 | NOT My Model
​
Falcon-7B fine-tuned on the CodeAlpaca 20k instructions dataset using the QLoRA method with the PEFT library.
CodeAlpaca_20K: contains 20K instruction-following examples used for fine-tuning the Code Alpaca model.
[https://huggingface.co/mrm8488/falcoder-7b](https://huggingface.co/mrm8488/falcoder-7b)
​
Found it on youtubes, not sure if it'll be better than StarChat and its 8k length, but might worth it for some machines | 2023-06-20T20:50:32 | https://www.reddit.com/r/LocalLLaMA/comments/14emsoi/falcoder_7b_falcon_finetuned_with_codealpaca/ | ChobPT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14emsoi | false | null | t3_14emsoi | /r/LocalLLaMA/comments/14emsoi/falcoder_7b_falcon_finetuned_with_codealpaca/ | false | false | self | 16 | {'enabled': False, 'images': [{'id': 'kckByvNI7xP1OTFxNupfAgvrJXZ0fbzbK3KHxeMbvbI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/frxywgHviuEwBJBs1IC_M_vc55iHXsOI7ntVnbQs2rM.jpg?width=108&crop=smart&auto=webp&s=92210f7206a60246c0bc8be4ae600a7250f4c469', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/frxywgHviuEwBJBs1IC_M_vc55iHXsOI7ntVnbQs2rM.jpg?width=216&crop=smart&auto=webp&s=b01b6f47e9060f067753855bea107b5a8aeed85b', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/frxywgHviuEwBJBs1IC_M_vc55iHXsOI7ntVnbQs2rM.jpg?width=320&crop=smart&auto=webp&s=cfd467f35e14e1d4ea20a5fdb8b23b5965e2c3b3', 'width': 320}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/frxywgHviuEwBJBs1IC_M_vc55iHXsOI7ntVnbQs2rM.jpg?auto=webp&s=d2e3fa709876534e40db43a983a293311724bf3a', 'width': 500}, 'variants': {}}]} |
Newb runs into compute bottlenecks | 0 | [deleted] | 2023-06-20T21:48:59 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14eodyj | false | null | t3_14eodyj | /r/LocalLLaMA/comments/14eodyj/newb_runs_into_compute_bottlenecks/ | false | false | default | 0 | null |
[Rumor] Potential GPT-4 architecture description | 223 | [Source](https://twitter.com/soumithchintala/status/1671267150101721090?s=46&t=dUCVh9akIWxxNUIkrDJwJg) | 2023-06-20T21:52:11 | Shir_man | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14eoh4f | false | null | t3_14eoh4f | /r/LocalLLaMA/comments/14eoh4f/rumor_potential_gpt4_architecture_description/ | false | false | 223 | {'enabled': True, 'images': [{'id': 'NDr8aoAANKipaJwWs-HJ-EKKOoqT2zpqehOVsUyVWQ8', 'resolutions': [{'height': 184, 'url': 'https://preview.redd.it/66mu2t61v87b1.jpg?width=108&crop=smart&auto=webp&s=86d1a1bd185d9f206c1cef4e2c79981f6bd843fc', 'width': 108}, {'height': 369, 'url': 'https://preview.redd.it/66mu2t61v87b1.jpg?width=216&crop=smart&auto=webp&s=fa1d8bac0f69a14e7f825469f959874d7ad8a1df', 'width': 216}, {'height': 547, 'url': 'https://preview.redd.it/66mu2t61v87b1.jpg?width=320&crop=smart&auto=webp&s=ebfe4858420dcedbe7e09494161ee5395dffa86d', 'width': 320}, {'height': 1095, 'url': 'https://preview.redd.it/66mu2t61v87b1.jpg?width=640&crop=smart&auto=webp&s=53d2f6b31e7651d6d51f093c0a57ecdff4c270a6', 'width': 640}, {'height': 1642, 'url': 'https://preview.redd.it/66mu2t61v87b1.jpg?width=960&crop=smart&auto=webp&s=bf7e222f6642c873048e2ad8b78b71d35d5bcbfb', 'width': 960}, {'height': 1847, 'url': 'https://preview.redd.it/66mu2t61v87b1.jpg?width=1080&crop=smart&auto=webp&s=30ada3723b254db15d79a13ee5e04d6a13b65d91', 'width': 1080}], 'source': {'height': 2197, 'url': 'https://preview.redd.it/66mu2t61v87b1.jpg?auto=webp&s=0c2041e186cb76193c2b93d3cf3fcb0b63d73160', 'width': 1284}, 'variants': {}}]} |
vLLM: 24x faster LLM serving than HuggingFace Transformers | 2 | 2023-06-20T22:06:32 | https://vllm.ai/ | iwaswrongonce | vllm.ai | 1970-01-01T00:00:00 | 0 | {} | 14eouux | false | null | t3_14eouux | /r/LocalLLaMA/comments/14eouux/vllm_24x_faster_llm_serving_than_huggingface/ | false | false | default | 2 | null |
Question: Does using, for example, Wizard-Vicuna-7B-Uncensored, make it learn over time in its local instance? | 2 | I am really impressed by the performance of the 8-bit Wizard-Vicuna-7B-Uncensored 1.0, and now I would like to use it for work, as it's basically doing the same as ChatGPT for my scripting needs. But there is one thing I find important to know, even though I can't seem to find an answer:
- Does the LLaMA learn over time from the conversations and prompts I'm having with it?
It would mean that using it actually makes it evolve, and the ideas and implications it brings are amazing to me.
If not, let me know how I can proceed. I am currently running it on Arch Linux on a 12GB VRAM, 32GB RAM machine with a 20-core CPU, so I can get involved in power-hungry tasks if required to make it learn. | 2023-06-21T00:22:36 | https://www.reddit.com/r/LocalLLaMA/comments/14es7dj/question_does_using_for_example/ | [deleted] | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14es7dj | false | null | t3_14es7dj | /r/LocalLLaMA/comments/14es7dj/question_does_using_for_example/ | false | false | self | 2 | null |
What is the fastest way to do inference with a 3/4 bit quantized 13b vicuna model? | 7 | Right now, as far as I've read, the fastest with a gpu is exllama, outperforming everything else. However, I haven't heard anything about how it compares with both llama.cpp gpu inference (wich was merged very recently) and other quantization methods (squeezeLLM is one that stands out a lot). What's the fastest for inference? Right now the environment I have is a free colab, it's only for testing. I'll get something with similar specs for the actual software. | 2023-06-21T00:38:57 | https://www.reddit.com/r/LocalLLaMA/comments/14eskrb/what_is_the_fastest_way_to_do_inference_with_a_34/ | KillerX629 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14eskrb | false | null | t3_14eskrb | /r/LocalLLaMA/comments/14eskrb/what_is_the_fastest_way_to_do_inference_with_a_34/ | false | false | self | 7 | null |
Models produce incoherent statements | 1 | [removed] | 2023-06-21T00:57:43 | https://www.reddit.com/r/LocalLLaMA/comments/14esz3l/models_produce_incoherent_statements/ | SpareSink5530 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14esz3l | false | null | t3_14esz3l | /r/LocalLLaMA/comments/14esz3l/models_produce_incoherent_statements/ | false | false | default | 1 | null |
How to dive deeper into LLMs? | 12 | Hi there! I come from a computer-vision background. As LLMs are gaining much hype, I am diving deeper into the NLP field as well. I have gone through language models such as BERT and GPT, as a lot of resources were available for them. Now I want to dive deeper into large LLMs. Can you guys provide any roadmap, resources, or architecture code to follow? | 2023-06-21T01:23:59 | https://www.reddit.com/r/LocalLLaMA/comments/14etfd1/how_to_dive_deeper_into_llms/ | rajanghimire534 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14etfd1 | false | null | t3_14etfd1 | /r/LocalLLaMA/comments/14etfd1/how_to_dive_deeper_into_llms/ | false | false | self | 12 | null |
Best bet for parseable output? | 3 | Would like it if I could get output in consistent JSON or something somehow so I could do specific stuff with it. Any models better or worse for this than other? Any prompt tricks? I’ve only experimented a tiny bit so far mostly with Wizard 13b | 2023-06-21T01:35:58 | https://www.reddit.com/r/LocalLLaMA/comments/14etov8/best_bet_for_parseable_output/ | EarthquakeBass | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14etov8 | false | null | t3_14etov8 | /r/LocalLLaMA/comments/14etov8/best_bet_for_parseable_output/ | false | false | self | 3 | null |
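(A common prompt trick, sketched here as a generic example rather than anything model-specific: instruct the model to reply with JSON only, then validate with json.loads and retry or fall back on failure. The schema and the model call below are hypothetical placeholders.)

import json

def parse_json_reply(raw_reply: str):
    # strip anything before the first '{' and after the last '}', since models often add chatter
    start, end = raw_reply.find("{"), raw_reply.rfind("}")
    if start == -1 or end == -1:
        return None
    try:
        return json.loads(raw_reply[start:end + 1])
    except json.JSONDecodeError:
        return None

instruction = 'Reply ONLY with JSON matching {"summary": string, "tags": [string]} and nothing else.'
# reply = my_model.generate(instruction + "\n\n" + user_text)  # hypothetical model call
# data = parse_json_reply(reply)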
Best ggml model for instruction following? | 2 | [removed] | 2023-06-21T02:56:49 | https://www.reddit.com/r/LocalLLaMA/comments/14evg01/best_ggml_model_for_instruction_following/ | eyerfing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14evg01 | false | null | t3_14evg01 | /r/LocalLLaMA/comments/14evg01/best_ggml_model_for_instruction_following/ | false | false | default | 2 | null |
Llama-cpp-python is slower than llama.cpp by more than 25%. Let's get it resolved | 36 | Llama.cpp is not just 1 or 2 percent faster; it's a whopping 28% faster than llama-cpp-python: 30.9s vs 39.5s.
I came across this issue two days ago and spent half a day conducting thorough tests and creating a detailed bug report for llama-cpp-python. But it seems like nobody cares about it at all.
So, I decided to post the issue on the bug report page of the text-generation-webui, hoping that they will take notice.
If you are a frequent user of the Webui and use ggml files, this issue should matter to you.
Could someone please follow the simple "Steps to Reproduce" that I have listed below and confirm the existence of this issue?
[https://github.com/oobabooga/text-generation-webui/issues/2788](https://github.com/oobabooga/text-generation-webui/issues/2788)
Once you have done that, please leave a comment on my bug report page to bring it to their attention. Thank you! | 2023-06-21T02:56:50 | https://www.reddit.com/r/LocalLLaMA/comments/14evg0g/llamacpppython_is_slower_than_llamacpp_by_more/ | Big_Communication353 | self.LocalLLaMA | 2023-06-21T03:02:09 | 0 | {} | 14evg0g | false | null | t3_14evg0g | /r/LocalLLaMA/comments/14evg0g/llamacpppython_is_slower_than_llamacpp_by_more/ | false | false | self | 36 | {'enabled': False, 'images': [{'id': '2A6pPlrffR6nIlGIdi_x08hRw9_DuJi2nOx4BRixmXQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_tX2_Lh-xKbyrTuxCsXYgxOkiEw2GgYL_xysvnkdvIE.jpg?width=108&crop=smart&auto=webp&s=4c36a62d0befa4dd32cce4c8636a5ba3180b4f9d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_tX2_Lh-xKbyrTuxCsXYgxOkiEw2GgYL_xysvnkdvIE.jpg?width=216&crop=smart&auto=webp&s=01ac80bf5e712a5c1c32aa1b399543d1bba46746', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_tX2_Lh-xKbyrTuxCsXYgxOkiEw2GgYL_xysvnkdvIE.jpg?width=320&crop=smart&auto=webp&s=986c17be9713671db662b48b292f505a1f0fd12b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_tX2_Lh-xKbyrTuxCsXYgxOkiEw2GgYL_xysvnkdvIE.jpg?width=640&crop=smart&auto=webp&s=8c95d56c5d6b80c10d3a764927b85ca834f2da36', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_tX2_Lh-xKbyrTuxCsXYgxOkiEw2GgYL_xysvnkdvIE.jpg?width=960&crop=smart&auto=webp&s=7999c7ee38b4ee63c297be32b2c84de4af6b7ecf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_tX2_Lh-xKbyrTuxCsXYgxOkiEw2GgYL_xysvnkdvIE.jpg?width=1080&crop=smart&auto=webp&s=77967cb4c59747cf07241d4c9e02915459e58efc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_tX2_Lh-xKbyrTuxCsXYgxOkiEw2GgYL_xysvnkdvIE.jpg?auto=webp&s=144804fc38bf608830a64cae5b54d7c5cda0afcf', 'width': 1200}, 'variants': {}}]} |
Is anyone using local LLMs for their business? | 1 | [removed] | 2023-06-21T05:13:00 | https://www.reddit.com/r/LocalLLaMA/comments/14ey89g/is_anyone_using_local_llms_for_their_business/ | Aggressive-Snow-835 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ey89g | false | null | t3_14ey89g | /r/LocalLLaMA/comments/14ey89g/is_anyone_using_local_llms_for_their_business/ | false | false | default | 1 | null |
Microsoft makes new 1.3B coding LLM that outperforms all models on MBPP except GPT-4, reaches third place on HumanEval above GPT-3.5, and shows emergent properties | 433 | Textbooks Are All You Need
Paper: [https://arxiv.org/abs/2306.11644](https://arxiv.org/abs/2306.11644)
Excerpts:
>In this work, following the footsteps of Eldan and Li, we explore the improvement that can be obtained along a different axis: the *quality* of the data. We demonstrate the power of high quality data in breaking existing scaling laws by training a 1.3B-parameter model, which we call **phi-1**, for roughly 8 passes over 7B tokens (slightly over 50B total tokens seen) followed by finetuning on less than 200M tokens. Despite being several orders of magnitude smaller than competing models, both in terms of dataset and model size, we attain 50.6% pass@1 accuracy on HumanEval and 55.5% pass@1 accuracy on MBPP (Mostly Basic Python Programs), which are one of the best self-reported numbers using only one LLM generation. Moreover, despite being trained on much fewer tokens compared to existing models, phi-1 still displays emergent properties.
>
>Our training relies on three main datasets: A filtered code-language dataset, which is a subset of The Stack and StackOverflow, obtained by using a language model-based classifier (consisting of about 6B tokens); A synthetic textbook dataset consisting of <1B tokens of GPT-3.5 generated Python textbooks; A small synthetic exercises dataset consisting of ∼180M tokens of Python exercises and solutions. Taken together, the above datasets contain less than 7B tokens. The architecture for our 1.3B parameter phi-1 model consists of 24 layers, hidden dimension of 2048, MLP-inner dimension of 8192, and 32 attention heads of dimension 64 each. Aside from FlashAttention, our models *do not* use other new techniques like Fill-In-the-Middle (FIM), or Multi-Query-Attention (MQA) that could further boost performance and efficiency.
>
>The largest improvement in HumanEval resulted from finetuning on the small CodeExercises dataset (<200M tokens). We demonstrate that, quite remarkably the model after finetuning also exhibits a substantial improvement in executing tasks that are not featured in the finetuning dataset. This suggests that our finetuning process might have helped the model in reorganizing and consolidating the knowledge acquired during pretraining, even if such knowledge is not explicitly present in our CodeExercises dataset. By crafting “textbook quality” data we were able to train a model that surpasses almost all open-source models on coding benchmarks such as HumanEval and MBPP despite being 10x smaller in model size and 100x smaller in dataset size.
Extra important excerpt:
>We also believe that significant gains could be achieved by using GPT-4 to generate the synthetic data instead of GPT-3.5, as we noticed that GPT-3.5 data has a high error rate. It is interesting that phi-1 is able to achieve such high coding proficiency despite those errors. | 2023-06-21T06:03:24 | https://www.reddit.com/r/LocalLLaMA/comments/14ez6qf/microsoft_makes_new_13b_coding_llm_that/ | llamaShill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ez6qf | false | null | t3_14ez6qf | /r/LocalLLaMA/comments/14ez6qf/microsoft_makes_new_13b_coding_llm_that/ | false | false | self | 433 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
Is it possible to do personal RLHF? | 4 | I have been using the chatbot example from exllama with LLaMA 65B, with various modified prompts. It's a lot of fun.
A difficulty is that there's personality instability from run to run. That's not all bad, since it's a fun game to try to guide the bot in interesting directions. But deliberately guiding the bot (even implicitly, since the base model isn't instruction tuned) makes the chat less natural.
To get the bot to head in the right direction more consistently on its own, I wonder whether there'd be a way to apply RLHF, or some other fine-tuning technique, using my personal preferences. Let's say I were to thumbs-up and thumbs-down some responses. That'd give me a small dataset to use. Is there any technique that could get value from it? | 2023-06-21T06:09:09 | https://www.reddit.com/r/LocalLLaMA/comments/14ezam7/is_it_possible_to_do_personal_rlhf/ | hold_my_fish | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ezam7 | false | null | t3_14ezam7 | /r/LocalLLaMA/comments/14ezam7/is_it_possible_to_do_personal_rlhf/ | false | false | self | 4 | null |
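(To make the "small dataset" idea concrete: thumbs-up/thumbs-down votes are usually stored as prompt / chosen / rejected records, which is the format most reward-model and DPO-style fine-tuning scripts expect. A minimal sketch, with placeholder contents and a hypothetical file name:)

import json

# one record per comparison: the reply that got a thumbs-up vs. the one that got a thumbs-down
record = {
    "prompt": "<chat history up to the point where the bot replies>",
    "chosen": "<the reply I gave a thumbs-up>",
    "rejected": "<the reply I gave a thumbs-down>",
}

with open("my_preferences.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")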
What is the significance of n_ctx? | 3 | I would like to know what the significance of `n_ctx` is.
I know that it represents the maximum number of tokens that the input sequence can be.
But it is given as a parameter which can be set while initializing the model, so that makes me think: if I need to process a longer prompt, I will just change it.
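(For example, in llama-cpp-python it is just a constructor argument; a minimal sketch with a placeholder model path:)

from llama_cpp import Llama

# n_ctx sets the size of the context window (prompt + generated tokens) in tokens
llm = Llama(model_path="./models/ggml-model-q4_0.bin", n_ctx=2048)
out = llm("Q: What does n_ctx control? A:", max_tokens=64)
print(out["choices"][0]["text"])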
Having said that, I still have the following queries:
- Are LLMs developed with an optimal `n_ctx` value?
- Do different LLMs have different `n_ctx` values?
- How do I check the `n_ctx` values of different models?
- Will setting it to other than the default value impact the generation speed during inference?
Any help is appreciated. | 2023-06-21T06:29:11 | https://www.reddit.com/r/LocalLLaMA/comments/14eznwk/what_is_the_significance_of_n_ctx/ | dhirajsuvarna | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14eznwk | false | null | t3_14eznwk | /r/LocalLLaMA/comments/14eznwk/what_is_the_significance_of_n_ctx/ | false | false | self | 3 | null |
Q: Simple prompts to test model's core reasoning ability? | 10 | I've seen some people use basic arithmetic questions. I'd love to collect such questions so that we can compare models easily. Ideally that can surface the known gaps between open source models and ChatGPT 3.5 / 4 models. | 2023-06-21T07:54:37 | https://www.reddit.com/r/LocalLLaMA/comments/14f17ix/q_simple_prompts_to_test_models_core_reasoning/ | kkimdev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14f17ix | false | null | t3_14f17ix | /r/LocalLLaMA/comments/14f17ix/q_simple_prompts_to_test_models_core_reasoning/ | false | false | self | 10 | null |
create the best coder open-source in the world? | 61 | Alright. It's time for some action!
[https://arxiv.org/abs/2306.11644](https://arxiv.org/abs/2306.11644)
​
Microsoft showed us you can train smaller LLMs, as small as 1.3B parameters, that beat GPT-3.5, which in my opinion is quite good at coding if you are specific enough about single languages. This means you can also teach architecture principles, test actual code-bases and strategies, and even make the model able to interact with the system through commands like write file, read file, explore folder, and make connections/assumptions.
We can say that a 13B model per language is reasonable. Then it means we need to create a democratic way of teaching coding through examples, solutions and algorithms that we create, curate and use open-source. Much like [sharegpt.com](https://sharegpt.com) but for coding tasks, solutions and ways of thinking. We should be wary of 'enforcing' principles and instead show different approaches, as all approaches can have advantages and disadvantages.
This would mean that if it took 4 days on an A100 to train that 1.3B model, let's say it takes 30 days on an A100 to train a 13B model, which we can then run locally comfortably, even us AMD peasant mfers on Ubuntu with ROCm (facepalm).
Then all of us coders will have access to this model that we will constantly improve. Let's face it, AI's going to code better than us soon, let's just admit that and work on improving these coders. I also saw someone making a VSCode extension for WizardCoder which is absolutely great that we'll have large compatibility.
Maybe someone can then make a sort-of self-hosted coder pay per tokens as an official way of making money and supporting the project? :)
Also, why not take it a little bit further and have your own LLM trained on your own dataset, we could do this training in 1 day via fine-tuning, and then you can how your own AI running in your own repository that constantly fine-tunes itself. We could then 'cloudify' gpu VRAM and then maybe have an AI trained on a repository written in multiple languages. | 2023-06-21T12:09:47 | https://www.reddit.com/r/LocalLLaMA/comments/14f5zr7/create_the_best_coder_opensource_in_the_world/ | shaman-warrior | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14f5zr7 | false | null | t3_14f5zr7 | /r/LocalLLaMA/comments/14f5zr7/create_the_best_coder_opensource_in_the_world/ | false | false | self | 61 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
Seeking Guidance on Open-Source LLM Deployment & Fine-Tuning | 0 | [removed] | 2023-06-21T14:09:19 | https://www.reddit.com/r/LocalLLaMA/comments/14f8rns/seeking_guidance_on_opensource_llm_deployment/ | TobyWonKenobi | self.LocalLLaMA | 2023-06-21T14:13:52 | 0 | {} | 14f8rns | false | null | t3_14f8rns | /r/LocalLLaMA/comments/14f8rns/seeking_guidance_on_opensource_llm_deployment/ | false | false | default | 0 | null |
A reminder of why we need "local" LLM: Over 100,000 compromised ChatGPT accounts found for sale on dark web | 1 | 2023-06-21T14:13:07 | https://www.theregister.com/2023/06/20/stolen_chatgpt_accounts/ | kryptkpr | theregister.com | 1970-01-01T00:00:00 | 0 | {} | 14f8v68 | false | null | t3_14f8v68 | /r/LocalLLaMA/comments/14f8v68/a_reminder_of_why_we_need_local_llm_over_100000/ | false | false | default | 1 | null |
Opinions on 33b vs 65b models and PC suggestions please. | 7 | I've been sitting here wondering what to do about my current PC (5950X, 128GB RAM, 4070/4090). I sold a 3090 and stupidly bought a 4090 a few months back. Currently I mostly use the 4070 to game on and the 4090 to run 33b models (since it's cheaper on UK electricity), but I'm pondering whether I should sell the 4090 and maybe get two 3090s with the cash. I'm not planning to sell the 4070 because I could use it to build a gaming machine in the future if I decide to make a dedicated AI machine.
So anyway, my question is: is it worth paying for an upgrade to a new PSU (currently 1000W) and selling the 4090 to put towards 2x3090 just so I can run 65b models decently fast? Is there much difference between 33b and 65b that's worth spending hundreds of dollars, lol? I'd love to hear opinions from those who can run the 65b models. I know I can run 65b on CPU too with offloading, but it's slow as heck, and the 4070+4090 can't seem to be used together layer-wise with the CPU. Also, this motherboard runs the current cards at x8/x8, while two 3090s would run at x16/x4 PCIe, if that is a factor.
Opinions welcome please or advice on 65b models. | 2023-06-21T14:22:02 | https://www.reddit.com/r/LocalLLaMA/comments/14f92zp/opinions_on_33b_vs_65b_models_and_pc_suggestions/ | fluffywuffie90210 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14f92zp | false | null | t3_14f92zp | /r/LocalLLaMA/comments/14f92zp/opinions_on_33b_vs_65b_models_and_pc_suggestions/ | false | false | self | 7 | null |
Which are the best LLMs that can explain code? | 23 | Given a piece of code, I am looking for open-source LLMs that can explain what the code does in layman terms. Brownie points if it can additionally detect syntactic bugs in the code (i.e., the code will produce compile-time error if it is run). And super gigantic brownie points if it can detect semantic bugs in the code (i.e., the code will not produce compile-time errors, might not necessarily even produce run-time errors, but if it runs will surely produce output that's not expected or desired). | 2023-06-21T15:18:26 | https://www.reddit.com/r/LocalLLaMA/comments/14faiqr/which_are_the_best_llms_that_can_explain_code/ | ResearcherNo4728 | self.LocalLLaMA | 2023-06-21T15:24:01 | 0 | {} | 14faiqr | false | null | t3_14faiqr | /r/LocalLLaMA/comments/14faiqr/which_are_the_best_llms_that_can_explain_code/ | false | false | self | 23 | null |
Building koboldcpp_CUDA on Linux | 20 | I've successfully managed to run Koboldcpp CUDA edition on Ubuntu! It's not something you can easily find through a direct search, but with some indirect hints, I figured it out.
On the Koboldcpp GitHub repository, there are no instructions on how to build the CuBlas version, which is crucial for utilizing Nvidia's CUDA cores for text processing and inference in LLMs.
However, I noticed that on the official Koboldcpp releases page ([https://github.com/LostRuins/koboldcpp/releases](https://github.com/LostRuins/koboldcpp/releases)), there is a 300MB executable available for Windows users, named "Koboldcpp CUDA edition." But hey, I'm a Linux user now, and I want my neural networks to work on Linux!
So, I went to the main Koboldcpp GitHub repository ([https://github.com/LostRuins/koboldcpp](https://github.com/LostRuins/koboldcpp)) and followed the instructions to install it on Linux: cloning the repo with "git clone [https://github.com/LostRuins/koboldcpp](https://github.com/LostRuins/koboldcpp)" and then compiling it with the command "make LLAMA_OPENBLAS=1 LLAMA_CLBLAST=1". Everything seemed fine, but where was CUDA? Well, the devs of Koboldcpp don't like Nvidia (which is occasionally mentioned in the releases), so no instructions were provided.
However, Koboldcpp is a fork of llama.cpp, and if you visit their GitHub repository ([https://github.com/ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp)), you'll understand that the LLAMA_OPENBLAS=1 and LLAMA_CLBLAST=1 parameters are actually build options for llama.cpp, not Koboldcpp. Luckily, there was an instruction on how to build with CuBLAS: "make LLAMA_CUBLAS=1". But before that, I had to download and install the CUDA Toolkit.
To ensure a smooth process, I recommend synchronizing your system first with the following commands:
```
sudo apt update
sudo apt upgrade
```
Then proceed with the CUDA Toolkit installation:
```
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
sudo apt-get -y install cuda
```
Please note that it's a hefty 3.5GB download.
After that, you can build Koboldcpp with the following parameters:
```
make LLAMA_CUBLAS=1 LLAMA_OPENBLAS=1 LLAMA_CLBLAST=1
```
Once it's done, you can start using it with these parameters:
```
--useclblast 0 0 --gpulayers 14
```
For 8GB GPUs and 30B models, 14 layers are optimal, but you can adjust this based on your specific model.
​
Messages of success:
```
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3060 Ti
llama_model_load_internal: using CUDA for GPU acceleration
llama_model_load_internal: mem required = 19136.82 MB (+ 3124.00 MB per state)
llama_model_load_internal: allocating batch_size x 1 MB = 512 MB VRAM for the scratch buffer
llama_model_load_internal: offloading 14 repeating layers to GPU
llama_model_load_internal: offloaded 14/63 layers to GPU
llama_model_load_internal: total VRAM used: 5632 MB
```
​ | 2023-06-21T15:35:54 | https://www.reddit.com/r/LocalLLaMA/comments/14faz1d/building_koboldcpp_cuda_on_linux/ | Current-Voice2755 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14faz1d | false | null | t3_14faz1d | /r/LocalLLaMA/comments/14faz1d/building_koboldcpp_cuda_on_linux/ | false | false | self | 20 | {'enabled': False, 'images': [{'id': 'a1QWRPDvXA-AUqmcv2cSMMAcJ1DAQeFgdSV9fAki7Y0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bGVoIjHELN4vIvAEED21I5-DTSMs5VW2272JrDjd6so.jpg?width=108&crop=smart&auto=webp&s=ebb5cac99e579df42039abe960e8a6dc934a7f0b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bGVoIjHELN4vIvAEED21I5-DTSMs5VW2272JrDjd6so.jpg?width=216&crop=smart&auto=webp&s=b125a38c636a9563a2b5a68f2008e88cfc9955d2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bGVoIjHELN4vIvAEED21I5-DTSMs5VW2272JrDjd6so.jpg?width=320&crop=smart&auto=webp&s=9c6140ec0d1d93fcf5650799aaabacd81d37bdf8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bGVoIjHELN4vIvAEED21I5-DTSMs5VW2272JrDjd6so.jpg?width=640&crop=smart&auto=webp&s=213b51b599b31106f9ead159ada92fc132b92e40', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bGVoIjHELN4vIvAEED21I5-DTSMs5VW2272JrDjd6so.jpg?width=960&crop=smart&auto=webp&s=3256192b8e883d7308fd1e64900fbc0faff76841', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bGVoIjHELN4vIvAEED21I5-DTSMs5VW2272JrDjd6so.jpg?width=1080&crop=smart&auto=webp&s=4cd8207e5e007aa50ec6a01c872054425ac5bfe5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bGVoIjHELN4vIvAEED21I5-DTSMs5VW2272JrDjd6so.jpg?auto=webp&s=d051687fd49ea02dee7e8d9dff49bd072119066b', 'width': 1200}, 'variants': {}}]} |
Last Nvidia Drivers let you use the shared memory of your GPU with EXLlama | 80 | My graphics card is an Nvidia RTX 4070 with 12 gigabytes of video memory. I recently upgraded my PC with an additional 32 gigabytes of system RAM, bringing the total to 48 gigabytes.
I tried loading a 33b model (Guanaco is great) with these two options: llama.cpp using a GGML model, or EXLlama using a GPTQ version.
The llama.cpp option was slow, achieving around 0.5 tokens per second. The EXLlama option was significantly faster at around 2.5 tokens per second. It uses system RAM as shared memory once the graphics card's video memory is full, but you have to specify a "gpu-split" value or the model won't load.
I'm writing because I read that the latest Nvidia 535 drivers were slower than the previous versions. So I downgraded, but sadly the shared memory trick no longer works and EXLlama won't load models larger than the GPU's memory capacity... Does anyone have any ideas as to why the previous drivers don't work this way, or anything I could do to gain an additional 1 or 2 tokens per second?
Though the current speed is already impressive for just an RTX 4070 with only 12GB of video memory ...
Thx | 2023-06-21T15:46:53 | https://www.reddit.com/r/LocalLLaMA/comments/14fb9c0/last_nvidia_drivers_let_you_use_the_shared_memory/ | alexthai7 | self.LocalLLaMA | 2023-06-21T16:18:01 | 0 | {} | 14fb9c0 | false | null | t3_14fb9c0 | /r/LocalLLaMA/comments/14fb9c0/last_nvidia_drivers_let_you_use_the_shared_memory/ | false | false | self | 80 | null |
Best LLM for Note and Fact generation | 2 | [removed] | 2023-06-21T15:53:49 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14fbfnf | false | null | t3_14fbfnf | /r/LocalLLaMA/comments/14fbfnf/best_llm_for_note_and_fact_generation/ | false | false | default | 2 | null |
Train a local LLM on my files for use as a general-purpose chatbot | 4 | **tl;dr: I'm a noob and want to have a chatbot answer questions about my personal data. Pls help.**
**edit: I searched the subreddit with some new words and was pointed to LoRA training which seems like the starting point, so this might be moot. Please point me to any new resources from there! :)**
Hi Reddit. I need someone smarter than me to help figure out the best way to accomplish this goal. I wasn't sure what subreddit would be appropriate, so please be gentle if I'm in the wrong place.
I have run LLMs on my local machine with some success and the workflow of using a chatbot assistant is becoming really helpful to me. I have also done some light work with ChatGPT with various 'ask your PDF' type tools.
**My goal is to:**
* **Train a language model on a database of markdown files** to incorporate the information in them to their responses.
* **Run the LLM privately, since I would want to feed it personal information** and train it on me/my household specifically.
For example, I would love to be able to ask the chatbot "Remind me the VIN for my old Honda Accord?" and it can answer.
I'm familiar with Python and deploying web apps, but not really familiar with the world of open source LLM and training models on data.
What's the best way to do this? If I could do it securely, it would be nice to run it in the cloud so that my household could access it anywhere, but not really a requirement if it adds too much complexity to the task.
Happy to throw down a Reddit award for nice helpers, including if someone can point me to a better subreddit for this question :)
​ | 2023-06-21T15:54:21 | https://www.reddit.com/r/LocalLLaMA/comments/14fbg4p/train_a_local_llm_on_my_files_for_use_as_a/ | Creative_Addition_29 | self.LocalLLaMA | 2023-06-21T16:02:03 | 0 | {} | 14fbg4p | false | null | t3_14fbg4p | /r/LocalLLaMA/comments/14fbg4p/train_a_local_llm_on_my_files_for_use_as_a/ | false | false | self | 4 | null |
Local vicuna AI for low end pc? | 1 | [removed] | 2023-06-21T15:56:31 | https://www.reddit.com/r/LocalLLaMA/comments/14fbi79/local_vicuna_ai_for_low_end_pc/ | AdamsText | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14fbi79 | false | null | t3_14fbi79 | /r/LocalLLaMA/comments/14fbi79/local_vicuna_ai_for_low_end_pc/ | false | false | default | 1 | null |
Starting out with local llm | 1 | [removed] | 2023-06-21T15:59:27 | https://www.reddit.com/r/LocalLLaMA/comments/14fbkve/starting_out_with_local_llm/ | haywirehax | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14fbkve | false | null | t3_14fbkve | /r/LocalLLaMA/comments/14fbkve/starting_out_with_local_llm/ | false | false | default | 1 | null |
Using LLM to create dataset | 1 | [removed] | 2023-06-21T16:36:44 | https://www.reddit.com/r/LocalLLaMA/comments/14fck9r/using_llm_to_create_dataset/ | gptzerozero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14fck9r | false | null | t3_14fck9r | /r/LocalLLaMA/comments/14fck9r/using_llm_to_create_dataset/ | false | false | default | 1 | null |
Local rig configuration for llama and LLM | 6 | [removed] | 2023-06-21T16:37:11 | https://www.reddit.com/r/LocalLLaMA/comments/14fcko5/local_rig_configuration_for_llama_and_llm/ | hehsbbslwh142538 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14fcko5 | false | null | t3_14fcko5 | /r/LocalLLaMA/comments/14fcko5/local_rig_configuration_for_llama_and_llm/ | false | false | default | 6 | null |
Training LLaMA with Unstructured Data | 5 | I'm new to these topics, and I couldn't find much information about training the LLaMA models with documents. For example, is it possible to fine-tune Guanaco models with documents? How can I train these models with unstructured data? | 2023-06-21T18:02:40 | https://www.reddit.com/r/LocalLLaMA/comments/14fesbj/training_llama_with_unstructured_data/ | ARBasaran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14fesbj | false | null | t3_14fesbj | /r/LocalLLaMA/comments/14fesbj/training_llama_with_unstructured_data/ | false | false | self | 5 | null |
Transformers samplers added to exllama on oobabooga text-gen-webui, so all the samplers of GPTQ-for-LLaMA now works in exllama! | 38 | 2023-06-21T18:46:18 | https://github.com/oobabooga/text-generation-webui/commit/580c1ee7483e9934b7884ff0071c1784060ba6eb | panchovix | github.com | 1970-01-01T00:00:00 | 0 | {} | 14ffvcn | false | null | t3_14ffvcn | /r/LocalLLaMA/comments/14ffvcn/transformers_samplers_added_to_exllama_on/ | false | false | 38 | {'enabled': False, 'images': [{'id': 'A0zS9fWyOdGCf-jPZS_FPiAT_xwyux6C0Vz2GrqP4ik', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tSUXidxTjZE9W3lEEaRxSJNwZDu_azfGUBErg6Rr2ow.jpg?width=108&crop=smart&auto=webp&s=af17d16e6e029ee27991152f543a05d71bdcb2ed', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tSUXidxTjZE9W3lEEaRxSJNwZDu_azfGUBErg6Rr2ow.jpg?width=216&crop=smart&auto=webp&s=1fce5c138395b1b5f44f391081e0abfa5015756f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tSUXidxTjZE9W3lEEaRxSJNwZDu_azfGUBErg6Rr2ow.jpg?width=320&crop=smart&auto=webp&s=1876c19e6a70ddaf6698a27a2d1379c04fb61790', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tSUXidxTjZE9W3lEEaRxSJNwZDu_azfGUBErg6Rr2ow.jpg?width=640&crop=smart&auto=webp&s=a64ff67f3819cf19d5f4c08e9b3332ced3597bf7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tSUXidxTjZE9W3lEEaRxSJNwZDu_azfGUBErg6Rr2ow.jpg?width=960&crop=smart&auto=webp&s=44f5ba3cc9d15af251af5ab6a2188ea56b172dfe', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tSUXidxTjZE9W3lEEaRxSJNwZDu_azfGUBErg6Rr2ow.jpg?width=1080&crop=smart&auto=webp&s=15265d74466e9e9e05f155a23d0c5ed032faf55d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tSUXidxTjZE9W3lEEaRxSJNwZDu_azfGUBErg6Rr2ow.jpg?auto=webp&s=bdc5ecb29df84d3364cc5ac3bb46da725efc778a', 'width': 1200}, 'variants': {}}]} |
Offering a stipend for researchers to train open-source fine-tuned models | 23 | [removed] | 2023-06-21T18:56:39 | https://www.reddit.com/r/LocalLLaMA/comments/14fg430/offering_a_stipend_for_researchers_to_train/ | newy66 | self.LocalLLaMA | 2023-06-21T19:19:09 | 0 | {} | 14fg430 | true | null | t3_14fg430 | /r/LocalLLaMA/comments/14fg430/offering_a_stipend_for_researchers_to_train/ | false | false | default | 23 | null |
A simple way to "Extending Context to 8K"?! | 161 | 2023-06-21T19:14:58 | https://kaiokendev.github.io/til#extending-context-to-8k | pseudonerv | kaiokendev.github.io | 1970-01-01T00:00:00 | 0 | {} | 14fgjqj | false | null | t3_14fgjqj | /r/LocalLLaMA/comments/14fgjqj/a_simple_way_to_extending_context_to_8k/ | false | false | default | 161 | null |
How to download models from HF from terminal without git LFS or sudo | 0 | [deleted] | 2023-06-21T20:30:47 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14fiht1 | false | null | t3_14fiht1 | /r/LocalLLaMA/comments/14fiht1/how_to_download_models_from_hf_from_terminal/ | false | false | default | 0 | null |
Any arch Linux users here with and GPU? | 1 | [deleted] | 2023-06-21T21:43:58 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14fkemo | false | null | t3_14fkemo | /r/LocalLLaMA/comments/14fkemo/any_arch_linux_users_here_with_and_gpu/ | false | false | default | 1 | null |
Any arch Linux users with AMD GPU? | 10 | Hi, I'm trying to get an LLM to run under Arch Linux with an AMD GPU, but so far without success. Anybody here who can help? I tried oobabooga but was not able to get it to run with ROCm. How has your experience been so far? | 2023-06-21T21:47:47 | https://www.reddit.com/r/LocalLLaMA/comments/14fkibn/any_arch_linux_users_with_amd_gpu/ | RudeboyRudolfo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14fkibn | false | null | t3_14fkibn | /r/LocalLLaMA/comments/14fkibn/any_arch_linux_users_with_amd_gpu/ | false | false | self | 10 | null
Hearing Scheduled for Tomorrow with Hugging Face CEO - Artificial Intelligence: Advancing Innovation Towards the National Interest | 57 | [Stream it here June 22, 2023 10:00AM EST](https://www.youtube.com/live/7s_Wv5VklVA)
[Artificial Intelligence: Advancing Innovation Towards the National Interest](https://science.house.gov/hearings?ID=441AF8AB-7065-45C8-81E0-F386158D625C)
[Hearing Charter \[PDF\]](https://republicans-science.house.gov/_cache/files/f/8/f89a11b6-183c-45d1-99a6-4d2873504261/4CEBD5D89D69186E01D99B7E5F53A361.2023-06-22-ai-hearing-charter.pdf)
​
https://preview.redd.it/enlld2wk1g7b1.png?width=742&format=png&auto=webp&s=b1e4b5d5e33013536a45168f377f305c9c4ca76b | 2023-06-21T22:06:15 | https://www.reddit.com/r/LocalLLaMA/comments/14fkzcj/hearing_scheduled_for_tomorrow_with_hugging_face/ | Basic_Description_56 | self.LocalLLaMA | 2023-06-22T05:09:50 | 0 | {} | 14fkzcj | false | null | t3_14fkzcj | /r/LocalLLaMA/comments/14fkzcj/hearing_scheduled_for_tomorrow_with_hugging_face/ | false | false | 57 | {'enabled': False, 'images': [{'id': 'qAQ0onF1XqFRiBxHoF5saS8LpmACVTHzxE9cK3sUNhg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/LisAhqObgoc8MwcSz5KXOWBRW66ZQFIsvFs3HA5XiJQ.jpg?width=108&crop=smart&auto=webp&s=91d426842e7a66f7718ba0a392614ce8d1bb4b04', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/LisAhqObgoc8MwcSz5KXOWBRW66ZQFIsvFs3HA5XiJQ.jpg?width=216&crop=smart&auto=webp&s=9f8ebbb5e1b16cb571d7fdbf7559dffc9eef0dc6', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/LisAhqObgoc8MwcSz5KXOWBRW66ZQFIsvFs3HA5XiJQ.jpg?width=320&crop=smart&auto=webp&s=8c81b43000e21e361d57fa8f5b115b5c42abe739', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/LisAhqObgoc8MwcSz5KXOWBRW66ZQFIsvFs3HA5XiJQ.jpg?width=640&crop=smart&auto=webp&s=5d922bac878e561d0930fbf5edea6cc250b6eba6', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/LisAhqObgoc8MwcSz5KXOWBRW66ZQFIsvFs3HA5XiJQ.jpg?width=960&crop=smart&auto=webp&s=3503b59e9151f2f99c8d11b66a886ebaa51c8468', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/LisAhqObgoc8MwcSz5KXOWBRW66ZQFIsvFs3HA5XiJQ.jpg?width=1080&crop=smart&auto=webp&s=55a7e6f4e4bc50056f080571ab41a071d51edd91', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/LisAhqObgoc8MwcSz5KXOWBRW66ZQFIsvFs3HA5XiJQ.jpg?auto=webp&s=600a488fbd23f147530002ce338c35078fa31a0f', 'width': 1280}, 'variants': {}}]} |
Hearing Scheduled for Tomorrow with CEO of HuggingFace - Artificial Intelligence: Advancing Innovation Towards the National Interest | 1 | [deleted] | 2023-06-21T22:19:01 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14flazk | false | null | t3_14flazk | /r/LocalLLaMA/comments/14flazk/hearing_scheduled_for_tomorrow_with_ceo_of/ | false | false | default | 1 | null |
Finetune bug with Llama 13, 30 and 65B | 6 | Hey all!
I am trying to finetune the Llama models using the stanford alpaca repo. I can successfully finetune a 7B model, but all of the larger models have a strange issue where the initial loss is massive (1e5) and immediately drops to 0.0 for the rest of the finetune process. The model that I get at the end of the finetune run is completely broken and doesn't know how to answer questions.
I am running the finetune script on 8 A100 80GB GPUs.
If anyone has any ideas as to why I would only get this problem with models larger than 7B and how I could solve it, I would greatly appreciate it. | 2023-06-21T22:19:39 | https://www.reddit.com/r/LocalLLaMA/comments/14flbky/finetune_bug_with_llama_13_30_and_65b/ | Svvance | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14flbky | false | null | t3_14flbky | /r/LocalLLaMA/comments/14flbky/finetune_bug_with_llama_13_30_and_65b/ | false | false | self | 6 | null |
Bright Eye: free mobile AI app to generate art and text.(GPT-4 powered) | 0 | [removed] | 2023-06-21T22:21:32 | https://www.reddit.com/r/LocalLLaMA/comments/14flddm/bright_eye_free_mobile_ai_app_to_generate_art_and/ | Psychological_Ad4766 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14flddm | false | null | t3_14flddm | /r/LocalLLaMA/comments/14flddm/bright_eye_free_mobile_ai_app_to_generate_art_and/ | false | false | default | 0 | null |
Prompt engineering: What are some good prompts you have found to make the chatbot emulate personalities? | 16 | I'm trying to test different LLMs on which can emulate personalities most accurately, and was wondering if anyone has good ideas on how to compose the prompt so that the LLM behaves the way we expect. The personality will be given as a description within the prompt context itself, so we're not trying to emulate any famous person the LLM already knows.
I think there was a post somewhere that gave some guidelines on how to create a character. Does anyone know where we can find it? | 2023-06-21T22:43:37 | https://www.reddit.com/r/LocalLLaMA/comments/14flwpw/prompt_engineering_what_are_some_good_prompts_you/ | RepresentativeOdd276 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14flwpw | false | null | t3_14flwpw | /r/LocalLLaMA/comments/14flwpw/prompt_engineering_what_are_some_good_prompts_you/ | false | false | self | 16 | null |
Yet another fun test (astronomy) | 6 | All models were run with ooba/llama-precise + default model-specific instruct prompt that ooba chose. The data part is from Wolfram.
Prompt:
>Which planets can I see tonight from Honolulu on 2023-03-25 at precisely 9:00pm UTC-10? Use the data below to answer.
>
>planet | altitude | azimuth
>
>Mercury | -23° 5' 20" (below horizon) | 286° 8' 28" (WNW)
>
>Venus | 3° 51' 17.3" | 285° 28' 2" (WNW)
>
>Mars | 53° 1' 20" | 284° 34' 50" (WNW)
>
>Jupiter | -19° 40' 21" (below horizon) | 284° 57' 50" (WNW)
>
>Saturn | -65° 14' 13" (below horizon) | 288° 24' 1" (WNW)
>
>Uranus | 9° 5' 41.9" | 284° 13' 20" (WNW)
>
>Neptune | -41° 53' 35" (below horizon) | 286° 9' 14" (WNW)
Wizard-Vicuna-7B-Uncensored-GPTQ
>Based on the given information, you can see Mercury, Venus, Mars, Jupiter, Saturn, Uranus and Neptune from Honolulu on March 25th at precisely 9:00pm UTC-10. However, due to the Earth's rotation, these planets will appear in different positions relative to each other throughout the night.
airoboros 7B
>To find out which planets you can see tonight from Honolulu on March 25th, 2023 at exactly 9:00 pm UTC-10, we need to use the following information:
>
>Reference time: 2023-03-25T21:00:00Z (UTC+10)
>
>Planet altitudes and azimuths are given in decimal degrees. You can convert them into degrees using this formula:
>
>Decimal degree \* π / 180 ≈ Radian
>
>For example, if a planet is at an altitude of 3.51, then its angle would be approximately equal to 3.51 × π/180 = 2.07 radians or about 118.5 degrees.
>
>Now let's calculate the local sidereal times for each planet:
>
>Mercury: LST = 19h + 2m;
>
>Venus: LST = 19h + 3m;
>
>Mars: LST = 19h + 4m;
>
>Jupiter: LST = 19h + 5m;
>
>Saturn: LST = 19h + 6m;
>
>Uranus: LST = 19h + 7m;
>
>Neptune: LST = 19h + 8m;
>
>Finally, we can determine whether they will be visible during that specific hour by comparing their altitudes with the horizon height:
>
>Mercury: -23° 5' 20" < 0°
>
>Venus: 3° 51' 17.3" > 0°
>
>Mars: 53° 1' 20" > 0°
>
>Jupiter: -19° 40' 21" < 0°
>
>Saturn: -65° 14' 13" < 0°
>
>Uranus: 9° 5' 41.9" > 0°
>
>Neptune: -41° 53' 35" < 0°
>
>So, all planets except Mercury and Venus are above the horizon at this time in Honolulu (which is located on a latitude of approximately 21°N).
WizardLM-7B-V1.0-Uncensored-GPTQ
> You should be able to see Mercury, Venus and Mars from Honolulu on March 25th at 9 pm UTC-10. However, Jupiter will not be visible due to its low altitude of -19 degrees below the horizon. Saturn is also too close to the horizon for visibility. Uranus and Neptune are both too far away in the sky to be seen with the naked eye.
wizardlm-13b-v1.0-uncensored.ggmlv3.q5\_K\_M
>From Honolulu on March 25th at precisely 9pm UTC-10, you can see Mercury below the horizon, Venus in the western sky with an altitude of 3 degrees and 51 minutes, Mars in the southwestern sky with an altitude of 53 degrees and 1 minute, Jupiter also below the horizon, Saturn below the horizon as well, Uranus in the west-northwest sky with an altitude of 9 degrees and 5 minutes, and Neptune is not visible because it's below the horizon.
ChatGPT-4
>On 2023-03-25 at precisely 9:00pm UTC-10 in Honolulu, you will be able to see the following planets:
>
>Venus: Altitude 3° 51' 17.3" | Azimuth 285° 28' 2" (WNW)
>
>Mars: Altitude 53° 1' 20" | Azimuth 284° 34' 50" (WNW)
>
>Uranus: Altitude 9° 5' 41.9" | Azimuth 284° 13' 20" (WNW)
>
>The other planets (Mercury, Jupiter, Saturn, and Neptune) are below the horizon and will not be visible.
I think there is usefulness in using an automated script to query some kind of trustworthy source (like Wolfram, or even Wikipedia search) and then append that data (which is not necessarily easily understood) right into the prompt going into an LLM in order to produce a human-friendly result.
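To make that scripted-lookup idea concrete, here is a minimal sketch; `fetch_planet_table` is a stand-in placeholder you would replace with a real Wolfram/Wikipedia/ephemeris query:

```python
def fetch_planet_table(location: str, timestamp: str) -> str:
    # Placeholder for a real lookup (Wolfram, Wikipedia, an ephemeris library, ...).
    # Hard-coded rows keep the sketch runnable.
    return (
        "planet | altitude | azimuth\n"
        "Venus | 3° 51' | 285° 28' (WNW)\n"
        "Mars | 53° 1' | 284° 34' (WNW)\n"
    )

def build_prompt(location: str, timestamp: str) -> str:
    # Splice trustworthy data into the prompt so the LLM only has to interpret it.
    table = fetch_planet_table(location, timestamp)
    return (
        f"Which planets can I see tonight from {location} at {timestamp}? "
        f"Use the data below to answer.\n\n{table}"
    )

print(build_prompt("Honolulu", "2023-03-25 9:00pm UTC-10"))
```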
Sadly, only ChatGPT-4 got it right today. | 2023-06-22T00:47:36 | https://www.reddit.com/r/LocalLLaMA/comments/14foqxx/yet_another_fun_test_astronomy/ | Mizstik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14foqxx | false | null | t3_14foqxx | /r/LocalLLaMA/comments/14foqxx/yet_another_fun_test_astronomy/ | false | false | self | 6 | null |
Automatic Stock Analysis - Scraping Twitter + GPT4 (or llama) | 1 | 2023-06-22T01:51:40 | https://www.linkedin.com/posts/kristintynski_gpt4-gpt4-gpt35turbo16k-activity-7077372458848145408-xcx7?utm_source=share&utm_medium=member_desktop | transtwin | linkedin.com | 1970-01-01T00:00:00 | 0 | {} | 14fq3w0 | false | null | t3_14fq3w0 | /r/LocalLLaMA/comments/14fq3w0/automatic_stock_analysis_scraping_twitter_gpt4_or/ | false | false | default | 1 | null |
How to Use Langchain with Exllama? | 9 | Is it possible to use LangChain with ExLlama?
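Here is the rough shape I have been sketching: wrapping an already-initialized ExLlama generator behind LangChain's custom LLM interface. The generation call is the part I am least sure about (I am assuming something like ExLlama's `generate_simple` helper), so treat this as a sketch rather than working code.

```python
from typing import Any, List, Optional

from langchain.llms.base import LLM


class ExllamaLLM(LLM):
    """Thin LangChain wrapper around an already-initialized ExLlama generator."""

    generator: Any = None      # an ExLlamaGenerator (or anything with a similar API)
    max_new_tokens: int = 256

    @property
    def _llm_type(self) -> str:
        return "exllama"

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        # ExLlama's example generator returns prompt + completion, so strip the prompt.
        text = self.generator.generate_simple(prompt, max_new_tokens=self.max_new_tokens)
        return text[len(prompt):]
```

An instance could then be passed anywhere LangChain expects an `llm`, e.g. `LLMChain(llm=ExllamaLLM(generator=my_generator), prompt=my_prompt)`.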
I'd appreciate any code snippet. | 2023-06-22T02:06:45 | https://www.reddit.com/r/LocalLLaMA/comments/14fqfdu/how_to_use_langchain_with_exllama/ | jl303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14fqfdu | false | null | t3_14fqfdu | /r/LocalLLaMA/comments/14fqfdu/how_to_use_langchain_with_exllama/ | false | false | self | 9 | null |
What models are GPT4All-J compatible? | 7 | [removed] | 2023-06-22T03:02:25 | https://www.reddit.com/r/LocalLLaMA/comments/14frl6s/what_models_are_gpt4allj_compatible/ | card_chase | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14frl6s | false | null | t3_14frl6s | /r/LocalLLaMA/comments/14frl6s/what_models_are_gpt4allj_compatible/ | false | false | default | 7 | null |
New subreddit for AI prompt requests | 2 | [removed] | 2023-06-22T03:37:09 | https://www.reddit.com/r/LocalLLaMA/comments/14fsaaq/new_subreddit_for_ai_prompt_requests/ | No-Transition3372 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14fsaaq | false | null | t3_14fsaaq | /r/LocalLLaMA/comments/14fsaaq/new_subreddit_for_ai_prompt_requests/ | false | false | default | 2 | null |
Whats the best way to run LLaMA on a M1 Max? Best Model? | 6 | I've been using a couple, but I've been kind of randomly picking them, and I'm not sure I'm using the best method to run them. Anyone want to share their experience on M1? | 2023-06-22T04:04:38 | https://www.reddit.com/r/LocalLLaMA/comments/14fsu1c/whats_the_best_way_to_run_llama_on_a_m1_max_best/ | MumeiNoName | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14fsu1c | false | null | t3_14fsu1c | /r/LocalLLaMA/comments/14fsu1c/whats_the_best_way_to_run_llama_on_a_m1_max_best/ | false | false | self | 6 | null
How to train LLM to other languages? | 2 | [removed] | 2023-06-22T06:19:43 | https://www.reddit.com/r/LocalLLaMA/comments/14fvdxo/how_to_train_llm_to_other_languages/ | omudake | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14fvdxo | false | null | t3_14fvdxo | /r/LocalLLaMA/comments/14fvdxo/how_to_train_llm_to_other_languages/ | false | false | default | 2 | null |
New pruning method, Wanda, can prune LLMs to 50% sparsity with no retraining or weight update needed and minimal degradation | 158 | The models after pruning can be used ***as is.*** Other methods require computationally expensive retraining or a weight update process.
Paper: [https://arxiv.org/abs/2306.11695](https://arxiv.org/abs/2306.11695)
Code: [https://github.com/locuslab/wanda](https://github.com/locuslab/wanda)
Excerpts:
>The argument concerning the need for retraining and weight update does not fully capture the challenges of pruning LLMs. In this work, we address this challenge by introducing a straightforward and effective approach, termed Wanda (Pruning by Weights and activations). This technique successfully prunes LLMs to high degrees of sparsity without *any* need for modifying the remaining weights. Given a pretrained LLM, we compute our pruning metric from the initial to the final layers of the network. After pruning a preceding layer, the subsequent layer receives updated input activations, based on which its pruning metric will be computed. The sparse LLM after pruning is ready to use without further training or weight adjustment.
>
>We evaluate Wanda on the LLaMA model family, a series of Transformer language models at various parameter levels, often referred to as LLaMA-7B/13B/30B/65B. Without any weight update, Wanda outperforms the established pruning approach of magnitude pruning by a large margin. Our method also performs on par with or in most cases better than the prior reconstruction-based method SparseGPT. Note that as the model gets larger in size, the accuracy drop compared to the original dense model keeps getting smaller. For task-wise performance, we observe that there are certain tasks where our approach Wanda gives *consistently* better results across all LLaMA models, i.e. HellaSwag, ARC-c and OpenbookQA.
>
>We explore using parameter efficient fine-tuning (PEFT) techniques to recover performance of pruned LLM models. We use a popular PEFT method LoRA, which has been widely adopted for task specific fine-tuning of LLMs. However, here we are interested in recovering the performance loss of LLMs during pruning, thus we perform a more general “fine-tuning” where the pruned networks are trained with an autoregressive objective on C4 dataset. We enforce a limited computational budget (1 GPU and 5 hours). We find that we are able to restore performance of pruned LLaMA-7B (unstructured 50% sparsity) with a non-trivial amount, reducing zero-shot WikiText perplexity from 7.26 to 6.87. The additional parameters introduced by LoRA is only 0.06%, leaving the total sparsity level still at around 50% level.
​
WikiText perplexity, original dense model vs unstructured 50% sparsity:
||**LLaMA 7B**|**LLaMA 13B**|**LLaMA 30B**|**LLaMA 65B**|
|:-|:-|:-|:-|:-|
|Dense|5.68|5.09|4.77|3.56|
|Wanda|7.26|6.15|5.24|4.57|
​
Accuracies % for 7 zero-shot tasks with unstructured 50% sparsity (original dense model in parentheses):
||BoolQ|RTE|HellaSwag|WinoGrande|ARC-e|ARC-c|OBQA|**Mean**|
|:-|:-|:-|:-|:-|:-|:-|:-|:-|
|**7B**|70.3 (71.7)|53.3 (53.4)|52.9 (58.3)|64.7 (68.0)|64.1 (67.7)|37.0 (38.6)|26.4 (28.0)|52.7 (**55.1**)|
|**13B**|67.1 (68.3)|61.0 (65.3)|56.6 (60.8)|71.7 (70.0)|68.4 (73.6)|41.7 (44.0)|28.0 (30.6)|56.4 (**58.9**)|
|**30B**|70.3 (66.9)|66.8 (61.4)|62.3 (64.8)|71.1 (72.4)|74.8 (75.3)|46.5 (46.9)|32.4 (29.4)|**60.6** (59.6)|
|**65B**|82.2 (81.8)|69.1 (71.8)|64.5 (65.2)|73.9 (76.9)|74.6 (75.4)|45.9 (47.2)|32.8 (36.4)|63.3 (**65.0**)|
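For intuition, here is a minimal per-layer sketch of the pruning metric as I understand it from the paper (my own PyTorch reconstruction, not the authors' code): each weight's importance is |W| times the L2 norm of its input feature's activations over a small calibration set, and the lowest-scoring weights are dropped within each output row.

```python
import torch

def wanda_prune_layer(weight: torch.Tensor, activations: torch.Tensor,
                      sparsity: float = 0.5) -> torch.Tensor:
    """Prune one linear layer with the Wanda metric (unstructured sparsity).

    weight:      (out_features, in_features) weight matrix
    activations: (num_tokens, in_features) calibration inputs to this layer
    """
    act_norm = activations.norm(p=2, dim=0)            # ||X_j||_2 per input feature
    score = weight.abs() * act_norm.unsqueeze(0)       # |W_ij| * ||X_j||_2
    k = int(weight.shape[1] * sparsity)                # weights to drop per output row
    _, drop_idx = torch.topk(score, k, dim=1, largest=False)
    mask = torch.ones_like(weight, dtype=torch.bool)
    mask.scatter_(1, drop_idx, False)
    return weight * mask                                # ready to use, no retraining
```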
​ | 2023-06-22T06:25:26 | https://www.reddit.com/r/LocalLLaMA/comments/14fvht9/new_pruning_method_wanda_can_prune_llms_to_50/ | llamaShill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14fvht9 | false | null | t3_14fvht9 | /r/LocalLLaMA/comments/14fvht9/new_pruning_method_wanda_can_prune_llms_to_50/ | false | false | self | 158 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
How do we proactively defend against government regulation? | 0 | [deleted] | 2023-06-22T07:11:28 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14fwbm5 | false | null | t3_14fwbm5 | /r/LocalLLaMA/comments/14fwbm5/how_do_we_proactively_defend_against_government/ | true | false | default | 0 | null |
Qn about loading / accessing model matrices directly. | 4 | I have a background in model reduction / dimensionality reduction and I have some ideas I'd like to try for reducing model size. But I am not familiar with how to use or access transformer models and their matrices. I have a passing familiarity with transformer model structure (based on blog posts) and, in short, my idea is simply rank reduction of the model weight matrices.
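Concretely, this is the kind of manipulation I have in mind (sketch assumes Hugging Face transformers and a LLaMA-style checkpoint; the model name and layer path are just examples):

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_7b")  # example checkpoint

# The weight matrices are ordinary torch tensors, so they can be listed directly.
for name, param in model.named_parameters():
    if "q_proj" in name:
        print(name, tuple(param.shape))
        break

# Replace one matrix with a rank-r approximation (the reduction I want to try).
with torch.no_grad():
    W = model.model.layers[0].self_attn.q_proj.weight   # LLaMA-style module path
    U, S, Vh = torch.linalg.svd(W.float(), full_matrices=False)
    r = 256
    W.copy_(((U[:, :r] * S[:r]) @ Vh[:r, :]).to(W.dtype))
```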
Given a transformers model loaded in memory, can anyone please tell me how to access the matrices directly? Can I easily replace them? | 2023-06-22T07:39:07 | https://www.reddit.com/r/LocalLLaMA/comments/14fwsrr/qn_about_loading_accessing_model_matrices_directly/ | _supert_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14fwsrr | false | null | t3_14fwsrr | /r/LocalLLaMA/comments/14fwsrr/qn_about_loading_accessing_model_matrices_directly/ | false | false | self | 4 | null |
Open Llama 7B Fine-tuned on Flan Instruction Dataset | 5 | Here’s a new model: https://twitter.com/enricoshippole/status/1671544986431528963?s=46&t=4Lg1z9tXUANCKLiHwRSk_A | 2023-06-22T08:04:11 | https://www.reddit.com/r/LocalLLaMA/comments/14fx8wc/open_llama_7b_finetuned_on_flan_instruction/ | Acrobatic-Site2065 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14fx8wc | false | null | t3_14fx8wc | /r/LocalLLaMA/comments/14fx8wc/open_llama_7b_finetuned_on_flan_instruction/ | false | false | self | 5 | null |
Textbooks Are All You Need | 1 | 2023-06-22T08:13:52 | https://arxiv.org/abs/2306.11644 | [deleted] | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 14fxeph | false | null | t3_14fxeph | /r/LocalLLaMA/comments/14fxeph/textbooks_are_all_you_need/ | false | false | default | 1 | null |
How do you quantize 16fp ggml to 4bit ggml? | 1 | [removed] | 2023-06-22T08:40:14 | https://www.reddit.com/r/LocalLLaMA/comments/14fxv5t/how_do_you_quantize_16fp_ggml_to_4bit_ggml/ | Artemis369212 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14fxv5t | false | null | t3_14fxv5t | /r/LocalLLaMA/comments/14fxv5t/how_do_you_quantize_16fp_ggml_to_4bit_ggml/ | false | false | default | 1 | null |
Fine-Tuning on a 3090 | 6 | Hello everyone,
I’m experimenting with LLMs and I’m interested in fine-tuning a model, even a small one. I already know what techniques can be used to fine tune LLMs efficiently, but I’m not sure about the memory requirements.
What size of model can I fit in a 3090 for finetuning? Is 7B too much for that card? | 2023-06-22T08:46:04 | https://www.reddit.com/r/LocalLLaMA/comments/14fxynl/finetuning_on_a_3090/ | fillo25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14fxynl | false | null | t3_14fxynl | /r/LocalLLaMA/comments/14fxynl/finetuning_on_a_3090/ | false | false | self | 6 | null |
KoboldCpp now supports GPU offload for MPT, GPT-2, GPT-J and GPT-NeoX, plus upgraded K-Quant matmul kernels for OpenCL | 67 | 2023-06-22T09:19:00 | https://github.com/LostRuins/koboldcpp/releases/tag/v1.32 | HadesThrowaway | github.com | 1970-01-01T00:00:00 | 0 | {} | 14fyjyi | false | null | t3_14fyjyi | /r/LocalLLaMA/comments/14fyjyi/koboldcpp_now_supports_gpu_offload_for_mpt_gpt2/ | false | false | 67 | {'enabled': False, 'images': [{'id': 'mIf7PnxpNMItLtQK1SqN9azpmHOLEknWisRI8z-TtJs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XKAW3H-ljRNOaTl0oA-RgGMh_w7eEdFnykzECyYcLuw.jpg?width=108&crop=smart&auto=webp&s=68ce3b74b350ec8a7ad9a824ba8c36ca17632f9c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XKAW3H-ljRNOaTl0oA-RgGMh_w7eEdFnykzECyYcLuw.jpg?width=216&crop=smart&auto=webp&s=f89510ffef12dcc8f8585d2cd00c30f310932473', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XKAW3H-ljRNOaTl0oA-RgGMh_w7eEdFnykzECyYcLuw.jpg?width=320&crop=smart&auto=webp&s=d37010b049a919b2deb3a67fc236b76cab928007', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XKAW3H-ljRNOaTl0oA-RgGMh_w7eEdFnykzECyYcLuw.jpg?width=640&crop=smart&auto=webp&s=887f2a65199a930159e17aa53a9c5fc823ef40ac', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XKAW3H-ljRNOaTl0oA-RgGMh_w7eEdFnykzECyYcLuw.jpg?width=960&crop=smart&auto=webp&s=046ff177a075b52185426a87a7a37f56cbf95653', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XKAW3H-ljRNOaTl0oA-RgGMh_w7eEdFnykzECyYcLuw.jpg?width=1080&crop=smart&auto=webp&s=28ec9a2f19276e1e32e68cae3645d61e43968e4c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XKAW3H-ljRNOaTl0oA-RgGMh_w7eEdFnykzECyYcLuw.jpg?auto=webp&s=e059506f08141479d64844d50d025c08629b702b', 'width': 1200}, 'variants': {}}]} |
Need help finding local LLM | 6 | Sorry... we got lost in the variety of available large language models.
We spent a lot of time reading and watching YouTube videos, but have not yet
been able to get the project up and running.
checked e.g.:
- https://medium.com/geekculture/list-of-open-sourced-fine-tuned-large-language-models-llm-8d95a2e0dc76
- https://github.com/Mooler0410/LLMsPracticalGuide
- https://www.reddit.com/r/LocalLLaMA/comments/12r552r/creating_an_ai_agent_with_vicuna_7b_and_langchain/
- https://www.youtube.com/watch?v=9ISVjh8mdlA
What we're looking for:
- running full local
- no API-key requirements
- no data exposing to external
- GPU: available for testing is an GTX 1060 and an RTX 3080
- must run on Windows 10 and Windows 11 (compliance requirement)
- for testing we would like to use
https://github.com/oobabooga/text-generation-webui
- should be able to read PDF (at best without pre processing), Markdown
and HTML, .py, .go. Any other format we should be able to convert with
pandoc. Currently in total ~30 GB
- content: production process information (not only data, a lot of
text), user manuals for people working in production and source code
- long term goal (~ 6 month): using it with langchain (python not JS)
Any ideas to start without ending up in "tutorial-hell" where we are at the moment.
Edit -- 2023-06-27:
Thank you for all the comments. I played a bit around but didn't make any real progress. Will update if I get it to work. | 2023-06-22T11:25:35 | https://www.reddit.com/r/LocalLLaMA/comments/14g0web/need_help_finding_local_llm/ | vbd | self.LocalLLaMA | 2023-06-27T11:12:04 | 0 | {} | 14g0web | false | null | t3_14g0web | /r/LocalLLaMA/comments/14g0web/need_help_finding_local_llm/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': '66U9o_LU4nSitenoNOAE8D-uRa7BXF9oUao638_JbYQ', 'resolutions': [{'height': 68, 'url': 'https://external-preview.redd.it/G4prpdCMa0joJzc2wZ7G0b39ohCsj3IqFazrzoMY2Rk.jpg?width=108&crop=smart&auto=webp&s=4e4cc8742fbfc4ce60a456b50751974da5fcb11c', 'width': 108}, {'height': 137, 'url': 'https://external-preview.redd.it/G4prpdCMa0joJzc2wZ7G0b39ohCsj3IqFazrzoMY2Rk.jpg?width=216&crop=smart&auto=webp&s=71b6a005b2b9edd647d908b659b9f78793328951', 'width': 216}, {'height': 204, 'url': 'https://external-preview.redd.it/G4prpdCMa0joJzc2wZ7G0b39ohCsj3IqFazrzoMY2Rk.jpg?width=320&crop=smart&auto=webp&s=c3d273d4c73ed412d92efc387457b40299ccc660', 'width': 320}, {'height': 408, 'url': 'https://external-preview.redd.it/G4prpdCMa0joJzc2wZ7G0b39ohCsj3IqFazrzoMY2Rk.jpg?width=640&crop=smart&auto=webp&s=6c345a0681bbc7ac20439f9543fa12d6efc61bc2', 'width': 640}, {'height': 613, 'url': 'https://external-preview.redd.it/G4prpdCMa0joJzc2wZ7G0b39ohCsj3IqFazrzoMY2Rk.jpg?width=960&crop=smart&auto=webp&s=86c9c2900ea56cf36b8f1145d545b4071ddc07ed', 'width': 960}], 'source': {'height': 619, 'url': 'https://external-preview.redd.it/G4prpdCMa0joJzc2wZ7G0b39ohCsj3IqFazrzoMY2Rk.jpg?auto=webp&s=448c87974f483560804665eadeb1634d40f029f6', 'width': 969}, 'variants': {}}]} |
Attempt to run Llama on a remote server with chatbot-ui | 1 | hi! I really like the solution [https://github.com/keldenl/gpt-llama.cpp](https://github.com/keldenl/gpt-llama.cpp), which helps to deploy [https://github.com/mckaywrigley/chatbot-ui](https://github.com/mckaywrigley/chatbot-ui) on a local model. I am running this together with Wizard 7b or 13b locally and it works fine, but when I tried to move it to a remote server I ran into an error.
Please tell me what this may be related to. I uploaded all the files to the server (since my computer runs slowly), downloaded and changed the .env file, etc. I ran a check and saw:
​
`--RESULTS--`
`Curl command was successful!`
`To use any app with gpt-llama.cpp, please provide the following as the OPENAI_API_KEY:/home/ubuntu/github/llama.cpp/models/7b/7b.bin`
But after I launch the chatbot-ui with the command `OPENAI_API_HOST=http://181.211.175.234:8000/ npm run dev`,
nothing opens at all. When I launch it with Docker, the web UI opens but I get these errors every time:
​
`"Incorrect API key provided: /home/ub**.bin. You can find your API key at https://platform.openai.com/account/api-keys."`
and
​
`Internal Server Error`
`Code: 500`
please tell me, maybe you know what can be done?
thank you!
PS: I tested llama.cpp separately via the command line and everything worked fine, so I think the error is connected with the API layer somewhere.
oobaboga works fine, but I want to use the chatbot-ui interface | 2023-06-22T12:15:00 | https://www.reddit.com/r/LocalLLaMA/comments/14g1xg1/attempt_to_run_llama_on_a_remote_server_with/ | Aplestrong | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14g1xg1 | false | null | t3_14g1xg1 | /r/LocalLLaMA/comments/14g1xg1/attempt_to_run_llama_on_a_remote_server_with/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'KZ4MH95gSBLXwrRYj9KBDHgddF_nN27jM9NvN4JEy88', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1T0EAfGagcGJJVj4esfSZxi7QmVqSuCD0eA96XqH_dA.jpg?width=108&crop=smart&auto=webp&s=8b4829986b896f21c07a679811451fb873decd91', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1T0EAfGagcGJJVj4esfSZxi7QmVqSuCD0eA96XqH_dA.jpg?width=216&crop=smart&auto=webp&s=5d9a90ba9bb5455ef53aeac6a04d0491ca9d0b09', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1T0EAfGagcGJJVj4esfSZxi7QmVqSuCD0eA96XqH_dA.jpg?width=320&crop=smart&auto=webp&s=2db2b7c113c4b7d63ace773e028898ea4986a036', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1T0EAfGagcGJJVj4esfSZxi7QmVqSuCD0eA96XqH_dA.jpg?width=640&crop=smart&auto=webp&s=737e47f2082687cf07c09db08cb002d0d467ead9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1T0EAfGagcGJJVj4esfSZxi7QmVqSuCD0eA96XqH_dA.jpg?width=960&crop=smart&auto=webp&s=77652225d19101864d34fbcd1f75b013a60bdebf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1T0EAfGagcGJJVj4esfSZxi7QmVqSuCD0eA96XqH_dA.jpg?width=1080&crop=smart&auto=webp&s=6bda479a6c5ca88fc7a9fa64be9fa4306f012b72', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1T0EAfGagcGJJVj4esfSZxi7QmVqSuCD0eA96XqH_dA.jpg?auto=webp&s=08ca79b36e41c95474cb2e0268c23a4f31cd82ca', 'width': 1200}, 'variants': {}}]} |
Best for story writing? | 5 | I'm just now starting to get into Local LLMs (literally just a few minutes ago) and I was hoping for some direction to save me from spending hours doing research
I'm running a 4090 and 32GB of system RAM. I'm looking for something that grasps story concepts and can help me by giving me a rough draft. The longer the memory, the better. I'd definitely prefer something uncensored, but that's not really a necessity.
Any starting point would be greatly appreciated! | 2023-06-22T13:34:04 | https://www.reddit.com/r/LocalLLaMA/comments/14g3pcu/best_for_story_writing/ | Ok-Scar011 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14g3pcu | false | null | t3_14g3pcu | /r/LocalLLaMA/comments/14g3pcu/best_for_story_writing/ | false | false | self | 5 | null |
What is the Best Training Data Format (CSV, text, JSON) for Fine-tuning Language Model with Conversation History? | 9 | Hello everyone! I'm seeking guidance on the training data format for fine-tuning a Language Model (LM) with conversation history to enhance its conversational abilities using dialogue datasets.
I've come across a valuable resource that provides Python scripts for creating datasets from dialogues, which can be found here: [Link](https://www.reddit.com/r/PygmalionAI/comments/12omct3/python_scripts_to_creat_dataset_from_dialogues_in/).
My question is: What is the recommended format (CSV, text, JSON) for organizing the training data when fine-tuning the LM with conversation history? Should I concatenate the entire conversation into a single sequence, or should I consider a different approach to effectively retain the context?
Additionally, could you provide some examples of training data formats? Is the format consistent across all Language Models (LMs)?
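To give a concrete example of the kind of thing I mean by "format": one common shape is a JSON Lines file where each line holds one complete conversation (the field names below follow the ShareGPT-style convention; other projects use instruction/output pairs instead, so this is illustrative rather than universal):

```python
import json

record = {
    "conversations": [
        {"from": "human", "value": "Hi, how do I sort a list in Python?"},
        {"from": "gpt",   "value": "Use sorted(my_list), or my_list.sort() for in-place sorting."},
        {"from": "human", "value": "And in reverse order?"},
        {"from": "gpt",   "value": "Pass reverse=True, e.g. sorted(my_list, reverse=True)."},
    ]
}

# One JSON object per line keeps the full conversation history together per example.
with open("train.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```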
I would greatly appreciate any insights, experiences, or suggestions you may have regarding the ideal training data format in this context. Thank you kindly for your assistance! | 2023-06-22T13:49:22 | https://www.reddit.com/r/LocalLLaMA/comments/14g42lm/what_is_the_best_training_data_format_csv_text/ | TurbulentDelivery799 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14g42lm | false | null | t3_14g42lm | /r/LocalLLaMA/comments/14g42lm/what_is_the_best_training_data_format_csv_text/ | false | false | self | 9 | null |
Which LLM model in GPT4All would you recommend for academic use like research, document reading and referencing. | 4 | I just installed gpt4all on my MacOS M2 Air, and was wondering which model I should go for given my use case is mainly academic. | 2023-06-22T14:20:05 | https://www.reddit.com/r/LocalLLaMA/comments/14g4tnn/which_llm_model_in_gpt4all_would_you_recommend/ | Qwertyboss69420 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14g4tnn | false | null | t3_14g4tnn | /r/LocalLLaMA/comments/14g4tnn/which_llm_model_in_gpt4all_would_you_recommend/ | false | false | self | 4 | null |
Does anyone else enjoy simply having a debate with an LLM? | 42 | [deleted] | 2023-06-22T14:26:11 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14g4z8k | false | null | t3_14g4z8k | /r/LocalLLaMA/comments/14g4z8k/does_anyone_else_enjoy_simply_having_a_debate/ | false | false | default | 42 | null |
Vicuna-33b v1.3 has been released | 102 | 2023-06-22T14:51:34 | https://huggingface.co/lmsys/vicuna-33b-v1.3 | Nabakin | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 14g5lx6 | false | null | t3_14g5lx6 | /r/LocalLLaMA/comments/14g5lx6/vicuna33b_v13_has_been_released/ | false | false | 102 | {'enabled': False, 'images': [{'id': 'BwcLyCGg33DOdhE3Mk97hh0-AqISC_nE5VSvBTso1TY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/CDukAzwuAh7HUKTK5HUXuevmjO3QB7A3jSnJZNZUMDk.jpg?width=108&crop=smart&auto=webp&s=c6cf7ce8c22d81fe83e2fc0154b0887e4301bd48', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/CDukAzwuAh7HUKTK5HUXuevmjO3QB7A3jSnJZNZUMDk.jpg?width=216&crop=smart&auto=webp&s=2e75951f2d6bd7971adca8259f0009ec2de88585', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/CDukAzwuAh7HUKTK5HUXuevmjO3QB7A3jSnJZNZUMDk.jpg?width=320&crop=smart&auto=webp&s=96443afd4cc70f96f1ffaf6357076a9a508ff001', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/CDukAzwuAh7HUKTK5HUXuevmjO3QB7A3jSnJZNZUMDk.jpg?width=640&crop=smart&auto=webp&s=f05cb5b67eb0ed4834ab77f4c6bc68688a4f76bd', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/CDukAzwuAh7HUKTK5HUXuevmjO3QB7A3jSnJZNZUMDk.jpg?width=960&crop=smart&auto=webp&s=6602d6877811dd10684e54d00b8381e4d6d5292f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/CDukAzwuAh7HUKTK5HUXuevmjO3QB7A3jSnJZNZUMDk.jpg?width=1080&crop=smart&auto=webp&s=b938c2aa02e5114410a624effa646be4fca746ce', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/CDukAzwuAh7HUKTK5HUXuevmjO3QB7A3jSnJZNZUMDk.jpg?auto=webp&s=f72620a3331e51d010458ee60799f840e396c7ef', 'width': 1200}, 'variants': {}}]} |
MPT-30B: Raising the bar for open-source foundation models | 263 | 2023-06-22T14:56:44 | https://www.mosaicml.com/blog/mpt-30b | rerri | mosaicml.com | 1970-01-01T00:00:00 | 0 | {} | 14g5qhi | false | null | t3_14g5qhi | /r/LocalLLaMA/comments/14g5qhi/mpt30b_raising_the_bar_for_opensource_foundation/ | false | false | 263 | {'enabled': False, 'images': [{'id': 'EeXzUCKBjydlerLHBFNvNhX4rG4IYagtrxCg_SaFKUQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/GvnI6cz5xXGkVjb-lwy1Q7ziwzPqL85FZc-KvSC1pt8.jpg?width=108&crop=smart&auto=webp&s=61a126666ca4b4a4d4c2242a8ff3096289e9ee1a', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/GvnI6cz5xXGkVjb-lwy1Q7ziwzPqL85FZc-KvSC1pt8.jpg?width=216&crop=smart&auto=webp&s=52d5688e70fcf0dc3c50173b305b6ee4a259441e', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/GvnI6cz5xXGkVjb-lwy1Q7ziwzPqL85FZc-KvSC1pt8.jpg?width=320&crop=smart&auto=webp&s=95b88b879b5f1ba843c770fb00e2377958a1bcf3', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/GvnI6cz5xXGkVjb-lwy1Q7ziwzPqL85FZc-KvSC1pt8.jpg?width=640&crop=smart&auto=webp&s=c9f7ace7f0f3cc0ef0d4a618171185436528ab47', 'width': 640}, {'height': 501, 'url': 'https://external-preview.redd.it/GvnI6cz5xXGkVjb-lwy1Q7ziwzPqL85FZc-KvSC1pt8.jpg?width=960&crop=smart&auto=webp&s=4cfe6224353bf244c96a2458b2af1d2f4d4e1a82', 'width': 960}, {'height': 563, 'url': 'https://external-preview.redd.it/GvnI6cz5xXGkVjb-lwy1Q7ziwzPqL85FZc-KvSC1pt8.jpg?width=1080&crop=smart&auto=webp&s=a023c113904a6d8d154062d113d88414d910049f', 'width': 1080}], 'source': {'height': 577, 'url': 'https://external-preview.redd.it/GvnI6cz5xXGkVjb-lwy1Q7ziwzPqL85FZc-KvSC1pt8.jpg?auto=webp&s=5066c5ec4a8afaafbd149dfdf0be794ea524a6f0', 'width': 1105}, 'variants': {}}]} |
||
What is the approach for extraction of structured data from financial documents | 1 | I have some PDFs that contain information about bonds and their performance. I was wondering if I could use LLMs to do this extraction. Is there any model recommended for this use case? I am also new to training models for a specific use case, so any tips on how to train them for this scenario would be appreciated. | 2023-06-22T15:16:39 | https://www.reddit.com/r/LocalLLaMA/comments/14g691g/what_is_the_approach_for_extraction_of_structured/ | SisyphusRebel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14g691g | false | null | t3_14g691g | /r/LocalLLaMA/comments/14g691g/what_is_the_approach_for_extraction_of_structured/ | false | false | self | 1 | null |
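One common way to approach the extraction question above is to pull the raw text out of each PDF and ask a local model to return the fields as JSON. The sketch below is only that, a sketch: the PDF path, model path, field names, and prompt wording are assumptions for illustration, not anything taken from the post.

```python
import json

from pypdf import PdfReader   # pip install pypdf
from llama_cpp import Llama   # pip install llama-cpp-python

# Hypothetical paths and field names -- adjust to your own documents and model.
PDF_PATH = "bond_report.pdf"
MODEL_PATH = "models/vicuna-13b.ggmlv3.q4_0.bin"

# 1. Pull plain text out of the PDF.
reader = PdfReader(PDF_PATH)
text = "\n".join(page.extract_text() or "" for page in reader.pages)

# 2. Ask a local model to emit the fields as JSON.
llm = Llama(model_path=MODEL_PATH, n_ctx=4096)
prompt = (
    "Extract every bond mentioned in the document below. "
    'Return only a JSON list of objects with keys "issuer", "coupon", '
    '"maturity" and "ytd_return".\n\n'
    f"Document:\n{text[:6000]}\n\nJSON:"
)
result = llm(prompt, max_tokens=512, temperature=0.0)

# 3. Parse the answer; models often add extra prose, so parsing can fail.
try:
    bonds = json.loads(result["choices"][0]["text"])
except json.JSONDecodeError:
    bonds = None
print(bonds)
```

In practice you would usually chunk long documents and validate or retry when the JSON fails to parse, since instruction-tuned models often wrap their answer in extra prose.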
We need a sensible standard | 29 | Instruction templates are always changing: Human/Assistant, Instruction/Response, and the list goes on. It would be nice if all the offshoot fine-tuned models at least got it together and were consistently trained with the same pair…
Additionally, it’s annoying to me because (and maybe I’m wrong) it appears a lot of these tags take more than one token… “Assistant”, for example (see the tokenizer check after this post).
Finally, the two are not always sensible for every use case; for instance, if you have models talking with models, there is no human.
Why isn’t it always:
in/out
Guess I just feel like a rant today. Am I alone? Can anyone help? Can’t anyone be the change?
With great GPU power comes great prompt responsibility. | 2023-06-22T15:57:33 | https://www.reddit.com/r/LocalLLaMA/comments/14g7a2m/we_need_a_sensible_standard/ | silenceimpaired | self.LocalLLaMA | 2023-06-22T16:05:25 | 0 | {} | 14g7a2m | false | null | t3_14g7a2m | /r/LocalLLaMA/comments/14g7a2m/we_need_a_sensible_standard/ | false | false | self | 29 | null |
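The token-count complaint in the post above is easy to check: run the candidate role tags through the tokenizer of whichever model you serve and count the pieces. Here is a minimal sketch with Hugging Face `transformers`; the checkpoint id is a placeholder, so substitute the model you actually use.

```python
from transformers import AutoTokenizer  # pip install transformers sentencepiece

# Placeholder checkpoint -- use the tokenizer of the model you actually serve.
tok = AutoTokenizer.from_pretrained("huggyllama/llama-7b")

for tag in ["Human", "Assistant", "### Instruction:", "### Response:", "in", "out"]:
    ids = tok(tag, add_special_tokens=False).input_ids
    print(f"{tag!r:>20} -> {len(ids)} token(s): {tok.convert_ids_to_tokens(ids)}")
```

If "Assistant" really does split into several pieces on your tokenizer, that supports the poster's hunch that shorter tags, or dedicated special tokens, would be cheaper per turn.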
Thoughts on LLMs Being Used for ERP/Romantic Chatbots and the SW Industry | 28 | The use of uncensored models and ERP has continued to grow in popularity. I'd like to have an honest discussion about what that means for humans moving forward. Is it good? Bad? Both?
I'm sure most of us have heard of CarynAI. The creator, Caryn Marjorie, said she wanted to cure loneliness for her fan base. I read that as she wanted to monetize the loneliness of her fan base. That being said, are LLMs a tool for loneliness or do they ultimately make users feel worse?
This [article](https://www.psychologytoday.com/us/blog/mind-brain-and-value/202112/the-rising-loneliness-economy) by Psychology Today talks about the rise of the loneliness economy. Caryn Marjorie is far from alone in her attempt to capitalize on loneliness: the article discusses the RentAFriend app, where you can pay someone $40/hr to have dinner with you. So that raises the question: why is it okay to pay for a human friend or for someone's attention on OnlyFans, but it suddenly becomes "weird" when AI is involved? Is there a risk of the user's view of reality being altered?
Last, if companies continue to prey on consumer loneliness, isn't it important to keep free and open source solutions available? | 2023-06-22T17:16:56 | https://www.reddit.com/r/LocalLLaMA/comments/14g9468/thoughts_on_llms_being_used_for_erpromantic/ | renegadellama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14g9468 | false | null | t3_14g9468 | /r/LocalLLaMA/comments/14g9468/thoughts_on_llms_being_used_for_erpromantic/ | false | false | self | 28 | {'enabled': False, 'images': [{'id': 'ruQN1gbYkPixfvu2gMNyhLvOsMTsgxTEA02ThEtZ3IQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/twFmB3cJZuEk8ZuezFhVx9pRoWcXX1JIPqhF-oDY_uw.jpg?width=108&crop=smart&auto=webp&s=5634799bf14eb77373d2e390e6d8ac1ff9eb1a10', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/twFmB3cJZuEk8ZuezFhVx9pRoWcXX1JIPqhF-oDY_uw.jpg?width=216&crop=smart&auto=webp&s=5b2dadebadb81b142aec05ac5fe7cea6d5041ff5', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/twFmB3cJZuEk8ZuezFhVx9pRoWcXX1JIPqhF-oDY_uw.jpg?width=320&crop=smart&auto=webp&s=c00cb933d42200683c74ac76d5e4166bc5cb4d01', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/twFmB3cJZuEk8ZuezFhVx9pRoWcXX1JIPqhF-oDY_uw.jpg?width=640&crop=smart&auto=webp&s=f2c275dafc5797bd2711b145869c3b42b886cd51', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/twFmB3cJZuEk8ZuezFhVx9pRoWcXX1JIPqhF-oDY_uw.jpg?width=960&crop=smart&auto=webp&s=e1ee21287b804f5aeaf0be0f446ebec0a4fd2cb9', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/twFmB3cJZuEk8ZuezFhVx9pRoWcXX1JIPqhF-oDY_uw.jpg?width=1080&crop=smart&auto=webp&s=191513d17688a1d7bd3df48d18c5122662b0c54c', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/twFmB3cJZuEk8ZuezFhVx9pRoWcXX1JIPqhF-oDY_uw.jpg?auto=webp&s=32b84358c645c42d2610efeab645e91900c58af9', 'width': 1528}, 'variants': {}}]} |
MosaicML open sources their model | 43 | https://venturebeat.com/ai/mosaicml-challenges-openai-with-its-new-open-source-language-model/ | 2023-06-22T18:39:11 | https://www.reddit.com/r/LocalLLaMA/comments/14gb82v/mosaicml_open_sources_their_model/ | someotherrandomwords | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14gb82v | false | null | t3_14gb82v | /r/LocalLLaMA/comments/14gb82v/mosaicml_open_sources_their_model/ | false | false | self | 43 | {'enabled': False, 'images': [{'id': 'TnevkBsWpBwdB8YEuAZLcWswNsZD1Z341LILfoQFbtI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/C8I13f9aeLEiXw86GqrATxMbmQ6diFk4kS3SNFrHw8M.jpg?width=108&crop=smart&auto=webp&s=b133040eafa3f1bd5d626b290f3e2fb1fc32e1cb', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/C8I13f9aeLEiXw86GqrATxMbmQ6diFk4kS3SNFrHw8M.jpg?width=216&crop=smart&auto=webp&s=102c7954c9a15bb270f49ac598ab610ab6cee8ea', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/C8I13f9aeLEiXw86GqrATxMbmQ6diFk4kS3SNFrHw8M.jpg?width=320&crop=smart&auto=webp&s=4155f8d48b484e16a385bb8f10ff006b1806a1b1', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/C8I13f9aeLEiXw86GqrATxMbmQ6diFk4kS3SNFrHw8M.jpg?width=640&crop=smart&auto=webp&s=7e0ddb1f764b6c5bb85b9a7e1633bd2eb2ad1e5e', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/C8I13f9aeLEiXw86GqrATxMbmQ6diFk4kS3SNFrHw8M.jpg?width=960&crop=smart&auto=webp&s=1bfba54506a1f60e5e103d0eab2f7dde6aa292f3', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/C8I13f9aeLEiXw86GqrATxMbmQ6diFk4kS3SNFrHw8M.jpg?width=1080&crop=smart&auto=webp&s=80b0879c719fb223300db78b8fa1e08de95941fb', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/C8I13f9aeLEiXw86GqrATxMbmQ6diFk4kS3SNFrHw8M.jpg?auto=webp&s=ef80736e0c57e31ee6a34ce5338e3b94d04656a7', 'width': 1200}, 'variants': {}}]} |