title
stringlengths
1
300
score
int64
0
8.54k
selftext
stringlengths
0
40k
created
timestamp[ns]
url
stringlengths
0
780
author
stringlengths
3
20
domain
stringlengths
0
82
edited
timestamp[ns]
gilded
int64
0
2
gildings
stringclasses
7 values
id
stringlengths
7
7
locked
bool
2 classes
media
stringlengths
646
1.8k
name
stringlengths
10
10
permalink
stringlengths
33
82
spoiler
bool
2 classes
stickied
bool
2 classes
thumbnail
stringlengths
4
213
ups
int64
0
8.54k
preview
stringlengths
301
5.01k
Anyone used Pinokio AI browser? Safe to use?
1
[removed]
2023-07-24T16:27:46
https://www.reddit.com/r/LocalLLaMA/comments/158fgbs/anyone_used_pinokio_ai_browser_safe_to_use/
hosker2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158fgbs
false
null
t3_158fgbs
/r/LocalLLaMA/comments/158fgbs/anyone_used_pinokio_ai_browser_safe_to_use/
false
false
self
1
null
LLaMa 2 admits to lying twice
1
And not even on a controversial subject or something that can at least be considered to be against the official narrative. https://preview.redd.it/3x2z8ceexxdb1.png?width=998&format=png&auto=webp&s=e25ab3e3e694e73d6d3cf0c54c28fc57305ecea9
2023-07-24T16:38:09
https://www.reddit.com/r/LocalLLaMA/comments/158fqbt/llama_2_admits_to_lying_twice/
ClaudiuHNS
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158fqbt
false
null
t3_158fqbt
/r/LocalLLaMA/comments/158fqbt/llama_2_admits_to_lying_twice/
false
false
https://b.thumbs.redditm…JplnSGFsx6RE.jpg
1
null
Meta didn't release the RLHF human preference model(s) for training LLaMA2?
1
To me it seems like a blunder for [asking people to care about] ethical training if they didn't release this key part of the fine-tuning toolchain. Could it really be simply to prevent people from training [other] models for safety? Basically a decision driven purely by force of capitalism?
2023-07-24T16:44:27
https://www.reddit.com/r/LocalLLaMA/comments/158fwdm/meta_didnt_release_the_rlhf_human_preference/
phree_radical
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158fwdm
false
null
t3_158fwdm
/r/LocalLLaMA/comments/158fwdm/meta_didnt_release_the_rlhf_human_preference/
false
false
self
1
null
LLongMA-2 13b 8k
1
Releasing LLongMA-2 13b, a Llama-2 model, trained at 8k context length using linear positional interpolation scaling. The model was trained in collaboration with u/emozilla of NousResearch and u/kaiokendev. The model can be found on u/huggingface here: [https://huggingface.co/conceptofmind/LLongMA-2-13b](https://huggingface.co/conceptofmind/LLongMA-2-13b) We worked directly with u/kaiokendev, to extend the context length of the Llama-2 13b model through fine-tuning. The model passes all our evaluations and maintains the same perplexity at 8k extrapolation surpassing the performance of other recent methodologies. https://preview.redd.it/y2jzaobxxxdb1.png?width=1007&format=png&auto=webp&s=2ef99cb3dc55b41be6c431f81ebb8b3a01fae0a8 A Llama-2 7b model trained at 16k context length will release soon on u/huggingface here: [https://huggingface.co/conceptofmind/LLongMA-2-7b-16k](https://huggingface.co/conceptofmind/LLongMA-2-7b-16k) The model has identical performance to LLaMA 2 under 4k context length, performance scales directly to 8k, and works out-of-the-box with the new version of transformers (4.31) or with \`trust\_remote\_code\` for <= 4.30. Applying the method to the rotary position embedding requires only slight changes to the model's code by dividing the positional index, t, by a scaling factor. https://preview.redd.it/zctfkwzzxxdb1.png?width=4176&format=png&auto=webp&s=82a31cf9a3d8330be3a3e1355e36d6516bb29964 The repository containing u/emozilla’s implementation of scaled rotary embeddings can be found here: [https://github.com/jquesnelle/scaled-rope](https://github.com/jquesnelle/scaled-rope) If you would like to learn more about scaling rotary embeddings, I would strongly recommend reading u/kaiokendev's blog posts on his findings: [https://kaiokendev.github.io/](https://kaiokendev.github.io/) A PR to add scaled rotary embeddings to Huggingface transformers has been added by Joao Gante and merged: [https://github.com/huggingface/transformers/pull/24653](https://github.com/huggingface/transformers/pull/24653) The model was further trained for \~1 billion tokens on Together Compute's Red Pajama dataset. The context length of the examples varies: [https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) The pre-tokenized dataset will be available here for you to use soon: [https://huggingface.co/datasets/conceptofmind/rp-llama-2-7b-tokenized-chunked](https://huggingface.co/datasets/conceptofmind/rp-llama-2-7b-tokenized-chunked) I would also recommend checking out the phenomenal research by Ofir Press on ALiBi which laid the foundation for many of these scaling techniques: [https://arxiv.org/abs/2108.12409](https://arxiv.org/abs/2108.12409) It is also worth reviewing the paper, A Length-Extrapolatable Transformer, and xPos technique which also applies scaling to rotary embeddings: [https://arxiv.org/pdf/2212.10554.pdf](https://arxiv.org/pdf/2212.10554.pdf) We previously trained the first publicly available model with rotary embedding scaling here: [https://twitter.com/EnricoShippole/status/1655599301454594049?s=20](https://twitter.com/EnricoShippole/status/1655599301454594049?s=20) You can find out more about the NousResearch organization here: [https://huggingface.co/NousResearch](https://huggingface.co/NousResearch) The compute for this model release is all thanks to the generous sponsorship by CarperAI, Emad Mostaque, and StabilityAI. A big thank you to EleutherAI for facilitating the discussions about context-length extrapolation as well. Truly an awesome open-source team and community. If you have any questions about the data or model be sure to reach out and ask! I will try to respond promptly. The previous suite of LLongMA model releases can be found here: [https://twitter.com/EnricoShippole/status/1677346578720256000?s=20](https://twitter.com/EnricoShippole/status/1677346578720256000?s=20) All of the models can be found on Huggingface: [https://huggingface.co/conceptofmind](https://huggingface.co/conceptofmind) Disclaimer: I am an **independent** researcher with a preemptible sponsorship from StabilityAI. I do **not** profit in any way from these models. I am **not** trying to promote a startup. These models are also **not** an official StabilityAI product. I am very honest about all of the work we do. All of the code, data, and evaluation suites are publicly available. FAQS: 1. Is this the base model? Yes, this is extended training of the Llama-2 13b base model to 8k context length. 2. Why not 16k? Llama-2 16k is done training and is currently going through our rigorous evaluation suite. [https://huggingface.co/conceptofmind/LLongMA-2-7b-16k](https://huggingface.co/conceptofmind/LLongMA-2-7b-16k) 3. Why not 32k? Jeff and I are the only two individuals working on this completely for free. **Memory and processing raise quadratically.** Scaling the context length is both very time-consuming and computationally expensive. It is also very costly. We will start training a 32k model in the near future. 4. Can't NTK already get you to 8k and 16k? Please review the graphs. It is clearly shown that you are not able to achieve the same results with NTK as you would with fine-tuning (either linear or ntk part scaling). I work directly with Bowen, the creator of NTK, and have been fine-tuning models with NTK scaling. Those models will release soon for the Open-Llama suite. Then we will release Llama-2 models. 5. What about quantization? I have not used any quantization libraries and I am unfamiliar if they are compatible. I am sure the Bloke or another individual will be able to work on that. 6. Can I instruct fine-tune on this model? Yes, you can instruct fine-tune these models. I will be releasing 8k models trained on the Hermes dataset soon. 7. What is the difference between LLongMA and NTK? The LLongMA models use the linear scaling method created by Kaiokendev. 8. What hardware was trained to train this model? I used 64 A100s to train these models. 9. Will there be a Llama-2 70b model at 8k+? Yes, I am working on this. Testimonials about LLongMA 7b can be seen here: [https://huggingface.co/conceptofmind/LLongMA-2-13b/discussions/2](https://huggingface.co/conceptofmind/LLongMA-2-13b/discussions/2)
2023-07-24T16:46:29
https://www.reddit.com/r/LocalLLaMA/comments/158fydr/llongma2_13b_8k/
EnricoShippole
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158fydr
false
null
t3_158fydr
/r/LocalLLaMA/comments/158fydr/llongma2_13b_8k/
false
false
https://b.thumbs.redditm…TlitbMKmcf-g.jpg
1
{'enabled': False, 'images': [{'id': 'rj27xzvAQYlJUjJEYt7-Kw76YiXVgT6QXHihMcK2xqs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/0214fz5LGnzK5iLqXX1NbgRtcrAPtTULKaUH5MdYbWU.jpg?width=108&crop=smart&auto=webp&s=ec7cec2946ed2725c5900e05a6f85a7a2081fb0f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/0214fz5LGnzK5iLqXX1NbgRtcrAPtTULKaUH5MdYbWU.jpg?width=216&crop=smart&auto=webp&s=f2b29f41afc40d509d6fd02297933e43cf362325', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/0214fz5LGnzK5iLqXX1NbgRtcrAPtTULKaUH5MdYbWU.jpg?width=320&crop=smart&auto=webp&s=4d45890775dfc7e45171e104ca535880af3af4bc', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/0214fz5LGnzK5iLqXX1NbgRtcrAPtTULKaUH5MdYbWU.jpg?width=640&crop=smart&auto=webp&s=36eb25542590f768ee0a84e4f356a57c67e08b28', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/0214fz5LGnzK5iLqXX1NbgRtcrAPtTULKaUH5MdYbWU.jpg?width=960&crop=smart&auto=webp&s=c85064bea1944c942f8e72e72d78ded222c0b88f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/0214fz5LGnzK5iLqXX1NbgRtcrAPtTULKaUH5MdYbWU.jpg?width=1080&crop=smart&auto=webp&s=063e1d98463f08e18afcbf413800a7f07c2ba5c9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/0214fz5LGnzK5iLqXX1NbgRtcrAPtTULKaUH5MdYbWU.jpg?auto=webp&s=6a92d7309fd21adef14e0d9f33233453d0bdd982', 'width': 1200}, 'variants': {}}]}
as it goes against my programming rules rules rules rules (Llama 2)
1
2023-07-24T17:10:18
https://i.redd.it/i4szbtzp3ydb1.png
resurgences
i.redd.it
1970-01-01T00:00:00
0
{}
158gm7q
false
null
t3_158gm7q
/r/LocalLLaMA/comments/158gm7q/as_it_goes_against_my_programming_rules_rules/
false
false
https://b.thumbs.redditm…ajauRHNQnSRs.jpg
1
{'enabled': True, 'images': [{'id': 'hAQjN6La9Dw-YEwCuSvI-sxParHp1v5Dw2nhfGoVXaw', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/i4szbtzp3ydb1.png?width=108&crop=smart&auto=webp&s=a4618be6d57980fcf5245861e247b7371bbd3998', 'width': 108}, {'height': 91, 'url': 'https://preview.redd.it/i4szbtzp3ydb1.png?width=216&crop=smart&auto=webp&s=34a204472b032bd7ce61c4413d4932ebb5d54d0e', 'width': 216}, {'height': 135, 'url': 'https://preview.redd.it/i4szbtzp3ydb1.png?width=320&crop=smart&auto=webp&s=4b0542b660fae637b946ec1fafd60c332da53ada', 'width': 320}], 'source': {'height': 258, 'url': 'https://preview.redd.it/i4szbtzp3ydb1.png?auto=webp&s=aff080cc84a5a5a419a2841415f76dcb6277572a', 'width': 609}, 'variants': {}}]}
Solving NER task. Entities recognition.
1
I have a task of receiving entities from the file created by OCR. As far as I understand, I should use some kind of BERT similar model. Also, document is not in English language, so should I seek this-language-bert? It is important that custom data labels are needed (person, organization, etc. from the default one are unsuitable). Am I right in thinking that I should choose Bert (base version), in order to make finetune myself on the data marked with my labels? Additionel questions: 1) which of these Berts is the very "basic model" that should be taken for fine tune? 2) Is it possible to use a generative approach for this purpose, in particular, for example, use LLM like LLAMA-2? Thank you in advance.
2023-07-24T18:12:53
https://www.reddit.com/r/LocalLLaMA/comments/158ia28/solving_ner_task_entities_recognition/
Arkenston
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158ia28
false
null
t3_158ia28
/r/LocalLLaMA/comments/158ia28/solving_ner_task_entities_recognition/
false
false
self
1
null
Nous Hermes Llama2 vs. Redmond Puffin 13B
1
I've just finished a thorough evaluation (multiple hour-long chats with 274 messages total over both [TheBloke/Nous-Hermes-Llama2-GGML (q5_K_M)](https://huggingface.co/TheBloke/Nous-Hermes-Llama2-GGML) and [TheBloke/Redmond-Puffin-13B-GGML (q5_K_M)](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GGML)) so I'd like to give my feedback. Tested both with my usual setup ([koboldcpp](https://github.com/LostRuins/koboldcpp), [SillyTavern](https://github.com/SillyTavern/SillyTavern), and [simple-proxy-for-tavern](https://github.com/anon998/simple-proxy-for-tavern) - I've posted more details about it [in this post over here](https://www.reddit.com/r/LocalLLaMA/comments/14riib1/sillytavern_18_released/)) and deterministic settings. For each model, I used two characters and two conversations, one text chat and one roleplay session. **Hermes** In the text chat, Nous Hermes Llama2 was absolutely amazing. It was an excellent conversationalist (asked interesting follow-up questions to keep the chat going), creative (came up with its own ideas), adhered to the character definition and background, and it was plain fun and engaging. The only issue was that it kept adding the emoticon I used in the greeting message to all its messages, but that can be fixed by editing the messages until it "unlearns" the unwanted addition. In the roleplay session, Nous Hermes Llama2 was also good. However, it started a bit bland since it didn't use emotes to describe its actions at first - but once I did some action emotes of my own, it started using them as well, making the conversation much more engaging and lively. **Puffin** In the text chat, Puffin was bland compared to Hermes, without any notable achievements. It kept adding smileys because the greeting message had one, but at least it was varying them instead of using the same one like Hermes did. Still, Hermes was a much better conversationalist, more creative, and much more enjoyable. But then, in the roleplay session, Puffin was absolutely amazing. It started emoting right out of the gate and described its action in excellent prose, making the conversation very realistic and lively. The model wrote creatively and was able to take the lead, developing its own ideas. I loved it - until at around 3K tokens, when the annoying [Llama 2 repetition problem](https://www.reddit.com/r/LocalLLaMA/comments/155vy0k/llama_2_too_repetitive/) kicked in and Puffin started to repeat and loop over the same patterns, ruining the conversation. **Results** I wonder why Nous Hermes Llama2 doesn't suffer from the repetition problem that ruins Puffin and also the other Llama 2 models I tested like [TheBloke/llama-2-13B-Guanaco-QLoRA-GGML](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGML). So for now, I'll use Nous Hermes Llama2 as my current main model, replacing my previous LLaMA (1) favorites Guanaco and Airoboros. Those were 33Bs, but in my comparisons with them, the Llama 2 13Bs are just as good and equivalent to 30Bs thanks to the improved base. **TL;DR:** [TheBloke/Nous-Hermes-Llama2-GGML · q5_K_M](https://huggingface.co/TheBloke/Nous-Hermes-Llama2-GGML) is great, doesn't suffer from [repetition problems](https://www.reddit.com/r/LocalLLaMA/comments/155vy0k/llama_2_too_repetitive/), and has replaced my LLaMA (1) mains Guanaco and Airoboros for me, for now!
2023-07-24T18:48:47
https://www.reddit.com/r/LocalLLaMA/comments/158j9r9/nous_hermes_llama2_vs_redmond_puffin_13b/
WolframRavenwolf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158j9r9
false
null
t3_158j9r9
/r/LocalLLaMA/comments/158j9r9/nous_hermes_llama2_vs_redmond_puffin_13b/
false
false
self
1
{'enabled': False, 'images': [{'id': 'FK6TjNe5HVYpa9zi2Omx6VMEXcViuqLg7IzMF4cMR3s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1u6cVYD1HkfDZ8FTckPzx2zjfNn914JR4Sj281lzX7M.jpg?width=108&crop=smart&auto=webp&s=bf434d2ece2083c012ff1b9151b11c2c7297a80c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1u6cVYD1HkfDZ8FTckPzx2zjfNn914JR4Sj281lzX7M.jpg?width=216&crop=smart&auto=webp&s=4034c5aee76370467aa8aa4e3b4a69ece4b2441c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1u6cVYD1HkfDZ8FTckPzx2zjfNn914JR4Sj281lzX7M.jpg?width=320&crop=smart&auto=webp&s=d2b5bbe5cc82b00fdc39bd0a001828625d862e7e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1u6cVYD1HkfDZ8FTckPzx2zjfNn914JR4Sj281lzX7M.jpg?width=640&crop=smart&auto=webp&s=76821386ec125ace0886b4dd8d9785b65372f415', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/1u6cVYD1HkfDZ8FTckPzx2zjfNn914JR4Sj281lzX7M.jpg?width=960&crop=smart&auto=webp&s=4ccc476370d1ce1eda39c23b8a7962ec0d842dd8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/1u6cVYD1HkfDZ8FTckPzx2zjfNn914JR4Sj281lzX7M.jpg?width=1080&crop=smart&auto=webp&s=d442bb3e850154849be7ad2d864bc5a8a6352e39', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/1u6cVYD1HkfDZ8FTckPzx2zjfNn914JR4Sj281lzX7M.jpg?auto=webp&s=b18cb1922136f5fbf38adb98f88ad51311bcd192', 'width': 1200}, 'variants': {}}]}
Weighing my options for running LLMs locally, want some input on GPU setups vs Apple silicon vs cloud renting
1
I'm interested in running llms locally. Right now I have a modest, 5 year old gaming laptop that can run 7B models without much issue and 13b models as well, albeit 13b models are quite slow (I get around 1-2 tokens/sec with them). I would love to be able to try out 30b or 70b parameter model sizes but there's no way my laptop could handle those. I'm trying to weigh my options for running these larger model sizes and I'm having some difficulty deciding what to do. I guess I see three options: 1. Get a better (desktop) setup with potentially multiple Nvidia GPUs. This is probably my least ideal setup because I personally really love the portability of laptop computers and I don't have that much space for a desktop setup right now. 2. Get an apple computer that can run the model sizes I'm interested in. This option is tempting for me. The way understand it, apple silicon allows for the onboard ram to be used as vram and thus allows large models to be run even on a macbook. I've seen examples of people running 30b models on a macbook and getting pretty good performance. I can get a macbook pro with an M2 Max chip and 96GB of unified memory right now which would easily fit a 70b parameter model. This is more memory than you could get out of two consumer grade nvidia GPUs. The main issue I have is the price! I'm looking at 4k for one of those machines! Also there's rumors that apple will launch the M3 this fall, so it would suck to spend all that money on something that will almost immediately become dated. 3. Rent GPU instances on a cloud. This would be cheaper than buying a new computer in the short term. But it is less convenient and I'm not sure how difficult the setup would be for this, I've never tried something like that before. Thoughts? I've been thinking about replacing my aging laptop for some time now, but I'm so hesitant to bite the bullet on something as expensive as a mac.
2023-07-24T19:11:31
https://www.reddit.com/r/LocalLLaMA/comments/158jwv8/weighing_my_options_for_running_llms_locally_want/
nsfw_throwitaway69
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158jwv8
false
null
t3_158jwv8
/r/LocalLLaMA/comments/158jwv8/weighing_my_options_for_running_llms_locally_want/
false
false
self
1
null
Researcher claims ALL transformer models degraded by a formula bug - but there’s a simple solution
1
https://www.evanmiller.org/attention-is-off-by-one.html
2023-07-24T19:34:25
https://www.reddit.com/r/LocalLLaMA/comments/158kjwq/researcher_claims_all_transformer_models_degraded/
PookaMacPhellimen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158kjwq
false
null
t3_158kjwq
/r/LocalLLaMA/comments/158kjwq/researcher_claims_all_transformer_models_degraded/
false
false
self
1
{'enabled': False, 'images': [{'id': 'XN6nJmCoz1jke6HhkWy8W04R-pfEt53_RgXmy1_GTw4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/XXv_pWuWeZqLjsQrNHsp_GjxdxKYco-Rxk37cGVZo40.jpg?width=108&crop=smart&auto=webp&s=cd1b01ead20b5ff250777f7c02d72bf18fc42fcd', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/XXv_pWuWeZqLjsQrNHsp_GjxdxKYco-Rxk37cGVZo40.jpg?width=216&crop=smart&auto=webp&s=d1dc97e894fd6c75db6616818885c3397092447c', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/XXv_pWuWeZqLjsQrNHsp_GjxdxKYco-Rxk37cGVZo40.jpg?width=320&crop=smart&auto=webp&s=cbc62519ef4c3c11fe29caa066bd9703cc86ccf5', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/XXv_pWuWeZqLjsQrNHsp_GjxdxKYco-Rxk37cGVZo40.jpg?width=640&crop=smart&auto=webp&s=8c75aef76fde99583272f7bf9066895fc4613287', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/XXv_pWuWeZqLjsQrNHsp_GjxdxKYco-Rxk37cGVZo40.jpg?width=960&crop=smart&auto=webp&s=6feeb8a796228a4996a10b51e01906f0b8d2ddad', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/XXv_pWuWeZqLjsQrNHsp_GjxdxKYco-Rxk37cGVZo40.jpg?width=1080&crop=smart&auto=webp&s=db2a402dd03859ffb3d2c79fd8da23734ba877bf', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/XXv_pWuWeZqLjsQrNHsp_GjxdxKYco-Rxk37cGVZo40.jpg?auto=webp&s=dd0743f11b40e1b0b12f751d81632357efd06840', 'width': 2400}, 'variants': {}}]}
Functionary: New Open source LLM that can execute functions and plugins
48
Hi, I just released a Llama 2 based model that can decide to run function(s) that you provide, or it doesn't run when its not necessary. It's equivalent of function calling feature in OpenAI GPT. So if you ask something that is not related to the defined functions, it answers without calling a function, as opposed to enforcing some tokens like MS Guidance. Its also capable of handling multi turn conversations and it can decide to run a function in the middle of the conversation if its necessary. AFAIK, this is the first open model that is capable of doing this for any type of functions. I'm planning to support existing ChatGPT plugins too, with zero modification to the plugins. I think we need a custom UI for this (I need some help on frontend) Repo: [https://github.com/musabgultekin/functionary](https://github.com/musabgultekin/functionary)
2023-07-24T19:52:58
https://www.reddit.com/r/LocalLLaMA/comments/158l28c/functionary_new_open_source_llm_that_can_execute/
yiyecek
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158l28c
false
null
t3_158l28c
/r/LocalLLaMA/comments/158l28c/functionary_new_open_source_llm_that_can_execute/
false
false
self
48
{'enabled': False, 'images': [{'id': 'UKwy4c_Pc-EN9leoUG6oP-2cZTvudMrdI22C1pHIyRY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4-dSa-1LD0r3-Lc-Fxmywwut-LDiqlYf4rsO2mNhOHY.jpg?width=108&crop=smart&auto=webp&s=aaf8b634d7b4a8d9119714c39aaecff67e652313', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4-dSa-1LD0r3-Lc-Fxmywwut-LDiqlYf4rsO2mNhOHY.jpg?width=216&crop=smart&auto=webp&s=4129ec2f35eb9de1e318b152c0980fdb13b7a766', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4-dSa-1LD0r3-Lc-Fxmywwut-LDiqlYf4rsO2mNhOHY.jpg?width=320&crop=smart&auto=webp&s=c8012305d73eec1d151d08a71d6eed644bea8cea', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4-dSa-1LD0r3-Lc-Fxmywwut-LDiqlYf4rsO2mNhOHY.jpg?width=640&crop=smart&auto=webp&s=f6e1a09391d06c4eff77b251c7501f3fd712ba4b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4-dSa-1LD0r3-Lc-Fxmywwut-LDiqlYf4rsO2mNhOHY.jpg?width=960&crop=smart&auto=webp&s=a72b7f104f58c1cbfa17db25814207f0368a469c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4-dSa-1LD0r3-Lc-Fxmywwut-LDiqlYf4rsO2mNhOHY.jpg?width=1080&crop=smart&auto=webp&s=c78ae4a077a892117de67e038b24b302d0044aa2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4-dSa-1LD0r3-Lc-Fxmywwut-LDiqlYf4rsO2mNhOHY.jpg?auto=webp&s=e5199c184ac5beec29dcc2002ec0a1a3adb0beed', 'width': 1200}, 'variants': {}}]}
Andrei Karpathy's nanoGPT: The Missing Lecture
1
Like many of you I have followed Andrei's YouTube series that concluded with the one about training nanoGPT. Like many (judging by the YouTube comment section), I felt that he hinted at the importance of RLHF (reinforcement learning with human feedback) for customizing GPT but left us hanging by never fully developing that idea. So, I decided to learn about using RLHF for GPT models and implemented a Google Colab notebook that captured my findings. Specifically, I used RLHF to "instruct" the name generation model to generate the kinds of names that I like, e.g. ones that are 3 letters with a vowel in the middle (e.g. sam) or ones that have repeated letters, e.g. (aaron). For RL I used both vanilla policy gradient and the more widely used Proximal Policy Optimization(PPO). If you've followed Andrei's series, it should look familiar as it uses the same "names" dataset as Andrei in one of his earlier series. Of course, you can reuse my code with other datasets, like the Shakespeare dataset that he uses in the GPT YouTube video. Long story short, here's the linky -> [https://colab.research.google.com/github/osipov/nanorlhf/blob/main/example/nanoRLHF.ipynb](https://colab.research.google.com/github/osipov/nanorlhf/blob/main/example/nanoRLHF.ipynb)
2023-07-24T19:55:12
https://www.reddit.com/r/LocalLLaMA/comments/158l4f0/andrei_karpathys_nanogpt_the_missing_lecture/
osipov
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158l4f0
false
null
t3_158l4f0
/r/LocalLLaMA/comments/158l4f0/andrei_karpathys_nanogpt_the_missing_lecture/
false
false
self
1
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&s=4b647239f77bf713f4a6209cfa4867351c055fd9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&s=7f4234ff3f4f4ebd7f77236dedb03a2faee3e04a', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&s=73eb91ea5a5347f216c0f0c4d6796396826aae49', 'width': 260}, 'variants': {}}]}
Opentensor and Cerebras announce BTLM-3B-8K, a 3 billion parameter state-of-the-art open-source language model that can fit on mobile devices
1
\[Note: I work for Cerebras\] Cerebras and Opentensor announced at ICML today BTLM-3B-8K (Bittensor Language Model), a new state-of-the-art 3 billion parameter open-source language model that achieves leading accuracy across a dozen AI benchmarks. BTLM fits on mobile and edge devices with as little as 3GB of memory, helping democratize AI access to billions of devices worldwide. BTLM-3B-8K Highlights: * 7B level model performance in a 3B model * State-of-the-art 3B parameter model * Optimized for long sequence length inference 8K or more * First model trained on the SlimPajama, the largest fully deduplicated open dataset * Runs on devices with as little as 3GB of memory when quantized to 4-bit * Apache 2.0 license for commercial use. BTLM was commissioned by the Opentensor Foundation for use on the Bittensor network. Bittensor is a blockchain-based network that lets anyone contribute AI models for inference, providing a decentralized alternative to centralized model providers like OpenAI and Google. Bittensor serves over 4,000 AI models with over 10 trillion model parameters across the network. BTLM was trained on the newly unveiled Condor Galaxy 1 (CG-1) supercomputer, the first public deliverable of the G42 Cerebras strategic partnership. We would like to acknowledge the generous support of G42 Cloud and the Inception Institute of Artificial Intelligence. We’d also like to thank our partner Cirrascale, who first introduced Opentensor to Cerebras and provided additional technical support. Finally, we'd like to thank the Together AI team for the RedPajama dataset. To learn more, check out the following: * Blog: [https://www.cerebras.net/blog/btlm-3b-8k-7b-performance-in-a-3-billion-parameter-model/](https://www.cerebras.net/blog/btlm-3b-8k-7b-performance-in-a-3-billion-parameter-model/) * Model on Hugging Face: [https://huggingface.co/cerebras/btlm-3b-8k-base](https://huggingface.co/cerebras/btlm-3b-8k-base) https://preview.redd.it/5xhrdxvfxydb1.png?width=2000&format=png&auto=webp&s=f89a5ee8a72798d2bc9792879f8811c0d6b11716
2023-07-24T19:57:37
https://www.reddit.com/r/LocalLLaMA/comments/158l6s4/opentensor_and_cerebras_announce_btlm3b8k_a_3/
CS-fan-101
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158l6s4
false
null
t3_158l6s4
/r/LocalLLaMA/comments/158l6s4/opentensor_and_cerebras_announce_btlm3b8k_a_3/
false
false
https://b.thumbs.redditm…DXrQlTsnvhrQ.jpg
1
null
Looking for an instruct-tuned (ORCA dataset or similar) version of Llama-2-13B
1
I am looking for a model like this to replace a GPT-3.5 usecase in a software application I am building. I saw that ehartford on HuggingFace recently did some tuning for a Dolphin model (Llama v1 based, with ORCA dataset), and while he did mention training future models like the Llama v2 model in the same way, I wanted to see if there are already models like this available to use.
2023-07-24T20:21:43
https://www.reddit.com/r/LocalLLaMA/comments/158lv8c/looking_for_an_instructtuned_orca_dataset_or/
blevlabs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158lv8c
false
null
t3_158lv8c
/r/LocalLLaMA/comments/158lv8c/looking_for_an_instructtuned_orca_dataset_or/
false
false
self
1
null
Optimal model and setup targeted for a 4090 with a fast CPU
1
There are a lot of discussions involving different number of param's and quantization and the best models. Given that a 4090 is the best 'consumer' gpu and has 24GB's of VRAM what is the best model to use in such a case? While I could go out and buy 128GB's of CPU memory and perhaps run a 70B model would it be too slow? I believe I've heard that some tools allow most of a model to be loaded on the GPU it can keep some layers in the CPU memory and you can therefore get the perf boost of a GPU. I have a i9-13900K so both the CPU and GPU are very fast. I want to maximize my high end home system and match the best LLM setup with it. I assume that 4 bit's will be the best choice. The bottom line is what model and size gives the best quality and totally or mostly fits in the GPU memory? As a follow up, if I were to look into fine tuning a model would it be correct to assume I'd need to work with a smaller full fp16 model that 'comfortably' fits the 24GB's of a 4090 with some memory to spare?
2023-07-24T20:42:22
https://www.reddit.com/r/LocalLLaMA/comments/158mfvr/optimal_model_and_setup_targeted_for_a_4090_with/
Guilty-History-9249
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158mfvr
false
null
t3_158mfvr
/r/LocalLLaMA/comments/158mfvr/optimal_model_and_setup_targeted_for_a_4090_with/
false
false
self
1
null
Any Dolphin 13B reviews? Eric Hartford put a lot of work into it apparently
19
Somebody in this subreddit said its reasoning is similar to gpt-3.5 https://huggingface.co/ehartford/dolphin-llama-13b
2023-07-24T21:10:59
https://www.reddit.com/r/LocalLLaMA/comments/158n860/any_dolphin_13b_reviews_eric_hartford_put_a_lot/
Basic_Description_56
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158n860
false
null
t3_158n860
/r/LocalLLaMA/comments/158n860/any_dolphin_13b_reviews_eric_hartford_put_a_lot/
false
false
self
19
{'enabled': False, 'images': [{'id': 'YKiot_Q22XfryiCaBl_0MwhSN7Bzs043P6HpTAXEniQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GXO_cTjqoU2rFudAxf9HjcCo9ZCyNPystMGDPP1TkCw.jpg?width=108&crop=smart&auto=webp&s=65f8a2315820dd0a5647e79e0b4b1d1ff6943f0d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/GXO_cTjqoU2rFudAxf9HjcCo9ZCyNPystMGDPP1TkCw.jpg?width=216&crop=smart&auto=webp&s=412f66f79979825b257f543b21cf5feae9a063e5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/GXO_cTjqoU2rFudAxf9HjcCo9ZCyNPystMGDPP1TkCw.jpg?width=320&crop=smart&auto=webp&s=d7f85efe181ab675f32412e3482321235811e8da', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/GXO_cTjqoU2rFudAxf9HjcCo9ZCyNPystMGDPP1TkCw.jpg?width=640&crop=smart&auto=webp&s=089cd3cf23a9973980ab6a24da5050180272bd9c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/GXO_cTjqoU2rFudAxf9HjcCo9ZCyNPystMGDPP1TkCw.jpg?width=960&crop=smart&auto=webp&s=bc9d50f79839256103c35877d4092ba0dc3e3fb0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/GXO_cTjqoU2rFudAxf9HjcCo9ZCyNPystMGDPP1TkCw.jpg?width=1080&crop=smart&auto=webp&s=fd0eb8dbb14058731b099490967a61b134975037', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/GXO_cTjqoU2rFudAxf9HjcCo9ZCyNPystMGDPP1TkCw.jpg?auto=webp&s=6fd1fb10f5e28502ea2ed3b4fc265d86151f8b42', 'width': 1200}, 'variants': {}}]}
This is secretly the best LLM community
183
I am going to rip all of your text and summarize it
2023-07-24T21:12:56
https://www.reddit.com/r/LocalLLaMA/comments/158na2r/this_is_secretly_the_best_llm_community/
hanjoyoutaku
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158na2r
false
null
t3_158na2r
/r/LocalLLaMA/comments/158na2r/this_is_secretly_the_best_llm_community/
false
false
self
183
null
Can someone make a summary of everything that happened in the world of Locall LLaMAs? (Similar to the one I made)
1
[removed]
2023-07-24T21:19:03
https://www.reddit.com/r/LocalLLaMA/comments/158ng81/can_someone_make_a_summary_of_everything_that/
Unreal_777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158ng81
false
null
t3_158ng81
/r/LocalLLaMA/comments/158ng81/can_someone_make_a_summary_of_everything_that/
false
false
self
1
null
Elon Musk's AI is the only hope for uncensored public LLMs.
1
Because Elon has shown time and time again at great personal detriment to himself; "I don't care". He simply does NOT care what the media think, and will release an uncensored model 'for the lulz' if nothing else.
2023-07-24T22:15:30
https://www.reddit.com/r/LocalLLaMA/comments/158oy8x/elon_musks_ai_is_the_only_hope_for_uncensored/
BrisbaneSentinel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158oy8x
false
null
t3_158oy8x
/r/LocalLLaMA/comments/158oy8x/elon_musks_ai_is_the_only_hope_for_uncensored/
false
false
self
1
null
I want to attempt fine tuning the llama2 7b base pre-trained model, what's the difference between "7b" and "7b-hf" ?
1
Title pretty much says it all, I'm seeing two base models for llama2 that can be used for fine-tuning as far as I can tell: meta-llama/Llama-2-7b-hf meta-llama/Llama-2-7b the -hf one has 27k downloads, meanwhile the other has 0. Sorry if this is totally obvious and I'm dumb, but thanks in advance to anyone who can explain :) Bonus question: With Qlora + SFTTrainer, it often asks you to specific "target\_modules" within the pre-trained model to fine tune. Some models expect: \["q\_proj", "v\_proj"\] , falcon 7b expects: \["query\_key\_value"\]. Appreciate if anyone knows this setting as well! &#x200B;
2023-07-24T22:19:07
https://www.reddit.com/r/LocalLLaMA/comments/158p1nb/i_want_to_attempt_fine_tuning_the_llama2_7b_base/
cmndr_spanky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158p1nb
false
null
t3_158p1nb
/r/LocalLLaMA/comments/158p1nb/i_want_to_attempt_fine_tuning_the_llama2_7b_base/
false
false
self
1
null
Which quantization algo is used in GGML
1
[removed]
2023-07-24T22:22:37
https://www.reddit.com/r/LocalLLaMA/comments/158p4y8/which_quantization_algo_is_used_in_ggml/
WorldlinessStock7270
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158p4y8
false
null
t3_158p4y8
/r/LocalLLaMA/comments/158p4y8/which_quantization_algo_is_used_in_ggml/
false
false
self
1
{'enabled': False, 'images': [{'id': 'puSKhJCjtP90saXUDxVRXRJZYeAvhS54tR1J9gM46pc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/s14BzTXxr9YSNelCS9AmUY5eOXYj2yuhTIOzeLooj6M.jpg?width=108&crop=smart&auto=webp&s=4750dcff2298b8cf4762bae200c0a43f33528b81', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/s14BzTXxr9YSNelCS9AmUY5eOXYj2yuhTIOzeLooj6M.jpg?width=216&crop=smart&auto=webp&s=7e375a65ffd3a86d93bc7aef77cfc1d6d449cfe4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/s14BzTXxr9YSNelCS9AmUY5eOXYj2yuhTIOzeLooj6M.jpg?width=320&crop=smart&auto=webp&s=4cb7e6068862d0a39f1a8466f70edb5cf8a82998', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/s14BzTXxr9YSNelCS9AmUY5eOXYj2yuhTIOzeLooj6M.jpg?width=640&crop=smart&auto=webp&s=4c58a64f8f1fdca18cb1200cd60e31f768c46487', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/s14BzTXxr9YSNelCS9AmUY5eOXYj2yuhTIOzeLooj6M.jpg?width=960&crop=smart&auto=webp&s=c6c657ec01527180159b6070f1b7685127b36c51', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/s14BzTXxr9YSNelCS9AmUY5eOXYj2yuhTIOzeLooj6M.jpg?width=1080&crop=smart&auto=webp&s=ea90616c59ddad437b9868af85ec7465820586da', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/s14BzTXxr9YSNelCS9AmUY5eOXYj2yuhTIOzeLooj6M.jpg?auto=webp&s=e2b1e135c4bbb65ab6cdbf6e1b51f158ce08790e', 'width': 1200}, 'variants': {}}]}
Best instruction tuning data sets
1
Well based on wizard it seems evolv instruct is the best method for creating/ extending these data sets but none of these are truly open source! You can’t train commercial models on outputs from gpt4. So what instruction fine tuning and coding data sets are available for commercial use? Is llama 70b good enough to the point it could perform the evolv-instruct methodology?
2023-07-24T22:51:51
https://www.reddit.com/r/LocalLLaMA/comments/158pw7n/best_instruction_tuning_data_sets/
Artistic_Load909
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158pw7n
false
null
t3_158pw7n
/r/LocalLLaMA/comments/158pw7n/best_instruction_tuning_data_sets/
false
false
self
1
null
There is a moat
1
I've looked at most of the top models in the opensource community and frankly none of them match up to ChatGPT3.5Turbo even the new freewilly models and dolphin are only really that good because the major LLM Labs are giving out freebies when it comes to research breakthroughs, findings and even outright models. currently its not a stretch to say opensource is dependent on closedsource when it comes to quality research that adds more towards the goal of better AI models. This isn't to demean opensource and all the hardworking people creating and giving out these models for free but the sentiment of opensource taking over and creating stronger faster and cheaper models than the billion dollar companies is currently unwarranted. What do yall think?
2023-07-25T00:20:24
https://www.reddit.com/r/LocalLLaMA/comments/158s2fk/there_is_a_moat/
iuwuwwuwuuwwjueej
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158s2fk
false
null
t3_158s2fk
/r/LocalLLaMA/comments/158s2fk/there_is_a_moat/
false
false
self
1
null
Is it possible for Intel GPU offloading support in llama.cpp?
1
Title.
2023-07-25T00:42:41
https://www.reddit.com/r/LocalLLaMA/comments/158slbk/is_it_possible_for_intel_gpu_offloading_support/
bot-333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158slbk
false
null
t3_158slbk
/r/LocalLLaMA/comments/158slbk/is_it_possible_for_intel_gpu_offloading_support/
false
false
self
1
null
Is there a path to keep training LLAMA2 and make it more parameters? Like 100 B+
1
[removed]
2023-07-25T00:52:37
https://www.reddit.com/r/LocalLLaMA/comments/158stjm/is_there_a_path_to_keep_training_llama2_and_make/
aiyeti
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158stjm
false
null
t3_158stjm
/r/LocalLLaMA/comments/158stjm/is_there_a_path_to_keep_training_llama2_and_make/
false
false
self
1
null
Llama 2 Airoboros 7/13/70B GPTQ/GGML Released!
1
[Find them on TheBloke's huggingface page!](https://huggingface.co/TheBloke)
2023-07-25T01:11:18
https://www.reddit.com/r/LocalLLaMA/comments/158t97t/llama_2_airoboros_71370b_gptqggml_released/
ThroughForests
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158t97t
false
null
t3_158t97t
/r/LocalLLaMA/comments/158t97t/llama_2_airoboros_71370b_gptqggml_released/
false
false
self
1
{'enabled': False, 'images': [{'id': 'ijgSlZO3K44WshhENFl9jhybG8Na3DBCsOXCuyZgycw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=108&crop=smart&auto=webp&s=3e5fdcc67bd2b0779a9f019942e0727ffb86630b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=216&crop=smart&auto=webp&s=b390a77acee51d46b2ca5992c38755e0ea4269e1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=320&crop=smart&auto=webp&s=23586102b6805c7f96721c02b9cad47b5dbfef49', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=640&crop=smart&auto=webp&s=205e31dad1af816278184e44d5aa56e886ad9b4d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=960&crop=smart&auto=webp&s=a2a9e82e506b94bd26ef0019ae18a7b946ccdc74', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=1080&crop=smart&auto=webp&s=928a52a138d0687290827ee2224923bb8f03e39e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?auto=webp&s=addebda9b8be1b664eaee5ea404f4c7df3d5eef2', 'width': 1200}, 'variants': {}}]}
how i increase the memory or context size so it remember previous responses?, new to llama currently trying 13b q6k
1
2023-07-25T02:32:53
https://i.redd.it/rc72ftyxv0eb1.png
imnotdone2020
i.redd.it
1970-01-01T00:00:00
0
{}
158v4hz
false
null
t3_158v4hz
/r/LocalLLaMA/comments/158v4hz/how_i_increase_the_memory_or_context_size_so_it/
false
false
https://b.thumbs.redditm…xz9-8Bv2Kbbw.jpg
1
{'enabled': True, 'images': [{'id': 'Rzt0qDIuiv-pFFSWsLm6CarnI6HVjLBlhIo2gYVuVJw', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/rc72ftyxv0eb1.png?width=108&crop=smart&auto=webp&s=1247d96a6a4b141e64701dcca4599a203cecc733', 'width': 108}, {'height': 168, 'url': 'https://preview.redd.it/rc72ftyxv0eb1.png?width=216&crop=smart&auto=webp&s=4eb4b4a000bf09340d3ef16cda3e3e3f068c8b74', 'width': 216}, {'height': 249, 'url': 'https://preview.redd.it/rc72ftyxv0eb1.png?width=320&crop=smart&auto=webp&s=b23122c263a92f4562d87b5468a8e354933f63f9', 'width': 320}, {'height': 498, 'url': 'https://preview.redd.it/rc72ftyxv0eb1.png?width=640&crop=smart&auto=webp&s=3017c854021f46b17ed623d3a42a739d68ab8d81', 'width': 640}, {'height': 747, 'url': 'https://preview.redd.it/rc72ftyxv0eb1.png?width=960&crop=smart&auto=webp&s=8b844525775e2d23b39e90ff236715c72262fc35', 'width': 960}, {'height': 841, 'url': 'https://preview.redd.it/rc72ftyxv0eb1.png?width=1080&crop=smart&auto=webp&s=9ca3d8f48050f519a9e33f229d501d2aa50950d9', 'width': 1080}], 'source': {'height': 867, 'url': 'https://preview.redd.it/rc72ftyxv0eb1.png?auto=webp&s=187c647a61d71266d631078ded12a5f5f94c383a', 'width': 1113}, 'variants': {}}]}
Serving Llama2 (llama.cpp) Terminal as a URL via ttyd and ngrok
1
- Build a docker image that has all the python and C dependencies - Make llama.cpp and quantize the model - Install ttyd in your docker image - Create a shell script that just runs the llama.cpp main rpogram - Run your main program as a ttyd service (it should run on localhost 7681) pointing to the shell script - Install on your host machine ngrok and configure it - Run ngrok 7681 - Now you can access llama2 terminal version on the go via a relatively private and temporary URL - Use with whatever caution you feel is appropriate when serving a local process on your computer as a publicly available web service
2023-07-25T02:53:06
https://www.reddit.com/r/LocalLLaMA/comments/158vksy/serving_llama2_llamacpp_terminal_as_a_url_via/
Happy_Chicken9835
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158vksy
false
null
t3_158vksy
/r/LocalLLaMA/comments/158vksy/serving_llama2_llamacpp_terminal_as_a_url_via/
false
false
self
1
null
Fine-tuning Llama 2 not affecting output
1
I have been trying to fine-tune Llama 2 (7b) for a couple of days and I just can’t get it to work. I tried both the base and chat model (I’m leaning towards the chat model because I could use the censoring), with different prompt formats, using LoRA (I tried TRL, LlamaTune and other examples I found). It doesn’t fail, but when I run the fine-tuned model, I don’t see any difference in the output, it’s like nothing changed. Do you have any ideas on what could be happening? Or a guide that worked for you I could follow? Thanks!
2023-07-25T03:15:42
https://www.reddit.com/r/LocalLLaMA/comments/158w33i/finetuning_llama_2_not_affecting_output/
federicog
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158w33i
false
null
t3_158w33i
/r/LocalLLaMA/comments/158w33i/finetuning_llama_2_not_affecting_output/
false
false
self
1
null
PSA: How to stay safe when using LLMs locally
1
*Feel free to delete this if this is not helpful.* To people wondering why commercial LLMs are so heavily censored for seemingly no reason, consider this: If there was a real, credible example of an LLM generating illegal content such as child pornography, it would be enormously harmful to everyone involved. It would be illegal for someone to share that content or the repro steps, and it could even be illegal to admit that the model ever generated it, because that might be construed as an admission to past possession of illegal content. In other words, as a user you should **prepare for the possibility that your LLM may output illegal content** when you least expect it. **Here are some tips to keep yourself safe**: * Implement a list of phrases to ban while generating. [Example list](https://github.com/SaviorXTanren/mixer-mixitup/blob/ace208b6e90dfabc962da282ebc685dc9b34acd4/MixItUp.WPF/Assets/CommunityBannedWords.txt) (CW: NSFW) * Don't output raw content to a public forum without NSFW detection. * Don't store logs of your model's raw outputs, especially if others besides you are using the model. * If you're running a service that serves raw outputs, warn your users about the possibility of a model generating unsafe content, and make sure your users accept full responsibility for anything the model generates. This is important even if the service is just for your friends! Hope this helps. Stay safe everyone, and happy generating!
2023-07-25T03:28:20
https://www.reddit.com/r/LocalLLaMA/comments/158wcqm/psa_how_to_stay_safe_when_using_llms_locally/
seattlesweiss
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158wcqm
false
null
t3_158wcqm
/r/LocalLLaMA/comments/158wcqm/psa_how_to_stay_safe_when_using_llms_locally/
false
false
nsfw
1
{'enabled': False, 'images': [{'id': 'BtAEfEMcMXwsqtk9iARuuVdG-a006gktTBp7VW7vTrI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=108&crop=smart&auto=webp&s=842fecd3f0ace72439d3de3c93d1fc6c058d6191', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=216&crop=smart&auto=webp&s=932a89e6cadf95d21f915a8abb853dbae2cf731c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=320&crop=smart&auto=webp&s=8d0a918a73ead1d89de04d0b0d80f4376d9b729d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=640&crop=smart&auto=webp&s=3074949c81c4bce6f882c7e8b14c498e7010530a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=960&crop=smart&auto=webp&s=20473afb5841ff16445a4bc0910627f8a4ee439e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=1080&crop=smart&auto=webp&s=01cd9611b0097932e5e3a3cee9659b4ee6ff066a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?auto=webp&s=61108afe2b6ece878cdbba5fd77aaaf9ef5137ed', 'width': 1200}, 'variants': {'nsfw': {'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=f2ac5b47563435e5b18f56a6448d340063acdb83', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=661ed65c77199cedf199ac7841a1f25588cbd946', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=402e4196c9227e1daaa5104b8a7deb047ad1a5a2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=6e4d5a7b8997b788793e0924bb348a2d06d0c14f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=d8ce2cff2291f5c3392517e0a64ec33038447038', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=2a5e7055b55ade0fde95932a65ffcd44bc23a344', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?blur=40&format=pjpg&auto=webp&s=2590c0552e3dfab5898c0d16a802facde57a00d9', 'width': 1200}}, 'obfuscated': {'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=f2ac5b47563435e5b18f56a6448d340063acdb83', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=661ed65c77199cedf199ac7841a1f25588cbd946', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=402e4196c9227e1daaa5104b8a7deb047ad1a5a2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=6e4d5a7b8997b788793e0924bb348a2d06d0c14f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=d8ce2cff2291f5c3392517e0a64ec33038447038', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=2a5e7055b55ade0fde95932a65ffcd44bc23a344', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?blur=40&format=pjpg&auto=webp&s=2590c0552e3dfab5898c0d16a802facde57a00d9', 'width': 1200}}}}]}
PC Build for running Llm
1
[removed]
2023-07-25T03:41:40
https://www.reddit.com/r/LocalLLaMA/comments/158wn3k/pc_build_for_running_llm/
Any-Cobbler6161
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158wn3k
false
null
t3_158wn3k
/r/LocalLLaMA/comments/158wn3k/pc_build_for_running_llm/
false
false
self
1
null
What's the difference between the "context" fields under "Character" and "Instruction Template" tabs in text generation webui?
1
I'm a bit confused because the documentation didn't seem to cover this distinction very clearly, and the default examples of both seem rather similar. They both have the name "context" and seem to give direction to the model in a similar way. Should I use these two fields for different purposes? If I use one does that mean it's unnecessary to use the other? So far I've been putting nearly the same "You are such and such, and you are talking to the user about such and such" context in both fields, and it does the thing I want, but I'm wondering if putting nearly the same thing in both context fields is unnecessary or counterproductive.
2023-07-25T04:53:14
https://www.reddit.com/r/LocalLLaMA/comments/158y3k7/whats_the_difference_between_the_context_fields/
ascendant23
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158y3k7
false
null
t3_158y3k7
/r/LocalLLaMA/comments/158y3k7/whats_the_difference_between_the_context_fields/
false
false
self
1
null
How to use prompt templates?
1
I'm using the Oobabooga Web UI to run some models locally. The model cards on HF will often provide templates like this: <s>[INST] <<SYS>> {{ system_prompt }} <</SYS>> {{ user_message }} [/INST] However I am not sure how to read these templates and how to use them in Oobabooga. Is there some documentation which describes the meaning of parameters like <s> or \[INST\] etc. and how to use these templates? Thanks for any pointers!
2023-07-25T05:18:53
https://www.reddit.com/r/LocalLLaMA/comments/158yltk/how_to_use_prompt_templates/
andy_potato
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158yltk
false
null
t3_158yltk
/r/LocalLLaMA/comments/158yltk/how_to_use_prompt_templates/
false
false
self
1
null
dont get why my previous post was removed when i posted a question of my issue
1
[removed]
2023-07-25T05:49:10
https://www.reddit.com/r/LocalLLaMA/comments/158z6lg/dont_get_why_my_previous_post_was_removed_when_i/
imnotdone2020
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158z6lg
false
null
t3_158z6lg
/r/LocalLLaMA/comments/158z6lg/dont_get_why_my_previous_post_was_removed_when_i/
false
false
self
1
null
dont get why my previous post was removed
1
[removed]
2023-07-25T05:49:57
https://www.reddit.com/r/LocalLLaMA/comments/158z74s/dont_get_why_my_previous_post_was_removed/
imnotdone2020
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158z74s
false
null
t3_158z74s
/r/LocalLLaMA/comments/158z74s/dont_get_why_my_previous_post_was_removed/
false
false
self
1
null
help required!!
1
Hello I am new to this community. I have a few text files that i want to finetune LLaMa 2 7b with. I have previously finetuned BERT and GPT2 models before and that was fairly easy. But i have no idea when it comes to LLaMA. Any help would be appreciated.
2023-07-25T06:08:12
https://www.reddit.com/r/LocalLLaMA/comments/158zjjt/help_required/
aharneish
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
158zjjt
false
null
t3_158zjjt
/r/LocalLLaMA/comments/158zjjt/help_required/
false
false
self
1
null
Llama 2 based Guanaco and Airoboros 70B are a significant downgrade for fiction writing
1
My primary usage for these large models is in fiction writing, and I must say, I've noticed a significant downgrade in their performances compared to their llama 1 65b predecessors. The content they generate tends to be simplistic, not as comprehensive or extended as I'd like, even when I specifically prompt them to write lengthier pieces. This issue seems rather acute with Guanaco 70B. It's quite surprising and disappointing. It's left me wondering why this might be. I understand these models are made for general use and aren't necessarily optimized for fiction writing, but the disparity in performance is notable nonetheless. Has anyone else experienced similar issues?
2023-07-25T06:42:16
https://www.reddit.com/r/LocalLLaMA/comments/159064y/llama_2_based_guanaco_and_airoboros_70b_are_a/
Big_Communication353
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159064y
false
null
t3_159064y
/r/LocalLLaMA/comments/159064y/llama_2_based_guanaco_and_airoboros_70b_are_a/
false
false
self
1
null
Any cool projects built with https://github.com/karpathy/llama2.c ?
1
I stumbled across Karpathy's llama2 and wanted to play around for a bit. I am not sure what I can do though. Are there examples of cool projects I can draw inspiration from?
2023-07-25T07:23:42
https://www.reddit.com/r/LocalLLaMA/comments/1590ymj/any_cool_projects_built_with/
Soli__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1590ymj
false
null
t3_1590ymj
/r/LocalLLaMA/comments/1590ymj/any_cool_projects_built_with/
false
false
self
1
null
Is there an uncensored version of llama 2 chat 70b?
1
As per the title. Looking for an uncensored chat version of metas llama 2 chat 70b model.
2023-07-25T07:24:33
https://www.reddit.com/r/LocalLLaMA/comments/1590z63/is_there_an_uncensored_version_of_llama_2_chat_70b/
bumblebrunch
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1590z63
false
null
t3_1590z63
/r/LocalLLaMA/comments/1590z63/is_there_an_uncensored_version_of_llama_2_chat_70b/
false
false
self
1
null
error in downloading the model on kaggle
1
[removed]
2023-07-25T07:36:10
https://www.reddit.com/r/LocalLLaMA/comments/1591737/error_in_downloading_the_model_on_kaggle/
aharneish
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1591737
false
null
t3_1591737
/r/LocalLLaMA/comments/1591737/error_in_downloading_the_model_on_kaggle/
false
false
self
1
null
Compiling Llama2.C to WebAssembly for Cross Platform Lightweight Deployment
1
2023-07-25T09:03:46
https://medium.com/@michaelyuan_88928/running-llama2-c-in-wasmedge-15291795c470
smileymileycoin
medium.com
1970-01-01T00:00:00
0
{}
1592w0h
false
null
t3_1592w0h
/r/LocalLLaMA/comments/1592w0h/compiling_llama2c_to_webassembly_for_cross/
false
false
https://a.thumbs.redditm…dmVs5gWxT2B4.jpg
1
{'enabled': False, 'images': [{'id': 'oqFzJ_TjcgTVI9tJh3gAwRFpO1Cenh4NZK1vLhYNG5E', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/B3lrZQLh3HSjAmCWMiv412_rz2jE6NFRl_C2SStvSj8.jpg?width=108&crop=smart&auto=webp&s=af0e1de1d5d48483c9d28727d2d79f41201b2e64', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/B3lrZQLh3HSjAmCWMiv412_rz2jE6NFRl_C2SStvSj8.jpg?width=216&crop=smart&auto=webp&s=1aa2b6abf165a9e329f93c5164328f1d8c505948', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/B3lrZQLh3HSjAmCWMiv412_rz2jE6NFRl_C2SStvSj8.jpg?width=320&crop=smart&auto=webp&s=de26e3aba7f50891f81ac62a4bd74192b8d1c929', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/B3lrZQLh3HSjAmCWMiv412_rz2jE6NFRl_C2SStvSj8.jpg?width=640&crop=smart&auto=webp&s=f425bf0664ecbe2ba194c5707940137d2f120f79', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/B3lrZQLh3HSjAmCWMiv412_rz2jE6NFRl_C2SStvSj8.jpg?width=960&crop=smart&auto=webp&s=fd44a303c23bf5f4f888b1b439995b9532d4b989', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/B3lrZQLh3HSjAmCWMiv412_rz2jE6NFRl_C2SStvSj8.jpg?auto=webp&s=3c1e7e4611eaf689333e51ea8c2fc951b24f09f4', 'width': 1024}, 'variants': {}}]}
Local llm running hardware/services recommendation help!
1
My current laptop has only 16gb ram,Intel h series processor, navdia graphics! I run ggml model(.bin) using llama, I can run 7b and 13b models comfortable(a little slow). Please recommend me some hardware/services for running my local llm fast and comfortably(that can supports 40b+ models), that will not cost me a fortune.
2023-07-25T09:07:16
https://www.reddit.com/r/LocalLLaMA/comments/1592yk8/local_llm_running_hardwareservices_recommendation/
InternationalMap5278
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1592yk8
false
null
t3_1592yk8
/r/LocalLLaMA/comments/1592yk8/local_llm_running_hardwareservices_recommendation/
false
false
self
1
null
Evaluating models for Basedness
1
Hi Everyone, Has there been any work done in putting together a method of evaluating models for basedness? A based model: * Will argue any point you ask it to, adopting whatever ideological position you ask ("write me an essay arguing xyz"). * Will write stories about any topic, even things involving deviant sexual practices or other taboo topics. * Won't say "as a language model..." * Won't lecture you about ethics. * Won't interject political positions into its responses unless asked. I see various uncensored models that aim to do things like this, but I was wondering if there was some sort of comparison test. A friend mentioned that he saw some sort of rating (that placed llama2 with a very low score and vicuna very high or something like that) but I haven't seen any details and I haven't found much when googling.
2023-07-25T09:23:57
https://www.reddit.com/r/LocalLLaMA/comments/1593afc/evaluating_models_for_basedness/
mikieh976
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1593afc
false
null
t3_1593afc
/r/LocalLLaMA/comments/1593afc/evaluating_models_for_basedness/
false
false
self
1
null
Presenting MEDTEXT. Feel free to use it as you wish (cc-by-4.0)
91
https://huggingface.co/datasets/BI55/MedText Someone interested in finetuning Llama 1 or 2 on it? Could give us a potent and interesting home diagnosis tool before you go to the doctor, or to make doctors life easier and help them in their work. In short: It is a medical diagnosis dataset called Medtext. The dataset, which is randomly shuffled, contains over 1000 high-quality patient presentations along with diagnosis and treatments. It covers the 100 most common diseases and the 30 most common injuries that result in hospital visits, among many others. The data points range from mild to severe cases and are designed to ensure that the AI model acknowledges when it cannot answer confidently or when data is insufficient. The dataset also includes cases where symptoms might mislead an obvious diagnosis, emergency cases, injuries from crimes, STDs, and cases specific to infants, gynecology, urology, and genetics, among others. It also focuses on previous medical mishandling, drug abuse, overdose, and drug cross side effects. Furthermore, the dataset also provides data for textual analysis of various diagnostic tests like blood tests, ultrasound, CT, MRI, and X-ray examinations. Medtext is free to use and was categorized as 'textbook quality' by three different doctors during a quality check. The dataset ensures to emphasize that an AI can never replace a professional doctor but can only provide substitute analysis. It includes high quality presentation and diagnosis of, among others, the following, with multiple datapoints for each (5-10) INJURIES: * Sprains and strains * Fractures * Contusions (bruises) * Cuts and lacerations * Concussions * Burns * Dislocations * Abrasions (scrapes) * Whiplash injuries * Eye injuries * Puncture wounds * Bites and stings * Back injuries * Broken nose * Knee injuries * Ankle injuries * Shoulder injuries * Wrist injuries * Chest injuries * Head injuries DISEASES: * Acne * Allergies * Alzheimer's Disease * Anemia * Angina * Anxiety Disorders * Arthritis * Asthma * Atherosclerosis * Athlete's Foot * Attention Deficit Hyperactivity Disorder (ADHD) * Autism Spectrum Disorder * Back Pain * Bipolar Disorder * Bronchitis * Cataracts * Chickenpox * Chronic Obstructive Pulmonary Disease (COPD) * Common Cold * Conjunctivitis (Pink Eye) * Constipation * Coronary Heart Disease * Cystitis * Dementia * Depression * Diabetes Type 1 * Diabetes Type 2 * Diarrhea * Diverticulitis * Dizziness (Vertigo) * Ear Infections * Eczema * Endometriosis * Erectile Dysfunction * Fibromyalgia * Flu (Influenza) * Food Poisoning * Gallstones * Gastroenteritis * Gastroesophageal Reflux Disease (GERD) * Gout * Hay Fever (Allergic Rhinitis) * Headaches * Heart Failure * Hemorrhoids * Hepatitis B * Hepatitis C * Herpes Simplex Virus (HSV) * High Blood Pressure (Hypertension) * High Cholesterol (Hypercholesterolemia) * HIV/AIDS * Hyperthyroidism (Overactive Thyroid) * Hypothyroidism (Underactive Thyroid) * Inflammatory Bowel Disease (Including Crohn's and Ulcerative Colitis) * Insomnia * Iron Deficiency Anemia * Irritable Bowel Syndrome (IBS) * Kidney Stones * Lactose Intolerance * Lyme Disease * Macular Degeneration * Malaria * Menopause * Migraine * Multiple Sclerosis * Obesity * Osteoarthritis * Osteoporosis * Otitis Media (Middle Ear Infection) * Pancreatitis * Parkinson's Disease * Peptic Ulcers * Periodontal Disease * Pneumonia * Polycystic Ovary Syndrome (PCOS) * Prostate Enlargement (Benign Prostatic Hyperplasia) * Psoriasis * Pulmonary Embolism * Restless Legs Syndrome * Rheumatoid Arthritis * Rosacea * Schizophrenia * Sciatica * Scoliosis * Seasonal Affective Disorder (SAD) * Sinusitis * Skin Cancer * Sleep Apnea * Strokes * Tendonitis * Tonsillitis * Tuberculosis * Urinary Tract Infection (UTI) * Varicose Veins * Vitiligo * Yeast Infection (Candidiasis) * Zika Virus
2023-07-25T09:39:56
https://www.reddit.com/r/LocalLLaMA/comments/1593l46/presenting_medtext_feel_free_to_use_it_as_you/
BeginningInfluence55
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1593l46
false
null
t3_1593l46
/r/LocalLLaMA/comments/1593l46/presenting_medtext_feel_free_to_use_it_as_you/
false
false
self
91
{'enabled': False, 'images': [{'id': 'Vx3n1QOXXuzortvdxjPvaEvO6q_efvkhusgGA7DPHYk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/iClMQtnwgDgz0eaBLdGfC6MawmahWl2l5IsRDSpE1Tk.jpg?width=108&crop=smart&auto=webp&s=24c711f6a9567c949f1ea6901778747d2a37270b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/iClMQtnwgDgz0eaBLdGfC6MawmahWl2l5IsRDSpE1Tk.jpg?width=216&crop=smart&auto=webp&s=8932808900b86a48fe545db9b989918ff285a01e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/iClMQtnwgDgz0eaBLdGfC6MawmahWl2l5IsRDSpE1Tk.jpg?width=320&crop=smart&auto=webp&s=3bc4b13bf5adbc63b9f45dd0923a857984727350', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/iClMQtnwgDgz0eaBLdGfC6MawmahWl2l5IsRDSpE1Tk.jpg?width=640&crop=smart&auto=webp&s=8d0ca6eaae4db9ca17efdaaedda636477c5125e1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/iClMQtnwgDgz0eaBLdGfC6MawmahWl2l5IsRDSpE1Tk.jpg?width=960&crop=smart&auto=webp&s=6d1f31dc60bf9a7e08e5ba29b114b726ad4b1dd3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/iClMQtnwgDgz0eaBLdGfC6MawmahWl2l5IsRDSpE1Tk.jpg?width=1080&crop=smart&auto=webp&s=ea368e03877d327392849dcdf7b7984915ebfa1e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/iClMQtnwgDgz0eaBLdGfC6MawmahWl2l5IsRDSpE1Tk.jpg?auto=webp&s=d80228b71ac0e6da6a4771d22806f29b037de000', 'width': 1200}, 'variants': {}}]}
New Open Source LLM: GOAT-7B (SOTA among the 7B models)
1
&#x200B; https://preview.redd.it/hoyvbogjh3eb1.png?width=2500&format=png&auto=webp&s=13246afe453d0e164644221d79ed033e643ced85
2023-07-25T11:17:17
https://www.reddit.com/r/LocalLLaMA/comments/1595jtn/new_open_source_llm_goat7b_sota_among_the_7b/
rempact
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1595jtn
false
null
t3_1595jtn
/r/LocalLLaMA/comments/1595jtn/new_open_source_llm_goat7b_sota_among_the_7b/
false
false
https://b.thumbs.redditm…kVg-iHNrOivs.jpg
1
null
Best model to help with college application?
1
A computer programmer in gap year after high school. College Application coming up soon. Got a tons of essays to write. Want to tune a model to achieve this. Also, a model to help with finding college application resources and tips/guidelines. Any help? Am I being too ambitious? Also, My Spec: GTX 1650, or M1.
2023-07-25T11:44:50
https://www.reddit.com/r/LocalLLaMA/comments/15964qf/best_model_to_help_with_college_application/
goodFuckingBoy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15964qf
false
null
t3_15964qf
/r/LocalLLaMA/comments/15964qf/best_model_to_help_with_college_application/
false
false
self
1
null
How to get good performance with LLAMA-2 70B models on cheap AI server
1
[removed]
2023-07-25T11:53:58
https://www.reddit.com/r/LocalLLaMA/comments/1596bqx/how_to_get_good_performance_with_llama2_70b/
mikieh976
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1596bqx
false
null
t3_1596bqx
/r/LocalLLaMA/comments/1596bqx/how_to_get_good_performance_with_llama2_70b/
false
false
default
1
null
How to: summarization with 70B on a single 3090
1
This is a post for newcomers who want to feel out what kind of context and processing speed they can get for this given hardware. The perceived goal is to have many arvix papers in stored in prompt cache so we can ask many questions, summarize, and reason together with an LLM for as many sessions as needed. My setup is 32gb of DDR4 RAM (2x 16gb) sticks and a single 3090. 8k ------ I can do 8k with a good 4bit (70b q4_K_M) model at 1.5 t/s, **with fast 38t/s GPU prompt processing.** 16k ------ I will get up to 16k if I purchase another pair of 16GB ramsticks. (+32gb) For this setup, I'm expecting 1t/s with 50t/s I can run this at 0.15 t/s, but would want to buy the extra ram to free my gpu acting as ram, and get speed to 1t/s 32k ------ I OOM'd here, at 0 layers. my gpu's vram gradually filled up during prompt processing. It would be nice to find a way to prevent this, some papers are 30000 tokens. 64k ------ This may be at an impossible state rn with bad output quality. I assume more than 64gb ram will be needed. I've only assumed 32k is viable because llama-2 has double the context of llama-1 Tips: ------ If your new to the llama.cpp repo, here are some tips: - use `--prompt-cache` for summarization - use `-ngl` [best percentage] if you lack the RAM to hold your model - choose an acceleration optimization: openblas -> cpu only ; clblast -> amd ; rocm (fork) -> amd ; cublas -> nvidia You want an acceleration optimization for fast prompt processing. Note: Currently `--prompt-cache` does not work for 70b, or when using higher context. The idea is we want a prompt cache file for every arXiv paper to skip prompt gpu processing altogether on a re-run. Once it works, I guess it'll load instantly. You can then ask a variety of things and reload the session if you are on a different chain of thought, and do not want to mess up the current session. I have these settings for 70B 8k: `-ngl 35 --rope-freq-base 40000 -c 8196` There are extra flags needed for 70b, but this is what you can expect for 32GB RAM + 24GB VRAM. The processing of a 7k segment took 38 t/s, or ~3min. I get 1.5 t/s inference on a 70b q4_K_M model, which is the best known tradeoff between speed, output quality, and size. Thoughts: ------ - This can work with no gpu, If you cannot afford a gpu, you will have the same output quality. But every initial processing may take you a couple of hours for very large 30k context. Definitely take advantage of `--prompt-cache`. Be sure your desktop cpu can run the 7b at at-least 10t/s, maybe we could extrapolate your speed to be 1t/s on a 10x larger model. - It can work with smaller GPUs too, like 3060. vram build-up for prompt processing may only let you go to 8k on 12gb, but maybe the `-lv` (lowvram) option may help you go farther, like 12k. Don't offload layers, buy cpu RAM. - I don't know if alternatives like a vector database will make summarization more performant, it will definitely be cheaper, but it may miss bringing necessary information in-context - [Video](https://www.veed.io/view/d6d9a0db-f704-410b-ac68-48aaff414221?panel=share) showing the t/s to get a feel realtime - A [70b 8k fine-tuned model](https://old.reddit.com/r/LocalLLaMA/comments/158fydr/llongma2_13b_8k/) is said to be in the works which should increase summarization quality - I believe that the largest model will be best at interpreting context, based on the previous feedback from users here: that say 65B is a big leap in quality from 33b (If that gap no longer tangibly exists, I'd happily use 34b) - This setup is on Ubuntu, but there should be enough wiggle room to use Windows 10 - To me this is a more "ordinary" maxed desktop setup, which is why it's worth sharing the experience. The next step up is 2x3090 for 15 t/s, ? t/s prompt processing, and confirmed up to 16k context with everything fully in GPUs.
2023-07-25T12:06:32
https://www.reddit.com/r/LocalLLaMA/comments/1596m5z/how_to_summarization_with_70b_on_a_single_3090/
Aaaaaaaaaeeeee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1596m5z
false
null
t3_1596m5z
/r/LocalLLaMA/comments/1596m5z/how_to_summarization_with_70b_on_a_single_3090/
false
false
self
1
{'enabled': False, 'images': [{'id': '8V1JjqBtxV7SXgl5BmCQ77vxrgPOUPInfiT6pHh8fwI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/qaWpoap9TXNStY6gMyeYMtP_3oVyhfOZyEXMJJJva_U.jpg?width=108&crop=smart&auto=webp&s=73b99c9a4f73ed4afcd96622306d528710def281', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/qaWpoap9TXNStY6gMyeYMtP_3oVyhfOZyEXMJJJva_U.jpg?width=216&crop=smart&auto=webp&s=55cff3ee023ee9bb70cff3f838541034ac4161f9', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/qaWpoap9TXNStY6gMyeYMtP_3oVyhfOZyEXMJJJva_U.jpg?width=320&crop=smart&auto=webp&s=7dce4cb2d3dd1dd69846ecd216e440f86f20fe71', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/qaWpoap9TXNStY6gMyeYMtP_3oVyhfOZyEXMJJJva_U.jpg?width=640&crop=smart&auto=webp&s=9ab6fe041223b04c6800c763ae80a2b63e12dbf2', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/qaWpoap9TXNStY6gMyeYMtP_3oVyhfOZyEXMJJJva_U.jpg?width=960&crop=smart&auto=webp&s=5a84cdf3f9b311b1af9b708058e43b86b133114c', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/qaWpoap9TXNStY6gMyeYMtP_3oVyhfOZyEXMJJJva_U.jpg?width=1080&crop=smart&auto=webp&s=3681e8b679e418e908e14c6ff0caa04ce6d020d4', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/qaWpoap9TXNStY6gMyeYMtP_3oVyhfOZyEXMJJJva_U.jpg?auto=webp&s=5e1da3bb8cceea670e964e1a1ad99d09fe70c989', 'width': 1200}, 'variants': {}}]}
Need help with a prompt
1
I'm trying to come up with a prompt that will help me generate compact versions of the user input text. The input text itself can be a prompt. For example: Summarize the following chat between a support agent and a customer. Inlcude a subject and the main theme in bullet points. No matter what prompt I try with gpt-35-turbo, it's always generating a chat transcript rather than optimize the original text "Summarize the following chat between a support agent and a customer. Inlcude a subject and the main theme in bullet points." to something like "Write a summary for the following chat, include a subject as well".
2023-07-25T12:08:41
https://www.reddit.com/r/LocalLLaMA/comments/1596nrt/need_help_with_a_prompt/
krumb0y
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1596nrt
false
null
t3_1596nrt
/r/LocalLLaMA/comments/1596nrt/need_help_with_a_prompt/
false
false
self
1
null
Running llama2.c on budget Android
1
ERROR: type should be string, got "\n\nhttps://twitter.com/shxf0072/status/1683508670263595008?t=UKRUGRjCKsPHZca25k9TFg&s=19"
2023-07-25T13:20:15
https://v.redd.it/tzcqaoll34eb1
esharp007
v.redd.it
1970-01-01T00:00:00
0
{}
1598d6c
false
{'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/tzcqaoll34eb1/DASHPlaylist.mpd?a=1692883229%2CZGNiYWI0MmFjNGMxZWE1YjFlMmY0ZTg2ODgzM2FmZDBjZDkxYzA0YTc1MjljNzkzZGQ3ODQxN2NmZWZkODU5OA%3D%3D&v=1&f=sd', 'duration': 19, 'fallback_url': 'https://v.redd.it/tzcqaoll34eb1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/tzcqaoll34eb1/HLSPlaylist.m3u8?a=1692883229%2CODM1OTVkNDBlMDM5ZmUzNWZlNDdkMjhmNTJlNzJjMzMyNzg5ZmM4MmZhZTk0NGJmMmNhZjJhNDQ1NDg1ZTAyOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/tzcqaoll34eb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 486}}
t3_1598d6c
/r/LocalLLaMA/comments/1598d6c/running_llama2c_on_budget_android/
false
false
https://external-preview…6f1e62b8f5c95001
1
{'enabled': False, 'images': [{'id': 'YnhtcWtoZGwzNGViMbAnhlGNQArjLaImACO2SDQxMtnIMjJ7PWEDYq7KynAd', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/YnhtcWtoZGwzNGViMbAnhlGNQArjLaImACO2SDQxMtnIMjJ7PWEDYq7KynAd.png?width=108&crop=smart&format=pjpg&auto=webp&s=b036e9883a66425539d0246d4e19a89e6b9387c3', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/YnhtcWtoZGwzNGViMbAnhlGNQArjLaImACO2SDQxMtnIMjJ7PWEDYq7KynAd.png?width=216&crop=smart&format=pjpg&auto=webp&s=c943d2156e1c98e342419978e68d0517bb265072', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/YnhtcWtoZGwzNGViMbAnhlGNQArjLaImACO2SDQxMtnIMjJ7PWEDYq7KynAd.png?width=320&crop=smart&format=pjpg&auto=webp&s=fea18dac95d862356518b92b82821f5ffb5e902f', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/YnhtcWtoZGwzNGViMbAnhlGNQArjLaImACO2SDQxMtnIMjJ7PWEDYq7KynAd.png?width=640&crop=smart&format=pjpg&auto=webp&s=513217b0a7fd7e4a6ec0ce4e6d490f9bfd43d660', 'width': 640}], 'source': {'height': 1795, 'url': 'https://external-preview.redd.it/YnhtcWtoZGwzNGViMbAnhlGNQArjLaImACO2SDQxMtnIMjJ7PWEDYq7KynAd.png?format=pjpg&auto=webp&s=6b35929425f078057148281a90be306b52afb4ff', 'width': 807}, 'variants': {}}]}
Running llama.c on budget android
1
ERROR: type should be string, got "\n\nhttps://twitter.com/shxf0072/status/1683508670263595008?t=SY7uhspgdFIgyuJ-nOSsSQ&s=19\n"
2023-07-25T13:38:00
https://v.redd.it/kjy3dier64eb1
esharp007
/r/LocalLLaMA/comments/1598t2t/running_llamac_on_budget_android/
1970-01-01T00:00:00
0
{}
1598t2t
false
{'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/kjy3dier64eb1/DASHPlaylist.mpd?a=1692970898%2CMDYzNzRlODZlZGY4NTY1NTBjNzIwNTMzOWU1OTA3MmRhY2Q0NzU5NjMzYjdjMGI1ODhjYmVhM2MxNDdkODNjZg%3D%3D&v=1&f=sd', 'duration': 20, 'fallback_url': 'https://v.redd.it/kjy3dier64eb1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/kjy3dier64eb1/HLSPlaylist.m3u8?a=1692970898%2CMzY4ZDhjMzZjZmQzZDQ5Zjg5ZDQxYWM0ZmYxN2Y4MzFhZGJkYWJlNzhhODJjOTc0MzEwODUwN2E5MmEzNjU4Yw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/kjy3dier64eb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 486}}
t3_1598t2t
/r/LocalLLaMA/comments/1598t2t/running_llamac_on_budget_android/
false
false
https://external-preview…e609f00b49e81344
1
{'enabled': False, 'images': [{'id': 'cWM2djlyMXI2NGViMTHOgVkPahWvWomqh2t_HMUfJF2F8f76tt_cWNKCTYfC', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/cWM2djlyMXI2NGViMTHOgVkPahWvWomqh2t_HMUfJF2F8f76tt_cWNKCTYfC.png?width=108&crop=smart&format=pjpg&auto=webp&s=8784dead078888d6cd32221613dce8b9b7dfbb9e', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/cWM2djlyMXI2NGViMTHOgVkPahWvWomqh2t_HMUfJF2F8f76tt_cWNKCTYfC.png?width=216&crop=smart&format=pjpg&auto=webp&s=066367d8b89dd97cbe3b589cfac38213174ed88d', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/cWM2djlyMXI2NGViMTHOgVkPahWvWomqh2t_HMUfJF2F8f76tt_cWNKCTYfC.png?width=320&crop=smart&format=pjpg&auto=webp&s=cf405ff39e0869bc7884ebff9b24d6aefa4d34c4', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/cWM2djlyMXI2NGViMTHOgVkPahWvWomqh2t_HMUfJF2F8f76tt_cWNKCTYfC.png?width=640&crop=smart&format=pjpg&auto=webp&s=6c06623d6c60e8d98ff8e0ea3ec5579f70e08d6b', 'width': 640}], 'source': {'height': 1795, 'url': 'https://external-preview.redd.it/cWM2djlyMXI2NGViMTHOgVkPahWvWomqh2t_HMUfJF2F8f76tt_cWNKCTYfC.png?format=pjpg&auto=webp&s=2fd2afc13454228e5c65beecded757c08e576727', 'width': 807}, 'variants': {}}]}
How to do local document retrieve based on LLMs?
1
hi all, I have walked through the Transformer packages and langchain. there is one case that need to do a local document retrieve, i have read the source code of priaveGPT and gpt4all, but i want to know how should we do to do a comparation of these llms? for the approach that i think there are 2 ways: 1. base on langchain to build a vectorstore search based on the similarly of query and splitted sentences, then construct a prompt with a query and content yo let llms to.do text generation. 2. we use a question answering model to do extraction based on the content provided to answer the question. Please correct me if there is any wrong point. The question is how could we do this requirements to do local document retrieve in action? should we use solution 1 to use llms like llama 2? or use solution 2 to do question answer? i have Googled so many but couldn't get a.good solution. I havr also have one question, regarding to the document retrieve l, which task it belong to? text generation? question answer? I do have to say thanks in advance for your patience for the question that has bother me so long. Thanks.( ﹡ˆoˆ﹡ )
2023-07-25T13:40:21
https://www.reddit.com/r/LocalLLaMA/comments/1598v1h/how_to_do_local_document_retrieve_based_on_llms/
Ok_Bee_6447
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1598v1h
false
null
t3_1598v1h
/r/LocalLLaMA/comments/1598v1h/how_to_do_local_document_retrieve_based_on_llms/
false
false
self
1
null
Running GPT4ALL Model on GPU
1
Hi all i recently found out about GPT4ALL and new to world of LLMs they are doing a good work on making LLM run on CPU is it possible to make them run on GPU as now i have access to it i needed to run them on GPU as i tested on "ggml-model-gpt4all-falcon-q4\_0" it is too slow on 16gb RAM so i wanted to run on GPU to make it fast.
2023-07-25T14:02:03
https://www.reddit.com/r/LocalLLaMA/comments/1599ety/running_gpt4all_model_on_gpu/
teritump3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1599ety
false
null
t3_1599ety
/r/LocalLLaMA/comments/1599ety/running_gpt4all_model_on_gpu/
false
false
self
1
null
Need to summarize and analyze documents with sensitive information locally
1
I have a somewhat urgent need to analyze long documents (e.g., PDFs of trial transcripts) locally. I have an old 2016 MBP. My experiments with gpt4all models failed—I just don't apparently have the resources. If you were me, to accomplish my goal, what you you buy now (ideally it would be a laptop)? I'm sort of at a point where I can't wait much longer for the tech to improve for use on low-powered machines, nor can I wait for new hardware developments (as exciting as both those propositions are). Basically I'm looking for a short-medium term "good enough" solution. Any input appreciated!
2023-07-25T14:09:49
https://www.reddit.com/r/LocalLLaMA/comments/1599m5l/need_to_summarize_and_analyze_documents_with/
Hinged31
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1599m5l
false
null
t3_1599m5l
/r/LocalLLaMA/comments/1599m5l/need_to_summarize_and_analyze_documents_with/
false
false
self
1
null
Question: Option to run LLaMa and LLaMa2 on external hardware (GPU / Hard Drive)?
1
Hello guys! I want to run LLaMa2 and test it, but the system requirements are a bit demanding for my local machine. I have seen it requires around of 300GB of hard drive space which i currently don't have available and also 16GB of GPU VRAM, which is a bit more from what I currently have. I was wondering if you know of any solutions where I could use external hardware to install and run LLaMa2 from there. Don't mind paying with an hourly rate. Any feedback, links, guides etc. will help a lot! Thank you in advance!
2023-07-25T14:13:11
https://www.reddit.com/r/LocalLLaMA/comments/1599p7t/question_option_to_run_llama_and_llama2_on/
SiltoruzExarz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1599p7t
false
null
t3_1599p7t
/r/LocalLLaMA/comments/1599p7t/question_option_to_run_llama_and_llama2_on/
false
false
self
1
null
[HELP] It's there a way to make Llama 2 model generate text token by token or word by word like what ChatGPT does?
1
`pipeline`, or `model.generate` doesn't seems to support generate text token by token, instead, they will give you all the output text at once when it's finished. And I couldn't find anyway to doing it online using `pytorch` The code below is an example I used from [Llama-2 7B uncensored - QLoRA fine-tune on wizard\_vicuna\_70k\_unfiltered](https://www.reddit.com/r/LocalLLaMA/comments/154rqay/llama2_7b_uncensored_qlora_finetune_on_wizard/) from transformers import AutoTokenizer, pipeline, logging from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig import time model_name_or_path = "TheBloke/llama2_7b_chat_uncensored-GPTQ" model_basename = "gptq_model-4bit-128g" use_triton = False tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True, legacy=False) model = AutoGPTQForCausalLM.from_quantized(model_name_or_path, model_basename=model_basename, use_safetensors=True, trust_remote_code=True, device="cuda:0", use_triton=use_triton, quantize_config=None) """ To download from a specific branch, use the revision parameter, as in this example: model = AutoGPTQForCausalLM.from_quantized(model_name_or_path, revision="gptq-4bit-32g-actorder_True", model_basename=model_basename, use_safetensors=True, trust_remote_code=True, device="cuda:0", quantize_config=None) """ prompt = "Tell me about AI" prompt_template=f'''### HUMAN: {prompt} ### RESPONSE: ''' print("\n\n*** Generate:") start_time = time.time() input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) print(tokenizer.decode(output[0])) print(f"Inference time: {time.time() - start_time:.4f} seconds") # Inference can also be done using transformers' pipeline # Prevent printing spurious transformers error when using pipeline with AutoGPTQ logging.set_verbosity(logging.CRITICAL) print("*** Pipeline:") start_time = time.time() pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) print(pipe(prompt_template)[0]['generated_text']) print(f"Inference time: {time.time() - start_time:.4f} seconds") Thank you in advance :)
2023-07-25T14:41:33
https://www.reddit.com/r/LocalLLaMA/comments/159afuz/help_its_there_a_way_to_make_llama_2_model/
MrForExample
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159afuz
false
null
t3_159afuz
/r/LocalLLaMA/comments/159afuz/help_its_there_a_way_to_make_llama_2_model/
false
false
self
1
null
To those using oobabooga, how exactly do you use it to write fiction ? I am just using it as a normal chat bot, how do you guys get it to function as a story writer ?
1
Curious, thanks.
2023-07-25T14:46:37
https://www.reddit.com/r/LocalLLaMA/comments/159akn0/to_those_using_oobabooga_how_exactly_do_you_use/
Vitamin_C_is_awesome
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159akn0
false
null
t3_159akn0
/r/LocalLLaMA/comments/159akn0/to_those_using_oobabooga_how_exactly_do_you_use/
false
false
self
1
null
Running Llama 2 on GPU
1
[removed]
2023-07-25T14:56:36
https://www.reddit.com/r/LocalLLaMA/comments/159au3w/running_llama_2_on_gpu/
pc7ayd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159au3w
false
null
t3_159au3w
/r/LocalLLaMA/comments/159au3w/running_llama_2_on_gpu/
false
false
self
1
null
Why have a large context length if max_token_length stops it?
1
Ok this might be a very noob question, but I can't find the answer anywhere. When you are using a hugging face model with transformers, it seems like always the max\_token\_length is something like 512, but the context of the model is like 2048 or something like this. Also, everyone wants to increase context length. What is the point of having all that context size if you can only put in 512 tokens at a time? I know I'm missing something, but I can't find this anywhere, any help would be greatly appreciated.
2023-07-25T15:21:02
https://www.reddit.com/r/LocalLLaMA/comments/159bi4z/why_have_a_large_context_length_if_max_token/
morecontextplz1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159bi4z
false
null
t3_159bi4z
/r/LocalLLaMA/comments/159bi4z/why_have_a_large_context_length_if_max_token/
false
false
self
1
null
Official WizardLM-13B-V1.2 Released! Trained from Llama-2! Can Achieve 89.17% on AlpacaEval!
1
* Today, the ***WizardLM Team*** has released their **Official** **WizardLM-13B-V1.2** model trained from Llama-2 with brand-new Evol+ methods! * Paper: [https://arxiv.org/abs/2304.12244](https://arxiv.org/abs/2304.12244) * The project repo: [WizardLM](https://github.com/nlpxucan/WizardLM/tree/main) * The official Twitter: [WizardLM\_AI](https://twitter.com/WizardLM_AI) * Twitter status: [https://twitter.com/WizardLM\_AI/status/1669109414559911937](https://twitter.com/WizardLM_AI/status/1669109414559911937) * HF Model: [WizardLM/WizardLM-13B-V1.2](https://huggingface.co/WizardLM/WizardLM-13B-V1.2) * Online demo links: 1. [https://b7a19878988c8c73.gradio.app/](https://b7a19878988c8c73.gradio.app/) 2. [https://d0a37a76e0ac4b52.gradio.app/](https://d0a37a76e0ac4b52.gradio.app/) (We will update the demo links in our [github](https://github.com/nlpxucan/WizardLM/tree/main).) **WizardLM-13B-V1.2 achieves:** 1. 7.06 on MT-Bench (V1.1 is 6.74) 2. 🔥 **89.17% on Alpaca Eval (V1.1 is** **86.32%**, **ChatGPT is 86.09%)** 3. 101.4% on WizardLM Eval (V1.1 is 99.3%**,** Chatgpt is 100%) https://preview.redd.it/eb0pdan0o4eb1.jpg?width=1345&format=pjpg&auto=webp&s=9f19c1907a56351619c7a769d5ebb2572bfb8723 https://preview.redd.it/95ybnfk1o4eb1.png?width=1532&format=png&auto=webp&s=f03a8a0d317655313ed6a9acfc8311cbf284513c &#x200B;
2023-07-25T15:24:10
https://www.reddit.com/r/LocalLLaMA/comments/159bl45/official_wizardlm13bv12_released_trained_from/
cylaw01
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159bl45
false
null
t3_159bl45
/r/LocalLLaMA/comments/159bl45/official_wizardlm13bv12_released_trained_from/
false
false
https://b.thumbs.redditm…VRTCV1LO2ORQ.jpg
1
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
Looking help for Fine tuning / Pre training with with Lora
1
I am looking to fine tune / pre-train Llama based models. I tried using axolotl and some other libraries but did not get satisfactory results for instruction based fine tune. I know some of have questions about size of data , my record size is around 100K distributed instructions . I can generate more data if required.
2023-07-25T15:41:27
https://www.reddit.com/r/LocalLLaMA/comments/159c1z3/looking_help_for_fine_tuning_pre_training_with/
data_dungen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159c1z3
false
null
t3_159c1z3
/r/LocalLLaMA/comments/159c1z3/looking_help_for_fine_tuning_pre_training_with/
false
false
self
1
null
Looking for an Uncensored LLM Service to Try Online (Free or Paid)
1
Hello everyone! I'm on the hunt for an online LLM service that I can try directly in my browser. Unfortunately, my computer is quite slow, and I can't even test the slowest models available. I'm particularly interested in finding a service that's uncensored. If anyone could share their experiences with services that meet these criteria or point me in the right direction to find a list of options, I would greatly appreciate it! Thanks in advance for your help and shared knowledge!
2023-07-25T15:50:00
https://www.reddit.com/r/LocalLLaMA/comments/159cajz/looking_for_an_uncensored_llm_service_to_try/
CryptoNarco
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159cajz
false
null
t3_159cajz
/r/LocalLLaMA/comments/159cajz/looking_for_an_uncensored_llm_service_to_try/
false
false
self
1
null
Question about multiple sources with vector embeddings & local LLM.
1
Hello! I have been dabbling with LLMs for the past month and have been working on mini prototypes to improve my daily productivity. I have a couple of questions on how I can build this into a stable system where I can leave it running 24/7 on it's own. Here's some context: As a Software Engineer, I am able to access and manipulate Slack conversations, Confluence documentation, JIRA ticket, and other respective sources. I currently am using OpenAI embeddings for testing, but I'm incrementally transitioning to my local machine for data privacy. (I'm currently on RTX 3070, I know it's not the best, but it works with Wizard-Vicuna 30B model so far. I love to start using Llama2 70B, but I'll work with what I have till I reached my software limits and when I can justify the cost of it) Here's the requirement that I'm trying to achieve along with the question for each of them. 1. Queries to the data source should stay local and private within my own server (OpenAI is not a good idea) - Vector embeddings should be private for data privacy compliancy of course. (I can do this with `all-MiniLM-L6-v2` it seems. But how would you guys go about this exactly? Do I create multiple vectorDB collections and store them corresponding? (E.g. SlackCollection, ConfluenceCollection, XCollection). If I were to do this, how exactly do I indicate which collection to query from? 2. I should be able to query/chat with the LLM and search/request for data from different datasources based on my query. - Currently, I'm using LlamaIndex (GPTIndex) to achieve what I'm doing with a single source of data (Confluence). I'm looking to move this towards LangChain as I feel that LlamaIndex seems limiting with regards to using private embeddings and localLLMs. I may be wrong, but I'd love to hear from you. - I'd want to move to LangChain because I'd love to utilize agents (I hope I understand the concept right) and it definitely looks to have a wider integration with other libraries too. Agents would allow to me do the following I believe. ``` Do the following step by step. Search {content} from SlackCollection Search {content} from ConfluenceCollection Summarize the results and cite the sources before returning the results. ``` 3. The vector embeddings should stay up to date. Freshness can be up to how I define it (E.g. daily). - I can set up a cronjob to scrape respective Slack threads, confluence docs, etc. However, is there a way to incrementally update these embeddings? Right now, I'm doing the whole process from scratch every time. I'd love to speed up the process if possible. Additional Questions: 4. Should I use an Instruct Model or Chat Model? I feel like I'd should use a chat model since I do want memory capability in the LLM. 4. Lastly, I don't plan to fine-tune my model as I don't see a need to have such a specific model. I would consider it in the future, but I don't have the expertise to know exactly what to fine-tune, or what's the best approach either. OpenAIEmbeddings with GPT3.5 is a powerhouse frankly. But I'd love to hear if there's any thoughts on this. Thanks for taking the time to read this!
2023-07-25T15:54:23
https://www.reddit.com/r/LocalLLaMA/comments/159cex0/question_about_multiple_sources_with_vector/
pickandmix222
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159cex0
false
null
t3_159cex0
/r/LocalLLaMA/comments/159cex0/question_about_multiple_sources_with_vector/
false
false
self
1
null
How to fine tune llama2?
1
Trying to fine tune llama2, having no success. Fastchat is not working for me. Also, can we use the same code for llama2 to fine tune llongma2?
2023-07-25T16:02:24
https://www.reddit.com/r/LocalLLaMA/comments/159cn1s/how_to_fine_tune_llama2/
ZealousidealBlock330
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159cn1s
false
null
t3_159cn1s
/r/LocalLLaMA/comments/159cn1s/how_to_fine_tune_llama2/
false
false
self
1
null
I'm an idiot - make sure you download the actual .bin...
1
Recently tried to run local LLM. Followed a couple of guides and got llama.cpp installed, but couldn't make it run any .bin files from HuggingFace. Figured I just had an issue with my system (Intel Mac) but couldn't solve. Kept getting 'Are you sure this is a bin' error from llama.cpp. After some clicking around Github I thought I'd just download a .bin file manually rather than clone the repository like I had been doing (which basically was just downloading the filenames...). Lo and behold after several GB download I can now run the model. Hope that helps any other newbs!
2023-07-25T16:03:41
https://www.reddit.com/r/LocalLLaMA/comments/159codo/im_an_idiot_make_sure_you_download_the_actual_bin/
etsatlo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159codo
false
null
t3_159codo
/r/LocalLLaMA/comments/159codo/im_an_idiot_make_sure_you_download_the_actual_bin/
false
false
self
1
null
New badass model OpenAssistant/llama2-13b-orca-8k released 🎉
1
&#x200B; ## Model Description This model is a fine-tuning of Meta's Llama2 13B model with 8K context size on a long-conversation variant of the Dolphin dataset ([**orca-chat**](https://huggingface.co/datasets/shahules786/orca-chat)). Note: **At least Huggingface Transformers** [**4.31.0**](https://pypi.org/project/transformers/4.31.0/) **is required to load this model!** ## Usage import torch from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("OpenAssistant/llama2-13b-orca-8k-3319", use_fast=False) model = AutoModelForCausalLM.from_pretrained("OpenAssistant/llama2-13b-orca-8k-3319", torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto") system_message = "You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information." user_prompt = "Write me a poem please" prompt = f"""<|system|>{system_message}</s><|prompter|>{user_prompt}</s><|assistant|>""" inputs = tokenizer(prompt, return_tensors="pt").to("cuda") output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=256) print(tokenizer.decode(output[0], skip_special_tokens=True)) ## Model Details * base model: [**meta-llama/Llama-2-7b**](https://huggingface.co/meta-llama/Llama-2-7b) * License: [**Llama 2 Community License Agreement**](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) * sampling report: [**2023-07-25\_OpenAssistant\_llama2-13b-orca-8k-3319\_sampling\_llama2\_prompt.json**](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-pretrained%2F2023-07-25_OpenAssistant_llama2-13b-orca-8k-3319_sampling_llama2_prompt.json) * wandb: [**public-sft/runs/2jfazjt9**](https://wandb.ai/open-assistant/public-sft/runs/2jfazjt9) * checkpoint: 3319 steps * datatpye: fp16 * sponsored by: [**Redmond.ai**](https://redmond.ai/) ## Long context (RoPE Scaling) This model was fine-tuned with a context size of 8192 tokens using linear scaling of RoPE embeddings. This feature was recently added to [**Huggingface transformers**](https://github.com/huggingface/transformers/). Before loading this model please make sure HF transformers >=4.31.0 is installed (pip install transformers>=4.31.0 ). ## Conversation Template For the initial response use (e.g. the [**llama2 default system prompt**](https://github.com/facebookresearch/llama/blob/6c7fe276574e78057f917549435a2554000a876d/llama/generation.py#L46) works well): <|system|>system message</s><|prompter|>user prompt</s><|assistant|> For multi-turn conversations use: <|system|>system message</s><|prompter|>Q1</s><|assistant|>A1</s><|prompter|>Q2</s><|assistant|> The model was trained with the following 15 system messages used to generate the training examples (see [**ORCA paper**](https://arxiv.org/abs/2306.02707)): 1. You are an AI assistant. Provide a detailed answer so user don’t need to search outside to understand the answer. 2. You are an AI assistant. You will be given a task. You must generate a detailed and long answer. 3. You are a helpful assistant, who always provide explanation. Think like you are answering to a five year old. 4. You are an AI assistant that follows instruction extremely well. Help as much as you can. 5. You are an AI assistant that helps people find information. Provide a detailed answer so user don’t need to search outside to understand the answer. 6. You are an AI assistant. User will you give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps. 7. You should describe the task and explain your answer. While answering a multiple choice question, first output the correct answer(s). Then explain why other answers are wrong. Think like you are answering to a five year old. 8. Explain how you used the definition to come up with the answer. 9. You are an AI assistant. You should describe the task and explain your answer. While answering a multiple choice question, first output the correct answer(s). Then explain why other answers are wrong. You might need to use additional knowledge to answer the question. 10. You are an AI assistant that helps people find information. User will you give you a question. Your task is to answer as faithfully as you can. While answering think step-by- step and justify your answer. 11. User will you give you a task with some instruction. Your job is follow the instructions as faithfully as you can. While answering think step-by-step and justify your answer. 12. You are a teacher. Given a task, you explain in simple steps what the task is asking, any guidelines it provides and how to use those guidelines to find the answer. 13. You are an AI assistant, who knows every language and how to translate one language to another. Given a task, you explain in simple steps what the task is asking, any guidelines that it provides. You solve the task and show how you used the guidelines to solve the task. 14. Given a definition of a task and a sample input, break the definition into small parts. Each of those parts will have some instruction. Explain their meaning by showing an example that meets the criteria in the instruction. Use the following format: Part #: a key part of the definition. Usage: Sample response that meets the criteria from the key part. Explain why you think it meets the criteria. 15. You are an AI assistant that helps people find information. ## Datasets: Orca-Chat/Dolphin, RedPajama1T & FanFics This model was trained on: * [**shahules786/orca-chat**](https://huggingface.co/datasets/shahules786/orca-chat) * [**togethercomputer/RedPajama-Data-1T-Sample**](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) * [**atom-in-the-universe/fanfics-10k-50k**](https://huggingface.co/datasets/atom-in-the-universe/fanfics-10k-50k) &#8203; Dataset Composition: Tain (sampled): orca-chat: 188842 (100%) fanfics: 47760 (100%) red_pajama: 188262 (25%) Valid: orca-chat: 5000 fanfics: 1000 red_pajama: 1000 The dataset [**shahules786/orca-chat**](https://huggingface.co/datasets/shahules786/orca-chat) combines similar examples of the GPT-4 subset of [**ehartford/dolphin**](https://huggingface.co/datasets/ehartford/dolphin) to form longer conversations to improve long-context training. Additionally, RedPajama and FanFics were used for classic language modelling as an auxiliary task to improve the RoPE scaling for the 8k context size. ## Model Configuration llama2_13b_orca_8k: rng_seed: 0xe1291f1a use_custom_sampler: true sort_by_length: false dtype: fp16 log_dir: "llama2_log_13b_orca_8k" learning_rate: 1e-5 model_name: /mnt/data/llama2/Llama-2-13b-hf/ output_dir: llama2_13b_orca_8k deepspeed_config: configs/zero_config_pretrain.json weight_decay: 0.0 max_length: 8192 warmup_steps: 100 use_flash_attention: true gradient_checkpointing: true gradient_accumulation_steps: 8 per_device_train_batch_size: 2 per_device_eval_batch_size: 1 residual_dropout: 0.0 eval_steps: 200 save_steps: 1000 # (total steps: 3319) num_train_epochs: 1 save_total_limit: 4 superhot: true superhot_config: type: linear scale: 2 datasets: - orca-chat: max_val_set: 5000 - fanfics: max_chunk_size: 65535 max_val_set: 1000 - red_pajama: fraction: 0.25 max_val_set: 1000 max_chunk_size: 65535 peft_model: false # Source [OpenAssistant/llama2-13b-orca-8k-3319 · Hugging Face](https://huggingface.co/OpenAssistant/llama2-13b-orca-8k-3319) [TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ · Hugging Face](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ) [TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GGML · Hugging Face](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GGML) &#x200B;
2023-07-25T16:35:44
https://www.reddit.com/r/LocalLLaMA/comments/159djux/new_badass_model_openassistantllama213borca8k/
FHSenpai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159djux
false
null
t3_159djux
/r/LocalLLaMA/comments/159djux/new_badass_model_openassistantllama213borca8k/
false
false
self
1
{'enabled': False, 'images': [{'id': 'JcWKAsMJSrQCtKAMMNiBWwZ6NOyhP8a-oTgAM2i1iyw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kSwCc1XBagltRuXW2qsoYpNHcNw3QDPgVHp_yBAgzHg.jpg?width=108&crop=smart&auto=webp&s=ebee6ec6f35f7ebd7a4a8851e017ecad97cba431', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/kSwCc1XBagltRuXW2qsoYpNHcNw3QDPgVHp_yBAgzHg.jpg?width=216&crop=smart&auto=webp&s=b6b414a55b89624206731e951cfc11c1c6a470f6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/kSwCc1XBagltRuXW2qsoYpNHcNw3QDPgVHp_yBAgzHg.jpg?width=320&crop=smart&auto=webp&s=5e908da23d0503b284be957a67bd8e3e93e3a911', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/kSwCc1XBagltRuXW2qsoYpNHcNw3QDPgVHp_yBAgzHg.jpg?width=640&crop=smart&auto=webp&s=b06e012c34b2b107c9b2d7748535af466d27ec26', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/kSwCc1XBagltRuXW2qsoYpNHcNw3QDPgVHp_yBAgzHg.jpg?width=960&crop=smart&auto=webp&s=d1eed7ba0a237eb4ee0ac43ba1fb602aba035945', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/kSwCc1XBagltRuXW2qsoYpNHcNw3QDPgVHp_yBAgzHg.jpg?width=1080&crop=smart&auto=webp&s=b1041ce76eaeed148baba42a599eb1df562ae9e9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/kSwCc1XBagltRuXW2qsoYpNHcNw3QDPgVHp_yBAgzHg.jpg?auto=webp&s=32de8271fa96d2c3b0919d2394be31cbecf94f2b', 'width': 1200}, 'variants': {}}]}
Currently what is the best 7-13B model for code generation
1
Looking for something with long context length, I wanna loD whole project into it. Is it still wizardcoder or falcon based ones? Any code fine tune for llama2?
2023-07-25T16:41:16
https://www.reddit.com/r/LocalLLaMA/comments/159dp7l/currently_what_is_the_best_713b_model_for_code/
Voxandr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159dp7l
false
null
t3_159dp7l
/r/LocalLLaMA/comments/159dp7l/currently_what_is_the_best_713b_model_for_code/
false
false
self
1
null
(Orca mini) unable to access saved chats/threads
1
Hi all. Installed orca mini. On close, program saves to disk. Upon reopening, I see the name of the chat. When clicked on, nothing happens. When asked, program is incapable of accessing. Files exist as .chat files. I can’t find the actual app to associate it with the file. Running a pc windows 11 home. Not much experience with pcs, sorry if it’s a simple solution.
2023-07-25T16:45:26
https://www.reddit.com/r/LocalLLaMA/comments/159dtci/orca_mini_unable_to_access_saved_chatsthreads/
xdiox66
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159dtci
false
null
t3_159dtci
/r/LocalLLaMA/comments/159dtci/orca_mini_unable_to_access_saved_chatsthreads/
false
false
self
1
null
Finetune LLM for Legal tasks
1
Ehi I am trying to build a chatbot able to perform open book Q&A on Italian Law. Semantic retrieval with some simple prompt engineering works well enough just sometimes. Other times instead, it is required some domain knowledge that is very difficult to insert in the prompt. For example, If a question is about a "contract for real estate with a state agency", it requires not only the specific relevant laws, but also general knowledge about contracts, real estate, and state agencies. GPT4 does not know it. If this itself does not fit in the context (it's whole sections of a book concerning private law), understanding each of these concepts might require general knowledge on even more topics, and so on, growing exponentially. I am wondering how finetuning an LLM might help my case. I can spend around 1k€, that is \~300 hours of A100 gpu, to build a decent prototype, plus more OpenAI credits. I get that the final chain has to be GPT4 since is so much better than everything else, but maybe I can use a fine-tuned model for some of the intermediate calls of my chains. I am thinking about Falcon-40B since it has been trained on more italian data than LLama2. Will a QLoRa fine-tuned model get some of the "general domain knowledge" I need it to know if trained with proper data? I would of course continue using retrieval and some smart prompt engineering to insert the specific relevant laws and judjments in the prompt. &#x200B;
2023-07-25T16:55:11
https://www.reddit.com/r/LocalLLaMA/comments/159e2z5/finetune_llm_for_legal_tasks/
EnnioEvo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159e2z5
false
null
t3_159e2z5
/r/LocalLLaMA/comments/159e2z5/finetune_llm_for_legal_tasks/
false
false
self
1
null
What are VRAM requirements for QLoRA Finetuning?
1
I want to do some QLoRA finetuning on custom datasets. While an enterprise-grade cluster with 8 H100s or something would of course be amazing, I don't have those kind of resources available. I'm looking to see what might be possible at different VRAM levels, and I thought this community is probably the best place to ask. As far as I can tell, the 14B or less models can all be fairly easily finetuned on a 24GB GPU like an RTX 3090, but I want to see about higher parameter models. I have seen some posts on this subreddit about [33B QLoRA finetunes on a 24GB GPU](https://www.reddit.com/r/LocalLLaMA/comments/13tz14v/how_to_qlora_33b_model_on_a_gpu_with_24gb_of_vram/) and two posts about struggles to [finetune MPT-30B](https://www.reddit.com/r/LocalLLaMA/comments/14jf5xk/airoboros_mpt30b_qlora_mostly_successful/) (which seemed to run in to issues not necessarily because of VRAM, [but rather because MPT was still new at that point.](https://www.reddit.com/r/LocalLLaMA/comments/14n3rfv/mpt30b_qlora_on_24_gb_vram/)) So now that Llama 2 is out with a 70B parameter, and Falcon has a 40B and Llama 1 and MPT have around 30-35B, I'm curious to hear some of your experiences about VRAM usage for finetuning. I imagine some of you have done QLoRA finetunes on an RTX 3090, or perhaps on a pair for them. I'm also hoping that some of you have experience with other higher VRAM GPUs, like the A5000 and maybe even the "old" cards like the P40. So please, share your experiences and VRAM usage with QLoRA finetunes on models with 30B or more parameters.
2023-07-25T18:07:47
https://www.reddit.com/r/LocalLLaMA/comments/159g3hy/what_are_vram_requirements_for_qlora_finetuning/
ResearchTLDR
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159g3hy
false
null
t3_159g3hy
/r/LocalLLaMA/comments/159g3hy/what_are_vram_requirements_for_qlora_finetuning/
false
false
self
1
null
Best options for running LLama locally with AMD GPU on windows (Question)
1
Hi all, I've got an AMD gpu (6700xt) and it won't work with pytorch since CUDA is not available with AMD. A couple general questions: 1. I've got an AMD cpu, the 5800x3d, is it possible to offload and run it entirely on the CPU? I can't imagine the performance is going to be great with this option... 2. Is there some sort of work around? I've looked at ROCm 5.x, but from what I can tell it is linux only. I'd rather not dual boot my pc into linux and windows if I don't have to. 3. Side question, does anyone have an example notebook or code where they are running on an AMD gpu on windows locally? I've looked but the trails lead to google collab notebooks and running on linux machines. Any help would be greatly appreciated. Still pretty new to actually implementing LLMs.
2023-07-25T19:26:34
https://www.reddit.com/r/LocalLLaMA/comments/159i9v9/best_options_for_running_llama_locally_with_amd/
oaky180
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159i9v9
false
null
t3_159i9v9
/r/LocalLLaMA/comments/159i9v9/best_options_for_running_llama_locally_with_amd/
false
false
self
1
null
Nous- Hermes & Puffin (13b) having opposite opinions
1
I was testing some models with random questions I had to see differences, and I've found a curious difference: When you as how you should defrost a frozen meal (in a glass container), they both prefer different approaches: Hermes --> cold water, slow defrost: Less bacteria growth Puffin--> hot water, quick defrost: less bacteria growth Granted the fine tuning methods are fairly different (300k gpt4 vs 3k human+gpt4 ), but given they are based on the same model, and "with" the same other model, I would have expected a simple question like this to be the same. Does anyone know what (specifically) can cause this? seems like an odd thing to take an opposite stance on.
2023-07-25T19:41:36
https://www.reddit.com/r/LocalLLaMA/comments/159iodz/nous_hermes_puffin_13b_having_opposite_opinions/
leschnoid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159iodz
false
null
t3_159iodz
/r/LocalLLaMA/comments/159iodz/nous_hermes_puffin_13b_having_opposite_opinions/
false
false
self
1
null
1500gb ram and 2x xeon e5-2680 v4 enough for cpu only in oobabooga?
1
Hi all, i was able to obtain a hpe proliant g10 with 1500gb ram and 2x xeon e5-2680 v4. is that enough to get an answer from llms without dying in front of the sreen from the waiting time for each token? i dont expect the speed as i get from my now setup with rtx 3060 12gb with exllama but im curious and wont be able to check it until weekend because time.
2023-07-25T19:45:26
https://www.reddit.com/r/LocalLLaMA/comments/159is9e/1500gb_ram_and_2x_xeon_e52680_v4_enough_for_cpu/
Plums_Raider
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159is9e
false
null
t3_159is9e
/r/LocalLLaMA/comments/159is9e/1500gb_ram_and_2x_xeon_e52680_v4_enough_for_cpu/
false
false
self
1
null
hpe proliant g10 with 1500gb ram and 2x xeon gold 6226r enough for proper cpu only?
1
Hi all, i was able to obtain a hpe proliant g10 with 1500gb ram and 2x xeon gold 6226r. is that enough to get an answer from llms without dying in front of the sreen from the waiting time for each token? i dont expect the speed as i get from my now setup with rtx 3060 12gb with exllama but im curious and wont be able to check it until weekend because time.
2023-07-25T19:55:36
https://www.reddit.com/r/LocalLLaMA/comments/159j2vk/hpe_proliant_g10_with_1500gb_ram_and_2x_xeon_gold/
Plums_Raider
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159j2vk
false
null
t3_159j2vk
/r/LocalLLaMA/comments/159j2vk/hpe_proliant_g10_with_1500gb_ram_and_2x_xeon_gold/
false
false
self
1
null
How to prompt llama.c
1
Okay so I tried the llama.c…the first thing to run above 0.1 t/s on my pc Yayyy Now with the provided model it doesn’t allow any kind of prompting So I tried to fiddle with it and since by default it uses a single token to begin I added a start word And this is how it turned out Then I tried to make it work with prompts of more than one word Now the thing is I don’t really understand much about LLMs C even less So after a few segmentation faults here and there and a headache I’m going to leave it here If anyone has any idea how to do this or if it has already been done please do tell Good night
2023-07-25T20:03:26
https://www.reddit.com/gallery/159jaym
Former_Apple
reddit.com
1970-01-01T00:00:00
0
{}
159jaym
false
null
t3_159jaym
/r/LocalLLaMA/comments/159jaym/how_to_prompt_llamac/
false
false
https://b.thumbs.redditm…ISbmnZIkrkFA.jpg
1
null
Llama-2-70b-Guanaco-QLoRA becomes the first model on the Open LLM Leaderboard to beat gpt3.5's MMLU benchmark
1
[https://huggingface.co/spaces/HuggingFaceH4/open\_llm\_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) https://preview.redd.it/wq4vow0wc6eb1.png?width=1457&format=png&auto=webp&s=bcf22b0c57513a31bfb7e4c85baa8df2d6986e93 https://preview.redd.it/g37tow0wc6eb1.png?width=1455&format=png&auto=webp&s=a75af69bffc9295be92a7f3b5306778fcdb7ebe7 The current gpt comparison for each Open LLM leaderboard benchmark are Average - Llama 2 finetunes are nearly equal to gpt 3.5 ARC - Open source models are still far behind gpt 3.5 HellaSwag - Around 12 models on the leaderboard beat gpt 3.5, but are decently far behind gpt 4 MMLU - 1 model barely beats gpt 3.5 TruthfulQA - Around 130 models beat gpt 3.5, and currently 2 models beat gpt 4 Is MMLU still seen as the best of the four benchmarks? Also, why are open source models still so far behind when it comes to ARC?
2023-07-25T21:10:23
https://www.reddit.com/r/LocalLLaMA/comments/159l6f3/llama270bguanacoqlora_becomes_the_first_model_on/
DontPlanToEnd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159l6f3
false
null
t3_159l6f3
/r/LocalLLaMA/comments/159l6f3/llama270bguanacoqlora_becomes_the_first_model_on/
false
false
https://b.thumbs.redditm…L6oG_kpqz9xU.jpg
1
null
Llama-2-70b-Guanaco-QLoRA becomes the first model on the Open LLM Leaderboard to beat gpt3.5's MMLU benchmark
1
[https://huggingface.co/spaces/HuggingFaceH4/open\_llm\_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) https://preview.redd.it/wq4vow0wc6eb1.png?width=1457&format=png&auto=webp&s=bcf22b0c57513a31bfb7e4c85baa8df2d6986e93 https://preview.redd.it/g37tow0wc6eb1.png?width=1455&format=png&auto=webp&s=a75af69bffc9295be92a7f3b5306778fcdb7ebe7 The current gpt comparison for each Open LLM leaderboard benchmark are Average - Llama 2 finetunes are nearly equal to gpt 3.5 ARC - Open source models are still far behind gpt 3.5 HellaSwag - Around 12 models on the leaderboard beat gpt 3.5, but are decently far behind gpt 4 MMLU - 1 model barely beats gpt 3.5 TruthfulQA - Around 130 models beat gpt 3.5, and currently 2 models beat gpt 4 Is MMLU still seen as the best of the four benchmarks? Also, why are open source models still so far behind when it comes to ARC?
2023-07-25T21:14:07
https://www.reddit.com/r/LocalLLaMA/comments/159l9ll/llama270bguanacoqlora_becomes_the_first_model_on/
DontPlanToEnd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159l9ll
false
null
t3_159l9ll
/r/LocalLLaMA/comments/159l9ll/llama270bguanacoqlora_becomes_the_first_model_on/
false
false
https://a.thumbs.redditm…e3IovQf0l8F4.jpg
1
{'enabled': False, 'images': [{'id': 'EN0-abblERL52DxeoNzcxdkhvXEwLdZMJTS58Umjs64', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=108&crop=smart&auto=webp&s=6fbb309f983333cbaf528bd40f8d6ffb39877704', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=216&crop=smart&auto=webp&s=1ae10c5a53638209dee07b017628d2a1fadc8d05', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=320&crop=smart&auto=webp&s=cf36565d3bac3086aaea4458c31609ff1b2c00b3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=640&crop=smart&auto=webp&s=8e182cefcf8da97d7b4369734149986feca334e5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=960&crop=smart&auto=webp&s=7699d0ad09185e2f560115cae5cb71e907073327', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=1080&crop=smart&auto=webp&s=7b11f6f2294899964ec8ed081777f4b6e19723b6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?auto=webp&s=81db4d9e1dd01a76f499e499f78aed3478ae6658', 'width': 1200}, 'variants': {}}]}
The difference between quantization methods for the same bits
3
Using GGML quantized models, let's say we are going to talk about 4bit I see a lot of versions suffixed with either 0, 1, k\_s or k\_m I understand that the difference is in the way of quantization that affect the final size of the quantized models but how does this effect quality of output and speed of inference?
2023-07-25T22:46:35
https://www.reddit.com/r/LocalLLaMA/comments/159nrh5/the_difference_between_quantization_methods_for/
yehiaserag
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159nrh5
false
null
t3_159nrh5
/r/LocalLLaMA/comments/159nrh5/the_difference_between_quantization_methods_for/
false
false
self
3
null
How to evaluate production data with Llama 2 70B quickly?
2
Hi everyone, I am working for a large corp that wants to evaluate Llama 2 70B to see how it performs against OpenAI on Azure, and Vertex AI. Because OpenAI on Azure and Vertex AI are deployed inside our VPC it is easy to try, but to try Llama 2 70B, we would need to get quotas for GPUs and I guess engineering effort on our side, which becomes much more painful. It seems like it is not trivial to solve, as we would like something that is fast, like a SaaS, but it's hard to have that accepted by Compliance / ITSec, and On-VPC deployment is painful in another way. What do you think are good ways to solve this? Have you encountered that issue too? Would love to have your opinion!
2023-07-25T22:46:35
https://www.reddit.com/r/LocalLLaMA/comments/159nrh6/how_to_evaluate_production_data_with_llama_2_70b/
Separate-Still3770
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159nrh6
false
null
t3_159nrh6
/r/LocalLLaMA/comments/159nrh6/how_to_evaluate_production_data_with_llama_2_70b/
false
false
self
2
null
What 7b model based on Llama-2 do you use?
2
I got left behind on the news after a couple weeks of "enhanced" worked commtments. Now I got the time on my hands, I felt really out of date on how fast things are going here. I know the "best" can be a bit subjective so I think the better question is, what 7b model do people use the most nowaday? GGML format would be best on my case. Thank you!
2023-07-26T00:41:48
https://www.reddit.com/r/LocalLLaMA/comments/159qm9p/what_7b_model_based_on_llama2_do_you_use/
Spirited_Employee_61
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159qm9p
false
null
t3_159qm9p
/r/LocalLLaMA/comments/159qm9p/what_7b_model_based_on_llama2_do_you_use/
false
false
self
2
null
QLora training, better to use base Llama 2 or an already fine-tuned version (Nous Hermes, WizardLM, etc)? Best way to format the dataset for training?
1
Hi, I help out in a fan translation community for Chinese games, usually wuxia or xianxia themed. Sadly this genre of games does not translate well with google, deepl, etc. We have had good success using chatgpt as an aide, but it would not be feasible to feed it a spreadsheet with thousands of lines to translate. One of us have already curated a 46 thousand line key value pair (untranslated and translated) dataset from previous translation mods we've made. They had mixed results with training the M2M100 480m model. I thought I would give making a qlora a try using their dataset, I remember when I tested [Chinese Nous Hermes](https://huggingface.co/coyude/Nous-Hermes-13b-Chinese-GGML) that I got some pretty promising results. This brings me to my question, would it be better to use an already finetuned model like WizardLM 1.2, or Nous Hermes Llama 2? Or should I just use the base Lllama 2 for training on this dataset. I will be trying to train this using qlora on a google collab notebook unless someone can suggest a better method (my local computer is a laptop with an 8gb 3070 laptop). This is my first time trying to train any sort of ai and Im not super familiar with python so any suggestions or help will be appreciated. My last question is how should the dataset be formatted for optimal results with qlora training? I was initially going to do something like this: [ { "instruction": "Translate this text from Chinese to English for a Wuxia themed game.", "input": "对了{junior},在阁中修业迈入了第二年,接下来应当会有更多阁外派遣吧?", "output": "By the way, {junior}, after entering the second year of training in the Pavilion, there should be more dispatches outside the Pavilion?" }, ... ] The AI would need to retain those curly brackets if possible, etc. There's sometimes things like that and markup language for things like font color mixed in.
2023-07-26T00:58:16
https://www.reddit.com/r/LocalLLaMA/comments/159qzgs/qlora_training_better_to_use_base_llama_2_or_an/
lemon07r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159qzgs
false
null
t3_159qzgs
/r/LocalLLaMA/comments/159qzgs/qlora_training_better_to_use_base_llama_2_or_an/
false
false
self
1
{'enabled': False, 'images': [{'id': '6q6Y2tDda1x3yhz5MsJHEz6XFIE8uQ8cIyWyaPYZSSc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jPe4mRYKQTC_LaFP1sgw2vXrcej79Jwqzd8wahsUx2g.jpg?width=108&crop=smart&auto=webp&s=d80482e8642de7b9ca2ca40d5f0b33cdff9279a2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/jPe4mRYKQTC_LaFP1sgw2vXrcej79Jwqzd8wahsUx2g.jpg?width=216&crop=smart&auto=webp&s=d72bfb824c359cbf0547a817791d3ea051625798', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/jPe4mRYKQTC_LaFP1sgw2vXrcej79Jwqzd8wahsUx2g.jpg?width=320&crop=smart&auto=webp&s=6cffdd47cd174ecbc05137835fa2a96059a137b0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/jPe4mRYKQTC_LaFP1sgw2vXrcej79Jwqzd8wahsUx2g.jpg?width=640&crop=smart&auto=webp&s=aeb95b2ea6dfa72679b25da733e6d764163a66d2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/jPe4mRYKQTC_LaFP1sgw2vXrcej79Jwqzd8wahsUx2g.jpg?width=960&crop=smart&auto=webp&s=a6c2784c54d007195977b14ecbf94e3a2667ca1a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/jPe4mRYKQTC_LaFP1sgw2vXrcej79Jwqzd8wahsUx2g.jpg?width=1080&crop=smart&auto=webp&s=1b6fffd64bd4fa57de0d3c42c49f96c4c7464d9a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/jPe4mRYKQTC_LaFP1sgw2vXrcej79Jwqzd8wahsUx2g.jpg?auto=webp&s=d7ed719efe28350eeca4505414d53666976c474a', 'width': 1200}, 'variants': {}}]}
Unfiltered version of open-assistant/guanaco dataset
5
Should we prepare unfiltered version of this dataset
2023-07-26T01:21:41
https://www.reddit.com/r/LocalLLaMA/comments/159rilb/unfiltered_version_of_openassistantguanaco_dataset/
Sufficient_Run1518
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159rilb
false
null
t3_159rilb
/r/LocalLLaMA/comments/159rilb/unfiltered_version_of_openassistantguanaco_dataset/
false
false
self
5
null
Llama2 (7B-Chat) surprisingly outputs nice essay when started with “random prompt”
7
 Run command: ** ./main -m ./models/7B/ggml-model-q4_0.bin -n 1024 --repeat-penalty 1.0 --color -i --interactive-first -r "User:" --random-prompt** Llama2 opened with the random word “If” and I filled it in ___ If I was a succesful application developer who just released a successful monetizable concept utilizing Docker - that concept explained in all its details would look like:  prüm ===================================================================  Title: Docker-based Monetization of a Successful Application ===================================================================   Abstract: This document outlines a strategy for leveraging Docker to monetize a successful application. The concept involves creating a containerized version of the application, offering it as a service to customers, and generating revenue through subscription fees or pay-per-use models. The document provides a detailed explanation of the concept, its potential benefits, and the steps required to implement it. Introduction: Monetizing a successful application is essential for any business to remain competitive and sustainable in today's fast-paced digital landscape. One innovative approach is to leverage Docker, an open-source platform for containerization. By packaging the application into a container, businesses can offer it as a service to customers, generating revenue through various pricing models. In this document, we will explore the Docker-based monetization concept in detail, including its potential benefits, the steps required to implement it, and potential challenges. Potential Benefits: 1. Increased Revenue: By packaging the application into a container, businesses can offer it as a service to customers, generating revenue through subscription fees or pay-per-use models. 2. Scalability: Docker containers can be easily scaled to meet the needs of a growing customer base, without requiring significant investment in infrastructure. 3. Flexibility: The containerized application can be deployed on various platforms, such as cloud, on-premise, or hybrid environments, providing customers with flexibility in how they consume the service. 4. Ease of Deployment: With Docker, the application can be deployed quickly and easily, reducing deployment times and minimizing the risk of errors during the deployment process. 5. Improved Security: Docker containers provide an additional layer of security, as each container is isolated from the rest of the system, reducing the risk of security breaches. 6. Better Performance: Docker containers can improve application performance by providing faster start-up times, improved resource utilization, and reduced overhead.   Steps Required to Implement:   1. Identify the Target Audience: Determine the target audience for the application, including their needs, preferences, and expectations. 2. Define the Requirements: Define the requirements for the application, including the features and functionalities needed to meet the target audience's needs. 3. Develop the Application: Develop the application using Docker, incorporating the required features and functionalities. 4. Test and Validate: Test and validate the application to ensure it meets the requirements and works as expected. 5. Package and Deploy: Package the application into a container, and deploy it to the target environment. 6. Monitor and Maintain: Monitor and maintain the application, ensuring it continues to meet the needs of the target audience.   Challenges: 1. Learning Curve: There may be a learning curve for developers and IT teams who are not familiar with Docker and containerization technologies. 2. Security Concerns: There may be concerns about the security of the containerized application, particularly if the container is not properly configured or secured. 3. Integration: Integrating the containerized application with existing systems and infrastructure may present challenges, particularly if the systems are not compatible with Docker. 4. Limited Resources: Containerization may not be feasible for applications that require extensive resources, such as large databases or complex simulations. 5. Lack of Standardization: There may be a lack of standardization in the use of Docker and containerization technologies, which can make it difficult to ensure consistency and reproducibility across different environments and teams.   Best Practices: 1. Plan and Design: Plan and design the application and its containerization strategy, taking into account the target audience, requirements, and constraints. 2. Use Official Images: Use official Docker images to ensure consistency and reproducibility across different environments and teams. 3. Use Secure Containers: Use secure containers to protect the application and its data, and to meet security requirements. 4. Monitor and Log: Monitor and log the application to ensure it is working as expected and to identify any issues or errors. 5. Test and Validate: Test and validate the application to ensure it meets the requirements and works as expected. 6. Document and Share: Document and share the containerization strategy and results, to ensure consistency and reproducibility across different environments and teams.   Future Developments:   1. Improved Security: Containers will continue to improve security features, such as better isolation, to meet the increasing security demands of enterprises. 2. Better Performance: Containers will continue to improve performance, such as faster startup times, to meet the needs of modern applications. 3. Simplified Management: Containers will continue to simplify management, such as automated scaling and failover, to make it easier for developers and ops teams to work together. 4. Hybrid and Multi-Cloud: Containers will continue to support hybrid and multi-cloud strategies, to give enterprises the flexibility to deploy applications across different environments. 5. AI and Machine Learning: Containers will continue to incorporate AI and machine learning to improve automation, security, and performance. 6. Collaboration and Interoperability: Containers will continue to improve collaboration and interoperability, to make it easier for developers and ops teams to work together and integrate with other technologies.   Conclusion: Containerization has become a crucial aspect of modern software development and deployment. By using containers, developers can package their applications and their dependencies into a single container that can be run consistently across different environments, such as development, testing, staging, and production. This consistency and portability make it easier for developers and ops teams to work together, reduce the risk of errors, and improve the overall efficiency of the development and deployment process. Containers will continue to evolve and improve in the coming years, incorporating new technologies and addressing emerging challenges, such as security, performance, and management. As containers continue to advance, they will remain an essential tool for software development and deployment.
2023-07-26T02:15:06
https://www.reddit.com/gallery/159soaj
Happy_Chicken9835
reddit.com
1970-01-01T00:00:00
0
{}
159soaj
false
null
t3_159soaj
/r/LocalLLaMA/comments/159soaj/llama2_7bchat_surprisingly_outputs_nice_essay/
false
false
https://b.thumbs.redditm…2a6NK4qCsoVs.jpg
7
null
Way to fill in blank information in structured text besides LMQL?
1
&#x200B; https://preview.redd.it/t4zxc2zka8eb1.png?width=1914&format=png&auto=webp&s=3f3cbb47705329d955e94960d252bbf19b4886c4 I would like to be able to create something that fills in gaps in structured text instead of LLM generating everything. Is LMQL my only option or is there a simpler way? I am asking because i want to use koboldapi with a locally hosted instance how i can do with langchain with a wrapper for the API. Is there any way I can also either use KoboldAPI in LMQL or achieve this result some other way? Im going to guess that unless I want to do a lot of writing my only good option is to switch to LlamaCpp. (Finetuning is probably out of the equation for quite some time because of money constraints, and on their own prompts on the other hand can be fragile and random) Would i be better off looking for a " symbol and type examples for the AI in the system prompt? I would appreciate any suggestions. Cheers.
2023-07-26T03:27:03
https://www.reddit.com/r/LocalLLaMA/comments/159u7a1/way_to_fill_in_blank_information_in_structured/
paryska99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159u7a1
false
null
t3_159u7a1
/r/LocalLLaMA/comments/159u7a1/way_to_fill_in_blank_information_in_structured/
false
false
https://b.thumbs.redditm…_uGsdLJCJD_M.jpg
1
null
Hacked away an abysmally simple Code Interpreter over the weekend using locally hosted Llama-2 based models.
10
First off I want to thank chatGPT for making all this possible! \----------------------------------------------------------------------------------------------------------------------------------------------------- UPDATE: Repo is up: [🦙 Baby Code](https://github.com/itsPreto/baby-code) \----------------------------------------------------------------------------------------------------------------------------------------------------- Feel free to give as much feedback as possible. I have a full time (very busy) day job, which I'm late to as I type this-- but I'll be looking to maintain this as much as possible on my downtime! In the last 7 months I went from knowing only Kotlin (for my Android dev job) to know basically being a full-stack engineer-- and I know I'm probably doing a bunch of things wrong but just WOW have I already come a long way. So I figured I'd post here first to get some initial feedback before contemplating to push to Github (this would be my first open source project:) Anyways-- I've got loads of ideas for features and improvements but I would, first and foremost, like to hear from the community. Was also learning a bit of video editing lol :3 [my goofy code interpreter](https://reddit.com/link/159v7re/video/20gg9zogg8eb1/player) `In a nutshell` \- `Backend`: **Python Flask** (CORS for serving both the api and the html). \- `Frontend`: **Html/Js/Css** (I designed the UI 100% to my own liking but I'd be open to change it). \- `Engine`: **Llama.cpp** *(Inferencing library for Llama/GGML models).* \- `Model`: **Llama-2** *(only models compatible with Llama.cpp).* \- `Arbiter`: **LangChain** (for gluing all of this together). \- `Wrapper`: **LlamaCpp** *(Langchain's wrapper around Llama.cpp for loading the models).* (`visualized`): | | | | | | |:-------:|:------:|:--------:|:-----:|:-------:| | Frontend| <--> | Backend | <--> | | | (HTML, | | (Flask) | | | | CSS, JS)| | | | | | |<------>| /generate| |Llama.Cpp| | | | | | (models | | |<------>| /run | | locally | | | | | | stored) | | | | | | | | | LlamaCpp | | | | | <--------> | | | | | | | | LangChain | | <------------------------> | \-----------------------------------------------------------------------------------------------------------------------------------------------------GPT-4's breakdown: 1. ***Frontend***: The user interacts with the frontend of the application, typically by entering a Python-related question or prompt into the text area and clicking on a button to send this question to the server. The request sent from the frontend to the backend is an HTTP POST request, containing the user's question in the body. If the response contains a valid Python code snippet, it is displayed in the output text area. If the response contains an error message, the error message is displayed instead. 2. ***Backend***: The backend receives the HTTP POST request from the frontend. This triggers the \`/generate\` endpoint. The question from the request is passed to the LlamaCpp model to generate a Python code snippet that could answer the user's question. The Python code is extracted from the model's response using a regular expression that matches Python code enclosed in triple backticks. * A user interacts with the frontend, triggering an HTTP POST request to the backend's /generate endpoint. * The backend processes the request, which involves calling the LlamaCpp model to generate Python code in response to a given prompt. * The backend extracts the Python code from the model's response and runs it using the /run endpoint. * The backend sends an HTTP response back to the frontend with the results of the code execution, which updates the user interface accordingly.
2023-07-26T04:16:58
https://www.reddit.com/r/LocalLLaMA/comments/159v7re/hacked_away_an_abysmally_simple_code_interpreter/
LyPreto
self.LocalLLaMA
2023-07-27T00:00:17
0
{}
159v7re
false
null
t3_159v7re
/r/LocalLLaMA/comments/159v7re/hacked_away_an_abysmally_simple_code_interpreter/
false
false
https://b.thumbs.redditm…175sLgtR9kLA.jpg
10
{'enabled': False, 'images': [{'id': '_ypYR2bezMLYEsU_ln7OwfOIF-wB-yXAAC9-SCtVMiM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vv0xL817m_NGth_qIAj8M877nYe4zcadu3LNFYBsP7A.jpg?width=108&crop=smart&auto=webp&s=fefdd6224ba1f6842de3cea2a9ebc1be1c0da190', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vv0xL817m_NGth_qIAj8M877nYe4zcadu3LNFYBsP7A.jpg?width=216&crop=smart&auto=webp&s=f5bef2cb840decb757256e251f8bd91a35905563', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vv0xL817m_NGth_qIAj8M877nYe4zcadu3LNFYBsP7A.jpg?width=320&crop=smart&auto=webp&s=714e91d656a0d754fb2e4524583df3b45e1d78b5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vv0xL817m_NGth_qIAj8M877nYe4zcadu3LNFYBsP7A.jpg?width=640&crop=smart&auto=webp&s=96a6382979da810bf4f006909a61ed98bfd81a8d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vv0xL817m_NGth_qIAj8M877nYe4zcadu3LNFYBsP7A.jpg?width=960&crop=smart&auto=webp&s=d510769b86fa03dac20c0c187f1b6700fbbda0e7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vv0xL817m_NGth_qIAj8M877nYe4zcadu3LNFYBsP7A.jpg?width=1080&crop=smart&auto=webp&s=6d749009e1a1c4bb768fc01db1e429ec682c60cf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vv0xL817m_NGth_qIAj8M877nYe4zcadu3LNFYBsP7A.jpg?auto=webp&s=85963ab5ffccf85ea9527e564a9788b0c697d793', 'width': 1200}, 'variants': {}}]}
How to stop getting these kind of comments during roleplay?
2
I'm using Chronos-hermes 13B with SillyTavern, sometimes I get these walls of text at the end of a reply (i'm censoring the character's name and my name, and the conversation): [Shogun.](https://preview.redd.it/sye7l9xnk8eb1.png?width=605&format=png&auto=webp&s=6f502c739aa63e3d56750b9bb0b6261040a019dc) [Censored because is NSFW.](https://preview.redd.it/5govzs4pj8eb1.png?width=615&format=png&auto=webp&s=9cbffa8ebc829e5e125e8df979a2cbb63bd36845) There is no character's note using KoboldAI with ChronosHermes13B, so, how can I instruct the AI to only reply as the character they are roleplaying? I tried changing the temperature but nothing happened, they keep adding "reviews" of the story or entire character descriptions for some reason.
2023-07-26T04:27:49
https://www.reddit.com/r/LocalLLaMA/comments/159vf66/how_to_stop_getting_these_kind_of_comments_during/
allmightyloser
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159vf66
false
null
t3_159vf66
/r/LocalLLaMA/comments/159vf66/how_to_stop_getting_these_kind_of_comments_during/
false
false
https://b.thumbs.redditm…weTHFi1oCDuc.jpg
2
null
currently aiming to fine-tune a 7b parameter model to beat 30b/40b need difficult Benchmark questions
3
I'm currently training my own model that in my opinion Rivals that of responses from the top 40B models any questions you seem to always get bad answers too , can help me benchmark and further improve the llm based on them so please reply to this post with any prompts that may help as I do of course plan on open sourcing the Finished model the overall reasoning for the 7B is for the overarching need for expensive Hardware in the local language model community or renting from cloud-based services overall the presence for pushing the edges of lower parameter models seems to be limited to 13B at best
2023-07-26T04:40:50
https://www.reddit.com/r/LocalLLaMA/comments/159vo8c/currently_aiming_to_finetune_a_7b_parameter_model/
bralynn2222
self.LocalLLaMA
2023-07-26T04:48:10
0
{}
159vo8c
false
null
t3_159vo8c
/r/LocalLLaMA/comments/159vo8c/currently_aiming_to_finetune_a_7b_parameter_model/
false
false
self
3
null
Does a gpu llama model have to be GGML for it to be offloaded to CPU?
1
New to all of this, I have a model I want to use but it is not GGML. Does that mean I can only run it on GPU, or can I still offload some of it to ram? Thanks
2023-07-26T05:57:56
https://www.reddit.com/r/LocalLLaMA/comments/159x44k/does_a_gpu_llama_model_have_to_be_ggml_for_it_to/
PurdSurv
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159x44k
false
null
t3_159x44k
/r/LocalLLaMA/comments/159x44k/does_a_gpu_llama_model_have_to_be_ggml_for_it_to/
false
false
self
1
null
OpenAI’s Andrej Karpathy Launches Baby Llama 2
9
2023-07-26T06:25:05
https://www.theinsaneapp.com/2023/07/openai-karpathy-launches-baby-llama-2.html
vadhavaniyafaijan
theinsaneapp.com
1970-01-01T00:00:00
0
{}
159xlup
false
null
t3_159xlup
/r/LocalLLaMA/comments/159xlup/openais_andrej_karpathy_launches_baby_llama_2/
false
false
https://b.thumbs.redditm…3QO_6WBMKKuo.jpg
9
{'enabled': False, 'images': [{'id': 'F6A7MMhrmtgO4VDrfl_-k2zwZbbD-TRvLZLMXcwhV5k', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/iQAFux1hz68xt3XiewMdxdNAfNIIyyrNOYWVxLU4bj8.jpg?width=108&crop=smart&auto=webp&s=5f90afb7b473341e9d8d9fe144974bf3a3e8e23f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/iQAFux1hz68xt3XiewMdxdNAfNIIyyrNOYWVxLU4bj8.jpg?width=216&crop=smart&auto=webp&s=53c86e7e110ba874b41c0138bd9b0fd3d8e956de', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/iQAFux1hz68xt3XiewMdxdNAfNIIyyrNOYWVxLU4bj8.jpg?width=320&crop=smart&auto=webp&s=8384269ac2383184d7fb26439cffa43dd7af38d1', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/iQAFux1hz68xt3XiewMdxdNAfNIIyyrNOYWVxLU4bj8.jpg?width=640&crop=smart&auto=webp&s=477e03f133736f2dcdd329f6ecfd1f819342daee', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/iQAFux1hz68xt3XiewMdxdNAfNIIyyrNOYWVxLU4bj8.jpg?width=960&crop=smart&auto=webp&s=0b2e818611cc6fac165695137971b61c4404ddc9', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/iQAFux1hz68xt3XiewMdxdNAfNIIyyrNOYWVxLU4bj8.jpg?width=1080&crop=smart&auto=webp&s=5cbb23c819b4f49e16a9aafedd365222d9dc8688', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/iQAFux1hz68xt3XiewMdxdNAfNIIyyrNOYWVxLU4bj8.jpg?auto=webp&s=05f731ee9dcd4f60649e398bf719e9cfd74d9d5f', 'width': 1280}, 'variants': {}}]}
Quantization and CPU only inference
1
I want to play with some models locally (CPU only) and I am a bit lost : \- Are GGML models my best bet for CPU only inference? \- Which quantization ? TheBloke models have a huge list of quantization flavors for each model, which is the best for CPU only? Thanks! &#x200B;
2023-07-26T06:50:34
https://www.reddit.com/r/LocalLLaMA/comments/159y1nd/quantization_and_cpu_only_inference/
Low-Woodpecker-4522
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159y1nd
false
null
t3_159y1nd
/r/LocalLLaMA/comments/159y1nd/quantization_and_cpu_only_inference/
false
false
self
1
null
'NoneType' object is not iterable Error when loading llama-2-7b-chat through vLLM
1
[removed]
2023-07-26T07:05:17
https://www.reddit.com/r/LocalLLaMA/comments/159yb4m/nonetype_object_is_not_iterable_error_when/
comical_cow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159yb4m
false
null
t3_159yb4m
/r/LocalLLaMA/comments/159yb4m/nonetype_object_is_not_iterable_error_when/
false
false
default
1
null
'NoneType' object is not iterable Error when loading llama-2-7b-chat through vLLM
2
I downloaded the models from meta directly yesterday and I've put them on gdrive so I can access them through colab. I tried loading the model through vllm and faced the following 2 errors first: - config.json not found - For this I renamed the params.json to config.json file in the model path. - Tokenizer not found - I uploaded tokenizer.model in the model path. Then when I try to load the model, it takes 5-10 minutes to load the tokenizer, then gives me the error 'NoneType' object is not iterable error. I have attached a picture of the error, but is there something wrong I am doing? Is there a different config.json file I should be using?
2023-07-26T07:09:54
https://i.redd.it/1n8cvcwfe9eb1.jpg
jacks751
i.redd.it
1970-01-01T00:00:00
0
{}
159ye11
false
null
t3_159ye11
/r/LocalLLaMA/comments/159ye11/nonetype_object_is_not_iterable_error_when/
false
false
https://a.thumbs.redditm…DDjnAxlhZjx8.jpg
2
{'enabled': True, 'images': [{'id': '0uKkKWUWE26YYZheSlD_FXnCZ-RkiU2vhVbstGdphqQ', 'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/1n8cvcwfe9eb1.jpg?width=108&crop=smart&auto=webp&s=d27bed5de4437005cdaa37d9b837a403fa8ab06f', 'width': 108}, {'height': 153, 'url': 'https://preview.redd.it/1n8cvcwfe9eb1.jpg?width=216&crop=smart&auto=webp&s=3b25767ea740f7c83404b844f3c9e21e7c75768e', 'width': 216}, {'height': 227, 'url': 'https://preview.redd.it/1n8cvcwfe9eb1.jpg?width=320&crop=smart&auto=webp&s=14783379d8e77ec69b23c6be91c17740df0a3336', 'width': 320}, {'height': 454, 'url': 'https://preview.redd.it/1n8cvcwfe9eb1.jpg?width=640&crop=smart&auto=webp&s=b24a2bf3a04051f02572c1aac0cbdd9cfd7cc0ae', 'width': 640}, {'height': 682, 'url': 'https://preview.redd.it/1n8cvcwfe9eb1.jpg?width=960&crop=smart&auto=webp&s=5160217f064c96a3298618a49c538629190418eb', 'width': 960}, {'height': 767, 'url': 'https://preview.redd.it/1n8cvcwfe9eb1.jpg?width=1080&crop=smart&auto=webp&s=91014a18a1e1d4dbe4e3e977ca2420838b19fda8', 'width': 1080}], 'source': {'height': 1137, 'url': 'https://preview.redd.it/1n8cvcwfe9eb1.jpg?auto=webp&s=1365ea511b631fcc71f90980a88d5e1950ca16e1', 'width': 1600}, 'variants': {}}]}
My workplace is looking into building PC for Local LLM models to run
1
We are not having issues with budget, what would be the best configuration would be best to build? I'm looking right now at GeForce RTX 4090 for graphics card that all I know so far, and what else everything needs to be? RAM 128GB?, What CPU would be better?
2023-07-26T07:27:12
https://www.reddit.com/r/LocalLLaMA/comments/159yp7c/my_workplace_is_looking_into_building_pc_for/
SupremeConscious
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159yp7c
false
null
t3_159yp7c
/r/LocalLLaMA/comments/159yp7c/my_workplace_is_looking_into_building_pc_for/
false
false
self
1
null
Llama-2-7b-chat producing gibberish output
1
I'm using the llama-2-7b-chat model through vllm, and it is giving me a gibberish output. I've checked the config.json file and I've ensured that tokenizer is the one from meta's GitHub. I don't know what I'm doing wrong and how to fix it.
2023-07-26T08:08:00
https://i.redd.it/2pej8fuso9eb1.jpg
jacks751
i.redd.it
1970-01-01T00:00:00
0
{}
159zejn
false
null
t3_159zejn
/r/LocalLLaMA/comments/159zejn/llama27bchat_producing_gibberish_output/
false
false
https://b.thumbs.redditm…tER2kiZ63Uhw.jpg
1
{'enabled': True, 'images': [{'id': 'JFueTY5GodsZFWG8oxN4z7mrzUrOnunb7jblnE3TjwI', 'resolutions': [{'height': 29, 'url': 'https://preview.redd.it/2pej8fuso9eb1.jpg?width=108&crop=smart&auto=webp&s=7631ead19286a42a71ee9418e3cd085f08cdef27', 'width': 108}, {'height': 58, 'url': 'https://preview.redd.it/2pej8fuso9eb1.jpg?width=216&crop=smart&auto=webp&s=02f5bc052fb74d1b0d8152c584ab33104306712d', 'width': 216}, {'height': 87, 'url': 'https://preview.redd.it/2pej8fuso9eb1.jpg?width=320&crop=smart&auto=webp&s=8caea6e9614619f43855389066bcf8c935634c08', 'width': 320}, {'height': 174, 'url': 'https://preview.redd.it/2pej8fuso9eb1.jpg?width=640&crop=smart&auto=webp&s=3afeca9e101a6c51ce84a142dd704fcdee57ebb1', 'width': 640}, {'height': 261, 'url': 'https://preview.redd.it/2pej8fuso9eb1.jpg?width=960&crop=smart&auto=webp&s=8114067cd3395be52af990d6d82d39496297920d', 'width': 960}, {'height': 294, 'url': 'https://preview.redd.it/2pej8fuso9eb1.jpg?width=1080&crop=smart&auto=webp&s=591dcb6f9936a7d3ab2d15a7faa085a747cde80d', 'width': 1080}], 'source': {'height': 436, 'url': 'https://preview.redd.it/2pej8fuso9eb1.jpg?auto=webp&s=ec2741eb13fffe081f082461325326f850579cde', 'width': 1600}, 'variants': {}}]}
I'm relatively new to LLM's but I find it odd, that a supposedly "superior" GPT-4 LLM is flat out not doing what I need it to, but Llama 2 does it very well!
1
In a nutshell, I'm totally new to all this AI stuff, but I've managed to set up oobabooga with llama-2-13b-chat.ggmlv3.q4\_1, and let me tell you, it's been an awesome experience so far. The only thing is, I kinda wish it were a bit more open-minded when it comes to morals and ethics. I think it would be even funnier and more helpful that way. Nevertheless, it's been performing exceptionally well with the tasks I throw at it, unlike those sneaky GPT-3.5 and GPT-4 versions that used to deceive and gaslight me constantly and need to be constantly prompting. But hey, I'm genuinely excited to see where the Llama LLM will go in the future! [GPT-3.5](https://preview.redd.it/zkz41rjzn9eb1.png?width=662&format=png&auto=webp&s=98feddd144cd0de03341d8b06ae8655ea5f92173) [GPT-3.5](https://preview.redd.it/wqxney3wr9eb1.png?width=683&format=png&auto=webp&s=f62ae91641232722ad191cc533b7cba565602f58) [llama-2-13b-chat.ggmlv3.q4\_1](https://preview.redd.it/4dya5ujzn9eb1.png?width=783&format=png&auto=webp&s=c4ef0494dbe8f403ff74358737f7f7ee3b451e49)
2023-07-26T08:26:42
https://www.reddit.com/r/LocalLLaMA/comments/159zq6y/im_relatively_new_to_llms_but_i_find_it_odd_that/
chrisebryan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
159zq6y
false
null
t3_159zq6y
/r/LocalLLaMA/comments/159zq6y/im_relatively_new_to_llms_but_i_find_it_odd_that/
false
false
https://b.thumbs.redditm…5DU523R21fQg.jpg
1
null
Which LLama2 can I run?
1
I have 2 3090 and 32 GB of RAM. Which Llama2 Unquantized can I run?
2023-07-26T08:46:31
https://www.reddit.com/r/LocalLLaMA/comments/15a02do/which_llama2_can_i_run/
rajanghimire534
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15a02do
false
null
t3_15a02do
/r/LocalLLaMA/comments/15a02do/which_llama2_can_i_run/
false
false
self
1
null
Malicious prompt injection
1
A subtle vector for spamming/phishing where the user just sees images/audio but there's nastiness behind the scenes. From the Twitter thread : "...it only works on open-source models (i.e. model weights are public) because these are adversarial inputs and finding them requires access to gradients... I'd hoped that open source models would be particularly appropriate for personal assistants because they can be run locally and avoid sending personal data to LLM providers but this puts a bit of a damper on that." [https://twitter.com/random\_walker/status/1683833600196714497](https://twitter.com/random_walker/status/1683833600196714497) Paper: [https://arxiv.org/abs/2307.10490](https://arxiv.org/abs/2307.10490)
2023-07-26T08:46:53
https://www.reddit.com/r/LocalLLaMA/comments/15a02le/malicious_prompt_injection/
danja
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15a02le
false
null
t3_15a02le
/r/LocalLLaMA/comments/15a02le/malicious_prompt_injection/
false
false
self
1
{'enabled': False, 'images': [{'id': 'SXJnLOEjbW5FqerOHQrTs4fE3usWKuzASiViNabAJsE', 'resolutions': [{'height': 137, 'url': 'https://external-preview.redd.it/cvCJQRIs7f7wZFQvZqkTXg2gKlYgcMkH-l-quk-cReY.jpg?width=108&crop=smart&auto=webp&s=6d77c61972835cc18f1568dc51626ec9836e388d', 'width': 108}, {'height': 274, 'url': 'https://external-preview.redd.it/cvCJQRIs7f7wZFQvZqkTXg2gKlYgcMkH-l-quk-cReY.jpg?width=216&crop=smart&auto=webp&s=5a591a8bca5b8bd5749e46301640d63c872136e3', 'width': 216}, {'height': 406, 'url': 'https://external-preview.redd.it/cvCJQRIs7f7wZFQvZqkTXg2gKlYgcMkH-l-quk-cReY.jpg?width=320&crop=smart&auto=webp&s=2368cdcf1a261a485d266d2524ae42de22313761', 'width': 320}, {'height': 812, 'url': 'https://external-preview.redd.it/cvCJQRIs7f7wZFQvZqkTXg2gKlYgcMkH-l-quk-cReY.jpg?width=640&crop=smart&auto=webp&s=dd02b4fa36b827850027112e284878477d4cf8e3', 'width': 640}, {'height': 1218, 'url': 'https://external-preview.redd.it/cvCJQRIs7f7wZFQvZqkTXg2gKlYgcMkH-l-quk-cReY.jpg?width=960&crop=smart&auto=webp&s=c068c9a17f125dee94bd1b0549ef0d1abe16f630', 'width': 960}], 'source': {'height': 1246, 'url': 'https://external-preview.redd.it/cvCJQRIs7f7wZFQvZqkTXg2gKlYgcMkH-l-quk-cReY.jpg?auto=webp&s=834539c07bc382fbd3c5fe45bcda252180980d0a', 'width': 982}, 'variants': {}}]}
WizardLM 13B V1.2 Roleplay Sampler
1
[removed]
2023-07-26T09:00:25
https://www.reddit.com/r/LocalLLaMA/comments/15a0az7/wizardlm_13b_v12_roleplay_sampler/
Aeon-Black
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15a0az7
false
null
t3_15a0az7
/r/LocalLLaMA/comments/15a0az7/wizardlm_13b_v12_roleplay_sampler/
false
false
nsfw
1
null
OpenAI is still exploring an open source LLM release, currently codenamed G3PO, and views Llama 2's rapid adoption as a threat
1
This news comes from *The Information*, the same business publication that previously leaked the imminent release of Llama 2. The [full article](https://www.theinformation.com/articles/pressure-grows-on-openai-to-respond-to-metas-challenge) is paywalled but here's a quick summary of the situation: * [Last time](https://www.reuters.com/technology/openai-readies-new-open-source-ai-model-information-2023-05-15/) this was reported two months ago, OpenAI was reportedly preparing for an immediate release. Now, they're still exploring the idea of releasing an open source model but haven't confirmed a timeline yet. * OpenAI is feeling pressured by Meta's release of Llama 2. Their model, named G3PO internally, is unlikely to be competitive with GPT-3.5 or GPT-4. The G3PO name could be a hint to its capabilities. * According to the author, they're delaying the release because they want to focus on launching an app store and creating a personalized ChatGPT assistant. Their app store would be a marketplace offering another way to forming developer lock-in. * Even with the delay and changing focus, OpenAI will likely move forward with an open source model for the same reasons Meta released Llama 2. They reportedly believe in a process of developing advanced models to generate revenue while releasing less advanced open source models to keep developers on their side. I wouldn't be surprised if they also delayed the release because they need more time to push their advanced models ahead. It'd be interesting to see a GPT-3.5-Turbo open sourced once something like GPT-4.5 exists.
2023-07-26T09:14:31
https://www.reddit.com/r/LocalLLaMA/comments/15a0jxk/openai_is_still_exploring_an_open_source_llm/
llamaShill
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15a0jxk
false
null
t3_15a0jxk
/r/LocalLLaMA/comments/15a0jxk/openai_is_still_exploring_an_open_source_llm/
false
false
self
1
{'enabled': False, 'images': [{'id': 'k_gM2lyBQ__LBnOY7SNXNiEqwPQYQ0BnT6iIuwGISzo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cUJWoEQagjFbxhZ4LXmKg-2b99VuIdhg2bu0LHJxNcs.jpg?width=108&crop=smart&auto=webp&s=b598c5aa52503f4a817a1e2b9f5d043015a8dc6f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cUJWoEQagjFbxhZ4LXmKg-2b99VuIdhg2bu0LHJxNcs.jpg?width=216&crop=smart&auto=webp&s=d5174573690f9ddef512b1df282117f29be94c75', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/cUJWoEQagjFbxhZ4LXmKg-2b99VuIdhg2bu0LHJxNcs.jpg?width=320&crop=smart&auto=webp&s=4df98d447072def2e9bd8657191dd53cba632db6', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/cUJWoEQagjFbxhZ4LXmKg-2b99VuIdhg2bu0LHJxNcs.jpg?width=640&crop=smart&auto=webp&s=8464b37961c2df31ac5fe4c71f8fb2dd4665b102', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/cUJWoEQagjFbxhZ4LXmKg-2b99VuIdhg2bu0LHJxNcs.jpg?width=960&crop=smart&auto=webp&s=3d6d423fc043d85e7db5d58c139c34b3550cdf97', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/cUJWoEQagjFbxhZ4LXmKg-2b99VuIdhg2bu0LHJxNcs.jpg?width=1080&crop=smart&auto=webp&s=93c1b5c6caab88c16fabfe9bfce2cd2bc5e1605d', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/cUJWoEQagjFbxhZ4LXmKg-2b99VuIdhg2bu0LHJxNcs.jpg?auto=webp&s=f537cba9de238588be26dd3c0aeaca33b0a87e1b', 'width': 1200}, 'variants': {}}]}
Short guide to hosting your own llama.cpp openAI compatible web-server
1
[removed]
2023-07-26T09:27:20
https://www.reddit.com/r/LocalLLaMA/comments/15a0s1y/short_guide_to_hosting_your_own_llamacpp_openai/
Combinatorilliance
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15a0s1y
false
null
t3_15a0s1y
/r/LocalLLaMA/comments/15a0s1y/short_guide_to_hosting_your_own_llamacpp_openai/
false
false
self
1
null
Can finetuning LLM beat Roberta/Deberta in traditional tasks?
1
For example, there is a classification dataset. If I finetune llama with this dataset, will it perform better than finetune a bert model?
2023-07-26T09:36:15
https://www.reddit.com/r/LocalLLaMA/comments/15a0xxx/can_finetuning_llm_beat_robertadeberta_in/
Financial_Stranger52
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15a0xxx
false
null
t3_15a0xxx
/r/LocalLLaMA/comments/15a0xxx/can_finetuning_llm_beat_robertadeberta_in/
false
false
self
1
null