| column | dtype | min | max |
|---|---|---|---|
| title | string (length) | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | string (length) | 0 | 40k |
| created | timestamp[ns] | | |
| url | string (length) | 0 | 780 |
| author | string (length) | 3 | 20 |
| domain | string (length) | 0 | 82 |
| edited | timestamp[ns] | | |
| gilded | int64 | 0 | 2 |
| gildings | string (7 classes) | | |
| id | string (length) | 7 | 7 |
| locked | bool (2 classes) | | |
| media | string (length) | 646 | 1.8k |
| name | string (length) | 10 | 10 |
| permalink | string (length) | 33 | 82 |
| spoiler | bool (2 classes) | | |
| stickied | bool (2 classes) | | |
| thumbnail | string (length) | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | string (length) | 301 | 5.01k |
Trying to fine-tune StarCoderBase 1B on the rombodawg dataset (Google Colab)
1
So, I was trying to fine-tune the smallest StarCoder on rombodawg's huge dataset. However, my PC is crap, so I was trying to use Google Colab. I don't know if it is possible. I was able to load the model, but failed to load the dataset in a trainable way. I am inexperienced, and I am trying to adapt some Google Colab notebooks I found on the topic (PEFT/LoRA). Can anyone help me? Give me some tips or at least some references? Or even show me that it is impossible heheeh.
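A minimal sketch of the kind of Colab recipe the post is reaching for: a PEFT/LoRA trainer wrapped around StarCoderBase-1B. The checkpoint name, dataset file, column name, target modules, and hyperparameters below are illustrative assumptions, not the poster's actual setup.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "bigcode/starcoderbase-1b"              # assumed checkpoint name
tok = AutoTokenizer.from_pretrained(model_id)
tok.pad_token = tok.eos_token                      # StarCoder has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_id)

# Attach a small LoRA adapter so only the adapter weights are trained.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["c_attn"],                     # GPTBigCode attention projection
    task_type="CAUSAL_LM",
))

# Turn raw text rows into token sequences the Trainer can consume
# (placeholder file; assumes each row has a "text" column).
ds = load_dataset("json", data_files="train.jsonl", split="train")
ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=1024),
            remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments("starcoder-lora-out", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1, fp16=True),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),   # causal-LM labels
)
trainer.train()
```

On a free-tier Colab GPU, a small batch size with gradient accumulation and a short max_length are usually what keep this within memory.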
2023-08-08T02:35:36
https://www.reddit.com/r/LocalLLaMA/comments/15l543h/trying_to_finetune_starcoderbase_1b_in_rombodawg/
GG9242
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15l543h
false
null
t3_15l543h
/r/LocalLLaMA/comments/15l543h/trying_to_finetune_starcoderbase_1b_in_rombodawg/
false
false
self
1
null
Why can't LoRA fine-tuning add knowledge?
1
Whenever someone asks about fine-tuning a LoRA for a Llama model to "add knowledge", someone will suggest doing RAG instead. What is the reason a LoRA fine-tune can't do it, but the original full-parameter pretraining can?
2023-08-08T02:44:27
https://www.reddit.com/r/LocalLLaMA/comments/15l5auj/why_cant_lora_fine_tune_add_knowledge/
xynyxyn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15l5auj
false
null
t3_15l5auj
/r/LocalLLaMA/comments/15l5auj/why_cant_lora_fine_tune_add_knowledge/
false
false
self
1
null
Baby 🦙 Code 🐍 Interpreter 🚀 (v.2)
1
# Hey everyone! 👋

Obs: you can scroll past if this ain't for you :) [About 2 weeks ago](https://www.reddit.com/r/LocalLLaMA/comments/159v7re/hacked_away_an_abysmally_simple_code_interpreter/), I excitedly posted here about my first-ever open-source library ***^(baby code)***. Today I'm happy to announce a major overhaul from its predecessor. 🌟

Before getting into the updates -- credit where credit is due, and a shoutout is in order: this project is fundamentally an extension of the `examples/server/` within the remarkable [Llama.cpp](https://github.com/ggerganov/llama.cpp). Without its invaluable arsenal of tools, I genuinely think I wouldn't have accomplished even 5% of what's been done here (^(this isn't a whole lot)). The heart of this project beats thanks to `llama.cpp`. 🙌 ***^(~and chatGPT)***

# Enter [[baby-llama.pycpp](https://github.com/itsPreto/baby-llama.pycpp#-baby-code-interpreter)] 🔗

Gone are the days (*13, in fact*) of multiple, needless layers of abstraction; this version is natively integrated into `llama.cpp`, ensuring smooth branch tracking from the upstream ***^(to)*** ***~~^(steal)~~*** ***^(borrow all their goodies,)*** and it's also way more reliable due to the [absolutely insane](https://github.com/ggerganov/llama.cpp/pulls) support this library has.

[https://github.com/itsPreto/baby-llama.pycpp](https://reddit.com/link/15l5ivs/video/g6exqn76usgb1/player)

**What's New?**

* 🚀 **Performance Boost**: By eliminating intermediary layers, the app runs faster and more reliably.
* 🎨 **UI Overhaul**: A more elegant and user-friendly design that feels different from any chat application you've used.
* 💬 **Contextual Conversations**: The model can now remember and refer back to previous parts of the conversation, ensuring you have a coherent chat experience.
* 🔄 **Dynamic Code Interaction**: Copy, run, or even compare differences between Python code blocks right from the chat.
* 🐞 **Auto-Debugging &** 🏃 **Auto-Run**: Errors in the code? The chatbot will let you know. Want to run the code without clicking a button? It can do that too!
* 📊 **Inference Metrics**: Stay informed about how fast the model is processing your requests.
* 📊 **Performance Metrics**: Keep a tally of how many scripts run successfully vs. how many don't.
* ❓ **Random Prompts**: Not sure what to ask? Click the "Rand" button for a random prompt!
* 📜 **Code Diff Viewer**: Select any two generated scripts to perform a fast diff between them.

...and many more possible features through the powerful backend combination of **Llama.cpp** and [ggml.ai](https://ggml.ai/) 🌐.

## I'd love to get your feedback on this project and invite contributors, maintainers, and testers to help me with it!

The project still needs a shit ton of improvements since the best practices were basically just ^(YEETED) out the window lol. *But as GPT-4 says*:

> "Be like the baby llama: Instead of chewing over the past, spit out the bugs and code your future." 🦙🐍✨

⚠️ **Disclaimer**: Use at your own discretion -- while this project offers features like Auto-Run and Auto-Debug, it's essential to understand the ^(obvious) potential dangers of running unchecked, LLM-generated code. The Auto-Run feature, *which is disabled by default*, ***WILL*** automatically execute any generated scripts without any checks when enabled. When it is turned off, you will be prompted through a confirmation window before the code executes. So ^(naturally), be cautious and enable this feature only when you are entirely sure of the code being executed.

> You hold the power!
Thanks for your time and looking forward to your valuable feedback! 🙏 And potential support!
2023-08-08T02:55:12
https://www.reddit.com/r/LocalLLaMA/comments/15l5ivs/baby_code_interpreter_v2/
LyPreto
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15l5ivs
false
null
t3_15l5ivs
/r/LocalLLaMA/comments/15l5ivs/baby_code_interpreter_v2/
false
false
https://a.thumbs.redditm…i75kyB08snM0.jpg
1
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=108&crop=smart&auto=webp&s=b6caea286bbf31bdb473212eb5668f45376977be', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=216&crop=smart&auto=webp&s=ba8933d74dda3c391a7c9a355d2e1cd0054d1c21', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=320&crop=smart&auto=webp&s=93b690f58b739ff61da7a147fc67d6c8842b3a7d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=640&crop=smart&auto=webp&s=a55f55983fcc0b3f5a6d4e0b51f627e1b40ef9d4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=960&crop=smart&auto=webp&s=e56b77b835b76c51a1e12a410b9e908f0255d397', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=1080&crop=smart&auto=webp&s=d06ca9eb5611d109d3ef7935f6de61545e9828da', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?auto=webp&s=0b2a006e16468374b78dd67390927053776e6137', 'width': 1280}, 'variants': {}}]}
Has anyone tried finetuning through ReLoRA?
9
Has anyone tried finetuning through [ReLoRA](https://github.com/Guitaricet/relora)? It seems like a way of adding knowledge to models.
2023-08-08T05:31:09
https://www.reddit.com/r/LocalLLaMA/comments/15l8oyv/has_anyone_tried_finetuning_through_relora/
ninjasaid13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15l8oyv
false
null
t3_15l8oyv
/r/LocalLLaMA/comments/15l8oyv/has_anyone_tried_finetuning_through_relora/
false
false
self
9
{'enabled': False, 'images': [{'id': 'bqlZHX0DME6igHKbkudvkCpdRG3-_vFiv_J1JMc8-qw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MR1Rdvi9CtLH50CgXROKE7njDcJNQZX5FmCbfjYzQ7Q.jpg?width=108&crop=smart&auto=webp&s=97168ba9e2c31c1c6c5aaf036d5e48c30a6c5780', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MR1Rdvi9CtLH50CgXROKE7njDcJNQZX5FmCbfjYzQ7Q.jpg?width=216&crop=smart&auto=webp&s=bcb86921adfc6adf1636cabf2583703d861959cc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MR1Rdvi9CtLH50CgXROKE7njDcJNQZX5FmCbfjYzQ7Q.jpg?width=320&crop=smart&auto=webp&s=b194cda91c36e9074dfe67294db8fbc802b5e8f2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MR1Rdvi9CtLH50CgXROKE7njDcJNQZX5FmCbfjYzQ7Q.jpg?width=640&crop=smart&auto=webp&s=1faa8f311ab5a516897bdcb44dea7957ac693f33', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MR1Rdvi9CtLH50CgXROKE7njDcJNQZX5FmCbfjYzQ7Q.jpg?width=960&crop=smart&auto=webp&s=5f72612f303299583bd58ed2a58c4773faee7e6f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MR1Rdvi9CtLH50CgXROKE7njDcJNQZX5FmCbfjYzQ7Q.jpg?width=1080&crop=smart&auto=webp&s=04f2aee3d9d1f21d87544f5f5c091cf21c5a1283', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MR1Rdvi9CtLH50CgXROKE7njDcJNQZX5FmCbfjYzQ7Q.jpg?auto=webp&s=0165fd76784143b91d0a212f85aaf2f844df823f', 'width': 1200}, 'variants': {}}]}
What llama 2 pay-on-demand API to use?
1
Hi all, I'd like to do some experiments with the 70B chat version of Llama 2. However, I don't have a good enough laptop to run it locally at reasonable speed, so I'm considering a remote service, since it's mostly for experiments. My question: do you have any recommendations for APIs I can use where I just pay per usage, same as the OpenAI API? I saw you can host the models on Hugging Face, Azure or AWS, but those keep a dedicated VM running (I think you have to start and stop it) that costs a fixed hourly price. Then I also saw [https://replicate.com/](https://replicate.com/), which I don't know, but they seem to offer what I need. Do you have experience with this service? Or do you have any recommendations? Thanks a lot!
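For reference, pay-per-use inference with the Replicate Python client looks roughly like the sketch below; the model slug and input field names are illustrative and should be checked against the model page on replicate.com.

```python
import replicate  # pip install replicate; reads REPLICATE_API_TOKEN from the environment

# Assumed slug for the hosted Llama-2-70B chat model; verify on replicate.com.
output = replicate.run(
    "meta/llama-2-70b-chat",
    input={"prompt": "Explain LoRA in two sentences.", "max_new_tokens": 200},
)
print("".join(output))  # the client returns the generation as a stream/list of text chunks
```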
2023-08-08T06:08:30
https://www.reddit.com/r/LocalLLaMA/comments/15l9eh7/what_llama_2_payondemand_api_to_use/
silvanmelchior
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15l9eh7
false
null
t3_15l9eh7
/r/LocalLLaMA/comments/15l9eh7/what_llama_2_payondemand_api_to_use/
false
false
self
1
null
Declarai - a game-changer for Python-based language model interactions!
3
Struggled with using LLMs in production? We've been there. That's why we created Declarai, an open-source gift to the engineering community.

What's Declarai? Simply put, it's declarative AI with Python. Keep developing as always, but supercharged with the power of LLMs. With Declarai, you can:

🔥 Write Declarative Python Code: Use Python's own syntax like type hints and docstrings to guide an AI model.

🔥 Build Robust Solutions: Craft production-grade AI systems, minus the complex prompts or messy dependencies.

https://preview.redd.it/64ubq3k4cugb1.png?width=1456&format=png&auto=webp&s=2dbd634b91a05e41e98898c8c6f68f5852c403d2

Feel free to take a look at our docs: [https://vendi-ai.github.io/declarai/](https://vendi-ai.github.io/declarai/)

We would love to get your feedback and would appreciate a star on GitHub 🙏 ⭐ [https://github.com/vendi-ai/declarai](https://github.com/vendi-ai/declarai)

Declarai is still in beta, so your feedback would be invaluable to us! Help us build the future of AI for engineers: no fancy terms, no advanced data science, just code that works 🤩
2023-08-08T07:31:28
https://www.reddit.com/r/LocalLLaMA/comments/15laymb/declarai_a_gamechanger_for_pythonbased_language/
matkley12
self.LocalLLaMA
2023-08-08T07:43:00
0
{}
15laymb
false
null
t3_15laymb
/r/LocalLLaMA/comments/15laymb/declarai_a_gamechanger_for_pythonbased_language/
false
false
https://b.thumbs.redditm…oKgBAhaYbYpI.jpg
3
null
Video testing (for reference's sake) with a bona fide trivial benchmark for the mountain of models that's forming on Hugging Face. Suggestions welcomed.
1
[removed]
2023-08-08T07:37:16
https://www.reddit.com/r/LocalLLaMA/comments/15lb2g8/video_testingfor_reference_sake_with_a_bonafide/
AI_Trenches
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15lb2g8
false
null
t3_15lb2g8
/r/LocalLLaMA/comments/15lb2g8/video_testingfor_reference_sake_with_a_bonafide/
false
false
self
1
{'enabled': False, 'images': [{'id': '4TdTL_e77QYjU7JKEsBZ1I1hqGryfWhbNyKTgp1CA-o', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/vLMAfi65baIodwRaA35YSLasEjU7MJlhVKL_0YQIOWc.jpg?width=108&crop=smart&auto=webp&s=53e7b5f7e79788f23fd93f4792d90756633147d2', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/vLMAfi65baIodwRaA35YSLasEjU7MJlhVKL_0YQIOWc.jpg?width=216&crop=smart&auto=webp&s=ada673f0b99006d94ae7b3be37a0e9360bbb6f46', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/vLMAfi65baIodwRaA35YSLasEjU7MJlhVKL_0YQIOWc.jpg?width=320&crop=smart&auto=webp&s=3ac23377f9516043e839af9d6c0a22c2493f91fc', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/vLMAfi65baIodwRaA35YSLasEjU7MJlhVKL_0YQIOWc.jpg?width=640&crop=smart&auto=webp&s=3dfe0a8971eb8a14297598b2f731251525ec093f', 'width': 640}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/vLMAfi65baIodwRaA35YSLasEjU7MJlhVKL_0YQIOWc.jpg?auto=webp&s=a99ff10d14e7943eda13f7c56eb9fb8832851019', 'width': 900}, 'variants': {}}]}
PyTorch and CUDA version mismatch
1
I am using this repo: [https://github.com/mzbac/qlora-fine-tune](https://github.com/mzbac/qlora-fine-tune) to fine-tune a 7B 4-bit quantized Llama model. However, when trying to run setup_cuda.py it says that my CUDA and torch versions are mismatched. I tried building torch from source, but it still only installs torch 2.0.1+cu118, which is incompatible with CUDA 12.2, which is what I am running. Should I downgrade my CUDA version to 11.8? https://preview.redd.it/nj1ne57miugb1.png?width=1256&format=png&auto=webp&s=2ee3611db5e7b9137d762ae302028c3e4d6f02b3
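A quick sanity check (a sketch): the PyTorch wheel bundles its own CUDA runtime, so the 12.2 reported by nvidia-smi is the driver/toolkit, while a compiled extension like setup_cuda.py generally needs to be built against the same CUDA version the torch wheel was built with (11.8 here).

```python
import torch

print(torch.__version__)          # e.g. "2.0.1+cu118"
print(torch.version.cuda)         # CUDA runtime the wheel was built with, e.g. "11.8"
print(torch.cuda.is_available())  # True if the installed driver can run that runtime

# A common fix is installing the CUDA 11.8 toolkit (nvcc) alongside the newer driver
# and pointing CUDA_HOME at it before building the extension, rather than downgrading
# the driver itself; drivers are backward compatible with older runtimes.
```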
2023-08-08T08:18:42
https://www.reddit.com/r/LocalLLaMA/comments/15lbss9/pytorch_and_cuda_version_mismatch/
QuantumTyping33
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15lbss9
false
null
t3_15lbss9
/r/LocalLLaMA/comments/15lbss9/pytorch_and_cuda_version_mismatch/
false
false
https://b.thumbs.redditm…fqQhdGZS3sYM.jpg
1
null
Pretty great reasoning from Nous Research Hermes Llama2 13B, q4.
1
2023-08-08T09:16:23
https://i.redd.it/wj382l7rsugb1.png
TopperBowers
i.redd.it
1970-01-01T00:00:00
0
{}
15lcv2i
false
null
t3_15lcv2i
/r/LocalLLaMA/comments/15lcv2i/pretty_great_reasoning_from_nous_research_hermes/
false
false
https://b.thumbs.redditm…ExrT0_5wjk6s.jpg
1
{'enabled': True, 'images': [{'id': 'MvcKsKpvWFfFarLhkDDllwYOXiC2Fici_u9m61spywM', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/wj382l7rsugb1.png?width=108&crop=smart&auto=webp&s=25c6d83946dd24932bd6e15017e449f1cd8e971a', 'width': 108}, {'height': 100, 'url': 'https://preview.redd.it/wj382l7rsugb1.png?width=216&crop=smart&auto=webp&s=1916ebcdd78e47c6188464490ef55b7d0a82ccfd', 'width': 216}, {'height': 148, 'url': 'https://preview.redd.it/wj382l7rsugb1.png?width=320&crop=smart&auto=webp&s=4d0598b344964a17b70b6241d65c3dc9bda78a2b', 'width': 320}, {'height': 297, 'url': 'https://preview.redd.it/wj382l7rsugb1.png?width=640&crop=smart&auto=webp&s=de241c9241e564154f78c232ed3d73a04a3b6ac6', 'width': 640}], 'source': {'height': 395, 'url': 'https://preview.redd.it/wj382l7rsugb1.png?auto=webp&s=8c11e5d6589c73e2ff59a58558eeadcd276f7d84', 'width': 850}, 'variants': {}}]}
Help: What sorcery do I use to figure out correct model loader?
1
I'm new to all of this stuff and am trying to load some new models, but I don't know how to determine the model loader settings. How does one determine these parameters? Is it written up somewhere? Is it buried in the model card, or does one have to work it out from some information on a website? Please help. Thank you.
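As a rough heuristic (an assumption-laden sketch, not an official mapping), the file format baked into the repo name usually tells you which loader to pick in the web UI:

```python
def guess_loader(repo_name: str) -> str:
    """Very rough mapping from model repo naming conventions to loaders."""
    name = repo_name.lower()
    if "ggml" in name:
        return "llama.cpp"            # CPU/Metal quantized single-file models
    if "gptq" in name:
        return "ExLlama or AutoGPTQ"  # 4-bit GPU-quantized models
    return "Transformers"             # plain fp16/safetensors checkpoints

print(guess_loader("TheBloke/Llama-2-7B-Chat-GGML"))   # llama.cpp
print(guess_loader("TheBloke/Llama-2-7B-Chat-GPTQ"))   # ExLlama or AutoGPTQ
```

TheBloke's model cards in particular usually spell out the intended loader and quantization parameters explicitly.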
2023-08-08T09:35:23
https://www.reddit.com/r/LocalLLaMA/comments/15ld8ld/help_what_sorcery_do_i_use_to_figure_out_correct/
w7gg33h
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ld8ld
false
null
t3_15ld8ld
/r/LocalLLaMA/comments/15ld8ld/help_what_sorcery_do_i_use_to_figure_out_correct/
false
false
self
1
null
What is the most effective language model for summarizing scientific text?
1
I've been working on abstractive summarization for scientific articles (mainly arXiv). Until now, I've fine-tuned a PEGASUS and a BART-large model and obtained okay-ish ROUGE values (~40 ROUGE-1). What other models would you suggest that generate coherent summaries given a section of a research paper? Thank you.
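For anyone wanting a baseline to compare against, a minimal summarization call with transformers looks like this (the checkpoint name is illustrative; swap in a fine-tuned PEGASUS/BART or a long-input variant):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")  # placeholder checkpoint

section = "..."  # one paper section, kept within the model's input length limit
result = summarizer(section, max_length=200, min_length=60, do_sample=False)
print(result[0]["summary_text"])
```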
2023-08-08T10:10:06
https://www.reddit.com/r/LocalLLaMA/comments/15ldw0w/what_is_the_most_effective_language_model_for/
psj_2908
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ldw0w
false
null
t3_15ldw0w
/r/LocalLLaMA/comments/15ldw0w/what_is_the_most_effective_language_model_for/
false
false
self
1
null
Using HF Transformer APIs for local use of models like "TheBloke/Llama-2-7B-Chat-GGML"
1
[removed]
2023-08-08T10:52:17
https://www.reddit.com/r/LocalLLaMA/comments/15leqpa/using_hf_transformer_apis_for_local_use_of_models/
m_k_johnson
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15leqpa
false
null
t3_15leqpa
/r/LocalLLaMA/comments/15leqpa/using_hf_transformer_apis_for_local_use_of_models/
false
false
self
1
{'enabled': False, 'images': [{'id': 'TS8j-IHmN8kpOvNSHQflSeOArGV9aYAaVmSnkggMS0U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/arhy915TK3EgvCgI0MmxciY1t5u4AHjz8DmH-myuWrA.jpg?width=108&crop=smart&auto=webp&s=48b677f91db4cbe41319e587f30b415ba58782d7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/arhy915TK3EgvCgI0MmxciY1t5u4AHjz8DmH-myuWrA.jpg?width=216&crop=smart&auto=webp&s=66c18db07742b8b4621e1fe92b632b38829185bc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/arhy915TK3EgvCgI0MmxciY1t5u4AHjz8DmH-myuWrA.jpg?width=320&crop=smart&auto=webp&s=910b062c09f481e72e563646b4f89b8b8d8731f6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/arhy915TK3EgvCgI0MmxciY1t5u4AHjz8DmH-myuWrA.jpg?width=640&crop=smart&auto=webp&s=c6ee42d59deb0a27a1921431e8eb886b6dc58d3b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/arhy915TK3EgvCgI0MmxciY1t5u4AHjz8DmH-myuWrA.jpg?width=960&crop=smart&auto=webp&s=61a6994eb8bd0f40b8474fc9f59d22f6ecce308d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/arhy915TK3EgvCgI0MmxciY1t5u4AHjz8DmH-myuWrA.jpg?width=1080&crop=smart&auto=webp&s=a13bb41fbfea384c96d19e897a16cd8447b864da', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/arhy915TK3EgvCgI0MmxciY1t5u4AHjz8DmH-myuWrA.jpg?auto=webp&s=c6684bc3eba63191f50c5b7b675497c0289905ec', 'width': 1200}, 'variants': {}}]}
Inference Speed for Llama 2 70b on A6000 with Exllama - Need Suggestions!
8
Hello everyone, I'm currently running Llama-2 70B on an A6000 GPU using ExLlama, and I'm achieving an average inference speed of 10 t/s, with peaks up to 13 t/s. I'm wondering if there's any way to further optimize this setup to increase the inference speed. Has anyone here had experience with this setup or similar configurations? I'd love to hear any suggestions, tips, or best practices that could help me boost the performance. Thanks in advance for your insights! Edit: I'm using text-generation-webui with max_seq_len 4096 and alpha_value 2.
2023-08-08T11:05:57
https://www.reddit.com/r/LocalLLaMA/comments/15lf119/inference_speed_for_llama_2_70b_on_a6000_with/
Used_Carpenter_6674
self.LocalLLaMA
2023-08-08T11:12:02
0
{}
15lf119
false
null
t3_15lf119
/r/LocalLLaMA/comments/15lf119/inference_speed_for_llama_2_70b_on_a6000_with/
false
false
self
8
null
chronos-hermes-13b-v2-GPTQ - It's here, how is it?
1
Here it is folks: [https://huggingface.co/Austism/chronos-hermes-13b-v2-GPTQ](https://huggingface.co/Austism/chronos-hermes-13b-v2-GPTQ) or [https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML) How are people getting on with it for role play? I'm tempted to give it a try but I assume it only has 4k context. Hoping that someone will make a higher context version? Still... post your experiences. Is it better than the venerable v1? (which I'm still using!)
2023-08-08T11:11:15
https://www.reddit.com/r/LocalLLaMA/comments/15lf4vo/chronoshermes13bv2gptq_its_here_how_is_it/
CasimirsBlake
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15lf4vo
false
null
t3_15lf4vo
/r/LocalLLaMA/comments/15lf4vo/chronoshermes13bv2gptq_its_here_how_is_it/
false
false
self
1
{'enabled': False, 'images': [{'id': '5lq-vTKb8WPoIHJIEwLDWm-pYzECuhBMTlOVwcxpDrI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7MHHoCMLXs6zPUkx4XnSTBQ9FtOA6uFtAMgStCY2Jws.jpg?width=108&crop=smart&auto=webp&s=cb46f17a656cf4e4df0ca8a270ea0958d90cd055', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/7MHHoCMLXs6zPUkx4XnSTBQ9FtOA6uFtAMgStCY2Jws.jpg?width=216&crop=smart&auto=webp&s=2fa89fcdce3609e231bd45051f854daf612ae4b8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/7MHHoCMLXs6zPUkx4XnSTBQ9FtOA6uFtAMgStCY2Jws.jpg?width=320&crop=smart&auto=webp&s=aaac478b7016f3474de7dfc01668d075dbb90401', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/7MHHoCMLXs6zPUkx4XnSTBQ9FtOA6uFtAMgStCY2Jws.jpg?width=640&crop=smart&auto=webp&s=0ef64b378b346d29602556575f758ae7bd73248c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/7MHHoCMLXs6zPUkx4XnSTBQ9FtOA6uFtAMgStCY2Jws.jpg?width=960&crop=smart&auto=webp&s=3a5a9cb8945cb88d592bb47c9cc0e82c59baeaaa', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/7MHHoCMLXs6zPUkx4XnSTBQ9FtOA6uFtAMgStCY2Jws.jpg?width=1080&crop=smart&auto=webp&s=c95f299563fcb20ca1e2f22bc46456f1b225768a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/7MHHoCMLXs6zPUkx4XnSTBQ9FtOA6uFtAMgStCY2Jws.jpg?auto=webp&s=898083f327649ebaf016422f99ebbaa5f46b21ab', 'width': 1200}, 'variants': {}}]}
I will help you set up local Llama for free.
1
A few of my friends asked me to set it up for them. I've documented the process for the setup so I can do this pretty fast for others as well. I'm trying to gauge the market on whether consulting people on Llama might be a viable market. So, I get plenty out of it as well. If you're interested, you can comment or DM me.
2023-08-08T12:07:54
https://www.reddit.com/r/LocalLLaMA/comments/15lgcpf/i_will_help_you_set_up_local_llama_for_free/
middlenameishardwork
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15lgcpf
false
null
t3_15lgcpf
/r/LocalLLaMA/comments/15lgcpf/i_will_help_you_set_up_local_llama_for_free/
false
false
self
1
null
Teaching Llama to reason in another language help!
1
Hi r/LocalLLaMA,

I have been tinkering with Ooba and API calls for a while now, and have hit a bit of a wall. My use case is corporate and not too out there: given transcripts of business calls, I should be able to answer questions and deliver summaries of the conversation. All easy with LangChain, vector databases, and iterative summarization. My only issue is that this is all happening in Brazilian Portuguese. OpenAI models all have a similar level of intelligence and coherence in non-English languages, but Llama seems to fail miserably at this.

There was an attempt to teach Llama 1 Portuguese: [https://github.com/22-hours/cabrita](https://github.com/22-hours/cabrita), so I used the same dataset on Llama2-13B-chat to update the project, but like some of you have been experiencing, the model goes off its rocker after around 100 tokens, doesn't know when to stop, often lapses into English while still being correct, etc.

Do any of you have experience teaching an LLM a different language? I have read that this happens mainly at foundational training, and that LoRAs can't really imbue the model with reasoning skills in a new language. I was thinking of some possible options:

* Use a larger dataset which successfully created a good finetune (e.g. Vicuna) and translate it to PT-BR using ChatGPT (expensive; a rough sketch of this is below).
* Transcribe all calls in Portuguese and English, do all prompts and reasoning in English, and only translate to Portuguese at the end.

I really appreciate any answers or further points you may have, as I've trained maybe 10-20 LoRAs at this point with no real improvement.

P.S.: After how many steps do you usually stop LoRA finetuning? I seem to consistently get negligible loss improvements after around 100, and my runs that went far beyond that and probably overfitted only output a bunch of newlines regardless of the prompt.
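A rough sketch of the first option above (translating an instruction dataset to PT-BR with the OpenAI API, using the pre-1.0 `openai` client that was current at the time); file names, field names, and the prompt are placeholders:

```python
import json
import openai  # expects OPENAI_API_KEY in the environment

def translate(text: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Translate to Brazilian Portuguese. Preserve formatting."},
            {"role": "user", "content": text},
        ],
        temperature=0,
    )
    return resp["choices"][0]["message"]["content"]

with open("instructions_en.json") as f:              # placeholder dataset file
    rows = json.load(f)

for row in rows:
    for key in ("instruction", "input", "output"):   # assumed Alpaca-style fields
        if row.get(key):
            row[key] = translate(row[key])

with open("instructions_ptbr.json", "w") as f:
    json.dump(rows, f, ensure_ascii=False, indent=2)
```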
2023-08-08T12:22:12
https://www.reddit.com/r/LocalLLaMA/comments/15lgoo7/teaching_llama_to_reason_in_another_language_help/
Puzzled_Chemist_279
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15lgoo7
false
null
t3_15lgoo7
/r/LocalLLaMA/comments/15lgoo7/teaching_llama_to_reason_in_another_language_help/
false
false
self
1
{'enabled': False, 'images': [{'id': 'h1WdwUjHD04AYItpD2-q7M_23Em4OEh1IeVUkIzKuOU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/i79qaNEIF-LEwnatW8R4IuTNz4GWElpPqvQp8KI8Oy0.jpg?width=108&crop=smart&auto=webp&s=2da3f5cd5c1f35bc9151f780f3047ed450916623', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/i79qaNEIF-LEwnatW8R4IuTNz4GWElpPqvQp8KI8Oy0.jpg?width=216&crop=smart&auto=webp&s=a55fdbd2ab5090400f5b834d104030497d24d62a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/i79qaNEIF-LEwnatW8R4IuTNz4GWElpPqvQp8KI8Oy0.jpg?width=320&crop=smart&auto=webp&s=f7f9cf17c676653eb7f1cb3f6ec1797dc22fa344', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/i79qaNEIF-LEwnatW8R4IuTNz4GWElpPqvQp8KI8Oy0.jpg?width=640&crop=smart&auto=webp&s=a6e11c0535fe96c1debc1884e29519842c3debfe', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/i79qaNEIF-LEwnatW8R4IuTNz4GWElpPqvQp8KI8Oy0.jpg?width=960&crop=smart&auto=webp&s=1aaca8544076713d4ff630bbc61d117db16581f4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/i79qaNEIF-LEwnatW8R4IuTNz4GWElpPqvQp8KI8Oy0.jpg?width=1080&crop=smart&auto=webp&s=19e7f2c78b2241c1f8fff6c7b4ba2dc6edb0207d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/i79qaNEIF-LEwnatW8R4IuTNz4GWElpPqvQp8KI8Oy0.jpg?auto=webp&s=a7edbad9b190ec1977da916d053d8f2c3d77ead5', 'width': 1200}, 'variants': {}}]}
Lora on top of lora-merged base model?
1
My intuition is that it should be technically trivial after the merge, but not sure if anyone actually tried it. Has anyone tried lora- or qlora-ing on a previously lora-ed model? Maybe it’s possible but not recommended?
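For what it's worth, the mechanics are straightforward with peft: merge the first adapter into the base weights, then treat the result as a normal checkpoint and attach a fresh LoRA. Paths and hyperparameters below are placeholders (a sketch, not a recommendation either way):

```python
from peft import LoraConfig, PeftModel, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
merged = PeftModel.from_pretrained(base, "path/to/first-lora").merge_and_unload()
merged.save_pretrained("llama2-7b-plus-lora1")   # optional: persist the merged weights

# Second round: a brand-new adapter trained on top of the merged model.
model = get_peft_model(merged, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))
```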
2023-08-08T12:33:40
https://www.reddit.com/r/LocalLLaMA/comments/15lgy72/lora_on_top_of_loramerged_base_model/
EntertainmentBroad43
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15lgy72
false
null
t3_15lgy72
/r/LocalLLaMA/comments/15lgy72/lora_on_top_of_loramerged_base_model/
false
false
self
1
null
Will training a LoRA help an AI emulate a specific writer's style?
1
I'm interested in creating LoRAs for specific authors so I can get an AI to emulate their style. Most immediately, I've been looking at Nabokov and Hemingway, specifically, because their writing styles are distinctive and immediately recognizable. I've trained a LoRA on a dataset of three of Nabokov's books, and tried a number of different bake settings, but the effect seems very minimal, and possibly even a placebo if I'm being honest. I think my most drastic training attempt was with a batch size of 256, a rank of 32, 20 epochs, and a 1e-6 learning rate, but even then I can't quite tell if anything's happening. Do I need different settings, or is this just not a process that's going to work to begin with? I've been using all three of the new Llama-2 models in this process and training on different models didn't seem to change anything either.
2023-08-08T12:45:28
https://www.reddit.com/r/LocalLLaMA/comments/15lh7wx/will_training_a_lora_help_an_ai_emulate_a/
tenmileswide
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15lh7wx
false
null
t3_15lh7wx
/r/LocalLLaMA/comments/15lh7wx/will_training_a_lora_help_an_ai_emulate_a/
false
false
self
1
null
ggml.js: Serverless AI Inference on browser with Web Assembly
1
[removed]
2023-08-08T12:49:18
https://www.reddit.com/r/LocalLLaMA/comments/15lhb0u/ggmljs_serverless_ai_inference_on_browser_with/
AnonymousD3vil
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15lhb0u
false
null
t3_15lhb0u
/r/LocalLLaMA/comments/15lhb0u/ggmljs_serverless_ai_inference_on_browser_with/
false
false
self
1
{'enabled': False, 'images': [{'id': '4fC4vd6ed9N7BYtcZcg9XPJ6-FWYZfbxrXGLCVtQISQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/iZahc4_iBp0yJhz_HToud8tCDstNjttaYz0KaDSTtwg.jpg?width=108&crop=smart&auto=webp&s=a99695e720ab200453487046768d233035ccb7d5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/iZahc4_iBp0yJhz_HToud8tCDstNjttaYz0KaDSTtwg.jpg?width=216&crop=smart&auto=webp&s=0e6c1414fb1d52a3970866162b6c53df9c568156', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/iZahc4_iBp0yJhz_HToud8tCDstNjttaYz0KaDSTtwg.jpg?width=320&crop=smart&auto=webp&s=5f98061347f0fd8b2c8390165ef9b33fd68801aa', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/iZahc4_iBp0yJhz_HToud8tCDstNjttaYz0KaDSTtwg.jpg?width=640&crop=smart&auto=webp&s=24e9789252cafab68704651b9a99a08b708f5cb9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/iZahc4_iBp0yJhz_HToud8tCDstNjttaYz0KaDSTtwg.jpg?width=960&crop=smart&auto=webp&s=d20df18f86876cb3e409418e1ea4570cf5c636ed', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/iZahc4_iBp0yJhz_HToud8tCDstNjttaYz0KaDSTtwg.jpg?width=1080&crop=smart&auto=webp&s=ec4046c6276af5c4334ecd6d97cb032256849bc8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/iZahc4_iBp0yJhz_HToud8tCDstNjttaYz0KaDSTtwg.jpg?auto=webp&s=648667a349adbeab05eb87dc83baedadb5b1020d', 'width': 1200}, 'variants': {}}]}
Hosted nous hermes?
1
I know this is local llama, but sometimes you wanna move from local to hosted. Is there anywhere that these models can be hosted cheaply?
2023-08-08T13:03:48
https://www.reddit.com/r/LocalLLaMA/comments/15lhnol/hosted_nous_hermes/
TopperBowers
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15lhnol
false
null
t3_15lhnol
/r/LocalLLaMA/comments/15lhnol/hosted_nous_hermes/
false
false
self
1
null
Is there an option to teach llama from GitHub?
1
[removed]
2023-08-08T13:07:57
https://www.reddit.com/r/LocalLLaMA/comments/15lhqsi/there_is_an_option_to_teach_llama_from_github/
Agreeable_Fun7280
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15lhqsi
false
null
t3_15lhqsi
/r/LocalLLaMA/comments/15lhqsi/there_is_an_option_to_teach_llama_from_github/
false
false
default
1
null
Big Model Comparison/Test (13 models tested)
1
Many interesting models have been released lately, and I tested most of them. Instead of keeping my observations to myself, I'm sharing my notes with you all. Looking forward to your comments, especially if you have widely different experiences, so I may go back and retest some models with different settings.

Here's how I evaluated these:

- Same conversation with all models, [SillyTavern](https://github.com/SillyTavern/SillyTavern) frontend, [KoboldCpp](https://github.com/LostRuins/koboldcpp) backend, GGML q5_K_M, deterministic settings, > 22 messages, going to full 4K context, noting especially good or bad responses.

So here's the list of models and my notes plus my very personal rating (➕ = worth a try, ➖ = disappointing, ❌ = unusable):

- ➕ **[airoboros-l2-13b-gpt4-2.0](https://huggingface.co/TheBloke/airoboros-l2-13b-gpt4-2.0-GGML)**: Talked without emoting, terse/boring prose, wrote what User does, exited scene without completion, got confused about who's who and anatomy, repetitive later. But detailed gore and surprisingly funny sense of humor!
  - Also tested with Storywriter (non-deterministic, best of 3): Little emoting, multiple long responses (> 300 limit), sometimes funny, but mentioned boundaries/safety, ended RP by leaving multiple times, had to ask for detailed descriptions, got confused about who's who and anatomy.
- ➖ **[airoboros-l2-13b-gpt4-m2.0](https://huggingface.co/TheBloke/airoboros-l2-13b-gpt4-m2.0-GGML)**: Listed harm to self or others as limit, terse/boring prose, got confused about who's who and anatomy, talked to itself, repetitive later. Scene was good, but only after asking for description. Almost the same as the previous model, but less smart.
  - Also tested with Storywriter (non-deterministic, best of 3): Less smart, logic errors, very short responses.
- ➖ **[Chronos-13B-v2](https://huggingface.co/TheBloke/Chronos-13B-v2-GGML)**: Got confused about who's who, over-focused on one plot point early on, vague, stating options instead of making choices, seemed less smart.
- ➕ **[Chronos-Hermes-13B-v2](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML)**: More storytelling than chatting, sometimes speech inside actions, not as smart as Nous-Hermes-Llama2, didn't follow instructions that well. But nicely descriptive!
- ➖ **[Hermes-LLongMA-2-13B-8K](https://huggingface.co/TheBloke/Hermes-LLongMA-2-13B-8K-GGML)**: Doesn't seem as eloquent or smart as regular Hermes, did less emoting, got confused, wrote what User does, showed misspellings. SCALING ISSUE? Repetition issue after just 14 messages!
- ➖ **[Huginn-13B-GGML](https://huggingface.co/TheBloke/Huginn-13B-GGML)**: Past tense actions annoyed me! Didn't test further!
- ❌ **[13B-Legerdemain-L2](https://huggingface.co/TheBloke/13B-Legerdemain-L2-GGML)**: Started hallucinating and produced an extremely long monologue right after greeting. Unusable!
- ➖ **[OpenAssistant-Llama2-13B-Orca-8K-3319](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GGML)**: Quite smart, but eventually got confused about who's who and anatomy, mixing up people and instructions, went OOC, gave warnings about the graphic nature of some events, some repetition later, AI assistant bleed-through.
- ❌ **[OpenAssistant-Llama2-13B-Orca-v2-8K-3166](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-v2-8K-3166-GGML)**: EOS token triggered from start, unusable! Other interactions caused rambling.
- ➕ **[OpenChat_v3.2](https://huggingface.co/TheBloke/OpenChat_v3.2-GGML)**: Surprisingly good descriptions! Took action-emoting from greeting example, but got confused about who's who, repetitive emoting later.
- ➖ **[TheBloke/OpenOrcaxOpenChat-Preview2-13B](https://huggingface.co/TheBloke/OpenOrcaxOpenChat-Preview2-13B-GGML)**: Talked without emoting, sudden out-of-body experience, long talk, little content, boring.
- ❌ **[qCammel-13](https://huggingface.co/TheBloke/qCammel-13-GGML)**: Surprisingly good descriptions! But extreme repetition made it unusable!
- ➖ StableBeluga-13B: No action-emoting, safety notices and asked for confirmation, mixed up anatomy, repetitive. But good descriptions!

My favorite remains **[Nous-Hermes-Llama2](https://huggingface.co/TheBloke/Nous-Hermes-Llama2-GGML)**, which I tested and compared with **[Redmond-Puffin-13B](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GGML)** [here](https://www.reddit.com/r/LocalLLaMA/comments/158j9r9/nous_hermes_llama2_vs_redmond_puffin_13b/) before.

I think what's really needed for major breakthroughs is a fix for the [Llama 2 repetition issues](https://www.reddit.com/r/LocalLLaMA/comments/155vy0k/llama_2_too_repetitive/) and usable larger contexts (beyond 4K, coherence falls apart fast).
2023-08-08T13:37:51
https://www.reddit.com/r/LocalLLaMA/comments/15lihmq/big_model_comparisontest_13_models_tested/
WolframRavenwolf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15lihmq
false
null
t3_15lihmq
/r/LocalLLaMA/comments/15lihmq/big_model_comparisontest_13_models_tested/
false
false
self
1
{'enabled': False, 'images': [{'id': 'bDW7jyCB5L7RKBwRUqrzWSn3bIb_Szu_GogYRebiCjw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=108&crop=smart&auto=webp&s=22d2e1896c94ecebda58fed69478453d4b16fd4f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=216&crop=smart&auto=webp&s=019bd779b582098d4b9aa01b87ee530132195fa6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=320&crop=smart&auto=webp&s=55daeabbed00d9b3c1e7f3207edea4d0a265db39', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=640&crop=smart&auto=webp&s=47d7877d194270162d75f4922c4ecb60b17c101d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=960&crop=smart&auto=webp&s=004f5643d41eee63624b163efc53427073882f4f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=1080&crop=smart&auto=webp&s=e6ee7ad7840a9a71890c76db5e4df6a3f669e762', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?auto=webp&s=44d160d8b5087122f25fba2443dc2c5a77adf472', 'width': 1280}, 'variants': {}}]}
What is a good model for NER inference that fits into 12gb RAM of RTX 3060?
1
Any suggestions for a newbie wanting to try out llama2 on a limited GPU resource workstation? CPU RAM of 176 gb.
2023-08-08T13:50:47
https://www.reddit.com/r/LocalLLaMA/comments/15litek/what_is_a_good_model_for_ner_inference_that_fits/
sbs1799
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15litek
false
null
t3_15litek
/r/LocalLLaMA/comments/15litek/what_is_a_good_model_for_ner_inference_that_fits/
false
false
self
1
null
looking for a guide
1
Does anyone know a comprehensive guide to set up everything to run LocalLLama? [Preferably on a steamdeck (Linux/SteamOs) 😂]
2023-08-08T14:07:22
https://www.reddit.com/r/LocalLLaMA/comments/15lj8vq/looking_for_a_guide/
pussifricker1337
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15lj8vq
false
null
t3_15lj8vq
/r/LocalLLaMA/comments/15lj8vq/looking_for_a_guide/
false
false
self
1
null
Merging base Llama2 LoRA weights into Chat model
1
I have been playing around with LoRA as a way to get knowledge into Llama-2-7B, with some limited success. I was able to achieve some style transfer, but the model still tends to hallucinate. Interestingly enough, by merging base-model LoRA weights trained on a simple autoregressive objective into the Chat model, the limited knowledge and style transfer seemed to work as well. Anyone know why this may be the case, and some ways of going about exploring this further?
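Concretely, the flow described (an adapter trained against the base model, then applied to the chat checkpoint, which shares the same architecture and tokenizer) might look like the sketch below with peft; paths are placeholders.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

chat = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
# Load an adapter that was trained on the *base* Llama-2-7b onto the chat model.
chat_with_lora = PeftModel.from_pretrained(chat, "path/to/base-trained-lora")
chat_merged = chat_with_lora.merge_and_unload()   # fold the adapter into the weights
chat_merged.save_pretrained("llama2-7b-chat-plus-base-lora")
```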
2023-08-08T15:07:45
https://www.reddit.com/r/LocalLLaMA/comments/15lkts4/merging_base_llama2_lora_weights_into_chat_model/
Numerous_Current_298
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15lkts4
false
null
t3_15lkts4
/r/LocalLLaMA/comments/15lkts4/merging_base_llama2_lora_weights_into_chat_model/
false
false
self
1
null
Prospects for future hardware releases
1
Is it likely that AMD or Intel will sense a market opportunity and start shipping affordable cards for those who wish to run local LLMs on a budget? Will future optimizations considerably reduce the requirements to run a model at a given performance level, or are we close to fundamental efficiency? Is there even a way to quantify this?
2023-08-08T15:22:01
https://www.reddit.com/r/LocalLLaMA/comments/15ll7or/prospects_for_future_hardware_releases/
WarmCartoonist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ll7or
false
null
t3_15ll7or
/r/LocalLLaMA/comments/15ll7or/prospects_for_future_hardware_releases/
false
false
self
1
null
Bright Eye: free mobile app that generates art and different forms of text (code, math answers, essays, games, ideas, and more)!
1
Hi all. I’m the cofounder of a startup focused on developing the AI super app called “Bright Eye”, a multipurpose AI product that generates and analyzes content. One of its interesting use cases is helping students study, people plan, and offering general advice. As the title puts it, it’s capable of generating almost anything, so the use-cases in terms of productivity isn’t confined to only those above, it can apply however you see fit. We run on GPT-4, stable diffusion, and Microsoft azure cognitive services. Check us out below, we’re looking for advice on the functionality and design of the app (and possibly some longtime users): https://apps.apple.com/us/app/bright-eye/id1593932475
2023-08-08T16:10:31
https://www.reddit.com/r/LocalLLaMA/comments/15lmiyk/bright_eye_free_mobile_app_that_generates_art_and/
EtelsonRecomputing
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15lmiyk
false
null
t3_15lmiyk
/r/LocalLLaMA/comments/15lmiyk/bright_eye_free_mobile_app_that_generates_art_and/
false
false
self
1
{'enabled': False, 'images': [{'id': '5U3w0HRUOPA7NaUGZ4RL_8wjJYuCeus8Xsjl6SQQYik', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/X0kpF-8iYyTAFvs0cIVB-TD9XiogQBV5hJAL_ZbkVI4.jpg?width=108&crop=smart&auto=webp&s=7db2cfa683e5720dee090d1f221bd54e8df0b627', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/X0kpF-8iYyTAFvs0cIVB-TD9XiogQBV5hJAL_ZbkVI4.jpg?width=216&crop=smart&auto=webp&s=bbd373d36ead27cd6671a23121b1fdfd15fddaa3', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/X0kpF-8iYyTAFvs0cIVB-TD9XiogQBV5hJAL_ZbkVI4.jpg?width=320&crop=smart&auto=webp&s=54e5172709b995922fb1d12b22e3d35ad5e5d6cd', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/X0kpF-8iYyTAFvs0cIVB-TD9XiogQBV5hJAL_ZbkVI4.jpg?width=640&crop=smart&auto=webp&s=093fc958654b88d8708047d87320af5aae1a6f6e', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/X0kpF-8iYyTAFvs0cIVB-TD9XiogQBV5hJAL_ZbkVI4.jpg?width=960&crop=smart&auto=webp&s=0aacb4d9eae046d0fe454180c95b174fcd30df62', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/X0kpF-8iYyTAFvs0cIVB-TD9XiogQBV5hJAL_ZbkVI4.jpg?width=1080&crop=smart&auto=webp&s=471afa92ea2d771f53b3ecba6d64378ffe7678ff', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/X0kpF-8iYyTAFvs0cIVB-TD9XiogQBV5hJAL_ZbkVI4.jpg?auto=webp&s=0d339af183f91e074b1e2e1a19e16f9179d4d0da', 'width': 1200}, 'variants': {}}]}
New Code Generation Model from Stability AI with 16K context
1
“Stability AI has just announced the release of StableCode, its very first LLM generative AI product for coding. This product is designed to assist programmers with their daily work while also providing a great learning tool for new developers ready to take their skills to the next level.”
2023-08-08T16:16:17
https://twitter.com/stabilityai/status/1688931312122675200?s=46&t=4Lg1z9tXUANCKLiHwRSk_A
Acrobatic-Site2065
twitter.com
1970-01-01T00:00:00
0
{}
15lmojm
false
{'oembed': {'author_name': 'Stability AI', 'author_url': 'https://twitter.com/StabilityAI', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">🚀Exciting news! Stability AI has launched StableCode, the revolutionary generative AI LLM for coding!<br><br>💡 Developers, get ready to level up your coding game! <a href="https://twitter.com/hashtag/AI?src=hash&amp;ref_src=twsrc%5Etfw">#AI</a> <a href="https://twitter.com/hashtag/Coding?src=hash&amp;ref_src=twsrc%5Etfw">#Coding</a> <a href="https://twitter.com/hashtag/StableCode?src=hash&amp;ref_src=twsrc%5Etfw">#StableCode</a> <a href="https://twitter.com/hashtag/StabilityAI?src=hash&amp;ref_src=twsrc%5Etfw">#StabilityAI</a><a href="https://t.co/XFrV36JMMu">https://t.co/XFrV36JMMu</a></p>&mdash; Stability AI (@StabilityAI) <a href="https://twitter.com/StabilityAI/status/1688931312122675200?ref_src=twsrc%5Etfw">August 8, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/StabilityAI/status/1688931312122675200', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_15lmojm
/r/LocalLLaMA/comments/15lmojm/new_code_generation_model_from_stability_ai_with/
false
false
https://a.thumbs.redditm…Xy1rFyTGrSI0.jpg
1
{'enabled': False, 'images': [{'id': 'LaKRDKsNPue9HGl9OWKQdS_cFT1BV-KJj5nRYgR0qFI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/CbUSt5r8wlKyvA-VX34Doxh3vW4JmQzUb438c-p23U8.jpg?width=108&crop=smart&auto=webp&s=a6816300d23fc456382dc7a90c01c84fed6f8fda', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/CbUSt5r8wlKyvA-VX34Doxh3vW4JmQzUb438c-p23U8.jpg?auto=webp&s=7ca07363e071db9939ae40f8e17d894fdb7268a7', 'width': 140}, 'variants': {}}]}
Wizard Coder not Inserting Newlines
1
I'm using [Wizard Coder](https://huggingface.co/michaelfeil/ct2fast-WizardCoder-15B-V1.0) for code completion, but I'm finding it regularly doesn't insert any newlines in the code it generates. For example: "def power(x, y" autocompletes with: "): if y == 0: return 1 else: return x * power(x, y-1) print(power(2, 3)) # Output: 8 print(factorial(5)" As you can see, all of this is on a single line. Has anyone else seen this issue, and is there any solution? Is this just a pitfall of the current best open-source code completion models?
2023-08-08T16:23:27
https://www.reddit.com/r/LocalLLaMA/comments/15lmvbp/wizard_coder_not_inserting_newlines/
kintrith
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15lmvbp
false
null
t3_15lmvbp
/r/LocalLLaMA/comments/15lmvbp/wizard_coder_not_inserting_newlines/
false
false
self
1
{'enabled': False, 'images': [{'id': 'zQBjqlzNyNQuSWeFUBZpTPHXCSbNON8vgpVhHD9WPm8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ODozVdhO7r6t7RDP9GhT3aiYYude9LbRPFDlup4b5K4.jpg?width=108&crop=smart&auto=webp&s=8b2f2daec92b62d36a81d895339703f242dbd6f6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ODozVdhO7r6t7RDP9GhT3aiYYude9LbRPFDlup4b5K4.jpg?width=216&crop=smart&auto=webp&s=4a0e8a36de83c203773fd328c023da81e331e5b9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ODozVdhO7r6t7RDP9GhT3aiYYude9LbRPFDlup4b5K4.jpg?width=320&crop=smart&auto=webp&s=1cda842e84c5757b997b2ee440a438d6ef4aebb5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ODozVdhO7r6t7RDP9GhT3aiYYude9LbRPFDlup4b5K4.jpg?width=640&crop=smart&auto=webp&s=5ef54a38d3f3c712e9b5d712789af9b49bbff75e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ODozVdhO7r6t7RDP9GhT3aiYYude9LbRPFDlup4b5K4.jpg?width=960&crop=smart&auto=webp&s=872256b2ce20588d463ac877cb646c6c3c348914', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ODozVdhO7r6t7RDP9GhT3aiYYude9LbRPFDlup4b5K4.jpg?width=1080&crop=smart&auto=webp&s=22f4378f6cc9de281f7f6094453ee291a93bd6e8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ODozVdhO7r6t7RDP9GhT3aiYYude9LbRPFDlup4b5K4.jpg?auto=webp&s=05708a208b5be44e60aa1778bb679841830f3765', 'width': 1200}, 'variants': {}}]}
NVIDIA Unveils Next-Generation GH200 Grace Hopper Superchip
1
2023-08-08T16:50:52
https://nvidianews.nvidia.com/news/gh200-grace-hopper-superchip-with-hbm3e-memory
fallingdowndizzyvr
nvidianews.nvidia.com
1970-01-01T00:00:00
0
{}
15lnktt
false
null
t3_15lnktt
/r/LocalLLaMA/comments/15lnktt/nvidia_unveils_nextgeneration_gh200_grace_hopper/
false
false
https://b.thumbs.redditm…iP1Z9xquMYGQ.jpg
1
{'enabled': False, 'images': [{'id': 'LNUxxo97U9-YS6WNrzJuSjRteeP8s-F1K2gN6F94lGg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/k92uNY7hMhdE2QVHzU6QBS2A6dldLUR1nL61W_KY1Cc.jpg?width=108&crop=smart&auto=webp&s=2a10af4e6dc90a754e0b90ea9d3d4f68136ad005', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/k92uNY7hMhdE2QVHzU6QBS2A6dldLUR1nL61W_KY1Cc.jpg?width=216&crop=smart&auto=webp&s=ed922c09000aea72b8979bfcbe132c8660b8c8e4', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/k92uNY7hMhdE2QVHzU6QBS2A6dldLUR1nL61W_KY1Cc.jpg?width=320&crop=smart&auto=webp&s=9e2db4cf3e4f5c1a01eae5b4b46186d0f36b3f9a', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/k92uNY7hMhdE2QVHzU6QBS2A6dldLUR1nL61W_KY1Cc.jpg?width=640&crop=smart&auto=webp&s=86c7d77fae400cf3822f011c2cff7e8780682f8b', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/k92uNY7hMhdE2QVHzU6QBS2A6dldLUR1nL61W_KY1Cc.jpg?width=960&crop=smart&auto=webp&s=7d4b2a52743e5caaa328575a1a26571ff4eceb0c', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/k92uNY7hMhdE2QVHzU6QBS2A6dldLUR1nL61W_KY1Cc.jpg?width=1080&crop=smart&auto=webp&s=eae3e1b4ab6aeddccb16000f5f7bb2273dd0ec74', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/k92uNY7hMhdE2QVHzU6QBS2A6dldLUR1nL61W_KY1Cc.jpg?auto=webp&s=f296c837749cce4fbb566943da19411ea3146f21', 'width': 1920}, 'variants': {}}]}
noob starting with llama2
1
Trying to use Llama 2 to create an offline chatbot (using an M1 Mac). I read the getting started page and guide and found that I can use llama.cpp and GGML models in order to run this on a CPU-based machine. Though the model list was overwhelming, I decided for now to go forward with Llama-2-7B-Chat by TheBloke. Can anyone give me a brief overview? This would be really easy for you, but for someone like me who is just getting started and excited, I want to get going. If you could give me a brief idea of how the workflow will go, or point me in the right direction, it would be great. Thank you!
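One possible workflow, as a sketch (assuming the llama-cpp-python bindings and one of TheBloke's GGML files; the exact file name is illustrative): download a quantized .bin from the Llama-2-7B-Chat-GGML repo, then load and prompt it on the M1's CPU/Metal.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="./llama-2-7b-chat.ggmlv3.q4_K_M.bin", n_ctx=2048)  # downloaded file
out = llm("[INST] Write a haiku about llamas. [/INST]", max_tokens=128)
print(out["choices"][0]["text"])
```

The same file also works with the llama.cpp command-line binary or front-ends like KoboldCpp and text-generation-webui.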
2023-08-08T16:54:53
https://www.reddit.com/r/LocalLLaMA/comments/15lnoib/noob_starting_with_llama2/
HawkingRadiation42
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15lnoib
false
null
t3_15lnoib
/r/LocalLLaMA/comments/15lnoib/noob_starting_with_llama2/
false
false
self
1
null
has anyone tried Qwen-7B-Chat?
1
Came across this today and the benchmarks are really surprising-- [Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat) Especially the [tool usage](https://huggingface.co/Qwen/Qwen-7B-Chat#%E5%B7%A5%E5%85%B7%E4%BD%BF%E7%94%A8%E8%83%BD%E5%8A%9B%E7%9A%84%E8%AF%84%E6%B5%8B%EF%BC%88tool-usage%EF%BC%89) benchmark which is comparable to gpt-3.5-turbo.
2023-08-08T18:20:20
https://www.reddit.com/r/LocalLLaMA/comments/15lpyto/has_anyone_tried_qwen7bchat/
LyPreto
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15lpyto
false
null
t3_15lpyto
/r/LocalLLaMA/comments/15lpyto/has_anyone_tried_qwen7bchat/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Xj-CDOmBnVRen-RT9Mv5LhiNSsTyclH3JlnXqsFNCMY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YH6Y7A8j_IK2zXrAcLgbhR5Gr1bxnbtlwSKuT1qOKHI.jpg?width=108&crop=smart&auto=webp&s=eccdbe9fd0b48f42a9ba66f6736fb7c7097d957d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/YH6Y7A8j_IK2zXrAcLgbhR5Gr1bxnbtlwSKuT1qOKHI.jpg?width=216&crop=smart&auto=webp&s=72e7ef41d59ec09360dfed779acc64264f01b288', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/YH6Y7A8j_IK2zXrAcLgbhR5Gr1bxnbtlwSKuT1qOKHI.jpg?width=320&crop=smart&auto=webp&s=72c8bc711cd3de73277b55280c514bc2b6a7d840', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/YH6Y7A8j_IK2zXrAcLgbhR5Gr1bxnbtlwSKuT1qOKHI.jpg?width=640&crop=smart&auto=webp&s=5603bea044b05b3c3595e10523cfd5ff60515669', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/YH6Y7A8j_IK2zXrAcLgbhR5Gr1bxnbtlwSKuT1qOKHI.jpg?width=960&crop=smart&auto=webp&s=da0ff9a21d3f95894e40628bd6ea5fa9505781d1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/YH6Y7A8j_IK2zXrAcLgbhR5Gr1bxnbtlwSKuT1qOKHI.jpg?width=1080&crop=smart&auto=webp&s=cd723b065d7a2c441e8dcbf1678a9715dfcf4fd7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/YH6Y7A8j_IK2zXrAcLgbhR5Gr1bxnbtlwSKuT1qOKHI.jpg?auto=webp&s=2fc622f56e6df2023b88657888a0026ad59cde98', 'width': 1200}, 'variants': {}}]}
Can CPU llama.cpp get close to over 1t/s?
1
What might an EPYC Rome or Milan 16-core do in tokens/sec with 200 GB/s memory bandwidth and, let's say, 256 GB of memory? Is an EPYC Genoa with 12-channel DDR5 at 460 GB/s needed for just 1 token/s?
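A back-of-envelope estimate (an approximation, not a benchmark): single-stream CPU decoding is roughly memory-bandwidth bound, so tokens/sec is capped by bandwidth divided by the bytes read per token, and each generated token reads the whole quantized model once.

```python
model_gb = 39    # ~70B parameters at q4_K_M is roughly 39-40 GB on disk
bw_rome  = 200   # GB/s, 8-channel DDR4 (Rome/Milan)
bw_genoa = 460   # GB/s, 12-channel DDR5 (Genoa)

print(f"Rome/Milan upper bound: {bw_rome / model_gb:.1f} tok/s")   # ~5 tok/s
print(f"Genoa upper bound:      {bw_genoa / model_gb:.1f} tok/s")  # ~12 tok/s
# Real throughput is typically well below these ceilings once compute, NUMA
# effects, and prompt processing are factored in, but > 1 tok/s looks plausible
# on either platform for a quantized 70B model.
```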
2023-08-08T18:55:30
https://www.reddit.com/r/LocalLLaMA/comments/15lqw8n/can_cpu_llamacpp_get_close_to_over_1ts/
HilLiedTroopsDied
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15lqw8n
false
null
t3_15lqw8n
/r/LocalLLaMA/comments/15lqw8n/can_cpu_llamacpp_get_close_to_over_1ts/
false
false
self
1
null
Structured documentation for fine tuning
1
Hey guys! I was wondering: just like LangChain has good, organised documentation on several ways to deal with LLMs, is there any similar documentation on how to fine-tune and run inference with open-source LLMs, covering concepts like quantization, LoRA, QLoRA, etc.?
2023-08-08T19:01:48
https://www.reddit.com/r/LocalLLaMA/comments/15lr2b0/structured_documentation_for_fine_tuning/
Spiritual-Rub925
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15lr2b0
false
null
t3_15lr2b0
/r/LocalLLaMA/comments/15lr2b0/structured_documentation_for_fine_tuning/
false
false
self
1
null
Python LLaMa tokenizer with zero dependencies?
1
[removed]
2023-08-08T19:10:24
https://www.reddit.com/r/LocalLLaMA/comments/15lrarw/python_llama_tokenizer_with_zero_dependencies/
GusPuffy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15lrarw
false
null
t3_15lrarw
/r/LocalLLaMA/comments/15lrarw/python_llama_tokenizer_with_zero_dependencies/
false
false
self
1
{'enabled': False, 'images': [{'id': 'f85dH88dhYQPPafVGTNRvAc-A7RF-lmAfbRxl3qh294', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_yv1UpzqhYc7LSZY2PHM5KrMvXqJO0phh6CWPBW0TFc.jpg?width=108&crop=smart&auto=webp&s=e615a9bf00336f0a058df322754dac21e466b4e2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_yv1UpzqhYc7LSZY2PHM5KrMvXqJO0phh6CWPBW0TFc.jpg?width=216&crop=smart&auto=webp&s=137737b46294de2efbca1134201e30fe0fa2912a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_yv1UpzqhYc7LSZY2PHM5KrMvXqJO0phh6CWPBW0TFc.jpg?width=320&crop=smart&auto=webp&s=0a5e6d19ba3e5ff052dfe8b7bc5453a670f834c6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_yv1UpzqhYc7LSZY2PHM5KrMvXqJO0phh6CWPBW0TFc.jpg?width=640&crop=smart&auto=webp&s=633cc5080ba4e33decd0dabc3e53404300dfb460', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_yv1UpzqhYc7LSZY2PHM5KrMvXqJO0phh6CWPBW0TFc.jpg?width=960&crop=smart&auto=webp&s=bd2889648b58201d9d52d43a71b5748600be2bad', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_yv1UpzqhYc7LSZY2PHM5KrMvXqJO0phh6CWPBW0TFc.jpg?width=1080&crop=smart&auto=webp&s=64b5bce29c8673becfa804436c8ea54feb673c1c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_yv1UpzqhYc7LSZY2PHM5KrMvXqJO0phh6CWPBW0TFc.jpg?auto=webp&s=a75a315351aa5de5fdf9a182cfe2a1a4323a8b21', 'width': 1200}, 'variants': {}}]}
A beginner seeking help
1
Hello everyone. I've been closely following all the work you're doing, and I'd like to start as well: setting up my own model, being able to fine-tune it, and maybe even gaining solid skills in the field. I have a good foundation in Python, and I'm persistent when I want to understand something, but right now I'm a bit lost. Despite [the guide](https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/) (which has already helped me quite a bit, I must admit), I'm still struggling, and it's frustrating. I've set up the webUI, but I haven't managed to get a single model working yet. Please, do you know of an accessible tutorial for beginners that explains the things to know and shows the way to quickly become self-sufficient on the subject? I'm the kind of person who learns by watching others do. I'm sure many members of the community have gone through the same stage as me and have tips to share. What life has taught me is that things always seem terribly complicated when they push us out of our comfort zone. Thank you in advance for your help.
2023-08-08T20:36:51
https://www.reddit.com/r/LocalLLaMA/comments/15ltm2z/a_beginner_seeking_for_help/
Orfvr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ltm2z
false
null
t3_15ltm2z
/r/LocalLLaMA/comments/15ltm2z/a_beginner_seeking_for_help/
false
false
self
1
null
Exploring Local Multi-GPU Setup for AI: Harnessing AMD Radeon RX 580 8GB for Efficient AI Model
1
I'm a newcomer to the realm of AI for personal use. I happen to have several AMD Radeon RX 580 8GB GPUs that are currently idle. I'm contemplating assembling a dedicated Linux-based system to run LLaMA locally, and I'm curious whether it's feasible to deploy LLaMA locally with multiple GPUs. If yes, how, and do you have any tips?
2023-08-08T20:43:21
https://www.reddit.com/r/LocalLLaMA/comments/15ltsj1/exploring_local_multigpu_setup_for_ai_harnessing/
OfficialRakma
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ltsj1
false
null
t3_15ltsj1
/r/LocalLLaMA/comments/15ltsj1/exploring_local_multigpu_setup_for_ai_harnessing/
false
false
self
1
null
Local API and Apple shortcuts
1
One of the extensions in text-generation-webui allows for an API. Is it possible to run it locally so I can ask questions through Apple Shortcuts? https://preview.redd.it/cibdq93mjygb1.png?width=672&format=png&auto=webp&s=b1427cce433f8ea61495b3b9a78a19253a73d4e4
2023-08-08T21:51:57
https://www.reddit.com/r/LocalLLaMA/comments/15lvnla/local_api_and_apple_shortcuts/
LegendarySpy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15lvnla
false
null
t3_15lvnla
/r/LocalLLaMA/comments/15lvnla/local_api_and_apple_shortcuts/
false
false
https://b.thumbs.redditm…luc0X-Mn-daY.jpg
1
null
Have you all been guilty of being somewhat religious about specific models in the mist of all the local models coming up left and right ? [model fatigue]
1
Like for me, and I am probably not being objective here but the Nous-Hermes variant regardless of Llama1 or Llama2 is just...really a step above the rest. I think it is just due to me experiencing model fatigue. Do you all have a specific model you all are "religious" about ?
2023-08-08T21:52:43
https://www.reddit.com/r/LocalLLaMA/comments/15lvod0/have_you_all_been_guilty_of_being_somewhat/
Vitamin_C_is_awesome
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15lvod0
false
null
t3_15lvod0
/r/LocalLLaMA/comments/15lvod0/have_you_all_been_guilty_of_being_somewhat/
false
false
self
1
null
What is the best coding model to use directly with VS Code?
1
Is Wizard the best option to compete with CoPilot?
2023-08-08T22:04:03
https://www.reddit.com/r/LocalLLaMA/comments/15lvzfy/what_is_the_best_coding_model_to_use_directly/
SillyLilBear
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15lvzfy
false
null
t3_15lvzfy
/r/LocalLLaMA/comments/15lvzfy/what_is_the_best_coding_model_to_use_directly/
false
false
self
1
null
Confused about the "custom URL provided will remain valid for model downloads for 24 hours to download each model up to 5 times"
1
Hello y'all, I only just now saw the email from Meta I received 6 days ago granting me access to download their LLaMA v2 models. Apparently, within 24 hours, I can download each model only five times. I am just a little confused. Does that mean that after five downloads I have to fill out the form again? I use Google Colab's free tier for all my LLM playing, where I have to keep downloading the models from Hugging Face directly. So I am just worried whether the "custom URL provided will remain valid for model downloads for 24 hours to download each model up to 5 times" is going to be an issue for me. PS: This old PC will definitely not be able to run the model, which is why I use Google Colab.
2023-08-08T22:32:04
https://www.reddit.com/r/LocalLLaMA/comments/15lwpej/confused_about_the_custom_url_provided_will/
ImNotLegitLol
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15lwpej
false
null
t3_15lwpej
/r/LocalLLaMA/comments/15lwpej/confused_about_the_custom_url_provided_will/
false
false
self
1
null
New SillyTavern Release - with proxy replacement!
1
There's a new major version of **[SillyTavern](https://github.com/SillyTavern/SillyTavern)**, my favorite LLM frontend, perfect for chat and roleplay! The new feature I'm most excited about: > **Added settings and instruct presets to imitate simple-proxy for local models** Finally a replacement for the *simple-proxy-for-tavern*! The proxy was a useful third-party app that did some prompt manipulation behind the scenes, leading to better output than without it. However, it hasn't been updated in months and isn't compatible with many of SillyTavern's later features like group chats, objectives, summarization, etc. Now there's finally a built-in alternative: The Instruct Mode preset named "**Roleplay**" basically does the same the proxy did to produce better output. It works with any model, doesn't have to be an instruct model, any chat model works just as well. So I've stopped using the proxy and am not missing it thanks to this preset. And it's nice being able to make adjustments directly within SillyTavern, not having to edit proxy JavaScript files anymore.
2023-08-08T22:36:26
https://www.reddit.com/r/LocalLLaMA/comments/15lwtai/new_sillytavern_release_with_proxy_replacement/
WolframRavenwolf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15lwtai
false
null
t3_15lwtai
/r/LocalLLaMA/comments/15lwtai/new_sillytavern_release_with_proxy_replacement/
false
false
self
1
{'enabled': False, 'images': [{'id': 'bDW7jyCB5L7RKBwRUqrzWSn3bIb_Szu_GogYRebiCjw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=108&crop=smart&auto=webp&s=22d2e1896c94ecebda58fed69478453d4b16fd4f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=216&crop=smart&auto=webp&s=019bd779b582098d4b9aa01b87ee530132195fa6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=320&crop=smart&auto=webp&s=55daeabbed00d9b3c1e7f3207edea4d0a265db39', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=640&crop=smart&auto=webp&s=47d7877d194270162d75f4922c4ecb60b17c101d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=960&crop=smart&auto=webp&s=004f5643d41eee63624b163efc53427073882f4f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=1080&crop=smart&auto=webp&s=e6ee7ad7840a9a71890c76db5e4df6a3f669e762', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?auto=webp&s=44d160d8b5087122f25fba2443dc2c5a77adf472', 'width': 1280}, 'variants': {}}]}
Setting Up a Lightweight Local LLM, Fine-tuning, and Creating an API for Queries
1
Hey fellow Redditors, I've been fascinated by large language models (LLMs) and their potential applications. I'm eager to dive into the world of LLMs by setting up a lightweight local version, fine-tuning it with my own data, and eventually creating an API to query it online. 🚀 However, I'm a bit unsure about the exact steps and tools involved in this process. Here's what I have in mind: Setting Up a Lightweight Local LLM: I'm looking for recommendations on lightweight versions of LLMs that I can run on my local machine. Something that strikes a balance between resource consumption and performance would be ideal. Fine-Tuning with Custom Data: I want to fine-tune the model using my own data to make it more relevant to my specific needs. I've heard about techniques like transfer learning and domain adaptation. Any insights into the tools, datasets, and steps involved in this process would be greatly appreciated. Creating an API for Queries: Once my model is ready, I'd like to create an API that allows me to send queries to the model and receive text generation as output. What are some recommended frameworks or libraries for building such an API? How can I ensure security and efficiency while serving these requests? If you've had experience with any of these steps or can point me in the right direction, I'd be incredibly grateful. Whether it's tutorials, articles, tools, or personal tips, your input will be a valuable resource for me and others who are embarking on a similar journey. Let's discuss and share our insights on how to set up and utilize LLMs to their fullest potential. Thanks in advance for your help and expertise! 🙌📚
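For the API piece specifically, a minimal sketch of what this can look like is below. It assumes llama-cpp-python as the lightweight local backend and FastAPI as the web framework; both choices and the GGML model path are placeholders rather than recommendations, so swap in whatever backend you settle on.

```python
# Minimal local text-generation API (sketch, untested as written).
# Assumes: pip install llama-cpp-python fastapi uvicorn, and a GGML model file on disk.
from fastapi import FastAPI
from pydantic import BaseModel
from llama_cpp import Llama

app = FastAPI()
llm = Llama(model_path="./models/llama-2-7b-chat.ggmlv3.q4_0.bin", n_ctx=2048)  # placeholder path

class Query(BaseModel):
    prompt: str
    max_tokens: int = 256

@app.post("/generate")
def generate(q: Query):
    out = llm(q.prompt, max_tokens=q.max_tokens)
    return {"text": out["choices"][0]["text"]}

# Run with: uvicorn main:app --host 127.0.0.1 --port 8000
```

For serving it beyond localhost, putting the endpoint behind an authenticated reverse proxy and adding rate limiting covers the basic security and efficiency concerns.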
2023-08-08T22:44:03
https://www.reddit.com/r/LocalLLaMA/comments/15lwzz5/setting_up_a_lightweight_local_llm_finetuning_and/
aiCornStar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15lwzz5
false
null
t3_15lwzz5
/r/LocalLLaMA/comments/15lwzz5/setting_up_a_lightweight_local_llm_finetuning_and/
false
false
self
1
null
For those running backends on W10/11
1
So, I noticed that my W11 machine throttled token generation whenever the terminal was running in the background. I'm using kobold.cpp as my backend and saw a 40-50% drop in token generation every time my browser was active instead of the terminal. Apparently changing the system power plan to favor high performance solved it, at least for me. Just FYI. P.S. I might need to switch to Ubuntu or something...
2023-08-08T22:58:27
https://www.reddit.com/r/LocalLLaMA/comments/15lxd09/for_those_running_backends_on_w1011/
nollataulu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15lxd09
false
null
t3_15lxd09
/r/LocalLLaMA/comments/15lxd09/for_those_running_backends_on_w1011/
false
false
self
1
null
Hi sub, i want to build my 2023 PC and i'm thinking in GPU for IA/LLAMA2...
1
[removed]
2023-08-08T22:58:50
https://www.reddit.com/r/LocalLLaMA/comments/15lxddr/hi_sub_i_want_to_build_my_2023_pc_and_im_thinking/
Icy_Sun_4958
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15lxddr
false
null
t3_15lxddr
/r/LocalLLaMA/comments/15lxddr/hi_sub_i_want_to_build_my_2023_pc_and_im_thinking/
false
false
self
1
null
Tried running llama-70b on 126GB of memory; memory overflow. How much memory is necessary ?
1
2023-08-08T23:15:11
https://i.redd.it/zjpv9w8dyygb1.png
MoiSanh
i.redd.it
1970-01-01T00:00:00
0
{}
15lxscb
false
null
t3_15lxscb
/r/LocalLLaMA/comments/15lxscb/tried_running_llama70b_on_126gb_of_memory_memory/
false
false
https://a.thumbs.redditm…XCeoq6RKXpp4.jpg
1
{'enabled': True, 'images': [{'id': 'xStm3HNsKGKSAHNnfzUUE0VN7YieO75cGhirAOOMSxo', 'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/zjpv9w8dyygb1.png?width=108&crop=smart&auto=webp&s=510f4e7b7f7dfb795418e57cc2e407d279acca71', 'width': 108}, {'height': 94, 'url': 'https://preview.redd.it/zjpv9w8dyygb1.png?width=216&crop=smart&auto=webp&s=eff3e39029164836c0c0d65a5d70a9719aa0e908', 'width': 216}, {'height': 140, 'url': 'https://preview.redd.it/zjpv9w8dyygb1.png?width=320&crop=smart&auto=webp&s=1c637b3c527600d93f59fbf61c665d163e3758a9', 'width': 320}, {'height': 281, 'url': 'https://preview.redd.it/zjpv9w8dyygb1.png?width=640&crop=smart&auto=webp&s=6a2c65cfef2d3275916b73080ecb2a0561197980', 'width': 640}, {'height': 422, 'url': 'https://preview.redd.it/zjpv9w8dyygb1.png?width=960&crop=smart&auto=webp&s=5b5dd96b0950e002de94660462d243b2108ee966', 'width': 960}, {'height': 474, 'url': 'https://preview.redd.it/zjpv9w8dyygb1.png?width=1080&crop=smart&auto=webp&s=bacb8396339eb10e3c624a23960ce62d3a73fb3f', 'width': 1080}], 'source': {'height': 1570, 'url': 'https://preview.redd.it/zjpv9w8dyygb1.png?auto=webp&s=f0295991e586c1122aac054a851866eff415a78a', 'width': 3570}, 'variants': {}}]}
GPT4All - Can LocalDocs plugin read HTML files?
1
Used Wget to mass download a wiki. Looking to train a model on the wiki.
2023-08-09T05:24:12
https://www.reddit.com/r/LocalLLaMA/comments/15m5v6s/gpt4all_can_localdocs_plugin_read_html_files/
Rzablio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15m5v6s
false
null
t3_15m5v6s
/r/LocalLLaMA/comments/15m5v6s/gpt4all_can_localdocs_plugin_read_html_files/
false
false
self
1
null
New version of Turbopilot released!
1
New: Refactored + Simplified: The source code has been improved to make it easier to extend and add new models to Turbopilot. The system now supports multiple flavours of model. New: Wizardcoder, Starcoder, Santacoder support - Turbopilot now supports state-of-the-art local code completion models which provide more programming languages and "fill in the middle" support.
2023-08-09T05:57:33
https://github.com/ravenscroftj/turbopilot
Acrobatic-Site2065
github.com
1970-01-01T00:00:00
0
{}
15m6hn1
false
null
t3_15m6hn1
/r/LocalLLaMA/comments/15m6hn1/new_version_of_turbopilot_released/
false
false
https://b.thumbs.redditm…bfpwbjfm8wmM.jpg
1
{'enabled': False, 'images': [{'id': 'VxeQvnvYGXg_A2HNdzuNBEzctgC1QlNp0_NATAQ2rS4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/I4qLsdbmDLeJLGhMJZD0EEYchn9LINPxBGd7FV3YamM.jpg?width=108&crop=smart&auto=webp&s=edee7eb2947e6e1dfe12ddd6ace311d562877a8b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/I4qLsdbmDLeJLGhMJZD0EEYchn9LINPxBGd7FV3YamM.jpg?width=216&crop=smart&auto=webp&s=aee9e347605a56bfb8f998211de468f28f7802a2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/I4qLsdbmDLeJLGhMJZD0EEYchn9LINPxBGd7FV3YamM.jpg?width=320&crop=smart&auto=webp&s=0f3324fb3b14cb5dabb77c1de38170d01f9510fa', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/I4qLsdbmDLeJLGhMJZD0EEYchn9LINPxBGd7FV3YamM.jpg?width=640&crop=smart&auto=webp&s=862bdc051b819d9ec61a1bc6fee89c070aa407bc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/I4qLsdbmDLeJLGhMJZD0EEYchn9LINPxBGd7FV3YamM.jpg?width=960&crop=smart&auto=webp&s=9e9f94171b34d307414f689f2b5a045fdbbdbc65', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/I4qLsdbmDLeJLGhMJZD0EEYchn9LINPxBGd7FV3YamM.jpg?width=1080&crop=smart&auto=webp&s=623db90dc570d3df8c0c03e2e84c9f8ab24ed14c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/I4qLsdbmDLeJLGhMJZD0EEYchn9LINPxBGd7FV3YamM.jpg?auto=webp&s=0cdee7bfd253b2b2a6f960371134f1f28bc592bb', 'width': 1200}, 'variants': {}}]}
Difference between meta-llama-2-7b and meta-llama-2-7b-hf
1
Was browsing through hugginfFace Llama's model card and came across the hf variant for all models. It says that it's pretrained on HuggingFace transformer. What exactly does it mean?
2023-08-09T06:17:16
https://www.reddit.com/r/LocalLLaMA/comments/15m6ulr/difference_between_metallama27b_and_metallama27bhf/
IamFuckinTomato
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15m6ulr
false
null
t3_15m6ulr
/r/LocalLLaMA/comments/15m6ulr/difference_between_metallama27b_and_metallama27bhf/
false
false
self
1
null
Lamma Context length is it max(4096) or can it be increased??
1
I am running the model through Replicate and I am getting an error when testing on a large input. Is 4096 the maximum the Llama model can support, or can I increase it? And if I pass the text in chunks, will it give me the same results? I am working on identifying the tone. The call looks roughly like this:

    output = replicate.run(
        "replicate/llama-2-70b-chat:2c1608e18606fad2812020dc541930f2d0495ce32eee50074220b87300bc16e1",
        input={"prompt": f"{full_prompt} {prompt_input} Assistant: ",
               "temperature": 0.1, "top_p": 0.9, "max_length": 512, "repetition_penalty": 1})

and iterating over the output fails inside Replicate's output_iterator with: ModelError: start (0) + length (4097) exceeds dimension size (4096)
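For the chunking idea, a rough sketch of pre-splitting the text so every call stays inside the 4096-token window is below. The tokenizer repo is an assumed ungated mirror of the Llama-2 tokenizer and the overhead numbers are illustrative; each chunk then needs its own call, so the per-chunk tone judgments have to be aggregated afterwards.

```python
# Sketch: count tokens with a Llama-2 tokenizer and split the text so each chunk fits the window.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf")  # assumed ungated tokenizer mirror

def split_for_context(text, max_ctx=4096, prompt_overhead=600, reserve_for_output=512):
    budget = max_ctx - prompt_overhead - reserve_for_output  # tokens left for the document text itself
    ids = tok.encode(text, add_special_tokens=False)
    return [tok.decode(ids[i:i + budget]) for i in range(0, len(ids), budget)]
```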
2023-08-09T09:12:18
https://www.reddit.com/r/LocalLLaMA/comments/15m9zyo/lamma_context_length_is_it_max4096_or_can_it_be/
Dry_Sink_597
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15m9zyo
false
null
t3_15m9zyo
/r/LocalLLaMA/comments/15m9zyo/lamma_context_length_is_it_max4096_or_can_it_be/
false
false
self
1
null
Prompt Processing Times? (GGML CPU-only)
1
Tested with the following: Prompt: 478 tokens BLAS: 512 System: i3 9th Gen 4 cores/4 threads with 16GB DDR4 2400 RAM ----------- LLaMA 7B Q2 Processing: 82ms/token Generation: 206ms/token ----------- LLaMA 7B Q4 Processing: 81ms/token Generation: 258ms/token ----------- LLaMA 13B Q2 Processing: 146ms/token Generation: 380 ms/token The generation times make sense, they're increasing with the quant as well as the model size but is the prompt processing time independent of the quantization? It seems to increase only with model size and not quant.
2023-08-09T10:08:28
https://www.reddit.com/r/LocalLLaMA/comments/15mb1f0/prompt_processing_times_ggml_cpuonly/
noellarkin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mb1f0
false
null
t3_15mb1f0
/r/LocalLLaMA/comments/15mb1f0/prompt_processing_times_ggml_cpuonly/
false
false
self
1
null
Fine-tune Llama 2 with DPO, has anyone tried?
1
2023-08-09T11:15:57
https://huggingface.co/blog/dpo-trl
Nondzu
huggingface.co
1970-01-01T00:00:00
0
{}
15mcd1b
false
null
t3_15mcd1b
/r/LocalLLaMA/comments/15mcd1b/finetune_llama_2_with_dpo_has_anyone_tried/
false
false
https://b.thumbs.redditm…l0UzsnhUBpoc.jpg
1
{'enabled': False, 'images': [{'id': 'AmQgpyNhIlUEj9zr9gG-RgzMA6CHeTDaZ7Q4LqqedT8', 'resolutions': [{'height': 52, 'url': 'https://external-preview.redd.it/gVC-CxRm2EPj2om6Rni76VF_RDOD7rroGVi0OohYP3k.jpg?width=108&crop=smart&auto=webp&s=74b9d4f0311999391894918231269f3bcd31e1d3', 'width': 108}, {'height': 105, 'url': 'https://external-preview.redd.it/gVC-CxRm2EPj2om6Rni76VF_RDOD7rroGVi0OohYP3k.jpg?width=216&crop=smart&auto=webp&s=8147b6d2b8c6c97c952ae07f1ee06d61f65fc7f8', 'width': 216}, {'height': 155, 'url': 'https://external-preview.redd.it/gVC-CxRm2EPj2om6Rni76VF_RDOD7rroGVi0OohYP3k.jpg?width=320&crop=smart&auto=webp&s=851cc0f3c4f578a71abcf252ec0e49191e842a4a', 'width': 320}, {'height': 311, 'url': 'https://external-preview.redd.it/gVC-CxRm2EPj2om6Rni76VF_RDOD7rroGVi0OohYP3k.jpg?width=640&crop=smart&auto=webp&s=0b81473a67554129a8d51aaca7996b487d428f88', 'width': 640}, {'height': 467, 'url': 'https://external-preview.redd.it/gVC-CxRm2EPj2om6Rni76VF_RDOD7rroGVi0OohYP3k.jpg?width=960&crop=smart&auto=webp&s=5c1230adc455c2abf5b857e66e7a903395dd7543', 'width': 960}, {'height': 526, 'url': 'https://external-preview.redd.it/gVC-CxRm2EPj2om6Rni76VF_RDOD7rroGVi0OohYP3k.jpg?width=1080&crop=smart&auto=webp&s=d236e1aed2e2b05e1ca72080d0cfdbbfb5e8b09f', 'width': 1080}], 'source': {'height': 579, 'url': 'https://external-preview.redd.it/gVC-CxRm2EPj2om6Rni76VF_RDOD7rroGVi0OohYP3k.jpg?auto=webp&s=e951b9097b184faf1f05e6c9334226b25d2e8b8e', 'width': 1188}, 'variants': {}}]}
My meta-llama/Llama-2-7b-hf fine-tuned model does not learn to use the additional special tokens
1
I am trying to fine-tune the meta-llama/Llama-2-7b-hf model on a recipe dataset using QLoRA and SFTTrainer. My dataset contains special tokens (such as <RECIPE_TITLE>, <END_TITLE>, , <END_STEPS>, etc.) which help with structuring the recipes. During fine-tuning I have added these additional tokens to the tokenizer:

>special_tokens_dict = {'additional_special_tokens': ["<RECIPE_TITLE>", "<END_TITLE>", "", "<END_INGREDIENTS>", "", "<END_STEPS>"], 'pad_token': ""} tokenizer.add_special_tokens(special_tokens_dict)

I also resized the token embeddings for the model so that it matches the length of the tokenizer. However, the fine-tuned model predicts all these newly added tokens in the right places (the generated recipe is well-structured), but it produces them as combinations of ordinary sub-word token ids, not the newly added token ids. From what I found in other posts, LoRA does not automatically update the embedding matrix, so I made sure to specify this in the LoRA config:

>peft_config = LoraConfig( lora_alpha=lora_alpha, lora_dropout=lora_dropout, r=lora_r, bias="none", task_type="CAUSAL_LM", target_modules=["q_proj", "v_proj", "k_proj"], modules_to_save=["embed_tokens", "lm_head"], )

What is the reason behind the model not learning to emit the newly added token ids?
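A few sanity checks can narrow this down. This is only a sketch: it reuses the tokenizer and model variables from the post, lists just the tokens visible above, and relies on nothing beyond the standard transformers/PEFT APIs.

```python
# Sketch: verify the new tokens resolve to single ids and that the resized embeddings are trainable.
new_tokens = ["<RECIPE_TITLE>", "<END_TITLE>", "<END_INGREDIENTS>", "<END_STEPS>"]
print(tokenizer.convert_tokens_to_ids(new_tokens))                  # single ids at the end of the vocab, not unk
print(tokenizer("<RECIPE_TITLE> Pancakes <END_TITLE>").input_ids)   # the new ids should appear in here
print(model.get_input_embeddings().weight.shape[0], len(tokenizer)) # should match after resize_token_embeddings

# After get_peft_model(...), check that the embed_tokens/lm_head copies are actually trainable:
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print([n for n in trainable if "embed_tokens" in n or "lm_head" in n])
```

If the training examples are tokenized without these ids showing up, or the embedding copies are not in the trainable list, that would explain the behaviour.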
2023-08-09T11:20:29
https://www.reddit.com/r/LocalLLaMA/comments/15mcgjv/my_metallamallama27bhf_finetuned_model_does_not/
rares13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mcgjv
false
null
t3_15mcgjv
/r/LocalLLaMA/comments/15mcgjv/my_metallamallama27bhf_finetuned_model_does_not/
false
false
self
1
null
What should be my expectations from models trained on a PC?
1
I'm considering making a serious upgrade to my PC so I can start playing around with AI. But before ponying up thousands of dollars for triple-digit gigabytes of DDR5 RAM and thousand-dollar GPUs, I want to know what I should expect from "PC AI", as most of the AI I am familiar with are billion-dollar models like GPT-4, trained by the smartest people in the world on the topic (I am "just" a developer for comparison, no compsci degree). Could I develop niche-specific chatbots for clients? Could I train the model on West's Respiratory Physiology and tell it to analyze my CPAP data? What should I expect?
2023-08-09T11:48:29
https://www.reddit.com/r/LocalLLaMA/comments/15md13z/what_should_be_my_expectations_from_models/
BigBootyBear
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15md13z
false
null
t3_15md13z
/r/LocalLLaMA/comments/15md13z/what_should_be_my_expectations_from_models/
false
false
self
1
null
This
1
[removed]
2023-08-09T12:14:26
https://www.reddit.com/r/LocalLLaMA/comments/15mdlnk/this/
kielerrr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mdlnk
false
null
t3_15mdlnk
/r/LocalLLaMA/comments/15mdlnk/this/
false
false
self
1
{'enabled': False, 'images': [{'id': 'bWxZxCMhP9jfrcdBLdv9A8KMN-S-eA8s6v7OcxWVjKw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JJXfYOqCYQgSnarrKjpjN3-vINMeVr8v0bKxiSeignE.jpg?width=108&crop=smart&auto=webp&s=4be6acec1540e26a5b8f50c6e781047d4bc6acdf', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/JJXfYOqCYQgSnarrKjpjN3-vINMeVr8v0bKxiSeignE.jpg?width=216&crop=smart&auto=webp&s=298dafc7c016f29265b51840deceb215f0624ca8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/JJXfYOqCYQgSnarrKjpjN3-vINMeVr8v0bKxiSeignE.jpg?width=320&crop=smart&auto=webp&s=6178af642a5d291347dc1ca9d72b5d9aaaa8ebeb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/JJXfYOqCYQgSnarrKjpjN3-vINMeVr8v0bKxiSeignE.jpg?width=640&crop=smart&auto=webp&s=dff332329f003fe7c3305760ad47c0d38a5fe9fa', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/JJXfYOqCYQgSnarrKjpjN3-vINMeVr8v0bKxiSeignE.jpg?width=960&crop=smart&auto=webp&s=0474cc429ea74a8bab362757377ccbb9d92aaafc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/JJXfYOqCYQgSnarrKjpjN3-vINMeVr8v0bKxiSeignE.jpg?width=1080&crop=smart&auto=webp&s=65715568fe66a5654d000fe7ab8ec1c4d660862e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/JJXfYOqCYQgSnarrKjpjN3-vINMeVr8v0bKxiSeignE.jpg?auto=webp&s=3d2a6710aee1ad462f994a45fef27d0a391b784a', 'width': 1200}, 'variants': {}}]}
This dataset trains openai models but has no effect on llama2. Why?
1
[removed]
2023-08-09T12:16:58
https://www.reddit.com/r/LocalLLaMA/comments/15mdnq5/this_dataset_trains_openai_models_but_has_no/
kielerrr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mdnq5
false
null
t3_15mdnq5
/r/LocalLLaMA/comments/15mdnq5/this_dataset_trains_openai_models_but_has_no/
false
false
default
1
{'enabled': False, 'images': [{'id': 'bWxZxCMhP9jfrcdBLdv9A8KMN-S-eA8s6v7OcxWVjKw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JJXfYOqCYQgSnarrKjpjN3-vINMeVr8v0bKxiSeignE.jpg?width=108&crop=smart&auto=webp&s=4be6acec1540e26a5b8f50c6e781047d4bc6acdf', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/JJXfYOqCYQgSnarrKjpjN3-vINMeVr8v0bKxiSeignE.jpg?width=216&crop=smart&auto=webp&s=298dafc7c016f29265b51840deceb215f0624ca8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/JJXfYOqCYQgSnarrKjpjN3-vINMeVr8v0bKxiSeignE.jpg?width=320&crop=smart&auto=webp&s=6178af642a5d291347dc1ca9d72b5d9aaaa8ebeb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/JJXfYOqCYQgSnarrKjpjN3-vINMeVr8v0bKxiSeignE.jpg?width=640&crop=smart&auto=webp&s=dff332329f003fe7c3305760ad47c0d38a5fe9fa', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/JJXfYOqCYQgSnarrKjpjN3-vINMeVr8v0bKxiSeignE.jpg?width=960&crop=smart&auto=webp&s=0474cc429ea74a8bab362757377ccbb9d92aaafc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/JJXfYOqCYQgSnarrKjpjN3-vINMeVr8v0bKxiSeignE.jpg?width=1080&crop=smart&auto=webp&s=65715568fe66a5654d000fe7ab8ec1c4d660862e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/JJXfYOqCYQgSnarrKjpjN3-vINMeVr8v0bKxiSeignE.jpg?auto=webp&s=3d2a6710aee1ad462f994a45fef27d0a391b784a', 'width': 1200}, 'variants': {}}]}
Is there any hope at all for getting a 13B GPTQ model running entirely on an 8 GB GPU?
1
With the release of ExLlama and its incredible optimizations, I was hoping that I'd finally be able to load 13B models into my GPU, but unfortunately it's not quite there yet. While it OOMs with regular ExLlama, I can load it with ExLlama_HF but it still OOMs upon inference. I know that of course I can offload some layers to the CPU or run GGML, but at that point it's incredibly slow. That being said, has anyone figured out a way to load a 13B GPTQ model onto an 8 GB card? Maybe some sort of way to run my computer with nothing besides the LLM and an output window so that VRAM doesn't get allocated to anything else? Or perhaps someone knows if further optimizations are being done?
2023-08-09T12:32:46
https://www.reddit.com/r/LocalLLaMA/comments/15me0oc/is_there_any_hope_at_all_for_getting_a_13b_gptq/
Gyramuur
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15me0oc
false
null
t3_15me0oc
/r/LocalLLaMA/comments/15me0oc/is_there_any_hope_at_all_for_getting_a_13b_gptq/
false
false
self
1
null
Llama2 Embeddings FastAPI Service
1
I just wanted a quick and easy way to easily submit strings to a REST API and get back the embedding vectors in JSON using Llama2 and other similar LLMs, so I put this together over the past couple days. It's very quick and easy to set up and totally self-contained and self-hosted. You can easily add new models to it by simply adding the HuggingFace URL to the GGML format model weights. Two models are included by default.
2023-08-09T12:37:34
https://github.com/Dicklesworthstone/llama_embeddings_fastapi_service
dicklesworth
github.com
1970-01-01T00:00:00
0
{}
15me4i9
false
null
t3_15me4i9
/r/LocalLLaMA/comments/15me4i9/llama2_embeddings_fastapi_service/
false
false
default
1
{'enabled': False, 'images': [{'id': 'LIFpZZ97uHRTB2fR9xb2TOd2YvmaGyOm5rq7V0YRxTo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/weqXXUTkHKcprVLMpFtKxJW6yVKgvU5uxjJ0NyI-me8.jpg?width=108&crop=smart&auto=webp&s=32c0d6a94a728c1dc48428fae9ea4b7325f806d8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/weqXXUTkHKcprVLMpFtKxJW6yVKgvU5uxjJ0NyI-me8.jpg?width=216&crop=smart&auto=webp&s=0f1c7814beff6ff5a1da5e3557300a7e4a9d6639', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/weqXXUTkHKcprVLMpFtKxJW6yVKgvU5uxjJ0NyI-me8.jpg?width=320&crop=smart&auto=webp&s=3ba4fefcc0d86b8071a125f8e564fc4c636cc359', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/weqXXUTkHKcprVLMpFtKxJW6yVKgvU5uxjJ0NyI-me8.jpg?width=640&crop=smart&auto=webp&s=47d354cd43497ea54a9fc0e649c31d22fc3b9faa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/weqXXUTkHKcprVLMpFtKxJW6yVKgvU5uxjJ0NyI-me8.jpg?width=960&crop=smart&auto=webp&s=78dea0fe84430517e01f3f0145796832ba47997e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/weqXXUTkHKcprVLMpFtKxJW6yVKgvU5uxjJ0NyI-me8.jpg?width=1080&crop=smart&auto=webp&s=cb603e2cfae9c323a5d8fe2078ce086531bd97c2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/weqXXUTkHKcprVLMpFtKxJW6yVKgvU5uxjJ0NyI-me8.jpg?auto=webp&s=28fc3b94ef0a30be422e4fb8480b9f97796e6580', 'width': 1200}, 'variants': {}}]}
PC build for LLMs
1
[removed]
2023-08-09T12:48:36
https://www.reddit.com/r/LocalLLaMA/comments/15mednm/pc_build_for_llms/
04RR
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mednm
false
null
t3_15mednm
/r/LocalLLaMA/comments/15mednm/pc_build_for_llms/
false
false
self
1
null
Fine-tuning Llama 2 using spot instances across multiple clouds or Lambda Cloud
1
2023-08-09T13:52:12
https://dstack.ai/examples/finetuning-llama-2/
cheptsov
dstack.ai
1970-01-01T00:00:00
0
{}
15mfysb
false
null
t3_15mfysb
/r/LocalLLaMA/comments/15mfysb/finetuning_llama_2_using_spot_instances_across/
false
false
https://a.thumbs.redditm…DH2H8iTc8Ku8.jpg
1
{'enabled': False, 'images': [{'id': '1CJ80XE7TBliPhvCtW1XlPN9rEz_QAFB-YxQSDbhaKw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/d2ygPVR83FIieJANJ9_95HhWGCsLNy6kaV-7_NMpORA.jpg?width=108&crop=smart&auto=webp&s=34039547ea3d5e89dd33add5eff01d197aa2f659', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/d2ygPVR83FIieJANJ9_95HhWGCsLNy6kaV-7_NMpORA.jpg?width=216&crop=smart&auto=webp&s=789e4eb5bddc9467bb4588fac00ed43933281130', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/d2ygPVR83FIieJANJ9_95HhWGCsLNy6kaV-7_NMpORA.jpg?width=320&crop=smart&auto=webp&s=7dfdba67cba947a90854cd2d4d13b48fb266577b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/d2ygPVR83FIieJANJ9_95HhWGCsLNy6kaV-7_NMpORA.jpg?width=640&crop=smart&auto=webp&s=bd6e340f3810899f1bbc315f27bb80d000f6085f', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/d2ygPVR83FIieJANJ9_95HhWGCsLNy6kaV-7_NMpORA.jpg?width=960&crop=smart&auto=webp&s=ed36155cc3c6873399daf516fd85f84bc83ad524', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/d2ygPVR83FIieJANJ9_95HhWGCsLNy6kaV-7_NMpORA.jpg?width=1080&crop=smart&auto=webp&s=6132101025b785e35af7caac410a496a225a062e', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/d2ygPVR83FIieJANJ9_95HhWGCsLNy6kaV-7_NMpORA.jpg?auto=webp&s=76e59c56ef72b9e2172ea53448d60950578f86ad', 'width': 1200}, 'variants': {}}]}
Lost In Translation: LLMs Unleashed
1
You know those digital buddies we’ve come to rely on for a bit of tech magic? Well, they’ve decided to put on their cryptic cloaks and mess with our heads a bit. Buckle up, because we’re about to dive into a world where your AI assistant isn’t just your sidekick – it’s your partner in eerie shenanigans. This: [https://daystosingularity.com/2023/08/09/lost-in-translation-llms-unleashed/](https://daystosingularity.com/2023/08/09/lost-in-translation-llms-unleashed/)
2023-08-09T13:57:12
https://www.reddit.com/r/LocalLLaMA/comments/15mg3g4/lost_in_translation_llms_unleashed/
Powerful-Pumpkin-938
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mg3g4
false
null
t3_15mg3g4
/r/LocalLLaMA/comments/15mg3g4/lost_in_translation_llms_unleashed/
false
false
self
1
{'enabled': False, 'images': [{'id': 'z-4lvjVtpSargyO33nCIO081kU4se1phI4FktUjpbSk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/drC0jbTnpopuKQDb-MNy1YcYpX7df0thDcmTqlmaM00.jpg?width=108&crop=smart&auto=webp&s=6e5e2d2bb460e4c25f694fb1e5b11b952566ed92', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/drC0jbTnpopuKQDb-MNy1YcYpX7df0thDcmTqlmaM00.jpg?width=216&crop=smart&auto=webp&s=2a120494fc7b3df28d8f4b106e1a4fd09046febb', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/drC0jbTnpopuKQDb-MNy1YcYpX7df0thDcmTqlmaM00.jpg?width=320&crop=smart&auto=webp&s=58c41c956cc8fcec882590fb4bca2b3810fac690', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/drC0jbTnpopuKQDb-MNy1YcYpX7df0thDcmTqlmaM00.jpg?width=640&crop=smart&auto=webp&s=f1f6f77e16af8364ca06224759e08ec9425a726b', 'width': 640}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/drC0jbTnpopuKQDb-MNy1YcYpX7df0thDcmTqlmaM00.jpg?auto=webp&s=23460e8dded89bda515507b0080175c4fb85a4f2', 'width': 768}, 'variants': {}}]}
EU Parliament approved the text of the AI Regulation Law (it is not applied yet, but we might be very near) - Which models should I hoard? Which are the best unconsored before the blackout?
1
[removed]
2023-08-09T14:15:54
https://www.reddit.com/r/LocalLLaMA/comments/15mgl52/eu_parliament_approved_the_text_of_the_ai/
PuzzledAd1197
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mgl52
false
null
t3_15mgl52
/r/LocalLLaMA/comments/15mgl52/eu_parliament_approved_the_text_of_the_ai/
false
false
self
1
null
Introducing the newest WizardLM-70B V1.0 model!
1
Introducing the newest **WizardLM-70B V1.0** model! 1. WizardLM-70B V1.0 achieves a substantial and comprehensive improvement in **coding**, **mathematical reasoning** and **open-domain conversation** capacities. 2. This model is license friendly and follows the same license as Meta Llama-2. 3. The next version is in training and will be made public together with our new paper soon. For more details, please refer to: Model weights: [https://huggingface.co/WizardLM/WizardLM-70B-V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) Demo and GitHub: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM) Twitter: [https://twitter.com/WizardLM_AI](https://twitter.com/WizardLM_AI) https://preview.redd.it/d61gunflg3hb1.png?width=900&format=png&auto=webp&s=bd3a9a77124d6d7dcbbfcd6ecfd0d1aaa1d4d7ed
2023-08-09T14:24:55
https://www.reddit.com/r/LocalLLaMA/comments/15mgthr/introducing_the_newest_wizardlm70b_v10_model/
cylaw01
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mgthr
false
null
t3_15mgthr
/r/LocalLLaMA/comments/15mgthr/introducing_the_newest_wizardlm70b_v10_model/
false
false
https://b.thumbs.redditm…4Tvgc4nN3BWQ.jpg
1
{'enabled': False, 'images': [{'id': 'lOaLM5PtpNjrwQBaVnzypT1kCPzSVsujOefNAVhO5CQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/XFh7NFmpmoin81UW6jgdBY3EwhyzfY1hVX4SA-fJ3SM.jpg?width=108&crop=smart&auto=webp&s=9e2bd842483d97a3a5070984e9c32b6df4165eb0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/XFh7NFmpmoin81UW6jgdBY3EwhyzfY1hVX4SA-fJ3SM.jpg?width=216&crop=smart&auto=webp&s=56d61187f3f2ce092927a58b01bb4429bef50baf', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/XFh7NFmpmoin81UW6jgdBY3EwhyzfY1hVX4SA-fJ3SM.jpg?width=320&crop=smart&auto=webp&s=f1854e6e0a24b3a5acf5574286ebf8d3483d84a1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/XFh7NFmpmoin81UW6jgdBY3EwhyzfY1hVX4SA-fJ3SM.jpg?width=640&crop=smart&auto=webp&s=94ae94e10c5f717c9dcc2ad7a55728edf14bbe24', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/XFh7NFmpmoin81UW6jgdBY3EwhyzfY1hVX4SA-fJ3SM.jpg?width=960&crop=smart&auto=webp&s=9455647e7f989af7f0e2e89be70f44f6a35436c0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/XFh7NFmpmoin81UW6jgdBY3EwhyzfY1hVX4SA-fJ3SM.jpg?width=1080&crop=smart&auto=webp&s=54f146ddf6f12904286d4832a7ae47f6c6b7c9eb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/XFh7NFmpmoin81UW6jgdBY3EwhyzfY1hVX4SA-fJ3SM.jpg?auto=webp&s=edaa911b536e91fdb6fe1081df28a049e1f0b658', 'width': 1200}, 'variants': {}}]}
EU Parliament approved the text of the AI Regulation Law (it is not applied yet, but we might be very near) - Which models should I hoard? Which are the best uncensored before the blackout?
47
I am overreacting a bit to this, but I still want to be prepared in case models and UIs become unavailable in the EU, or in case open-source projects have to obscure themselves before everyone understands and applies the transparency the EU requires. **I want to prepare for a possible AI winter in the EU.** --- This might even scare big companies away, but the main problem is that the EU will inevitably be stuck in the past. I think this is catastrophic: as much as regulation might be a good thing for the general public, it will also privatize the technology, making it available mainly to those with the resources to actually run models. Based on what I read, I assume it would also have an impact on the hardware market, and much more is to come in a few months if this is fully passed. The EU has always been the leader on privacy and I am proud of that, but this... seems rushed to me, and I don't think they fully understood our position or the technology itself well enough to regulate it this way. They seem to have completely forgotten about open source. This might also turn out to be good, with more transparency, but also bad, with more privatization and basically money bullying open source out of the continent.
2023-08-09T14:46:55
https://www.reddit.com/r/LocalLLaMA/comments/15mhebl/eu_parliament_approved_the_text_of_the_ai/
BetterProphet5585
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mhebl
false
null
t3_15mhebl
/r/LocalLLaMA/comments/15mhebl/eu_parliament_approved_the_text_of_the_ai/
false
false
self
47
null
Has anyone tried deploying GGML models in production?
1
Considering they’re not thread-safe, I’d be curious how you could deploy them behind a reliable API on AWS. Maybe use the LocalAI kubernetes config on EKS?
2023-08-09T15:08:04
https://www.reddit.com/r/LocalLLaMA/comments/15mhyk0/has_anyone_tried_deploying_ggml_models_in/
bangarangguy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mhyk0
false
null
t3_15mhyk0
/r/LocalLLaMA/comments/15mhyk0/has_anyone_tried_deploying_ggml_models_in/
false
false
self
1
null
Faster than OpenAI Api?
2
I am currently running a process where I have to analyse large chunks of texts by an LLM. I divide the texts into parts that fit the token window, but it still takes about an hour per text to run the prompt on everything and receive a reply for all chunks. Can Llama be faster? I would probably connect to an Api provider for now, but might install it on my server later if the volume increases.
2023-08-09T15:13:56
https://www.reddit.com/r/LocalLLaMA/comments/15mi446/faster_than_openai_api/
ekevu456
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mi446
false
null
t3_15mi446
/r/LocalLLaMA/comments/15mi446/faster_than_openai_api/
false
false
self
2
null
What is the best embedding search?
1
I am working on a small personal project. I plan to vectorise a lot of text using sBERT, and then I would like to be able to search through this content by asking "natural" language questions. I know that I need to embed the question with the same model and then use that vector to run a similarity search against the stored vectors. I am currently stuck here; I am not sure what the best available search is. I tried FAISS, but I have to handle the storage of the vectors and load them into memory to use FAISS. I heard that there are already complete solutions like DuckDB or Chroma or Qdrant that handle everything (storage, integrity, indexing, search). Are those better, or should I use FAISS?
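For reference, the FAISS version of that flow is quite small. This is only a sketch: the sBERT model name is a placeholder and the index lives purely in memory, which is exactly the storage burden Chroma/Qdrant take off your hands while doing the same similarity search underneath.

```python
# Sketch: embed documents and a question with sBERT, then search with FAISS.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder sBERT model
docs = ["first passage ...", "second passage ..."]
doc_vecs = model.encode(docs, normalize_embeddings=True).astype("float32")

index = faiss.IndexFlatIP(doc_vecs.shape[1])  # inner product == cosine on normalized vectors
index.add(doc_vecs)

query_vec = model.encode(["what is the second passage about?"], normalize_embeddings=True).astype("float32")
scores, ids = index.search(query_vec, 2)
print([(docs[i], float(s)) for i, s in zip(ids[0], scores[0])])
```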
2023-08-09T15:43:41
https://www.reddit.com/r/LocalLLaMA/comments/15mivz5/what_is_the_best_embedding_search/
aiworshipper
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mivz5
false
null
t3_15mivz5
/r/LocalLLaMA/comments/15mivz5/what_is_the_best_embedding_search/
false
false
self
1
null
Can Llama convert English cue words into a Japanese sentences?
1
Apologies, but I'm a beginner. I want to convert a few English words into Japanese sentences. The sentences would be easy, so I don't think Llama would make a mistake there. Is Llama capable enough to do both of these tasks? I would appreciate it if you could provide some links that I can read and implement. As of now, I feel Llama doesn't support the Japanese language. 😪
2023-08-09T15:52:49
https://www.reddit.com/r/LocalLLaMA/comments/15mj4tm/can_llama_convert_english_cue_words_into_a/
JapaniRobot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mj4tm
false
null
t3_15mj4tm
/r/LocalLLaMA/comments/15mj4tm/can_llama_convert_english_cue_words_into_a/
false
false
self
1
null
Newbie needing some clarification in regards to koboldcpp, please and thank you.
1
Other than the selection of the model using the -model flag, 1. What options should I be using? 2. Is it always beneficial to put the maximum amount of layers on the GPU to use all available VRAM? 3. Do I need to specify how many threads the application should use? And do I always want to use all available threads? 4. Can I make the system try to generate images locally first, and then fall back to the stable horde if the generation fails due to lack of vram? 5. Is there a way to have the system generate me a 5000 word (or any number really) short story without my intervention? Thanks so much for reading. What an incredible time to be alive. I can't believe how good these self hostable tools are becoming.
2023-08-09T17:09:48
https://www.reddit.com/r/LocalLLaMA/comments/15ml757/newbie_needing_some_clarification_in_regards_to/
wh33t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ml757
false
null
t3_15ml757
/r/LocalLLaMA/comments/15ml757/newbie_needing_some_clarification_in_regards_to/
false
false
self
1
null
👨‍💻 An awesome and curated list of best code-LLM for research.
1
[https://github.com/huybery/Awesome-Code-LLM](https://github.com/huybery/Awesome-Code-LLM) Letting LLMs help humans write code (named Code-LLMs) would be the best way to free up productivity, and we're collecting the research progress on this repo. If this resonates with you, please 🌟 star the repo on GitHub, contribute your pull request. 😊 Let's make it more comprehensive together. Feel free to ask questions or share your thoughts in the comments below.
2023-08-09T17:30:50
https://www.reddit.com/r/LocalLLaMA/comments/15mlrt0/an_awesome_and_curated_list_of_best_codellm_for/
huybery
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mlrt0
false
null
t3_15mlrt0
/r/LocalLLaMA/comments/15mlrt0/an_awesome_and_curated_list_of_best_codellm_for/
false
false
self
1
{'enabled': False, 'images': [{'id': 'BsZMx6KFo0Ls1MX1YXO9BRz_V7c3QO-lr0wVkKQrzIc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8AolF4KdkARFOYE2UPPcRcQlEj8KETjBymwVzK8yqh8.jpg?width=108&crop=smart&auto=webp&s=9209768234ed8ca24eb2f935493c9d2de0bdd74f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8AolF4KdkARFOYE2UPPcRcQlEj8KETjBymwVzK8yqh8.jpg?width=216&crop=smart&auto=webp&s=8c6e330b6418073e87038df42003a799562f58de', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8AolF4KdkARFOYE2UPPcRcQlEj8KETjBymwVzK8yqh8.jpg?width=320&crop=smart&auto=webp&s=b3eabd94378105fd2f0fff6586d470e7a5243f0b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8AolF4KdkARFOYE2UPPcRcQlEj8KETjBymwVzK8yqh8.jpg?width=640&crop=smart&auto=webp&s=be1e4a53d7528b15dcdd412f323af9c6ea4f0e41', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8AolF4KdkARFOYE2UPPcRcQlEj8KETjBymwVzK8yqh8.jpg?width=960&crop=smart&auto=webp&s=64389939d0f269f631cd949eb3985ab491ada231', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8AolF4KdkARFOYE2UPPcRcQlEj8KETjBymwVzK8yqh8.jpg?width=1080&crop=smart&auto=webp&s=ff1e0a5b6542e41a104689ffd2ddd2803fa6e644', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8AolF4KdkARFOYE2UPPcRcQlEj8KETjBymwVzK8yqh8.jpg?auto=webp&s=e2ec13f60ab6f50ce59cd3351377d45968126894', 'width': 1200}, 'variants': {}}]}
Why is this so strange? Can you change the system prompt and get it to be more uncensored? (Using Llama-2-13b)
1
2023-08-09T17:40:32
https://i.redd.it/lam2k65pf4hb1.png
BetterProphet5585
i.redd.it
1970-01-01T00:00:00
0
{}
15mm1fg
false
null
t3_15mm1fg
/r/LocalLLaMA/comments/15mm1fg/why_is_this_so_strange_can_you_change_the_system/
false
false
https://b.thumbs.redditm…bFvbVtOiqTgo.jpg
1
{'enabled': True, 'images': [{'id': 'G1pcZuWRUSCn62iMjroPQ1ZFPyMxDURHxagLUSDhZaA', 'resolutions': [{'height': 102, 'url': 'https://preview.redd.it/lam2k65pf4hb1.png?width=108&crop=smart&auto=webp&s=5b9855142578d231ece29f708c04c1f9aeb66d72', 'width': 108}, {'height': 205, 'url': 'https://preview.redd.it/lam2k65pf4hb1.png?width=216&crop=smart&auto=webp&s=9485efae475d4e78bdeb8307bcb9a75a04cf036d', 'width': 216}, {'height': 304, 'url': 'https://preview.redd.it/lam2k65pf4hb1.png?width=320&crop=smart&auto=webp&s=4fdd251ff34450c966bc109cb6e6962c3b5f221e', 'width': 320}, {'height': 609, 'url': 'https://preview.redd.it/lam2k65pf4hb1.png?width=640&crop=smart&auto=webp&s=6be6f23ae5c427318293f7932d4057c24a8c3164', 'width': 640}], 'source': {'height': 695, 'url': 'https://preview.redd.it/lam2k65pf4hb1.png?auto=webp&s=a2cb342c1a316e57c65dab972a39f6c5a82fb7d9', 'width': 730}, 'variants': {}}]}
Documents on interfacing with local models
1
No i don’t mean existing UIs i mean creating a UI and interfacing with the local model. I’ve searched high and low and can’t find any existing documentation. I’ve spent the last few days going over multiple websites, looking at how to articles, and I’ve yet to find “here’s the code”. Hell i still can’t figure out if there’s a separation between the python im seeing on say LLAMA 2s git and the model itself. Is there some code framework that interacts with the model(data) is it one large bundle…when i look at hugging face models and the files there all i see is a config.json that says something like: { “mofel_type” : “llama” } Zero code and just a bunch of .bin files. For example it would be nice to not run a local model on my 4090 and instead run it on my 3090 except finding the documentation to perform such a feet is elusive. If someone could point me in the right direction that would be appreciated
2023-08-09T17:49:34
https://www.reddit.com/r/LocalLLaMA/comments/15mma90/documents_on_interfacing_with_local_models/
tickleMyBigPoop
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mma90
false
null
t3_15mma90
/r/LocalLLaMA/comments/15mma90/documents_on_interfacing_with_local_models/
false
false
self
1
null
[Project] Making AMD GPUs Competitive for LLM inference
1
ML compilation (MLC) techniques makes it possible to run LLM inference performantly. An AMD 7900xtx at $1k could deliver 80-85% performance of RTX 4090 at $1.6k, and 94% of RTX 3900Ti previously at $2k. Most of the performant inference solutions are based on CUDA and optimized for NVIDIA GPUs nowadays. In the meantime, with the high demand for compute availability, it is useful to bring support to a broader class of hardware accelerators. AMD is a potential candidate. MLC LLM makes it possible to compile LLMs and deploy them on AMD GPUs using its ROCm backend, getting competitive performance. More specifically, AMD RX 7900 XTX ($1k) gives 80% of the speed of NVIDIA RTX 4090 ($1.6k), and 94% of the speed of NVIDIA RTX 3090Ti (previously $2k). Besides ROCm, our Vulkan support allows us to generalize LLM deployment to other AMD devices, for example, a SteamDeck with an AMD APU. - Blogpost describing the techniques: https://blog.mlc.ai/2023/08/09/Making-AMD-GPUs-competitive-for-LLM-inference - Github: https://github.com/mlc-ai/mlc-llm/
2023-08-09T18:01:21
https://www.reddit.com/r/LocalLLaMA/comments/15mmlte/project_making_amd_gpus_competitive_for_llm/
yzgysjr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mmlte
false
null
t3_15mmlte
/r/LocalLLaMA/comments/15mmlte/project_making_amd_gpus_competitive_for_llm/
false
false
self
1
null
Train with custom knowledge
1
Hi, I have ~100 XML files in a simple schema; these files contain logic and rules. At the moment these files are created manually. Is it possible to take, for example, the Llama 2 model and train it with this additional data, so that it could generate or modify these XML files from instructions?
2023-08-09T18:08:02
https://www.reddit.com/r/LocalLLaMA/comments/15mmsj6/train_with_custom_knowledge/
FroyoAbject
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mmsj6
false
null
t3_15mmsj6
/r/LocalLLaMA/comments/15mmsj6/train_with_custom_knowledge/
false
false
self
1
null
Sharding LLMs
1
I have been wondering about this for a few days now and haven't been able to find much information on the how-to. I'm trying to fine-tune the Aguila HuggingFace model on the free version of Google Colab, but I haven't been able to locate a sharded version anywhere. I thought I might need to do it myself, but at the moment, I still haven't been able to figure it out. I'd really appreciate it if someone could provide some guidance or point me in the right direction. Thanks so much in advance for any help!
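In case it helps, resharding is normally a one-off save_pretrained call once the model fits in RAM somewhere (a bigger Colab runtime or a local machine). A sketch is below; the repo id is an assumption for Aguila and may additionally need trust_remote_code=True depending on how the repo is set up.

```python
# Sketch: load the model once, then re-save it in small shards that a free-tier Colab can load piece by piece.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "projecte-aina/aguila-7b"  # assumed Hugging Face repo id
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", low_cpu_mem_usage=True)
model.save_pretrained("aguila-sharded", max_shard_size="2GB")  # writes several ~2GB shard files plus an index json
AutoTokenizer.from_pretrained(model_id).save_pretrained("aguila-sharded")
```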
2023-08-09T18:52:05
https://www.reddit.com/r/LocalLLaMA/comments/15mnysc/sharding_llms/
Jaded-Armadillo8348
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mnysc
false
null
t3_15mnysc
/r/LocalLLaMA/comments/15mnysc/sharding_llms/
false
false
self
1
null
Best model for text generation in low-end computers?
1
I need to implement text generation into a program that is already a bit VRAM heavy, in order to be usable for the maximum number of users, what would be some good performing small models that don't need much VRAM (maybe 2-4 GB) and respond decently fast?
2023-08-09T19:06:07
https://www.reddit.com/r/LocalLLaMA/comments/15moc8z/best_model_for_text_generation_in_lowend_computers/
Valevergus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15moc8z
false
null
t3_15moc8z
/r/LocalLLaMA/comments/15moc8z/best_model_for_text_generation_in_lowend_computers/
false
false
self
1
null
Do LLMs take embeddings directly or is it converted to text for RAG?
1
Pretty much the title. I want to know how LLMs answer questions based on embeddings retrieved from a vector database. To my understanding, text embeddings are just a list of numbers that represent the text's semantic meaning. How does an LLM understand these numbers if the embedding model is not the same as the LLM's? Do I have to convert the text embeddings back into the input text?
2023-08-09T19:53:21
https://www.reddit.com/r/LocalLLaMA/comments/15mpl6p/do_llms_take_embeddings_directly_or_is_it/
malicious510
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mpl6p
false
null
t3_15mpl6p
/r/LocalLLaMA/comments/15mpl6p/do_llms_take_embeddings_directly_or_is_it/
false
false
self
1
null
What are the text chunking/splitting and embedding best practices for RAG applications?
1
I'm trying to make an LLM powered RAG application without LangChain that can answer questions about a document (pdf) and I want to know some of the strategies and libraries that you guys have used to transform your text for text embedding. I would also like to know which embedding model you used and how you dealt with the sequence length. My documents will be long textbooks and I'm currently using the MTEB text embedders from Hugging Face which all have sequence lengths of 512.
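For the splitting step, one simple approach is to measure chunks in the embedder's own tokens so nothing silently exceeds the 512 limit, with a bit of overlap between windows. A sketch, with a placeholder MTEB-style model and arbitrary sizes:

```python
# Sketch: split long text into overlapping windows measured with the embedder's own tokenizer.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("BAAI/bge-base-en-v1.5")  # placeholder embedder with a 512-token limit

def chunk_text(text, max_tokens=480, overlap=64):
    ids = tok.encode(text, add_special_tokens=False)
    step = max_tokens - overlap
    chunks = []
    for start in range(0, len(ids), step):
        window = ids[start:start + max_tokens]
        chunks.append(tok.decode(window))
        if start + max_tokens >= len(ids):
            break
    return chunks
```

Splitting on headings or paragraph boundaries first and only falling back to fixed windows for oversized sections tends to keep the chunks more coherent for textbooks.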
2023-08-09T20:10:05
https://www.reddit.com/r/LocalLLaMA/comments/15mq1ri/what_are_the_text_chunkingsplitting_and_embedding/
malicious510
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mq1ri
false
null
t3_15mq1ri
/r/LocalLLaMA/comments/15mq1ri/what_are_the_text_chunkingsplitting_and_embedding/
false
false
self
1
null
Assessing Learning and Progress in Training with LoRa Method: A Case Study with llama2
1
I am trying to overfit llama2 to check whether it's learning. To do this, I am training it on a single example with the LoRA method. I expect that it will learn from it—thus, the loss will decrease—and when I generate text, it will reproduce that example given only the beginning. When I train the 13B model, the loss decreases to 0 and the model memorizes the training example. However, when I train the same code with the 7B model using LoRA, the loss stays flat and doesn't make any progress. Moreover, when generating from the training example, it produces the same text as before training. Could you please suggest how to verify that LoRA training is actually taking place? Have you successfully trained the 7B model using LoRA, and how did you confirm this? For finetuning with LoRA I used this code [https://github.com/tloen/alpaca-lora/tree/main](https://github.com/tloen/alpaca-lora/tree/main) and additionally tried [https://github.com/facebookresearch/llama-recipes/tree/main](https://github.com/facebookresearch/llama-recipes/tree/main)
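One cheap way to confirm LoRA is actually attached and trainable before a run is below. It is only a sketch against the standard PEFT API; the config values are illustrative and base_model stands for whatever 7B model was loaded.

```python
# Sketch: confirm that LoRA adapters exist and are trainable before starting the run.
from peft import LoraConfig, get_peft_model

lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(base_model, lora_cfg)   # base_model = the loaded 7B model
model.print_trainable_parameters()             # should report a small but non-zero % of trainable params

lora_params = [n for n, p in model.named_parameters() if "lora_" in n and p.requires_grad]
print(len(lora_params))                        # 0 here would explain a completely flat loss
```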
2023-08-09T20:37:53
https://www.reddit.com/r/LocalLLaMA/comments/15mqrv6/assessing_learning_and_progress_in_training_with/
GooD404
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mqrv6
false
null
t3_15mqrv6
/r/LocalLLaMA/comments/15mqrv6/assessing_learning_and_progress_in_training_with/
false
false
self
1
{'enabled': False, 'images': [{'id': 'dSnK02WXdmgmsoIp5lR1xLca8kIYz6n7guVtLbmPaO0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8n_7DTxnb5b6fTZT9_pZP9m3IYc-iFD0pokseeNBltY.jpg?width=108&crop=smart&auto=webp&s=2e732da77d05b2417646488bd3c5c0d657e11ef7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8n_7DTxnb5b6fTZT9_pZP9m3IYc-iFD0pokseeNBltY.jpg?width=216&crop=smart&auto=webp&s=f31232d3a8d1811a9add298710af45b2844b18c4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8n_7DTxnb5b6fTZT9_pZP9m3IYc-iFD0pokseeNBltY.jpg?width=320&crop=smart&auto=webp&s=14fc21efeb2432bdef3aaeaa3511b141aa99d37f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8n_7DTxnb5b6fTZT9_pZP9m3IYc-iFD0pokseeNBltY.jpg?width=640&crop=smart&auto=webp&s=3ad2f264fcb0ab0112ca93fd2be1f92a008cbb00', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8n_7DTxnb5b6fTZT9_pZP9m3IYc-iFD0pokseeNBltY.jpg?width=960&crop=smart&auto=webp&s=0b9828f01ea1d42e6976e1eb7afd1ba36c8374dd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8n_7DTxnb5b6fTZT9_pZP9m3IYc-iFD0pokseeNBltY.jpg?width=1080&crop=smart&auto=webp&s=ece1eeddb60c7186862bbbb74ccc40629aeb728b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8n_7DTxnb5b6fTZT9_pZP9m3IYc-iFD0pokseeNBltY.jpg?auto=webp&s=ca0e7aa949fd38a947499cbaaf4629bfd026cbaf', 'width': 1200}, 'variants': {}}]}
Do you guys think the 34b LLaMA 2 model has been cancelled?
1
It's been a while, and Meta has not said anything about the 34b model from the original LLaMA2 paper. The fine-tuned instruction model did not pass their "safety" metrics, and they decided to take time to "red team" the 34b model, however, that was the chat version of the model, not the base one, but they didn't even bother to release the base 34b model... Which is a shame since theoretically the base model has nothing to do with it. I've seen a lot of wealthy users enjoying the 70b model, or people moving to the budget option (13b), or people accepting the painful performance of running everything off CPU... And no real demand for the 30b model which used to be the "affordable" power user option for LLaMA1. Do you guys think Meta gave up on it? I am under the impression that this is the case as they don't have any mention of it on the official LLaMA repository, and releasing the base model (not fine-tuned) would be trivial for them, so maybe they think there is an underlying problem with the base model and they will hardly bother to retrain the whole thing. What do you think?
2023-08-09T21:14:39
https://www.reddit.com/r/LocalLLaMA/comments/15mrrnm/do_you_guys_think_the_34b_llama_2_model_has_been/
hellninja55
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mrrnm
false
null
t3_15mrrnm
/r/LocalLLaMA/comments/15mrrnm/do_you_guys_think_the_34b_llama_2_model_has_been/
false
false
self
1
null
How to enable long-term memory in LLM's?
1
Let's say I have multiple conversations with an LLM stored somewhere. Are there any resources/approaches to enable long-term memory in the LLM? Ideally you'd just store the entire conversation history and feed it in as a prompt, but that doesn't seem to be the most feasible option given the limited context window of most models. Does anyone have any smart ways to do this? Or is there any literature out there on priming an LLM to "remember" a particular user? Would appreciate any insights!
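One common pattern is retrieval-based memory: embed every past turn once, then at question time pull back only the most relevant turns and prepend them to the prompt. A small sketch is below; the embedding model name is a placeholder, and the same idea works whether the store is a numpy array, FAISS, or a vector database.

```python
# Sketch: recall only the most relevant past turns instead of replaying the whole history.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model
history = ["user: my dog is called Bello", "assistant: nice name!", "user: I live in Berlin"]
hist_vecs = embedder.encode(history, normalize_embeddings=True)

def recall(question, k=2):
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = hist_vecs @ q                # cosine similarity, since the vectors are normalized
    top = np.argsort(-scores)[:k]
    return [history[i] for i in top]

print(recall("what is my dog's name?"))
```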
2023-08-09T21:20:18
https://www.reddit.com/r/LocalLLaMA/comments/15mrx2n/how_to_enable_longterm_memory_in_llms/
Ok_Coyote_8904
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mrx2n
false
null
t3_15mrx2n
/r/LocalLLaMA/comments/15mrx2n/how_to_enable_longterm_memory_in_llms/
false
false
self
1
null
Generative Agents now open-sourced.
1
2023-08-09T21:55:25
https://github.com/joonspk-research/generative_agents
ninjasaid13
github.com
1970-01-01T00:00:00
0
{}
15msuvx
false
null
t3_15msuvx
/r/LocalLLaMA/comments/15msuvx/generative_agents_now_opensourced/
false
false
https://b.thumbs.redditm…XccyPhk-RQQs.jpg
1
{'enabled': False, 'images': [{'id': 'gc7vF97XLKz_geNSLhfMK561OFm416eoyTnwHYJzfNQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CbbORqiEp9I7Lu20RydDH1CxLmaNh9ME4jnua83Nhq0.jpg?width=108&crop=smart&auto=webp&s=9fb62091efb34fddecf2ff0c8891bb1023d6d5b9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CbbORqiEp9I7Lu20RydDH1CxLmaNh9ME4jnua83Nhq0.jpg?width=216&crop=smart&auto=webp&s=db7591c2cd48953a88697dc1ee50accf0e338767', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CbbORqiEp9I7Lu20RydDH1CxLmaNh9ME4jnua83Nhq0.jpg?width=320&crop=smart&auto=webp&s=cd9c65bba49ed6e88601e68ef00b543fe6478b03', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CbbORqiEp9I7Lu20RydDH1CxLmaNh9ME4jnua83Nhq0.jpg?width=640&crop=smart&auto=webp&s=ec81a347ebc27f6c3ed707dd6ae8c017b0658916', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CbbORqiEp9I7Lu20RydDH1CxLmaNh9ME4jnua83Nhq0.jpg?width=960&crop=smart&auto=webp&s=a52488d2f958d00ec9412d383a5d53f743de96f8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CbbORqiEp9I7Lu20RydDH1CxLmaNh9ME4jnua83Nhq0.jpg?width=1080&crop=smart&auto=webp&s=08002ab9a801695b689242be4373796cd2541b71', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CbbORqiEp9I7Lu20RydDH1CxLmaNh9ME4jnua83Nhq0.jpg?auto=webp&s=b108809d9fa7559424afe1766b24b248a87b04b8', 'width': 1200}, 'variants': {}}]}
AlpacaCielo2-7b-8k
1
[https://huggingface.co/totally-not-an-llm/AlpacaCielo2-7b-8k](https://huggingface.co/totally-not-an-llm/AlpacaCielo2-7b-8k) Updated version of my AlpacaCielo model for 7b. It now has better roleplaying capabilities, 8k context, and a system prompt. While being creative it's also really smart, and using orca-style/CoT system prompts can make it very good at reasoning. GGML and GPTQ versions are available thanks to TheBloke; links are on the Hugging Face page.
2023-08-09T21:56:41
https://www.reddit.com/r/LocalLLaMA/comments/15msw0f/alpacacielo27b8k/
pokeuser61
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15msw0f
false
null
t3_15msw0f
/r/LocalLLaMA/comments/15msw0f/alpacacielo27b8k/
false
false
self
1
{'enabled': False, 'images': [{'id': '60dTBJOdHDYA6LEzdjqo1K6Pd1_wvXPBN1Mt4gOTeik', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/uHz1-CPG1rwJ8uWT5Lz-mDkbLC2N98zdySvWin523ck.jpg?width=108&crop=smart&auto=webp&s=442c01891ecf9f21565c3b2815e4c5e0ca384930', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/uHz1-CPG1rwJ8uWT5Lz-mDkbLC2N98zdySvWin523ck.jpg?width=216&crop=smart&auto=webp&s=93537c72b6323014ff548de2119d815b2c159496', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/uHz1-CPG1rwJ8uWT5Lz-mDkbLC2N98zdySvWin523ck.jpg?width=320&crop=smart&auto=webp&s=91ff9dea68af680168564d3f187992ac65779f7a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/uHz1-CPG1rwJ8uWT5Lz-mDkbLC2N98zdySvWin523ck.jpg?width=640&crop=smart&auto=webp&s=c5030339e4347a3e1cddd525fe61347531d4534b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/uHz1-CPG1rwJ8uWT5Lz-mDkbLC2N98zdySvWin523ck.jpg?width=960&crop=smart&auto=webp&s=e230a06fe21894c2186b8704f85f505e06b68437', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/uHz1-CPG1rwJ8uWT5Lz-mDkbLC2N98zdySvWin523ck.jpg?width=1080&crop=smart&auto=webp&s=0e69658074a747cf1a22e9442afa4b7252691bc2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/uHz1-CPG1rwJ8uWT5Lz-mDkbLC2N98zdySvWin523ck.jpg?auto=webp&s=9eb70078604e7b20f43c6ccfecccd83f2fe1de9f', 'width': 1200}, 'variants': {}}]}
Vicuna 13b on RK3588 with Mali G610, OpenCL enabled. prefill: 2.3 tok/s, decode: 1.6 tok/s
1
Huge thanks to the Apache TVM and MLC-LLM teams; they created a really fantastic framework that lets LLMs run natively on consumer-level hardware. Now you can literally run Vicuna-13B on an Arm SBC with GPU acceleration. * Fast enough to run RedPajama-3b (prefill: 10.2 tok/s, decode: 5.0 tok/s) * Decent speed on Vicuna-13b (prefill: 1.8 tok/s, decode: 1.8 tok/s) This really gives me a chance to create a totally offline LLM device.
2023-08-09T21:58:47
https://www.reddit.com/r/LocalLLaMA/comments/15msxzk/vicuna_13b_on_rk3588_with_mail_g610_opencl/
EmotionalFeed0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15msxzk
false
null
t3_15msxzk
/r/LocalLLaMA/comments/15msxzk/vicuna_13b_on_rk3588_with_mail_g610_opencl/
false
false
self
1
null
SillyTavern's Roleplay preset vs. model-specific prompt format
1
2023-08-09T22:46:47
https://imgur.com/a/dHSrZag
WolframRavenwolf
imgur.com
1970-01-01T00:00:00
0
{}
15mu7um
false
{'oembed': {'description': 'Discover the magic of the internet at Imgur, a community powered entertainment destination. Lift your spirits with funny jokes, trending memes, entertaining gifs, inspiring stories, viral videos, and so much more from users.', 'height': 642, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fimgur.com%2Fa%2FdHSrZag%2Fembed%3Fpub%3Dtrue%26ref%3Dhttps%253A%252F%252Fembed.ly%26w%3D859&display_name=Imgur&url=https%3A%2F%2Fimgur.com%2Fa%2FdHSrZag&image=https%3A%2F%2Fi.imgur.com%2FMD1Pm2s.jpg%3Ffb&key=2aa3c4d5f3de4f5b9120b660ad850dc9&type=text%2Fhtml&schema=imgur" width="600" height="642" scrolling="no" title="Imgur embed" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe>', 'provider_name': 'Imgur', 'provider_url': 'http://imgur.com', 'thumbnail_height': 315, 'thumbnail_url': 'https://i.imgur.com/MD1Pm2s.jpg?fb', 'thumbnail_width': 600, 'title': 'Comparison between Airoboros-specific Roleplay preset and original universal Roleplay preset', 'type': 'rich', 'url': 'https://imgur.com/a/dHSrZag', 'version': '1.0', 'width': 600}, 'type': 'imgur.com'}
t3_15mu7um
/r/LocalLLaMA/comments/15mu7um/sillytaverns_roleplay_preset_vs_modelspecific/
false
false
https://b.thumbs.redditm…4iivHBarGHYU.jpg
1
{'enabled': False, 'images': [{'id': 'DQg2xV1mRiquLnUzSpHec8dRaE8sqlpA_xDH__lRlPg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/hWwjeHDd50jDFL4MGjf5amOF_1s--QNvt3OjPvn_yV4.jpg?width=108&crop=smart&auto=webp&s=af401db8307829dbbe8370e62660602b0099f3ea', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/hWwjeHDd50jDFL4MGjf5amOF_1s--QNvt3OjPvn_yV4.jpg?width=216&crop=smart&auto=webp&s=9a55f46042812de627991d9cc2b3a1ceb99af3b1', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/hWwjeHDd50jDFL4MGjf5amOF_1s--QNvt3OjPvn_yV4.jpg?width=320&crop=smart&auto=webp&s=e2d040d0c52d3b2874193031bead1cc1000c5a78', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/hWwjeHDd50jDFL4MGjf5amOF_1s--QNvt3OjPvn_yV4.jpg?width=640&crop=smart&auto=webp&s=dc295cd01bbd2ee1d6175d1d7cee831edb7e24bc', 'width': 640}], 'source': {'height': 859, 'url': 'https://external-preview.redd.it/hWwjeHDd50jDFL4MGjf5amOF_1s--QNvt3OjPvn_yV4.jpg?auto=webp&s=aabb4a8792eccd71b9917e377a8c661800a1f7ea', 'width': 859}, 'variants': {}}]}
LLM source of knowledge
1
I've been diving into the world of large language models (LLMs) and their amazing capabilities recently. One idea that's been on my mind is using LLMs as a source of knowledge for staying up to date with the latest news. Two ideas crossed my mind: 1. Daily fine-tuning: should I fine-tune the LLM on news articles every day to make sure it returns the latest and most accurate updates? 2. Query retrieval system: or would it be more efficient to build a retrieval system that fetches relevant news and feeds it to the LLM at query time? How would you recommend I go about this, and is there a better solution for this idea?
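For freshness, option 2 is usually the cheaper path: index new articles as they arrive and retrieve at query time, so nothing needs retraining daily. A rough sketch with FAISS; the article texts, embedder choice, and question are placeholder assumptions:

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

articles = [  # in practice these would come from your daily news ingest
    "2023-08-09: Chipmaker unveils a new accelerator aimed at LLM inference.",
    "2023-08-09: Research group releases a 3B instruct model with 32K context.",
]
vecs = np.asarray(embedder.encode(articles, normalize_embeddings=True), dtype="float32")

index = faiss.IndexFlatIP(vecs.shape[1])  # inner product == cosine on normalized vectors
index.add(vecs)

question = "What hardware news came out today?"
q = np.asarray(embedder.encode([question], normalize_embeddings=True), dtype="float32")
_, ids = index.search(q, 1)

context = articles[ids[0][0]]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}\nAnswer:"
# `prompt` then goes to whichever local LLM you serve; no daily fine-tuning required.
```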
2023-08-09T22:54:26
https://www.reddit.com/r/LocalLLaMA/comments/15muexp/llm_source_of_knowledge/
Dull-Morning4790
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15muexp
false
null
t3_15muexp
/r/LocalLLaMA/comments/15muexp/llm_source_of_knowledge/
false
false
self
1
null
Announcing The best 13b model out there "orca-mini-v3-13b"
1
I guess the correct title is "Announcing the best 13b model out there 'orca-mini-v3-13b'... for a few hours." UPDATE 2: A big thanks to TheBloke, who has generously created the GGML/GPTQ versions. You can access them via the following links: https://huggingface.co/TheBloke/orca_mini_v3_13B-GGML https://huggingface.co/TheBloke/orca_mini_v3_13B-GPTQ After careful consideration, I've decided not to release the full model weights openly for now. I need to find the best way to receive appropriate recognition for the extensive hard work, computing costs, and resources that go into refining these models from their pre-trained versions. In the meantime, I encourage you to use the **quantized versions**. UPDATE: Model mergers are now taking over the OpenLLM Leaderboard, like these guys https://huggingface.co/garage-bAInd, so I guess there's no point in doing pure SFT on pretrained base models and submitting to the HF OpenLLM Leaderboard, because there will always be someone who takes the best model out here, merges it with the second- or third-best one, and creates a new merge. I am making my orca-mini-v3-13b gated in the meantime until I figure out where all this is going. Maybe this is all good for the community, maybe not. I am open to suggestions. Enjoy... [https://huggingface.co/psmathur/orca_mini_v3_13b](https://huggingface.co/psmathur/orca_mini_v3_13b) https://preview.redd.it/smaelw3c76hb1.png?width=2060&format=png&auto=webp&s=890c940dd84a682c868aa436b32c6632e4c0ac3b
2023-08-09T23:37:29
https://www.reddit.com/r/LocalLLaMA/comments/15mvi5a/announcing_the_best_13b_model_out_there/
Remarkable-Spite-107
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mvi5a
false
null
t3_15mvi5a
/r/LocalLLaMA/comments/15mvi5a/announcing_the_best_13b_model_out_there/
false
false
https://b.thumbs.redditm…Ir2g7xirrpKo.jpg
1
{'enabled': False, 'images': [{'id': 'OFIkFuaVgXnuF7Xruhbg4ij3bMERJlHH42J0tq-7oAU', 'resolutions': [{'height': 47, 'url': 'https://external-preview.redd.it/Mla6jXUlc4o0tmUC_QFs5i44lu4eSYB9oL5EgEL5S4k.png?width=108&crop=smart&auto=webp&s=35a6958630dfdb9e4e1a0f0c35e6d66c59bf9091', 'width': 108}, {'height': 94, 'url': 'https://external-preview.redd.it/Mla6jXUlc4o0tmUC_QFs5i44lu4eSYB9oL5EgEL5S4k.png?width=216&crop=smart&auto=webp&s=8e5b198b9e6c8e49f0f7f92b8fc4ac434c7165e4', 'width': 216}, {'height': 140, 'url': 'https://external-preview.redd.it/Mla6jXUlc4o0tmUC_QFs5i44lu4eSYB9oL5EgEL5S4k.png?width=320&crop=smart&auto=webp&s=336cdbac597b748007892e379987fe5ff7ca65cb', 'width': 320}, {'height': 281, 'url': 'https://external-preview.redd.it/Mla6jXUlc4o0tmUC_QFs5i44lu4eSYB9oL5EgEL5S4k.png?width=640&crop=smart&auto=webp&s=0ad5f7dbfe75b357e0fc1b1346105681fdb16f9f', 'width': 640}, {'height': 422, 'url': 'https://external-preview.redd.it/Mla6jXUlc4o0tmUC_QFs5i44lu4eSYB9oL5EgEL5S4k.png?width=960&crop=smart&auto=webp&s=92ef8acd36c41c43c57dfce9882e1acbedd3ff7d', 'width': 960}, {'height': 474, 'url': 'https://external-preview.redd.it/Mla6jXUlc4o0tmUC_QFs5i44lu4eSYB9oL5EgEL5S4k.png?width=1080&crop=smart&auto=webp&s=6877835f1223e3c70250bf39d51befe5c5bf7ea6', 'width': 1080}], 'source': {'height': 906, 'url': 'https://external-preview.redd.it/Mla6jXUlc4o0tmUC_QFs5i44lu4eSYB9oL5EgEL5S4k.png?auto=webp&s=b5460bebcba502848276c27d69a7fb72b1029ef9', 'width': 2060}, 'variants': {}}]}
vector search padding
1
I am doing a vector search for semantic matching. I chunked the text so that I don't break sentences, so my chunks are around 768 characters plus whatever it takes to finish the sentence. However, the last chunk in each document is always < 768 characters and I see it gets picked up a lot. How do I pad this chunk? Do I append gibberish to it so it doesn't match as often?
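An alternative to padding: fold a too-short trailing chunk into the previous one (or drop it below some minimum), so every stored chunk has comparable length and the tail stops dominating matches. A minimal sketch with deliberately naive sentence splitting:

```python
def chunk_sentences(text: str, target: int = 768, min_last: int = 256) -> list[str]:
    """Greedy sentence-preserving chunker that merges a short tail into the previous chunk."""
    sentences = [s.strip() + "." for s in text.split(".") if s.strip()]
    chunks, current = [], ""
    for sent in sentences:
        current += (" " if current else "") + sent
        if len(current) >= target:
            chunks.append(current)
            current = ""
    if current:
        if chunks and len(current) < min_last:
            chunks[-1] += " " + current   # merge instead of padding with gibberish
        else:
            chunks.append(current)
    return chunks
```

Appending gibberish would shift the embedding in unpredictable ways; merging (or simply skipping chunks under a length threshold at query time) keeps the semantics intact.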
2023-08-09T23:37:31
https://www.reddit.com/r/LocalLLaMA/comments/15mvi6y/vector_search_padding/
Alert_Record5063
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mvi6y
false
null
t3_15mvi6y
/r/LocalLLaMA/comments/15mvi6y/vector_search_padding/
false
false
self
1
null
GPU shortage in main cloud providers
1
I am trying to launch the [text generation inference server](https://github.com/huggingface/text-generation-inference) from Hugging Face on a VM with a decent GPU (Nvidia T4), but it's hard to find the resources. I first tried GCP, but no resources were available in either the US or Europe. Same problem with AWS. With Azure I could get a GPU, but I'm not sure I can really use it (Nvidia driver not found, etc.). Are you seeing the same GPU shortage? How do you deploy your LLMs? Do you use only CPU-based inference servers?
2023-08-10T00:04:44
https://www.reddit.com/r/LocalLLaMA/comments/15mw65r/gpu_shortage_in_main_cloud_providers/
No_Palpitation7740
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mw65r
false
null
t3_15mw65r
/r/LocalLLaMA/comments/15mw65r/gpu_shortage_in_main_cloud_providers/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Imdm79xgxA9kvl-lV3xwf5z21dQlmO1EmbOBRPo2izk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Z0ObBAMIVqlMzQd42XWHg1gsBtWbTbDlULLbvzFQP_s.jpg?width=108&crop=smart&auto=webp&s=35d5961c5aac9a9636856245f9c1181fd5d37be9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Z0ObBAMIVqlMzQd42XWHg1gsBtWbTbDlULLbvzFQP_s.jpg?width=216&crop=smart&auto=webp&s=323961b952e3db830611ccd06ceed80d7a83f1b2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Z0ObBAMIVqlMzQd42XWHg1gsBtWbTbDlULLbvzFQP_s.jpg?width=320&crop=smart&auto=webp&s=4247c0e29f1f385c412063f9ddc873a225990026', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Z0ObBAMIVqlMzQd42XWHg1gsBtWbTbDlULLbvzFQP_s.jpg?width=640&crop=smart&auto=webp&s=674bce08b732c2facba0d722601c70b0a61882c5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Z0ObBAMIVqlMzQd42XWHg1gsBtWbTbDlULLbvzFQP_s.jpg?width=960&crop=smart&auto=webp&s=d5eec314452b2e71a3b12474be053ac1033705e5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Z0ObBAMIVqlMzQd42XWHg1gsBtWbTbDlULLbvzFQP_s.jpg?width=1080&crop=smart&auto=webp&s=10497df7f651970838a34cd7b6fbe7628aca6843', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Z0ObBAMIVqlMzQd42XWHg1gsBtWbTbDlULLbvzFQP_s.jpg?auto=webp&s=e9547982230c5f865451dcd0988d53c7ed6a2d12', 'width': 1200}, 'variants': {}}]}
Success! I've managed to install Koboldcpp on my system. Now I want to learn how to use these tools effectively. Please advise.
1
What I want more than anything is a realistic expectation of what this tool can actually do, so I can tell whether I'm operating it poorly or not. I have not had great experiences so far. The power and creativity seem to be there, but I don't feel like I understand how to harness it. Could those of you who are good with this stuff share a prompt, model, settings, etc. that you know work pretty damn well, just so I can tell what decent performance and coherence look like and what they don't? I would very much appreciate it. I'm open to suggestions for any and all modes, and I will fetch whatever model you suggest.
2023-08-10T00:42:25
https://www.reddit.com/r/LocalLLaMA/comments/15mx2b2/success_ive_managed_to_install_koboldcpp_on_my/
wh33t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mx2b2
false
null
t3_15mx2b2
/r/LocalLLaMA/comments/15mx2b2/success_ive_managed_to_install_koboldcpp_on_my/
false
false
self
1
null
Can an RTX 4090 load a 30b model?
1
My machine is running Windows 11 and has 32GB of system RAM with an RTX 4090. I have tried loading 30b models and can't load them. I think I've tried 3 so far; I don't remember which. Can anyone give me a quick rundown of how to load a 30b model? I want to load the most capable 30b model per the https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard leaderboard. A step-by-step guide or some hints would be appreciated, thanks.
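Back-of-the-envelope: a 30b model in fp16 is roughly 60 GB of weights, so it can never fit in 24 GB of VRAM; quantized to 4-bit it is roughly 17-20 GB and does fit. A sketch of on-the-fly 4-bit loading with bitsandbytes via transformers; the repo name is only an example, and pre-quantized GPTQ/GGML builds of the leaderboard models work similarly:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huggyllama/llama-30b"  # assumption: substitute whichever 30b repo you actually want

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_4bit=True,        # quantize on the fly with bitsandbytes (~17-20 GB)
    device_map="auto",        # spill any overflow to system RAM automatically
    torch_dtype=torch.float16,
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```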
2023-08-10T00:46:37
https://www.reddit.com/r/LocalLLaMA/comments/15mx5us/can_an_rtx_4090_load_a_30b_model/
no_witty_username
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mx5us
false
null
t3_15mx5us
/r/LocalLLaMA/comments/15mx5us/can_an_rtx_4090_load_a_30b_model/
false
false
self
1
null
Hugging Face for LoRAs?
1
Is there an equivalent to Civitai or Hugging Face when it comes to finding (language) LoRAs or character files/contexts that others have made? I haven't trained anything yet, but I would like to contribute somewhere when I do.
2023-08-10T01:21:24
https://www.reddit.com/r/LocalLLaMA/comments/15mxynh/huggingface_for_loras/
prondis
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15mxynh
false
null
t3_15mxynh
/r/LocalLLaMA/comments/15mxynh/huggingface_for_loras/
false
false
self
1
null
Nvidia reveals new A.I. chip, says costs of running LLMs will 'drop significantly'
1
2023-08-10T01:22:26
https://www.cnbc.com/2023/08/08/nvidia-reveals-new-ai-chip-says-cost-of-running-large-language-models-will-drop-significantly-.html
throwaway_ghast
cnbc.com
1970-01-01T00:00:00
0
{}
15mxzip
false
null
t3_15mxzip
/r/LocalLLaMA/comments/15mxzip/nvidia_reveals_new_ai_chip_says_costs_of_running/
false
false
https://a.thumbs.redditm…F2iA2A5yi7u4.jpg
1
{'enabled': False, 'images': [{'id': '9VcesSKYyBRhuBA2LT8loyzkyfc9jo3Df-gtyqBavzo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/GtsqazVABjGcAOB7jen39y3IkJi9qNVWGphzLxcQdxY.jpg?width=108&crop=smart&auto=webp&s=4994bf64a183526138ed30da59a57db49c60b5e4', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/GtsqazVABjGcAOB7jen39y3IkJi9qNVWGphzLxcQdxY.jpg?width=216&crop=smart&auto=webp&s=e7fc29f5533e942734124f740e1d1438a4b0533d', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/GtsqazVABjGcAOB7jen39y3IkJi9qNVWGphzLxcQdxY.jpg?width=320&crop=smart&auto=webp&s=406e860d0b41b761f576fb90108af9674d010d34', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/GtsqazVABjGcAOB7jen39y3IkJi9qNVWGphzLxcQdxY.jpg?width=640&crop=smart&auto=webp&s=040286edc7de0a58db3236f95a20c4a9870564db', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/GtsqazVABjGcAOB7jen39y3IkJi9qNVWGphzLxcQdxY.jpg?width=960&crop=smart&auto=webp&s=cf625a125e2f60b30cfcf31cec36a755bcbae03e', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/GtsqazVABjGcAOB7jen39y3IkJi9qNVWGphzLxcQdxY.jpg?width=1080&crop=smart&auto=webp&s=e28188b032b885237334f4839ec271cae5838ccb', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/GtsqazVABjGcAOB7jen39y3IkJi9qNVWGphzLxcQdxY.jpg?auto=webp&s=cbe0c48ead87ea8ce5821a9c8ce22182b13ca1f5', 'width': 1920}, 'variants': {}}]}
Announcing The best 7b model out there "orca-mini-v3-7b"
1
Enjoy... [https://huggingface.co/psmathur/orca_mini_v3_7b](https://huggingface.co/psmathur/orca_mini_v3_7b) Here are the eval scores from the model card; the HuggingFace OpenLLM Leaderboard is slow in processing any submitted model. https://preview.redd.it/syga31ahf7hb1.png?width=1476&format=png&auto=webp&s=b3cda5d84ba794ce687a09cd14b96dad0f013eb2
2023-08-10T03:47:00
https://www.reddit.com/r/LocalLLaMA/comments/15n15x9/announcing_the_best_7b_model_out_there/
Remarkable-Spite-107
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15n15x9
false
null
t3_15n15x9
/r/LocalLLaMA/comments/15n15x9/announcing_the_best_7b_model_out_there/
false
false
https://b.thumbs.redditm…Vq3G3HcKxOmc.jpg
1
{'enabled': False, 'images': [{'id': 'guUYdBTPGqCETHbTWC0pfjVVAtLWLBdWdpaqzhoOkeg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/0FMo66kg0bfyhA3loH3xdtLm_ibnKI7wP3CJIAMZOxQ.jpg?width=108&crop=smart&auto=webp&s=8913ff9f5f5cc518b1ca10f3e1d537a40c5c6a36', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/0FMo66kg0bfyhA3loH3xdtLm_ibnKI7wP3CJIAMZOxQ.jpg?width=216&crop=smart&auto=webp&s=106734868c447aa7befd61ae71c78df6fe9eefa3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/0FMo66kg0bfyhA3loH3xdtLm_ibnKI7wP3CJIAMZOxQ.jpg?width=320&crop=smart&auto=webp&s=855a7b9855eda1daaf931aa6a9cfefe4ebd2ce61', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/0FMo66kg0bfyhA3loH3xdtLm_ibnKI7wP3CJIAMZOxQ.jpg?width=640&crop=smart&auto=webp&s=65359d911e1076828f2cfe0c38dccdea3f7382f6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/0FMo66kg0bfyhA3loH3xdtLm_ibnKI7wP3CJIAMZOxQ.jpg?width=960&crop=smart&auto=webp&s=0c92e8629c75890581b990fdb6557ae50142266a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/0FMo66kg0bfyhA3loH3xdtLm_ibnKI7wP3CJIAMZOxQ.jpg?width=1080&crop=smart&auto=webp&s=ccc8be25e8eeffe4ed09e238f84fd0e0e252da98', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/0FMo66kg0bfyhA3loH3xdtLm_ibnKI7wP3CJIAMZOxQ.jpg?auto=webp&s=979bde2cd202d2dc0e40d51485905b6ab507e88a', 'width': 1200}, 'variants': {}}]}
How to finetune LLM for text classification
1
[removed]
2023-08-10T03:49:02
https://www.reddit.com/r/LocalLLaMA/comments/15n17d2/how_to_finetune_llm_for_text_classfication/
KneeNo79
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15n17d2
false
null
t3_15n17d2
/r/LocalLLaMA/comments/15n17d2/how_to_finetune_llm_for_text_classfication/
false
false
self
1
null
Running out of VRAM?
1
Hi all, I'm trying to load the llama-2 model found [here](https://huggingface.co/TheBloke/Llama-2-13B-Chat-fp16) on a Windows Server machine, which has over 22 GB of VRAM. However, when I try to use textgen webUI to load the model, it keeps running out of VRAM, and I get this error: [How is it running out of memory?](https://preview.redd.it/gl6s0bf9j7hb1.png?width=717&format=png&auto=webp&s=a1a8b0dc054737e1c6805273f7b588f869c8ccea) I'm super new to LLMs, and so I'm not a hundred percent sure what's going on. Can someone tell me where in the documentation I can look, and what I can do to possibly fix this issue? Thanks.
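The arithmetic explains the error: a 13B model in fp16 needs about 26 GB for weights alone, which already exceeds 22 GB before the KV cache is counted. Loading in 8-bit (or 4-bit) roughly halves (or quarters) that. A sketch of the underlying transformers call using the same repo; textgen webUI exposes similar quantized-loading options in its model loader:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Llama-2-13B-Chat-fp16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_8bit=True,   # ~1 byte per weight instead of 2, so roughly 13 GB for 13B
    device_map="auto",   # anything that still doesn't fit is offloaded to CPU RAM
)
```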
2023-08-10T04:07:14
https://www.reddit.com/r/LocalLLaMA/comments/15n1ke9/running_out_of_vram/
Milk_No_Titties
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15n1ke9
false
null
t3_15n1ke9
/r/LocalLLaMA/comments/15n1ke9/running_out_of_vram/
false
false
https://b.thumbs.redditm…WP3KdKcRK7nM.jpg
1
{'enabled': False, 'images': [{'id': '278vv-gaDInPJrV4meg7BOOTUW7YKO_D3QK8c_dNrgs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/XYGLHvjGZZJyKbMyeVx1ao_mrT3n1nlSA_OsnCF7paY.jpg?width=108&crop=smart&auto=webp&s=ba70eae8eed4466418501d10d195b4a508560e16', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/XYGLHvjGZZJyKbMyeVx1ao_mrT3n1nlSA_OsnCF7paY.jpg?width=216&crop=smart&auto=webp&s=089c2aa294d9b4ee778bb3d83e692370510ffb79', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/XYGLHvjGZZJyKbMyeVx1ao_mrT3n1nlSA_OsnCF7paY.jpg?width=320&crop=smart&auto=webp&s=270da8b561adb3c7fd7395c7d9eea75bab92bc68', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/XYGLHvjGZZJyKbMyeVx1ao_mrT3n1nlSA_OsnCF7paY.jpg?width=640&crop=smart&auto=webp&s=321f94f3a108786bc113af342a8b431d5054ec69', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/XYGLHvjGZZJyKbMyeVx1ao_mrT3n1nlSA_OsnCF7paY.jpg?width=960&crop=smart&auto=webp&s=905b58b145f0881b2e97e6d2752ba9ae32641dce', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/XYGLHvjGZZJyKbMyeVx1ao_mrT3n1nlSA_OsnCF7paY.jpg?width=1080&crop=smart&auto=webp&s=9ee4848878998564d40bfe0d6e0d1451cdd24898', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/XYGLHvjGZZJyKbMyeVx1ao_mrT3n1nlSA_OsnCF7paY.jpg?auto=webp&s=f844c5ebde972712c202968a9db4751739918cfa', 'width': 1200}, 'variants': {}}]}
Resume fine-tune from checkpoint with qlora
1
I was fine-tuning a model when the power went off. How do I resume fine-tuning from the latest checkpoint? I passed in --checkpoint_dir=output/checkpoint-9250 and --resume_from_checkpoint=True, but they don't seem to work and training starts again from step 1/10000: python3 qlora.py --model_name_or_path huggyllama/llama-7b --checkpoint_dir output/checkpoint-9250 --dataset training_data.json --load_in_4bit=True --max_memory=8100 --resume_from_checkpoint=True
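Whether that flag does anything depends on the script wiring it into the trainer; in stock Hugging Face code the resume happens at the `trainer.train()` call, not at argument parsing. A sketch of the generic mechanism (not the qlora.py CLI itself); `model` and `train_dataset` are placeholders for the objects your script already builds:

```python
from transformers import Trainer, TrainingArguments

model = ...          # placeholder: the (Q)LoRA-wrapped model your script constructs
train_dataset = ...  # placeholder: the tokenized dataset your script constructs

args = TrainingArguments(
    output_dir="output",              # the directory that contains checkpoint-9250
    num_train_epochs=3,
    per_device_train_batch_size=1,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)

# Either point at the exact folder...
trainer.train(resume_from_checkpoint="output/checkpoint-9250")
# ...or pass True and let the Trainer find the latest checkpoint in output_dir:
# trainer.train(resume_from_checkpoint=True)
```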
2023-08-10T04:16:12
https://www.reddit.com/r/LocalLLaMA/comments/15n1qrr/resume_finetune_from_checkpoint_with_qlora/
QuantumTyping33
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15n1qrr
false
null
t3_15n1qrr
/r/LocalLLaMA/comments/15n1qrr/resume_finetune_from_checkpoint_with_qlora/
false
false
self
1
null
Return Source Documents (PDFs) with source snippets.
1
Hi, in a PDF Q&A chatbot, is it possible to return not just the source documents but also a snippet of the passages the LLM was given for creating the answer? I know there's a function for that in LangChain, but in my case it only gives me the name of the PDF, not a snippet of the lines that were used.
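If the chain is built with `return_source_documents=True`, LangChain hands back the retrieved `Document` objects themselves: the snippet lives in `page_content`, while the PDF name and page live in `metadata`. A sketch assuming a local FAISS index already built from the PDFs and a llama.cpp model; the paths are placeholders:

```python
from langchain.chains import RetrievalQA
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.llms import LlamaCpp
from langchain.vectorstores import FAISS

db = FAISS.load_local("pdf_index", HuggingFaceEmbeddings())            # placeholder index dir
llm = LlamaCpp(model_path="models/llama-2-13b-chat.ggmlv3.q4_0.bin")   # placeholder model path

qa = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=db.as_retriever(search_kwargs={"k": 3}),
    return_source_documents=True,
)

result = qa({"query": "What does the contract say about termination?"})
print(result["result"])                       # the answer
for doc in result["source_documents"]:
    print(doc.metadata.get("source"),         # PDF file name
          doc.metadata.get("page"),           # page number (set by e.g. PyPDFLoader)
          doc.page_content[:200])             # the actual snippet the LLM saw
```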
2023-08-10T05:30:15
https://www.reddit.com/r/LocalLLaMA/comments/15n36uh/return_source_documents_pdfs_with_a_source/
jnk_str
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15n36uh
false
null
t3_15n36uh
/r/LocalLLaMA/comments/15n36uh/return_source_documents_pdfs_with_a_source/
false
false
self
1
null
RLHF training on AMD GPUs
1
Has anyone been able to run RLHF training code such as AlpacaFarm on AMD MI200 GPUs? There seems to be an issue with the latest DeepSpeed version on ROCm.
2023-08-10T05:36:02
https://www.reddit.com/r/LocalLLaMA/comments/15n3ay6/rlhf_training_on_amd_gpus/
HopeElephant
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15n3ay6
false
null
t3_15n3ay6
/r/LocalLLaMA/comments/15n3ay6/rlhf_training_on_amd_gpus/
false
false
self
1
null
LongLLaMA-Instruct v1.1 32K
1
A new 3B instruct model with 32K context just dropped, with pretty cool numbers (55% lm-eval, 12% HumanEval pass@1): [https://huggingface.co/syzymon/long_llama_3b_instruct](https://huggingface.co/syzymon/long_llama_3b_instruct) You can chat with it on a free Colab GPU (thanks to bf16 inference): [https://colab.research.google.com/github/CStanKonrad/long_llama/blob/main/long_llama_instruct_colab.ipynb](https://colab.research.google.com/github/CStanKonrad/long_llama/blob/main/long_llama_instruct_colab.ipynb) Twitter thread: [https://twitter.com/s_tworkowski/status/1687620785379360768](https://twitter.com/s_tworkowski/status/1687620785379360768)
2023-08-10T06:35:00
https://www.reddit.com/r/LocalLLaMA/comments/15n4ekv/longllamainstruct_v11_32k/
syzymon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15n4ekv
false
null
t3_15n4ekv
/r/LocalLLaMA/comments/15n4ekv/longllamainstruct_v11_32k/
false
false
self
1
{'enabled': False, 'images': [{'id': 'KntXC5DHxcOtA2LNoK-d9nxtxaDqYRyCJspNwurd3eg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MVP62i4-xoYp6WdFPnhEJdVE4sp8a9lPsKS__DVXhQo.jpg?width=108&crop=smart&auto=webp&s=592a5e3bf293b0d61761a5a99a40121f9457606a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/MVP62i4-xoYp6WdFPnhEJdVE4sp8a9lPsKS__DVXhQo.jpg?width=216&crop=smart&auto=webp&s=6608c26fc9194d5ce59069fac81e277dfb0b10bf', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/MVP62i4-xoYp6WdFPnhEJdVE4sp8a9lPsKS__DVXhQo.jpg?width=320&crop=smart&auto=webp&s=3bf13f342627220bff91973609f78029ee574a38', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/MVP62i4-xoYp6WdFPnhEJdVE4sp8a9lPsKS__DVXhQo.jpg?width=640&crop=smart&auto=webp&s=55301173ac456897c19bd3d14531f0597ba045db', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/MVP62i4-xoYp6WdFPnhEJdVE4sp8a9lPsKS__DVXhQo.jpg?width=960&crop=smart&auto=webp&s=24ad906c34ff42757964b7add5079ae6e282ac4f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/MVP62i4-xoYp6WdFPnhEJdVE4sp8a9lPsKS__DVXhQo.jpg?width=1080&crop=smart&auto=webp&s=38a928d73b479b5a7dd33b559e14f05e9e369495', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/MVP62i4-xoYp6WdFPnhEJdVE4sp8a9lPsKS__DVXhQo.jpg?auto=webp&s=28ba2e04403aa9320b592de2f086e9d5b505ba6a', 'width': 1200}, 'variants': {}}]}