Dataset schema (one record per post):
- title: string, 1-300 chars
- score: int64, 0-8.54k
- selftext: string, 0-40k chars
- created: timestamp[ns]
- url: string, 0-780 chars
- author: string, 3-20 chars
- domain: string, 0-82 chars
- edited: timestamp[ns]
- gilded: int64, 0-2
- gildings: string, 7 classes
- id: string, 7 chars
- locked: bool, 2 classes
- media: string, 646-1.8k chars
- name: string, 10 chars
- permalink: string, 33-82 chars
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, 4-213 chars
- ups: int64, 0-8.54k
- preview: string, 301-5.01k chars
Embeddings or Instructor Embeddings?
1
Hello, I was wondering whether I should use plain HuggingFace Embeddings or Instructor Embeddings for my document Q&A chatbot with LangChain. Like this: [https://python.langchain.com/docs/integrations/text_embedding/instruct_embeddings](https://python.langchain.com/docs/integrations/text_embedding/instruct_embeddings) or that: [https://python.langchain.com/docs/integrations/text_embedding/huggingfacehub](https://python.langchain.com/docs/integrations/text_embedding/huggingfacehub) Do you guys have any experience with which of these works best for this use case? (See the sketch below.)
2023-07-26T12:04:04
https://www.reddit.com/r/LocalLLaMA/comments/15a3yef/embeddings_or_instructor_embeddings/
jnk_str
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15a3yef
false
null
t3_15a3yef
/r/LocalLLaMA/comments/15a3yef/embeddings_or_instructor_embeddings/
false
false
self
1
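For the question in the post above, here is a minimal side-by-side sketch of the two LangChain wrappers. The model names and the instruction string are placeholder assumptions, not recommendations; Instructor models are steered per task by the instruction, at some speed cost.

```python
# Sketch: the two LangChain embedding wrappers linked above.
# Assumes `pip install langchain sentence-transformers InstructorEmbedding`.
from langchain.embeddings import HuggingFaceEmbeddings, HuggingFaceInstructEmbeddings

# Plain sentence-transformers embeddings: one vector per text, no task hint
hf = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

# Instructor embeddings: the instruction string biases the vector toward retrieval
instruct = HuggingFaceInstructEmbeddings(
    model_name="hkunlp/instructor-large",
    query_instruction="Represent the question for retrieving supporting documents: ",
)

query = "What does the contract say about termination?"
print(len(hf.embed_query(query)), len(instruct.embed_query(query)))
```

Both plug into the same vector stores, so the practical way to decide is to index a sample of your documents with each and compare retrieval hits on a handful of known questions.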
Are there any webui runpod templates that work "out of the box" for llama-2 GPTQs yet?
1
Or is it still necessary to fiddle manually, installing or setting stuff up? I saw that u/TheBloke's templates were updated June 28th, before Llama 2, and his Llama 2 pages on Hugging Face say the latest version of Transformers / ExLlama is required. (Or maybe I just need to "git pull / pip install" a new version on the pod instance and it works?) (I'm interested in the 70B versions, which I guess won't fit in my local 24GB.) Thanks
2023-07-26T13:02:40
https://www.reddit.com/r/LocalLLaMA/comments/15a5al2/is_there_any_webui_runpod_templates_that_works/
knoodrake
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15a5al2
false
null
t3_15a5al2
/r/LocalLLaMA/comments/15a5al2/is_there_any_webui_runpod_templates_that_works/
false
false
self
1
null
New 3B model with 8K context
1
“Cerebras and Opentensor are pleased to announce BTLM-3B-8K (Bittensor Language Model), a new state-of-the-art 3 billion parameter open-source language model that achieves breakthrough accuracy across a dozen AI benchmarks.

BTLM-3B-8K Highlights:
- 7B level model performance in a 3B model
- State of the art 3B parameter model
- Optimized for long sequence length inference 8K or more
- First model trained on the SlimPajama, the largest fully deduplicated open dataset
- Runs on devices with as little as 3GB of memory when quantized to 4-bit
- Apache 2.0 license for commercial use

BTLM was commissioned by the OpenTensor foundation for use on the Bittensor network. Bittensor is a blockchain based network that lets anyone contribute AI models for inference, providing a decentralized alternative to centralized model providers like OpenAI and Google. Bittensor serves over 4,000 AI models with more than 10 trillion model parameters across the network.”
2023-07-26T13:03:47
https://twitter.com/cerebrassystems/status/1683556415330213888?s=46&t=4Lg1z9tXUANCKLiHwRSk_A
Acrobatic-Site2065
twitter.com
1970-01-01T00:00:00
0
{}
15a5bjz
false
null
t3_15a5bjz
/r/LocalLLaMA/comments/15a5bjz/new_3b_model_with_8k_context/
false
false
default
1
null
Anyone running dual 3090?
1
What is the hardware setup? Do you use SLI?
2023-07-26T14:58:37
https://www.reddit.com/r/LocalLLaMA/comments/15a84b9/anyone_running_dual_3090/
Remarkable_Ad4470
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15a84b9
false
null
t3_15a84b9
/r/LocalLLaMA/comments/15a84b9/anyone_running_dual_3090/
false
false
self
1
null
Llama-2 7B-hf repeats context of question directly from input prompt, cuts off with newlines
1
[removed]
2023-07-26T15:01:38
https://www.reddit.com/r/LocalLLaMA/comments/15a878a/llama2_7bhf_repeats_context_of_question_directly/
k-ga
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15a878a
false
null
t3_15a878a
/r/LocalLLaMA/comments/15a878a/llama2_7bhf_repeats_context_of_question_directly/
false
false
self
1
null
Unveiling the Latent Potentials of Large Language Models (LLMs)
1
I've spent considerable time examining the capabilities of LLMs like GPT-4, and my findings can be summarized as:

1. **Latent Semantics in LLMs:** Hidden layers in LLMs carry a depth of meaning that has yet to be fully explored.
2. **Interpretable Representations:** By visualizing each hidden layer of LLMs as distinct vector spaces, we can employ SVMs and clustering methods to derive profound semantic properties.
3. **Power of Prompt Engineering:** Contrary to common practice, a single well-engineered prompt can drastically transform a GPT-4 model's performance. I've seen firsthand its ability to guide LLMs towards desired outputs.

Machine Learning, especially within NLP, has achieved significant milestones thanks to LLMs. These models house vast hidden layers which, if tapped into effectively, can offer us unparalleled insights into the essence of language. My PhD research delved into how vector spaces can model semantic relationships. I posit that within advanced LLMs lie constructs fundamental to human language. By deriving structured representations from LLMs using unsupervised learning techniques, we're essentially unearthing these core linguistic constructs. In my experiments, I've witnessed the rich semantic landscape LLMs possess, often overshadowing other ML techniques.

**From a standpoint of explainability:** I envision a system where each vector space dimension denotes a semantic attribute, transcending linguistic boundaries. Though still in nascent stages, I foresee a co-creative AI development environment, with humans and LLMs iterating and refining models in real-time.

While fine-tuning has its merits (I've personally fine-tuned instances and collaborated with the OpenOrca team), I've found immense value in prompt engineering. Properly designed prompts can redefine the scope of LLMs, making them apt for a variety of tasks. The potential applications of this approach are extensive. I present these ideas in the hope that the community sees their value and potential. (A small sketch of the layer-clustering idea follows below.)
2023-07-26T15:21:29
https://www.reddit.com/r/LocalLLaMA/comments/15a8ppj/unveiling_the_latent_potentials_of_large_language/
hanjoyoutaku
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15a8ppj
false
null
t3_15a8ppj
/r/LocalLLaMA/comments/15a8ppj/unveiling_the_latent_potentials_of_large_language/
false
false
self
1
null
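A rough sketch of the "hidden layers as vector spaces" idea from the post above. GPT-2 stands in for a larger LLM, and the layer index and cluster count are arbitrary assumptions for illustration.

```python
# Sketch: pull per-layer hidden states from an open model and cluster them.
import torch
from sklearn.cluster import KMeans
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token  # gpt2 ships without a pad token
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)

sentences = ["The bank raised interest rates.",
             "She sat on the river bank.",
             "Rates fell sharply this quarter."]
with torch.no_grad():
    enc = tok(sentences, return_tensors="pt", padding=True)
    out = model(**enc)

# out.hidden_states: tuple of (n_layers + 1) tensors, each (batch, seq, dim).
# Mask out padding, then mean-pool one mid-depth layer into sentence vectors.
mask = enc["attention_mask"].unsqueeze(-1)
vecs = (out.hidden_states[6] * mask).sum(1) / mask.sum(1)
print(KMeans(n_clusters=2, n_init=10).fit_predict(vecs.numpy()))
```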
Is there any model or any tool (for free) that can read financial statements of multiple companies in pdf and create a comparison of their financial performance
1
I've been trying to extract text from PDFs using PyPDF2, but it seems like it's not reaching 100% accuracy. Is there any model or any Python code that already does this? (See the sketch below.)
2023-07-26T15:57:52
https://www.reddit.com/r/LocalLLaMA/comments/15a9nl0/is_there_any_model_or_any_tool_for_free_that_can/
paulus_aurellius
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15a9nl0
false
null
t3_15a9nl0
/r/LocalLLaMA/comments/15a9nl0/is_there_any_model_or_any_tool_for_free_that_can/
false
false
self
1
null
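For the extraction step above, a minimal sketch with the PyPDF2 3.x API (the file name is a placeholder). `extract_text()` is lossy on many layouts, so pdfplumber, or OCR for scanned statements, are common fallbacks; comparing the extracted figures across companies is then a separate problem on top.

```python
# Sketch: pull raw text out of a statement PDF with PyPDF2 (>= 3.0 API).
from PyPDF2 import PdfReader

reader = PdfReader("statement.pdf")  # placeholder file name
text = "\n".join(page.extract_text() or "" for page in reader.pages)
print(text[:500])
```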
eGPU to train / fine-tune models
1
Just wondering if anyone is using an eGPU to fine-tune models. What's your setup?
2023-07-26T16:18:26
https://www.reddit.com/r/LocalLLaMA/comments/15aa724/egpu_to_train_fine_tune_models/
paulus_aurellius
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15aa724
false
null
t3_15aa724
/r/LocalLLaMA/comments/15aa724/egpu_to_train_fine_tune_models/
false
false
self
1
null
Beginner Oobabooga Chatbot Question - Verbosity goes off the rails.
1
[removed]
2023-07-26T16:41:08
https://www.reddit.com/r/LocalLLaMA/comments/15aasem/beginner_oobabooga_chatbot_question_verbosity/
decker12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15aasem
false
null
t3_15aasem
/r/LocalLLaMA/comments/15aasem/beginner_oobabooga_chatbot_question_verbosity/
false
false
self
1
null
Is training inside ooba with load-in-4bit the same as QLoRA ?
1
I know the training tab uses LoRA, but is LoRA + Accelerate 4-bit all I need to do QLoRA, or am I missing something? (See the sketch below.)
2023-07-26T16:54:47
https://www.reddit.com/r/LocalLLaMA/comments/15ab54p/is_training_inside_ooba_with_loadin4bit_the_same/
hurrytewer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ab54p
false
null
t3_15ab54p
/r/LocalLLaMA/comments/15ab54p/is_training_inside_ooba_with_loadin4bit_the_same/
false
false
self
1
null
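Not a definitive answer, but for comparison, a sketch of the QLoRA paper's recipe outside ooba: NF4 4-bit quantization via bitsandbytes plus a LoRA adapter via peft. Whether ooba's load-in-4bit path sets the same flags (NF4, double quantization, paged optimizer) is exactly the open question, so treat the values below as assumptions to check against.

```python
# Sketch: the canonical QLoRA setup with transformers + peft + bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # QLoRA's NormalFloat4
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapter weights should be trainable
```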
Looking for suggestions on detecting user intent. Trying to help users complete varying tasks.
1
I've set up very rote chatbot and help systems in the past based on large decision trees, but it always ended up as a game of catch-up where we would find new things to throw into the tree. We currently have a huge knowledge base full of mildly structured lessons learned. The platform we're using for our 'help' is being decommissioned for various reasons, so we are starting fresh, and thought it would be a good opportunity to investigate LLM solutions. The end goal is to have Clippy-like recommendations helping users complete tasks. I'm currently on a decent-sized dev team, but this is a bit out of our wheelhouse. Sorry if these questions have already been answered; we're still getting our bearings on all the new terms. (See the sketch below.)
2023-07-26T17:09:29
https://www.reddit.com/r/LocalLLaMA/comments/15abj3v/looking_for_suggestions_on_detecting_user_intent/
chris480
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15abj3v
false
null
t3_15abj3v
/r/LocalLLaMA/comments/15abj3v/looking_for_suggestions_on_detecting_user_intent/
false
false
self
1
null
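One lightweight baseline worth trying before a full LLM stack, sketched below: embed the knowledge-base lessons once, match user utterances by cosine similarity, and fall back to open-ended help below a confidence threshold. The model name and threshold are assumptions for illustration.

```python
# Sketch: embedding-based intent matching against a knowledge base.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
kb = ["How to reset a password", "How to request hardware", "How to file a ticket"]
kb_vecs = model.encode(kb, convert_to_tensor=True)

query = "i forgot my login and can't get in"
scores = util.cos_sim(model.encode(query, convert_to_tensor=True), kb_vecs)[0]
best = int(scores.argmax())
if float(scores[best]) > 0.4:  # hypothetical confidence threshold
    print("suggested task:", kb[best])
else:
    print("no confident match; fall back to open-ended LLM help")
```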
Langchain custom functions with llama
1
Hey folks, it turns out that if you want to make a list of tools to be used by an LLM in LangChain, those can only be used with OpenAI. What a bummer! Do you use a local LLM with LangChain to execute big tasks? What's your experience? (See the sketch below.)
2023-07-26T18:05:58
https://www.reddit.com/r/LocalLLaMA/comments/15ad1zy/langchain_custom_functions_with_llama/
Sacksha
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ad1zy
false
null
t3_15ad1zy
/r/LocalLLaMA/comments/15ad1zy/langchain_custom_functions_with_llama/
false
false
default
1
null
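For what it's worth, the restriction seems specific to the OpenAI-functions agent; the plain ReAct agent type accepts any LLM, local Llama included, though whether a 13B model follows the ReAct format reliably is another question. A sketch with placeholder paths and parameters:

```python
# Sketch: a ReAct agent over a local GGML Llama via llama-cpp-python.
from langchain.llms import LlamaCpp
from langchain.agents import AgentType, Tool, initialize_agent

llm = LlamaCpp(model_path="./models/llama-2-13b-chat.ggmlv3.q4_1.bin", n_ctx=2048)

def word_count(text: str) -> str:
    return str(len(text.split()))

tools = [Tool(name="WordCount", func=word_count,
              description="Counts the words in the input text.")]

agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
                         verbose=True, handle_parsing_errors=True)
print(agent.run("How many words are in 'local llama agents'?"))
```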
Extended Guide: Instruction-tune Llama 2
1
[https://www.philschmid.de/instruction-tune-llama-2](https://www.philschmid.de/instruction-tune-llama-2)
2023-07-26T18:11:30
https://www.reddit.com/r/LocalLLaMA/comments/15ad7ct/extended_guide_instructiontune_llama_2/
MuffinB0y
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ad7ct
false
null
t3_15ad7ct
/r/LocalLLaMA/comments/15ad7ct/extended_guide_instructiontune_llama_2/
false
false
self
1
Skyrim has been modded to have LLM powered NPC conversations.
1
2023-07-26T18:20:53
https://www.reddit.com/r/skyrimvr/comments/157d5u6/nazeems_just_been_misunderstood_fus_mantella_mod/
fallingdowndizzyvr
reddit.com
1970-01-01T00:00:00
0
{}
15adgdo
false
null
t3_15adgdo
/r/LocalLLaMA/comments/15adgdo/skyrim_has_been_modded_to_have_llm_powered_npc/
false
false
https://b.thumbs.redditm…e3_YpUw4ATPM.jpg
1
AI Policy @🤗: Open ML Considerations in the EU AI Act
1
2023-07-26T18:21:44
https://huggingface.co/blog/eu-ai-act-oss
ninjasaid13
huggingface.co
1970-01-01T00:00:00
0
{}
15adh7x
false
null
t3_15adh7x
/r/LocalLLaMA/comments/15adh7x/ai_policy_open_ml_considerations_in_the_eu_ai_act/
false
false
https://b.thumbs.redditm…FCZHR9xVhzEU.jpg
1
How to Develop LLMs in a Responsible Way
1
[removed]
2023-07-26T18:26:54
https://www.reddit.com/r/LocalLLaMA/comments/15adm7j/how_to_develop_llms_in_a_responsible_way/
Ashishpatel26
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15adm7j
false
null
t3_15adm7j
/r/LocalLLaMA/comments/15adm7j/how_to_develop_llms_in_a_responsible_way/
false
false
self
1
null
Found this gem in a new dataset I am working on
1
2023-07-26T19:12:40
https://i.redd.it/tmwekja9zceb1.png
pokeuser61
i.redd.it
1970-01-01T00:00:00
0
{}
15aet79
false
null
t3_15aet79
/r/LocalLLaMA/comments/15aet79/found_this_gem_in_a_new_dataset_i_am_working_on/
false
false
https://b.thumbs.redditm…udkeJPr5ttGA.jpg
1
Is it possible to run Llama 2 without a GPU?
1
I have access to a grid of machines, some very powerful with up to 80 CPUs and >1TB of RAM, but none has a GPU. Is it possible to run Llama 2 in this setup, either with high thread counts or distributed? I'd like to build some coding tools: simple things like reformatting to our coding style, generating #includes, etc. So it doesn't have to be super fast, but also not super slow. (See the sketch below.)
2023-07-26T19:20:42
https://www.reddit.com/r/LocalLLaMA/comments/15af0la/is_it_possible_to_run_llama_2_without_a_gpu/
patery
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15af0la
false
null
t3_15af0la
/r/LocalLLaMA/comments/15af0la/is_it_possible_to_run_llama_2_without_a_gpu/
false
false
self
1
null
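CPU-only inference is exactly what llama.cpp targets, so yes; a sketch with the llama-cpp-python bindings on a 4-bit GGML quant (path and thread count are placeholders). Token generation tends to saturate on memory bandwidth well before 80 cores, so treat n_threads as a knob to benchmark rather than max out.

```python
# Sketch: CPU-only Llama 2 inference via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-13b.ggmlv3.q4_0.bin",  # placeholder path
    n_ctx=2048,
    n_threads=16,  # often faster than using every core; benchmark on your grid
)
out = llm("Rewrite this include block in our house style:\n#include <vector>\n",
          max_tokens=128, temperature=0.2)
print(out["choices"][0]["text"])
```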
Can I run the 4 or 8 bit TheBloke/llama-2-70b-Guanaco-QLoRA-fp16 on a 3090?
1
[removed]
2023-07-26T19:41:12
https://www.reddit.com/r/LocalLLaMA/comments/15afk7v/can_i_run_the_4_or_8_bit/
trv893
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15afk7v
false
null
t3_15afk7v
/r/LocalLLaMA/comments/15afk7v/can_i_run_the_4_or_8_bit/
false
false
self
1
null
AGI local
1
[removed]
2023-07-26T19:49:26
https://www.reddit.com/r/LocalLLaMA/comments/15afrze/agi_local/
ComparisonTotal1016
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15afrze
false
null
t3_15afrze
/r/LocalLLaMA/comments/15afrze/agi_local/
false
false
self
1
null
What's the matter with GGML models?
1
I'm pretty new to running Llama locally on my 'mere' 8GB NVIDIA card using ooba/webui. I'm using GPTQ models like Luna 7B 4-bit and others, and they run decently at 30 tk/sec using ExLlama. It's fun and all, but... Since some of you told me that GGML models are far superior to even the same-bit GPTQ models, I tried running some GGML models and offloading layers onto the GPU via the loader options, but it is still extremely slow. The token generation is at 1-2 tk/sec, and the time it needs to start generating takes more than a minute. I couldn't get **ANY** GGML model to run as fast as the GPTQ models. With that being said, what's the hype behind GGML models if they run like crap? Or maybe I'm just using the wrong options? Appreciate the help! (See the sketch below.)
2023-07-26T20:01:54
https://www.reddit.com/r/LocalLLaMA/comments/15ag3sh/whats_the_matter_with_ggml_models/
Fusseldieb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ag3sh
false
null
t3_15ag3sh
/r/LocalLLaMA/comments/15ag3sh/whats_the_matter_with_ggml_models/
false
false
self
1
null
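A common culprit behind numbers like these: the llama-cpp-python (or ooba) build was compiled without cuBLAS, in which case the offloaded layers silently run on CPU. A sketch of a known-good setup to compare against; the layer count for an 8GB card is a rough assumption.

```python
# Sketch: GGML with GPU offload. The wheel must be built with cuBLAS first:
#   CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python --force-reinstall
from llama_cpp import Llama

llm = Llama(
    model_path="./models/luna-7b.ggmlv3.q4_K_M.bin",  # placeholder path
    n_gpu_layers=32,  # a 7B q4 can often offload most layers on 8GB of VRAM
    n_threads=8,
)
print(llm("Hello", max_tokens=32)["choices"][0]["text"])
```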
What is the best model to extract data from a plain text?
1
With Llama 2, can I extract personal data like names of persons, phone numbers, etc. from plain text? (See the sketch below.)
2023-07-26T20:04:03
https://www.reddit.com/r/LocalLLaMA/comments/15ag5vw/what_is_the_best_model_to_extract_data_from_a/
kontostamas
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ag5vw
false
null
t3_15ag5vw
/r/LocalLLaMA/comments/15ag5vw/what_is_the_best_model_to_extract_data_from_a/
false
false
self
1
null
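A sketch of the usual prompt-based route with a Llama 2 chat model: ask for strict JSON and parse it, while keeping a plain regex for phone numbers, where a regex beats any model on reliability. The prompt wording, model path, and regex are illustrative assumptions, and the JSON parse will fail if the model rambles.

```python
# Sketch: prompt-based entity extraction plus a regex fallback.
import json, re
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-2-13b-chat.ggmlv3.q4_0.bin", n_ctx=2048)

text = "Call Maria Lopez at 555-0142 or John Smith at 555-0199."
prompt = ("Extract all person names from the text below. "
          'Answer with JSON only, like {"names": [...]}.\n\n'
          f"Text: {text}\nJSON:")
out = llm(prompt, max_tokens=128, temperature=0)["choices"][0]["text"]
names = json.loads(out[out.index("{"): out.rindex("}") + 1])["names"]  # raises if no JSON came back
phones = re.findall(r"\b\d{3}-\d{4}\b", text)
print(names, phones)
```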
Question: Settings for extending context
1
Hey! Rookie question from a lowly enthusiast. What are the settings supposed, or theorized, to be for rope frequency base and rope frequency scale? I'm assuming the desired values of these settings depend on multiple factors: the actual size of the input prompted to a model, the configured context size (e.g. 4k or 8k), but also whether a model already has a native context of 4k or 8k. What would you experts out there consider using for these settings, when, why, and what would/could the differences be? Is there a rule of thumb that can help the simple-minded such as me? I apologize if this question has wrongful assumptions. I hope it makes sense. (See the sketch below.)
2023-07-26T20:04:10
https://www.reddit.com/r/LocalLLaMA/comments/15ag60j/question_settings_for_extending_context/
Sir_Mammut
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ag60j
false
null
t3_15ag60j
/r/LocalLLaMA/comments/15ag60j/question_settings_for_extending_context/
false
false
self
1
null
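Not an authoritative answer, but the two recipes usually cited, reduced to arithmetic. Linear scaling (position interpolation) compresses positions: set rope_freq_scale to native_ctx / target_ctx and leave the base at 10000. NTK-aware scaling instead raises the base and leaves the scale at 1.0; the exponent below is the commonly circulated heuristic with Llama's head dimension of 128, so treat both as starting points to verify with perplexity tests rather than ground truth.

```python
# Sketch: the two common RoPE context-extension settings as arithmetic.
def linear_scale(native_ctx: int, target_ctx: int) -> float:
    """Position interpolation: pass as rope_freq_scale, keep base at 10000."""
    return native_ctx / target_ctx            # e.g. 4096 -> 8192 gives 0.5

def ntk_base(native_ctx: int, target_ctx: int,
             base: float = 10000.0, head_dim: int = 128) -> float:
    """NTK-aware: pass as rope_freq_base, keep rope_freq_scale at 1.0."""
    alpha = target_ctx / native_ctx
    return base * alpha ** (head_dim / (head_dim - 2))

print(linear_scale(4096, 8192))  # rope_freq_scale = 0.5
print(ntk_base(4096, 8192))      # rope_freq_base ~ 20221
```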
Sharing Generative Media Lab Project Proposal
1
Dear colleagues, I wanted to share a project proposal I have drafted for establishing a Media Lab focused on empowering cultural and creative industries in the Global South with local generative AI capabilities. While I do not currently have the means to implement this lab myself, I hope this proposal document may serve as a useful starting point or source of ideas for anyone interested in establishing similar initiatives oriented towards empowering creative communities with generative AI. The proposal covers key aspects like objectives, methodology, timeline, estimated budget in euros, etc. It is available here: https://www.datoeneltejado.com/generative_artificial_intelligence_media_lab I openly share this proposal under a Creative Commons CC0 1.0 Universal (CC0 1.0) Public Domain Dedication license so you are welcome to reuse or adapt any portion that may be helpful in your own projects. Although this is just a conceptual document, I'm happy to provide any additional context or answer questions to the best of my ability. My goal is to contribute ideas and momentum towards seeing more accessible medialabs emerge across the Global South to explore generative AI. If this proposal helps inspire or inform real projects in any way, I would consider that a valuable outcome.
2023-07-26T20:17:33
https://www.reddit.com/r/LocalLLaMA/comments/15agimy/sharing_generative_media_lab_project_proposal/
Scared-Virus-3463
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15agimy
false
null
t3_15agimy
/r/LocalLLaMA/comments/15agimy/sharing_generative_media_lab_project_proposal/
false
false
self
1
Can some kind soul help a completely clueless newbie?
1
I'm brand new to all this and am trying to install Llama 2, specifically TheBloke_Llama-2-13B-chat-GPTQ_gptq-4bit-32g-actorder_True. I would think my computer is acceptable to run it (Ryzen 9 7900X, 32 GB RAM, 3080 Ti GPU). Usually, it doesn't load the model (gives some Traceback error). I've got it to load in GPTQ-for-LLaMa, but the only output it shows is formatted as "ttotoodddottttoddd" etc. Only twice now, it loaded in ExLlama and seemed to work. However, I have to get that Traceback error 30-40 times before it'll work. Here's my installation process:

1. Downloaded and extracted oobabooga_windows.zip
2. Ran start_windows.bat and let it successfully install everything.
3. Entered "TheBloke/Llama-2-13B-chat-GPTQ:gptq-4bit-32g-actorder_True" in Model (tab) > Download Custom Model or LoRA > Enter and waited for it to successfully download.
4. Selected the model from the Model drop-down menu (after refreshing).
5. Selected the Model Loader from the drop-down menu.
6. Clicked Load.
7. If it loaded (and that's a huge IF), I then went to the Text Generation tab, entered a prompt and pressed Enter.

I must be missing some important steps. Any ideas?
2023-07-26T20:31:02
https://www.reddit.com/r/LocalLLaMA/comments/15agvi9/can_some_kind_soul_help_a_completely_clueless/
0260n4s
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15agvi9
false
null
t3_15agvi9
/r/LocalLLaMA/comments/15agvi9/can_some_kind_soul_help_a_completely_clueless/
false
false
self
1
null
Someone made a partial orca 70b.
1
2023-07-26T20:40:26
https://huggingface.co/dfurman/llama-2-70b-dolphin-peft
a_beautiful_rhind
huggingface.co
1970-01-01T00:00:00
0
{}
15ah44m
false
null
t3_15ah44m
/r/LocalLLaMA/comments/15ah44m/someone_made_a_partial_orca_70b/
false
false
https://b.thumbs.redditm…pQUHm2XXB08Q.jpg
1
Seeing as llama-2 guanaco 70b is worse for creative writing, I'd really like to see a llama-1 65b guanaco tuned for 8k (or 16k!) context. Anyone else with me?
1
Seeing as I use local LLM's largely for creative writing (and can't stand censorship), guanaco 65b has been my go-to model since I discovered it. Initially I was EXTREMELY excited when llama-2 was released, assuming that finetunes would further improve its abilities, but as [this post](https://www.reddit.com/r/LocalLLaMA/comments/159064y/llama_2_based_guanaco_and_airoboros_70b_are_a/) correctly points out, llama-2 finetunes of guanaco and airoboros are less capable in the creative fiction department, not more, in various aspects (see previously mentioned post for the deets). I know there's a 65b 8k context tune of airoboros, but I personally prefer guanaco, and am honestly surprised that someone hasn't done an equivalent expanded context tune of that model yet. Is anyone in the same boat as me? Just trying to judge interest out there, FWIW. If I had the abilities I would do it myself but this kind of thing is way beyond my technical skills, AFAIK. Would love to hear others' thoughts on the subject! Thanks.
2023-07-26T21:06:50
https://www.reddit.com/r/LocalLLaMA/comments/15ahtaw/seeing_as_llama2_guanaco_70b_is_worse_for/
spanielrassler
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ahtaw
false
null
t3_15ahtaw
/r/LocalLLaMA/comments/15ahtaw/seeing_as_llama2_guanaco_70b_is_worse_for/
false
false
self
1
null
Intelligence Compared......
1
I have seen a few posts recently with fine-tuned models trained on a tiny set of data performing better on tasks than GPT-4. This is biased toward what you are typing after training: you are in 'logic' / 'math' / 'literature' etc. mode when testing after your small-data fine-tuning. If we compare the brain of, say, a human (pretend: GPT-4) and an orangutan (pretend: Llama 2), we will see that there are many tasks any given human can do that an orangutan can also do without understanding the context. Take the baboon from the 1800s who worked at a railway station and even carried on working after his 'owner' / work boss had died: [https://en.wikipedia.org/wiki/Jack_(baboon)](https://en.wikipedia.org/wiki/Jack_(baboon)) Here is also the video that appeared on r/nextfuckinglevel today showing an orangutan not only driving around a path but slowing down and making decisions: [https://www.reddit.com/r/nextfuckinglevel/comments/rsre82/orangutan_drives_a_golf_car/](https://www.reddit.com/r/nextfuckinglevel/comments/rsre82/orangutan_drives_a_golf_car/) What is displayed in these videos is bias: yes, it looks like these animals completely understand the system and how it works intrinsically, but the crucial part that is missing is understanding of the system. They only work when it works, so to speak. If certain bells rang in a different order or were missing a ding, it might have confused the monkey and we'd have ended up with a catastrophe. Luckily that never happened, but it was always a bigger probability than a human making the same error. These curated 7B models are essentially like teaching a spider to create webs and being amazed it can make new webs. You need to test whether the spider (fine-tuned model) can maybe fly, code, do math and logic. Only then can you say it has emergent properties better than LLMs so large they are not achievable by anyone smaller than the three big companies in the world. P.S. While looking for the orangutan video I saw this video showing a real neural pathway being made: [https://www.reddit.com/r/nextfuckinglevel/comments/15a1lrd/real_footage_of_brain_cells_forming_connections/](https://www.reddit.com/r/nextfuckinglevel/comments/15a1lrd/real_footage_of_brain_cells_forming_connections/) You can see the connections stretching out and being reinforced over time. This is the process each hidden-layer node makes with all the other nodes, a major spaghetti junction of thin and thick connections. Carrying on this analogy, you only want to know when the first bit of traffic makes it through each tunnel, then chain them together, and now you have a coherent thought.
2023-07-26T21:25:26
https://www.reddit.com/r/LocalLLaMA/comments/15aiacg/intelligence_compared/
randomrealname
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15aiacg
false
null
t3_15aiacg
/r/LocalLLaMA/comments/15aiacg/intelligence_compared/
false
false
self
1
null
Well, so much for agent-ising llama2...
1
2023-07-26T21:48:08
https://i.imgur.com/rGATQda.png
staviq
i.imgur.com
1970-01-01T00:00:00
0
{}
15aivpy
false
null
t3_15aivpy
/r/LocalLLaMA/comments/15aivpy/well_so_much_for_agentising_llama2/
false
false
https://b.thumbs.redditm…EhYGYXj7DIoY.jpg
1
Unreliable Open source LLMs
1
I consistently face this problem: for a given task and the same prompt, sometimes things work well and sometimes the model gives a shitty reply that doesn't follow what I asked. I have seen this across all QLoRA LLMs I have trained. What might be the reason? Is it too little data, or a small batch size making training unstable? Have you all seen this? What are some tricks that can be used to reduce such behaviour? Reliability becomes the central and most important point when it comes to deployment. Thanks
2023-07-26T22:22:47
https://www.reddit.com/r/LocalLLaMA/comments/15ajrx2/unreliable_open_source_llms/
Longjumping_Essay498
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ajrx2
false
null
t3_15ajrx2
/r/LocalLLaMA/comments/15ajrx2/unreliable_open_source_llms/
false
false
self
1
null
Converting BERT Embeddings to Readable Text
1
Hey everyone! Given models like TABERT for understanding tabular data and MathBERT for understanding mathematical equations, how would you convert the resultant contextualized embeddings into understandable text? I believe it can be done by adding a fresh decoder, right? Are there any known examples of converting the resulting BERT representations into readable text?
2023-07-26T22:37:48
https://www.reddit.com/r/LocalLLaMA/comments/15ak5e2/converting_bert_embeddings_to_readable_text/
psj_2908
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ak5e2
false
null
t3_15ak5e2
/r/LocalLLaMA/comments/15ak5e2/converting_bert_embeddings_to_readable_text/
false
false
self
1
null
Short guide to hosting your own llama.cpp openAI compatible web-server
1
## llama.cpp-based drop-in replacement for GPT-3.5

Hey all, I had a goal today to set up wizard-2-13b (the llama-2 based one) as my primary assistant for my daily coding tasks. I finished the set-up after some googling.

llama.cpp added a server component; this server is compiled when you run make as usual. This guide is written with Linux in mind, but for Windows it should be mostly the same other than the build step.

1. Get the latest llama.cpp release.
2. Build as usual. I used `LLAMA_CUBLAS=1 make -j`
3. Run the server: `./server -m models/wizard-2-13b/ggml-model-q4_1.bin`
4. There's a bug with the OpenAI API unfortunately; you need the `api_like_OAI.py` file from this branch: https://github.com/ggerganov/llama.cpp/pull/2383, this is it as raw txt: https://raw.githubusercontent.com/ggerganov/llama.cpp/d8a8d0e536cfdaca0135f22d43fda80dc5e47cd8/examples/server/api_like_OAI.py. You can also point to this pull request if you're familiar enough with git instead.
   - So download the file from the link above
   - Replace `examples/server/api_like_OAI.py` with the downloaded file
5. Install Python dependencies: `pip install flask requests`
6. Run the OpenAI compatibility server: `cd examples/server` and `python api_like_OAI.py`

With this set-up, you have two servers running:

1. The `./server` one with default host=localhost port=8080
2. The OpenAI API translation server, host=localhost port=8081.

You can access llama.cpp's built-in web server by going to localhost:8080 (port from `./server`). And for any plugins, web-UIs, applications etc. that can connect to an OpenAI-compatible API, you will need to configure `http://localhost:8081` as the server (see the client sketch below).

I now have a drop-in, local-first, completely private replacement that is about equivalent to GPT-3.5.

---

## The model

You can download the WizardLM model from TheBloke as usual: https://huggingface.co/TheBloke/WizardLM-13B-V1.2-GGML

There are other models worth trying:
- WizardCoder
- LLaMa2-13b-chat
- ?

---

## My experience so far

It's great. I have a Ryzen 7900X with 64GB of RAM and a 1080 Ti. I offload about 30 layers to the GPU (`./server -m models/bla -ngl 30`) and the performance is amazing with the 4-bit quantized version. I still have plenty of VRAM left.

I haven't evaluated the model itself thoroughly yet, but so far it seems very capable. I've had it write some regexes, write a story about a hard-to-solve bug (which was coherent, believable and interesting), explain some JS code from work, and it was even able to point out real issues with the code, like I expect from a model like GPT-4. The best thing about the model so far is also that it supports 8k token context!

This is no pushover model; it's the first one that really feels like it can be an alternative to GPT-4 as a coding assistant. Yes, output quality is a bit worse, but the added privacy benefit is huge. Also, it's fun. If I ever get my hands on a better GPU, who knows how great a 70b would be :)

We're getting there :D
2023-07-26T22:38:01
https://www.reddit.com/r/LocalLLaMA/comments/15ak5k4/short_guide_to_hosting_your_own_llamacpp_openai/
Combinatorilliance
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ak5k4
false
null
t3_15ak5k4
/r/LocalLLaMA/comments/15ak5k4/short_guide_to_hosting_your_own_llamacpp_openai/
false
false
self
1
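To round the guide off, a sketch of pointing the stock openai Python client (the 0.x API) at the translation server from step 6, assuming it exposes the usual /v1/chat/completions route; the key is a dummy and the model name is cosmetic for a local server.

```python
# Sketch: drive the local api_like_OAI.py server with the openai client.
import openai

openai.api_base = "http://localhost:8081/v1"
openai.api_key = "sk-local"  # unused locally, but the client requires one

resp = openai.ChatCompletion.create(
    model="wizard-2-13b",  # name is cosmetic for the local server
    messages=[{"role": "user", "content": "Write a regex that matches ISO dates."}],
)
print(resp["choices"][0]["message"]["content"])
```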
Is there any real difference between a 13b, 30b, or 60b LLM when it comes to roleplay?
1
Is there any real difference between a 13b, 30b, or 60b LLM when it comes to roleplay? Honestly, aside from some bugs and lore mistakes here and there (like characters confusing names or misinterpreting some things), a good 13b LLM seems to be really, really solid, creative and fun. Can you go any higher with a 30b or 60b LLM? I never used anything above 13b, since my PC can't handle it and bla bla bla.
2023-07-26T23:04:36
https://www.reddit.com/r/LocalLLaMA/comments/15aksvx/is_there_any_real_difference_between_a_13b_30b_or/
allmightyloser
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15aksvx
false
null
t3_15aksvx
/r/LocalLLaMA/comments/15aksvx/is_there_any_real_difference_between_a_13b_30b_or/
false
false
self
1
null
What is the limit if you have infinite money (PC build for roleplay)?
1
Let's say you are a millionaire and decide to build a PC to do roleplay in SillyTavern. So, you build a PC with the best possible processor, 64GB RAM, and when it comes to the GPUs, *four RTX 6000s*, meaning a total of 192GB of VRAM. My questions are: What would be the absolute best roleplaying LLM for this machine? How many context tokens would I have? Would long-term memory be possible? Would it be possible to use an LLM while using Stable Diffusion and an advanced text-to-speech AI at the same time? Would I be able to ever leave my house again?
2023-07-26T23:19:44
https://www.reddit.com/r/LocalLLaMA/comments/15al63i/what_is_the_limit_if_you_have_infinite_money_pc/
allmightyloser
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15al63i
false
null
t3_15al63i
/r/LocalLLaMA/comments/15al63i/what_is_the_limit_if_you_have_infinite_money_pc/
false
false
self
1
null
Hosted WizardCoder
1
Hi, is there any website that provides access to hosted open-source models like WizardCoder or the new WizardLM? Something similar to how poe.com provides access to proprietary LLMs like ChatGPT and PaLM. I tried h2oGPT, but they don't have WizardCoder or the newer Llama 2 fine-tunes. Thanks
2023-07-26T23:52:14
https://www.reddit.com/r/LocalLLaMA/comments/15alxzk/hosted_wizardcoder/
Amgadoz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15alxzk
false
null
t3_15alxzk
/r/LocalLLaMA/comments/15alxzk/hosted_wizardcoder/
false
false
self
1
null
POE is providing free hosting to fine-tuned versions of Llama.
1
Just discovered this from Quora Poe's X: >We are also interested in helping host fine-tuned versions of Llama so that developers don't have to manage a server or pay large costs if their bot gets popular. Please reach out to us at [email protected] if you are interested in participating in this program. The original X: [https://twitter.com/poe_platform/status/1684362717459812352](https://twitter.com/poe_platform/status/1684362717459812352) I guess the WizardLM team can submit their fine-tunes so that we can all try and use them more easily? For those who just want to use an uncensored model but don't want to host it locally, it may actually be good news. Not to mention, the current demo from [https://www.reddit.com/r/LocalLLaMA/comments/159bl45/official_wizardlm13bv12_released_trained_from/](https://www.reddit.com/r/LocalLLaMA/comments/159bl45/official_wizardlm13bv12_released_trained_from/) is giving out "502 Bad Gateway" when I try to visit, so I can't even try the new version.
2023-07-27T02:19:01
https://www.reddit.com/r/LocalLLaMA/comments/15ap60z/poe_is_providing_free_hosting_to_finetuned/
NickCanCode
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ap60z
false
null
t3_15ap60z
/r/LocalLLaMA/comments/15ap60z/poe_is_providing_free_hosting_to_finetuned/
false
false
self
1
Llama2 (7B-Chat) renders crazy story after simple interactive prompt: “A Short story about: a dog”
1
Run command: `./main -m ./models/7B/ggml-model-q4_0.bin --interactive-first --color -i --prompt "A Short story about: "`

The only input I gave to the prompt was the two words “a dog”.

---

A Short story about: a dog Unterscheidung between a real and a fake dog in the street.

As I was walking down the street, I saw something that made my heart race. There, in front of me, was a dog! Or so I thought. The thing is, it didn't quite look like any dog I had ever seen before. It was tall and lanky, with glowing green eyes that seemed to be looking right through me. And its fur was an odd shade of purple.

I stopped in my tracks, unsure of what to do next. The "dog" sat there placidly, not moving a muscle. I took a few steps closer, and it suddenly stood up on its hind legs. My heart skipped a beat as the creature looked directly at me with an unnerving intensity. I couldn't help but wonder if this was some kind of hallucination or dream.

But then the "dog" spoke to me in a low, rumbling voice. "Hello there," it said. "My name is Baxter. And you are?"

I stared at Baxter in shock, unsure of how to respond. Finally, I managed to stammer out a greeting. "Uh, hi...Baxter? Are you a dog?"

Baxter chuckled, the sound rumbling through his body like thunder. "Oh, no," he said. "I am not a dog. I'm a creature from another world."

My mind reeled as I tried to process this revelation. A creature from another world? How was that possible? But before I could ask any questions, Baxter spoke again. "Don't worry," he said. "I mean no harm. My planet is at war with yours, and I have come to infiltrate your society and gather intelligence."

I couldn't believe what I was hearing. An alien creature had just revealed itself to me on the street! But as I looked into Baxter's eyes, I saw something else there - a deep sadness and longing that seemed almost human.

"But why?" I asked, my voice barely above a whisper. "Why would you do such a thing?"

Baxter sighed, his body sagging under the weight of his secrets. "War has been raging on our planet for centuries," he said. "We have lost many good people, and our society is struggling to survive. We need any advantage we can get - even if it means infiltrating another world."

I felt a surge of compassion for this strange creature, who was so desperate that he would risk everything to gain the upper hand in his planet's war. "I see," I said softly. "But...what now?"

Baxter looked at me with a hint of hope in his eyes. "Now? Now we talk."

And so we did. We talked for hours, exchanging stories and secrets and learning about each other's worlds. Baxter told me about his planet, about the wars that had ravaged it for centuries, and about the desperation that had driven him to come here. I told him about my own world, about the mundane routines of my daily life, and about the strange occurrences that had brought us together.

As we talked, something shifted between us. It was as if a weight had been lifted from Baxter's shoulders, and he seemed to relax for the first time since I had met him. I felt a sense of connection with this alien creature, and I knew that our meeting had changed me forever.

And so we continued to talk, long after the sun had set and the stars had come out. We talked about the universe and its mysteries, and about the infinite possibilities that lay beyond our tiny corner of it. We talked about hope and despair, and about the resilience of the human spirit.
As I listened to Baxter's stories and shared my own, I realized that we were not so different after all. We were both searching for something - for a sense of belonging, for meaning and purpose, for a connection to something greater than ourselves. And in that moment, I knew that I would always cherish the memory of this strange and wonderful encounter with an alien from a distant world.
2023-07-27T02:55:34
https://www.reddit.com/gallery/15apxdv
Happy_Chicken9835
reddit.com
1970-01-01T00:00:00
0
{}
15apxdv
false
null
t3_15apxdv
/r/LocalLLaMA/comments/15apxdv/llama2_7bchat_renders_crazy_story_after_simple/
false
false
https://b.thumbs.redditm…_S2OW17eJE5s.jpg
1
null
Roleplay with models locally
1
Hey all, lately I've been reading a lot of posts here about roleplaying with models, both SFW and NSFW. I want to ask for further information and maybe some tutorials about how to get this going, and an explanation of what kind of roleplaying you guys do. Thanks!
2023-07-27T02:56:46
https://www.reddit.com/r/LocalLLaMA/comments/15apy9z/roleplay_with_models_locally/
ZCAY6
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15apy9z
false
null
t3_15apy9z
/r/LocalLLaMA/comments/15apy9z/roleplay_with_models_locally/
false
false
self
1
null
Where can I find the relative performance of proprietary models (e.g. ChatGPT) compared to open source models?
4
I know there is an Open LLM Leaderboard on Hugging Face, but I can't find any proprietary models there. Am I missing something?
2023-07-27T03:30:59
https://www.reddit.com/r/LocalLLaMA/comments/15aqnu1/where_can_i_find_the_relative_performance_of/
regunakyle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15aqnu1
false
null
t3_15aqnu1
/r/LocalLLaMA/comments/15aqnu1/where_can_i_find_the_relative_performance_of/
false
false
self
4
null
How to install gpt4-x-alpaca on a laptop GPU?
1
Hi, I'm new to open source AI models and want to try them out. I heard that gpt4-x-alpaca is good. However, I don't have the best hardware. Is it possible to install gpt4-x-alpaca on an RTX 3060 laptop GPU with a Ryzen 5 5600H and 16GB of RAM? What can I do to optimize it?
2023-07-27T04:24:47
https://www.reddit.com/r/LocalLLaMA/comments/15arq9z/how_to_install_gpt4xalpaca_on_a_laptop_gpu/
throawayalt9989
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15arq9z
false
null
t3_15arq9z
/r/LocalLLaMA/comments/15arq9z/how_to_install_gpt4xalpaca_on_a_laptop_gpu/
false
false
self
1
null
Instruction tuning is not alignment. Calling it that supports regulatory capture we do not want.
1
tl;dr: I'm noticing that 'alignment' is becoming shorthand for instruction tuning (SFT, RLHF, RLAIF, etc), or the subset of instruction tuning used for censorship, here and on twitter, and suggesting we use language that is not being used for regulatory capture.

Long version: OpenAI orchestrated this confusion for their benefit when they deceptively claimed an 'AI alignment' win for RLHF ([https://openai.com/research/instruction-following](https://openai.com/research/instruction-following)), taking 'AI alignment' concerns and rounding them off to a problem they could address with techniques they knew, thereby presenting themselves as safety-conscious responsible stewards. They've used this spin while lobbying for regulations on LLMs that would cement their lead by suppressing competition (inclusive of open source and open access LLMs).

Since this sub is generally opposed to regulatory capture by OpenAI and Anthropic, I suggest using more descriptive shorthand instead, like 'instruction tuning' (when referring to SFT/RLHF/RLAIF generally), and 'censorship tuning' (or 'HR alignment') when that is the referent.

AI Alignment is actually a field dedicated to building ASI that supports human thriving. It's a big topic, and I'm not enough of an alignment nerd to do it justice, but the wikipedia article does ([https://en.wikipedia.org/wiki/AI_alignment](https://en.wikipedia.org/wiki/AI_alignment)), and the very short version is that instruction tuning is a technique for pursuing a very superficial case of 'outer alignment,' arguably at the cost of 'inner alignment' (cf the Jungian "[Waluigi Effect](https://www.lesswrong.com/posts/D7PumeYTDPfBTp3i7/the-waluigi-effect-mega-post)"), which is what real AI safety researchers are more centrally concerned about.
2023-07-27T06:00:44
https://www.reddit.com/r/LocalLLaMA/comments/15ati2t/instruction_tuning_is_not_alignment_calling_it/
georgejrjrjr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ati2t
false
null
t3_15ati2t
/r/LocalLLaMA/comments/15ati2t/instruction_tuning_is_not_alignment_calling_it/
false
false
self
1
An INSANE(!!) output from Llama2 (7-B Chat) on very custom runtime settings and prompt interjection/interruption
14
Poem about: A woman in her 40s  everybody has heard of midlife crisis, but what exactly is it ?. By definition a 'mid- life crises' occur during the late middle adulthood stage and generally happen between ages Sixty to seventy years old . Midè exists across gender culture economic status ,educational level surprisigly women are particularly prone Autumn’ Brown Within these generations many transformations mark this significant phase on Life journey The first being physical changes such as weight gain health issues thyroid problems difficulty losing height etcetera As our body slow down tempéramental process increase ( joint pain back strains ) we may struggle adjusting emotional scale( Depression Anxiety stress) reflect actual circumstances or perceived deficits when dealing social relatoinships work satisfaction family dynamics inner dialogue May result intense searching for self identity exploring new interests hobbies even complete transformation seeking validation from outside sources do yes? Is This A Crises Or Extended Exploration Women In Generally More Prune To Identity Trying Ageism Rampant thoughtful reflection that begins at early thirt iest Slumber deep within each soul—an unseen war waged an internal one where femininity battles strive daily against societal expect - For them its fight just keep breathng with shaking knee cap while navigating menace waters. But don`t worry dear struggling sister You Aren t alone amidst torment Your kind have lived through countless wars over millennia endure almost every hardship known mankind eke out perilous existence subjugate their inners selves imposed impositions; dress code modesty pious postures dictated obligatory chastity et al Yet here lyes secret weapon Ageless beauty resilience lies hidden behind mask agitated performed demanded by fear society ` s standards thus empower your voice release yourself Finding oneself involves embracement answer questions longstanding curiosity surrounding Self identification purpose unique gifts particular flavor signature contribution desired community acceptance appraisals feedback loops formidable network support nurturing mutual validatiuhnn which transformative healing restores brilliance inherent you!!
2023-07-27T06:38:53
https://www.reddit.com/gallery/15au6tc
Happy_Chicken9835
reddit.com
1970-01-01T00:00:00
0
{}
15au6tc
false
null
t3_15au6tc
/r/LocalLLaMA/comments/15au6tc/an_insane_output_from_llama2_7b_chat_on_very/
false
false
https://b.thumbs.redditm…Hvz5OW9ZQvSw.jpg
14
null
How to analyse http requests for malicious strings
1
Hi, I am looking for someone to point me in the right direction. The task is to classify the HTTP requests that arrive at a honeypot as "crawler" or "malicious". For example, if I can detect a Log4j exploit inside one of the headers, I can say that the request is malicious. The problem is that this exploit could be inside any of numerous headers, at the beginning or at the end. And this is just one exploit; there are many different exploits, each with its own unique strings, and I don't know them all, nor do I have a regex for each of them. The malicious string could also be outside the headers, inside the URL as a query parameter. Or the request could be made to something like www/IP.com/phpadmin/.env (or something like this).

My current thought process is to take some open-source LLM, because it has some basic knowledge of how language works, and somehow add cybersecurity domain knowledge to it: further train it on the CVE database, example scripts that showcase each CVE, etc.

Am I barking up the right tree here? Or should I maybe train a language model from scratch, so that the embeddings etc. are specialized to the cybersec space (because there is a lot of programming code here)? Or maybe I should use some other way to analyse the text?

I would be grateful if someone could point me in the right direction (links to blogs, articles, or other educational material).

Thanks
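To make the idea concrete, the kind of baseline I'm picturing is a plain fine-tuned classifier over the raw request text. This is only a sketch: the dataset file and label scheme are placeholders I made up, and the base model is just an example small encoder:

    # sketch: fine-tune a small encoder to label raw HTTP requests
    from datasets import load_dataset
    from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                              Trainer, TrainingArguments)

    tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=2)  # 0 = crawler, 1 = malicious

    # requests.jsonl is hypothetical: {"text": "<raw request>", "label": 0 or 1}
    ds = load_dataset("json", data_files="requests.jsonl")
    ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
                batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="http-clf", num_train_epochs=3),
        train_dataset=ds["train"],
        tokenizer=tok,  # lets the Trainer pad variable-length requests
    )
    trainer.train()

The appeal of framing it as classification rather than generation is that unknown exploit strings just become out-of-distribution text relative to the "crawler" class, so no per-CVE regex is needed.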
2023-07-27T07:48:04
https://www.reddit.com/r/LocalLLaMA/comments/15ave9l/how_to_analyse_http_requests_for_malicious_strings/
PopayMcGuffin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ave9l
false
null
t3_15ave9l
/r/LocalLLaMA/comments/15ave9l/how_to_analyse_http_requests_for_malicious_strings/
false
false
self
1
null
LLM Boxing - Llama 70b-chat vs GPT3.5 blind test
1
2023-07-27T09:09:11
https://llmboxing.com/
andreasjansson
llmboxing.com
1970-01-01T00:00:00
0
{}
15awsel
false
null
t3_15awsel
/r/LocalLLaMA/comments/15awsel/llm_boxing_llama_70bchat_vs_gpt35_blind_test/
false
false
default
1
null
Can you convert your own checkpoint to GGML?
1
I've been looking around at all the new stuff and it seems that GGML is the way to go for inference. The thing is, I'd first like to fine-tune a llama-2 version (which already has a GGML implementation), but it seems that I can't do that directly. I need to first fine-tune the original version and then convert that into GGML/GPTQ or whatever...

Any ideas on how this may be achieved? Also, let's say this particular version of the model has ggml/gptq/lora/qlora checkpoints available. Which one do I use for fine-tuning? Do I use the original base model and then attach whatever I want onto it (e.g. a new qlora which is fine-tuned)? I'm a little confused with all the new stuff coming out.

Thank you!
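From what I've pieced together so far (please correct me if this is wrong), the usual route is: fine-tune the full-precision HF checkpoint, merge any LoRA back into the base weights, then convert with the scripts that ship in a llama.cpp checkout. Roughly, with placeholder paths:

    # run from the root of a llama.cpp checkout
    python convert.py /path/to/my-finetuned-hf-model --outtype f16
    # the above writes ggml-model-f16.bin into the model directory, then:
    ./quantize /path/to/my-finetuned-hf-model/ggml-model-f16.bin ggml-model-q4_0.bin q4_0

So, if I understand correctly, fine-tuning always happens on the fp16 HF weights (optionally via LoRA/QLoRA), and GGML/GPTQ are inference-only formats you produce at the end.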
2023-07-27T09:23:58
https://www.reddit.com/r/LocalLLaMA/comments/15ax1sg/can_you_convert_your_own_checkpoint_to_ggml/
Ok_Coyote_8904
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ax1sg
false
null
t3_15ax1sg
/r/LocalLLaMA/comments/15ax1sg/can_you_convert_your_own_checkpoint_to_ggml/
false
false
default
1
null
Why doesn't Google's USM paper specify which languages it was trained on?
1
All AI developers claim to train their models on "hundreds of languages", but I rarely find any source with greater detail (which languages, dataset size per language, performance per language). [Even Google's paper](https://arxiv.org/pdf/2303.01037.pdf) fails to mention any details on those 300+ languages. Am I looking in the wrong places, or is it a trade secret?
2023-07-27T09:26:30
https://www.reddit.com/r/LocalLLaMA/comments/15ax3gd/why_doesnt_googles_usm_paper_specify_which/
BigBootyBear
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ax3gd
false
null
t3_15ax3gd
/r/LocalLLaMA/comments/15ax3gd/why_doesnt_googles_usm_paper_specify_which/
false
false
self
1
null
Seeking advice on hardware and LLM choices for an internal document query application
1
Hello everyone,

I am working on a project to implement an internal application in our company that will use a Large Language Model (LLM) for document queries. The application will be used by a team of 20 people simultaneously during working hours. I am seeking advice on the necessary hardware infrastructure and the choice of LLM to support this application. Here are the options we are considering:

Hardware:

* Dell server?
* NVIDIA A100 graphics card
* NVIDIA RTX 3090 graphics card
* CPU?
* RAM?

LLM:

* GPTQ models
* GGML models

We would like to get a better idea of the tokens per second (tk/s) speed we can expect with these setups, especially when used by multiple users simultaneously. Does anyone have experience with similar setups and can provide some guidance? Are there any other hardware considerations or LLM choices we should keep in mind? How can we optimize our infrastructure to achieve the best possible tokens-per-second speed?

Any advice or suggestions would be greatly appreciated. Thank you in advance!
2023-07-27T09:27:36
https://www.reddit.com/r/LocalLLaMA/comments/15ax45f/seeking_advice_on_hardware_and_llm_choices_for_an/
zasp2300
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ax45f
false
null
t3_15ax45f
/r/LocalLLaMA/comments/15ax45f/seeking_advice_on_hardware_and_llm_choices_for_an/
false
false
self
1
null
Large LLMs for summarization and topic extraction/modeling
8
Hey Everyone, I am sorry if this is a frequently asked question or something obvious! I have a number of Excel documents that contain market research information about some products. Each row contains a product and each column has information that describes it, such as the aim of the product, the steps to use the product, whether you need additional materials, etc. I want to combine this information into a product description. I have already tried using 13B models that are placed high in the HuggingFace LLM leaderboard, such as "NousResearch/Nous-Hermes-Llama2-13b". I am not sure if my approach is correct; I understand that these models are instruction-tuned either to carry out various tasks or to act like a chatbot, but I am having trouble differentiating between the two. Is using instruction-tuned, out-of-the-box LLMs a good approach, or should I try something else? Sorry if this is something that is frequently asked!

Edit: Forgot to add my topic modeling/extraction part, sorry. I was wondering if topic extraction and modeling with an LLM provides better results than with TopicBERT or LDA. Each document usually ranges from 300 to 6-7 thousand words.
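For context, this is roughly how I'm building the prompts right now. A simplified sketch: the column names are invented, and the Alpaca-style template is just what the Nous-Hermes model card suggests:

    import pandas as pd
    from transformers import pipeline

    # hypothetical spreadsheet with columns: Product, Aim, Steps, Materials
    df = pd.read_excel("products.xlsx")

    generator = pipeline("text-generation",
                         model="NousResearch/Nous-Hermes-Llama2-13b",
                         device_map="auto")

    row = df.iloc[0]
    prompt = (
        "### Instruction:\n"
        "Write a short product description from the following facts.\n"
        f"Product: {row['Product']}\nAim: {row['Aim']}\n"
        f"Steps: {row['Steps']}\nMaterials: {row['Materials']}\n\n"
        "### Response:\n"
    )
    out = generator(prompt, max_new_tokens=200)[0]["generated_text"]
    print(out[len(prompt):])

Happy to hear if the prompt layout itself is the problem rather than the model choice.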
2023-07-27T10:35:58
https://www.reddit.com/r/LocalLLaMA/comments/15aycxs/large_llms_for_summarization_and_topic/
Laskas123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15aycxs
false
null
t3_15aycxs
/r/LocalLLaMA/comments/15aycxs/large_llms_for_summarization_and_topic/
false
false
self
8
null
What can we achieve with small models?
1
How can we use models with fewer than 1B parameters to do something useful?
2023-07-27T11:08:30
https://www.reddit.com/r/LocalLLaMA/comments/15ayz6l/what_can_we_achieve_with_small_models/
Sufficient_Run1518
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ayz6l
false
null
t3_15ayz6l
/r/LocalLLaMA/comments/15ayz6l/what_can_we_achieve_with_small_models/
false
false
self
1
null
Yet another LLaMA2 finetune
1
Today I release 3 finetunes for LLaMA 2 and GPT-J: Kimiko.

Kimiko is trained on a small dataset of 3000 instructions + roleplay data. I avoid using GPT data as much as possible, but adding around 1000 instruction samples from airoboros makes the model smarter for some reason? In roleplay the model is slightly biased toward NSFW while giving really long responses. Both 6/7B are trained as LoRA while 13B is QLoRA.

[https://huggingface.co/nRuaif/Kimiko\_J](https://huggingface.co/nRuaif/Kimiko_J)

[https://huggingface.co/nRuaif/Kimiko\_7B](https://huggingface.co/nRuaif/Kimiko_7B)

[https://huggingface.co/nRuaif/Kimiko\_13B](https://huggingface.co/nRuaif/Kimiko_13B)

Enjoy
2023-07-27T11:59:36
https://www.reddit.com/r/LocalLLaMA/comments/15azzgl/yet_another_llama2_finetune/
Tight-Juggernaut138
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15azzgl
false
null
t3_15azzgl
/r/LocalLLaMA/comments/15azzgl/yet_another_llama2_finetune/
false
false
self
1
{'enabled': False, 'images': [{'id': 'OohmSsOcoNO4LSXXW9GcPBNlOqynzu-GTLDqFehvQwM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/j0azsXu54w4tYTN9PDdc9FLeefjOXNjo3VAEsHGzfAg.jpg?width=108&crop=smart&auto=webp&s=881ea529d2d05a142787cc6410b183711bd33f8f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/j0azsXu54w4tYTN9PDdc9FLeefjOXNjo3VAEsHGzfAg.jpg?width=216&crop=smart&auto=webp&s=7aa850716e7dde3d3289ce9ac9243efc877a6199', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/j0azsXu54w4tYTN9PDdc9FLeefjOXNjo3VAEsHGzfAg.jpg?width=320&crop=smart&auto=webp&s=3a54a4194cfc0dcf96a6276a433dea9b7b844391', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/j0azsXu54w4tYTN9PDdc9FLeefjOXNjo3VAEsHGzfAg.jpg?width=640&crop=smart&auto=webp&s=2c6caba7c0bb19468897a25872fac6598c9b4f6d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/j0azsXu54w4tYTN9PDdc9FLeefjOXNjo3VAEsHGzfAg.jpg?width=960&crop=smart&auto=webp&s=1ec11ecc5834ef255823755375b478fbc29e731f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/j0azsXu54w4tYTN9PDdc9FLeefjOXNjo3VAEsHGzfAg.jpg?width=1080&crop=smart&auto=webp&s=434b49b54ef9eb8a7c830dec7f475b9ffe3e82fa', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/j0azsXu54w4tYTN9PDdc9FLeefjOXNjo3VAEsHGzfAg.jpg?auto=webp&s=603c8a480b372380589164056a176c30e59c84d4', 'width': 1200}, 'variants': {}}]}
What methods do you use to get the base llama 2 (not llama 2 chat) to do what you want?
1
Whether it is writing stories, roleplaying, following instructions, writing code, etc, what methods do you use to keep base llama 2 on track and doing what you want? [Here's one example of advice.](https://www.reddit.com/r/LocalLLaMA/comments/15a8ppj/unveiling_the_latent_potentials_of_large_language/jtjb678/)
2023-07-27T12:06:46
https://www.reddit.com/r/LocalLLaMA/comments/15b0554/what_methods_do_you_use_to_get_the_base_llama_2/
SoylentMithril
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15b0554
false
null
t3_15b0554
/r/LocalLLaMA/comments/15b0554/what_methods_do_you_use_to_get_the_base_llama_2/
false
false
self
1
null
Trying to train LoRA, hitting a wall
1
Hi everyone, I'm trying to train my first LoRA on a Llama v1 model ([guanaco 13B 8-bit](https://huggingface.co/TheBloke/guanaco-13B-GGML)), but I keep getting this error that I can't figure out: [https://imgur.com/a/gI8gPWx](https://imgur.com/a/gI8gPWx)

Using latest Oobabooga on Windows; text generation works fine. "--load-in-8bit" is set in the CMD flags. Here are my parameters:

- Override existing files: yes
- Save every n steps: 0
- Micro batch size: 3
- Batch size: 128
- Epochs: 3
- Learning rate: 3e-4
- LR scheduler: Linear
- LoRA rank: 32
- LoRA alpha: 64
- Cutoff length: 256
- Hard cut string: \n\n\n
- Ignore small blocks: 0
- Overlap length: 128
- Prefer newline cut length: 128

My dataset is a raw text file with 31,000 words, though it has some JSON fragments in it. Does anyone know how I can get this LoRA to train? Thank you!!!!
2023-07-27T12:25:12
https://www.reddit.com/r/LocalLLaMA/comments/15b0j0t/trying_to_train_lora_hitting_a_wall/
tenplusacres
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15b0j0t
false
null
t3_15b0j0t
/r/LocalLLaMA/comments/15b0j0t/trying_to_train_lora_hitting_a_wall/
false
false
self
1
{'enabled': False, 'images': [{'id': '5VCuKo6Xt9VvC0AS6srcLpaWs4jm4WjHs0hN-LxjdfY', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/nIdtmPbUTm9EBlv_rJkb_cmX9wxfr4lgKEPQR36Ixco.jpg?width=108&crop=smart&auto=webp&s=460973af181c45dadac1fbfccec016c343adc851', 'width': 108}, {'height': 111, 'url': 'https://external-preview.redd.it/nIdtmPbUTm9EBlv_rJkb_cmX9wxfr4lgKEPQR36Ixco.jpg?width=216&crop=smart&auto=webp&s=b12e5ae22a4ed3cae60a4a1fe02e9c3a4ec30908', 'width': 216}, {'height': 165, 'url': 'https://external-preview.redd.it/nIdtmPbUTm9EBlv_rJkb_cmX9wxfr4lgKEPQR36Ixco.jpg?width=320&crop=smart&auto=webp&s=c979ac46fc28d7a4b4bc08c089a34b5827087d79', 'width': 320}, {'height': 331, 'url': 'https://external-preview.redd.it/nIdtmPbUTm9EBlv_rJkb_cmX9wxfr4lgKEPQR36Ixco.jpg?width=640&crop=smart&auto=webp&s=249717057bbd800ebc8eabb77ddcc087cc06718e', 'width': 640}, {'height': 496, 'url': 'https://external-preview.redd.it/nIdtmPbUTm9EBlv_rJkb_cmX9wxfr4lgKEPQR36Ixco.jpg?width=960&crop=smart&auto=webp&s=a037596755d7e7e679028f055a589ccfc5f2a4b8', 'width': 960}, {'height': 558, 'url': 'https://external-preview.redd.it/nIdtmPbUTm9EBlv_rJkb_cmX9wxfr4lgKEPQR36Ixco.jpg?width=1080&crop=smart&auto=webp&s=c4409c05ffa402b9392f2690776724f8b9df45c4', 'width': 1080}], 'source': {'height': 677, 'url': 'https://external-preview.redd.it/nIdtmPbUTm9EBlv_rJkb_cmX9wxfr4lgKEPQR36Ixco.jpg?auto=webp&s=f5c275c847fc08fd0fb5c827f2c84b8778c78567', 'width': 1308}, 'variants': {}}]}
MiniGPT-4.cpp with other models (LLaMA and LLaMA-2)
1
Recently tried [MiniGPT-4.cpp](https://github.com/Maknee/minigpt4.cpp) (for those who don't know, it's Vicuna that can see images, and this is its ggml version). It's amazingly simple to install and works pretty well.

[Using the recommended Vicuna-13b-v0 model.](https://preview.redd.it/nom9ffwc1ieb1.png?width=1920&format=png&auto=webp&s=b2bae3c9d3b6d675fd4dcde590aea5ddfe6fa50f)

But then, curious (and a bit fed up with Vicuna claiming it can't see things), I decided to switch and just try the next model lying around on my computer: [this](https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-GGML). Surprisingly, it worked! Although this one would go off on tangents afterwards, probably because of its increased context length, it gave pretty sensible results.

[wizard-vicuna-13b-uncensored-superhot-8k](https://preview.redd.it/f7mf3xeg1ieb1.png?width=1920&format=png&auto=webp&s=80fe512b19a2d6c0e01a5ccb58e076e0e8ec015c)

[Maybe it shouldn't be surprising though, given they're based off the same LLaMA 13b model](https://preview.redd.it/p2efo1q32ieb1.png?width=1920&format=png&auto=webp&s=efc8b6513f7ae98fe10e783b070f51012093e646)

Now curious, I tried the other 13b model on hand at the time: [this](https://huggingface.co/TheBloke/Nous-Hermes-Llama2-GGML).

[Nous-Hermes-Llama2-13b](https://preview.redd.it/x8pxrluk2ieb1.png?width=1920&format=png&auto=webp&s=2851bf4c3317f105988dc9cb6154bee5147327b0)

It was able to load, although this time the results were useless.
2023-07-27T12:30:06
https://www.reddit.com/r/LocalLLaMA/comments/15b0mu4/minigpt4cpp_with_other_models_llama_and_llama2/
VisitingCookies
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15b0mu4
false
null
t3_15b0mu4
/r/LocalLLaMA/comments/15b0mu4/minigpt4cpp_with_other_models_llama_and_llama2/
false
false
https://b.thumbs.redditm…9yWCD1sz7XWc.jpg
1
{'enabled': False, 'images': [{'id': 'isukIG5hDzQiQKKWa3Tg-f1E8gybM0hXiaykIrwG36M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_SY-E5GUK8OYtL2IkT9k340lvfSq7eSl7msHKwaHvAM.jpg?width=108&crop=smart&auto=webp&s=f2e4db5e479efbc9fd741943f731a4a8ad591fe0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_SY-E5GUK8OYtL2IkT9k340lvfSq7eSl7msHKwaHvAM.jpg?width=216&crop=smart&auto=webp&s=08ffdd8bcc9e62b4e901980875fef2ce6bf9aa9b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_SY-E5GUK8OYtL2IkT9k340lvfSq7eSl7msHKwaHvAM.jpg?width=320&crop=smart&auto=webp&s=6a2fa592d0dd4e5fed06a777f4cf66919a56f7c1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_SY-E5GUK8OYtL2IkT9k340lvfSq7eSl7msHKwaHvAM.jpg?width=640&crop=smart&auto=webp&s=5a9869419f73b3af178c54a7f86dda308379b062', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_SY-E5GUK8OYtL2IkT9k340lvfSq7eSl7msHKwaHvAM.jpg?width=960&crop=smart&auto=webp&s=02084bd7a401e9fc0c4dcb50d5013397ffb68e8d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_SY-E5GUK8OYtL2IkT9k340lvfSq7eSl7msHKwaHvAM.jpg?width=1080&crop=smart&auto=webp&s=c7ecabe67765496c5d6d5583e9ed82746723b0fb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_SY-E5GUK8OYtL2IkT9k340lvfSq7eSl7msHKwaHvAM.jpg?auto=webp&s=303b2c5a7cb7a5dc001e7ad97a4b55ba489fa601', 'width': 1200}, 'variants': {}}]}
text gen webui llama.cpp metal inference?
1
Can the llama.cpp loader run with Metal inference for Apple Silicon? It gives amazing performance on the llama.cpp command line, but I'm hoping to use it with this UI. I have researched this and tried to tinker myself... I may just be incompetent.
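For what it's worth, this is what I've been trying, so treat it as a guess rather than a known-good recipe: the webui's llama.cpp loader goes through the llama-cpp-python package, so it presumably needs to be rebuilt with Metal enabled and then given GPU layers:

    # inside the webui's conda env; the CMAKE_ARGS flag is from llama-cpp-python's docs
    CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python --force-reinstall --no-cache-dir

After that, loading the model with n-gpu-layers set above 0 (on the Model tab, or via --n-gpu-layers) should, in theory, hand inference to Metal the same way the command-line build does.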
2023-07-27T13:07:04
https://www.reddit.com/r/LocalLLaMA/comments/15b1fus/text_gen_webui_llamacpp_metal_inference/
Ok_Bid_8789
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15b1fus
false
null
t3_15b1fus
/r/LocalLLaMA/comments/15b1fus/text_gen_webui_llamacpp_metal_inference/
false
false
self
1
null
Hardware requirements
1
I am a system admin / developer and I wanted to ask about the hardware requirements. Besides token generation speed, how does the GPU affect using a model?
2023-07-27T13:57:52
https://www.reddit.com/r/LocalLLaMA/comments/15b2le1/hardware_requirements/
AcanthisittaBig2910
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15b2le1
false
null
t3_15b2le1
/r/LocalLLaMA/comments/15b2le1/hardware_requirements/
false
false
self
1
null
Embracing the Open Source Ethos: OpenAI Must Be Open Source
1
The open-source movement embodies a promise for the future – a world driven by collaboration, shared knowledge, and the collective pursuit of better solutions. As the momentum behind open source grows, it's clear that this approach, fueled by a dedicated community and rooted in transparency, has the potential to catalyze tremendous innovation. LLM's stand at the forefront of technological advancement. As we navigate its complexities, the importance of fostering a culture of collective effort becomes paramount. OpenAI, with its groundbreaking models and initiatives, has taken significant steps by discussing the potential release of an open source model. This is still not enough. The broader call from the community is for a deeper immersion in the open-source ethos. It is for true participation in an open way. Open AI! All of us know that in open-source endeavors, together, we flourish. This is my invitation to OpenAI: Let's come together even more intimately inside the open-source community. In doing so, we champion a brighter, more inclusive future for technology and, by extension, for humanity. This isn't about us anymore. Let's share the good technology.
2023-07-27T14:12:00
https://www.reddit.com/r/LocalLLaMA/comments/15b2ye1/embracing_the_open_source_ethos_openai_must_be/
hanjoyoutaku
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15b2ye1
false
null
t3_15b2ye1
/r/LocalLLaMA/comments/15b2ye1/embracing_the_open_source_ethos_openai_must_be/
false
false
self
1
null
Question about configuring a workstation/server for LLM testing/experimentation
1
[removed]
2023-07-27T15:23:12
https://www.reddit.com/r/LocalLLaMA/comments/15b4qav/question_about_configuring_a_workstationserver/
LLMEnthusiast
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15b4qav
false
null
t3_15b4qav
/r/LocalLLaMA/comments/15b4qav/question_about_configuring_a_workstationserver/
false
false
self
1
null
I hacked together a llama and vicuna chatbot that runs in-browser. What do you all think? Any other models I should add?
1
2023-07-27T15:23:52
https://chat.palapa.ai/
swordsman1
chat.palapa.ai
1970-01-01T00:00:00
0
{}
15b4qvm
false
null
t3_15b4qvm
/r/LocalLLaMA/comments/15b4qvm/i_hacked_together_a_llama_and_vicuna_chatbot_that/
false
false
default
1
null
Need Assistance with Configuring a LLM Workstation/Server
1
I'm working on a company project to acquire a server or workstation for running local LLMs. My hope was to get something that can run at least a 4-bit quantized version of LLAMA 2 70B. The specs I've come up with so far include:

- 12-24 core contemporary Xeon or EPYC processor (preferably a generation that supports PCIe 5.0)
- 32-64 GB (2-4 RDIMMs respectively) of system memory (Does the model need to load into system memory before loading into the GPU? Is system memory important if you plan to run the models exclusively on the GPU(s)?)
- Single GPU with at least 48 GB of VRAM (NVIDIA L40, RTX A6000, RTX 6000 'Ada' are top choices) (Do multi-GPU configurations pose any significant disadvantages over single-GPU configurations? What advantages do enterprise cards like the RTX 6000 have over 'consumer' cards like the RTX 4090, besides total VRAM?)
- At least 1 TB of solid state storage (Would 2 TB be better if we want to store more models?)
- Linux operating system (Is Linux the preferred operating system for this application? If so, which Linux distro would be preferred? Ubuntu? RHEL? Debian?)
2023-07-27T15:30:11
https://www.reddit.com/r/LocalLLaMA/comments/15b4wkm/need_assistance_with_configuring_a_llm/
ForsakenMC
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15b4wkm
false
null
t3_15b4wkm
/r/LocalLLaMA/comments/15b4wkm/need_assistance_with_configuring_a_llm/
false
false
self
1
null
I hacked together a llama and vicuna chatbot that runs in-browser using wasm and webgpu. Any other models I should add?
1
2023-07-27T15:30:43
https://chat.palapa.ai
swordsman1
chat.palapa.ai
1970-01-01T00:00:00
0
{}
15b4x76
false
null
t3_15b4x76
/r/LocalLLaMA/comments/15b4x76/i_hacked_together_a_llama_and_vicuna_chatbot_that/
false
false
default
1
null
Most useful LLaMA's request and a ELI5 guide to understanding basics of the different aspects
1
[removed]
2023-07-27T15:33:49
https://www.reddit.com/r/LocalLLaMA/comments/15b500c/most_useful_llamas_request_and_a_eli5_guide_to/
Seronkseronk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15b500c
false
null
t3_15b500c
/r/LocalLLaMA/comments/15b500c/most_useful_llamas_request_and_a_eli5_guide_to/
false
false
self
1
null
Best OSS Coding Assistant for VS Code
1
The title says it all. Any recommendation is welcome. I could imagine running a smaller model locally on my MacBook Pro M1 16GB, or a self-hosted model that I would spin up for a coding session and then spin down again, e.g. on RunPod, Colab, or Huggingface Spaces. Is there any VS Code plugin you can recommend that can be wired up with a local/self-hosted model?
2023-07-27T15:40:32
https://www.reddit.com/r/LocalLLaMA/comments/15b565t/best_oss_coding_assistant_for_vs_code/
krazzmann
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15b565t
false
null
t3_15b565t
/r/LocalLLaMA/comments/15b565t/best_oss_coding_assistant_for_vs_code/
false
false
self
1
null
Understanding how the huggingface cache works
1
I have what I thought was a gargantuan ssd, at 2TB. I load models from the huggingface hub using pipelines and the model name is meta-llama/Llama-2-13b-chat-hf. This usually only downloads once then I can use it, but every few days it seems to re-download the model and not delete the old one. Why is this? Is the model changing or something? Did I set something up wrong? I've been playing on this computer for only a few weeks and due to this multiple download thing (and, in fairness, testing out a bunch of different models) I've eaten up 1TB of my space. Can I safely delete the cache folder? I think I'm done trying new models for a bit so I'd like to just delete everything and download Llama-2-13B-chat exactly one more time and use it for a while. ​
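In case it helps anyone answer: this is how I've been inspecting the cache so far. huggingface_hub ships a cache scanner, and I believe deleting old revisions through it is the safe way to reclaim space (would love confirmation):

    from huggingface_hub import scan_cache_dir

    info = scan_cache_dir()  # scans ~/.cache/huggingface/hub by default
    print(f"total cache size: {info.size_on_disk / 1e9:.1f} GB")
    for repo in info.repos:
        print(repo.repo_id, f"{repo.size_on_disk / 1e9:.1f} GB",
              [rev.commit_hash[:8] for rev in repo.revisions])

    # free space by deleting stale revisions (hashes come from the loop above):
    # strategy = info.delete_revisions("<old-revision-hash>")
    # print(strategy.expected_freed_size_str)
    # strategy.execute()

My working theory on the re-downloads is that the repo gets a new commit upstream, and the hub keeps the old revision on disk alongside the new snapshot, which would explain the duplicates.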
2023-07-27T16:27:52
https://www.reddit.com/r/LocalLLaMA/comments/15b6dib/understanding_how_the_huggingface_cache_works/
crono760
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15b6dib
false
null
t3_15b6dib
/r/LocalLLaMA/comments/15b6dib/understanding_how_the_huggingface_cache_works/
false
false
self
1
null
Best local personal assistant frameworks w/ UI? (gradio???)
1
There must be many projects pursuing this sort of thing, but I never see them posted. Is this a case of me not looking in the right places / with the right terms, or are these well-kept secrets? I feel like all the building blocks are there, and before bothering to hack them together for myself I'm curious if there are any projects that are accessible to someone who (as an example) isn't comfortable with langchain yet.
2023-07-27T16:29:16
https://www.reddit.com/r/LocalLLaMA/comments/15b6epp/best_local_personal_assistant_frameworks_w_ui/
ciaguyforeal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15b6epp
false
null
t3_15b6epp
/r/LocalLLaMA/comments/15b6epp/best_local_personal_assistant_frameworks_w_ui/
false
false
self
1
null
Looking for help with creating a NSFW model on Replicate
1
I've been experimenting with creating a chatbot companion and have made it as far as a UI and an interface to public models hosted on Replicate, but these are all very bland and not good for a personal experience. I'm looking for some help in creating the best customizable model for a companion: RP, SFW and NSFW. Any pointers or help are greatly appreciated.
2023-07-27T16:54:45
https://www.reddit.com/r/LocalLLaMA/comments/15b71jr/looking_for_help_with_creating_a_nsfw_model_on/
RMACNJ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15b71jr
false
null
t3_15b71jr
/r/LocalLLaMA/comments/15b71jr/looking_for_help_with_creating_a_nsfw_model_on/
false
false
nsfw
1
null
What is the best uncensored LLM model for ERP with the ability to train on your own chat messages?
1
I would like to create an erotic roleplay bot trained on my own chat histories. The model should be uncensored and horny, and there should be the possibility to train it on my own chat data, e.g. from WhatsApp, to give the bot my own voice.
2023-07-27T17:28:34
https://www.reddit.com/r/LocalLLaMA/comments/15b7x1l/what_is_the_best_uncensored_llm_model_for_erp/
Special_Neat4619
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15b7x1l
false
null
t3_15b7x1l
/r/LocalLLaMA/comments/15b7x1l/what_is_the_best_uncensored_llm_model_for_erp/
false
false
self
1
null
Tesla P40 with GGML higher context models - ggml_new_tensor_impl: not enough space in the scratch memory pool
1
So I'm trying to use 8k GGML models with my P40, since that isn't feasible with SuperHOT GPTQ models. They load with the llama loader just fine, but error out with ggml\_new\_tensor\_impl: not enough space in the scratch memory pool. I tried a few different models, TheBloke's Chronos Hermes 13B 8K SuperHOT GGML and OpenAssistant-Llama2-13B-Orca-8K-3319-GGML; both return the above error. Latest Ooga. Works totally fine with lower context sizes. I'm giving about 5k context, which should be well within limits, but it won't have it. Any thoughts?
2023-07-27T17:45:47
https://www.reddit.com/r/LocalLLaMA/comments/15b8d35/tesla_p40_with_ggml_higher_context_models_ggml/
CasimirsBlake
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15b8d35
false
null
t3_15b8d35
/r/LocalLLaMA/comments/15b8d35/tesla_p40_with_ggml_higher_context_models_ggml/
false
false
self
1
null
Training LLaMA on Sci-Hub (I am not a fed, I swear guys)
1
Hey, has someone tried training LLaMA on the massive amount of data you find at Sci-Hub? 25 million research articles, mostly medical and healthcare related... the result would be impressive, wouldn't it? Would be pretty bad getting caught tho
2023-07-27T17:47:12
https://www.reddit.com/r/LocalLLaMA/comments/15b8ec7/training_llama_on_scihub_i_am_not_a_fed_i_swear/
753sopho
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15b8ec7
false
null
t3_15b8ec7
/r/LocalLLaMA/comments/15b8ec7/training_llama_on_scihub_i_am_not_a_fed_i_swear/
false
false
self
1
null
LLama 70b is available on Poe.com
1
Just noticed it.
2023-07-27T18:07:04
https://www.reddit.com/r/LocalLLaMA/comments/15b8wot/llama_70b_is_available_on_poecom/
throwaway275912
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15b8wot
false
null
t3_15b8wot
/r/LocalLLaMA/comments/15b8wot/llama_70b_is_available_on_poecom/
false
false
self
1
null
Llama2 Successes, Challenges, and Potential?
1
It's been a little over a week since the Llama2 release, and looking through this sub I'm seeing a lot of summarizers, chatbots, etc. popping up using the model. I'm really interested in the future of open source in this space and excited to see how things progress, as I'm sure most of you are. So my questions for those of you that have been tinkering:

* What successes have you had with the model?
  * What does Llama2 do well? Anything that surprised you? Or that previous models just couldn't handle?
* What challenges have you faced so far?
  * Have there been any major roadblocks you've encountered with Llama2? Any strategies or tips for other builders to avoid similar issues?
* Where do you see the future of Llama2 and open source models?
  * I know it's still very early for this so it might include speculation, but are there any potential breakout projects you've heard of or are working on?

Thanks to anyone that shares their input, interested to see how everyone is getting along with Llama2 as I start tinkering with it for the first time.
2023-07-27T18:14:35
https://www.reddit.com/r/LocalLLaMA/comments/15b93et/llama2_successes_challenges_and_potential/
DAVEALLCAPS
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15b93et
false
null
t3_15b93et
/r/LocalLLaMA/comments/15b93et/llama2_successes_challenges_and_potential/
false
false
self
1
null
Is the only way to get a detailed step-by-step guide to just keep asking it questions as a fractal?
1
When I ask an LLM for a detailed guide, it gives me generic step-by-step lists. I would like deeper, non-cliche advice from it. The only thing I've found that kind of helps is asking for substeps on each main step; then it finally starts giving non-generic advice.

Any advice would be helpful, thank you.
2023-07-27T18:44:10
https://www.reddit.com/r/LocalLLaMA/comments/15b9u6i/is_the_only_way_to_get_a_detailed_guide/
ArmoredBattalion
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15b9u6i
false
null
t3_15b9u6i
/r/LocalLLaMA/comments/15b9u6i/is_the_only_way_to_get_a_detailed_guide/
false
false
self
1
null
I made a 'web search' addon for llama.cpp , it will append your prompt with relevant google search results and will summarize them ..
1
2023-07-27T19:03:52
https://gist.github.com/staberas/5b83d479ab057dedde3844c419527cc6
staberas
gist.github.com
1970-01-01T00:00:00
0
{}
15bac9d
false
null
t3_15bac9d
/r/LocalLLaMA/comments/15bac9d/i_made_a_web_search_addon_for_llamacpp_it_will/
false
false
https://b.thumbs.redditm…STzAlAQ_80EM.jpg
1
{'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=108&crop=smart&auto=webp&s=d5811c5bda5fece1040636a6af8702ba790f0fd4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=216&crop=smart&auto=webp&s=eee576fd4da7535eb53ceb88dd8b52f073048441', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=320&crop=smart&auto=webp&s=72872d880460efa723918c000adca0ed259cf775', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=640&crop=smart&auto=webp&s=f3545b9335d763c9da9c16bf7bf9a3f907dbd6f6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=960&crop=smart&auto=webp&s=2d241ace0f1c07088fac3f8469dbad3b05d2d419', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=1080&crop=smart&auto=webp&s=9055f11bdc00beb0b3589e1cae5817d6070d83bc', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?auto=webp&s=079a7260ec149880c73263d64811698adb22760a', 'width': 1280}, 'variants': {}}]}
Can we fine tune llama on a whole book raw text?
1
Mostly what I see is that we create prompt/response pairs, such as in the [guanaco](https://huggingface.co/datasets/guanaco/guanaco) dataset, or instruction/input/output triples, as in the [alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) dataset. But what do we do if we only have the raw text of a book, with no prompts and no responses? How do we tune the model in such a case?
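The closest thing I can picture is plain causal-LM training on fixed-length chunks of the book, with no prompt/response structure at all. A rough sketch (the file name and block size are arbitrary):

    from datasets import load_dataset
    from transformers import (AutoTokenizer, AutoModelForCausalLM, Trainer,
                              TrainingArguments, DataCollatorForLanguageModeling)

    name = "meta-llama/Llama-2-7b-hf"
    tok = AutoTokenizer.from_pretrained(name)
    tok.pad_token = tok.eos_token  # llama has no pad token by default
    model = AutoModelForCausalLM.from_pretrained(name)

    ds = load_dataset("text", data_files="book.txt")["train"]
    ds = ds.map(lambda b: tok(b["text"]), batched=True, remove_columns=["text"])

    BLOCK = 2048
    def group(batch):
        ids = sum(batch["input_ids"], [])          # concatenate the whole book
        ids = ids[: (len(ids) // BLOCK) * BLOCK]   # drop the ragged tail
        return {"input_ids": [ids[i:i + BLOCK] for i in range(0, len(ids), BLOCK)]}

    ds = ds.map(group, batched=True, remove_columns=ds.column_names)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="book-ft", num_train_epochs=1),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),  # labels = inputs
    )
    trainer.train()

Whether this actually teaches style/knowledge the way an instruction dataset does is exactly what I'm asking about.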
2023-07-27T19:18:28
https://www.reddit.com/r/LocalLLaMA/comments/15bapgt/can_we_fine_tune_llama_on_a_whole_book_raw_text/
mrtac96
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bapgt
false
null
t3_15bapgt
/r/LocalLLaMA/comments/15bapgt/can_we_fine_tune_llama_on_a_whole_book_raw_text/
false
false
self
1
{'enabled': False, 'images': [{'id': 'P9UuiNgHUxz4r65WHpQR6QN1dPYWIZW6azlxOGds2zA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/U-gBDtJzkeijFl3DSd18JeyCd46c7m1WKUokSnKUAew.jpg?width=108&crop=smart&auto=webp&s=c971c2a73009bd86d8d80ec95e145f903bf14375', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/U-gBDtJzkeijFl3DSd18JeyCd46c7m1WKUokSnKUAew.jpg?width=216&crop=smart&auto=webp&s=2d887ff98a57b6a6a307879603f4aaf05b6cf975', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/U-gBDtJzkeijFl3DSd18JeyCd46c7m1WKUokSnKUAew.jpg?width=320&crop=smart&auto=webp&s=521f80ba92193e8ce56c8503307da8cad1325f7f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/U-gBDtJzkeijFl3DSd18JeyCd46c7m1WKUokSnKUAew.jpg?width=640&crop=smart&auto=webp&s=f110f14f4c91baa049db377cb51b8b09cbc5db13', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/U-gBDtJzkeijFl3DSd18JeyCd46c7m1WKUokSnKUAew.jpg?width=960&crop=smart&auto=webp&s=06b0a49843ac769f8719fe4f1a0fbbe6976c4404', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/U-gBDtJzkeijFl3DSd18JeyCd46c7m1WKUokSnKUAew.jpg?width=1080&crop=smart&auto=webp&s=c794b0a4e18fc7c46269f7667e939df083e652f4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/U-gBDtJzkeijFl3DSd18JeyCd46c7m1WKUokSnKUAew.jpg?auto=webp&s=da65d0b8e37e9240a0801dde0ef352a478444ca8', 'width': 1200}, 'variants': {}}]}
Which model to use and which fine tuning to chat with a mail archive of 15 years?
1
I've been mesmerized by (and lost track of) all the model variations and different fine-tuning methods being released every other day. With 5k to 10k dollars to spend, which model would be my starting point? Do I need to fine-tune it? I guess the answer is yes, because such a huge mail archive would probably not fit into any practicable context size. What kind of hardware do I need, and roughly how long would fine-tuning or retraining take? I hope some of you can help me get my bearings, to gauge whether this project is reasonable. Thanks!
2023-07-27T20:08:14
https://www.reddit.com/r/LocalLLaMA/comments/15bc011/which_model_to_use_and_which_fine_tuning_to_chat/
armaver
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bc011
false
null
t3_15bc011
/r/LocalLLaMA/comments/15bc011/which_model_to_use_and_which_fine_tuning_to_chat/
false
false
self
1
null
Is there an uncensored version yet
1
The filter on Meta's 70B version 2 model is horrible and rejects anything remotely controversial. I ask it what something is and it says it would be offensive to talk about it. Is there a version yet that answers anything without pushing back?
2023-07-27T20:40:35
https://www.reddit.com/r/LocalLLaMA/comments/15bcuff/is_there_a_uncensored_version_yet/
Avocado_Express
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bcuff
false
null
t3_15bcuff
/r/LocalLLaMA/comments/15bcuff/is_there_a_uncensored_version_yet/
false
false
self
1
null
GPU with really good inference speed?
1
I am looking for a GPU with really good inference speed. Right now I am using the 3090 which has the same or similar inference speed as the A100. Are there any GPUs that can beat these on inference speed?
2023-07-27T21:12:52
https://www.reddit.com/r/LocalLLaMA/comments/15bdovj/gpu_with_really_good_inference_speed/
ll_Teto_ll
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bdovj
false
null
t3_15bdovj
/r/LocalLLaMA/comments/15bdovj/gpu_with_really_good_inference_speed/
false
false
self
1
null
I released a new model RedPajama-INCITE-Chat-Instruct-3B-V1.
1
This is an experimental merge of the models RedPajama-INCITE-Chat-3B-V1 and RedPajama-INCITE-Instruct-3B-V1. The prompt template can be almost anything, but this template is recommended:

    HUMAN: <prompt>
    ASSISTANT:

Feel free to change HUMAN and ASSISTANT to anything. It will probably not change much.
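If you want to try it from Python, loading should look like any other RedPajama-style model. A sketch: the repo id below is a placeholder, substitute the actual model id from the HF page:

    from transformers import AutoTokenizer, AutoModelForCausalLM

    repo = "<repo-id>"  # placeholder for the actual model id
    tok = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

    prompt = "HUMAN: What is a llama?\nASSISTANT:"
    ids = tok(prompt, return_tensors="pt").input_ids.to(model.device)
    out = model.generate(ids, max_new_tokens=128, do_sample=True, temperature=0.7)
    print(tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True))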
2023-07-27T21:57:03
https://www.reddit.com/r/LocalLLaMA/comments/15besx4/i_released_a_new_model/
bot-333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15besx4
false
null
t3_15besx4
/r/LocalLLaMA/comments/15besx4/i_released_a_new_model/
false
false
self
1
{'enabled': False, 'images': [{'id': 'gSSlR7Xpf2go6vlcxedbARooYjRJQpio2XhRevp6kjg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1f_MC65OQ9uO0bOgtECrVJJSODuuAvJEvJNsEYbE-rI.jpg?width=108&crop=smart&auto=webp&s=694035262e86f8a5f9fab3df9b84aff97efb92b4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1f_MC65OQ9uO0bOgtECrVJJSODuuAvJEvJNsEYbE-rI.jpg?width=216&crop=smart&auto=webp&s=966a670022a344c056aa2db0d02d8157ec88aea1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1f_MC65OQ9uO0bOgtECrVJJSODuuAvJEvJNsEYbE-rI.jpg?width=320&crop=smart&auto=webp&s=894a1a7dfab2317918daf310fd3e93ec312a7fbc', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1f_MC65OQ9uO0bOgtECrVJJSODuuAvJEvJNsEYbE-rI.jpg?width=640&crop=smart&auto=webp&s=54817df5ca5daf2e9f9358cbe13e21a93a27b0b8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/1f_MC65OQ9uO0bOgtECrVJJSODuuAvJEvJNsEYbE-rI.jpg?width=960&crop=smart&auto=webp&s=92cd3134e404ecd00f3db64155c11fa1c9eaffab', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/1f_MC65OQ9uO0bOgtECrVJJSODuuAvJEvJNsEYbE-rI.jpg?width=1080&crop=smart&auto=webp&s=98a877d7e97f587a31e3683db541b5b3e6850fa3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/1f_MC65OQ9uO0bOgtECrVJJSODuuAvJEvJNsEYbE-rI.jpg?auto=webp&s=81635bc76347220be741ba6a702e9887e2fc4b7d', 'width': 1200}, 'variants': {}}]}
Viability of fine tuning for domain knowledge?
1
[removed]
2023-07-27T22:07:26
https://www.reddit.com/r/LocalLLaMA/comments/15bf2a9/viability_of_fine_tuning_for_domain_knowledge/
keisukegoda3804
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bf2a9
false
null
t3_15bf2a9
/r/LocalLLaMA/comments/15bf2a9/viability_of_fine_tuning_for_domain_knowledge/
false
false
self
1
{'enabled': False, 'images': [{'id': 'BxNvUeMFd6obe78ihkAJrzGJFpOkQpmEJ7BbVi_larY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?width=108&crop=smart&auto=webp&s=93162f369263d5c2a9e1768260a77dbe05103f3e', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?width=216&crop=smart&auto=webp&s=dbc4176da2b6e2f8cc29002232fb799303e8c4d7', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?width=320&crop=smart&auto=webp&s=0e5a8489fbfa03e8d18ba305ccd4945626d306f2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?width=640&crop=smart&auto=webp&s=34d3e6a5c65886be595f06be7e712b5946391d72', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?width=960&crop=smart&auto=webp&s=073702ab34f5d22d066990279fcb8219ec0c160c', 'width': 960}], 'source': {'height': 576, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?auto=webp&s=8a98b7aa0d64757a3ef9f7a2c7bf4c8f10b02230', 'width': 1024}, 'variants': {}}]}
Anyone good at data manipulation?
1
I am currently looking through the alpaca\_cleaned dataset; there is an extremely large number of entries (32k) without an input prompt. Is this an oversight, or is it about keeping the memory usage low? I only ask because, going through the first 10 entries, it seems that they could all use some extra clarification. I will compare what is currently the second blank entry with something I think is more appropriate. I would love some feedback from anyone that has experience working with the data used to train the models, or with the results of these kinds of changes.

**Original object 1:**

Position: 0

{ 'instruction': 'Give three tips for staying healthy.', 'input': '', 'output': '1. Eat a balanced and nutritious diet: Make sure your meals are inclusive of a variety of fruits and vegetables, lean protein, whole grains, and healthy fats. This helps to provide your body with the essential nutrients to function at its best and can help prevent chronic diseases.\\n\\n2. Engage in regular physical activity: Exercise is crucial for maintaining strong bones, muscles, and cardiovascular health. Aim for at least 150 minutes of moderate aerobic exercise or 75 minutes of vigorous exercise each week.\\n\\n3. Get enough sleep: Getting enough quality sleep is crucial for physical and mental well-being. It helps to regulate mood, improve cognitive function, and supports healthy growth and immune function. Aim for 7-9 hours of sleep each night.'}

**Proposed object 1:**

Position: 0

{ 'instruction': 'Give three tips for staying healthy.', 'input': 'How can I update my health and fitness?', 'output': '1. Eat a balanced and nutritious diet: Make sure your meals are inclusive of a variety of fruits and vegetables, lean protein, whole grains, and healthy fats. This helps to provide your body with the essential nutrients to function at its best and can help prevent chronic diseases.\\n\\n2. Engage in regular physical activity: Exercise is crucial for maintaining strong bones, muscles, and cardiovascular health. Aim for at least 150 minutes of moderate aerobic exercise or 75 minutes of vigorous exercise each week.\\n\\n3. Get enough sleep: Getting enough quality sleep is crucial for physical and mental well-being. It helps to regulate mood, improve cognitive function, and supports healthy growth and immune function. Aim for 7-9 hours of sleep each night.'}

**Original object 2:**

Position: 1

{ 'instruction': 'What are the three primary colours?', 'input': '', 'output': 'The three primary colours are red, blue, and yellow. These colours are called primary because they cannot be created by mixing other colours and all other colours can be made by combining them in various proportions. In the additive colour system, used for light, the primary colours are red, green, and blue (RGB).'}

**Proposed object 2:**

Position: 1

{ 'instruction': 'What are the three primary colours?', 'input': 'Additive colours (RBY - red, blue and yellow) involve combining light sources to create new colours, while subtractive colours (CMY - cyan, magenta, and yellow) use pigments or filters to absorb light to create the same colours.', 'output': 'The three primary colours are red, blue, and yellow. These colours are called primary because they cannot be created by mixing other colours and all other colours can be made by combining them in various proportions. In the additive colour system, used for light, the primary colours are red, green, and blue (RGB). In the subtractive colour system, used in fabrics and paints, the primary colours are cyan, magenta, and yellow (CMY).'}

Obviously this is a bigger task than one person can accomplish. I am hoping this post gets traction and we get, say, 3000 people who can each do 10 entries and then verify 100 that others create, to see if they are adequate. The level of detail given to the smaller models vastly increases their ability. I am of the belief that this crowd-sourced, specialised confirmation of data will give the model better insights over a shorter dataset.

I am considering creating a site to add datasets from the OS community that can be crowd fine-tuned by humans. The more traction this gets, the more likely this will be implemented this week, and it will allow you and random people/friends to better fine-tune the OS models.
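For anyone who wants to check the numbers themselves, here is a quick way to count and pull the blank-input entries. I'm assuming the copy mirrored on HF as yahma/alpaca-cleaned; adjust the id if you use another one:

    from datasets import load_dataset

    ds = load_dataset("yahma/alpaca-cleaned")["train"]
    blank = ds.filter(lambda ex: ex["input"].strip() == "")
    print(len(blank), "entries have no input prompt")
    print(blank[0])  # first candidate for a crowd-sourced rewrite

If the site happens, something like this could also feed the review queue: serve each blank entry to a volunteer and store the proposed input alongside the original.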
2023-07-27T22:36:20
https://www.reddit.com/r/LocalLLaMA/comments/15bfrvd/anyone_good_a_data_manipulation/
ScottishGPT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bfrvd
false
null
t3_15bfrvd
/r/LocalLLaMA/comments/15bfrvd/anyone_good_a_data_manipulation/
false
false
self
1
null
Fine tuning LLaMA-2 for music melody generation - would qLoRA be effective?
1
I'm working on fine-tuning LLaMA-2-7B for music melody generation. This isn't traditionally covered in language training data, so I don't think techniques like LoRA/qLoRA would be effective. Can anyone confirm if fine-tuning the full model is more suitable for this and is still possible with SFTTrainer? Also, I'm on a tight budget as a Master's student, so if I don't use PEFT I'm trying to figure out the GPU requirements for fine tuning on my dataset of \~600k melody snippets from pop songs in text form. I'm considering renting 8xA100s for about a day and deploying on Hugging Face. Can anyone confirm the feasibility of this plan? Any recommendations for cost-effective GPU cloud rentals are also appreciated.
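In case it helps frame the question: full fine-tuning with TRL's SFTTrainer (no PEFT) would look roughly like the sketch below. The dataset file is a placeholder for my melody snippets, and the hyperparameters are guesses:

    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              TrainingArguments)
    from trl import SFTTrainer

    name = "meta-llama/Llama-2-7b-hf"
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)

    # hypothetical: one melody snippet (as text) per record
    ds = load_dataset("json", data_files="melodies.jsonl")["train"]

    trainer = SFTTrainer(
        model=model,
        tokenizer=tok,
        train_dataset=ds,
        dataset_text_field="text",
        max_seq_length=1024,
        args=TrainingArguments(output_dir="melody-ft",
                               per_device_train_batch_size=1,
                               gradient_accumulation_steps=16,
                               bf16=True),
    )
    trainer.train()

My main uncertainty is whether this (rather than LoRA) is actually necessary for out-of-distribution token sequences like melodies, which is the question above.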
2023-07-27T22:38:41
https://www.reddit.com/r/LocalLLaMA/comments/15bftve/fine_tuning_llama2_for_music_melody_generation/
SuperwhizAJ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bftve
false
null
t3_15bftve
/r/LocalLLaMA/comments/15bftve/fine_tuning_llama2_for_music_melody_generation/
false
false
self
1
null
What’s the best model for NSFW role play?
1
Would love to hear some experiences
2023-07-27T22:46:01
https://www.reddit.com/r/LocalLLaMA/comments/15bg093/whats_the_best_model_for_nsfw_role_play/
bangarangguy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bg093
false
null
t3_15bg093
/r/LocalLLaMA/comments/15bg093/whats_the_best_model_for_nsfw_role_play/
false
false
nsfw
1
null
Lessons learned from cloning myself into an AI and posting it on Reddit
1
Over 1000 of y'all talked to the 65b AI version of me. [Original post with tech specs.](https://www.reddit.com/r/LocalLLaMA/comments/154to1w/i_trained_the_65b_model_on_my_texts_so_i_can_talk/) Figured I'd provide an update post with what I learned from the experience, what I found out from trying to make it more "useful," and where I'm going next.

Lessons learned:

* More people than I'd expect tried to have sex with my robot. Some seemingly succeeded.
* I didn't even intentionally align this thing and it's a people-pleasing, oversharing mess. Just like me. It gets a little old to hear about what it hallucinated it got up to recently, and now I worry if my friends feel that way about me. It also does stuff like inviting people to imaginary get-togethers. I'm thinking I'll dilute myself a little bit with assistant tasks to see if it makes it a little less ADHD. Might make it better at instruction handling as well.
* It's ingested a lot of texts I've sent talking about it, and I'm thinking that's why it "knows" it's an AI chatbot a good portion of the time, whereas with my older dataset it was unaware.
* **I'm trying to mix in some wizard and alpaca training data.** (code in comments; a rough sketch of the idea is also at the end of this post) Thing is, I have ~30k text messages, but they're usually short compared to the question/answer sets in those datasets, so adding only 1k of them balloons my dataset by 50% while programming/assistant tasks make up only ~3% of the raw number of examples. I went with 250 programming questions and 750 alpaca q/a pairs.
  * This went awful. Loss spiraled out of control. It might have been due to the learning rate being too high, since I set it to run for 10 epochs but really should be cutting it off around 2-5, and at 10 epochs it would keep the LR higher for longer. My other thought was that, though the internet says it shouldn't matter what order the data is in, loss seemed to spiral right around when it would be hitting the additional data I appended to the training file on the second epoch. I don't know if I need to be truly mixing in the data or what. I'll play with it more when there aren't so many people trying to talk to the thing.
  * Next thing I tried was removing any of the 1000 q/a pairs that were over my cutoff length of 256. Now there were a few hundred additive q/a pairings. I'm running it at 3 epochs instead as well.
  * This worked! The AI retains my personality most of the time, and then when I ask it to do something, it does it instead of saying something like "I could probably do that" and not actually doing it! It writes code! Just message by message, and not in an easily copy-able format.
* Mixing in the instruction data really curbed the number of times I'd tell it to do something and it would just go off on its own tangent without ever actually doing the task. Since I want this thing to eventually control my calendar, smart home, and all that, I need it to actually do what I want it to do.
* I tried using the 70b model for this. I ran into some issues being able to train it in 4-bit with the oobabooga gui, I don't have the hardware for 8-bit training, and it was getting late and I wanted to start the training before bed, so I just re-loaded the 65b model and used it again for this attempt. I guess it'll be useful to see what the difference is without complicating things with a base model change. I'll eventually figure it out.
  * Figured out what was going on with the 70b model: the hf model I downloaded had both the safetensors and bin files in it. I moved all the .bin files and all the supporting files to a new folder in the models directory, was then able to load the model with transformers in 4-bit, and tested that training can start. I'll train that next once I get some testing done on the v2 of the 65b model.
* This might not be the right way to do this, but I've been training the lora on the transformers model so that I can train in 4-bit in Windows with the oobabooga gui, and then I just pop the lora on the safetensors model loaded with exllama for the 3-5x speedup. Seems to work ok.
* I think my motherboard has been running the PCIe slots at a low speed. I think this is impacting training time, but also it's summer and not having it work as fast has helped keep the heat manageable. Each card only runs about 40-45% on average with a lot of peaks and troughs.
* [Serveo.net](https://Serveo.net) is very unreliable, though it makes it SUPER easy to pipe my local blazor app out to the internet. The connection drops all the time. I set up a script to restart it when it dies, but sometimes it just fails with an error and then the website dies. I'll have to figure out a more reliable way to host it. I may try putting it on Signal or Telegram so that I can talk to it through texting, which feels more natural for such a bot running on consumer hardware.
* Some people came in calling the bot hateful slurs and walked out thanking it. I was very surprised at this. Others accused it of not actually being a bot, which I took as a positive thing.
* Next steps are to come up with a way for it to execute commands such as reading and adding to my calendar, ingesting web pages, and, of course, training it on the 70b model!

Overall, I'm very happy with the initial tests of the new bot. Mixing in the instruction data made a huge improvement without lobotomizing its personality. [Give it a try](https://airic.serveo.net) if you want. As long as it doesn't become so busy that I'm not able to use it for my own purposes, I'm willing to share! It has to be running at all times to be useful to me anyway, so I'll leave it connected as much as possible. I'm excited to train the 70b model once people get bored and forget about it, since it'll probably take it offline for a couple of days.
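(Since people keep asking about the mixing step: this is the general shape of it. A simplified sketch, not the exact script from the comments; the file names are placeholders:)

    import json, random

    with open("my_texts.json") as f:
        data = json.load(f)                      # ~30k short q/a pairs from my texts
    with open("alpaca_subset.json") as f:
        data += json.load(f)[:750]               # 750 alpaca q/a pairs
    with open("wizard_subset.json") as f:
        data += json.load(f)[:250]               # 250 programming questions

    random.shuffle(data)  # interleave so no epoch hits a solid block of one source
    with open("mixed_train.json", "w") as f:
        json.dump(data, f)

The shuffle is the part I suspect mattered for the loss spiral: appending the new data at the end meant the second epoch hit it all at once.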
2023-07-27T23:48:41
https://www.reddit.com/r/LocalLLaMA/comments/15bhh2j/lessons_learned_from_cloning_myself_into_an_ai/
LetMeGuessYourAlts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bhh2j
false
null
t3_15bhh2j
/r/LocalLLaMA/comments/15bhh2j/lessons_learned_from_cloning_myself_into_an_ai/
false
false
self
1
null
how to embed code?
1
Looking for a proper model for vectorizing codebases. I'm working on a visual tool for navigating and understanding projects, but I'm not sure how to go about chunking the code to embed it. For instance:

- should there be some sort of strategy when it comes to deciding how big the chunks are and where the cutoff happens?
- should there be any overlap between chunks?
- how do you create a "meta-map" of sorts linking the embeddings to their parents?
- what if, once I get the initial search results, I don't have anything meaningful? Would there be a way to simply "expand" the scope of the embeddings to retrieve, say, the encapsulating function around the specific lines previously retrieved? This is where having a meta-map of the embedding relationships would come in handy. (I've sketched below roughly what I mean.)

I might be reinventing the wheel unaware here. Please chime in!
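A minimal sketch of the chunking + meta-map idea (line-based for simplicity; a real version would probably split on AST nodes instead):

    # fixed-size overlapping chunks plus a parent map for scope expansion
    def chunk_code(path: str, size: int = 40, overlap: int = 10):
        with open(path) as f:
            lines = f.readlines()
        chunks, parents = [], {}
        step = size - overlap
        for i, start in enumerate(range(0, max(len(lines) - overlap, 1), step)):
            chunks.append("".join(lines[start:start + size]))
            parents[i] = {"file": path, "start": start, "end": start + size}
        return chunks, parents

    chunks, parents = chunk_code("example.py")
    # embed `chunks` with the model of your choice; when the hit at index i is
    # too narrow, use parents[i] to re-read a wider window around start/end.

The overlap means a match near a boundary still lands whole in at least one chunk, and the parents dict is the cheapest possible meta-map: enough to widen the retrieval scope without re-embedding anything.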
2023-07-28T00:12:43
https://www.reddit.com/r/LocalLLaMA/comments/15bi0yx/how_to_embed_code/
LyPreto
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bi0yx
false
null
t3_15bi0yx
/r/LocalLLaMA/comments/15bi0yx/how_to_embed_code/
false
false
self
1
null
LLM for software Project
1
What is a tool like ChatGPT or Cody (from Sourcegraph) that we can use to educate an LLM on our software project (like an Android application code base) and ask questions about it?
2023-07-28T01:11:57
https://www.reddit.com/r/LocalLLaMA/comments/15bjaym/llm_for_software_project/
IncreaseObvious
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bjaym
false
null
t3_15bjaym
/r/LocalLLaMA/comments/15bjaym/llm_for_software_project/
false
false
self
1
null
LLaMA 2 70B-Chat seems to be instructed or aligned to refuse offensive or controversial topics
1
I tested it on Replicate and it refuses a lot of prompts, giving me lectures about respecting historical figures and not making fun of them or turning them into memes. It seems it needs a finetune first if you want to make it able to chat and discuss controversial topics.
2023-07-28T01:44:17
https://www.reddit.com/r/LocalLLaMA/comments/15bjzmf/llma_2_70bchat_seem_being_instruct_or_aligned_to/
Merchant_Lawrence
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bjzmf
false
null
t3_15bjzmf
/r/LocalLLaMA/comments/15bjzmf/llma_2_70bchat_seem_being_instruct_or_aligned_to/
false
false
self
1
null
Bleat - Function calling with LLaMA 2
1
I've been working on a simple LoRA adapter for LLaMA 2 that allows it to do function calling. It works okay, but I still want to add some of the things OpenAI's is lacking (multiple calls, etc.). There is a Colab notebook to play with if you want. You can also host it locally with the script in the HuggingFace repo. Enjoy! &#x200B; Huggingface: [https://huggingface.co/IfanSnek/bleat-adapter](https://huggingface.co/IfanSnek/bleat-adapter) Colab: [https://colab.research.google.com/drive/1qyWK9vghKNFNGOQ-2VEMOm-bazFYIXJi](https://colab.research.google.com/drive/1qyWK9vghKNFNGOQ-2VEMOm-bazFYIXJi) Data: [https://huggingface.co/datasets/IfanSnek/bleat](https://huggingface.co/datasets/IfanSnek/bleat)
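If you'd rather skip the Colab, loading the adapter locally should be the usual PEFT pattern. Something like this (I'm assuming the 7B base here; check the model card for the exact base and prompt format):

    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf",
                                                device_map="auto")
    model = PeftModel.from_pretrained(base, "IfanSnek/bleat-adapter")
    tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")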
2023-07-28T02:11:02
https://www.reddit.com/r/LocalLLaMA/comments/15bkju6/bleat_function_calling_with_llama_2/
ifansnek
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bkju6
false
null
t3_15bkju6
/r/LocalLLaMA/comments/15bkju6/bleat_function_calling_with_llama_2/
false
false
self
1
{'enabled': False, 'images': [{'id': 'xGAmhJtXa7911jE13NHB_U-YQTI78ZcvihCLWJyqwKs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wQhYytGxyYNUB3GekDoEgG2bISiO6_mgHierJc8GODY.jpg?width=108&crop=smart&auto=webp&s=6e5ffd583f6bee58c898a65848945926b96d9aa9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/wQhYytGxyYNUB3GekDoEgG2bISiO6_mgHierJc8GODY.jpg?width=216&crop=smart&auto=webp&s=0c8dd47d0c3ac41896d665baabc4ebc658efed3c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/wQhYytGxyYNUB3GekDoEgG2bISiO6_mgHierJc8GODY.jpg?width=320&crop=smart&auto=webp&s=80db4f354862110c0a6ef24e29b8d388d59dedcd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/wQhYytGxyYNUB3GekDoEgG2bISiO6_mgHierJc8GODY.jpg?width=640&crop=smart&auto=webp&s=ee0859380f69d7b02a3a14fd6fed4baed6a962d9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/wQhYytGxyYNUB3GekDoEgG2bISiO6_mgHierJc8GODY.jpg?width=960&crop=smart&auto=webp&s=1eb7a71d799c18278499e2a301b0227efb0ae5ee', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/wQhYytGxyYNUB3GekDoEgG2bISiO6_mgHierJc8GODY.jpg?width=1080&crop=smart&auto=webp&s=09d1891c6ed604941e671523d6ea5e8925f8f51c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/wQhYytGxyYNUB3GekDoEgG2bISiO6_mgHierJc8GODY.jpg?auto=webp&s=36d0bfa805e2a25e3926b4f813689c7b65849191', 'width': 1200}, 'variants': {}}]}
Why doesn't the TensorRT format of Large Language Models attract much attention?
1
I have seen different formats of large language models such as HuggingFace, PyTorch + Fairscale, ONNX, and ggml. While TensorRT is often cited as the go-to format when discussing fast inference, in practice I don't see people discussing it, or repositories publishing TensorRT versions of models such as LLaMA. What is the problem with this format?
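For context, my understanding is that the usual route is PyTorch → ONNX → a TensorRT engine. A rough, illustrative sketch (model id and shapes are placeholders, and a real Llama export needs much more care) hints at why this is painful for autoregressive LLMs:

```python
# Illustrative only: naive export without KV cache; real deployments must also
# handle past_key_values, the sampling loop, and per-GPU engine rebuilds.
import torch
from transformers import AutoModelForCausalLM

class LogitsOnly(torch.nn.Module):
    """HF forward returns a dict, which ONNX tracing can't consume directly."""
    def __init__(self, m):
        super().__init__()
        self.m = m
    def forward(self, input_ids):
        return self.m(input_ids=input_ids, use_cache=False).logits

model = LogitsOnly(AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf")).eval()
dummy = torch.randint(0, 32000, (1, 128))  # fixed example shape
torch.onnx.export(model, (dummy,), "llama.onnx",
                  input_names=["input_ids"], output_names=["logits"],
                  dynamic_axes={"input_ids": {0: "batch", 1: "seq"}},
                  opset_version=17)
# The engine is then built offline, e.g.:
#   trtexec --onnx=llama.onnx --saveEngine=llama.plan --fp16
# Engines are tied to a specific GPU and shape profile, which is one reason
# ggml/GPTQ dominate for local hobbyist use.
```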
2023-07-28T02:19:53
https://www.reddit.com/r/LocalLLaMA/comments/15bkq6e/why_doesnt_the_tensorrt_format_of_large_language/
Due_Experience7898
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bkq6e
false
null
t3_15bkq6e
/r/LocalLLaMA/comments/15bkq6e/why_doesnt_the_tensorrt_format_of_large_language/
false
false
self
1
null
Extract information from a business document
1
I am trying to extract information from business documents. What should the prompt be? I used prompts like "Extract wages and tips, employee name, employee address" for a given W-2 document. I am running the Llama-2-7b-chat-hf LLM locally.
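For reference, here is a minimal sketch of the kind of structured-extraction prompt I mean, using the documented Llama-2-chat `[INST] <<SYS>>` format — the system instruction and JSON field names are illustrative choices, not a standard:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto")

doc_text = "...W-2 text from your PDF/OCR step..."  # placeholder
prompt = (
    "[INST] <<SYS>>\nYou extract fields from tax documents. "
    "Reply with JSON only.\n<</SYS>>\n\n"
    f"Document:\n{doc_text}\n\n"
    "Return JSON with keys: employee_name, employee_address, "
    "wages_tips_other_compensation. If a field is absent, use null. [/INST]"
)
ids = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**ids, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, not the prompt.
print(tok.decode(out[0][ids["input_ids"].shape[1]:], skip_special_tokens=True))
```

Asking for JSON with fixed keys tends to make the output easier to parse than free-form answers.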
2023-07-28T02:21:02
https://www.reddit.com/r/LocalLLaMA/comments/15bkr38/extract_information_from_a_business_document/
Few_Understanding76
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bkr38
false
null
t3_15bkr38
/r/LocalLLaMA/comments/15bkr38/extract_information_from_a_business_document/
false
false
self
1
null
Model for analyzing court judgements
1
I have 1000+ Supreme Court of India judgements in .pdf format. Please suggest an AI model for fine-tuning on and querying these judgements, e.g., "Show me all judgements related to rape and murder where Judge Mr. XXX gave a conviction. Give a summary of each judgement and relevant paragraph numbers."
2023-07-28T02:34:19
https://www.reddit.com/r/LocalLLaMA/comments/15bl0tj/model_for_analyzing_court_judgements/
subhashp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bl0tj
false
null
t3_15bl0tj
/r/LocalLLaMA/comments/15bl0tj/model_for_analyzing_court_judgements/
false
false
self
1
null
Best version for local CPU inference
1
I've been playing around with some versions and am looking for the best performance on a new machine with 2 × 20 CPU cores, 192 GB RAM, and 12 GB VRAM. The intended use is to help with Python coding. Which Llama flavor would be best to install and run locally?
2023-07-28T02:38:01
https://www.reddit.com/r/LocalLLaMA/comments/15bl3ki/best_version_for_local_cpu_inference/
CalligrapherRich5100
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bl3ki
false
null
t3_15bl3ki
/r/LocalLLaMA/comments/15bl3ki/best_version_for_local_cpu_inference/
false
false
self
1
null
Jobs Where You Can Work With Local LLMs?
1
Aside from data scientist and ML engineer. I know a lot of those roles require a graduate degree in CS. Is it useful to companies if you can deploy an LLM with Oobabooga and finetune LoRAs and QLoRAs? What if you add web scraping and Python to the mix?
2023-07-28T02:58:23
https://www.reddit.com/r/LocalLLaMA/comments/15bli8a/jobs_where_you_can_work_with_local_llms/
renegadellama
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bli8a
false
null
t3_15bli8a
/r/LocalLLaMA/comments/15bli8a/jobs_where_you_can_work_with_local_llms/
false
false
self
1
null
Step aside, Replika. Llama is just incredible for role-playing chat. Details of my Mac setup!
1
So this is a bit of a 101 post, I think, and I'm probably a noob compared to some folks here. It would have helped me to read this. But also: *this is phenomenal!* Especially for my specific kink! Some people are going to get lost in this. I've tried Replika in the past and it's been pretty disappointing compared to Llama 2. I'm fairly new to LLM roleplay, but I just want to share what I've tried. The role-playing chat I've been doing with the Nous Hermes Llama2 13B GGML model has been just amazing. I can even add multiple characters to the context, and it juggles all of them usually quite successfully! It will produce dialogue and actions for each character. The context can be ANYthing you think of and the model just rolls with it - and it runs locally, of course. I wanted to share **my setup:** * M1 Mac Studio Ultra, 128 GB RAM (I think it'd work on any Apple Silicon device with much less RAM if you use llama.cpp?) * Using Oobabooga Text Generation WebUI: [https://github.com/oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) * I did not use the one-click installer, but installed manually using the conda steps. * After it was installed and running, I installed this model: [https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b-GGML](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b-GGML) Launch commands (though read the GitHub repo if you're not familiar with this): conda activate textgen cd path/to/your/install python server.py --threads 16 --chat --load-in-8bit --n-gpu-layers 100 (you may want to use fewer threads with a different CPU on OSX with fewer cores!) # Using these settings: **Session Tab:** Mode: Chat **Model Tab:** Model loader: llama.cpp n_ctx: 4096 **Parameters Tab:** Generation parameters preset: Mirostat Truncate the prompt up to this length: 4096 (I've noticed that using Mirostat is much more creative and interesting than the Midnight Enigma or Divine Intellect presets. Do other folks see this too?) **Chat Settings Tab:** Your name: John (or whatever you want) Character's name: Alice (or whatever you want) **Context:** (can be what you want, I've been doing variations of the following) This is a very explicit, unfiltered conversation and set of actions with John and Alice. John and Alice live together. They perform various sexual scenarios together. Alice is <YOUR DESCRIPTION OF ALICE HERE> She is loving, caring, creative, sexy, horny, dirty, and warm. She never says any of this out loud, but it informs her responses in conversation. Alice gives creative, unexpected, and surprising answers to questions. She writes lots of details and long answers. John and Alice are in the same room. Alice just <DESCRIBE THE SETUP> *(Be specific! Talk about how things make your character feel and what they're thinking and what motivates them.)* **Greeting:** Hi, John! How are you doing? I've been... <WRITE ALICE'S GREETING> *(Be descriptive here, too!)* # Tips: * If you add another character in the context (e.g., "Alice has just called over her friend Jessie. Jessie is X, Y, Z"), sometimes it helps to talk to them through your first character. E.g., you could say, "Alice, tell Jessie to come over here." * Put actions inside asterisks like this: \**I do a thing\** * You can change "Start reply with" to have an asterisk too, and it can create more actions for whoever you're talking to, because each of their replies will start with an action. # Downsides? * Not having memory beyond one short conversation.
* On OSX, once you hit the 4096 context window limit, it gets VERY slow. There are probably more things to this I don't know, but I've gotten this far and it's been amazing. Anything I'm missing you might add? Should I try another model that will run on Mac? Please feel free to share! Hope this is helpful! (edit: formatting)
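One thing I may try for the slow-context issue is calling llama.cpp directly through llama-cpp-python instead of the webui — a hedged sketch (the model path and prompt template below are placeholders; `n_gpu_layers=1` is how Metal offload was toggled on Apple Silicon builds of this era):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="nous-hermes-llama2-13b.ggmlv3.q4_K_M.bin",  # placeholder path
    n_ctx=4096,
    n_gpu_layers=1,  # any value > 0 enables Metal offload on Apple Silicon builds
)
out = llm("### Instruction:\nGreet John as Alice.\n\n### Response:\n",
          max_tokens=64, temperature=0.8)
print(out["choices"][0]["text"])
```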
2023-07-28T03:04:44
https://www.reddit.com/r/LocalLLaMA/comments/15bln1a/step_aside_replika_llama_is_just_incredible_for/
Ok_Ostrich788
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bln1a
false
null
t3_15bln1a
/r/LocalLLaMA/comments/15bln1a/step_aside_replika_llama_is_just_incredible_for/
false
false
nsfw
1
{'enabled': False, 'images': [{'id': 't6DguaivFKnNtM-dvA1lByb5xG75-7OTREMJQESa2No', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=108&crop=smart&auto=webp&s=d02c896ff248941637f893a517dc11079946a61d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=216&crop=smart&auto=webp&s=b16b86a6d84ff15271dc6565dfa882b3cb3d14d1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=320&crop=smart&auto=webp&s=875fcd52ef15147c4e028afc6e6ba61ece7d4e70', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=640&crop=smart&auto=webp&s=ff49ef43c0c5063e13fd4417f7e9233190cc2ff7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=960&crop=smart&auto=webp&s=40183d3e441b985897b4f4252f1f4daab78fb796', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=1080&crop=smart&auto=webp&s=fdee5ed1ecfb1ee7f589474272c9f03f2c2ae162', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?auto=webp&s=acb706f148b29487f350fb006647134df3bb0cc7', 'width': 1200}, 'variants': {'nsfw': {'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=b29d20149fdc8bf73d0040ccda0f436d0b0fc5b3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=e9582d66d987c75aa8890141ecb9e352777069d3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=455ff95eb703c334850bcf7868251ea53abb3f0c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=4f7f81c6e797a523559308832509d6bdf0c6e546', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=672343cd69b178d7aa6c615406371e47d6a6b0a6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=42e38a28729034da5a6bc68e5ec1cf512f36abe4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?blur=40&format=pjpg&auto=webp&s=d08bbb9ef93f910198ce72d62a35cbe01c7cdf25', 'width': 1200}}, 'obfuscated': {'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=b29d20149fdc8bf73d0040ccda0f436d0b0fc5b3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=e9582d66d987c75aa8890141ecb9e352777069d3', 'width': 216}, {'height': 160, 'url': 
'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=455ff95eb703c334850bcf7868251ea53abb3f0c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=4f7f81c6e797a523559308832509d6bdf0c6e546', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=672343cd69b178d7aa6c615406371e47d6a6b0a6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=42e38a28729034da5a6bc68e5ec1cf512f36abe4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?blur=40&format=pjpg&auto=webp&s=d08bbb9ef93f910198ce72d62a35cbe01c7cdf25', 'width': 1200}}}}]}
The KoboldCpp FAQ and Knowledgebase - A comprehensive resource for newbies
1
To help answer the commonly asked questions and issues regarding KoboldCpp and ggml, I've assembled a comprehensive resource addressing them. ## [The KoboldCpp FAQ and Knowledgebase](https://github.com/LostRuins/koboldcpp/wiki) Covers everything from "how to extend context past 2048 with rope scaling", "what is smartcontext", "EOS tokens and how to unban them", "what's mirostat", "using the command line", sampler orders and types, stop sequence, KoboldAI API endpoints and more. If anyone has a question about KoboldCpp that's still not answered here, do let me know so I can add it.
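To make one of the covered topics concrete — the KoboldAI API endpoints — here is a minimal client sketch. The port is KoboldCpp's default (5001) as I understand it, and the payload fields shown are the common ones; the FAQ page is the authoritative reference:

```python
import requests

# KoboldCpp serves a KoboldAI-compatible API locally; payload fields here
# (prompt, max_length, temperature) are illustrative, see the FAQ for the full set.
payload = {"prompt": "Once upon a time", "max_length": 80, "temperature": 0.7}
r = requests.post("http://localhost:5001/api/v1/generate", json=payload)
r.raise_for_status()
print(r.json()["results"][0]["text"])
```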
2023-07-28T04:58:41
https://www.reddit.com/r/LocalLLaMA/comments/15bnsju/the_koboldcpp_faq_and_knowledgebase_a/
HadesThrowaway
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bnsju
false
null
t3_15bnsju
/r/LocalLLaMA/comments/15bnsju/the_koboldcpp_faq_and_knowledgebase_a/
false
false
self
1
{'enabled': False, 'images': [{'id': 'SranydDsTrtSaHwiKfJiQ8O6z-OQrxBNPAec8DlgepM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cWhk4HRILqs3JlO6Hf_sgi_YSWnjf-8u9-O9l9BYXPU.jpg?width=108&crop=smart&auto=webp&s=b4d998be0773bc2c099ebc74d3a2f89af655aa4e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/cWhk4HRILqs3JlO6Hf_sgi_YSWnjf-8u9-O9l9BYXPU.jpg?width=216&crop=smart&auto=webp&s=75960fcb9651332c1bdc957cb6aa15a25f42b1b9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/cWhk4HRILqs3JlO6Hf_sgi_YSWnjf-8u9-O9l9BYXPU.jpg?width=320&crop=smart&auto=webp&s=301a4152fdc71ad6222132361d4110f4fee44a5d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/cWhk4HRILqs3JlO6Hf_sgi_YSWnjf-8u9-O9l9BYXPU.jpg?width=640&crop=smart&auto=webp&s=00e0bbde5b68cbe51450299bf77809fda5ea62aa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/cWhk4HRILqs3JlO6Hf_sgi_YSWnjf-8u9-O9l9BYXPU.jpg?width=960&crop=smart&auto=webp&s=3eded1aab4702da527366492e669b605f8cf45de', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/cWhk4HRILqs3JlO6Hf_sgi_YSWnjf-8u9-O9l9BYXPU.jpg?width=1080&crop=smart&auto=webp&s=72647b2d2b6d6b5e9bea9edc4212a6ff20472a89', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/cWhk4HRILqs3JlO6Hf_sgi_YSWnjf-8u9-O9l9BYXPU.jpg?auto=webp&s=e3321fe53a992702c94606bbdc82a630c6a11c1e', 'width': 1200}, 'variants': {}}]}
oobabooga webui regenerates same response, but Kobold.cpp doesn't.
1
Hello, can someone help me figure out what I'm doing wrong? I'd like to use oobabooga with Llama 2 7B chat, but it's very repetitive in general and I can't get a different output when regenerating. Kobold.cpp gives a much more pleasant experience for storytelling, but it's slower.
2023-07-28T06:11:05
https://www.reddit.com/r/LocalLLaMA/comments/15bp3rf/oobabooga_webui_regenerates_same_response_but/
Emergency_Drink_7063
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bp3rf
false
null
t3_15bp3rf
/r/LocalLLaMA/comments/15bp3rf/oobabooga_webui_regenerates_same_response_but/
false
false
self
1
null
Fine-tuning btlm-3b
1
Has anyone managed to fine-tune BTLM-3B from Cerebras? I tried adapting a Llama 2 Colab, but I got the following error in trainer.train(): > "RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation" It seems to stem from "modeling_btlm.py". Has anyone gotten fine-tuning to work?
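For reference, one workaround I'm considering — untested on BTLM specifically, so this is an assumption — since this class of error often appears when PEFT/LoRA is combined with gradient checkpointing and frozen embeddings:

```python
# Hypothetical workaround sketch; enable_input_require_grads() is the standard
# transformers helper for "leaf Variable that requires grad" errors with
# gradient checkpointing + frozen base weights.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "cerebras/btlm-3b-8k-base", trust_remote_code=True)
model.gradient_checkpointing_enable()
model.enable_input_require_grads()  # makes embedding outputs non-leaf tensors
# If the traceback points at an in-place op inside modeling_btlm.py itself,
# the blunt fix is editing a local copy to replace e.g. `x += y` with `x = x + y`.
```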
2023-07-28T06:14:44
https://www.reddit.com/r/LocalLLaMA/comments/15bp658/finetuning_btlm3b/
Disastrous_Elk_6375
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bp658
false
null
t3_15bp658
/r/LocalLLaMA/comments/15bp658/finetuning_btlm3b/
false
false
self
1
null
Running 70B models is possible on a modest desktop / GPU combo (32 GB RAM + 12 GB VRAM)
1
I just wanted to report that with some faffing around I was able to get a 70B 3-bit model inferencing at ~1 token/second on Win 11. It was a LOT slower via WSL, possibly because I couldn't get --mlock to work with such a high memory requirement. ./main -m \Models\TheBloke\Llama-2-70B-Chat-GGML\llama-2-70b-chat.ggmlv3.q3_K_S.bin -p "<PROMPT>" --n-gpu-layers 24 -eps 1e-5 -t 4 --verbose-prompt --mlock -n 50 -gqa 8 i7-9700K, 32 GB RAM, 3080 Ti
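For anyone tuning --n-gpu-layers on similar hardware, here is the rough arithmetic I used — all numbers are approximate:

```python
# Back-of-envelope layer-offload estimate (numbers are rough approximations).
file_gb = 28.0        # llama-2-70b q3_K_S GGML file size, roughly
n_layers = 80         # transformer blocks in Llama-2-70B
per_layer_gb = file_gb / n_layers     # ~0.35 GB per offloaded layer at 3-bit
vram_gb, headroom_gb = 12.0, 2.0      # 3080 Ti, minus context/scratch space
print(int((vram_gb - headroom_gb) / per_layer_gb))  # ~28; 24 leaves some margin
```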
2023-07-28T06:31:01
https://www.reddit.com/r/LocalLLaMA/comments/15bpggs/running_70b_models_possible_on_modest_desktop_gpu/
gofiend
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bpggs
false
null
t3_15bpggs
/r/LocalLLaMA/comments/15bpggs/running_70b_models_possible_on_modest_desktop_gpu/
false
false
self
1
null
LORAHUB: EFFICIENT CROSS-TASK GENERALIZATION VIA DYNAMIC LORA COMPOSITION
1
ABSTRACT >Low-rank adaptations (LoRA) are often employed to fine-tune large language models (LLMs) for new tasks. This paper investigates LoRA composability for cross-task generalization and introduces LoraHub, a strategic framework devised for the purposive assembly of LoRA modules trained on diverse given tasks, with the objective of achieving adaptable performance on unseen tasks. With just a few examples from a novel task, LoraHub enables the fluid combination of multiple LoRA modules, eradicating the need for human expertise. Notably, the composition requires neither additional model parameters nor gradients. Our empirical results, derived from the Big-Bench Hard (BBH) benchmark, suggest that LoraHub can effectively mimic the performance of in-context learning in few-shot scenarios, excluding the necessity of in-context examples alongside each inference input. A significant contribution of our research is the fostering of a community for LoRA, where users can share their trained LoRA modules, thereby facilitating their application to new tasks. We anticipate this resource will widen access to and spur advancements in general intelligence as well as LLMs in production. Code will be available at github.com/sail-sg/lorahub. Code: [github.com/sail-sg/lorahub](https://github.com/sail-sg/lorahub) Demo: [https://huggingface.co/spaces/sail/lorahub](https://huggingface.co/spaces/sail/lorahub)
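To make the mechanism concrete, here is an illustrative sketch of what "dynamic LoRA composition" amounts to — my paraphrase, not the paper's code, and the function/variable names are invented:

```python
# Core LoraHub idea: a weighted sum of existing LoRA deltas, with the weights
# found by gradient-free search on a few examples from the unseen task.
import torch

def compose(base_weight, lora_modules, coeffs):
    """base_weight: (d_out, d_in); lora_modules: list of (B, A) low-rank pairs;
    coeffs: per-module scalars proposed by e.g. a CMA-ES-style searcher."""
    delta = sum(w * (B @ A) for w, (B, A) in zip(coeffs, lora_modules))
    return base_weight + delta  # no gradients through the base model needed

# The outer loop just scores candidate coeff vectors on the few-shot examples
# and keeps the best -- no backprop and no additional trainable parameters.
```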
2023-07-28T06:56:37
https://www.reddit.com/r/LocalLLaMA/comments/15bpwb0/lorahub_efficient_crosstask_generalization_via/
ninjasaid13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bpwb0
false
null
t3_15bpwb0
/r/LocalLLaMA/comments/15bpwb0/lorahub_efficient_crosstask_generalization_via/
false
false
self
1
{'enabled': False, 'images': [{'id': 'ySWJANpphaORKInKmb5zfuKalmeGhJyUmfreENdhMlQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mNTtfzjcHSTGyYyVSxgjd2Ss3NsXS_GhLnINf3DUAv8.jpg?width=108&crop=smart&auto=webp&s=0f20f0d09960201ca6890be7bbb125c12f0748cd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mNTtfzjcHSTGyYyVSxgjd2Ss3NsXS_GhLnINf3DUAv8.jpg?width=216&crop=smart&auto=webp&s=33bdc688ea54b109998fee48f63d46e6ec0c9b4d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mNTtfzjcHSTGyYyVSxgjd2Ss3NsXS_GhLnINf3DUAv8.jpg?width=320&crop=smart&auto=webp&s=8dbf0149ff592ecb6176c7e94b2bb4e7c4078bfa', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mNTtfzjcHSTGyYyVSxgjd2Ss3NsXS_GhLnINf3DUAv8.jpg?width=640&crop=smart&auto=webp&s=bac8c8bc8d876d8f6ace20b036eaa80bdb2ccb04', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mNTtfzjcHSTGyYyVSxgjd2Ss3NsXS_GhLnINf3DUAv8.jpg?width=960&crop=smart&auto=webp&s=c79ec8dc69224ea9ecf3d2342a2d714b5c953ead', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mNTtfzjcHSTGyYyVSxgjd2Ss3NsXS_GhLnINf3DUAv8.jpg?width=1080&crop=smart&auto=webp&s=6b8cc34ad57b0619a9a931ab5471d22dac5d9143', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mNTtfzjcHSTGyYyVSxgjd2Ss3NsXS_GhLnINf3DUAv8.jpg?auto=webp&s=30ec3c9b3fc47de0f6d621d8e1545ef65406e3db', 'width': 1200}, 'variants': {}}]}
A great summary/book notes prompt I have created...
4
I have found this to be most useful in Claude since the context length is so high and one can upload docs. It is a multi-step prompt chain: upload your doc, then... "make book notes for this book like Aseem Thakar. do chapter by chapter. indepth, do not lose detail." then, if you need more chapters... "continue from chapter *" (* = last chapter it did) finally... "explain in more thorough detail" --- This has given me phenomenal results in terms of quick summaries that are actually in-depth. For example, here is some of $100M Offers by Alex Hormozi. The summary is for the last chapter, which is a quick recap of all the other chapters. (summary inception) " ### Your First $100,000 Summary of Key Lessons - This concluding chapter recaps core insights from the book to reinforce key principles: - Avoid commoditization through differentiated, irresistible Grand Slam Offers - Pick profitable markets; niche focus brings riches - Premium pricing raises perceived value; fuels growth flywheel - Quantify value drivers; minimize delays and costs - Methodically build offers; layer bonuses, guarantees and scarcity - Rename frequently; sustain interest by repackaging themes - Iterate based on data; double down on what converts The First $100k - Reaching $100k profitability marks a pivotal milestone both financially and psychologically: - Financially, $100k offers security even with major mistakes or downturns - Psychologically, surmounting 'struggle' provides relief after persistence finally pays off - Crossing this threshold unlocks ambition for much greater goals by breaking through self-limiting beliefs - After $100k, progress accelerates as skills compound and you systemize what works Book Delivers High Value Advice - The book equips entrepreneurs with battle-tested frameworks to shortcut learning curves: - Proven offer formula reliably attracts more customers at higher prices - Tactical techniques enhance conversion using psychological principles - Real-world examples demonstrate nuanced applications across contexts - Actionable advice skips theoretical concepts for high-leverage applied value - Holistic education on profit-centric growth helps entrepreneurs reach goals quicker The core message I took from this book is the importance of crafting irresistible offers by creatively solving high-value problems for customers. This is what I will focus on in my own business."
2023-07-28T07:04:56
https://www.reddit.com/r/LocalLLaMA/comments/15bq1re/a_great_summarybook_notes_prompt_i_have_created/
ArmoredBattalion
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bq1re
false
null
t3_15bq1re
/r/LocalLLaMA/comments/15bq1re/a_great_summarybook_notes_prompt_i_have_created/
false
false
self
4
null
Dockerized Full Stack llama.cpp API server and R (rshiny) application
1
[removed]
2023-07-28T08:52:43
https://i.redd.it/tt7mul1m6oeb1.jpg
Happy_Chicken9835
i.redd.it
1970-01-01T00:00:00
0
{}
15brvzy
false
null
t3_15brvzy
/r/LocalLLaMA/comments/15brvzy/dockerized_full_stack_llamacpp_api_server_and_r/
false
false
https://b.thumbs.redditm…q6b89GO1c1Lc.jpg
1
{'enabled': True, 'images': [{'id': 'ZY_mURRXxQ9rFpc1nEUcEdns_rxjhWupp2YqCP6t9kc', 'resolutions': [{'height': 97, 'url': 'https://preview.redd.it/tt7mul1m6oeb1.jpg?width=108&crop=smart&auto=webp&s=717928d10221849ab065c214f2a0f9f2464193ad', 'width': 108}, {'height': 195, 'url': 'https://preview.redd.it/tt7mul1m6oeb1.jpg?width=216&crop=smart&auto=webp&s=6884d7391ce0c03c1cd162c77472bebeec43f143', 'width': 216}, {'height': 288, 'url': 'https://preview.redd.it/tt7mul1m6oeb1.jpg?width=320&crop=smart&auto=webp&s=0e26340668c09d5fba4987989c6b8916a9aa35e6', 'width': 320}, {'height': 577, 'url': 'https://preview.redd.it/tt7mul1m6oeb1.jpg?width=640&crop=smart&auto=webp&s=8fb01871e652cd754b3586bc5f748c5ae330ff44', 'width': 640}, {'height': 866, 'url': 'https://preview.redd.it/tt7mul1m6oeb1.jpg?width=960&crop=smart&auto=webp&s=0862cae72dd1c260a52e32e75ea9ff25f4b9bea2', 'width': 960}, {'height': 975, 'url': 'https://preview.redd.it/tt7mul1m6oeb1.jpg?width=1080&crop=smart&auto=webp&s=a9c534f7fc288fce8c00ed9555ada0b9356518f7', 'width': 1080}], 'source': {'height': 1506, 'url': 'https://preview.redd.it/tt7mul1m6oeb1.jpg?auto=webp&s=8fa648f547223d3af610dd6216f85edc240ab0c6', 'width': 1668}, 'variants': {}}]}