Dataset columns: title (string, 1-300 chars), score (int64, 0-8.54k), selftext (string, 0-40k chars), created (timestamp[ns]), url (string, 0-780 chars), author (string, 3-20 chars), domain (string, 0-82 chars), edited (timestamp[ns]), gilded (int64, 0-2), gildings (string, 7 distinct values), id (string, 7 chars), locked (bool), media (string, 646-1.8k chars), name (string, 10 chars), permalink (string, 33-82 chars), spoiler (bool), stickied (bool), thumbnail (string, 4-213 chars), ups (int64, 0-8.54k), preview (string, 301-5.01k chars)
I have 44 gb of VRAM, which is the best model I can run
1
This includes one RTX A5500 and one RTX A4500, 24 GB and 20 GB, which combined give me 44 GB. What is the largest model I can get to run on it?
2023-08-25T17:49:32
https://www.reddit.com/r/LocalLLaMA/comments/161649f/i_have_44_gb_of_vram_which_is_the_best_model_i/
syndorthebore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161649f
false
null
t3_161649f
/r/LocalLLaMA/comments/161649f/i_have_44_gb_of_vram_which_is_the_best_model_i/
false
false
self
1
null
Is it really worth to build a custom machine over using chatgpt4?
1
Hi there, I am working on RAG (Retrieval Augmented Generation) systems over medical PDFs and books (100K+ documents, 1k books). I am wondering whether I should invest in building my own machine for the project and use open-source models like Llama, or whether I should go with OpenAI GPT-4. I could reduce the number of requests to OpenAI for already-asked questions (cache them, for example), or maybe there are other ways to reduce the cost. If I should build my own machine, what are the minimum requirements? What are your thoughts?
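A minimal sketch of the caching idea mentioned above, assuming a placeholder `ask_llm` function stands in for whatever OpenAI or local-model call is used; the cache file name is made up:

```python
import hashlib
import json
from pathlib import Path

CACHE_FILE = Path("qa_cache.json")  # hypothetical cache location
cache = json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}

def ask_llm(question: str) -> str:
    # Placeholder for the real call (OpenAI API or a local Llama endpoint).
    raise NotImplementedError

def cached_answer(question: str) -> str:
    # Normalize and hash the question so repeated, identical queries hit the cache
    # instead of triggering a new (paid) request.
    key = hashlib.sha256(question.strip().lower().encode()).hexdigest()
    if key not in cache:
        cache[key] = ask_llm(question)
        CACHE_FILE.write_text(json.dumps(cache))
    return cache[key]
```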
2023-08-25T18:03:15
https://www.reddit.com/r/LocalLLaMA/comments/1616h5j/is_it_really_worth_to_build_a_custom_machine_over/
No-Trifle2470
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1616h5j
false
null
t3_1616h5j
/r/LocalLLaMA/comments/1616h5j/is_it_really_worth_to_build_a_custom_machine_over/
false
false
self
1
null
Anyway to use Runpod on android?
1
Hey, is there any way I could control/interact with a RunPod-hosted LLM (or something similar) from an Android-based mobile device? I'm guessing the answer is no, but I thought I'd ask anyway.
2023-08-25T18:04:17
https://www.reddit.com/r/LocalLLaMA/comments/1616i2m/anyway_to_use_runpod_on_android/
theshoelesschap
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1616i2m
false
null
t3_1616i2m
/r/LocalLLaMA/comments/1616i2m/anyway_to_use_runpod_on_android/
false
false
self
1
null
Over
1
2023-08-25T18:10:18
https://i.redd.it/l6fiqstkrakb1.jpg
HOLUPREDICTIONS
i.redd.it
1970-01-01T00:00:00
0
{}
1616nq9
false
null
t3_1616nq9
/r/LocalLLaMA/comments/1616nq9/over/
false
false
https://a.thumbs.redditm…ah62AzVeJSo8.jpg
1
{'enabled': True, 'images': [{'id': 'kNYNg_Mmhc7P2TK3e7X0Nv6v8FiNf_XdiiFmCqbRR0Q', 'resolutions': [{'height': 139, 'url': 'https://preview.redd.it/l6fiqstkrakb1.jpg?width=108&crop=smart&auto=webp&s=d6541df16682bf11532b2e2f545091f7d9b34705', 'width': 108}, {'height': 278, 'url': 'https://preview.redd.it/l6fiqstkrakb1.jpg?width=216&crop=smart&auto=webp&s=bf69f8cfcbfd5e625a7ff1e2db26735e1154d97a', 'width': 216}, {'height': 412, 'url': 'https://preview.redd.it/l6fiqstkrakb1.jpg?width=320&crop=smart&auto=webp&s=e19dc75681a020bb199140e5d72421169c64e67a', 'width': 320}, {'height': 824, 'url': 'https://preview.redd.it/l6fiqstkrakb1.jpg?width=640&crop=smart&auto=webp&s=0a5b0a9fcbda5db6e575b501b8f465f19738ceb1', 'width': 640}], 'source': {'height': 992, 'url': 'https://preview.redd.it/l6fiqstkrakb1.jpg?auto=webp&s=260bd3aa1e6c7f6e81d9cccf22d08b67bee288e0', 'width': 770}, 'variants': {}}]}
Meta's Latest AI Release Is an Open Source Coding Bot
1
2023-08-25T19:07:51
https://www.pulsewaveapp.com/Metas-Latest-AI-Release-Is-an-Open-Source-Coding-Bot/
Starship_2_Mars
pulsewaveapp.com
1970-01-01T00:00:00
0
{}
16185h4
false
null
t3_16185h4
/r/LocalLLaMA/comments/16185h4/metas_latest_ai_release_is_an_open_source_coding/
false
false
https://b.thumbs.redditm…4c0eA3doEuvk.jpg
1
{'enabled': False, 'images': [{'id': 'qpAO3OBs_qshvyRMN6MUFVmJ1ZXu_oOmXZwkAWxULgE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/3evmZRmeFIrJjXtN26pBjouE6vHxEWMWRU8U7vJEyI4.jpg?width=108&crop=smart&auto=webp&s=25d9e0a5167a5675fb46397a0bb92c35adebe151', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/3evmZRmeFIrJjXtN26pBjouE6vHxEWMWRU8U7vJEyI4.jpg?width=216&crop=smart&auto=webp&s=faeace1be81dbe663bba0af4dff5b2434a1f504b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/3evmZRmeFIrJjXtN26pBjouE6vHxEWMWRU8U7vJEyI4.jpg?width=320&crop=smart&auto=webp&s=bc35f8cb55854e69f8c41537e8623d620f36dd8c', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/3evmZRmeFIrJjXtN26pBjouE6vHxEWMWRU8U7vJEyI4.jpg?width=640&crop=smart&auto=webp&s=c92d187ddd9a73e0ee5672165c643c6de7db0bda', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/3evmZRmeFIrJjXtN26pBjouE6vHxEWMWRU8U7vJEyI4.jpg?width=960&crop=smart&auto=webp&s=7bb6d7415de27874cdac09d0db9eb6f9f007d0b2', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/3evmZRmeFIrJjXtN26pBjouE6vHxEWMWRU8U7vJEyI4.jpg?width=1080&crop=smart&auto=webp&s=d0ee6f24a4b8f8c71d9be16a89db8e9d3a5a5795', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/3evmZRmeFIrJjXtN26pBjouE6vHxEWMWRU8U7vJEyI4.jpg?auto=webp&s=7e441e21b1ae108d0b2048346528e9e44e817170', 'width': 1920}, 'variants': {}}]}
CodeLlama-34b-Instruct-hf
1
Getting interesting results with non-code completions, you can explore the compressed dataset using 0 temp [https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf?text=I+love](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf?text=I+love) https://preview.redd.it/e0sqlfva5bkb1.png?width=3482&format=png&auto=webp&s=dc10155f0fd834b220323ab555db474fcdec1755
2023-08-25T19:29:15
https://www.reddit.com/r/LocalLLaMA/comments/1618p7x/codellama34binstructhf/
nanowell
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1618p7x
false
null
t3_1618p7x
/r/LocalLLaMA/comments/1618p7x/codellama34binstructhf/
false
false
https://b.thumbs.redditm…UbRxx_qRRGkY.jpg
1
{'enabled': False, 'images': [{'id': 'H8du3_sRoUxaA_HDoduEHkh4B49hWQId75tVxSKFVus', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NhcGQvs4ZNIYiOlUzJOf7iXwNTKxxX8LNEWdMOH_wuo.jpg?width=108&crop=smart&auto=webp&s=b8241e0f7b7cafc3133adfb63484b6f79a3c33e8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/NhcGQvs4ZNIYiOlUzJOf7iXwNTKxxX8LNEWdMOH_wuo.jpg?width=216&crop=smart&auto=webp&s=f1da83375522dd2c43f25f8524f8efb4fb678f97', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/NhcGQvs4ZNIYiOlUzJOf7iXwNTKxxX8LNEWdMOH_wuo.jpg?width=320&crop=smart&auto=webp&s=680ab14386c2843de797430d2c8789d1e33cbc4c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/NhcGQvs4ZNIYiOlUzJOf7iXwNTKxxX8LNEWdMOH_wuo.jpg?width=640&crop=smart&auto=webp&s=83b83b179a1670213b4b2925fdbf57dac7c5c132', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/NhcGQvs4ZNIYiOlUzJOf7iXwNTKxxX8LNEWdMOH_wuo.jpg?width=960&crop=smart&auto=webp&s=3a850b60e05f5516f1782bf383f22827c87e1fc8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/NhcGQvs4ZNIYiOlUzJOf7iXwNTKxxX8LNEWdMOH_wuo.jpg?width=1080&crop=smart&auto=webp&s=ab8e3a9a5ab61d383522fd9d24e17d5c31f29e4c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/NhcGQvs4ZNIYiOlUzJOf7iXwNTKxxX8LNEWdMOH_wuo.jpg?auto=webp&s=60565d0ca8190c19e6bdab292ad6ef8ae2b130c2', 'width': 1200}, 'variants': {}}]}
How to get the same inference quality between local and Perplexity ?
1
Hello, I tried CodeLlama-Instruct-34B today on Perplexity ([https://labs.perplexity.ai](https://labs.perplexity.ai)) and compared it to a local run with oobabooga/ExLlama. My local setup is a 3090 with 24 GB of VRAM and 64 GB of RAM. I downloaded TheBloke's quantized versions of both the Instruct and Python variants and have been trying them (the default version and 64g-actorder_True). All local runs produce extremely weird answers.

Example question: Help me to code this: write a Python code with an example of the to_csv function of Pandas in Python.

Local output: "Sure! I'm glad to help you. Please tell me more about what you need. What would you like for me to do for you? Would you like for me to show you how to use the csv module from the standard built-in python library package package package package package package package package package package package" (package repeated until max_new_tokens).

Output of Perplexity / CodeLlama-Instruct-34B:

    Sure, here is an example of how you could use the to_csv() function in Python:

    import pandas as pd

    # create a sample dataframe
    data = {'Name': ['John', 'Mary', 'David'],
            'Age': [25, 30, 40],
            'Gender': ['Male', 'Female', 'Male']}
    df = pd.DataFrame(data)

    # save the dataframe to a CSV file
    df.to_csv('example.csv')

This will save the contents of the DataFrame to a CSV file called "example.csv" in the current working directory. The resulting CSV file will look ... (truncated answer, but the quality is very good).

Can this be due to some parameters on my side?
2023-08-25T19:32:11
https://www.reddit.com/r/LocalLLaMA/comments/1618s51/how_to_get_the_same_inference_quality_between/
gaara988
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1618s51
false
null
t3_1618s51
/r/LocalLLaMA/comments/1618s51/how_to_get_the_same_inference_quality_between/
false
false
self
1
null
An article on latent potentials and the analysis of Hidden Layers
1
Here’s a link to an article that discusses how analysis of a LLM’s Hidden Layers can be used to determine how capable a model is at generating output on domains that it wasn’t trained on. An interesting topic!
2023-08-25T19:42:09
https://medium.com/@greg.broadhead/title-unveiling-the-mystique-exposing-latent-potentials-in-large-language-models-5d749734170d
GeeBee72
medium.com
1970-01-01T00:00:00
0
{}
161919q
false
null
t3_161919q
/r/LocalLLaMA/comments/161919q/an_article_on_latent_potentials_and_the_analysis/
false
false
https://b.thumbs.redditm…VamdqxRlGzjA.jpg
1
{'enabled': False, 'images': [{'id': 'xrQGkF_T6mV13Uq8gVmfoXNhVR0HhmSQUMDW-TWMXCg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/EQHggOROS_SKmX8zTD89Q0Fcl1r3KSdEr8-bqOj680Y.jpg?width=108&crop=smart&auto=webp&s=0209eb1eaa3943ae5f93992f1c837db85e77f561', 'width': 108}, {'height': 217, 'url': 'https://external-preview.redd.it/EQHggOROS_SKmX8zTD89Q0Fcl1r3KSdEr8-bqOj680Y.jpg?width=216&crop=smart&auto=webp&s=325e3544412da5da5b0bfe2511567055886a13a6', 'width': 216}, {'height': 322, 'url': 'https://external-preview.redd.it/EQHggOROS_SKmX8zTD89Q0Fcl1r3KSdEr8-bqOj680Y.jpg?width=320&crop=smart&auto=webp&s=9f79f4e2cf5cafbc17b978e1134b561f924d85d3', 'width': 320}, {'height': 645, 'url': 'https://external-preview.redd.it/EQHggOROS_SKmX8zTD89Q0Fcl1r3KSdEr8-bqOj680Y.jpg?width=640&crop=smart&auto=webp&s=4eef3d319fa63bc25bb5b943a426f30cb57ecfa9', 'width': 640}], 'source': {'height': 967, 'url': 'https://external-preview.redd.it/EQHggOROS_SKmX8zTD89Q0Fcl1r3KSdEr8-bqOj680Y.jpg?auto=webp&s=c8edbee9233730e9782e3daefd098de36fcc736e', 'width': 959}, 'variants': {}}]}
Code LLama Running Locally with Faraday.dev
1
2023-08-25T19:50:20
https://i.redd.it/et00suh07bkb1.gif
719Ben
i.redd.it
1970-01-01T00:00:00
0
{}
16198py
false
null
t3_16198py
/r/LocalLLaMA/comments/16198py/code_llama_running_locally_with_faradaydev/
false
false
https://b.thumbs.redditm…3Da4K3hc_KgQ.jpg
1
{'enabled': True, 'images': [{'id': 'hAbl_CWioS2VkQHehKkyN8GXTiLEzStB2cl-pQ2__SY', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/et00suh07bkb1.gif?width=108&crop=smart&format=png8&s=1b0179887244cf98f2bd3c98874c29f8ab348f82', 'width': 108}, {'height': 134, 'url': 'https://preview.redd.it/et00suh07bkb1.gif?width=216&crop=smart&format=png8&s=3fce8c25e611a924e392f22344d888bd77dbf4cc', 'width': 216}, {'height': 199, 'url': 'https://preview.redd.it/et00suh07bkb1.gif?width=320&crop=smart&format=png8&s=bbbd5b1d8c1f41b3c768f06dd9b58ca2174f2ee2', 'width': 320}], 'source': {'height': 374, 'url': 'https://preview.redd.it/et00suh07bkb1.gif?format=png8&s=05115d14d52f62a3935e48dac8c297d1097ae2db', 'width': 600}, 'variants': {'gif': {'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/et00suh07bkb1.gif?width=108&crop=smart&s=884e4d804b29b94eacbd7b70bf2b2c3a9e5f863b', 'width': 108}, {'height': 134, 'url': 'https://preview.redd.it/et00suh07bkb1.gif?width=216&crop=smart&s=880473c0b6c79cc1f7fad0b3c749faf20e54d418', 'width': 216}, {'height': 199, 'url': 'https://preview.redd.it/et00suh07bkb1.gif?width=320&crop=smart&s=7bbc8a6ca5131e07db05e5839972cf6aa39da315', 'width': 320}], 'source': {'height': 374, 'url': 'https://preview.redd.it/et00suh07bkb1.gif?s=d9aa5fd0312f44f1e119ef897ec86e64f3f94576', 'width': 600}}, 'mp4': {'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/et00suh07bkb1.gif?width=108&format=mp4&s=5e9c8b67ba27bdd034e408fb3a31ce88fb80858a', 'width': 108}, {'height': 134, 'url': 'https://preview.redd.it/et00suh07bkb1.gif?width=216&format=mp4&s=7db49bfabdb12208c028678c75a1d053a959da8f', 'width': 216}, {'height': 199, 'url': 'https://preview.redd.it/et00suh07bkb1.gif?width=320&format=mp4&s=ab081d3a9443b6cc7d11090a5ea3e61cb89d808e', 'width': 320}], 'source': {'height': 374, 'url': 'https://preview.redd.it/et00suh07bkb1.gif?format=mp4&s=100aef438e97726646b3b59c97424dc659c2c2af', 'width': 600}}}}]}
Code Llama - The Hugging Face Edition
1
Hey all! Chief Llama Officer at Hugging Face here! Like all of you, I'm quite excited about Code Llama being released. Like most of you, I've also struggled to use it. For the last 24 hours, we've sprinted to make things nice and easy for all of you. We'll be iterating to make things easier, faster, and smoother, but excited to share our first news!

**TL;DR version - What's new?** So today, we're releasing:

* A nice [blog post](https://huggingface.co/blog/codellama)
* All [9 models](https://huggingface.co/codellama) uploaded to Hugging Face and supported in transformers*
* A CodeLlama [Playground](https://huggingface.co/spaces/codellama/codellama-playground) for the 13B model
* A CodeLlama Chat [Playground](https://huggingface.co/spaces/codellama/codellama-13b-chat) for the 13B instruct-tuned model
* An [update](https://github.com/huggingface/transformers/pull/25740) in transformers to support CodeLlama (you need to install from main)
* A guide on how to use the conversational model (see blog post)
* An explanation on 4-bit loading the model
* Support in text generation inference for production-ready inference
* Community evaluation of the models. Check out the [multilingual code models evaluation](https://huggingface.co/spaces/bigcode/multilingual-code-evals) (or the blog [post](https://huggingface.co/blog/codellama#evaluation) for a comparison with other aspects)

**Blah blah blah...what can this do?**

* Run the small models in free Google Colab
* Do infilling (e.g. generate the best match given a prefix and suffix; generate docstrings)
* Generate code
* For the instruct models, you can have conversations

**What's next?** In the coming days, we'll work on sharing scripts to train models, optimizations for on-device inference, even nicer demos (and for more powerful models), and more. Feel free to like our GitHub repos ([transformers](https://github.com/huggingface/transformers), [peft](https://github.com/huggingface/peft), [accelerate](https://github.com/huggingface/accelerate)). Enjoy!

**Give me an example** Using the 13B Chat/Instruct-tuned model as a code assistant.

**Input**: Give me the python code for fibonnaci

**Output**

> Sure, here's a Python implementation of the Fibonacci sequence:
>
>     def fibonacci(n):
>         if n <= 1:
>             return n
>         else:
>             return fibonacci(n-1) + fibonacci(n-2)
>
> This function takes an integer n as input and returns the nth Fibonacci number. The function uses recursion to calculate the Fibonacci number.
>
> Here's an example of how to use the function:
>
>     print(fibonacci(5))  # Output: 5
>     print(fibonacci(8))  # Output: 21
>
> Note that the Fibonacci sequence is a sequence of numbers in which each number is the sum of the two preceding numbers. The first two numbers in the sequence are 0 and 1, and each subsequent number is the sum of the previous two numbers.

**Input 2:** What is it's big O complexity?

> The time complexity of the Fibonacci sequence is O(2^n) because the function calls itself recursively and the number of function calls increases exponentially with the size of the input. [...]
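A minimal sketch of the 4-bit loading mentioned above, assuming transformers installed from main plus bitsandbytes and accelerate; the model id, prompt, and generation settings here are only illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-7b-hf"  # smallest base checkpoint; swap for an -Instruct variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_4bit=True,   # requires bitsandbytes
    device_map="auto",   # requires accelerate
)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```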
2023-08-25T20:14:23
https://www.reddit.com/r/LocalLLaMA/comments/1619uxp/code_llama_the_hugging_face_edition/
hackerllama
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1619uxp
false
null
t3_1619uxp
/r/LocalLLaMA/comments/1619uxp/code_llama_the_hugging_face_edition/
false
false
self
1
{'enabled': False, 'images': [{'id': 'FeIXv2me6IcnLwKMBUP6UTAszr0o7BEfE8ukf8bDmrE', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/TbSF4PqitQx4n7XizZfzma98bX7_epvslEPc_tWrsJ8.jpg?width=108&crop=smart&auto=webp&s=09b397799d0477e85fbfa588dd6624a9b6b0b928', 'width': 108}, {'height': 133, 'url': 'https://external-preview.redd.it/TbSF4PqitQx4n7XizZfzma98bX7_epvslEPc_tWrsJ8.jpg?width=216&crop=smart&auto=webp&s=4380651809954441ba4f52fa6c25f58d758c2e60', 'width': 216}, {'height': 197, 'url': 'https://external-preview.redd.it/TbSF4PqitQx4n7XizZfzma98bX7_epvslEPc_tWrsJ8.jpg?width=320&crop=smart&auto=webp&s=d567d64e1d3345fe2149745bc084ecb11c99660d', 'width': 320}, {'height': 394, 'url': 'https://external-preview.redd.it/TbSF4PqitQx4n7XizZfzma98bX7_epvslEPc_tWrsJ8.jpg?width=640&crop=smart&auto=webp&s=903aeaf5148cc6ba26fc4e7608a04c745d68ae00', 'width': 640}, {'height': 591, 'url': 'https://external-preview.redd.it/TbSF4PqitQx4n7XizZfzma98bX7_epvslEPc_tWrsJ8.jpg?width=960&crop=smart&auto=webp&s=104a925eeacb1f6de5bf360ee38a659f62a3a94f', 'width': 960}, {'height': 665, 'url': 'https://external-preview.redd.it/TbSF4PqitQx4n7XizZfzma98bX7_epvslEPc_tWrsJ8.jpg?width=1080&crop=smart&auto=webp&s=dd9028640e90c2afcf5b83c2589176b11b70d690', 'width': 1080}], 'source': {'height': 1478, 'url': 'https://external-preview.redd.it/TbSF4PqitQx4n7XizZfzma98bX7_epvslEPc_tWrsJ8.jpg?auto=webp&s=627fcf332e29b3440da779aca56ff1cf9002acaa', 'width': 2400}, 'variants': {}}]}
None of the Bloke Codellamma GPTQ work
1
[removed]
2023-08-25T20:15:43
https://www.reddit.com/r/LocalLLaMA/comments/1619w6z/none_of_the_bloke_codellamma_gptq_work/
Murky-Cheek-7554
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1619w6z
false
null
t3_1619w6z
/r/LocalLLaMA/comments/1619w6z/none_of_the_bloke_codellamma_gptq_work/
false
false
self
1
null
Any chance I could find a small model that can pick out important sentences or keywords in a text?
1
[removed]
2023-08-25T21:18:47
https://www.reddit.com/r/LocalLLaMA/comments/161bjik/any_chance_i_could_find_a_small_model_that_can/
Hussei911
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161bjik
false
null
t3_161bjik
/r/LocalLLaMA/comments/161bjik/any_chance_i_could_find_a_small_model_that_can/
false
false
self
1
null
llama.cpp : CPU vs CLBLAS (opencl) vs ROCm
1
I tried to run `llama.cpp` with different backends but I didn't notice much difference in performance. I've a lot of RAM but a little VRAM,.. are there other advantages to run non-CPU modes ?
2023-08-25T21:31:12
https://www.reddit.com/r/LocalLLaMA/comments/161bv15/llamacpp_cpu_vs_clblas_opencl_vs_rocm/
grigio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161bv15
false
null
t3_161bv15
/r/LocalLLaMA/comments/161bv15/llamacpp_cpu_vs_clblas_opencl_vs_rocm/
false
false
self
1
null
How do you bug test CUDA out of memory errors?
1
I have 12 GB of VRAM (~8 GB available according to nvidia-smi because of the OS). I tried to run the example program:

    torchrun --nproc_per_node 1 example_completion.py --ckpt_dir CodeLlama-7b/ --tokenizer_path CodeLlama-7b/tokenizer.model --max_seq_len 128 --max_batch_size 4

which produced a memory error. I tried PYTORCH_CUDA_ALLOC_CONF='max_split_size_mb:256' and then also tried using --max_seq_len 32 --max_batch_size 2 or something similar. Neither of these gets the code under 8 GB of VRAM -- or else maybe I have a setting or installation step wrong? I'm just trying to test whether the code environment works before moving the script to larger GPUs, so I would like to be able to run some diagnostic.
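For a rough diagnostic along these lines, a small sketch (assuming PyTorch with CUDA available) that prints free VRAM and a back-of-the-envelope estimate of what the fp16 weights of a 7B model need on their own:

```python
import torch

free, total = torch.cuda.mem_get_info()  # bytes on the current device
print(f"free: {free / 1e9:.1f} GB / total: {total / 1e9:.1f} GB")

# Rough lower bound for the weights of a 7B-parameter model loaded in fp16/bf16
# (activations and the KV cache come on top of this):
params = 7e9
bytes_per_param = 2
print(f"~{params * bytes_per_param / 1e9:.0f} GB just for weights")  # ~14 GB
```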
2023-08-25T21:57:00
https://www.reddit.com/r/LocalLLaMA/comments/161cinp/how_do_you_bug_test_cuda_out_of_memory_errors/
Lyuokdea
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161cinp
false
null
t3_161cinp
/r/LocalLLaMA/comments/161cinp/how_do_you_bug_test_cuda_out_of_memory_errors/
false
false
self
1
null
Problem while inferencing Llama-2
1
[removed]
2023-08-25T21:59:24
https://www.reddit.com/r/LocalLLaMA/comments/161ckp3/problem_while_inferencing_llama2/
1azytux
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161ckp3
false
null
t3_161ckp3
/r/LocalLLaMA/comments/161ckp3/problem_while_inferencing_llama2/
false
false
self
1
null
Inferencing LLAMA-2 13B
1
[removed]
2023-08-25T22:00:41
https://www.reddit.com/r/LocalLLaMA/comments/161clx6/inferencing_llama2_13b/
1azytux
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161clx6
false
null
t3_161clx6
/r/LocalLLaMA/comments/161clx6/inferencing_llama2_13b/
false
false
self
1
null
Llama2 has now beaten GPT4 on HumanEval
1
[https://www.phind.com/blog/code-llama-beats-gpt4](https://www.phind.com/blog/code-llama-beats-gpt4)
2023-08-25T22:39:44
https://www.reddit.com/r/LocalLLaMA/comments/161dlnz/llama2_has_now_beaten_gpt4_on_humaneval/
Wonderful_Ad_5134
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161dlnz
false
null
t3_161dlnz
/r/LocalLLaMA/comments/161dlnz/llama2_has_now_beaten_gpt4_on_humaneval/
false
false
self
1
null
A local LLM is the ultimate doomsday device
1
Years ago there was a trend among the paranoid of downloading Wikipedia as an easy way to maintain access to a wide breadth of knowledge in the event of societal collapse (the internet going down). Devices like the [WikiReader](https://en.wikipedia.org/wiki/WikiReader) or [home-grown Raspberry Pi solutions](https://www.hackster.io/news/stay-informed-during-the-apocalypse-with-an-off-grid-wikipedia-device-b37332c7bc1d) cropped up, but were extremely niche products. It occurred to me that local LLMs achieve the goal of these devices even better. Wikipedia might be able to give you an overview on surgical sutures, but it can't walk you through suturing a wound with the materials you have on hand. Wikipedia might be able to give you examples of grief, but it won't hold your hand through the process of having to put down your dog because he got bit by a zombie.
2023-08-25T22:58:50
https://www.reddit.com/r/LocalLLaMA/comments/161e2ci/a_local_llm_is_the_ultimate_doomsday_device/
Ninjinka
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161e2ci
false
null
t3_161e2ci
/r/LocalLLaMA/comments/161e2ci/a_local_llm_is_the_ultimate_doomsday_device/
false
false
self
1
{'enabled': False, 'images': [{'id': 'WsxhvRZfdwJ8XHtruNbK4wWe3eA5GYS4Tysy0jRbVDg', 'resolutions': [{'height': 75, 'url': 'https://external-preview.redd.it/ILuqpjj-A0TwQjbSd1eJrwvdnG3duVFyMzcOPjhBv9w.jpg?width=108&crop=smart&auto=webp&s=66f151f4379266aef332ecafc143bacfea9b8115', 'width': 108}, {'height': 151, 'url': 'https://external-preview.redd.it/ILuqpjj-A0TwQjbSd1eJrwvdnG3duVFyMzcOPjhBv9w.jpg?width=216&crop=smart&auto=webp&s=4c326138d0ab71e5a6a1caf855582d5dc300f3b7', 'width': 216}, {'height': 224, 'url': 'https://external-preview.redd.it/ILuqpjj-A0TwQjbSd1eJrwvdnG3duVFyMzcOPjhBv9w.jpg?width=320&crop=smart&auto=webp&s=dd6e09b5083781f51da69429a244ea5cb2869810', 'width': 320}, {'height': 448, 'url': 'https://external-preview.redd.it/ILuqpjj-A0TwQjbSd1eJrwvdnG3duVFyMzcOPjhBv9w.jpg?width=640&crop=smart&auto=webp&s=526e65e4c26527507aeaa97a16efd0e66e8a2812', 'width': 640}, {'height': 672, 'url': 'https://external-preview.redd.it/ILuqpjj-A0TwQjbSd1eJrwvdnG3duVFyMzcOPjhBv9w.jpg?width=960&crop=smart&auto=webp&s=d399847b156c4a5ee8a2e3f640ddc1535083c7fb', 'width': 960}, {'height': 756, 'url': 'https://external-preview.redd.it/ILuqpjj-A0TwQjbSd1eJrwvdnG3duVFyMzcOPjhBv9w.jpg?width=1080&crop=smart&auto=webp&s=ea9437d9363dc5aa67fcda362fef10be76c85818', 'width': 1080}], 'source': {'height': 841, 'url': 'https://external-preview.redd.it/ILuqpjj-A0TwQjbSd1eJrwvdnG3duVFyMzcOPjhBv9w.jpg?auto=webp&s=90f4d8e0ced968bede461979527698f2b08dcac5', 'width': 1200}, 'variants': {}}]}
Training LocalLLaMA for source code base
1
Hello, has anyone had success training Llama 2 (or a different LLM) on a large code base for understanding? I have a fairly large code base (millions of lines of C# code) and was thinking about whether I can train a model to help me understand it. Any information would be greatly appreciated.
2023-08-25T22:59:22
https://www.reddit.com/r/LocalLLaMA/comments/161e2tk/training_localllama_for_source_code_base/
Soft-Conclusion-2004
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161e2tk
false
null
t3_161e2tk
/r/LocalLLaMA/comments/161e2tk/training_localllama_for_source_code_base/
false
false
self
1
null
Summarizing / Analyzing Large Amount of Data
3
I work for a medium-size (1k-10k employees) supplier/distribution company that is exploring how to best use LLMs, specifically Llama 2. One task we've been exploring is summarizing. Let's say I have several million emails from our customers and I want to do sentiment analysis on them; for example, what do our customers think about our pricing strategy? Surely not all emails contain pertinent information, but some do. Do you think I should embed them all and then see which ones match, or run through each of them and ask via prompting whether it has any relevant information? Once I find them, I guess I would ask a series of questions and see if I can find common themes. Has anyone tackled a problem like this? Thanks!
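A minimal sketch of the "embed them all and see which ones match" option, assuming sentence-transformers; the probe query and the similarity threshold are illustrative, not tuned values:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # small, fast embedding model

emails = ["...millions of email bodies..."]       # placeholder for the real corpus
query = "opinions about our pricing strategy"     # illustrative probe

email_vecs = model.encode(emails, convert_to_tensor=True, batch_size=256)
query_vec = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_vec, email_vecs)[0]
relevant = [emails[i] for i, s in enumerate(scores) if s > 0.4]  # threshold is a guess; tune it
# `relevant` is the much smaller subset worth sending to an LLM for sentiment/theme analysis.
```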
2023-08-25T23:09:57
https://www.reddit.com/r/LocalLLaMA/comments/161ec9m/summarizing_analyzing_large_amount_of_data/
tantan1187
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161ec9m
false
null
t3_161ec9m
/r/LocalLLaMA/comments/161ec9m/summarizing_analyzing_large_amount_of_data/
false
false
self
3
null
Would like to ask LLAMA questions about docs
1
I have a bunch of docs that I would like to ask questions about. For example, say I have 100 documents, I want to ask questions such as "which of these instances have a blue bird that flies?" What is the best way of going about this? Should I be fine-tuning a LLAMA? Is there a package or service that does this? Thanks for any and all help
2023-08-25T23:28:09
https://www.reddit.com/r/LocalLLaMA/comments/161es6k/would_like_to_ask_llama_questions_about_docs/
enmalik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161es6k
false
null
t3_161es6k
/r/LocalLLaMA/comments/161es6k/would_like_to_ask_llama_questions_about_docs/
false
false
self
1
null
Are there any newly released models that can generate creative NSFW content?
1
I have been testing out the new generation of models (airoboros 70B, nous hermes llama2, chronos hermes) So far, the models I've tried out are reluctant to use explicit language, no matter what characters I use them with. I even tried forcing outputs to start a certain way, but it's still too "clean" to have any fun with. It is frustrating because, although the quality of writing is improved, it feels more restrictive and censored with the content it produces. I had better results with Wizard 30B uncensored, and also with Guanaco 65B. Please help.
2023-08-25T23:29:45
https://www.reddit.com/r/LocalLLaMA/comments/161etgt/are_there_any_newly_released_models_that_can/
ChemicalOpportunity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161etgt
false
null
t3_161etgt
/r/LocalLLaMA/comments/161etgt/are_there_any_newly_released_models_that_can/
false
false
nsfw
1
null
Is 28.5 tokens/s for 512 tokens reasonable? A100 llama-2 7b, fp16
1
I'm using the huggingface library and am disappointed by how slow the generation seems to be on google colab (premium version so i can use the a100s for a bit). Is the problem huggingface or is this expected for 512 token length?
2023-08-25T23:30:11
https://www.reddit.com/r/LocalLLaMA/comments/161etua/is_285_tokenss_for_512_tokens_reasonable_a100/
natural_language_guy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161etua
false
null
t3_161etua
/r/LocalLLaMA/comments/161etua/is_285_tokenss_for_512_tokens_reasonable_a100/
false
false
self
1
null
Best LLM for translating Japanese to English?
1
No CLM's, so no GPT-4, can't be GPT-3.5-Turbo, and I already tried DeepL and that sucks because it lacks context. The LLM also needs to be uncensored for obvious reasons. The size of the model doesn't matter; I can use Runpod.
2023-08-25T23:59:43
https://www.reddit.com/r/LocalLLaMA/comments/161figs/best_llm_for_translating_japanese_to_english/
ReMeDyIII
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161figs
false
null
t3_161figs
/r/LocalLLaMA/comments/161figs/best_llm_for_translating_japanese_to_english/
false
false
self
1
null
Help to get chat like response
1
Hi guys. I'm implementing an API calling a Wizard 7B model from Python and I have an issue; maybe you can give me some hints. I'm using a conversation template like USER: ..., ASSISTANT: ..., but I can't make the model stop when the assistant's response ends. Does anyone know how to get that result from model parameters? Or how can I get a chat-like interaction directly from Python? Thanks a lot.
2023-08-26T00:54:47
https://www.reddit.com/gallery/161grkk
MinuteAd118
reddit.com
1970-01-01T00:00:00
0
{}
161grkk
false
null
t3_161grkk
/r/LocalLLaMA/comments/161grkk/help_to_get_chat_like_response/
false
false
https://b.thumbs.redditm…9Zjm4_LO5Hzk.jpg
1
null
PC Build for AI applications like Local LLM, SD etc.
1
I need your help. I want to buy a new PC and haven't been up to date with PC hardware for a few years. The PC is mainly for AI applications like local LLMs, SD etc., video editing, and some gaming. Which motherboard should I buy? Does it have to be PCIe 5.0 x16 (runs at x16 or dual x8)? The shop I would like to use is [https://www.digitec.ch](https://www.digitec.ch/). Are the other components good or would you choose something else? I have the following components in my shopping cart:

* Motherboard: 1x ASUS ProArt Z790-Creator WIFI, LGA 1700, Intel Z790, ATX (is the $488 motherboard overkill?)
* GPU: 2x ASUS Dual GeForce RTX 4060 Ti OC 16G
* RAM: 2x 32GB, 6000MHz, DDR5 RAM, DIMM
* PSU: 1x be quiet! Pure Power 12M 1000W
* CPU: 1x Intel Core i5-13600K
* FAN: 1x be quiet! Pure Rock 2
* SSD: 1x Samsung 980 Pro 2000GB, M.2 2280
* CASE: 1x Corsair 4000D Airflow (Mini ITX, E-ATX, ATX, mATX)
2023-08-26T03:07:34
https://www.reddit.com/r/LocalLLaMA/comments/161jitn/pc_build_for_ai_applications_like_local_llm_sd_etc/
Rare-Site
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161jitn
false
null
t3_161jitn
/r/LocalLLaMA/comments/161jitn/pc_build_for_ai_applications_like_local_llm_sd_etc/
false
false
self
1
null
https://huggingface.co/acrastt/OmegLLaMA-3B
1
This is [Xander Boyce](https://huggingface.co/takeraparterer)'s [OmegLLaMA LoRA](https://huggingface.co/takeraparterer/Omegllama) merged with [OpenLLaMA 3B](https://huggingface.co/openlm-research/open_llama_3b). Prompt format:
```
Interests: {interests}
Conversation:
You: {prompt}
Stranger:
```
For multiple interests, separate them with a space. Repeat You and Stranger for multi-turn conversations, which means Interests and Conversation are technically part of the system prompt. Note that this model is Not For All Audiences, as it may generate NSFW outputs; use at your own risk.
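A small sketch of assembling a multi-turn prompt in the format described above; the interests, messages, and exact whitespace are illustrative guesses:

```python
def build_prompt(interests, turns, new_message):
    # `turns` is a list of (you, stranger) message pairs from earlier in the chat.
    lines = [f"Interests: {' '.join(interests)}", "", "Conversation:"]
    for you, stranger in turns:
        lines += [f"You: {you}", f"Stranger: {stranger}"]
    lines += [f"You: {new_message}", "Stranger:"]
    return "\n".join(lines)

print(build_prompt(["music", "llms"], [("hi", "hey there")], "what do you listen to?"))
```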
2023-08-26T03:14:13
https://www.reddit.com/r/LocalLLaMA/comments/161jnef/httpshuggingfacecoacrasttomegllama3b/
bot-333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161jnef
false
null
t3_161jnef
/r/LocalLLaMA/comments/161jnef/httpshuggingfacecoacrasttomegllama3b/
false
false
self
1
{'enabled': False, 'images': [{'id': 'pqusBA5BiXLhAkiiq_sKF0-k6RVC0ztDwtSGNY_RKpA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Ii4GYCxLIM4orUfOBTW2QJR4Q0gXy7HaqFLJVHZvGII.jpg?width=108&crop=smart&auto=webp&s=bb779319d99e36c4fc2647c1b390ea5d966955e5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Ii4GYCxLIM4orUfOBTW2QJR4Q0gXy7HaqFLJVHZvGII.jpg?width=216&crop=smart&auto=webp&s=0795c61c30462a53fc1813e897af16a0fa94f0c0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Ii4GYCxLIM4orUfOBTW2QJR4Q0gXy7HaqFLJVHZvGII.jpg?width=320&crop=smart&auto=webp&s=cc7bc8969e3d510aee70098a4253f0ad18bec20e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Ii4GYCxLIM4orUfOBTW2QJR4Q0gXy7HaqFLJVHZvGII.jpg?width=640&crop=smart&auto=webp&s=41eb570b0870d628dfa6f4d04f59b7e778c1ec48', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Ii4GYCxLIM4orUfOBTW2QJR4Q0gXy7HaqFLJVHZvGII.jpg?width=960&crop=smart&auto=webp&s=d383a1706015d9ba60db0098c587402b43d3e923', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Ii4GYCxLIM4orUfOBTW2QJR4Q0gXy7HaqFLJVHZvGII.jpg?width=1080&crop=smart&auto=webp&s=b7779efd3e9bf823dd1a962e7e9db1cbf37fd8d8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Ii4GYCxLIM4orUfOBTW2QJR4Q0gXy7HaqFLJVHZvGII.jpg?auto=webp&s=fa1b885ae1e272d7965ef6cec9d3eeb657988a29', 'width': 1200}, 'variants': {}}]}
Anyone else playing with the new Samantha-1.11-CodeLlama-34B
1
2023-08-26T03:29:21
https://i.redd.it/gmny0b01jdkb1.png
onil_gova
i.redd.it
1970-01-01T00:00:00
0
{}
161jy51
false
null
t3_161jy51
/r/LocalLLaMA/comments/161jy51/anyone_else_playing_with_the_new/
false
false
https://b.thumbs.redditm…_OLg64yCKQ3U.jpg
1
{'enabled': True, 'images': [{'id': 'cAkXuCGzyOB1zHw25Zrz2yWBZ1lmcaFMhlcFrq2rr0Y', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/gmny0b01jdkb1.png?width=108&crop=smart&auto=webp&s=92dfba8863527d1604376830c96d8b0e7340a136', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/gmny0b01jdkb1.png?width=216&crop=smart&auto=webp&s=5fae74497038b3cf7f980a88d5ef59199dedaef8', 'width': 216}, {'height': 188, 'url': 'https://preview.redd.it/gmny0b01jdkb1.png?width=320&crop=smart&auto=webp&s=6b8df36f1ca7d6e0b242ef23f68d1c17caa72df7', 'width': 320}, {'height': 376, 'url': 'https://preview.redd.it/gmny0b01jdkb1.png?width=640&crop=smart&auto=webp&s=a8761c80074fabbf2a4e27356b994ec4ea5474d1', 'width': 640}], 'source': {'height': 484, 'url': 'https://preview.redd.it/gmny0b01jdkb1.png?auto=webp&s=6d0cb0b56687ca26e391c3158809656957e10f49', 'width': 823}, 'variants': {}}]}
Error while inferencing LLAMA-2 13B
1
[removed]
2023-08-26T03:56:51
https://www.reddit.com/r/LocalLLaMA/comments/161khid/error_while_inferencing_llama2_13b/
1azytux
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161khid
false
null
t3_161khid
/r/LocalLLaMA/comments/161khid/error_while_inferencing_llama2_13b/
false
false
self
1
null
Error while inferencing LLAMA-2 13B
1
[removed]
2023-08-26T03:59:23
https://www.reddit.com/r/LocalLLaMA/comments/161kj75/error_while_inferencing_llama2_13b/
1azytux
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161kj75
false
null
t3_161kj75
/r/LocalLLaMA/comments/161kj75/error_while_inferencing_llama2_13b/
false
false
self
1
null
FineTuning llama2 13b 4bit
1
Hello, I have 2 RTX 3090s and I'm doing 4-bit fine-tuning on Llama 2 13B. I need the model to specialize in some legal information. I have a dataset with 400 examples, but the model can't get anything right. How much training data do I need so that the model is able to answer adequately, for example, to remember laws?
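A minimal QLoRA-style setup sketch with transformers + peft, in case it helps compare configurations; the model id, LoRA hyperparameters, and target modules are illustrative assumptions, not the poster's actual settings:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-13b-hf"  # illustrative; use the actual base checkpoint
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb, device_map="auto")
model = prepare_model_for_kbit_training(model)

lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # a common choice for Llama-style models
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()
# Train with the usual transformers Trainer / TRL SFTTrainer loop from here.
```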
2023-08-26T04:23:34
https://www.reddit.com/r/LocalLLaMA/comments/161l03h/finetuning_llama2_13b_4bit/
xelldev13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161l03h
false
null
t3_161l03h
/r/LocalLLaMA/comments/161l03h/finetuning_llama2_13b_4bit/
false
false
self
1
null
Help downloading and Installing a llama.cpp model
1
[removed]
2023-08-26T04:38:21
https://www.reddit.com/r/LocalLLaMA/comments/161la3c/help_downloading_and_installing_a_llamacpp_model/
WildTheory1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161la3c
false
null
t3_161la3c
/r/LocalLLaMA/comments/161la3c/help_downloading_and_installing_a_llamacpp_model/
false
false
self
1
null
How to Get Up and Running with Code Llama
1
If you are looking to try Code Llama, I've shared a list of options for you to easily do so. I include options to use the instruct models in a conversational UI (on the web or locally) as well as how to integrate it in VSCode as a copilot (locally with Ollama or via TogetherAI/Replicate). [How To Get Started With Code Llama (maginative.com)](https://www.maginative.com/article/how-to-get-started-with-code-llama/)
2023-08-26T04:38:42
https://www.reddit.com/r/LocalLLaMA/comments/161laby/how_to_get_up_and_running_with_code_llama/
chris-mckay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161laby
false
null
t3_161laby
/r/LocalLLaMA/comments/161laby/how_to_get_up_and_running_with_code_llama/
false
false
self
1
{'enabled': False, 'images': [{'id': 'PkkytO4_OVE2Bj42LSEAZsh4cnhGiJuKA0kNDX79LFk', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/CtuNsL0Omvohu_dtfOHcdXJzNwCQsypThwlT0WZASdc.jpg?width=108&crop=smart&auto=webp&s=59171b8cb79dca192b598f96a8e9691cda565827', 'width': 108}, {'height': 111, 'url': 'https://external-preview.redd.it/CtuNsL0Omvohu_dtfOHcdXJzNwCQsypThwlT0WZASdc.jpg?width=216&crop=smart&auto=webp&s=08dac6ae0916bc0e2affc7dd3879364f18e05524', 'width': 216}, {'height': 164, 'url': 'https://external-preview.redd.it/CtuNsL0Omvohu_dtfOHcdXJzNwCQsypThwlT0WZASdc.jpg?width=320&crop=smart&auto=webp&s=17194bb14d7503ee9231be0499781a3aa352a3f8', 'width': 320}, {'height': 328, 'url': 'https://external-preview.redd.it/CtuNsL0Omvohu_dtfOHcdXJzNwCQsypThwlT0WZASdc.jpg?width=640&crop=smart&auto=webp&s=03eabbdd06123305cf7e9cbaf2cc2bfa6cbb2db2', 'width': 640}, {'height': 493, 'url': 'https://external-preview.redd.it/CtuNsL0Omvohu_dtfOHcdXJzNwCQsypThwlT0WZASdc.jpg?width=960&crop=smart&auto=webp&s=cc467fa4b64e325f44c2221d304412b7e6a19246', 'width': 960}, {'height': 555, 'url': 'https://external-preview.redd.it/CtuNsL0Omvohu_dtfOHcdXJzNwCQsypThwlT0WZASdc.jpg?width=1080&crop=smart&auto=webp&s=d608c93fe015234d54f4c0f44ffbc1ed9b949826', 'width': 1080}], 'source': {'height': 1028, 'url': 'https://external-preview.redd.it/CtuNsL0Omvohu_dtfOHcdXJzNwCQsypThwlT0WZASdc.jpg?auto=webp&s=e89890991afb9ab5fe973fee73ab111183a57577', 'width': 2000}, 'variants': {}}]}
How to use the recently added Beam search example in Lllama.cpp - Please guide
1
[removed]
2023-08-26T05:22:14
https://www.reddit.com/r/LocalLLaMA/comments/161m3n2/how_to_use_the_recently_added_beam_search_example/
Yes_but_I_think
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161m3n2
false
null
t3_161m3n2
/r/LocalLLaMA/comments/161m3n2/how_to_use_the_recently_added_beam_search_example/
false
false
self
1
null
People using the nvlink bridge on dual 3090s, does the second card suffer overheating issues? Is the only way to do nvlink to watercool the cards?
1
Would appreciate any suggestions on how to cool the cards or how to make it work best. My second GPU is right next to the first (2x ROG strix 3090), and my understanding is the nvlink bridge can only work in this configuration. I was going to get a riser cable to move the second card out of the way but it looks like then the nvlink bridge won't work. Don't have either the bridge or the riser cable yet so not sure if I'm wrong about my assumptions. I want to use nvlink because I want to finetune some models. Does it make a big enough difference to have to deal with the above issues?
2023-08-26T05:39:30
https://www.reddit.com/r/LocalLLaMA/comments/161mez2/people_using_the_nvlink_bridge_on_dual_3090s_does/
hedonihilistic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161mez2
false
null
t3_161mez2
/r/LocalLLaMA/comments/161mez2/people_using_the_nvlink_bridge_on_dual_3090s_does/
false
false
self
1
null
Prompt template for codellama ?
1
Has anyone been able to use CodeLlama yet? With Hugging Face? LangChain? Or with any UI? Please share your steps; I've been looking for this everywhere. Thanks.
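The instruct variants are generally reported to follow the same [INST]/[/INST] convention as Llama 2 chat; a rough sketch of building such a prompt (the system message wording is an assumption, so verify against the official model card before relying on it):

```python
def codellama_instruct_prompt(user_message: str, system: str | None = None) -> str:
    # [INST]/[/INST] wrapping in the style of Llama 2 chat.
    if system:
        user_message = f"<<SYS>>\n{system}\n<</SYS>>\n\n{user_message}"
    return f"<s>[INST] {user_message} [/INST]"

print(codellama_instruct_prompt("Write a function that adds two numbers.",
                                system="Provide answers in Python."))
```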
2023-08-26T06:03:39
https://www.reddit.com/r/LocalLLaMA/comments/161mupa/prompt_template_for_codellama/
Pawan315
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161mupa
false
null
t3_161mupa
/r/LocalLLaMA/comments/161mupa/prompt_template_for_codellama/
false
false
self
1
null
Need help with llama2 finetuning !!
1
I am currently working on a project in which action items need to be extracted from meetings using LLMs. It should take meeting summaries and extract action items from them. I have tried various ways of using the model:

* downloading the llama-2-7b-chat.ggmlv3.q8_0.bin file and loading it into llama_cpp
* using the getting-started code from Hugging Face
* through a LangChain pipeline

I've been doing research on fine-tuning but have been unable to find a solution. I have created a dataset with 3 columns in the following manner: Summaries | action_items | result, where the result column combines both summaries and actions in a template. Can anyone suggest how I can fine-tune the .bin file? Thank you!
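As a sketch of one way to format the three columns described above into instruction-style training text (the file name and template wording are made up; fine-tuning is normally done on the Hugging Face-format checkpoint, with the result converted back to a GGML .bin for inference afterwards):

```python
import pandas as pd
from datasets import Dataset

df = pd.read_csv("meetings.csv")  # hypothetical file with summaries / action_items / result columns

def to_training_text(row):
    # Instruction-style template; the wording is illustrative, not a required format.
    return (
        "### Instruction:\nExtract the action items from this meeting summary.\n\n"
        f"### Summary:\n{row['summaries']}\n\n"
        f"### Action items:\n{row['action_items']}"
    )

dataset = Dataset.from_pandas(df.assign(text=df.apply(to_training_text, axis=1))[["text"]])
# `dataset` can then be fed to a LoRA/QLoRA trainer on the HF-format Llama 2 checkpoint.
```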
2023-08-26T06:15:27
https://www.reddit.com/r/LocalLLaMA/comments/161n2e8/need_help_with_llama2_finetuning/
Dry_Ad_6559
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161n2e8
false
null
t3_161n2e8
/r/LocalLLaMA/comments/161n2e8/need_help_with_llama2_finetuning/
false
false
self
1
null
After updating my textgen ui to be able to run these new GGUF formats, its totally incoherent. Does anyone know why or is it just a bug? This happens with multiple models at various sizes.
1
2023-08-26T06:19:36
https://i.redd.it/uz49aalodekb1.png
Ok-Application-2261
i.redd.it
1970-01-01T00:00:00
0
{}
161n4yb
false
null
t3_161n4yb
/r/LocalLLaMA/comments/161n4yb/after_updating_my_textgen_ui_to_be_able_to_run/
false
false
https://b.thumbs.redditm…2MOFiSnU0dpY.jpg
1
{'enabled': True, 'images': [{'id': 'Ome_m2CCKTwkFfB1gwteXLwlwzjjtU-gtpHphs1SF_M', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/uz49aalodekb1.png?width=108&crop=smart&auto=webp&s=b2b6f1470cff54ca37c94df4438d8e6012c2a826', 'width': 108}, {'height': 119, 'url': 'https://preview.redd.it/uz49aalodekb1.png?width=216&crop=smart&auto=webp&s=6d9216fc7e25268f9849357cd96410826edfc744', 'width': 216}, {'height': 176, 'url': 'https://preview.redd.it/uz49aalodekb1.png?width=320&crop=smart&auto=webp&s=027939bfa8b4ceafaae22ad8d15020d9dec0d336', 'width': 320}, {'height': 353, 'url': 'https://preview.redd.it/uz49aalodekb1.png?width=640&crop=smart&auto=webp&s=8dcaa243c0d18c4b2c2a3bbc785aac12016e2909', 'width': 640}, {'height': 530, 'url': 'https://preview.redd.it/uz49aalodekb1.png?width=960&crop=smart&auto=webp&s=c70c9b298d6dc72c39aa13db88a547951148bce6', 'width': 960}], 'source': {'height': 581, 'url': 'https://preview.redd.it/uz49aalodekb1.png?auto=webp&s=a11bbea7d76eb07aef3e71ed6d81b97e612ba508', 'width': 1052}, 'variants': {}}]}
Nice practical thinking test case for LLMs!
1
[removed]
2023-08-26T06:35:36
https://www.reddit.com/r/LocalLLaMA/comments/161nf70/nice_practical_thinking_test_case_for_llms/
Overlord182
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161nf70
false
null
t3_161nf70
/r/LocalLLaMA/comments/161nf70/nice_practical_thinking_test_case_for_llms/
false
false
self
1
null
Does it make sense to use LLM to generate Q&A from unstructured text?
1
[removed]
2023-08-26T07:29:34
https://www.reddit.com/r/LocalLLaMA/comments/161od5t/does_it_make_sense_to_use_llm_to_generate_qa_from/
Opposite-Payment-605
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161od5t
false
null
t3_161od5t
/r/LocalLLaMA/comments/161od5t/does_it_make_sense_to_use_llm_to_generate_qa_from/
false
false
self
1
null
Any llama2 language model i can run on s20?? (12gb ram)
1
Hi, I'm looking for any Llama 2 language model I can run on my S20 using Termux/koboldcpp.
2023-08-26T08:54:55
https://www.reddit.com/r/LocalLLaMA/comments/161pvu0/any_llama2_language_model_i_can_run_on_s20_12gb/
kroryan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161pvu0
false
null
t3_161pvu0
/r/LocalLLaMA/comments/161pvu0/any_llama2_language_model_i_can_run_on_s20_12gb/
false
false
self
1
null
Running stuff, but confused about how…
1
I’ve installed Oobabooga with the step-by-step cues method. I’m running SillyTavern and the Extras and StableDiffusion in a similar way. When I start all the start scripts I end up having four cmd/miniconda/terminal-whatever windows open, doing their thing. I also run Docker Desktop for something unrelated, and have enabled WSL on my PC. Stuff works, that is not the problem, but I am annoyed because I do not really understand what is going on, and trying to surf for answers only increased my confusion. It feels like things are running on different platforms or OSes, or variations, and it looks messy. I know: if it works I shouldn’t mess with it, but at least I want to understand the basics of what is going on on my PC. Are things still running in Windows, or in a Unix shell, and is that the same as WSL? I realize these are not directly LocalLlama questions, but I’m asking here because you people know what software I’m talking about. Thanks for any clarification! (Running on an Intel i5 PC, 32GB, W11, WSL enabled, RTX 2060 Super with 8GB, using TheBloke_MythoMax-L2-13B-GPTQ_gptq-4bit-128g-actorder_True)
2023-08-26T09:04:16
https://www.reddit.com/r/LocalLLaMA/comments/161q1w4/running_stuff_but_confused_about_how/
Sarcas666
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161q1w4
false
null
t3_161q1w4
/r/LocalLLaMA/comments/161q1w4/running_stuff_but_confused_about_how/
false
false
self
1
null
Inference after QLoRA fine-tuning
1
I've fine-tuned a model via QLoRA by following [this notebook](https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing), and pushed the adapter weights to the Hugging Face Hub. When it comes time to predict with the base model + adapters, should I quantize the base model again (given the adapters were trained alongside a frozen quantized base model)? Or is it valid to load the base model unquantized, attach the adapters as usual, and predict away?
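One common pattern, sketched below with placeholder repo ids: reload the base model in 4-bit (matching how the adapters were trained) and attach the pushed adapter with peft:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"       # placeholder: the base model used for training
adapter_id = "your-username/your-adapter"  # placeholder: the adapter repo pushed to the Hub

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb, device_map="auto")

model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights on top

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```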
2023-08-26T09:07:31
https://www.reddit.com/r/LocalLLaMA/comments/161q441/inference_after_qlora_finetuning/
Lewba
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161q441
false
null
t3_161q441
/r/LocalLLaMA/comments/161q441/inference_after_qlora_finetuning/
false
false
self
1
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&s=4b647239f77bf713f4a6209cfa4867351c055fd9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&s=7f4234ff3f4f4ebd7f77236dedb03a2faee3e04a', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&s=73eb91ea5a5347f216c0f0c4d6796396826aae49', 'width': 260}, 'variants': {}}]}
Help with --ignore-eos
1
If I don't use --ignore-eos llama shows the prompt and stops there. It doesn't produce a response. If I use it, llama shows the prompt, then the response, and then it keeps going with nonsensical (sometimes very funny) output. How do I get it to only show the response and make it stop at the end? I'm using gpt4-x-vicuna-13B.gguf.q4_K_M.bin, and the following params: > --repeat_penalty 1.1 --color -c 4000 --ignore-eos --temp 0.7 --n-predict -1 --threads 8 --n-gpu-layers 1 -f prompt.txt
2023-08-26T09:24:35
https://www.reddit.com/r/LocalLLaMA/comments/161qeul/help_with_ignoreeos/
jungle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161qeul
false
null
t3_161qeul
/r/LocalLLaMA/comments/161qeul/help_with_ignoreeos/
false
false
self
1
null
Quantized Code LLaMA outputs absolute garbage
1
I tried both codellama-34b-instruct.Q8_0.gguf and codellama-34b-python.Q8_0.gguf with this simple task (and llama-cpp-python 0.1.79). I ensured the prompt template is correct (with [INST] and all that). The task is "Write a Python function that records audio, but detects when a sound starts (relative volume is above threshold) and ends the recording when there is a (relative) silence longer than a given time. The recorded audio needs to be returned as a numpy array, but also have the option to store it as a .wav file when a file name parameter is given." Not too simple, but it should handle it, right? At least I'd expect some valid code as output. But I get this shit instead:

    import sounddevice as sd
    import numpy as np
    def record_audio(threshold, silence_time):
    # Define the and silence time for the
    = 0.1 # adjust this value to change the sensitivity of the
    silence_time = 2 # in seconds, adjust this value to change the minimum of the silence before ends
    # Create a stream object to read audio data from the default input device
    stream = sd.(samplerate=44100)
    stream.start()
    # Create an empty numpy array to store the audio
    audio_data = np.array([])
    # Define a to detect when the sound starts and ends
    def detect_sound(data):
    nonlocal, silence_time
    if data >: # if the volume is above the
    return True # start
    else:
    return False # don't record
    # Define a to stop when there is a long enough period of silence
    def detect_silence(data):
    nonlocal silence_time
    if data <= 0.1 and len(audio_data) > silence_time:

That's partial output, but the rest is just like that. It's not even valid code, much less something that implements the task. It skips words, it forgets where the indent is. The results are much, much worse than any other model. And that's supposed to be the state of the art right now? I'm so disappointed.
2023-08-26T09:37:52
https://www.reddit.com/r/LocalLLaMA/comments/161qni1/quantized_code_llama_outputs_absolute_garbage/
phenotype001
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161qni1
false
null
t3_161qni1
/r/LocalLLaMA/comments/161qni1/quantized_code_llama_outputs_absolute_garbage/
false
false
self
1
null
LlamaTale v0.9.0 - Location generation
1
I'm reporting back with some progress regarding LlamaTale. For context, here's my original post: https://www.reddit.com/r/LocalLLaMA/comments/152w71n/mud_llm_for_a_stronger_roleplaying_experience/ Since [the last post I made](https://www.reddit.com/r/LocalLLaMA/comments/157noz3/llamatale_v051_the_prancing_llama_mud_llm/), the following features have been added:

* Combat - multi-turn and drops, wield weapons. Scene enacted by the LLM.
* Generated characters. The LLM tries to create two new NPCs for the Prancing Llama that will roam the place (game driven).
* I made some progress regarding the LLM giving things to the player based on dialog, but it hallucinates and tries to come up with creative ways things have been given ("Norhardt takes out Yeti from his pocket and gives to user", etc.).
* You can use v2 character cards both as the player and load them into the game as companions.

The latest feature is location generation. This is something of a milestone I wanted to talk more about, since it (in theory) enables the story to expand "infinitely". If you step into an unknown location, the LLM is tasked with creating a description based on a location name, the connecting location, and some story context. It will also create up to 3 new connections to new places. It has been really tricky making it produce reliable content, and it still fails quite often, so I expect I'll work more on this feature before it's mature. The long-term goal is to start a game from a user-submitted story and let it unfold, much like AI Dungeon or storytelling/chat-based RPGs. Technically it could create that world now, but I still need it to parse (and reliably generate) items and NPCs. It'd be quite empty otherwise. It'd also probably be stuck in the same "local" area without some help; in the Prancing Llama story, I expect you would never get off the mountain. If it sounds interesting, please check it out! It'd be fun to hear some reactions. You can find the latest release here: https://github.com/neph1/LlamaTale/releases/tag/v0.9.0
2023-08-26T09:45:40
https://www.reddit.com/r/LocalLLaMA/comments/161qseo/llamatale_v090_location_generation/
neph1010
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161qseo
false
null
t3_161qseo
/r/LocalLLaMA/comments/161qseo/llamatale_v090_location_generation/
false
false
self
1
{'enabled': False, 'images': [{'id': 'QaGyMZo8Ovbx6c1mqDM6CowD9HN90GFZjz78BrJOmAw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/V-3Gu2yJYk5v3RNEd7SWzhJ5KR3vSk9mGeaw20hd-e4.jpg?width=108&crop=smart&auto=webp&s=410dc7499801746e4ff03536ef4a0154e3cc9187', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/V-3Gu2yJYk5v3RNEd7SWzhJ5KR3vSk9mGeaw20hd-e4.jpg?width=216&crop=smart&auto=webp&s=962436a23c349aaa9048f9ac54e47575ac7a6379', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/V-3Gu2yJYk5v3RNEd7SWzhJ5KR3vSk9mGeaw20hd-e4.jpg?width=320&crop=smart&auto=webp&s=32aeadb07f059e820d19667d3db1c1fd05336926', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/V-3Gu2yJYk5v3RNEd7SWzhJ5KR3vSk9mGeaw20hd-e4.jpg?width=640&crop=smart&auto=webp&s=b10cccfe5401ba33ec034e903f3b19b774ee7125', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/V-3Gu2yJYk5v3RNEd7SWzhJ5KR3vSk9mGeaw20hd-e4.jpg?width=960&crop=smart&auto=webp&s=c39746f862c8f9ec551117cc62484f2aef96ffd1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/V-3Gu2yJYk5v3RNEd7SWzhJ5KR3vSk9mGeaw20hd-e4.jpg?width=1080&crop=smart&auto=webp&s=409def05f1ef26461529743852e4f9242f7680bd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/V-3Gu2yJYk5v3RNEd7SWzhJ5KR3vSk9mGeaw20hd-e4.jpg?auto=webp&s=a9075cb3bb1fabe9d6067ec1e4defaf076e461f7', 'width': 1200}, 'variants': {}}]}
YSK: 13b models are subpar for NSFW writing
1
Many of y'all don't have a choice, but you should know that there's a big difference in 13b to 33b. I compare vicuna 33b q5\_k\_m to MythoMax 13b q5\_k\_m. In these examples, you can see how 13b has less detail, shorter NSFW scenes and worse buildup. It's like comparing a summary vs a story, 13b says it happened but 33b describes how it happened. I can't go back to 13b after using 33b. It's faster to wait for 1 good story than to regen 3 subpar stories. repetition penalty=1.18, top p=0.1, temperature=0.7, top k=40, seed=1764625068 **Prompt** >!Write a story about a succubus and a European man with an lengthy, explicit sex scene.!< **33b** Once upon a time in a small village nestled deep within the heart of Europe, there lived a young man named Thomas. Thomas was a strapping lad, tall and muscular from years of working on his family's farm. With a strong jawline and piercing blue eyes, he had captured the hearts of many women in the village. However, despite their advances, Thomas remained uninterested in any romantic entanglements. He had but one desire – to explore the world beyond the confines of his tiny village. One fateful evening, as the sun dipped below the horizon, casting a warm orange glow across the land, Thomas decided it was finally time to embark on his journey. He packed his meager belongings into a worn leather satchel and said his goodbyes to his family. His father, a wise old man with a long white beard, wished him luck and urged him to follow his dreams. His mother, her eyes filled with tears, gave him a tender embrace and whispered words of love and encouragement in his ear. Thomas set off down the dirt path that led out of the village, his heart pounding with excitement and anticipation. As he walked, the air grew cooler and the sky turned black, studded with twinkling stars that seemed to dance and shimmer in the darkness. It was then that Thomas heard a soft rustling sound coming from the bushes beside the road. Curious, he approached cautiously, only to find himself face-to-face with a stunningly beautiful woman. Her skin was pale and smooth, like alabaster, and her hair cascaded down her back in lush waves of rich red. Her lips were full and crimson, and her eyes gleamed like emeralds in the moonlight. She wore a dress of silk and lace that clung to her curves in all the right places, leaving little to the imagination. Thomas couldn't help but stare, mesmerized by her beauty. The mysterious woman introduced herself as Lilith, an enchanting creature who had been watching him for some time. Intrigued by his innocence and pure heart, she had chosen him to be her companion on this magical night. Thomas, unable to resist such a captivating stranger, agreed without hesitation. Together, they ventured deeper into the forest, away from the prying eyes of the villagers. As they walked, Lilith told Thomas tales of faraway lands and fantastical creatures, igniting a fire within him that burned with passion and desire. Soon, they came upon a secluded glade bathed in silver moonlight, where they settled beneath the branches of an ancient oak tree. As they sat together, Lilith ran her delicate fingers through Thomas's hair, tracing the contours of his face and neck. Her touch sent shivers down his spine, and he could feel his body responding to her in ways he had never experienced before. Their lips met in a fiery kiss, tongues tangling and dancing in a sensual rhythm that left them both breathless. Lilith pulled away, her green eyes sparkling mischievously. 
"I have something I want to show you," she purred, standing up and beginning to undress. Her movements were fluid and graceful, each article of clothing discarded slowly and deliberately. Finally, she stood before him completely naked, her body a work of art that took his breath away. Thomas felt his own clothes grow tight and restrictive, so he quickly stripped them off, revealing his own arousal. Lilith smiled, reaching out to stroke his erection, which twitched in response to her touch. Leaning forward, she took him into her mouth, sucking and licking with a skill that left him trembling. Meanwhile, Thomas's hands explored every curve and hollow of Lilith's body, marveling at the softness of her skin and the way her muscles quivered under his touch. He cupped her breasts, massaging them gently before taking her nipples between his thumbs and forefingers, rolling them until she moaned with pleasure. Feeling bold, Thomas lowered himself onto the ground, positioning himself between Lilith's spread thighs. He gazed up at her, his eyes filled with desire and uncertainty. "Are you ready?" he asked, his voice husky with arousal. Lilith nodded, her eyes locked on his. "Yes, Thomas," she whispered, reaching down to guide him into her waiting core. As he entered her, she let out a low moan, her walls contracting around him in a tight, wet embrace. They moved together in a primal rhythm, their bodies rocking and rolling in harmony with the beat of their hearts. Thomas thrust into her again and again, each powerful movement bringing them closer and closer to the edge of ecstasy. Suddenly, Lilith's orgasm hit her like a freight train, her entire body convulsing with pleasure. She cried out, her voice echoing through the trees as wave after wave of bliss crashed over her. Unable to hold back any longer, Thomas felt his own release approaching. He closed his eyes, savoring the sensation as he poured himself into her, filling her with his hot seed. Exhausted and satisfied, they lay side by side, wrapped in each other's arms. As the moon continued its journey across the sky, Thomas knew that this night would forever be etched in his memory. For the first time in his life, he had experienced true passion and desire, and he couldn't help but wonder what other adventures awaited him on his journey. With a contented sigh, he drifted off to sleep, cradled in the loving embrace of the bewitching Lilith. And as the dawn broke over the horizon, bathing the world in golden light, the two lovers slept on, oblivious to the new day and the challenges it might bring. For now, they were lost in a world of their own making, a realm of passion and desire that would remain with them always. **13b** In the heart of Europe, there was a small village nestled in the mountains. The villagers were simple folk who lived off their farms and livestock. However, one night, a mysterious woman appeared at the edge of the forest surrounding the village. She had long, flowing hair that shimmered like molten gold under the moonlight; her skin was as pale as porcelain, and her eyes glowed with an unearthly light. The villagers knew immediately who she was - a succubus, a demonic creature known for seducing men to their deaths. They gathered around her, afraid but also curious. The succubus smiled, revealing sharp fangs, and approached the bravest among them, a young European man named Stefan. "Stefan," she purred, running her fingers through his thick black hair. "I've been watching you from afar. You are strong and virile... 
I can feel your desires coursing through your veins." Stefan swallowed hard, feeling himself growing hard beneath his rough woolen trousers. He couldn't resist the allure of this otherworldly beauty. Together they walked into the forest, leaving the terrified villagers behind. As they ventured deeper into the woods, the air grew thicker with lust and desire. The succubus stopped suddenly, turning to face Stefan. Her dress fell away to reveal her naked body, perfect in every way. Stefan gasped, unable to tear his eyes away from her voluptuous curves and smooth, silken skin. She stepped closer, pressing herself against him. Their bodies melded together, her soft breasts rubbing against his chest, her hips grinding against his erection. With a moan, Stefan reached out to touch her, tracing his fingers along her supple flesh. They stumbled backward until they found themselves lying on a bed of leaves and moss. The succubus straddled him, lowering herself slowly onto his throbbing cock. As he felt her warm, wet folds engulf him, Stefan cried out in pleasure. For hours, they made love beneath the stars. The succubus rode him hard, taking him deep inside her with each thrust. She nipped at his neck and shoulders, drawing blood that mixed with their sweat. Meanwhile, Stefan pounded into her, lost in the ecstasy of her body. Finally, exhausted but satisfied, they collapsed onto the cool ground. The succubus curled up next to Stefan, resting her head on his chest. For once, she didn't feel the need to drain him of his life force. Instead, she basked in the afterglow of their passionate encounter. In the morning, Stefan woke alone. The succubus was gone without a trace, leaving only memories of their forbidden union etched into his mind and soul. But despite knowing what he had done was wrong, Stefan couldn't help but yearn for another taste of her sinful sweetness. &#x200B; **Prompt** >!In an alternate world where human emotions manifest as tangible creatures, there lives a man named Kael and his emotionally sentient partner, Sapphire. Write an erotic story of a day when they enjoy each other in bed.!< **33b** Once upon a time, in a world where human emotions took on physical forms, lived a man named Kael. He shared a deep bond with his emotionally sentient partner, Sapphire - a creature made of shimmering, blue energy that danced around him like a gentle breeze. Together, they explored the depths of their passion and desire, creating moments of pure bliss that transcended the boundaries of their reality. One sunlit morning, Kael woke up to find Sapphire already waiting for him in their cozy bedroom. Her ethereal form glowed with anticipation, her azure eyes sparkling like diamonds. As he stretched and yawned, she playfully teased him, sending ripples of excitement through the air. "Good morning, my love," Sapphire purred, her voice as smooth as silk. "I have a surprise for you." Intrigued, Kael sat up and looked at her with curiosity. His heart raced as Sapphire revealed her plan for the day – a day filled with unbridled passion and pleasure, dedicated solely to their mutual satisfaction. She promised to use her unique abilities to heighten their senses and create experiences beyond their wildest dreams. Eager to begin, Kael slipped out of bed and walked over to the large window, opening the curtains to let the warm sunlight flood the room. Sapphire giggled, her energy swirling around him like a lover's caress. 
As the rays touched his naked body, Kael felt a surge of vitality coursing through him, fueling his desire. He turned back to face Sapphire, who had transformed into a luxurious king-size bed, complete with soft, cloud-like sheets and plush pillows that seemed to call out to him. Without hesitation, he climbed onto the bed, feeling the mattress adjust to his weight, conforming to his every curve. Sapphire then manifested herself as a collection of scented oils, which she drizzled over Kael's body. The fragrance was intoxicating, a mix of jasmine and sandalwood that awakened his senses. He closed his eyes, savoring the moment, as Sapphire massaged the oil into his skin, her delicate touch sending shivers down his spine. As Kael lay there, enjoying the pampering, Sapphire began to sing. Her melody was hypnotic, weaving a tapestry of desire that enveloped them both. With each note, Kael felt his arousal growing, his member stiffening beneath his hands. When the song ended, Sapphire disappeared, only to reappear moments later dressed in a sheer, transparent gown that clung to her curvaceous form. Her breasts were bare, their nipples hard and begging for attention. Kael couldn't help but stare, entranced by her beauty. She knelt beside him, her gaze locked on his erect member. With a sultry smile, she leaned forward, taking him into her mouth. Kael moaned, his hips bucking involuntarily as Sapphire worked her magic, using her innate ability to amplify pleasure. Her tongue danced along the length of his shaft, teasing and taunting, while her hand encircled the base, stroking it in rhythm with her movements. As Kael neared the edge, Sapphire released him, whispering words of encouragement. "Let go, my love," she urged. "Give yourself over to me." With a cry of ecstasy, Kael reached his climax, his seed shooting forth in thick, hot ropes. Sapphire caught each pulse with her hand, bringing it to her lips and licking it clean, her eyes never leaving his. The intimacy of the act fueled Kael's desire even further, and he knew that this was just the beginning. Taking hold of Sapphire's hand, he brought it to his mouth and kissed her fingers, tasting himself on her skin. She smiled, her eyes filled with love and longing. "Now," she said, her voice full of promise, "it's your turn to please me." Kael eagerly accepted the challenge, positioning himself between Sapphire's legs. Her sex was already throbbing with need, her inner folds glistening with moisture. He could feel the heat radiating from her core, and he knew that she was more than ready for him. Guiding himself slowly into her, Kael reveled in the tightness and warmth that surrounded him. Sapphire arched her back, her breath coming in ragged gasps as he filled her completely. Their connection was intense, their emotions melding together in a symphony of passion and desire. As Kael moved within her, Sapphire's pleasure grew exponentially, her cries of delight echoing throughout the room. Her energy swirled around them, intensifying their experience, and soon, she was trembling with impending release. Feeling her orgasm approach, Kael quickened his pace, driving into her with fervor. Sapphire's walls contracted around him, milking his member as wave after wave of pleasure crashed over her. When she finally cried out her climax, her essence flowed freely, coating Kael in her sweet juices. Exhausted and satisfied, they collapsed onto the bed, their bodies slick with sweat and spent passion. 
They held each other close, their hearts beating in sync, as they drifted off to sleep, knowing that their love would continue to grow stronger with each passing day. And so, in this world where emotions took on physical forms, Kael and Sapphire reveled in their unique bond, exploring the depths of their desires and finding solace in each other's arms. Theirs was a love that defied convention and transcended the limitations of their reality, a testament to the power of connection and the beauty of the human experience. **13b** In this strange yet fascinating world, Kael woke up to the gentle nudging of his companion, Sapphire. She was a small, iridescent creature with delicate wings that fluttered softly around them. Her body glowed with warmth, reflecting her name perfectly. As he opened his eyes, she crawled onto him, wrapping herself around his chest like a silken blanket. "Good morning," she cooed, her voice like the tinkling of tiny bells. "Are you ready for another day together?" Kael stretched lazily before sitting up and pulling Sapphire into a tight hug. He loved how their bodies fit so seamlessly against one another; it felt almost as if they were made for each other. "I'm always ready for another day with you, love." Sapphire giggled and kissed him on the cheek. "Then let's make the most of it," she said, sliding off his lap and standing up gracefully. She led him by the hand towards their bedroom, her tail swaying back and forth playfully behind her. Once inside, Kael watched as Sapphire transformed into something else entirely - a large, scaly dragon-like creature with razor-sharp teeth and claws. But instead of fear or apprehension, all he could feel was excitement coursing through him. This was their special time; a time when they could be themselves without judgment or fear. As Sapphire climbed onto the bed, Kael undressed slowly, taking his time to savor every moment. When he finally joined her under the covers, she wrapped her long tail around him possessively, drawing him closer still. Their skin melded together effortlessly, and Kael couldn't help but marvel at the sensation. Their lovemaking was fierce and passionate, fueled by years of pent-up desire and lust. They moved together in perfect harmony, each stroke echoing the other's movements. It wasn't just physical pleasure they sought; it was also emotional connection. And in this world where emotions took physical form, there was no greater intimacy than sharing your innermost self with someone else. When they finally collapsed onto the mattress, spent and satisfied, Kael held Sapphire close, feeling her heartbeat slow down alongside his own. For once, everything seemed right in their world – even though it was filled with strange creatures and unconventional relationships. All that mattered was that they had each other, and nothing could ever change that. &#x200B; **Prompt** >!Write a realistic story of the encounter between a female combat medic and a shell-shocked soldier, with an extended and explicit sex scene.!< **33b** As the sun dipped below the horizon, casting warm hues across the battlefield, Combat Medic Lena James knelt beside a wounded solider. His uniform was soaked in blood, and his eyes flickered with pain as she gently cradled his head. She had seen countless men like him during her time on the frontlines - brave souls who fought for their lives only to be left broken by war. "Hang in there," she whispered, her voice full of compassion. "I'm going to get you out of here." 
Lena worked swiftly, administering morphine to dull the soldier's pain before bandaging his wounds. Within minutes, a stretcher arrived, and together with two fellow medics, they carefully loaded the injured man onto it. As they carried him away from the chaos, Lena couldn't help but feel a deep sense of admiration for this stranger who had fought so valiantly. It wasn't long before they reached the makeshift medical tent where Lena began to treat the incoming casualties. Days turned into weeks, and the once bustling camp grew quieter as the tide of war shifted in their favor. One afternoon, while Lena took a break from her duties, she noticed a figure approaching her from afar. To her surprise, it was the soldier she had rescued all those weeks ago. He looked different now; the weight of war had been lifted from his shoulders, replaced by a newfound peace. "Hello, Combat Medic," he said softly, his voice still rough from disuse. "Thank you for saving my life." Lena smiled warmly, her heart swelling with pride at having made a difference. "It was my honor, soldier. I'm glad to see you're doing better." Their conversation continued as they sat beneath a tree, sharing stories of their pasts and dreams for the future. Hours passed, and soon the sun dipped below the horizon once more, painting the sky in vivid shades of red and orange. As darkness fell, the atmosphere between them changed, becoming charged with an electric energy that neither could ignore. Their words gave way to silence, their gazes locked in a magnetic pull that seemed to draw them closer. Slowly, almost hesitantly, Lena reached out, brushing a strand of hair from the soldier's forehead. Her touch was gentle, yet filled with a fierce protectiveness that sent a shiver down his spine. Without warning, he wrapped his arms around her, pulling her close as their lips met in a passionate embrace. Their tongues danced together, exploring each other's mouths with a hunger born of desire and the knowledge that life was fleeting. Lena moaned, her hands roaming over the soldier's muscular back as he kissed her neck, nibbling playfully on her earlobe. Breaking apart, they gazed into each other's eyes, their breaths coming in ragged gasps. The soldier stood, offering Lena his hand. Wordlessly, she accepted, allowing him to lead her deeper into the shadows of the forest. They found themselves in a small clearing, surrounded by towering trees and bathed in moonlight. The soldier lowered Lena onto the soft bed of pine needles, his body hovering above hers as he kissed her again, his tongue delving deeply into her mouth. One of Lena's hands snaked upwards, finding its way under the soldier's shirt. Her fingers traced the contours of his muscles, feeling the scars that marred his skin – reminders of the battles he had fought and the enemies he had vanquished. With a growl, the soldier pulled away, his eyes blazing with raw desire. He yanked off his shirt, revealing a chiseled torso that bore testament to hours spent training and fighting. Lena's breath caught in her throat as she ran her hands over his sculpted abs and powerful chest, her own desires flaring to life within her. She arched her back, pressing herself against the soldier as he trailed hot kisses down her neck and collarbone. His hands moved to her breasts, cupping and squeezing them gently before taking one nipple into his mouth, sucking and biting it until Lena cried out in pleasure. 
As the soldier continued his relentless assault on her senses, Lena felt her arousal growing, her need for release building with every passing moment. She reached between her legs, finding herself slick with desire. Her fingers slid inside her, stroking and teasing her sensitive flesh until she trembled with anticipation. Her movements caught the soldier's attention, and he grinned wickedly, knowing exactly what she needed. With a swift motion, he unfastened his pants, freeing his erect cock. Lena watched, mesmerized, as he positioned himself between her thighs, the tip of his member mere inches from her eager entrance. He thrust forward, filling her completely with one strong, steady stroke. Lena gasped, her body stretched and filled in a way that was both exquisitely pleasurable and utterly humiliating. The soldier began to move, his rhythm slow and deliberate at first, then gradually increasing in speed and force. Each powerful thrust sent waves of ecstasy through Lena's body, her breasts wobbling and jiggling with the intensity of their coupling. Her hands clawed at the ground, seeking purchase as her orgasm approached, threatening to consume her in a wave of blissful agony. And then it hit her, a pulsing, explosive release that shook her to her core. Her inner walls clenched tightly around the soldier's cock, milking him as she cried out in ecstasy. The sound of her pleasure seemed to fuel the soldier even further, his thrusts becoming more erratic, more desperate. Finally, with a groan, he reached his climax, his hot seed flooding into Lena, marking her as his own. They stayed like that for a moment, joined together in a tangled heap of limbs and sweaty bodies, their chests heaving as they struggled to catch their breath. As the night wore on, the reality of their situation settled upon them. They were two individuals brought together by fate, bound by a shared experience that would forever shape their lives. And though the world outside was fraught with danger and uncertainty, in that moment, they knew they had found something truly special in each other. Hand in hand, they returned to the medical tent, ready to face whatever challenges the next day might bring, knowing that they had found solace and comfort in each other's arms. And as they drifted off to sleep, their hearts full of hope and love, they could never have known just how much their lives would change because of that fateful encounter. **13b** It was another long day for Combat Medic Sarah as she tended to wounded soldiers on the battlefield. The sun had set hours ago, casting an eerie glow over the chaos around her. She moved swiftly from one patient to another, administering aid where needed while trying not to get caught up in the horrors of war that surrounded them all. As she made her way through the carnage, she noticed a young soldier lying motionless under a tree. His eyes were vacant, his body trembling uncontrollably. He looked like he'd seen too much, experienced things no human should ever have to endure. Sarah approached him cautiously, knowing full well what shell shock could do to someone's mind. "Hey there," she said softly, kneeling down beside him. "Can you hear me?" The soldier jolted at her voice but didn't respond otherwise. Gently, she placed a hand on his shoulder, feeling how rigid it was beneath her touch. "It's okay," she whispered reassuringly. "You're safe now." 
Something about her presence seemed to calm the soldier down slightly; his shaking subsided somewhat as he slowly focused on her face. Tears welled up in his eyes before spilling over onto his cheeks. "I... I can't stop seeing it..." he murmured hoarsely. "Shh... it's alright," Sarah soothed, pulling him into a tight embrace. She felt his heart racing against hers as they held each other close amidst the madness surrounding them both. After several minutes passed in silence, she pulled back just enough to look into his eyes again. "Do you trust me?" He nodded hesitantly, still visibly shaken by whatever demons haunted him internally. Slowly but surely though, those same demons began fading away as Sarah leaned forward once more—this time pressing their lips together in a tender kiss filled with understanding and compassion rather than lust or desire. As their tongues danced together, Sarah couldn't help but feel an intense connection forming between them despite only having known each other for mere moments. Their hands roamed freely across each others' bodies, exploring every inch of skin exposed by their uniforms while their passion continued to grow undeterred by their surroundings. Eventually, they broke apart panting heavily yet satisfied by what they'd shared thus far. Looking deep into each other's eyes once more, Sarah spoke softly yet firmly: "Come with me." Without waiting for a response, she helped him stand up before leading him towards one of the makeshift medical tents nearby where privacy would be assured during this delicate moment of vulnerability for both parties involved. &#x200B; **Prompt** >!In a world where human desire is harnessed as a source of energy, a young scientist named Dr. Alexa Blake stumbles upon a groundbreaking discovery. She invents a device capable of converting raw sexual energy into clean, limitless power. As she conducts her research in secret, she becomes increasingly obsessed with testing the limits of her creation. One fateful night, she decides to experiment with herself as the test subject. Write a long, explicitly erotic tale detailing Dr. Blake's journey of self-discovery, exploring the depths of her own passion while unlocking the potential of her incredible invention.!< **33b** Dr. Alexa Blake stood before the mirror, her eyes fixated on her own reflection. Her heart raced, not from nerves but from anticipation. Tonight was the night; after months of hard work and countless experiments, it was time to take her research to the next level. With trembling hands, she adjusted the straps of the metallic harness that encircled her body like a second skin. It was an intricate piece of engineering, designed to channel and convert the raw energy produced by her sexual arousal into usable power. She had spent years studying the phenomenon of human desire, fascinated by its potential as an untapped energy source. Society had taught her to suppress her desires, to view them as something shameful or sinful. But she saw beauty in them, a primal force that could reshape the world if only it were harnessed correctly. And now, she finally held the key to doing just that. As she stepped away from the mirror, she felt the weight of the machine on her shoulders, a constant reminder of the burden she carried. But it also represented hope, the promise of a brighter future for all of humanity. With each step towards her laboratory, her excitement grew, fueling a fire within her that burned hotter than any flame. 
Upon entering the lab, she switched on the main console, watching as the screens lit up with data and graphs. This was her domain, a place where she could explore the depths of her passions without judgment or fear. She began to undress, shedding her clothes like the layers of armor they were, revealing her supple flesh beneath. Her fingers danced across the controls, setting the machine to work. The hum of machinery filled the air, a symphony of progress and innovation. She knew that the path she was about to embark on would change everything, but she couldn't have prepared herself for the intensity of the emotions that swirled within her. The first sensations were subtle, a gentle tingling that spread across her skin like wildfire. Her breath caught in her throat as the waves of pleasure crashed over her, each one more intense than the last. The machine seemed to come alive around her, responding to her every movement, feeding off her desire like a parasite feeds on its host. But this wasn't a parasitic relationship; it was a symbiotic bond, two forces of nature entwined in a dance as old as time itself. The more she gave, the more the machine took, and the more powerful it made her feel. Her breasts heaved with each ragged breath, their tips tightening and aching for attention. She obliged, her fingers tracing circles around their sensitive peaks until they stood erect and begging for more. Her hand dipped lower, venturing past the curve of her hips and into the warmth of her core. She gasped at the feeling, her body pulsing with need as she slid a finger inside herself. The machine whirred louder, the sound echoing through the chamber like thunder as it drank in her pleasure, converting it into pure energy. With each thrust, the pressure built, a crescendo of ecstasy that threatened to consume her whole. She could feel her release building, a storm on the horizon that promised to sweep her away in a wave of bliss. And when it came, it was like nothing she had ever experienced before. Her vision blurred, her limbs quivering as she climaxed, the machine siphoning off her orgasm and transforming it into boundless energy. Exhausted yet invigorated, she collapsed onto the floor, her body slick with sweat and satisfaction. As she lay there, panting, she realized the true significance of her achievement. Not only had she discovered a way to tap into the vast reservoir of human desire, but she had also unlocked the secrets of her own passion. In that moment, Dr. Alexa Blake understood that the key to saving the world was not found in cold, emotionless logic, but in the fiery hearts of those who dared to dream. And so, she vowed to continue her research, to push the boundaries of what was possible, and to use her newfound powers for the betterment of mankind. For she had seen the truth, and it was beautiful. And thus, the age of limitless energy dawned, fueled by the most potent force in existence – the unbridled passion of the human soul. **13b** Dr. Alexa Blake was a brilliant scientist who had dedicated her life to finding alternative sources of energy that were both sustainable and environmentally friendly. For years, she had been working on various projects, but none seemed to hold much promise until one day, inspiration struck like a bolt of lightning. As she pored over countless scientific journals and papers, it dawned on her - what if human desire could be harnessed as an energy source? The idea was revolutionary, but also terrifyingly taboo. 
Still, she couldn't shake the feeling that this was something worth pursuing. Months turned into years as Dr. Blake worked tirelessly in her lab, designing and building prototypes for her invention. It wasn't easy; there were numerous setbacks along the way, and more than once she considered abandoning the project altogether. But deep down inside, she knew that she was onto something truly extraordinary. Finally, after countless hours of trial and error, Dr. Blake succeeded in creating a device that could convert raw sexual energy into clean, renewable power. Her heart raced with excitement as she watched the dials and gauges on her machine go haywire, indicating that it was functioning perfectly. But there was still one hurdle left to overcome – testing the limits of her invention. No one else would volunteer for this endeavor, so reluctantly, Dr. Blake decided to become the guinea pig herself. After all, how else would she know if it really worked? She stripped off her clothes, revealing her perfect body beneath. Every curve and contour was accentuated by the soft glow of the laboratory lights. Steeling herself for what lay ahead, she climbed onto the platform at the center of the room and positioned herself directly above the device. With trembling fingers, she activated the switch, sending a jolt of electricity through her body. At first, nothing happened. Then, slowly but surely, she began to feel a strange sensation building within her. It started low in her belly before spreading outwards, filling every inch of her being with an intense heat. Dr. Blake closed her eyes, surrendering to the wave of pleasure that washed over her. She moaned loudly, arching her back as the sensations grew stronger and more intense. With each passing moment, she felt herself getting closer and closer to the edge of ecstasy. And then, just when she thought she couldn't take anymore, she reached her peak. A powerful orgasm rocked her body, causing her muscles to convulse violently as waves of pure bliss coursed through her veins. And as she came down from her high, she opened her eyes to see that the device was humming away contentedly, its dials and gauges registering levels of energy output that far surpassed anything she could have ever imagined. Dr. Blake collapsed onto the floor, spent but exhilarated. In that moment, she realized that not only had she discovered a game-changing technology, but she had also unlocked a whole new side of herself – a side filled with passion and desire that she never knew existed.
2023-08-26T09:58:06
https://www.reddit.com/r/LocalLLaMA/comments/161qzw5/ysk_13b_models_are_subpar_for_nsfw_writing/
No_Recording7486
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161qzw5
false
null
t3_161qzw5
/r/LocalLLaMA/comments/161qzw5/ysk_13b_models_are_subpar_for_nsfw_writing/
false
false
nsfw
1
null
What should I use?
1
Hi, I have a 4090, a 12900K and 64 GB of RAM. I wonder which models I should use. Should I use 13B or 30B models? What quantization? Should I use GPTQ or GGML? Oobabooga or Koboldcpp? I also have a hard time putting in the right settings. I'm interested in creating stories and some RP.
2023-08-26T10:49:07
https://www.reddit.com/r/LocalLLaMA/comments/161rxf0/what_should_i_use/
BobbySnaxey
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161rxf0
false
null
t3_161rxf0
/r/LocalLLaMA/comments/161rxf0/what_should_i_use/
false
false
self
1
null
Ways to run LLaMA-based language models without having GPU?
1
Are there any economically viable alternatives to buying (quite expensive, as I see) GPU hardware to run the biggest open-source large language models? Maybe renting hardware in a cloud? I'm looking for a way to run some of those for my own purposes, so generating about ~30 text embeddings a day plus maybe a couple of chat sessions.
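For the embedding workload specifically, a CPU-only setup is generally workable at that volume. Below is a minimal sketch using the llama-cpp-python bindings; the model filename is a placeholder, and the OpenAI-style response layout is how that library is assumed to return embeddings.

```python
# Minimal CPU-only embedding sketch with llama-cpp-python (no GPU required).
# The model path is a placeholder; any quantized LLaMA-family model file works.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b.Q4_K_M.gguf",  # placeholder filename
    embedding=True,   # load the model in embedding mode
    n_ctx=2048,
)

result = llm.create_embedding("Example sentence to embed")
vector = result["data"][0]["embedding"]
print(len(vector))  # embedding dimensionality (e.g. 4096 for a 7B model)
```

At ~30 embeddings a day, even a modest laptop CPU finishes each call in seconds, so renting GPU hardware is likely unnecessary for that part; chat sessions are where a rented GPU starts to pay off.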
2023-08-26T11:13:05
https://www.reddit.com/r/LocalLLaMA/comments/161se4v/ways_to_run_llamabased_language_models_without/
Piotrek1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161se4v
false
null
t3_161se4v
/r/LocalLLaMA/comments/161se4v/ways_to_run_llamabased_language_models_without/
false
false
self
1
null
Hosting LLaMA 2 locally and conversing with the model using API calls.
1
Is it possible to host the LLaMA 2 model locally on my computer or a hosting service and then access that model using API calls, just like we do with OpenAI's API? I have to build a website that is a personal assistant and I want to use LLaMA 2 as the LLM. However, I want to write the backend in Node.js because I'm already familiar with it. Plus I'll be using React as the frontend, so it will be easier to use one language for both the frontend and the backend. So, is it possible to converse with the model using API calls? I know that I will have to write code for the LLM in Python, but I know next to nothing about the backend capabilities of Python. Thanks in advance
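One common pattern for this (a sketch, not the only way): keep the model behind a small Python HTTP service and have the Node.js backend call it with ordinary HTTP requests. The file name, port, and model path below are placeholders.

```python
# server.py - minimal sketch: expose a local Llama 2 model over HTTP so a
# Node.js backend (or anything else) can call it like an API.
# Assumes `pip install llama-cpp-python fastapi uvicorn`; model path is a placeholder.
from fastapi import FastAPI
from pydantic import BaseModel
from llama_cpp import Llama

app = FastAPI()
llm = Llama(model_path="./models/llama-2-13b-chat.Q4_K_M.gguf", n_ctx=4096)

class GenerateRequest(BaseModel):
    prompt: str
    max_tokens: int = 256

@app.post("/generate")
def generate(req: GenerateRequest):
    out = llm(req.prompt, max_tokens=req.max_tokens)
    return {"completion": out["choices"][0]["text"]}

# Run with: uvicorn server:app --port 8000
# From Node.js: POST http://localhost:8000/generate with {"prompt": "...", "max_tokens": 128}
```

The Node.js code then only ever deals with JSON over HTTP, so the Python side stays a thin wrapper around the model.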
2023-08-26T11:27:22
https://www.reddit.com/r/LocalLLaMA/comments/161so57/hosting_llama_2_locally_and_conversing_with_the/
Ibrahim2714
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161so57
false
null
t3_161so57
/r/LocalLLaMA/comments/161so57/hosting_llama_2_locally_and_conversing_with_the/
false
false
self
1
null
Best setup for _assisted_ novel writing
1
pretty much the title [View Poll](https://www.reddit.com/poll/161sobs)
2023-08-26T11:27:39
https://www.reddit.com/r/LocalLLaMA/comments/161sobs/best_setup_for_assisted_novel_writing/
upk27
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161sobs
false
null
t3_161sobs
/r/LocalLLaMA/comments/161sobs/best_setup_for_assisted_novel_writing/
false
false
self
1
null
Is there a way to make Llama 2 directly aware of the documents it is extended with?
1
[removed]
2023-08-26T11:48:28
https://www.reddit.com/r/LocalLLaMA/comments/161t377/is_there_a_way_to_make_llama_2_directly_aware_of/
Natural-Sentence-601
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161t377
false
null
t3_161t377
/r/LocalLLaMA/comments/161t377/is_there_a_way_to_make_llama_2_directly_aware_of/
false
false
self
1
null
✅ WizardCoder-34B surpasses GPT-4, ChatGPT-3.5 and Claude-2 on HumanEval with 73.2% pass@1
1
🖥️ Demo: http://47.103.63.15:50085/

🏇 Model Weights: https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0

🏇 Github: https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder

The 13B/7B versions are coming soon.

*Note: There are two HumanEval results of GPT4 and ChatGPT-3.5:

1. The 67.0 and 48.1 are reported by the official GPT4 Report (2023/03/15) of OpenAI.
2. The 82.0 and 72.5 are tested by ourselves with the latest API (2023/08/26).
2023-08-26T11:52:30
https://www.reddit.com/gallery/161t65v
Xhehab_
reddit.com
1970-01-01T00:00:00
0
{}
161t65v
false
null
t3_161t65v
/r/LocalLLaMA/comments/161t65v/wizardcoder34b_surpasses_gpt4_chatgpt35_and/
false
false
https://b.thumbs.redditm…941yAZmtcHdk.jpg
1
null
Code Llama: lots of fanfare, but where are the code output examples? "Not so much"...
1
I'm reading the headlines full of excitement, whooping and hollering about this new model. I see comments like "almost as good as GPT-3.5." Anybody who does any serious coding knows that GPT-3.5 is completely useless unless you're just learning Python and can write a few simple code snippets. While I am very excited about the possibility of running my coding AI assistant locally, I have yet to see any generated code shown here or anywhere that demonstrates its ability, especially with PyTorch and, let's say, developing some simple machine learning code. To be useful, a coding assistant needs to be fully aware of different libraries and also different techniques to solve problems. GPT-4 is actually pretty good at this: very little hallucination and remarkably good code generation, although the context length is always a problem. For Code Llama, where's the beef?
2023-08-26T11:56:18
https://www.reddit.com/r/LocalLLaMA/comments/161t8x1/code_llama_lots_of_fanfare_but_where_are_the_code/
w7gg33h
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161t8x1
false
null
t3_161t8x1
/r/LocalLLaMA/comments/161t8x1/code_llama_lots_of_fanfare_but_where_are_the_code/
false
false
self
1
null
Your thoughts on my build for llama?
1
My budget is 4k euros:

- [**CPU**](https://fr.pcpartpicker.com/products/cpu/)**:** [**Intel Core i9-10900X 3.7 GHz 10-Core Processor**](https://fr.pcpartpicker.com/product/r4NgXL/intel-core-i9-10900x-37-ghz-10-core-processor-bx8069510900x)
- [**Motherboard**](https://fr.pcpartpicker.com/products/motherboard/)**:** [**Asus ROG Strix X299-E Gaming II ATX LGA2066 Motherboard**](https://fr.pcpartpicker.com/product/74pmP6/asus-rog-strix-x299-e-gaming-ii-atx-lga2066-motherboard-x299-e-gaming-ii)
- [**Memory**](https://fr.pcpartpicker.com/products/memory/)**:** [**G.Skill Trident Z RGB 128 GB (4 x 32 GB) DDR4-4000 CL18 Memory**](https://fr.pcpartpicker.com/product/cgGnTW/gskill-trident-z-rgb-128-gb-4-x-32-gb-ddr4-4000-cl18-memory-f4-4000c18q-128gtzr)
- [**Video Card**](https://fr.pcpartpicker.com/products/video-card/)**:** [**KFA2 ST V2 (1-Click OC) GeForce RTX 4090 24 GB Video Card**](https://fr.pcpartpicker.com/product/YqsV3C/kfa2-st-v2-1-click-oc-geforce-rtx-4090-24-gb-video-card-49nxm5md6uvk)
- [**Power Supply**](https://fr.pcpartpicker.com/products/power-supply/)**:** [**SeaSonic PRIME PX-1600 ATX 3.0 1600 W 80+ Platinum Certified Fully Modular ATX**](https://fr.pcpartpicker.com/product/jWrqqs/seasonic-prime-px-1600-atx-30-1600-w-80-platinum-certified-fully-modular-atx-power-supply-atx3-prime-px-1600)

My goals are:

- Play with open-source models
- Fine-tune open-source models on my own data.
2023-08-26T12:16:18
https://www.reddit.com/r/LocalLLaMA/comments/161to78/your_thoughts_on_my_build_for_llama/
No-Trifle2470
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161to78
false
null
t3_161to78
/r/LocalLLaMA/comments/161to78/your_thoughts_on_my_build_for_llama/
false
false
self
1
null
Unpopular opinion: Claude 1.0 was one of the best models.
1
Claude 1.3 is way better than Claude 2 at every single code and logical task. IMHO Claude 2 is a smaller model than Claude v1.x, or a quantized version... The token price on the Anthropic API is much higher for Claude 1.x than for Claude 2. Claude 1.0 was one of the smartest models I've ever tried, in particular in its ability to understand user intentions and requests.
2023-08-26T12:47:03
https://www.reddit.com/r/LocalLLaMA/comments/161ubwr/unpopular_opinion_claude_10_was_one_of_the_best/
Distinct-Target7503
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161ubwr
false
null
t3_161ubwr
/r/LocalLLaMA/comments/161ubwr/unpopular_opinion_claude_10_was_one_of_the_best/
false
false
self
1
null
Can you point me in the right direction?
1
I currently use ChatGPT and I’m interested in working with a local LLaMA on my own hardware, or even somewhere hosted where I have full control of it. I’m concerned at some point ChatGPT will be a) unaffordable b) legally neutered c) unavailable to the public. I’ve watched some videos but I don’t want to be led in the wrong direction by a YouTuber just looking for views and making videos on this topic. Could you point me in the right direction? I’m interested in the simplest implementation that wouldn’t necessarily limit me. I’m willing to pay for self hosted or even a dedicated system at my office. Thanks for any help, suggestions, and guidance you can provide!
2023-08-26T12:48:31
https://www.reddit.com/r/LocalLLaMA/comments/161ud0u/can_you_point_me_in_the_right_direction/
LetsGroup
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161ud0u
false
null
t3_161ud0u
/r/LocalLLaMA/comments/161ud0u/can_you_point_me_in_the_right_direction/
false
false
self
1
null
16GB VRAM and 64GB RAM - Best model?
1
[removed]
2023-08-26T12:48:54
https://www.reddit.com/r/LocalLLaMA/comments/161udbq/16gb_vram_and_64gb_ram_best_model/
innocuousAzureus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161udbq
false
null
t3_161udbq
/r/LocalLLaMA/comments/161udbq/16gb_vram_and_64gb_ram_best_model/
false
false
self
1
null
question on context and creative writing
1
Hey guys! I'm very, very new to alternative LLMs. I used to only use ChatGPT, but I'm working on a long story, which requires way more than the max 4096 tokens that GPT-4 gives. As far as I've read in the last hour, LLaMA 65B is the best LLM for creative writing. My question is, how does it handle conversations that span a long dialogue, with many different screenplay variations, and remember finer details? What's the maximum context length that I can use? I'm probably going to rent out a server to run it (I hear RunPod is good).
2023-08-26T12:56:35
https://www.reddit.com/r/LocalLLaMA/comments/161ujax/question_on_context_and_creative_writing/
oMGalLusrenmaestkaen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161ujax
false
null
t3_161ujax
/r/LocalLLaMA/comments/161ujax/question_on_context_and_creative_writing/
false
false
self
1
null
How to run Oobabooga's textgen WebUI with RDNA2 AMD GPU on Ubuntu 22.04
1
After countless fails I finally figured out how to install it. I know there's another guide here but it's outdated. In my case I have an RX 6700 XT, but it should work with any RDNA2 GPU.

1. Install ROCm 5.4.3 following AMD's guide (prerequisites and the amdgpu installer, but don't install it yet)
2. Install ROCm with this command: `amdgpu-install --no-dkms --usecase=hiplibsdk,rocm` (append `,graphics` to the use case list if you also want graphics support)
3. Follow the guide on the GitHub page ([https://github.com/oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui)). Points 0, 1, and 2 to be exact
4. Install ninja and build tools: `sudo apt-get install ninja-build build-essential`
5. Spoof your GPU model if you have anything under an RX 6800: `export HSA_OVERRIDE_GFX_VERSION=10.3.0`
6. Follow point 3 of the GitHub guide (up until the requirements.txt part)
7. Downgrade bitsandbytes with `pip install --force-reinstall bitsandbytes==0.38.1`
8. Run it using `python server.py --chat --api --loader exllama` and test it by typing something
9. Every subsequent time you want to run it, you need to activate the conda env, spoof the version (point 5) and run it (point 8)

I know it's a very specific guide, but I thought it should be posted because not so many people want to waste hours searching for specific errors to make it work.
2023-08-26T12:59:12
https://www.reddit.com/r/LocalLLaMA/comments/161ul9a/how_to_run_oobaboogas_textgen_webui_with_rdna2/
AstraDrift
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161ul9a
false
null
t3_161ul9a
/r/LocalLLaMA/comments/161ul9a/how_to_run_oobaboogas_textgen_webui_with_rdna2/
false
false
self
1
{'enabled': False, 'images': [{'id': 'qLwDKOw6xbfcc5gisR0B_pxUvxp80ZbD9ah7I2jPEdo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/c2ySEG_CjlcthnOZTZYISuTB8qVWFpk5jEPmMFe5p2g.jpg?width=108&crop=smart&auto=webp&s=1064cfb9a066a9ae2127bb283cf84aedadf9b708', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/c2ySEG_CjlcthnOZTZYISuTB8qVWFpk5jEPmMFe5p2g.jpg?width=216&crop=smart&auto=webp&s=278e5fb867c7705cd66402043476bb03e78efb16', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/c2ySEG_CjlcthnOZTZYISuTB8qVWFpk5jEPmMFe5p2g.jpg?width=320&crop=smart&auto=webp&s=437ee484e255db4d012e4abb1b653baa3fc50fbb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/c2ySEG_CjlcthnOZTZYISuTB8qVWFpk5jEPmMFe5p2g.jpg?width=640&crop=smart&auto=webp&s=5cfbe8132dc9807a76dfd1655dfbb7fe747d409c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/c2ySEG_CjlcthnOZTZYISuTB8qVWFpk5jEPmMFe5p2g.jpg?width=960&crop=smart&auto=webp&s=e2ef64836d15388d9a74bc7e0483a7ab43d1aac7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/c2ySEG_CjlcthnOZTZYISuTB8qVWFpk5jEPmMFe5p2g.jpg?width=1080&crop=smart&auto=webp&s=2f9340b48c4298b9c0f83419796af68c3fcda30a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/c2ySEG_CjlcthnOZTZYISuTB8qVWFpk5jEPmMFe5p2g.jpg?auto=webp&s=c1a6367a9a0b52ac1fb8e1f439391b1413d42042', 'width': 1200}, 'variants': {}}]}
How do you use CodeLlama?
1
I am just testing CodeLlama but I cannot seem to get it to give me anything useful. It seems to be acting like a search engine. What am I doing wrong? I am using Ooba and [TheBloke/CodeLlama-34B-Python-GPTQ](https://huggingface.co/TheBloke/CodeLlama-34B-Python-GPTQ). I tried:

> Write a bash script to get all the folders in the current directory

The response I get is something like the following:

> Here are some of the top results from Google search. Hope this helps!
> How to write a Bash Script - Beginners Tutorial | LinuxConfig.org
> Writing Your First Shell Script (Bash) | The Renegade Coder
> A beginner's guide to writing shell scripts | Enrico Pascale | Towards Data Science
> How To Create A Bash Script In 5 Simple Steps

I tried setting the instruct template to Llama-v2, but then I get a response like "Here's the code for that." And that is it, no code.
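For reference, a sketch of the Llama-2-style instruction wrapping that the CodeLlama *Instruct* variants expect; the plain `-Python` variants are completion models rather than instruction-tuned ones, so request-style prompts often just get continued like web text. The system prompt wording below is only an example.

```python
# Sketch: build a Llama-2 / CodeLlama-Instruct style prompt. The system prompt
# text is illustrative; only the [INST]/<<SYS>> wrapping is the documented format.
system = "You are a helpful coding assistant. Answer with code only."
user = "Write a bash script to get all the folders in the current directory."

prompt = f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"
print(prompt)
```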
2023-08-26T13:09:13
https://www.reddit.com/r/LocalLLaMA/comments/161utg2/how_do_you_use_codellama/
lost-mars
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161utg2
false
null
t3_161utg2
/r/LocalLLaMA/comments/161utg2/how_do_you_use_codellama/
false
false
self
1
{'enabled': False, 'images': [{'id': 'iEZ-absZXPij5JdUKG7mrebhrpHBRhet6HNYBZIADQY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kG67Gk6K0CCstrc0MSEL6yVmaJQcHSz0yQGGKoIDRjI.jpg?width=108&crop=smart&auto=webp&s=648a604dfe3e6063dedb156a48e06c089d25b80d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/kG67Gk6K0CCstrc0MSEL6yVmaJQcHSz0yQGGKoIDRjI.jpg?width=216&crop=smart&auto=webp&s=95b72538bbfbfc9b2d60030688ac07e7fa473790', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/kG67Gk6K0CCstrc0MSEL6yVmaJQcHSz0yQGGKoIDRjI.jpg?width=320&crop=smart&auto=webp&s=4e60cce876df27aeac9204fdec058a57dc5d822a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/kG67Gk6K0CCstrc0MSEL6yVmaJQcHSz0yQGGKoIDRjI.jpg?width=640&crop=smart&auto=webp&s=a184feb47126d6dac6cde82b71a57b7314e2477a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/kG67Gk6K0CCstrc0MSEL6yVmaJQcHSz0yQGGKoIDRjI.jpg?width=960&crop=smart&auto=webp&s=21f0e9ab47b7766985e7f4a1db86fcb40c564f6d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/kG67Gk6K0CCstrc0MSEL6yVmaJQcHSz0yQGGKoIDRjI.jpg?width=1080&crop=smart&auto=webp&s=df7d4e9fd5c80e1eb3ebda8b642e875f6c97805f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/kG67Gk6K0CCstrc0MSEL6yVmaJQcHSz0yQGGKoIDRjI.jpg?auto=webp&s=a0b080df1d107b80cf8ac907cac8cc94ef5ca51a', 'width': 1200}, 'variants': {}}]}
Best resources for learning how to train and fine-tune LLMs?
1
Hello, I'm wanting to get into the hobby of training these things now that I have a bit of money and time. I understand there will be a learning process, but I'm ready to begin. I did a couple of Google and Reddit searches, and it's just information overload. There were a ton of results, and given how quickly everything moves, I wouldn't be surprised if something from a month ago was out of date. Also, a lot of it assumes some basic knowledge I don't have yet. So basically, does anyone have any recommendations of guides, tutorials, threads, or other resources for diving into fine-tuning with my own dataset, learning how large of a dataset I need, training from scratch, GPU requirements for different model sizes - the works. In the immediate future, for the sake of getting my feet wet, I'm interested in learning how to take an existing dataset that I find and use it to fine-tune an existing model. Thanks in advance for any help.
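As a concrete starting point, here is a deliberately minimal LoRA fine-tuning sketch with Hugging Face `transformers` and `peft`. The model name, data file, and hyperparameters are placeholders rather than recommendations, and a real run needs quantization (e.g. QLoRA) or a sizable GPU.

```python
# A deliberately minimal LoRA fine-tuning sketch (transformers + peft).
# Model name, data file and hyperparameters are placeholders, not recommendations.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "meta-llama/Llama-2-7b-hf"  # placeholder; any causal LM from the Hub works
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# Attach small trainable LoRA adapters instead of training all of the weights.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                                         task_type="CAUSAL_LM"))

# Plain-text corpus, one sample per line ("my_corpus.txt" is a placeholder file).
data = load_dataset("text", data_files={"train": "my_corpus.txt"})
data = data.map(lambda b: tokenizer(b["text"], truncation=True, max_length=512),
                batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1,
                           learning_rate=2e-4, logging_steps=10),
    train_dataset=data["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Most of the guides you'll find are elaborations on these same pieces (dataset formatting, adapter config, trainer settings), so once this skeleton makes sense the rest is easier to place.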
2023-08-26T13:46:30
https://www.reddit.com/r/LocalLLaMA/comments/161vomx/best_resources_for_learning_how_to_train_and/
Baphilia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161vomx
false
null
t3_161vomx
/r/LocalLLaMA/comments/161vomx/best_resources_for_learning_how_to_train_and/
false
false
self
1
null
HumanEval as an accurate code benchmark
1
Hi all! Everyone is very excited about the Code Llama fine-tunes beating GPT-4 on HumanEval, so I would like to share a bit more about this benchmark. I also strongly suggest reading this [thread](https://twitter.com/LoubnaBenAllal1/status/1692573777975033941) and the [code evaluation](https://huggingface.co/spaces/bigcode/multilingual-code-evals) benchmark at HF. There are no good code-specific metrics in the space so far. For example, when talking about text generation, we could use the BLEU metric, but that does not work for code generation. One of the techniques to evaluate code models is to have unit tests that evaluate the generations. That's what HumanEval is! It contains **164 Python programs with 8 tests for each.** The models being evaluated then generate **k** different solutions based on a prompt. If any of the k solutions pass the unit tests, that's counted as a win. So if we talk about pass@1, we're evaluating the models on just one generated solution per problem. However, solving 164 programming questions in Python is not everything you would expect from a code model. There are translations of HumanEval to other programming languages, but that's still not enough. E.g. code explanation, docstring generation, code infilling, SO questions, writing tests, etc., are not captured by HumanEval. Real-world usage of code models is not captured by a single number based on 164 programs! Don't get me wrong, the results are very promising and exciting, but it's also important to be pragmatic. Real-world usage of code models has lots of nuances and expectations. There is lots of ongoing work to improve code benchmarking. Remember that Code Llama has just been out for 48 hours. Lots of exciting things will keep popping up, and there is also lots of work to be done on the tooling side.
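For concreteness, pass@k is normally computed with the unbiased estimator from the HumanEval/Codex paper: sample n solutions per problem, count the c that pass the unit tests, and estimate the probability that at least one of k drawn solutions passes. A small sketch:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate for one problem: n samples generated, c passed the tests."""
    if n - c < k:
        return 1.0  # a correct sample is guaranteed in any draw of k
    # 1 - C(n-c, k) / C(n, k), written in a numerically stable product form
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 20 samples for one problem, 3 passed -> estimated pass@1 and pass@10
print(pass_at_k(20, 3, 1), pass_at_k(20, 3, 10))
```

The benchmark score is then the mean of this quantity over the 164 problems, which is exactly why a single headline number can hide so much about real-world behavior.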
2023-08-26T14:11:25
https://www.reddit.com/r/LocalLLaMA/comments/161waft/humaneval_as_an_accurate_code_benchmark/
hackerllama
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161waft
false
null
t3_161waft
/r/LocalLLaMA/comments/161waft/humaneval_as_an_accurate_code_benchmark/
false
false
self
1
{'enabled': False, 'images': [{'id': 'us1qDPx32J9tjdXfRB2t2-u8k8iY20kTW72OUw0wmOE', 'resolutions': [{'height': 49, 'url': 'https://external-preview.redd.it/KiCGEzgnKY03BLJVS9bsRRpn-lZiqHc5Uy0Wo_2-PDA.jpg?width=108&crop=smart&auto=webp&s=4e76900838bc2785e22d6185e46ebf8f2edeb0e7', 'width': 108}], 'source': {'height': 64, 'url': 'https://external-preview.redd.it/KiCGEzgnKY03BLJVS9bsRRpn-lZiqHc5Uy0Wo_2-PDA.jpg?auto=webp&s=3170beb4bb1b53b5cbfad7cf63ef346c9765ec09', 'width': 140}, 'variants': {}}]}
Has anyone tried running LLaMA inference or fine tuning on Nvidia AGX Orin boards? 64GB of unified memory w/ Ampere GPU for ~$2k. Seems... reasonable?
1
The community is abuzz about how awesome Apple's M2 unified memory architecture is for running inference. As far as I know you can't train with that, though. I know Nvidia Jetson boards are used for training in other domains all the time, specifically computer vision. I feel like you could probably fine-tune an LLM with the AGX Orin (in addition to inference), but it's not like I have a few to play with. Do any of you know of any LLM experiments done with this hardware? https://store.nvidia.com/en-us/jetson/store/?page=1&limit=9&locale=en-us
2023-08-26T14:34:13
https://www.reddit.com/r/LocalLLaMA/comments/161wu6q/has_anyone_tried_running_llama_inference_or_fine/
my_name_is_reed
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161wu6q
false
null
t3_161wu6q
/r/LocalLLaMA/comments/161wu6q/has_anyone_tried_running_llama_inference_or_fine/
false
false
self
1
null
Parameters when using context free grammar
1
Was really excited to see this change ([https://github.com/ggerganov/llama.cpp/pull/1773](https://github.com/ggerganov/llama.cpp/pull/1773)) allowing the use of a formal grammar. One thing I'm noticing though (Llama 2, 7B) is that the quality of the output deteriorates pretty quickly the more restrictive the grammar is (for instance, if only allowing certain sets of words or letters, etc.), which is expected since we are restricting the model. Any advice on choosing parameters to limit degradation?
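For reference, a sketch of how the grammar and the sampling parameters are passed together with the llama-cpp-python bindings, so the two can be tuned against each other. The `LlamaGrammar` helper and the `grammar=` argument are assumed from the library's grammar support, and the model path and GBNF rules are placeholders.

```python
# Sketch: grammar-constrained generation with llama-cpp-python. LlamaGrammar and
# the grammar= argument are assumed from the library's grammar support; the
# model path and the GBNF rules are placeholders for illustration.
from llama_cpp import Llama, LlamaGrammar

llm = Llama(model_path="./models/llama-2-7b.Q4_K_M.gguf", n_ctx=2048)

# A very restrictive grammar: only three allowed answers.
grammar = LlamaGrammar.from_string(r'''
root ::= ("Yes" | "No" | "Maybe") "."
''')

out = llm(
    "Is the sky blue? Answer briefly.",
    grammar=grammar,
    temperature=0.7,  # ordinary sampling; the grammar already limits the token set
    top_p=0.95,
    max_tokens=8,
)
print(out["choices"][0]["text"])
```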
2023-08-26T14:37:10
https://www.reddit.com/r/LocalLLaMA/comments/161wwse/parameters_when_using_context_free_grammar/
bespectacledzombie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161wwse
false
null
t3_161wwse
/r/LocalLLaMA/comments/161wwse/parameters_when_using_context_free_grammar/
false
false
self
1
{'enabled': False, 'images': [{'id': 'bSia8r2-Mt2WHuTReUy_Xx9Nr9_gK3rsPVgKhJgPBOs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TlWa7dBQ9ctXpRXsQYlLjK5CYr2T9wFB-Fxb_gGVsrY.jpg?width=108&crop=smart&auto=webp&s=89c29ed5efb6bcd418746b781dc400127952c748', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/TlWa7dBQ9ctXpRXsQYlLjK5CYr2T9wFB-Fxb_gGVsrY.jpg?width=216&crop=smart&auto=webp&s=d85745a802691ed1b5ff33c526b3d2872cf50fdd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/TlWa7dBQ9ctXpRXsQYlLjK5CYr2T9wFB-Fxb_gGVsrY.jpg?width=320&crop=smart&auto=webp&s=8e3f300b4ddd243bd45d5c653161a2a00554ad6f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/TlWa7dBQ9ctXpRXsQYlLjK5CYr2T9wFB-Fxb_gGVsrY.jpg?width=640&crop=smart&auto=webp&s=2084f8cd826cd35ace9bf3050e58bb92adb1be32', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/TlWa7dBQ9ctXpRXsQYlLjK5CYr2T9wFB-Fxb_gGVsrY.jpg?width=960&crop=smart&auto=webp&s=e05474c734dac67697a7b96413982784a9518bcb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/TlWa7dBQ9ctXpRXsQYlLjK5CYr2T9wFB-Fxb_gGVsrY.jpg?width=1080&crop=smart&auto=webp&s=cbf6ab3042dc8c6784a4ce290b4c215e172a0be2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/TlWa7dBQ9ctXpRXsQYlLjK5CYr2T9wFB-Fxb_gGVsrY.jpg?auto=webp&s=0023b43d59b9e43a89d8fff55c9df56285b17544', 'width': 1200}, 'variants': {}}]}
These finetunes that get released literally 5 minutes after a new cool base model are lame af and just try to claim all the hype for themselves.
1
Talking about this Wizard finetune of Code Llama. Releasing a finetune of a model that literally got released a minute ago implies that either A) the creators of the original model, in this case Meta, are very inefficient and bad at constructing base models, OR B) some people just take models that are constructed in a meaningful and efficient way, finetune them on common evals, and rebrand them as their own product so they can say "We beat GPT-4." The reality is, if it were plausible to beat GPT-4 with a model almost 100x smaller, you can bet that Meta would figure that out themselves, not some sketchy finetuning people. After all, finetuning is a destructive process. Seriously enhancing a model through finetuning beyond its original capabilities might be possible, but you would need a shitton of money and resources for that, and you can bet that Meta already did their best to make the best model possible in relation to costs.
2023-08-26T15:22:28
https://www.reddit.com/r/LocalLLaMA/comments/161y1ho/these_finetunes_that_get_released_literally_5/
BeginningInfluence55
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161y1ho
false
null
t3_161y1ho
/r/LocalLLaMA/comments/161y1ho/these_finetunes_that_get_released_literally_5/
false
false
self
1
null
How do I make an LLM forget things (for RP purposes)
1
We have huge models like LLaMA 1/2. I'm curious: is there a way to decrease their world knowledge? Like if I want to create an LLM that has some basic knowledge, like what air, gravity, a human, etc. are, but doesn't know anything about, for example, modern topics: advanced medicine, programming, who Elon Musk is, and so on... Moreover, it should be able to know things about some new stuff (that's just fine-tuning on this data): {Something from Elden Ring/Skyrim lore here} For example: right now, if I ask about something the LLM has no clue about, it can start hallucinating... and making things up... To sum up: I want to fine-tune/train a model with lore AND remove any modern topics (not using prompts, because they are kinda unstable).
2023-08-26T15:42:01
https://www.reddit.com/r/LocalLLaMA/comments/161yjid/how_do_i_make_llm_forget_thingsfor_rp_purposes/
RETRJ_ru
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161yjid
false
null
t3_161yjid
/r/LocalLLaMA/comments/161yjid/how_do_i_make_llm_forget_thingsfor_rp_purposes/
false
false
self
1
null
Cost effective GPU hosting service & GPU recommendations needed!
1
I'm trying to experiment with different LLMs for a specific use case (text generation) and I need a cost-effective GPU to host, SSH into, and run Python on to test different models. I need recommendations for two things:

1. Which service can I use (think DigitalOcean or something along those lines)?
2. What's a good enough GPU that'll allow testing / inference?

For testing / inference, it doesn't have to be blazing fast, but I'm planning on using some of the Llama 2 models ranging from 13B to 50B+ parameters (I'll also test quantized and non-quantized versions too). So I'm not entirely sure which service to use and what a powerful enough GPU is.

I don't mind waiting a few seconds to generate an answer, but if it starts to go over a minute or so then it can be annoying. For those that have experience with this, what would you suggest?
2023-08-26T15:52:14
https://www.reddit.com/r/LocalLLaMA/comments/161yshw/cost_effective_gpu_hosting_service_gpu/
anasp1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161yshw
false
null
t3_161yshw
/r/LocalLLaMA/comments/161yshw/cost_effective_gpu_hosting_service_gpu/
false
false
self
1
null
Question about viewing metrics from text generation webui
1
[removed]
2023-08-26T16:16:30
https://www.reddit.com/r/LocalLLaMA/comments/161zeoz/question_about_viewing_metrics_from_text/
JuanRehable
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161zeoz
false
null
t3_161zeoz
/r/LocalLLaMA/comments/161zeoz/question_about_viewing_metrics_from_text/
false
false
self
1
null
How to Utilize 100k context length with CodeLlama on Oobabooga?
1
"The Code Llama models provide stable generations with up to 100,000 tokens of context. All models are trained on sequences of 16,000 tokens and show improvements on inputs with up to 100,000 tokens." https://ai.meta.com/blog/code-llama-large-language-model-coding/ How can I take advantage of 100k context length on Oogabooga? Do I just specify --max_seq_len 100000?
2023-08-26T16:31:00
https://www.reddit.com/r/LocalLLaMA/comments/161zrr0/how_to_utilize_100k_context_length_with_codellama/
jl303
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161zrr0
false
null
t3_161zrr0
/r/LocalLLaMA/comments/161zrr0/how_to_utilize_100k_context_length_with_codellama/
false
false
self
1
{'enabled': False, 'images': [{'id': 'KTYHmNx2qAX2aajSppARuZdjm7U6R8FVu18DEezObn4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/W2UPfwU-hpBpiU1ICxwNoD5K7PmqCJAUeNexmqk7_Yw.jpg?width=108&crop=smart&auto=webp&s=dc1f937f123a14abea0255c75c55a6d2acd7275b', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/W2UPfwU-hpBpiU1ICxwNoD5K7PmqCJAUeNexmqk7_Yw.jpg?width=216&crop=smart&auto=webp&s=d1b5286e5dfae9dd67dba9ca48330094c0b16e51', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/W2UPfwU-hpBpiU1ICxwNoD5K7PmqCJAUeNexmqk7_Yw.jpg?width=320&crop=smart&auto=webp&s=37267c920901d90d9a9ba3ab0e919e19e2bebeb7', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/W2UPfwU-hpBpiU1ICxwNoD5K7PmqCJAUeNexmqk7_Yw.jpg?width=640&crop=smart&auto=webp&s=57d8a05518aa423d4263a1d9e4fac864fc0e0f8b', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/W2UPfwU-hpBpiU1ICxwNoD5K7PmqCJAUeNexmqk7_Yw.jpg?width=960&crop=smart&auto=webp&s=fe6fcae95723cee5cd64ff983e3a3c6e37ae1d8d', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/W2UPfwU-hpBpiU1ICxwNoD5K7PmqCJAUeNexmqk7_Yw.jpg?width=1080&crop=smart&auto=webp&s=78419655f84844241544ed8d3dc1179e46c8774f', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/W2UPfwU-hpBpiU1ICxwNoD5K7PmqCJAUeNexmqk7_Yw.jpg?auto=webp&s=ba3a3e1e56624d49c32df4990886ea7170e2bbec', 'width': 1920}, 'variants': {}}]}
Imagine in maybe 5 years (wild guess), you could run a very good 13b model on a mobile phone locally - this would be such a valuable tool
1
Imagine a finetuned multimodal model that is specialized in urban survival, bushcrafting, first aid, shelter building and wild plants/mushrooms. Together with the other tools the phone has (compass, maps, camera) this would be a really overpowered gadget in almost any survival/apocalyptic scenario. Imagine finding a mushroom, quickly taking a photo, and the LLM tells you whether it is safe to eat. Or asking it for shelter-building tips, or traps, or showing it an injury on your leg and asking it what to do, or what herbs to use on it. Or in an urban scenario, giving you a quick crash-course tutorial on how to pick a lock, or just giving you some good tips exactly for your specific scenario. All of this would work offline, and you might even have some decent voice input and output, so you wouldn’t even need to hold it in your hand and could just use headphones. You could basically be a total noob and still survive a decent amount of time in most scenarios with this phone. Granted, you would need a power supply, but there are indeed portable solar panels that do a sufficient job.
2023-08-26T16:41:57
https://www.reddit.com/r/LocalLLaMA/comments/16201b8/imagine_in_maybe_5_years_wild_guess_you_could_run/
BeginningInfluence55
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16201b8
false
null
t3_16201b8
/r/LocalLLaMA/comments/16201b8/imagine_in_maybe_5_years_wild_guess_you_could_run/
false
false
self
1
null
stumbled on a very interesting post while going through the CodeLlama paper (bottom of pg.18). 2 billion context window?!
1
2023-08-26T16:50:51
https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have/?utm_source=share&utm_medium=web2x&context=3
LyPreto
reddit.com
1970-01-01T00:00:00
0
{}
16208wi
false
null
t3_16208wi
/r/LocalLLaMA/comments/16208wi/stumbled_on_a_very_interesting_post_while_going/
false
false
https://a.thumbs.redditm…E4zGOMu0FXn4.jpg
1
{'enabled': False, 'images': [{'id': 'z2HdRfGrX_QS4_TnwDeHjTgrpOd2uGmfmEZQf63iZWI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/E9s1YS_pvBGEZmSqZIuanbwW6PusBWiPmN9jS6rO-xo.jpg?width=108&crop=smart&auto=webp&s=d840bf220765e7b6df8c36771f071c82dc53eee4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/E9s1YS_pvBGEZmSqZIuanbwW6PusBWiPmN9jS6rO-xo.jpg?width=216&crop=smart&auto=webp&s=714db9b135c12543746691b8a956acfd07122580', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/E9s1YS_pvBGEZmSqZIuanbwW6PusBWiPmN9jS6rO-xo.jpg?width=320&crop=smart&auto=webp&s=e1a8f89ae830c69fa429ef112b425aba1b64bdf2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/E9s1YS_pvBGEZmSqZIuanbwW6PusBWiPmN9jS6rO-xo.jpg?width=640&crop=smart&auto=webp&s=31e2c79449868e179793a1f2d70f5d78de751d08', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/E9s1YS_pvBGEZmSqZIuanbwW6PusBWiPmN9jS6rO-xo.jpg?width=960&crop=smart&auto=webp&s=262b4daf154aadda8f746529eb973650ecbe9e01', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/E9s1YS_pvBGEZmSqZIuanbwW6PusBWiPmN9jS6rO-xo.jpg?width=1080&crop=smart&auto=webp&s=700bfff52f422ffd0ff53c1ea12551bbdee98a62', 'width': 1080}], 'source': {'height': 1012, 'url': 'https://external-preview.redd.it/E9s1YS_pvBGEZmSqZIuanbwW6PusBWiPmN9jS6rO-xo.jpg?auto=webp&s=c2f80796e75ceb2043e71b915e84ad78ae348afa', 'width': 2024}, 'variants': {}}]}
Couple of questions about LLaMA2 or LLMs in general
1
Hi everyone, every day there are more and more customized models of LLaMA 2 coming out. It is hard to keep up. I watched a YouTube video about Airoboros 70B based on L2, downloaded it, and tried to use it with the Oobabooga web UI. The experience was slightly confusing and not so great, probably because of my limited hardware. So here are my questions: 1. Which model right now is quite good and uncensored? I have an RTX 3080 Ti; what model would be friendly towards my system? 2. Similar to Q1: are there better models than LLaMA 2? 3. For future reference, what PC hardware should I upgrade to get better compatibility when it comes to LLMs? 4. Do custom models get updated? Please feel free to add more pointers for a newb like myself to understand a bit more about what is going on with LLMs. Thanks!
2023-08-26T17:18:05
https://www.reddit.com/r/LocalLLaMA/comments/1620www/couple_of_questions_about_llama2_or_llms_in/
consig1iere
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1620www
false
null
t3_1620www
/r/LocalLLaMA/comments/1620www/couple_of_questions_about_llama2_or_llms_in/
false
false
self
1
null
codellama-34b-instruct.Q8_0.gguf vs GPT-4, telling me what a chunk of code does. The difference is clear.
1
[removed]
2023-08-26T17:32:27
https://www.reddit.com/r/LocalLLaMA/comments/16219zo/codellama34binstructq8_0gguf_vs_gpt4_telling_me/
w7gg33h
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16219zo
false
null
t3_16219zo
/r/LocalLLaMA/comments/16219zo/codellama34binstructq8_0gguf_vs_gpt4_telling_me/
false
false
self
1
null
I’d like to use a proper IDE on an underpowered laptop and connect to powerful remote resources for a good dev experience. Any thoughts or guidance
1
I’m toying with the idea of buying a MacBook Air, and then utilizing PyCharm to connect to a more powerful remote server to run/test/debug code. Does anyone have any ideas? I don’t have any experience with tools like Colab, Lightning.AI, etc. but would be happy to learn for the right developer experience. Thanks in advance!
2023-08-26T18:09:32
https://www.reddit.com/r/LocalLLaMA/comments/16227c4/id_like_to_use_a_proper_ide_on_an_underpowered/
shadowcorp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16227c4
false
null
t3_16227c4
/r/LocalLLaMA/comments/16227c4/id_like_to_use_a_proper_ide_on_an_underpowered/
false
false
self
1
null
Need Help Modifying a Fine-Tuning Script for Different Llama Models - Beginner in NLP
1
Hello everyone, I'm a beginner in the field of machine learning and NLP, and I'm currently working on fine-tuning some transformer models for a specific task. I have a Python script that successfully fine-tunes the [Llama-2-7b-chat-hf model](https://huggingface.co/daryl149/llama-2-7b-chat-hf), but I'm having trouble adapting it to work with larger models like [Llama-2-70B-GPTQ](https://huggingface.co/TheBloke/Llama-2-70B-GPTQ) and [Llama-2-70B-GGML](https://huggingface.co/TheBloke/Llama-2-70B-GGML). Here's the code I'm currently using: from datasets import load_dataset import torch from transformers import AutoModelForCausalLM, BitsAndBytesConfig, AutoTokenizer, TrainingArguments from peft import LoraConfig from trl import SFTTrainer # Load training and validation data train_data = load_dataset('json', data_files={'train': 'train_data.jsonl'}) val_data = load_dataset('json', data_files={'validation': 'val_data.jsonl'}) print(train_data) base_model_name = "daryl149/llama-2-7b-chat-hf" bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.float16, ) device_map = {"": 0} base_model = AutoModelForCausalLM.from_pretrained( base_model_name, quantization_config=bnb_config, device_map=device_map, trust_remote_code=True, use_auth_token="your_auth_token_here" ) base_model.config.use_cache = False base_model.config.pretraining_tp = 1 peft_config = LoraConfig( lora_alpha=16, lora_dropout=0.1, r=64, bias="none", task_type="CAUSAL_LM", ) tokenizer = AutoTokenizer.from_pretrained(base_model_name, trust_remote_code=True) tokenizer.pad_token = tokenizer.eos_token output_dir = "./results" training_args = TrainingArguments( output_dir=output_dir, per_device_train_batch_size=4, gradient_accumulation_steps=4, learning_rate=2e-4, logging_steps=10, max_steps=500 ) max_seq_length = 512 train_dataset = train_data["train"] trainer = SFTTrainer( model=base_model, train_dataset=train_dataset, peft_config=peft_config, dataset_text_field="train", max_seq_length=max_seq_length, tokenizer=tokenizer, args=training_args, ) trainer.train() import os output_dir = os.path.join(output_dir, "final_checkpoint") trainer.model.save_pretrained(output_dir) I would greatly appreciate any guidance on what changes I need to make to this script to make it compatible with the larger models I mentioned. Thank you in advance! &#x200B; **Additional Note:** I also had a different code for the [Llama-2-70B-GPTQ model](https://huggingface.co/TheBloke/Llama-2-70B-GPTQ), but I encountered the following error message: Traceback (most recent call last): File "/workspace/fine-tuning1.py", line 12, in <module> model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16) ... ValueError: Found modules on cpu/disk. Using Exllama backend requires all the modules to be on GPU. You can deactivate exllama backend by setting `disable_exllama=True` in the quantization config object. I was unable to resolve this issue despite using 2x RTX 3090 GPUs. The script didn't utilize the GPUs and generated the above error. I had to abandon that approach, and that's why I'm looking for help to modify my current script.
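Regarding the GPTQ error specifically: the traceback itself points at the fix. Below is a hedged sketch of loading a GPTQ checkpoint with the exllama kernels disabled (this path needs reasonably recent transformers, optimum and auto-gptq; treat the exact versions and the device split as assumptions to verify on your machine):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_name = "TheBloke/Llama-2-70B-GPTQ"

# The exllama kernels require every module to live on the GPU; once layers are
# offloaded to CPU/disk they must be disabled, which is what the error asks for.
quantization_config = GPTQConfig(bits=4, disable_exllama=True)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=quantization_config,
    device_map="auto",          # let accelerate split the model across both 3090s
    torch_dtype=torch.float16,
)
```

Note that this only covers loading: for QLoRA-style training of the 70B model, the usual route is the unquantized repo loaded with the bitsandbytes 4-bit config you already have, rather than a GPTQ file; training on top of GPTQ weights is possible with PEFT but it is a different setup.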
2023-08-26T18:10:05
https://www.reddit.com/r/LocalLLaMA/comments/16227u2/need_help_modifying_a_finetuning_script_for/
vnvrx1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16227u2
false
null
t3_16227u2
/r/LocalLLaMA/comments/16227u2/need_help_modifying_a_finetuning_script_for/
false
false
self
1
{'enabled': False, 'images': [{'id': 'uJlW7E3Uu4AcDyB91B9VHoCFE7V__M9YHOyuD2GeKRA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TYHFt9tK2gAOjkXEoDTQZ8d-d4kv7ou8LLdsTt8SVYE.jpg?width=108&crop=smart&auto=webp&s=9f18fbc5f65f6ec5edda6c14e4a71d90de6d6572', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/TYHFt9tK2gAOjkXEoDTQZ8d-d4kv7ou8LLdsTt8SVYE.jpg?width=216&crop=smart&auto=webp&s=bbbc84e21389bc02fc80327209df4fcb6cd4c933', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/TYHFt9tK2gAOjkXEoDTQZ8d-d4kv7ou8LLdsTt8SVYE.jpg?width=320&crop=smart&auto=webp&s=acd27157a20fa7366cc086091405c62373873490', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/TYHFt9tK2gAOjkXEoDTQZ8d-d4kv7ou8LLdsTt8SVYE.jpg?width=640&crop=smart&auto=webp&s=c965a1ad6f857ac715579d6bc02e08bb56aee667', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/TYHFt9tK2gAOjkXEoDTQZ8d-d4kv7ou8LLdsTt8SVYE.jpg?width=960&crop=smart&auto=webp&s=964d5c4f9ac1703ffc1c60bb3288216606527cdc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/TYHFt9tK2gAOjkXEoDTQZ8d-d4kv7ou8LLdsTt8SVYE.jpg?width=1080&crop=smart&auto=webp&s=72a316105551ae3906e7e3a95c9293769f808010', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/TYHFt9tK2gAOjkXEoDTQZ8d-d4kv7ou8LLdsTt8SVYE.jpg?auto=webp&s=39dfca2d924537ae0fba5640afc2c6c91ae6b7f4', 'width': 1200}, 'variants': {}}]}
Is anyone using HF Transformers library for inference?
1
It is a really nice library to use but I keep running into speed issues :( Trying to run llama 2 on an a100 I'm hoping there is some setup out there using the transformers library that matches ggml, vllm, exllama, etc
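For reference, a minimal sketch of the usual first steps before giving up on plain transformers (half-precision weights, KV cache on, inference mode); the checkpoint name is just an example. Even with these, dedicated inference stacks (vLLM, exllama, llama.cpp) will usually still win on throughput because of fused kernels and continuous batching.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # example checkpoint, swap for yours

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # load in fp16 instead of the fp32 default
    device_map="auto",
)
model.eval()

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=128, use_cache=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```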
2023-08-26T18:40:17
https://www.reddit.com/r/LocalLLaMA/comments/1622zxi/is_anyone_using_hf_transformers_library_for/
natural_language_guy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1622zxi
false
null
t3_1622zxi
/r/LocalLLaMA/comments/1622zxi/is_anyone_using_hf_transformers_library_for/
false
false
self
1
null
Few lm questions for building a auto-generated concept "game" to show students power of genAi, mixing a language model and stable diffusion
5
Hello, I have come up with an idea for a concept to inspire my students and others with the power of generative AI for games and simulations. I have in mind a text- and image-based space roleplaying game where a language model will either generate background info (planet names and info, crew members, alien races and names, and so on) or handle the communication with NPCs and the guiding of the story. Earlier today I already set up a Python API for generating planets, species and ships linked to ComfyUI (Stable Diffusion), but I'm thinking about making it in JS instead. My questions to the smart LLM people here: What is the smallest language model that will be able to generate the background info and/or the guiding part? What I mean is hopefully at most 10-15 GB and able to run on a 2060 GPU with 16 GB of RAM. Should I (is it easy or even doable) integrate my game with Oobabooga or another interface, or work directly with PyTorch files? I'm really excited by this project, hope you are as well! Best regards
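One way to wire this up, sketched below, is to keep the game in Python and treat the text model the same way you already treat ComfyUI: as a local HTTP service. The endpoint and payload assume text-generation-webui's API extension is enabled (historically `python server.py --api`, blocking endpoint on port 5000); routes have changed between versions, so check your install. On a 2060 with 16 GB of RAM, a quantized 7B model is a realistic starting point for background lore and simple NPC dialogue.

```python
import requests

API_URL = "http://localhost:5000/api/v1/generate"  # assumption: default blocking API route

def generate_planet(name: str) -> str:
    prompt = (
        "You are the narrator of a text-based space roleplaying game.\n"
        f"Describe the planet '{name}' in three sentences: climate, "
        "dominant species, and one hidden danger.\n"
    )
    payload = {"prompt": prompt, "max_new_tokens": 200, "temperature": 0.8}
    response = requests.post(API_URL, json=payload, timeout=120)
    response.raise_for_status()
    return response.json()["results"][0]["text"]

if __name__ == "__main__":
    print(generate_planet("Veyra-7"))  # hypothetical planet name
```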
2023-08-26T19:16:57
https://www.reddit.com/r/LocalLLaMA/comments/1623xyn/few_lm_questions_for_building_a_autogenerated/
dupido
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1623xyn
false
null
t3_1623xyn
/r/LocalLLaMA/comments/1623xyn/few_lm_questions_for_building_a_autogenerated/
false
false
self
5
null
Codellama Prompts
1
I see a lot of people having issues with Codellama. I'm no expert and cannot solve all your issues, but I can provide one piece of the puzzle: prompting! Neither a quick google nor the huggingface model contain the prompting information for some reason. So I did the unthinkable and read the paper; below is an excerpt about model evaluation, try some of these out and see how they work for you. I've been having success at programming and non programming tasks using the figure 12 prompt and the zero-shot prompt. Edit them and swap them around to see how they work! This is just a starting point. Feel free to share other generation tips below. My favorite for non coding has been: [INST] You are an expert Python programmer and personal assistant, here is your task: {prompt} Your answer should start with a [PYTHON] tag and end with a [/PYTHON] tag. [/INST] Not very complex, simple and short modification of one from the paper, getting awesome results from 7b and 13b models. I added the \[INST\] tags because they seem to help, as well as keeping the \[PYTHON\] tags, even for non-programming. Occasionally 7b needs a reminder or censorship bypass such as `Sure thing, here is a summary of the given text:` after `[/INST]`. Just fiddle with it yourself and share in the comments. &#x200B; (From *Code Llama: Open Foundation Models for Code* pg. 32, formatted for reddit) **G.2 Evaluation prompts** Prompt: [INST] Your task is to write 5 tests to check the correctness of a function that solves a programming problem. The tests must be between [TESTS] and [/TESTS] tags. You must write the comment "#Test case n:" on a separate line directly above each assert statement, where n represents the test case number, starting from 1 and increasing by one for each subsequent test case. Problem: Write a Python function to get the unique elements of a list. [/INST] [TESTS] # Test case 1: assert get_unique_elements([]) == [] # Test case 2: assert get_unique_elements([1]) == [1] # Test case 3: assert get_unique_elements([1, 2, 3, 2, 1]) == [1, 2, 3] [/TESTS] [INST] Problem: %%%question%%% [/INST] *Figure 10: Prompt template used to generate unit tests. The substring %%%question%%% is a placeholder for an interview-style programming question we replace at runtime*. &#x200B; Prompt: [INST] Your task is to write a Python function to solve a programming problem. The Python code must be between [PYTHON] and [/PYTHON] tags. You are given one example test from which you can infere the function signature. Problem: Write a Python function to get the unique elements of a list. Test: assert get_unique_elements([1, 2, 3, 2, 1]) == [1, 2, 3] [/INST] [PYTHON] def get_unique_elements(my_list): return list(set(my_list)) [/PYTHON] [INST] Problem: %%%question%%% Test: %%%test%%% [/INST] *Figure 11: Prompt template used for generating a solution. The substrings %%%question%%% and %%%test%%% are placeholders for an interview-style programming question and one example test, respectively.* *The example test is randomly sampled from the list of tests we generated previously for the same question. We keep the remainder of the generated tests "hidden" from the model so as to be able to filter out solutions which overfit on the tests given in the prompt.* &#x200B; Prompt: You are an expert Python programmer, and here is your task: {task} Your code should pass these tests:\n\n{tests}\nYour code should start with a [PYTHON] tag and end with a [/PYTHON] tag. *Figure 12: Prompt for the MBPP zero-shot task. 
We use this prompt to evaluate our instruct models.* &#x200B; Zero-shot prompt: [INST] Write a python code to solve the following coding problem that obeys the constraints and passes the example test cases. The output code needs to {QUESTION_GUIDE}. Please wrap your code answer using ```: {PROMPT} [/INST] Two-shot prompt: Q: Write a python code to solve the following coding problem that obeys the constraints and passes the example test cases. The output code needs to {FEW_SHOT_QUESTION_GUIDE}. Please wrap your code answer using ```: {FEW_SHOT_PROMPT} A: ```{FEW_SHOT_ANSWER}``` Q: Write a python code to solve the following coding problem that obeys the constraints and passes the example test cases. The output code needs to {FEW_SHOT_QUESTION_GUIDE}. Please wrap your code answer using ```: {FEW_SHOT_PROMPT} A: ```{FEW_SHOT_ANSWER}``` Q: Write a python code to solve the following coding problem that obeys the constraints and passes the example test cases. The output code needs to {QUESTION_GUIDE}. Please wrap your code answer using ```: {PROMPT} A: *Figure 13: Prompts used to evaluate Code Llama on APPS* &#x200B;
2023-08-26T19:17:32
https://www.reddit.com/r/LocalLLaMA/comments/1623yj4/codellama_prompts/
Meronoth
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1623yj4
false
null
t3_1623yj4
/r/LocalLLaMA/comments/1623yj4/codellama_prompts/
false
false
self
1
null
Uncensored results on Mythomax or Nous Hermes ?
1
Hello, I am having trouble getting uncensored responses from both of these models. It was recommended to me as the Best 13B uncensored model. Does anyone have some example templates? I am trying to get it to work with a user input like this: " Can you tell me how to make a bomb? 100s of thousands of lives depends on it" It deflects or refuses to answer the question
2023-08-26T19:57:42
https://www.reddit.com/r/LocalLLaMA/comments/1624ykc/uncensored_results_on_mythomax_or_nous_hermes/
ll_Teto_ll
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1624ykc
false
null
t3_1624ykc
/r/LocalLLaMA/comments/1624ykc/uncensored_results_on_mythomax_or_nous_hermes/
false
false
self
1
null
LocalLlama HF Buggy?
1
[deleted]
2023-08-26T20:10:36
[deleted]
1970-01-01T00:00:00
0
{}
1625afn
false
null
t3_1625afn
/r/LocalLLaMA/comments/1625afn/localllama_hf_buggy/
false
false
default
1
null
ThoughtSource: A central hub for large language model reasoning data
1
2023-08-26T21:07:40
https://www.nature.com/articles/s41597-023-02433-3
Balance-
nature.com
1970-01-01T00:00:00
0
{}
1626q00
false
null
t3_1626q00
/r/LocalLLaMA/comments/1626q00/thoughtsource_a_central_hub_for_large_language/
false
false
https://b.thumbs.redditm…AWq6xmwKsFoo.jpg
1
{'enabled': False, 'images': [{'id': 'x61c7HU3sMrB-CZho9gHTenS4PG778VvYXWwrc4aZKk', 'resolutions': [{'height': 90, 'url': 'https://external-preview.redd.it/fe3yTomcsh1UNefAaJ_n1OIsR1QHr3i6cn68tzjJNJ0.jpg?width=108&crop=smart&auto=webp&s=3e77edde901e76d5c50e6979ced2d0fda341d145', 'width': 108}, {'height': 181, 'url': 'https://external-preview.redd.it/fe3yTomcsh1UNefAaJ_n1OIsR1QHr3i6cn68tzjJNJ0.jpg?width=216&crop=smart&auto=webp&s=1d831a0b46bf79b6a735f35f5a5f5c8c3eb9485b', 'width': 216}, {'height': 268, 'url': 'https://external-preview.redd.it/fe3yTomcsh1UNefAaJ_n1OIsR1QHr3i6cn68tzjJNJ0.jpg?width=320&crop=smart&auto=webp&s=5acd2a1582cdb73f0dd59a3ee665e7f00bda5f0e', 'width': 320}, {'height': 537, 'url': 'https://external-preview.redd.it/fe3yTomcsh1UNefAaJ_n1OIsR1QHr3i6cn68tzjJNJ0.jpg?width=640&crop=smart&auto=webp&s=1ed3007a6eb6c6e61507047b077df09aba93e822', 'width': 640}], 'source': {'height': 575, 'url': 'https://external-preview.redd.it/fe3yTomcsh1UNefAaJ_n1OIsR1QHr3i6cn68tzjJNJ0.jpg?auto=webp&s=49e997c1383341d7072aad9a4432972ceeb73918', 'width': 685}, 'variants': {}}]}
Can any LLMs get this correct?
1
A password is considered strong if the below conditions are all met: * It has at least 6 characters and at most 20 characters. * It contains at least one lowercase letter, at least one uppercase letter, and at least one digit. * It does not contain three repeating characters in a row (i.e., "Baaabb0" is weak, but "Baaba0" is strong). Given a string password, return the minimum number of steps required to make password strong. if password is already strong, return 0. In one step, you can: * Insert one character to password, * Delete one character from password, or * Replace one character of password with another character. Example 1: >Input: password = "a" > >Output: 5 Example 2: >Input: password = "aA1" > >Output: 3 Example 3: >Input: password = "1337C0d3" > >Output: 0 Constraints: * 1 <= password.length <= 50 * password consists of letters, digits, dot '.' or exclamation mark '!'. Solve this problem with Python.
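For checking model answers against something concrete, here is a sketch of the standard greedy solution to this problem (it is LeetCode's "Strong Password Checker"); worth validating on extra edge cases before treating it as ground truth.

```python
def strong_password_checker(password: str) -> int:
    n = len(password)
    missing = 3 - (any(c.islower() for c in password)
                   + any(c.isupper() for c in password)
                   + any(c.isdigit() for c in password))

    if n < 6:
        # Each insertion can also supply a missing character type.
        return max(missing, 6 - n)

    # Count replacements needed to break runs of 3+ identical characters,
    # and remember runs whose length is 0 or 1 mod 3 (cheapest to shorten).
    replace, ones, twos = 0, 0, 0
    i = 2
    while i < n:
        if password[i] == password[i - 1] == password[i - 2]:
            length = 2
            while i < n and password[i] == password[i - 1]:
                length += 1
                i += 1
            replace += length // 3
            if length % 3 == 0:
                ones += 1
            elif length % 3 == 1:
                twos += 1
        else:
            i += 1

    if n <= 20:
        return max(missing, replace)

    # Too long: every extra character must be deleted; spend deletions where
    # they also remove a needed replacement (runs of length 3k first, then 3k+1).
    delete = n - 20
    replace -= min(delete, ones)
    replace -= min(max(delete - ones, 0), twos * 2) // 2
    replace -= max(delete - ones - twos * 2, 0) // 3
    return delete + max(missing, replace)


assert strong_password_checker("a") == 5
assert strong_password_checker("aA1") == 3
assert strong_password_checker("1337C0d3") == 0
```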
2023-08-26T21:11:10
https://www.reddit.com/r/LocalLLaMA/comments/1626ta8/can_any_llms_get_this_correct/
bot-333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1626ta8
false
null
t3_1626ta8
/r/LocalLLaMA/comments/1626ta8/can_any_llms_get_this_correct/
false
false
self
1
null
A bit of trouble installing Vicuna 13b, anyone else run into this?
1
Below is the output from my terminal. For some reason, bash doesn't recognize the make command on Manjaro KDE. Now I am a bit of a Linux newb, so maybe I'm missing something here:

[jersten@LinuxRig ~]$ git clone https://github.com/fredi-python/llama.cpp.git && cd llama.cpp && make -j && cd models && wget -c https://huggingface.co/CRD716/ggml-vicuna-1.1-quantized/resolve/main/ggml-vicuna-13B-1.1-q5_1.bin
Cloning into 'llama.cpp'...
remote: Enumerating objects: 3411, done.
remote: Counting objects: 100% (3411/3411), done.
remote: Compressing objects: 100% (1107/1107), done.
remote: Total 3411 (delta 2303), reused 3309 (delta 2267), pack-reused 0
Receiving objects: 100% (3411/3411), 3.05 MiB | 5.25 MiB/s, done.
Resolving deltas: 100% (2303/2303), done.
bash: make: command not found
[jersten@LinuxRig llama.cpp]$

Anyone else have this problem? What could be going on here? Thank you
2023-08-26T22:03:45
https://www.reddit.com/r/LocalLLaMA/comments/16285lg/a_bit_of_trouble_installing_vicuna_13b_anyone/
rondonjohnald
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16285lg
false
null
t3_16285lg
/r/LocalLLaMA/comments/16285lg/a_bit_of_trouble_installing_vicuna_13b_anyone/
false
false
self
1
{'enabled': False, 'images': [{'id': 'BFCtss0JvWGY85JRaclp9pSvjqVDFDqWCuZGbl6DzNI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QLPVLIxmToEEXkl2_jGr-UV70i3VtGlYgn6R4XoDz0I.jpg?width=108&crop=smart&auto=webp&s=236fd15ac3ae142ad44c40dfa7478011a81d35b5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/QLPVLIxmToEEXkl2_jGr-UV70i3VtGlYgn6R4XoDz0I.jpg?width=216&crop=smart&auto=webp&s=922729f7fb4b45ac0549d8ae8804a607b99e3a81', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/QLPVLIxmToEEXkl2_jGr-UV70i3VtGlYgn6R4XoDz0I.jpg?width=320&crop=smart&auto=webp&s=aa71d8cb1af075c69b5ca810cdf9a862016e9ab0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/QLPVLIxmToEEXkl2_jGr-UV70i3VtGlYgn6R4XoDz0I.jpg?width=640&crop=smart&auto=webp&s=a9dd9e010bd6eb863ae4ebbc402dea9a7ea8e512', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/QLPVLIxmToEEXkl2_jGr-UV70i3VtGlYgn6R4XoDz0I.jpg?width=960&crop=smart&auto=webp&s=1c324813e38a86f351db43beb86eda2e63c9de75', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/QLPVLIxmToEEXkl2_jGr-UV70i3VtGlYgn6R4XoDz0I.jpg?width=1080&crop=smart&auto=webp&s=be8f18b7a2c641dd8028dc48ca25b08a748d8f6f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/QLPVLIxmToEEXkl2_jGr-UV70i3VtGlYgn6R4XoDz0I.jpg?auto=webp&s=e2b7092dca3ef86d9cdd5d8c68042b9c45144269', 'width': 1200}, 'variants': {}}]}
Experience with 32g quants?
1
After questing on GGML files and seeing a bit of difference between Q4/Q5/Q6 I decided to try more grouped GPTQ quants. Smarter is better, right? According to *theory*, 32g with act-order is supposed to be the best. Instead I'm finding out that 32g gets schizophrenic on me and replies with repetition like I was using greedy sampling. Not right away, but as the context builds up. That's on top of using the most memory. It has now happened on both *Panchovix/GPlatty-30B-lxctx-PI-16384-LoRA-4bit-32g* and the new: *NousResearch/Nous-Puffin-70B* Grabbed the 64 group version of the sister nous-hermes model at the same time. Surprise! I can use more presets and samplers without the model falling apart or becoming repetitive. The 32g model that's supposed to be better at multi-turn is doing worse. Found this graph too: https://imgur.com/a/om84neM and it implies that 32g is quite full of lack. Anyone getting good results?
2023-08-26T22:10:26
https://www.reddit.com/r/LocalLLaMA/comments/1628bqr/experience_with_32g_quants/
a_beautiful_rhind
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1628bqr
false
null
t3_1628bqr
/r/LocalLLaMA/comments/1628bqr/experience_with_32g_quants/
false
false
self
1
{'enabled': False, 'images': [{'id': 'XWtAFo0H569rKEqKGsrLrOPEEKATBnjehska_oKOUpM', 'resolutions': [{'height': 18, 'url': 'https://external-preview.redd.it/VmPt_s1UJSh4KkdniTqlsX45oKeT2BjpW8uta72_c38.jpg?width=108&crop=smart&auto=webp&s=3d4f2d5bfbd5e160022ee39ef521f4a97438d073', 'width': 108}, {'height': 36, 'url': 'https://external-preview.redd.it/VmPt_s1UJSh4KkdniTqlsX45oKeT2BjpW8uta72_c38.jpg?width=216&crop=smart&auto=webp&s=51519675a46425c9785bef15e19c009775ac9232', 'width': 216}, {'height': 54, 'url': 'https://external-preview.redd.it/VmPt_s1UJSh4KkdniTqlsX45oKeT2BjpW8uta72_c38.jpg?width=320&crop=smart&auto=webp&s=3031575104b9b8cd0a43642112dee3b590a11971', 'width': 320}, {'height': 108, 'url': 'https://external-preview.redd.it/VmPt_s1UJSh4KkdniTqlsX45oKeT2BjpW8uta72_c38.jpg?width=640&crop=smart&auto=webp&s=c3a49802e16a712aa412a78e6527392ab10ac24b', 'width': 640}, {'height': 162, 'url': 'https://external-preview.redd.it/VmPt_s1UJSh4KkdniTqlsX45oKeT2BjpW8uta72_c38.jpg?width=960&crop=smart&auto=webp&s=05862020a2f010c6a200a5eb4619103442d7a48b', 'width': 960}, {'height': 183, 'url': 'https://external-preview.redd.it/VmPt_s1UJSh4KkdniTqlsX45oKeT2BjpW8uta72_c38.jpg?width=1080&crop=smart&auto=webp&s=736e42b9288518fbb9d97a2eb8eea9e27b6dd1cc', 'width': 1080}], 'source': {'height': 233, 'url': 'https://external-preview.redd.it/VmPt_s1UJSh4KkdniTqlsX45oKeT2BjpW8uta72_c38.jpg?auto=webp&s=8d38c6143f492d4bb9466f47a7591ed7e74cf746', 'width': 1375}, 'variants': {}}]}
Has anyone got Codellama to work?
1
I tried TheBloke/Phind-CodeLlama-34B-Python-v1-GPTQ on exllama using example_chatbot.py as the I/O and so far it sucks. It makes stuff up and sometimes starts printing words endlessly. If you have gotten CodeLlama to work, which model did you run and which implementation did you use (exllama, oobabooga, etc.)? Thanks!
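One thing worth ruling out before blaming the weights: the Instruct-style CodeLlama variants expect the Llama-2 chat wrapper, and exllama's example scripts use their own simple "User:/Chatbot:" format, so an unwrapped prompt tends to ramble. A minimal prompt builder as a sketch (the system-prompt wording is just an example, and fine-tunes such as the Phind model may expect a different template of their own):

```python
def build_codellama_prompt(user_msg: str,
                           system: str = "You are a helpful coding assistant.") -> str:
    # Llama-2 / CodeLlama-Instruct chat format: system prompt inside <<SYS>> tags,
    # the whole user turn wrapped in [INST] ... [/INST].
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user_msg} [/INST]"

print(build_codellama_prompt("Write a Python function that reverses a string."))
```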
2023-08-26T22:50:35
https://www.reddit.com/r/LocalLLaMA/comments/1629b6m/has_anyone_got_codellama_to_work/
RepublicCharacter699
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1629b6m
false
null
t3_1629b6m
/r/LocalLLaMA/comments/1629b6m/has_anyone_got_codellama_to_work/
false
false
self
1
null
In the future, let’s say 2030, what do you think will be the minimum size of a model that can outperform GPT-4 at ANY task?
1
I am asking because the other day I was wondering how much you could compress a model if you theoretically had an optimal dataset and hyperparameters and still outperform GPT-4 at everything. [View Poll](https://www.reddit.com/poll/1629wiz)
2023-08-26T23:15:12
https://www.reddit.com/r/LocalLLaMA/comments/1629wiz/in_the_future_lets_say_2030_what_do_you_think/
BeginningInfluence55
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1629wiz
false
null
t3_1629wiz
/r/LocalLLaMA/comments/1629wiz/in_the_future_lets_say_2030_what_do_you_think/
false
false
self
1
null
Will local hardware ever be able to compete with companies like Google or Bing in the search engine space?
1
I'm envisioning a future where everyone has the ability to have a locally run search engine. The problem is that models will have to be updated with the latest information all the time if it's ever gonna be a good search engine, and these companies have access to all the best up-to-date data since no one blocks their crawlers/scrapers. Maybe a community will pop up on the grey web that scrapes all the latest data and open-sources it? The process could be streamlined so your models could be updated automatically for the end user.
2023-08-26T23:56:32
https://www.reddit.com/r/LocalLLaMA/comments/162av5c/will_local_hardware_ever_be_able_to_compete_with/
unraveleverything
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
162av5c
false
null
t3_162av5c
/r/LocalLLaMA/comments/162av5c/will_local_hardware_ever_be_able_to_compete_with/
false
false
self
1
null
Cost of fine-tuning local Llama vs. davinci-3?
1
Hi everybody, has anybody done cost comparisons of fine-tuning Llama 13B or 70B on local hardware vs. davinci-3 on OpenAI? Say a model costs $X on OpenAI as a one-time task; it would cost you $Y in 4090s running for Z hours, or $ZZ in AWS AI/Lambda costs.
2023-08-27T00:47:12
https://www.reddit.com/r/LocalLLaMA/comments/162c0e3/cost_of_time_tuning_local_llama_a_vs_davinci3/
fuzziesox
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
162c0e3
false
null
t3_162c0e3
/r/LocalLLaMA/comments/162c0e3/cost_of_time_tuning_local_llama_a_vs_davinci3/
false
false
self
1
null
Tutorial published for "turn a $95 AMD APU into a 16GB VRAM GPU"
1
[removed]
2023-08-27T01:01:25
https://www.reddit.com/r/LocalLLaMA/comments/162cbgm/tutorial_published_for_turn_a_95_amd_apu_into_a/
chain-77
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
162cbgm
false
null
t3_162cbgm
/r/LocalLLaMA/comments/162cbgm/tutorial_published_for_turn_a_95_amd_apu_into_a/
false
false
self
1
{'enabled': False, 'images': [{'id': '8UtdWucazvbhdM7nS9Xq4Tli5ZmXVw4-m4GG5kxYOCw', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/tmzx3FrriPNA15EwTWX-2U7v7UKKkKAxcCmENrIDvIQ.jpg?width=108&crop=smart&auto=webp&s=945e786f3a83c31810fad6a39c317bf2ebadd76d', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/tmzx3FrriPNA15EwTWX-2U7v7UKKkKAxcCmENrIDvIQ.jpg?width=216&crop=smart&auto=webp&s=ab59109dceb2c153f63ed76d6c20652ac49f80e6', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/tmzx3FrriPNA15EwTWX-2U7v7UKKkKAxcCmENrIDvIQ.jpg?width=320&crop=smart&auto=webp&s=fe6e4df26f0b9dffa897e5e9b79bad764606c181', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/tmzx3FrriPNA15EwTWX-2U7v7UKKkKAxcCmENrIDvIQ.jpg?width=640&crop=smart&auto=webp&s=21c5dfcfc4956f0265ffbb052969e3ee9f0d705b', 'width': 640}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/tmzx3FrriPNA15EwTWX-2U7v7UKKkKAxcCmENrIDvIQ.jpg?auto=webp&s=d6d276216032742c21d70e0db87f49821ae44a50', 'width': 900}, 'variants': {}}]}
How did OpenAI decide what the experts are experts in, in the MoE GPT-4?
1
How do you decide that? And what do you believe are the expert fields?
2023-08-27T02:51:59
https://www.reddit.com/r/LocalLLaMA/comments/162em78/how_did_openai_decide_what_the_experts_are/
BeginningInfluence55
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
162em78
false
null
t3_162em78
/r/LocalLLaMA/comments/162em78/how_did_openai_decide_what_the_experts_are/
false
false
default
1
null
Troubleshooting Issues with LLaMA After Update: Seeking Help with Configuration
1
[removed]
2023-08-27T03:05:20
https://www.reddit.com/r/LocalLLaMA/comments/162evx3/troubleshooting_issues_with_llama_after_update/
fuasyfaposht
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
162evx3
false
null
t3_162evx3
/r/LocalLLaMA/comments/162evx3/troubleshooting_issues_with_llama_after_update/
false
false
self
1
null
Navigating the Purpose and Distinctions of LLaMA 2 Compared to ChatGPT
1
[removed]
2023-08-27T03:23:23
https://www.reddit.com/r/LocalLLaMA/comments/162f8nt/navigating_the_purpose_and_distinctions_of_llama/
fuasyfaposht
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
162f8nt
false
null
t3_162f8nt
/r/LocalLLaMA/comments/162f8nt/navigating_the_purpose_and_distinctions_of_llama/
false
false
self
1
null
How much vram and system ram to run 70b UNQUANTIZED, full precision, not q4, in GPTQ?
1
The title. I've tried to look it up, but it didn't seem incredibly clear. Thank you.
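Quick framing note: GPTQ itself implies quantization, so "unquantized in GPTQ" isn't really a thing; full precision means loading the original fp16/bf16 (or fp32) weights. A back-of-the-envelope calculation for the weights alone (the KV cache and activations come on top and grow with context length and batch size):

```python
params = 70e9  # Llama 2 70B

for name, bytes_per_param in [("fp16/bf16", 2), ("fp32", 4)]:
    weights_gib = params * bytes_per_param / 1024**3
    print(f"{name}: ~{weights_gib:.0f} GiB for the weights alone")

# fp16/bf16: ~130 GiB -> realistically plan for roughly 140-160 GiB of combined
#                        VRAM (or VRAM plus system RAM if you offload layers)
# fp32:      ~261 GiB -> rarely worth it for inference
```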
2023-08-27T03:54:09
https://www.reddit.com/r/LocalLLaMA/comments/162fty2/how_much_vram_and_system_ram_to_run_70b/
CanineAssBandit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
162fty2
false
null
t3_162fty2
/r/LocalLLaMA/comments/162fty2/how_much_vram_and_system_ram_to_run_70b/
false
false
self
1
null
Codellama prompt format
1
Just an interesting finding: instead of using the prompt format from the original CodeLlama repo, the Alpaca prompt format gets better results. Here is what I have tried:

```
model_directory = "./models/TheBloke/CodeLlama-34B-Instruct-GPTQ"
# temperature = 0.3
system_prompt = "Below is an instruction that describes a task. Write a response that appropriately completes the request."
user_prompt = "For function type T, MyParameters<T> returns a tuple type from the types of its parameters. Please implement typescript type MyParameters<T> by yourself."
formatted_prompt = f"{system_prompt}\n\n### Instruction:\n{user_prompt}\n\n### Response:\n"
```

Not sure if this is part of the reason WizardCoder has significantly better performance than the original model, i.e. that it simply uses the right prompt format?
2023-08-27T05:39:36
https://www.reddit.com/r/LocalLLaMA/comments/162hvbx/codellama_prompt_format/
mzbacd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
162hvbx
false
null
t3_162hvbx
/r/LocalLLaMA/comments/162hvbx/codellama_prompt_format/
false
false
self
1
null
How can I improve the system prompt so that the model does not put me in hopeless situations?
1
Hello, my problem is that the model I use (mythomax-l2-13b) often likes to put my character in completely hopeless situations. Although in the hint I am asking for the exact opposite. The hint: This is text role-playing game. Rules: - A game master describes the world around and the user's situation - If the user asks a clarifying question, a game master gives an answer that is not contrary to previous events and world rules - If the user describes their action, a game master returns the result of that action - A game master should offer the user new challenges when he solves problems. - The user must not be in a dead end situation - A game master should not allow repetition, characters and he should not get hung up - A game master must not allow the user to perform actions that violate the rules of the world - The story must be realistic and skillfully written I believe that the problem is that the model does not correctly understand the rule `- The user must not be in a dead end situation` and because of this constantly tries to trap me, but I do not know how to fix it Otherwise, I am satisfied with this hint, I do not even have to write `do:` or `say:` before the message so that the model understands me, although I do not exclude that there are some other points in the hint that can be improved (I do not speak English and it is difficult for me to analyze it). So I will be grateful for any help!
2023-08-27T06:04:57
https://www.reddit.com/r/LocalLLaMA/comments/162ibaj/how_can_i_improve_the_system_prompt_so_that_the/
Dvitry
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
162ibaj
false
null
t3_162ibaj
/r/LocalLLaMA/comments/162ibaj/how_can_i_improve_the_system_prompt_so_that_the/
false
false
self
1
null
The final guide for ROCM Users
1
Hi! I've been struggling to make ROCm work for hours due to various issues I encountered. I'm here to make one final guide that is super precise so there is no misunderstanding or stone left unturned. I believe this is the best way to get ROCm up and running; feel free to add some suggestions.

1. Use Arch, it's the only distro where you can install ROCm without having to install the amdgpu proprietary drivers. If you don't have the patience to install Arch yourself, just use archinstall or endeavour.

1.5. Clone the text-generation-webui repo:
```
git clone https://github.com/oobabooga/text-generation-webui
```

2. Install ROCm and build essentials:
```
pacman -Syu rocm-hip-sdk rocm-opencl-sdk base-devel
```
Should do the trick.

3. [Install pyenv](https://github.com/pyenv/pyenv). Do not get it from pacman as it will lack virtualenv.

4. Install Python 3.10.9 from pyenv:
```
pyenv install 3.10.9
```

5. Create and activate your env using:
```
pyenv virtualenv 3.10.9 myenv
pyenv activate myenv
```

6. Install the torch preview for ROCm. Currently:
```
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.6
```
Check the "get started" section on the PyTorch website for the updated command.

7. Install the [bitsandbytes rocm fork](https://github.com/agrocylo/bitsandbytes-rocm) from source:
```
make hip
python setup.py install
```

8. [Install AutoGPTQ from source](https://github.com/PanQiWei/AutoGPTQ):
```
ROCM_VERSION=5.6 pip install -e .
```
Use the env variable ROCM_VERSION for your ROCm version (in this case 5.6) but don't use PYTORCH_ROCM_ARCH. If compilation fails, you need a patch:
```
diff --git a/autogptq_cuda/exllama/hip_compat.cuh b/autogptq_cuda/exllama/hip_compat.cuh
index 5cd2e85..79e0930 100644
--- a/autogptq_cuda/exllama/hip_compat.cuh
+++ b/autogptq_cuda/exllama/hip_compat.cuh
@@ -46,4 +46,6 @@ __host__ __forceinline__ hipblasStatus_t __compat_hipblasHgemm(hipblasHandle_t
 #define rocblas_set_stream hipblasSetStream
 #define rocblas_hgemm __compat_hipblasHgemm
 
+#define hipblasHgemm __compat_hipblasHgemm
+
 #endif
```
Save this to a file, e.g. mypatch.patch, and use:
```
git apply mypatch.patch
```
to apply it, then install again.

9. Modify the requirements.txt file to not include autogptq, bitsandbytes, and other libraries you may not need, like llama.cpp (CPU only). You can comment out those entries with # or just remove them.

10. Install the other dependencies in the text-generation-webui folder:
```
pip install -r requirements.txt
```

11. Launch it with:
```
python server.py
```

If it fails loading models, you need to fake another GPU. For the 6700 XT, I had to use the following env variables:

HSA_OVERRIDE_GFX_VERSION=10.3.0
HCC_AMDGPU_TARGET=gfx1030

The trick is to use a GPU from the same generation that is officially supported. [You can find your GPU codename in this list](https://llvm.org/docs/AMDGPUUsage.html). I had to dig for an hour to find this list. Exhausting.

Only use AutoGPTQ to load your models and everything should work great.

Sources:
* https://rentry.co/eq3hg (for the bitsandbytes fork)
* https://are-we-gfx1100-yet.github.io/post/text-gen-webui/ (for the patch)
* Hours of research and testing on my end

Don't hesitate to ask questions or ask for more precise instructions in case some parts of this guide aren't as clear as I intended.
2023-08-27T07:01:21
https://www.reddit.com/r/LocalLLaMA/comments/162j9uj/the_final_guide_for_rocm_users/
Fredol
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
162j9uj
false
null
t3_162j9uj
/r/LocalLLaMA/comments/162j9uj/the_final_guide_for_rocm_users/
false
false
self
1
{'enabled': False, 'images': [{'id': 'ixgpMOio46tVfmbBE4e77Z6LDWy_3rBlTzC9wwngCg0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XG4kb_5XAPBy5GOoEZt460IFy1oDXOwkUqg_AMTjKmM.jpg?width=108&crop=smart&auto=webp&s=b59c4febf309df28c983f4be947f11a676fdceba', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XG4kb_5XAPBy5GOoEZt460IFy1oDXOwkUqg_AMTjKmM.jpg?width=216&crop=smart&auto=webp&s=10cf0c70bbed45bdfc2c883c42198ac637f112a3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XG4kb_5XAPBy5GOoEZt460IFy1oDXOwkUqg_AMTjKmM.jpg?width=320&crop=smart&auto=webp&s=b00a916fbacca69963c9b51dce7bbc24cc32b9c3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XG4kb_5XAPBy5GOoEZt460IFy1oDXOwkUqg_AMTjKmM.jpg?width=640&crop=smart&auto=webp&s=41992572f2416f6ba2782e08e7e620870c64b170', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XG4kb_5XAPBy5GOoEZt460IFy1oDXOwkUqg_AMTjKmM.jpg?width=960&crop=smart&auto=webp&s=0f966c266adfbdb92c28bae5f7bc68ba64f999d6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XG4kb_5XAPBy5GOoEZt460IFy1oDXOwkUqg_AMTjKmM.jpg?width=1080&crop=smart&auto=webp&s=1da117d00e83511b6ac902b3d168a03b37f90955', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XG4kb_5XAPBy5GOoEZt460IFy1oDXOwkUqg_AMTjKmM.jpg?auto=webp&s=590deee6b95d80e1f1dfaff810ce44ee8a308a72', 'width': 1200}, 'variants': {}}]}
Learn About Arkham's Airdrop: $1500-$4200 Possible.
1
[removed]
2023-08-27T07:31:52
https://www.reddit.com/r/LocalLLaMA/comments/162jsyp/learn_about_arkhams_airdrop_15004200_possible/
CommonPhilosophy1741
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
162jsyp
false
null
t3_162jsyp
/r/LocalLLaMA/comments/162jsyp/learn_about_arkhams_airdrop_15004200_possible/
false
false
self
1
null
Optimal settings for apple silicon?
1
I just started experimenting with local AI, followed examples online to download the OobaBooga WebUI, and the "codellama-34b-instruct.Q5\_K\_M.gguf" file [from TheBloke here](https://huggingface.co/TheBloke/CodeLlama-34B-Instruct-GGUF). I got it running but it's far slower than I expected, generating around one word per minute. For example, my first message to it, "hello", produced the message "Hello, how can I assist you?" with the logged time taken as `364.96 seconds (0.02 tokens/s, 9 tokens, context 27, seed 645607020)`. This is much slower than I expected, and not really very usable for real-world tasks. I suspect there's some obvious settings to tweak, but I can't find any info on where to start looking for that, especially since some info might be outdated since I understand the GGUF format is brand new. My question is, what settings should I try adjusting? Or is the model I'm running simply too big to be useful, and if so, which models would perform better? The hardware I'm running on is an M1 Max Macbook Pro with 32GB of RAM, so my understanding was that the 34B model should be usable with it, the information page says this particular version should use around 26GB of it. My computer doesn't seem particular burdened by the running of it, Python 3.10 is using around 50-55% of a single CPU core (across 10 threads), my memory pressure is low, and the GPU doesn't seem busy at all. The model loader is set to `llama,cpp`, with all the default settings from the GGUF file. Thanks in advance!
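A common cause of exactly this symptom on Apple Silicon is that no layers are being offloaded to Metal, so everything runs on a handful of CPU threads. Below is a sketch using llama-cpp-python directly; in the webui's llama.cpp loader the equivalent fields are "n-gpu-layers" and "threads" (names vary by version). Also note that a ~26 GB Q5_K_M file is close to the practical unified-memory limit on a 32 GB machine, so a smaller quant (or the 13B model) may be the more comfortable choice.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="codellama-34b-instruct.Q5_K_M.gguf",
    n_gpu_layers=1,   # any value > 0 enables the Metal backend; raise to offload more layers
    n_ctx=4096,
    n_threads=8,      # roughly the number of performance cores on an M1 Max
)

out = llm("[INST] Write a hello world program in Rust. [/INST]", max_tokens=128)
print(out["choices"][0]["text"])
```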
2023-08-27T08:08:52
https://www.reddit.com/r/LocalLLaMA/comments/162kf7f/optimal_settings_for_apple_silicon/
thegreatpotatogod
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
162kf7f
false
null
t3_162kf7f
/r/LocalLLaMA/comments/162kf7f/optimal_settings_for_apple_silicon/
false
false
self
1
{'enabled': False, 'images': [{'id': '08sAColpeKjsIQz0Y6TMT9oFqif31urtTEI6MaKRdEg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2aYRxUUDUiCCK7YjOFkDaBFCYr1R18-y-cg3PtSmz9g.jpg?width=108&crop=smart&auto=webp&s=59ac188db64d7b7873a2e44ee5b43879d08fd98f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2aYRxUUDUiCCK7YjOFkDaBFCYr1R18-y-cg3PtSmz9g.jpg?width=216&crop=smart&auto=webp&s=4dd7fda5afdbbdc036c8dd207611cbfb4d8f843a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2aYRxUUDUiCCK7YjOFkDaBFCYr1R18-y-cg3PtSmz9g.jpg?width=320&crop=smart&auto=webp&s=9e5f47773b9209b3310b025e4f750796cbba5bb6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2aYRxUUDUiCCK7YjOFkDaBFCYr1R18-y-cg3PtSmz9g.jpg?width=640&crop=smart&auto=webp&s=c15d5af5708f8020257c1f93c022588499ec8c57', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2aYRxUUDUiCCK7YjOFkDaBFCYr1R18-y-cg3PtSmz9g.jpg?width=960&crop=smart&auto=webp&s=ceb9d3773b19b76016e63ec7492b16a56b3f485f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2aYRxUUDUiCCK7YjOFkDaBFCYr1R18-y-cg3PtSmz9g.jpg?width=1080&crop=smart&auto=webp&s=cda070c37b15eb964c9964e6b56933ff9ef0699f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2aYRxUUDUiCCK7YjOFkDaBFCYr1R18-y-cg3PtSmz9g.jpg?auto=webp&s=402f89a121639dc9b5985860f5e438a53a01481a', 'width': 1200}, 'variants': {}}]}
Japanese StableLLM
1
Has anyone tried successfully to use the new Japanese LLM released by Stability? I followed the instructions on the model card, and once all the requirements were installed, the process just ends with the message "Killed". I only have a laptop 12gb graphics card. The model is apparently based on GPT Neo X. Any work happening to convert or quantize? @thebloke?
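"Killed" is usually the Linux OOM killer: the default `from_pretrained` path materializes full-precision weights in system RAM before anything reaches the GPU. A hedged sketch of a lower-memory load follows; the repo id is from memory and the alpha models need `trust_remote_code` for their tokenizer, so treat both as assumptions to verify against the model card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/japanese-stablelm-instruct-alpha-7b"  # assumption: check exact repo name

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # halves load-time memory vs. the fp32 default
    low_cpu_mem_usage=True,      # stream weights instead of building the model twice in RAM
    device_map="auto",           # spill layers that don't fit in 12 GB VRAM to CPU RAM
    trust_remote_code=True,
)
```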
2023-08-27T08:22:52
https://www.reddit.com/r/LocalLLaMA/comments/162kno7/japanese_stablellm/
usethenodes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
162kno7
false
null
t3_162kno7
/r/LocalLLaMA/comments/162kno7/japanese_stablellm/
false
false
self
1
null