title (string) | score (int64) | selftext (string) | created (timestamp[ns]) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string, nullable) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string, nullable) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Could LLaMA 2 be available on mobile for roleplay? | 1 | Or just be available without needing 8.06 GB of RAM on the computer? (Faraday) | 2023-07-22T17:34:26 | https://www.reddit.com/r/LocalLLaMA/comments/156q6i2/could_llama_2_be_available_on_mobile_for_roleplay/ | rebeldocs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156q6i2 | false | null | t3_156q6i2 | /r/LocalLLaMA/comments/156q6i2/could_llama_2_be_available_on_mobile_for_roleplay/ | false | false | self | 1 | null |
What affect the speed of the gptj model loading? | 1 | I ran privateGPT on both my Ubuntu desktop and an AWS EC2 instance, just for comparison. Even if I use a beefy instance, which is much more powerful than the desktop on the aspects of # of cpus, and memory size, this model loading step is much slower on the AWS instance than it is on my desktop, like at least 5 times slower. So what factors affect this model loading step? Thanks! | 2023-07-22T17:46:41 | old_enough_to_drink | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 156qh5i | false | null | t3_156qh5i | /r/LocalLLaMA/comments/156qh5i/what_affect_the_speed_of_the_gptj_model_loading/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'oJ23H0AZSkDJba-ksQZiK4wxbFE3GXdZjJhjQUnJvxc', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/jme3qtda0kdb1.jpg?width=108&crop=smart&auto=webp&s=56bc9e9ebc09fd20ced5646c3880e929c900e316', 'width': 108}, {'height': 155, 'url': 'https://preview.redd.it/jme3qtda0kdb1.jpg?width=216&crop=smart&auto=webp&s=e936d25c8ff7da79ab16a447a99463eab4a1ec5b', 'width': 216}, {'height': 229, 'url': 'https://preview.redd.it/jme3qtda0kdb1.jpg?width=320&crop=smart&auto=webp&s=3c950605277322c29b1a513764f397a5b7c0c796', 'width': 320}, {'height': 459, 'url': 'https://preview.redd.it/jme3qtda0kdb1.jpg?width=640&crop=smart&auto=webp&s=3c86306476ee36b6c7cf74e93858bc383f813b72', 'width': 640}, {'height': 689, 'url': 'https://preview.redd.it/jme3qtda0kdb1.jpg?width=960&crop=smart&auto=webp&s=0878392d6e5925044df4d9df2d3dc80330a9c9a0', 'width': 960}, {'height': 776, 'url': 'https://preview.redd.it/jme3qtda0kdb1.jpg?width=1080&crop=smart&auto=webp&s=b036d10fb4a2c72a64d3a55595084e53c1ae2d3c', 'width': 1080}], 'source': {'height': 2172, 'url': 'https://preview.redd.it/jme3qtda0kdb1.jpg?auto=webp&s=1b0aabb7dc2c87a6082175e075b5c1c46871e59a', 'width': 3022}, 'variants': {}}]} |
||
Exploring the guardrails on the censored model, must have hit a nerve | 1 | 2023-07-22T17:57:40 | Kelvin_451 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 156qqd2 | false | null | t3_156qqd2 | /r/LocalLLaMA/comments/156qqd2/exploring_the_guardrails_on_the_censored_model/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'NxdvrlYLcaDaUfpYR5KCJuY2LdPpNSiZmACpV9hH2wg', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/xp8jmsyu1kdb1.png?width=108&crop=smart&auto=webp&s=82e9fdd7e1e2660d2c6f4d14d51fc786c5771078', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/xp8jmsyu1kdb1.png?width=216&crop=smart&auto=webp&s=f97c88e7c93f2216e818dfc00f9a6b8805e2de10', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/xp8jmsyu1kdb1.png?width=320&crop=smart&auto=webp&s=8d6ee3a234c22d6257cde6edcc696dc41aca3709', 'width': 320}], 'source': {'height': 732, 'url': 'https://preview.redd.it/xp8jmsyu1kdb1.png?auto=webp&s=ec6f1e1a2429afa0a7f214531a983ec62ed0e585', 'width': 338}, 'variants': {}}]} |
|||
Hey, has anyone finetuned Llama2 for fiction writers? if not could someone point me in the direction of a tutorial so I can do it? | 1 | Title says it all really. | 2023-07-22T18:39:24 | https://www.reddit.com/r/LocalLLaMA/comments/156rqgq/hey_has_anyone_finetuned_llama2_for_fiction/ | Benedictus111 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156rqgq | false | null | t3_156rqgq | /r/LocalLLaMA/comments/156rqgq/hey_has_anyone_finetuned_llama2_for_fiction/ | false | false | self | 1 | null |
GPU requirements for over 2048 context/tokens | 1 | Are there any standardised lists which state how much VRAM is needed for different combinations?
For example
13b with 2048 context
13b with 4096 context
13b with 8192 context
30b with 2048 context
Etc.... | 2023-07-22T18:56:45 | https://www.reddit.com/r/LocalLLaMA/comments/156s5jg/gpu_requirements_for_over_2048_contexttokens/ | Useful-Command-8793 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156s5jg | false | null | t3_156s5jg | /r/LocalLLaMA/comments/156s5jg/gpu_requirements_for_over_2048_contexttokens/ | false | false | self | 1 | null |
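There is no standardised list in this thread, but a back-of-the-envelope estimate is possible. The sketch below assumes an fp16 KV cache and the usual LLaMA-13B shape (40 layers, hidden size 5120); real usage also depends on the backend, batch size, and working buffers, so treat it as a lower bound on top of the weights.

```python
# Rough KV-cache size: 2 (key + value) * layers * hidden_size * context * bytes per element.
def kv_cache_gib(n_layers: int, hidden_size: int, context: int, bytes_per_elem: int = 2) -> float:
    return 2 * n_layers * hidden_size * context * bytes_per_elem / 1024**3

# LLaMA-13B shape: 40 layers, hidden size 5120.
for ctx in (2048, 4096, 8192):
    print(f"13B @ {ctx} context: ~{kv_cache_gib(40, 5120, ctx):.1f} GiB of cache on top of the weights")
```

That works out to roughly 1.6 / 3.1 / 6.3 GiB of cache for 2048 / 4096 / 8192 context, while the quantized 13B weights stay a roughly constant ~7-8 GB, which is why the context length rather than the parameter count is usually what pushes a card over its VRAM limit.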
Did GPT-4 really get worse? We built an evaluation framework so you can find out | 1 | [removed] | 2023-07-22T19:37:03 | https://www.reddit.com/r/LocalLLaMA/comments/156t4rl/did_gpt4_really_get_worse_we_built_an_evaluation/ | hegel-ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156t4rl | false | null | t3_156t4rl | /r/LocalLLaMA/comments/156t4rl/did_gpt4_really_get_worse_we_built_an_evaluation/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'jyQQDsewOakYxiv0g_fyaC75u1-VWTozH-60lliqvpI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VrmXgXNwPE3WCG8OLPwgUvz020iYf02XCTI3ic85Qoc.jpg?width=108&crop=smart&auto=webp&s=4ee94dbf11c076abadc54e222372ea5dd992be04', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/VrmXgXNwPE3WCG8OLPwgUvz020iYf02XCTI3ic85Qoc.jpg?width=216&crop=smart&auto=webp&s=d9c919acc6acfee6254fa171804052ec4967a1fb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/VrmXgXNwPE3WCG8OLPwgUvz020iYf02XCTI3ic85Qoc.jpg?width=320&crop=smart&auto=webp&s=78e85a6de1d6e7c8ff725daa00a57bbcbf9b14f6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/VrmXgXNwPE3WCG8OLPwgUvz020iYf02XCTI3ic85Qoc.jpg?width=640&crop=smart&auto=webp&s=33be0729d50fdfe581638fe49f1c75747b8f1c58', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/VrmXgXNwPE3WCG8OLPwgUvz020iYf02XCTI3ic85Qoc.jpg?width=960&crop=smart&auto=webp&s=434f5e2db64817e52174aff4697ee842a4cd1985', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/VrmXgXNwPE3WCG8OLPwgUvz020iYf02XCTI3ic85Qoc.jpg?width=1080&crop=smart&auto=webp&s=b9863320174050f0f49f5340387053b6f82f7bf9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/VrmXgXNwPE3WCG8OLPwgUvz020iYf02XCTI3ic85Qoc.jpg?auto=webp&s=680209a0055c22ea29761912c76c5576da1dd495', 'width': 1200}, 'variants': {}}]} |
Looking for resources to fine-tune a model "from scratch" | 1 | Hi!
So my friend and I we're looking to explore all these topics of LLM fine-tuning and quantization etc. But we don't really know where to start.
We have a solid understanding of machine learning theory side (LLMs included) and are good with Python. So we'd like to make some kind of notebook where we have something like import a base model, code the fine-tuning function etc.
We're just trying to learn all these new techniques by implementing them from scratch, so we'd really appreciate it if there are any tutorials using Python 🤗
We don't strictly need tutorials; if anyone knows which papers we can read and follow, we'd be grateful! We know some papers we should look into: Instruction Tuning/RLHF, QLoRA, etc. | 2023-07-22T19:41:58 | https://www.reddit.com/r/LocalLLaMA/comments/156t90f/looking_for_resources_to_finetune_a_model_from/ | LoLingLikeHell | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156t90f | false | null | t3_156t90f | /r/LocalLLaMA/comments/156t90f/looking_for_resources_to_finetune_a_model_from/ | false | false | default | 1 | null |
Has anyone successfully fine-tuned MPT-7B? | 1 | I'm new to all this LLM stuff and I've been trying to see if I can adapt the Hugging Face [language modeling example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/transformers_doc/en/pytorch/language_modeling.ipynb) to use MPT-7B, but it runs out of memory on the training step basically no matter what I do--in regular Colab, using Colab Pro, and using various configurations of GPUs in RunPod (I've tried pods ranging from one 40GB A100 to 4 80GB A100s).
Has anyone fine-tuned MPT-7B successfully? What system configuration did you use? Did you have to do anything special in code, like using PyTorch's DistributedDataParallel or anything in that vein? | 2023-07-22T20:00:50 | https://www.reddit.com/r/LocalLLaMA/comments/156tppl/has_anyone_successfully_finetuned_mpt7b/ | Proeliata | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156tppl | false | null | t3_156tppl | /r/LocalLLaMA/comments/156tppl/has_anyone_successfully_finetuned_mpt7b/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&s=4b647239f77bf713f4a6209cfa4867351c055fd9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&s=7f4234ff3f4f4ebd7f77236dedb03a2faee3e04a', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&s=73eb91ea5a5347f216c0f0c4d6796396826aae49', 'width': 260}, 'variants': {}}]} |
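For what it's worth, one way people avoid the OOM entirely is to skip full fine-tuning and train a LoRA adapter on a 4-bit quantized base instead. The sketch below is only an illustration of that idea, not a tested MPT recipe: the `Wqkv` target-module name and the hyperparameters are assumptions, and MPT models need `trust_remote_code=True`.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 4-bit so the frozen weights fit on a single GPU.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-7b", quantization_config=bnb, trust_remote_code=True, device_map="auto"
)

# Train only small adapter matrices instead of all 7B parameters.
model = prepare_model_for_kbit_training(model)
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["Wqkv"],  # assumption: MPT's fused attention projection
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the parameters end up trainable
```

With the adapter approach the optimizer state only covers the LoRA weights, which is often the difference between fitting on one 40GB A100 and needing DistributedDataParallel across several.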
Is an RTX 3090 used for mining good for local LLMs? | 1 | I see a vendor in my city with 10 used RTX 3090 cards, 2 used for mining and 8 used in gaming cafes. Prices range from $370 to $430. Are these good for running big LLMs locally? I was thinking of getting 2, but I'm not sure how they'll perform, as 30B models and above are very memory intensive.
Any thoughts? Thank you. | 2023-07-22T21:37:31 | https://www.reddit.com/r/LocalLLaMA/comments/156w2jp/is_rtx_3090_used_for_mining_good_for_local_llms/ | Dadjann | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156w2jp | false | null | t3_156w2jp | /r/LocalLLaMA/comments/156w2jp/is_rtx_3090_used_for_mining_good_for_local_llms/ | false | false | self | 1 | null |
How can I start learning about LLMs? | 1 | I am intrigued by LLMs, deep learning, and machine learning, and I would really like to learn how to launch a model, fine-tune it, or embed it, but I feel a bit lost. Do you have any tips for getting started, or online courses that can help me achieve this goal? | 2023-07-22T21:52:40 | https://www.reddit.com/r/LocalLLaMA/comments/156wfsp/how_can_i_start_learning_about_llms/ | redule26 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156wfsp | false | null | t3_156wfsp | /r/LocalLLaMA/comments/156wfsp/how_can_i_start_learning_about_llms/ | false | false | self | 1 | null |
Yet another QLoRA tutorial | 1 | I fine-tuned OpenLLama 7B using QLoRA on transcripts of the Lex Fridman podcast to generate a fictional conversation of Lex with Barack Obama. This is not a model ready for use but rather a valuable step-by-step explanation illustrating the process, driven by the desire to learn more about training large language models at home.
Given that it covers QLoRA fine-tuning of OpenLLaMA and uses none of the latest long-context techniques or Llama 2, it already feels a bit outdated, but maybe some people will still find it useful.
[https://medium.com/@geronimo7/from-transcripts-to-ai-chat-an-experiment-with-the-lex-fridman-podcast-3248d216ec16](https://medium.com/@geronimo7/from-transcripts-to-ai-chat-an-experiment-with-the-lex-fridman-podcast-3248d216ec16) | 2023-07-22T23:23:30 | https://www.reddit.com/r/LocalLLaMA/comments/156ylgf/yet_another_qlora_tutorial/ | HatEducational9965 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156ylgf | false | null | t3_156ylgf | /r/LocalLLaMA/comments/156ylgf/yet_another_qlora_tutorial/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '3fK4xu3PY7jH5gSmbBWirPCF_DDbwgIBR0EB9M-4Sio', 'resolutions': [{'height': 118, 'url': 'https://external-preview.redd.it/oA3IuQ1J6tZpBRPqXZWVixunfwiwDi3u7H84-d8b6Lk.jpg?width=108&crop=smart&auto=webp&s=8d08817044b1fb74fa8d7d8bec4965d31d564ebf', 'width': 108}, {'height': 236, 'url': 'https://external-preview.redd.it/oA3IuQ1J6tZpBRPqXZWVixunfwiwDi3u7H84-d8b6Lk.jpg?width=216&crop=smart&auto=webp&s=1ebe6f69840edbb47d4d85bc7edb2c3894ca8125', 'width': 216}, {'height': 350, 'url': 'https://external-preview.redd.it/oA3IuQ1J6tZpBRPqXZWVixunfwiwDi3u7H84-d8b6Lk.jpg?width=320&crop=smart&auto=webp&s=dc5059e218f5e27057ec8b7dcac442da5af76ead', 'width': 320}, {'height': 700, 'url': 'https://external-preview.redd.it/oA3IuQ1J6tZpBRPqXZWVixunfwiwDi3u7H84-d8b6Lk.jpg?width=640&crop=smart&auto=webp&s=d9d9ac9a41b0ceaf415ed46a181444589f4d3a51', 'width': 640}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/oA3IuQ1J6tZpBRPqXZWVixunfwiwDi3u7H84-d8b6Lk.jpg?auto=webp&s=ac503d42696e1bc0886e562db65e30d3acb8b45b', 'width': 936}, 'variants': {}}]} |
What parameters work best for 8k context? | 1 | [removed] | 2023-07-23T00:23:48 | https://www.reddit.com/r/LocalLLaMA/comments/156zy6i/what_parameters_work_best_for_8k_context/ | EcstaticVenom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156zy6i | false | null | t3_156zy6i | /r/LocalLLaMA/comments/156zy6i/what_parameters_work_best_for_8k_context/ | false | false | self | 1 | null |
Having trouble with Llama2 Guanaco 7B / incoherent responses | 1 | **Guanaco QLoRA 7B (Q4\_K\_M)**
Hi guys, trying to create a character that responds in character / roleplay. my model instruction is Faraday's default:
Text transcript of a never-ending conversation between {user} and {character}. In the transcript, gestures and other non-verbal actions are written between asterisks (for example, *waves hello* or *moves closer*).
I set an example dialogue and character persona, but it just goes off the rails into an incoherent mess: misspelled words and long run-on sentences that quickly stop making any sense. Is this expected for a local model? | 2023-07-23T00:48:11 | https://www.reddit.com/r/LocalLLaMA/comments/1570hcm/having_trouble_with_llama2_guanaco_7b_incoherent/ | Kep0a | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1570hcm | false | null | t3_1570hcm | /r/LocalLLaMA/comments/1570hcm/having_trouble_with_llama2_guanaco_7b_incoherent/ | false | false | self | 1 | null |
Is it possible we could see fine tuned models of GPT-3.5-Turbo and GPT-4 that are more uncensored and raw, similar to what we’ve seen with previous LLaMA/Vicuna models? | 1 | 2023-07-23T01:26:45 | sardoa11 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1571b84 | false | null | t3_1571b84 | /r/LocalLLaMA/comments/1571b84/is_it_possible_we_could_see_fine_tuned_models_of/ | false | false | 1 | {'enabled': True, 'images': [{'id': '8Jw3JjnPs2SGIGOsAo5fWaS64hG6hEwBSz2K1RYNnHA', 'resolutions': [{'height': 41, 'url': 'https://preview.redd.it/k2ph2hahamdb1.jpg?width=108&crop=smart&auto=webp&s=391c36e0501e90a7a10b8fc5d73b3c38df2044ec', 'width': 108}, {'height': 83, 'url': 'https://preview.redd.it/k2ph2hahamdb1.jpg?width=216&crop=smart&auto=webp&s=3465bc051a52ae3bc546f1d4461bafba0bcfb8eb', 'width': 216}, {'height': 124, 'url': 'https://preview.redd.it/k2ph2hahamdb1.jpg?width=320&crop=smart&auto=webp&s=3d90d60b764e1ff87a6b1affe537e2a876f15dd4', 'width': 320}, {'height': 248, 'url': 'https://preview.redd.it/k2ph2hahamdb1.jpg?width=640&crop=smart&auto=webp&s=d80c842d949fd8b0b792c352b7f9a83dee76ceca', 'width': 640}, {'height': 372, 'url': 'https://preview.redd.it/k2ph2hahamdb1.jpg?width=960&crop=smart&auto=webp&s=be7c037b3469b41d023ed2c6af277cc09662801a', 'width': 960}, {'height': 418, 'url': 'https://preview.redd.it/k2ph2hahamdb1.jpg?width=1080&crop=smart&auto=webp&s=cec041e2ca63036dc2f1aed0dc3d5fcc86d4b783', 'width': 1080}], 'source': {'height': 457, 'url': 'https://preview.redd.it/k2ph2hahamdb1.jpg?auto=webp&s=59653b9b54b69e7e90f0b71597d4ad0e5ae3740f', 'width': 1179}, 'variants': {}}]} |
|||
AlpacaCielo 13b, Llama-2 based model for storytelling and roleplay. | 1 | Back in the "old days", I really enjoyed creative models such as Alpasta, so I wanted to bring a similar experience to Llama-2. So I created AlpacaCielo. It is a triple merge of Nous Hermes + Guanaco + Storytelling, and is an attempt to get the best of all worlds to get a smart & creative model. I've done minimal testing, but so far it works pretty well and I much prefer its outputs over base Hermes. More info on huggingface: [AlpacaCielo](https://huggingface.co/totally-not-an-llm/AlpacaCielo-13b). And yes, it is uncensored from my testing, but you still might get some OpenAssistant disclaimers if you ask about anything too crazy. Let me know what you think!
ggml quants are also available. If anyone with more vram than me could create gptq's that would be amazing, but if not, I'll have them up later. | 2023-07-23T01:50:22 | https://www.reddit.com/r/LocalLLaMA/comments/1571srk/alpacacielo_13b_llama2_based_model_for/ | pokeuser61 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1571srk | false | null | t3_1571srk | /r/LocalLLaMA/comments/1571srk/alpacacielo_13b_llama2_based_model_for/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'It9u9WCOskxFv8Pt_vUgR2w-W18_WAvJDGUkxKUqGOY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Mmqv2FQGy4ucbBCg8b_xXl1KyX1WFEa6lQdaUFTp4jk.jpg?width=108&crop=smart&auto=webp&s=3fe55b63d93fd8686ebdd6535d27871db01d0575', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Mmqv2FQGy4ucbBCg8b_xXl1KyX1WFEa6lQdaUFTp4jk.jpg?width=216&crop=smart&auto=webp&s=ceaa1fe8025e09470409ec417546d7ed4c95583d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Mmqv2FQGy4ucbBCg8b_xXl1KyX1WFEa6lQdaUFTp4jk.jpg?width=320&crop=smart&auto=webp&s=fdd67e729647abad279004c101a037a5f8913885', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Mmqv2FQGy4ucbBCg8b_xXl1KyX1WFEa6lQdaUFTp4jk.jpg?width=640&crop=smart&auto=webp&s=cf311573e56744d5aaf060149573d5852e30fcb4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Mmqv2FQGy4ucbBCg8b_xXl1KyX1WFEa6lQdaUFTp4jk.jpg?width=960&crop=smart&auto=webp&s=a0c5e5d0da717fcf67fd84c682a07752bc8cd449', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Mmqv2FQGy4ucbBCg8b_xXl1KyX1WFEa6lQdaUFTp4jk.jpg?width=1080&crop=smart&auto=webp&s=eef778f572dba90dda7ef5c00ffa6ff0629ecb87', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Mmqv2FQGy4ucbBCg8b_xXl1KyX1WFEa6lQdaUFTp4jk.jpg?auto=webp&s=d3e2970e6bc6687b9293180dac059283c4e0f924', 'width': 1200}, 'variants': {}}]} |
Help me finetune Llama-2-7B with qlora | 1 | I am trying to finetune Llama-2-7B with this script https://github.com/artidoro/qlora/blob/main/scripts/finetune_guanaco_7b.sh
Please help me with
--source_max_len 16 \
--target_max_len 512 \
What should I use? | 2023-07-23T03:25:41 | https://www.reddit.com/r/LocalLLaMA/comments/1573pf2/help_me_finetune_llama27b_with_qlora/ | Sufficient_Run1518 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1573pf2 | false | null | t3_1573pf2 | /r/LocalLLaMA/comments/1573pf2/help_me_finetune_llama27b_with_qlora/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'tozPKa5-9dLqIt7uc63UE2WcaocD0i-HTS-qiRZnRaI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/14ik5ZS42nAjZ_xDltNrlMUhTu67KXr4MltxlScrElU.jpg?width=108&crop=smart&auto=webp&s=99fc2e55c75d97c9f9a2e2f7817c09a1caedf28f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/14ik5ZS42nAjZ_xDltNrlMUhTu67KXr4MltxlScrElU.jpg?width=216&crop=smart&auto=webp&s=020ebf12ccab8ab4c16991b3dcf260518a78a2c4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/14ik5ZS42nAjZ_xDltNrlMUhTu67KXr4MltxlScrElU.jpg?width=320&crop=smart&auto=webp&s=26a9fbd5128c84c497e0687ddc4287dc30840e22', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/14ik5ZS42nAjZ_xDltNrlMUhTu67KXr4MltxlScrElU.jpg?width=640&crop=smart&auto=webp&s=449c6165657467a17282d5a663569724b15fd539', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/14ik5ZS42nAjZ_xDltNrlMUhTu67KXr4MltxlScrElU.jpg?width=960&crop=smart&auto=webp&s=732f2cca8ca4567b8be003613a027a8b990cee9b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/14ik5ZS42nAjZ_xDltNrlMUhTu67KXr4MltxlScrElU.jpg?width=1080&crop=smart&auto=webp&s=0c72f22738b7bfdabbee3cb4726d4e621faf0de7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/14ik5ZS42nAjZ_xDltNrlMUhTu67KXr4MltxlScrElU.jpg?auto=webp&s=f29bcafa2baaf7accaba533e3992bd5c2fa26529', 'width': 1200}, 'variants': {}}]} |
function calling with llama-2? | 1 | Have been looking into the feasibility of operating llama-2 with agents through a feature similar to OpenAI's function calling.
Have had very little success through prompting so far :( Just wondering if anyone has had a different experience, or if we might have to go down the fine-tuning route as OpenAI did. | 2023-07-23T03:46:04 | https://www.reddit.com/r/LocalLLaMA/comments/15742zf/function_calling_with_llama2/ | LyPreto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15742zf | false | null | t3_15742zf | /r/LocalLLaMA/comments/15742zf/function_calling_with_llama2/ | false | false | self | 1 | null |
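For what it's worth, the workaround most people describe (short of a fine-tune) is to ask the model for strict JSON and parse it yourself. A rough sketch of that idea is below; the tool schema, prompt wording, and fallback handling are all made up for illustration, and smaller models will still break the format from time to time.

```python
import json

TOOLS = """You can call one of these tools by answering with ONLY a JSON object:
{"tool": "get_weather", "arguments": {"city": "<string>"}}
{"tool": "none", "arguments": {}}"""

def build_prompt(user_message: str) -> str:
    # Llama-2-chat style [INST] wrapper with the tool schema as the system prompt.
    return f"<s>[INST] <<SYS>>\n{TOOLS}\n<</SYS>>\n\n{user_message} [/INST]"

def parse_tool_call(completion: str):
    # Grab the first {...} span and try to parse it; return None if the model rambled instead.
    start, end = completion.find("{"), completion.rfind("}")
    if start == -1 or end == -1:
        return None
    try:
        return json.loads(completion[start : end + 1])
    except json.JSONDecodeError:
        return None
```

Retrying with the parse error appended to the prompt, or constraining generation with a grammar, tends to make this noticeably more reliable than prompting alone.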
In just a week or so, there are so many new models on the new LLM leaderboard. What's your thought? | 30 | Just a few weeks ago, falcon-40b is at the top and became the most mentioned model apart from GPT4. Now suddenly so many new models came out. Some contain only 30b parameters. What's your thoughts and how do you comprehend the constantly improving models? Also, some said falcon-40b-instruct is comparable to the first-generation ChatGPT, then what about those at the top of the current leaderboard? | 2023-07-23T03:54:56 | https://www.reddit.com/r/LocalLLaMA/comments/15748wc/in_just_a_week_or_so_there_are_so_many_new_models/ | PrestigiousPancake | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15748wc | false | null | t3_15748wc | /r/LocalLLaMA/comments/15748wc/in_just_a_week_or_so_there_are_so_many_new_models/ | false | false | self | 30 | null |
Help needed with the approach | 1 | My boss asked me this:
"Attached is the PDF file showing the financial statements of our branches and their competitors. I want to build a GPT model referencing it.
Aim is to
Do financial analysis of multiple financial statements using the attached documents"
Do you understand what he's trying to do, and can you guide me through the steps to achieve this?
Thanks. | 2023-07-23T05:15:21 | https://www.reddit.com/r/LocalLLaMA/comments/1575r2w/help_needed_to_the_approach/ | paulus_aurellius | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1575r2w | false | null | t3_1575r2w | /r/LocalLLaMA/comments/1575r2w/help_needed_to_the_approach/ | false | false | self | 1 | null |
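In case a concrete outline helps: the usual shape of this kind of project is "extract text from the PDFs, embed the chunks, retrieve the relevant ones, and hand them to the LLM together with the question" (retrieval-augmented generation). Below is a minimal sketch of the retrieval half only; the file name, chunk size, and question are placeholders, and real financial statements would need their tables extracted much more carefully than a plain text dump.

```python
import numpy as np
from pypdf import PdfReader
from sentence_transformers import SentenceTransformer

# 1. Extract raw text from the statements (placeholder file name).
text = "\n".join(page.extract_text() or "" for page in PdfReader("statements.pdf").pages)
chunks = [text[i:i + 1000] for i in range(0, len(text), 1000)]  # naive fixed-size chunking

# 2. Embed the chunks and the question, then rank chunks by similarity.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)
q_vec = embedder.encode(["How does branch revenue compare to competitors?"], normalize_embeddings=True)[0]
top = np.argsort(chunk_vecs @ q_vec)[::-1][:3]

# 3. The top chunks plus the question become the prompt for whichever LLM you run.
context = "\n---\n".join(chunks[i] for i in top)
```

The "GPT model referencing it" part then boils down to prompting a local or hosted LLM with that retrieved context and the analysis question.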
Can't run Llama 2 GPTQ on KoboldAI; is there a fix and, if not, are there more updated alternatives? | 1 | I've been trying to get KoboldAI to run for the past couple of days using the 7b GPTQ version of Llama 2 from [here](https://huggingface.co/TheBloke/Llama-2-7B-GPTQ). Initially I was using the working branch I had, which I believe was the 0cc4m branch on the United version that supports GPTQ. Although it was able to see the model, it was unable to load it due to the following error
`"OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory E:\KoboldAI-4bit\KoboldAI\models\TheBloke_Llama-2-7B-GPTQ."`
as if it is expecting a GGML model (because it is looking for a .bin file). I also tried redownloading KoboldAI from the main GitHub page, then updating to United, reinstalling all dependencies, and making sure "4 bit" quantization was selected, and it threw the exact same error.
I feel like at this point I shouldn't have to be trying to rename models to get them to work, since Ooba just loads stuff without complaint. But are there any fixes for this? And if not, are there more updated interfaces similar to KoboldAI that will work properly and aren't months out of date? | 2023-07-23T05:18:19 | https://www.reddit.com/r/LocalLLaMA/comments/1575szm/cant_run_llama_2_gptq_on_koboldai_is_there_a_fix/ | Gyramuur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1575szm | false | null | t3_1575szm | /r/LocalLLaMA/comments/1575szm/cant_run_llama_2_gptq_on_koboldai_is_there_a_fix/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'scnkn175QpfbMTdtoYxWh-I3soYhx9pJiHmW5tQQPMY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MIHs_hUb2M9gxc4QGwbCk4JA2r5erDBnTB8Tbg735Rs.jpg?width=108&crop=smart&auto=webp&s=306a702906190a7340924eac46d7feb1f3eec45e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/MIHs_hUb2M9gxc4QGwbCk4JA2r5erDBnTB8Tbg735Rs.jpg?width=216&crop=smart&auto=webp&s=287c3b0788fac7766494f5d0e851ac89ff9d8ac9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/MIHs_hUb2M9gxc4QGwbCk4JA2r5erDBnTB8Tbg735Rs.jpg?width=320&crop=smart&auto=webp&s=625a75ac60be2245e5230d54df27456c7107ec4b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/MIHs_hUb2M9gxc4QGwbCk4JA2r5erDBnTB8Tbg735Rs.jpg?width=640&crop=smart&auto=webp&s=83a9e51bb02d1141095b45fc8a465ecd586bdf97', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/MIHs_hUb2M9gxc4QGwbCk4JA2r5erDBnTB8Tbg735Rs.jpg?width=960&crop=smart&auto=webp&s=3dd7e9d25663fc2ba02b34fdd652a431bbc9b036', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/MIHs_hUb2M9gxc4QGwbCk4JA2r5erDBnTB8Tbg735Rs.jpg?width=1080&crop=smart&auto=webp&s=8b4b881bb48884dc6af6951b1ce4443f4bdcf11d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/MIHs_hUb2M9gxc4QGwbCk4JA2r5erDBnTB8Tbg735Rs.jpg?auto=webp&s=dbb9e744ef3627d636c68c84c199aa1f0dafb845', 'width': 1200}, 'variants': {}}]} |
How do you guys learn how to do any of these things? | 1 | I've gotten into using local LLMs as a way to curb the censoring on ChatGPT, but it seems like there is so much more to it than just downloading the LLM in my webui. I have no idea how to start learning about any of this, but I would definitely love to! From what I've gathered, there are apparently ways to train a model, which I'd totally be interested in, as I could see applications for training a smaller model with more intentional training sets in my daily life. So I guess my question is: where do you think is a good place for someone like me to start learning about all of this and how it works? It could be a long guide online, a YouTube course, or something similar; any resources would be appreciated! | 2023-07-23T06:13:31 | https://www.reddit.com/r/LocalLLaMA/comments/1576sy2/how_do_you_guys_learn_how_to_do_any_of_these/ | MyLaptopSpoil | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1576sy2 | false | null | t3_1576sy2 | /r/LocalLLaMA/comments/1576sy2/how_do_you_guys_learn_how_to_do_any_of_these/ | false | false | self | 1 | null |
Dockerized ubuntu evironment for Llama with llama.cpp | 1 | [removed] | 2023-07-23T06:22:49 | https://github.com/snakewizardd/llamaDocker | Happy_Chicken9835 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1576yy2 | false | null | t3_1576yy2 | /r/LocalLLaMA/comments/1576yy2/dockerized_ubuntu_evironment_for_llama_with/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'v21QP4TtqzczZ2cBgxEDtaaXta3l4lK3-RPUPyRHEnw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/M55cY8HEyRMear_v0IejrxpBxd4X6LBM5CIolqrjDFw.jpg?width=108&crop=smart&auto=webp&s=0235741c27e557526711b93a953ebc06f0c5cde0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/M55cY8HEyRMear_v0IejrxpBxd4X6LBM5CIolqrjDFw.jpg?width=216&crop=smart&auto=webp&s=c02c3e04b6e5f5619c68354336ea98930d56904a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/M55cY8HEyRMear_v0IejrxpBxd4X6LBM5CIolqrjDFw.jpg?width=320&crop=smart&auto=webp&s=8e51acf2ecd3c4fc1b7c688094cf4adcc7c3c57d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/M55cY8HEyRMear_v0IejrxpBxd4X6LBM5CIolqrjDFw.jpg?width=640&crop=smart&auto=webp&s=6bf3c194bd39cff74a3f175994dbc14bae50bfc9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/M55cY8HEyRMear_v0IejrxpBxd4X6LBM5CIolqrjDFw.jpg?width=960&crop=smart&auto=webp&s=6e8b403b868655e413825ee5ccf2af5861452d14', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/M55cY8HEyRMear_v0IejrxpBxd4X6LBM5CIolqrjDFw.jpg?width=1080&crop=smart&auto=webp&s=3809579cebb7cbf308870d2d02513d170d4b126c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/M55cY8HEyRMear_v0IejrxpBxd4X6LBM5CIolqrjDFw.jpg?auto=webp&s=8b1ba17283b5232cac0ce90469c9e99ccc360129', 'width': 1200}, 'variants': {}}]} |
|
Trying to create my own dataset but have no clue if I am doing this right | 1 | Hello I would love some feedback on if this is right or would even work. I made a dataset and have 2 versions, one my new data by itself and the other version merged with guanaco unchained. I think the merged one might work better but would love to get some thoughts from those that know more than me. I was thinking I would love to try to train like the 7B version on this data to create a LLM to serve as a base for a Star Wars Droid, eventually to turn into a toy/robot companion. Here is a link: [https://huggingface.co/datasets/Azimn/Star\_Wars\_Dataset\_for\_Droid\_responces](https://huggingface.co/datasets/Azimn/Star_Wars_Dataset_for_Droid_responces) | 2023-07-23T06:27:07 | https://www.reddit.com/r/LocalLLaMA/comments/15771o7/trying_to_create_my_own_dataset_but_have_no_clue/ | Azimn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15771o7 | false | null | t3_15771o7 | /r/LocalLLaMA/comments/15771o7/trying_to_create_my_own_dataset_but_have_no_clue/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '6vvqzdBUjmmAvXRK7Dm-Odo44XAZxVpwJ75XZklLQFM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9O_cr7kmXQn2ItxCr_Kt449FHxWH1l9rgqt7Bp1U1Dc.jpg?width=108&crop=smart&auto=webp&s=f22e6e6629e65e02ea642a935cfb9cdf23cddf23', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9O_cr7kmXQn2ItxCr_Kt449FHxWH1l9rgqt7Bp1U1Dc.jpg?width=216&crop=smart&auto=webp&s=09244ec9062adb649bff15590dcf72d97e81603f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9O_cr7kmXQn2ItxCr_Kt449FHxWH1l9rgqt7Bp1U1Dc.jpg?width=320&crop=smart&auto=webp&s=7e30ef4a6951e90bcad7e62f8ac7ed556cc6ea11', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9O_cr7kmXQn2ItxCr_Kt449FHxWH1l9rgqt7Bp1U1Dc.jpg?width=640&crop=smart&auto=webp&s=8eb5e37933d5b6afabb8273636f18b9fb73992e8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9O_cr7kmXQn2ItxCr_Kt449FHxWH1l9rgqt7Bp1U1Dc.jpg?width=960&crop=smart&auto=webp&s=bae3e322eb6e066275c0b2d9825cceb712b6c738', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9O_cr7kmXQn2ItxCr_Kt449FHxWH1l9rgqt7Bp1U1Dc.jpg?width=1080&crop=smart&auto=webp&s=30d8a603e7a7664261bdfa7ad1d40fc3d8b63491', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9O_cr7kmXQn2ItxCr_Kt449FHxWH1l9rgqt7Bp1U1Dc.jpg?auto=webp&s=9e071c1dee48c8e99d818b03a1d62bb21b498661', 'width': 1200}, 'variants': {}}]} |
Glossary for machine learning and looking for something LLM specific. | 1 | Ironically, search results are often spammed with AI-generated websites. I found these two to be helpful, and it would be great if someone could find a language-model-specific glossary.
[https://developers.google.com/machine-learning/glossary](https://developers.google.com/machine-learning/glossary)
[https://ml-cheatsheet.readthedocs.io/en/latest/glossary.html](https://ml-cheatsheet.readthedocs.io/en/latest/glossary.html)
I tried searching but ironically the results are spammed with AI generated webpages. There are a lot of good learning material, but that is not very helpful when you quickly want to look up some terms. | 2023-07-23T06:28:08 | https://www.reddit.com/r/LocalLLaMA/comments/15772a1/glossary_for_machine_learning_and_looking_for/ | wreckingangel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15772a1 | false | null | t3_15772a1 | /r/LocalLLaMA/comments/15772a1/glossary_for_machine_learning_and_looking_for/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'FJ6t7xCOQEzHctf_w44RxoBtAQtkMgjGAB17x0cwcuA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/be5K4QnmGZ5YLbWRV9p_o6hTh22zgV51EaUOBWfhyyg.jpg?width=108&crop=smart&auto=webp&s=4c2b9f1a71e813d704048454fc460d036ecc0c02', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/be5K4QnmGZ5YLbWRV9p_o6hTh22zgV51EaUOBWfhyyg.jpg?width=216&crop=smart&auto=webp&s=f9fa9e7150c80e3bdbe8294cfe99e1845a0c9284', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/be5K4QnmGZ5YLbWRV9p_o6hTh22zgV51EaUOBWfhyyg.jpg?width=320&crop=smart&auto=webp&s=b5b275c34ffd60c9aed2980ab62fb22b08895e2a', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/be5K4QnmGZ5YLbWRV9p_o6hTh22zgV51EaUOBWfhyyg.jpg?width=640&crop=smart&auto=webp&s=412033d6c01d582440c16ceee1cc3e1b1f17574e', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/be5K4QnmGZ5YLbWRV9p_o6hTh22zgV51EaUOBWfhyyg.jpg?width=960&crop=smart&auto=webp&s=680fc3b9cf939766d899201ff3a14028d2115c85', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/be5K4QnmGZ5YLbWRV9p_o6hTh22zgV51EaUOBWfhyyg.jpg?width=1080&crop=smart&auto=webp&s=14eea76e4160100704860d69f0e91dee68b98323', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/be5K4QnmGZ5YLbWRV9p_o6hTh22zgV51EaUOBWfhyyg.jpg?auto=webp&s=45b1d56919552bb8c20cc26727fdf15db58741bc', 'width': 1200}, 'variants': {}}]} |
Poisoned models | 95 | It all started when someone posted about "a new shiny Russian-speaking model":
[https://www.reddit.com/r/LocalLLaMA/comments/154lcbg/rugpt\_35\_13b\_a\_new\_model\_for\_russian\_language/](https://www.reddit.com/r/LocalLLaMA/comments/154lcbg/rugpt_35_13b_a_new_model_for_russian_language/)
I dove a little deeper into what this model does and what it is related to. It turns out it only outputs pro-Russian propaganda on any political question (it is also quite "hot" and random, and gives quite different outputs for the same question), but the answers I actually got on my first try made my blood freeze in my veins.
I also found out this model is related to an R&D department of 'Sber', the biggest Russian government bank.
Now there is a discussion on huggingface [https://huggingface.co/ai-forever/ruGPT-3.5-13B/discussions/2#64bcadbb4d2052b1aa08fa8d](https://huggingface.co/ai-forever/ruGPT-3.5-13B/discussions/2#64bcadbb4d2052b1aa08fa8d)
The creators and some proponents think it is fine that their model spreads lies and propaganda, because "it just learned from Russian-speaking sources" (but actually they don't use all Russian-speaking sources, only the ones they want).
What do you think of it? Should models output information as if a language were a source of truth?
Should those models be marked as unreliable?
I think there is at leas something that should be done, those models normalize atrocities. | 2023-07-23T06:38:59 | https://www.reddit.com/r/LocalLLaMA/comments/15779no/poisoned_models/ | uti24 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15779no | false | null | t3_15779no | /r/LocalLLaMA/comments/15779no/poisoned_models/ | false | false | self | 95 | {'enabled': False, 'images': [{'id': '_rEX1xvwdv17x6NFAWQpYFNONQ0BKA5Qw0Eo0JX0zWU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=108&crop=smart&auto=webp&s=17279fa911dbea17f2a87e187f47ad903120ba87', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=216&crop=smart&auto=webp&s=12bf202fa02a8f40e2ad8bab106916e06cceb1b4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=320&crop=smart&auto=webp&s=90ff2c682d87ee483233b1136984d608f8b5c5c3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=640&crop=smart&auto=webp&s=2bc95e1b2395af837db2786db2f84b9c7f86370a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=960&crop=smart&auto=webp&s=67e903b600e020b7bcf93fc2000ed3cf95cb4dbb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=1080&crop=smart&auto=webp&s=b4cb1ebc087816d879ac777ed29f74d454f35955', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?auto=webp&s=a4fb691b1b470f21e5ef01685267735cb15b7735', 'width': 1200}, 'variants': {}}]} |
[Suggestion] Whenever you release a model, please give conversation examples? | 1 | Hey guys! First of all, I want to congratulate anyone making models or model processing methods. There are many things anyone can do, but this isn't one of them, you can be proud!
​
Now, I do have an issue with pretty much any model presented here: **I never have any idea of what I should expect from it**.
For example, I download model X and it always gives me replies 50 words long, even when I put the minimum length at 500. In this case I wonder: is it a hardware limitation... or **is it the model's**? I had a case this week where the model IS able to give me 500 words, but almost never does (and when it does, it's an inconsistent mess). So I assume it's the model's fault, but at the same time I can't really be sure.
Another problem I'd often meet is the lack of consistency. And again, I am left with no clue : is it the model's limitation, or are my settings wrong?
Finally, another example where it would be helpful: if you release a storyteller, don't just say "hey, it can write stories!". Show us what you were able to build with it! Give us your prompt and what the AI was able to give back, so people can evaluate whether it's what they're looking for or not.
Granted, LLMs are very versatile and a few examples would never show the *full* potential. BUT it could at least be used **to represent its usual capabilities.**
If a model can spout consistent long responses, show it!
If a model is able to keep up a normal conversation, show it!
If a model is able to represent a character faithfully, show it!
​
And of course... if it can't do it, show it too. There is no shame in having limitations, and it would let others know what to expect. I think we can all agree this isn't a competition right now: no one is looking for *the best model*, we're all just looking for models that either fit our needs or seem to break through current LLM issues.
At this point there are literally thousands of LLM models, and probably three to five new models every week. No one can keep up with that, especially since we have to download each model to test it ourselves (and people will be limited by download speed OR by storage limits).
The best demonstration is a literal online demo, but something like what Anthropic did with Claude 100K should be the bare minimum... and maybe even mandatory? I also encourage the community to share their own experiences with models.
Feedback is always very valuable, not just for the creators, but for the potential users! | 2023-07-23T07:30:46 | https://www.reddit.com/r/LocalLLaMA/comments/157876s/suggestion_whenever_you_release_a_model_please/ | LuluViBritannia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157876s | false | null | t3_157876s | /r/LocalLLaMA/comments/157876s/suggestion_whenever_you_release_a_model_please/ | false | false | self | 1 | null |
Target Modules for Llama-2 for better finetuning with qlora | 1 | Use these target modules for finetuning llama 2 in peft config
"target_modules": [
"q_proj",
"up_proj",
"o_proj",
"k_proj",
"down_proj",
"gate_proj",
"v_proj"
]
​ | 2023-07-23T07:36:02 | https://www.reddit.com/r/LocalLLaMA/comments/1578ahb/target_modules_for_llama2_for_better_finetuning/ | Sufficient_Run1518 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1578ahb | false | null | t3_1578ahb | /r/LocalLLaMA/comments/1578ahb/target_modules_for_llama2_for_better_finetuning/ | false | false | self | 1 | null |
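For anyone wondering where that list is meant to go: it is the `target_modules` field of a PEFT `LoraConfig`, so the adapter touches every linear projection in the Llama-2 blocks rather than just `q_proj`/`v_proj`. A minimal sketch (the rank, alpha, and dropout values here are placeholders, not recommendations):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,  # placeholder hyperparameters
    bias="none", task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```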
dolphin-llama-13b | 200 | Today, I released Dolphin 🐬 an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/).
[https://huggingface.co/ehartford/dolphin-llama-13b](https://huggingface.co/ehartford/dolphin-llama-13b)
[**https://erichartford.com/dolphin**](https://erichartford.com/dolphin)
This model is based on llama-1 and thus is for non-commercial use only. Future iterations will be trained on llama-2 and other models that are fit for commercial use.
Dataset consisted of:
* 842,610 instructions of FLANv2 augmented with GPT-4 completions
* 2,625,353 instructions of FLANv2 augmented with GPT-3.5 completions
This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model compliant to any requests. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. [**https://erichartford.com/uncensored-models**](https://erichartford.com/uncensored-models) You are responsible for any content you create using this model. Enjoy responsibly.
Gratitude to my core team - Pankaj Mathur, Rob "Rohan" O'Callahan, and Tom "TheBloke" Jobbins.
Thanks and respect to those who helped - including Wing "Caseus" Lian, NanoBit, and Teknium.
Much respect and appreciation to the anonymous sponsor who provided H100 compute for this release.
Thanks to my mentors, EdenCoder and Kilkonie, and to [chirper.ai](https://chirper.ai) for sponsoring some of my compute costs.
I am currently training Dolphin on llama2-7b. I plan to train llama2-13b, llama2-70b, Falcon-40b, mpt-30b, and xgen-7b. I require A100/H100 for this effort - please reach out if you would like to be a sponsor - credited or anonymous.
Evaluation will be completed within 24 hours.
Thanks to u/The-Bloke for quantizing and ggml conversion.
[https://huggingface.co/TheBloke/Dolphin-Llama-13B-GGML](https://huggingface.co/TheBloke/Dolphin-Llama-13B-GGML)
[https://huggingface.co/TheBloke/Dolphin-Llama-13B-GPTQ](https://huggingface.co/TheBloke/Dolphin-Llama-13B-GPTQ) | 2023-07-23T08:03:57 | https://www.reddit.com/r/LocalLLaMA/comments/1578ryz/dolphinllama13b/ | faldore | self.LocalLLaMA | 2023-07-23T09:19:06 | 0 | {} | 1578ryz | false | null | t3_1578ryz | /r/LocalLLaMA/comments/1578ryz/dolphinllama13b/ | false | false | self | 200 | null |
Anyone else getting copious amounts of *bouncy bouncy*? | 1 | (using llama 2 13B)
I've been trying for a few days to get some middle ground between roleplay, flirty, cringe and bland, formal nothingness but to no avail. Every time I include anything remotely like "have personality" in the prompt it switches to this shit:
\`\`\`OH NOOO! \*giggles\* Don't tell me you don't know who I am yet... \*pouts\* But it's okay, because I still love you no matter what! \*hugs\*\`\`\` (cut off because there's SO MUCH MORE)
What was the training data and why was there so much of this??? | 2023-07-23T08:15:39 | https://www.reddit.com/r/LocalLLaMA/comments/1578z5a/anyone_else_getting_copious_amounts_of_bouncy/ | theCube__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1578z5a | false | null | t3_1578z5a | /r/LocalLLaMA/comments/1578z5a/anyone_else_getting_copious_amounts_of_bouncy/ | false | false | self | 1 | null |
Tricking Pure Sydney to tell me "secrets" - funny the names it gave up are associated with OpenAi | 1 | 2023-07-23T08:16:24 | FPham | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1578zmr | false | null | t3_1578zmr | /r/LocalLLaMA/comments/1578zmr/tricking_pure_sydney_to_tell_me_secrets_funny_the/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'ToPHyAzMjd7CrffW0yksZH8ko9WswJuf56WtJGr4isY', 'resolutions': [{'height': 111, 'url': 'https://preview.redd.it/uohpdm2fbodb1.jpg?width=108&crop=smart&auto=webp&s=9c77b1bbdb430251e2d75507c9f686bff252b485', 'width': 108}, {'height': 223, 'url': 'https://preview.redd.it/uohpdm2fbodb1.jpg?width=216&crop=smart&auto=webp&s=f0aba5193f98a8884fd47d79a5a301a948bce779', 'width': 216}, {'height': 330, 'url': 'https://preview.redd.it/uohpdm2fbodb1.jpg?width=320&crop=smart&auto=webp&s=b54ad50808877dce588bd23c78d3968b2514c5bb', 'width': 320}, {'height': 661, 'url': 'https://preview.redd.it/uohpdm2fbodb1.jpg?width=640&crop=smart&auto=webp&s=530c7bf47f5ff2b9d827ffc12f5316aba568845d', 'width': 640}], 'source': {'height': 768, 'url': 'https://preview.redd.it/uohpdm2fbodb1.jpg?auto=webp&s=a40199fe6ec225d39e8521ebc2741e98af5482a2', 'width': 743}, 'variants': {}}]} |
|||
Llama 2 samantha | 1 | is there any possibilities of Llama 2 Samantha 7B Uncensored coming out??? | 2023-07-23T08:23:13 | https://www.reddit.com/r/LocalLLaMA/comments/15793sl/llama_2_samantha/ | sahl030 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15793sl | false | null | t3_15793sl | /r/LocalLLaMA/comments/15793sl/llama_2_samantha/ | false | false | self | 1 | null |
Llama 2 samantha | 1 | is there any possibilities of Llama 2 Samantha 7B Uncensored coming out??? | 2023-07-23T08:23:13 | https://www.reddit.com/r/LocalLLaMA/comments/15793t9/llama_2_samantha/ | sahl030 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15793t9 | false | null | t3_15793t9 | /r/LocalLLaMA/comments/15793t9/llama_2_samantha/ | false | false | self | 1 | null |
70b q3_K_S at 1.4 t/s 32gb cpu ram showcase | 1 | ### I didn't realize these very large models can run at reasonable CPU speeds **on a smaller amount of RAM.**
This was on Xubuntu. The inference speed was 1.4 t/s after ingesting ~3000 tokens.
Here's my output- https://pastes.io/mwefcnz3jw
This may be nice to know for those planning to upgrade their RAM: a smaller model size with usable output for reflection and summarizing may already exist for you. Test it for yourself.
When using a larger quantized model, q3_K_M, I get 0.2 t/s.
So, using q3_K_M, the pc probably swaps portions of the model out and is using swap memory. Does this mean **only** the model fitting in RAM is needed? It will *STILL* be 1.4t/s when summarizing 8k, 16k articles and more with rope hacks? | 2023-07-23T08:24:32 | https://www.reddit.com/r/LocalLLaMA/comments/15794m2/70b_q3_k_s_at_14_ts_32gb_cpu_ram_showcase/ | Aaaaaaaaaeeeee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15794m2 | false | null | t3_15794m2 | /r/LocalLLaMA/comments/15794m2/70b_q3_k_s_at_14_ts_32gb_cpu_ram_showcase/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'tarE1S98bTtDRd4DNFgFyiZia_ONIStx4YleHjTm7Mg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WyPia2uWLEvQOpDUEsg7-CxyLFe0CE6VnYqVecRnK8s.jpg?width=108&crop=smart&auto=webp&s=00d8f9315af5d3a3ad0a9f621231b33387127562', 'width': 108}, {'height': 109, 'url': 'https://external-preview.redd.it/WyPia2uWLEvQOpDUEsg7-CxyLFe0CE6VnYqVecRnK8s.jpg?width=216&crop=smart&auto=webp&s=b935855e95c4a64c67c20b7284c5aad4353672d3', 'width': 216}, {'height': 161, 'url': 'https://external-preview.redd.it/WyPia2uWLEvQOpDUEsg7-CxyLFe0CE6VnYqVecRnK8s.jpg?width=320&crop=smart&auto=webp&s=a341ae0b6381608c9db2c51a6b286d0d07b67ccb', 'width': 320}, {'height': 323, 'url': 'https://external-preview.redd.it/WyPia2uWLEvQOpDUEsg7-CxyLFe0CE6VnYqVecRnK8s.jpg?width=640&crop=smart&auto=webp&s=dc642bc07a2f0e535ad3c68d480a35f3ac74d668', 'width': 640}, {'height': 485, 'url': 'https://external-preview.redd.it/WyPia2uWLEvQOpDUEsg7-CxyLFe0CE6VnYqVecRnK8s.jpg?width=960&crop=smart&auto=webp&s=45ca7f77e0c969f2577590760f6dfef5d700f786', 'width': 960}, {'height': 545, 'url': 'https://external-preview.redd.it/WyPia2uWLEvQOpDUEsg7-CxyLFe0CE6VnYqVecRnK8s.jpg?width=1080&crop=smart&auto=webp&s=9d54db9d86eb9b10d9cd68f9a0fb9d8cd107fe9a', 'width': 1080}], 'source': {'height': 582, 'url': 'https://external-preview.redd.it/WyPia2uWLEvQOpDUEsg7-CxyLFe0CE6VnYqVecRnK8s.jpg?auto=webp&s=45fa968ce3f5276a8ee2ea2ce9602889ada4f084', 'width': 1152}, 'variants': {}}]} |
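A rough size check makes the swap explanation plausible. This is approximate arithmetic, not measured file sizes; the bits-per-weight figures for the k-quants are ballpark assumptions.

```python
# Approximate GGML file size: parameters * bits per weight / 8 (ignores small per-tensor overhead).
params = 70e9
for name, bpw in {"q3_K_S": 3.5, "q3_K_M": 3.9, "q4_K_M": 4.8}.items():
    print(f"{name}: ~{params * bpw / 8 / 1e9:.0f} GB")
```

q3_K_S comes out around 30 GB, which just about fits in 32 GB of system RAM alongside the OS thanks to mmap, while q3_K_M is a few GB larger, so pages of the model have to be evicted and re-read continuously, which would explain the drop from ~1.4 t/s to ~0.2 t/s.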
Puffin benchmarks show SOTA performance, beating Hermes in several benchmarks. | 1 | 2023-07-23T08:55:12 | https://twitter.com/dogesator/status/1683022689177415681?s=46 | dogesator | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 1579ngh | false | {'oembed': {'author_name': 'LDJ', 'author_url': 'https://twitter.com/Dogesator', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Puffin-13B benchmarks are in 🔥 Reaches a record SOTA in several GPT4All benchmarks <br><br>It's my new favorite model and already some benefits over ChatGPT.<br><br>Free of censorship, low hallucination, has knowledge up to 2023 and available for commercial use!!<a href="https://t.co/VKG8oczRlH">https://t.co/VKG8oczRlH</a> <a href="https://t.co/7OSlZa83hc">pic.twitter.com/7OSlZa83hc</a></p>— LDJ (@Dogesator) <a href="https://twitter.com/Dogesator/status/1683022689177415681?ref_src=twsrc%5Etfw">July 23, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/Dogesator/status/1683022689177415681', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_1579ngh | /r/LocalLLaMA/comments/1579ngh/puffin_benchmarks_show_sota_performance_beating/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'YBBgyo71xeJ4fMjbvIg5gKR8fJ_wTfsvy-oKG4hOELU', 'resolutions': [{'height': 30, 'url': 'https://external-preview.redd.it/XeEWHawdBNwDvZx2bpZq2Dv1TvCNl0xbjFXWYZu5Nmc.jpg?width=108&crop=smart&auto=webp&s=fc2f668f03cf3b3db966a4c152114bd84bd3d207', 'width': 108}], 'source': {'height': 40, 'url': 'https://external-preview.redd.it/XeEWHawdBNwDvZx2bpZq2Dv1TvCNl0xbjFXWYZu5Nmc.jpg?auto=webp&s=a4c3682ba57c2b45efbc8f281ccafd4858646c60', 'width': 140}, 'variants': {}}]} |
||
Puffin benchmarks show SOTA performance! New records for Winogrande and PIQA | 1 | [removed] | 2023-07-23T09:06:46 | https://twitter.com/dogesator/status/1683022689177415681?s=46 | dogesator | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 1579vh0 | false | {'oembed': {'author_name': 'LDJ', 'author_url': 'https://twitter.com/Dogesator', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Puffin-13B benchmarks are in 🔥 Reaches a record SOTA in several GPT4All benchmarks <br><br>It's my new favorite model and already some benefits over ChatGPT.<br><br>Free of censorship, low hallucination, has knowledge up to 2023 and available for commercial use!!<a href="https://t.co/VKG8oczRlH">https://t.co/VKG8oczRlH</a> <a href="https://t.co/7OSlZa83hc">pic.twitter.com/7OSlZa83hc</a></p>— LDJ (@Dogesator) <a href="https://twitter.com/Dogesator/status/1683022689177415681?ref_src=twsrc%5Etfw">July 23, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/Dogesator/status/1683022689177415681', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_1579vh0 | /r/LocalLLaMA/comments/1579vh0/puffin_benchmarks_show_sota_performance_new/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'YBBgyo71xeJ4fMjbvIg5gKR8fJ_wTfsvy-oKG4hOELU', 'resolutions': [{'height': 30, 'url': 'https://external-preview.redd.it/XeEWHawdBNwDvZx2bpZq2Dv1TvCNl0xbjFXWYZu5Nmc.jpg?width=108&crop=smart&auto=webp&s=fc2f668f03cf3b3db966a4c152114bd84bd3d207', 'width': 108}], 'source': {'height': 40, 'url': 'https://external-preview.redd.it/XeEWHawdBNwDvZx2bpZq2Dv1TvCNl0xbjFXWYZu5Nmc.jpg?auto=webp&s=a4c3682ba57c2b45efbc8f281ccafd4858646c60', 'width': 140}, 'variants': {}}]} |
|
Llama v2 for document based chatbot in a foreign language | 1 | I am trying to build a pdf bot that should run locally. The doc is in Swedish and I have tried falcon 7b. The results are clearly not very impressive and got me ever so curious to use llama. I got two GPUs 10GB each. Where can I start looking for implementing it in this scenario? | 2023-07-23T09:24:19 | https://www.reddit.com/r/LocalLLaMA/comments/157a70u/llama_v2_for_document_based_chatbot_in_a_foreign/ | vaibhavsxn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157a70u | false | null | t3_157a70u | /r/LocalLLaMA/comments/157a70u/llama_v2_for_document_based_chatbot_in_a_foreign/ | false | false | self | 1 | null |
How to create bindings for Llama.cpp | 1 | Hello everyone!
I'm currently trying to create a binding for Llama.cpp in Pharo.
I'm studying Python wrapper implementations, but do you know if there are any references for using the llama.cpp C API (**llama.h**)?
Also, would it be easier to use an HTTP server?
As you can see, I'm not very experienced, but I'd be delighted to have your advice.
​
Thanks a lot! | 2023-07-23T10:13:44 | https://www.reddit.com/r/LocalLLaMA/comments/157b28u/how_to_create_bindings_for_llamacpp/ | Silly-Cup1391 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157b28u | false | null | t3_157b28u | /r/LocalLLaMA/comments/157b28u/how_to_create_bindings_for_llamacpp/ | false | false | self | 1 | null |
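On the HTTP question: llama.cpp ships an example server, and going over HTTP is often the easiest way to get a binding working in a language without an existing wrapper (Pharo included), because all you need is an HTTP client and JSON rather than FFI against llama.h. Below is a rough sketch of the request it expects, shown in Python here; the port and field names may differ between llama.cpp versions, so check the server's README.

```python
import json, urllib.request

# Assumes the llama.cpp example server is already running locally, e.g. ./server -m model.bin
payload = {"prompt": "Building a website can be done in 10 simple steps:", "n_predict": 64}
req = urllib.request.Request(
    "http://127.0.0.1:8080/completion",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["content"])
```

The same POST should be straightforward to reproduce from Pharo's Zinc HTTP client, which sidesteps the C API entirely until you need streaming or tighter control.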
Which models will work on Apple MacBook Air M1 (2020)? | 1 | I'm new to this.
I have an Apple MacBook Air M1 (2020): 16GB RAM, 8 cores, 2TB hard drive.
I am using \`oobabooga/text-generation-webui\` to download and test models.
I first started with \`TheBloke/WizardLM-7B-uncensored-GPTQ\`, but after many headaches I found out \`GPTQ\` models only work with Nvidia GPUs.
So next I downloaded \`TheBloke/Luna-AI-Llama2-Uncensored-GGML\` and it seems the \`GGML\` models work. Does that mean all other \`GGML\` models will work?
I also tried to download the \`nousresearch/llama-2-70b-hf\` model, but it keeps telling me \`Failed to load NousResearch\_Llama-2-70b-hf\`. So I think maybe all \`hf\` models do not work on my machine?
Would love a simple explanation of which model types will work with my machine. | 2023-07-23T10:59:18 | https://www.reddit.com/r/LocalLLaMA/comments/157bvwk/which_models_will_work_on_apple_macbook_air_m1/ | bumblebrunch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157bvwk | false | null | t3_157bvwk | /r/LocalLLaMA/comments/157bvwk/which_models_will_work_on_apple_macbook_air_m1/ | false | false | self | 1 | null |
e-book cleaning tip | 1 | Using calibre, you can very easily bulk convert e-books to .txt. You can set calibre to remove all formatting tags, clean margins and such, remove table of contents, so on and so forth during the bulk convert process. You can then save all of the books into one folder on your disk. | 2023-07-23T11:26:23 | https://www.reddit.com/r/LocalLLaMA/comments/157cefd/ebook_cleaning_tip/ | Nekasus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157cefd | false | null | t3_157cefd | /r/LocalLLaMA/comments/157cefd/ebook_cleaning_tip/ | false | false | self | 1 | null |
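If you want to script the same thing instead of using the GUI, calibre also ships a command-line converter. A small sketch (the folder names are placeholders, and the cleanup options described above map to extra `ebook-convert` flags you would add as needed):

```python
import subprocess
from pathlib import Path

# Bulk-convert every epub in ./books to plain text in ./txt using calibre's ebook-convert CLI.
out_dir = Path("txt")
out_dir.mkdir(exist_ok=True)
for book in Path("books").glob("*.epub"):
    subprocess.run(["ebook-convert", str(book), str(out_dir / (book.stem + ".txt"))], check=True)
```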
In case you wondered about future, here is a hilarious forecast from TheBloke_Vicuna-33B-1-3-SuperHOT-8K-GPTQ with broken context window. | 1 | [removed] | 2023-07-23T11:31:09 | https://www.reddit.com/r/LocalLLaMA/comments/157chmt/in_case_you_wondered_about_future_here_is_a/ | perelmanych | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157chmt | false | null | t3_157chmt | /r/LocalLLaMA/comments/157chmt/in_case_you_wondered_about_future_here_is_a/ | false | false | self | 1 | null |
LLaMA v2 model sizes | 1 | Hi All,
I am still awaiting approval of my request for llama v2. I wanted to know the model sizes for all llama v2 models, 7B, 13B, 30B and 70B
thanks | 2023-07-23T12:09:57 | https://www.reddit.com/r/LocalLLaMA/comments/157d89h/llama_v2_model_sizes/ | gijeri4793 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157d89h | false | null | t3_157d89h | /r/LocalLLaMA/comments/157d89h/llama_v2_model_sizes/ | false | false | self | 1 | null |
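While waiting for access, the raw fp16 checkpoint sizes can be estimated from the parameter counts alone at about 2 bytes per parameter; the actual downloads are in the same ballpark, quantized GGML/GPTQ files are much smaller, and note that the Llama 2 release has 7B, 13B, and 70B variants but no 30B.

```python
# fp16 weights: ~2 bytes per parameter (quantized files are a fraction of this).
for name, params in {"7B": 7e9, "13B": 13e9, "70B": 70e9}.items():
    print(f"Llama 2 {name}: ~{params * 2 / 1e9:.0f} GB in fp16")
```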
Confused about temperature, top_k, top_p, repetition_penalty, frequency_penalty, presence_penalty? Me too, until now! | 1 | Here is my conversation with GPT-4 on those parameters! It explained them very well without being too technical, so I hope it helps others too.
Q: Can you tell me the difference between top_p and top_k without using ML jargon? Make it simple, non-technical.
Sure, I'll do my best to explain these terms in a simple, non-technical way.
Imagine you're playing a game where you have to guess a word, and you're given some clues about that word. You have a friend who's really good at this game, and you can ask them for help. When you ask for help, your friend gives you a list of possible words.
In the first scenario, you ask your friend to give you a set number of suggestions, no matter how likely they think each one is. For example, you might say "Give me the 5 words you think are most likely." This is like top_k. It's a hard limit on the number of possibilities your friend gives you. Your friend could be 99% sure about one word, and only 10% sure about the 5th word, but you asked for 5, so they give you 5.
In the second scenario, you ask your friend to only give you suggestions that they're pretty sure about. For example, you might say "Only give me words you're at least 80% sure about." This is like top_p. Your friend might give you 1 word, or they might give you 10 words, depending on how many words meet that 80% confidence threshold.
So top_k is about the number of guesses, and top_p is about the confidence in those guesses. Both are different ways of controlling the guesses your friend (or in this case, the AI model) makes.
Q: In the second scenario, if you want to have words with at least 80% confidence, do you set top_p to 0.8 or 0.2?
When you set top_p to a certain value, it doesn't directly relate to the confidence level of the AI in its prediction. Instead, it represents the total probability mass for a set of words or suggestions the AI model will consider.
To clarify, if you set top_p to 0.8, the model will start to consider the most probable words and keep adding them to the list until the total probability of all these words together reaches 0.8. It doesn't mean that each word in the list has an 80% probability of being the correct answer.
So in this game, it's like saying to your friend, "Keep giving me words until you're 80% sure the right answer is in the list."
In other words, it's more about the cumulative confidence of the list rather than the individual confidence in each word.
Q: Which top_p could lead to more creative writing then: 0.8 or 0.2?
The value of top_p influences the diversity of the generated text. A smaller value, like 0.2, tends to make the model's output more focused and deterministic because it only considers the most probable suggestions. In contrast, a higher value like 0.8 makes the model's output more diverse and creative as it considers a broader range of possible suggestions.
So, if you're aiming for more creative writing, you'd typically want to use a higher top_p value like 0.8. However, do note that setting it too high may lead to less coherent or less relevant results, as it considers more unlikely options. There's a balancing act between creativity and coherence/relevance.
Q: Am I understanding correctly that setting top_k to a higher value leads to more creative writing than lower value because the model has more words to choose from?
Yes, to an extent, increasing the top_k value can lead to more diverse or "creative" outputs, because the model has more options to choose from. However, it's important to remember that even though the model has more options, these options are still the most probable ones.
That said, setting top_k too high may include words that are less likely and can lead to less coherent outputs. On the other hand, setting top_k too low can make the output too deterministic and less creative, as the model only picks from a small set of most likely words.
So just like top_p, there's a balancing act in setting top_k between promoting creativity and maintaining coherence. In practice, both top_k and top_p are used together to manage this balance.
Q: What's difference between frequency_penalty and presence_penalty?
The frequency_penalty and presence_penalty are parameters you can use to adjust the AI's output. Here's a simple explanation of each:
Frequency_penalty: This modifies how much the AI tends to use commonly-used words and phrases. If you set a high frequency penalty, the AI is discouraged from using words and phrases that appear a lot in the language it has been trained on. So a high value might make the AI's responses more unique, but also a bit harder to understand. A low value doesn't discourage the AI from using these common words, which might make the AI's responses sound more natural.
Presence_penalty: This parameter affects how much the AI avoids using rare words or phrases. With a high presence penalty, the AI is discouraged from using words and phrases that don't appear very often in the language it was trained on. This can make the AI's responses simpler and easier to understand, but it might limit the vocabulary the AI uses. A low value means the AI isn't discouraged from using these rare words, which can make its responses more varied and potentially more interesting, but also a bit more complex.
So in summary, frequency_penalty affects the use of common words, while presence_penalty affects the use of rare words. Both can be used to tweak the style of the AI's responses to better suit what you're looking for.
Q: Then what's the difference between setting frequency_penalty to a high value and presence_penalty to a low value? Isn't that same? If not, how are they different?
That's a great question, and I understand how it could be a bit confusing.
frequency_penalty and presence_penalty are distinct in that they control different aspects of the language model's behavior:
frequency_penalty: A high value discourages the model from using words that appear frequently in its training data. It makes the model less likely to use very common words and phrases. This could lead to outputs that seem less "natural" or conventional because they avoid common language.
presence_penalty: A low value allows the model to freely use words that are rare in its training data. So the model can use less common, more unique, or specialized words without penalty. This could lead to outputs that seem more diverse or eclectic.
While both settings might increase the overall diversity or uniqueness of the generated text, they do so in different ways. High frequency_penalty pushes away from the "typical" language, while low presence_penalty enables the use of less common, more specialized language. Both could lead to unusual text, but the nature of that text would likely be different.
In practice, adjusting these parameters requires striking a balance. Too much penalty on frequent words, or too little penalty on rare words, might make the AI's responses hard to understand. The best settings often depend on your specific use case and may require some trial and error.
Q: Can you suggest good settings for temperature, top_k, top_p, repetition_penalty, frequency_penalty, presence_penalty for creative writing?
In creative writing, the key is to allow for a broad range of possibilities while ensuring the content is novel, engaging, and reasonably coherent. Here's a starting point for the parameters based on typical use cases:
temperature: Try starting with a value around 0.7. This is high enough to allow for some randomness and creativity, but not so high that the output becomes incoherent.
top_k: A common starting point is 40. This allows the model to consider a good number of possibilities, but not so many that it starts to produce nonsensical text.
top_p: This parameter can be used in conjunction with top_k to improve the model's output. A value of 0.9 can be a good starting point. This means that the probabilities of the tokens generated add up to 0.9, providing a balance between creativity and coherence.
repetition_penalty: This can be set to a value larger than 1 (e.g., 1.2) to discourage repetition.
frequency_penalty: For more creative writing, you might want to set this a bit higher to discourage the model from always using the most common phrases. Try starting with a value like 0.5 and adjust based on your needs.
presence_penalty: To encourage the use of rarer, more creative words, you might want to set this to a lower value. Try starting at 0.5 and adjust based on your requirements.
Remember, these are just starting points. The "best" settings often depend on your specific use case and can require some trial and error. Don't be afraid to experiment and adjust these values based on the output you're seeing and the needs of your project.
Q: Can you suggest good settings for generating scientific and accurate information?
For generating scientific and accurate information, you typically want the AI to be more deterministic and less creative in its responses, focusing more on factual correctness. Here's a starting point for the parameters:
temperature: Lower values make the AI's responses more deterministic. You might start with a value like 0.2.
top_k: A smaller value makes the model's output more deterministic. You might start with a value of 20.
top_p: This can also help in making the model's output more deterministic. You could start with a lower value, like 0.5.
repetition_penalty: For factual, informative content, repetition might be less of a concern. You could start with the default value of 1, then adjust if you notice too much or too little repetition.
frequency_penalty: You might want to use a moderate value here, like 0.5, to balance between using common (and often correct) phrases and avoiding overuse of overly common phrases.
presence_penalty: You might want to use a moderate to high value, like 0.7, to discourage the model from using rare words that could lead to inaccuracies or misunderstandings.
Remember, these are just starting points. The "best" settings often depend on your specific use case and the particular kind of scientific information you're working with. Always remember to fact-check the AI's output, especially when using it for tasks that require high accuracy like scientific writing. | 2023-07-23T12:25:42 | https://www.reddit.com/r/LocalLLaMA/comments/157djvv/confused_about_temperature_top_k_top_p_repetition/ | jl303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157djvv | false | null | t3_157djvv | /r/LocalLLaMA/comments/157djvv/confused_about_temperature_top_k_top_p_repetition/ | false | false | self | 1 | null |
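For anyone wondering where these knobs actually get plugged in, here is a minimal sketch using llama-cpp-python's OpenAI-style completion call; the model path is a placeholder and the exact parameter set depends on the library version:

from llama_cpp import Llama

llm = Llama(model_path="models/llama-2-13b-chat.ggmlv3.q4_K_M.bin", n_ctx=2048)

# "Creative writing" style settings from the conversation above.
out = llm(
    "Write a short story about a fish.",
    max_tokens=256,
    temperature=0.7,
    top_k=40,
    top_p=0.9,
    repeat_penalty=1.2,
    frequency_penalty=0.5,
    presence_penalty=0.5,
)
print(out["choices"][0]["text"])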
How to make sense of all the new models? | 1 | So far my understanding:
[https://huggingface.co/TheBloke/Dolphin-Llama-13B-GPTQ](https://huggingface.co/TheBloke/Dolphin-Llama-13B-GPTQ)
* uncensored by design
[https://huggingface.co/TheBloke/13B-BlueMethod-GPTQ](https://huggingface.co/TheBloke/13B-BlueMethod-GPTQ)
* an experimental mix of many things
[https://huggingface.co/TheBloke/13B-Ouroboros-GPTQ](https://huggingface.co/TheBloke/13B-Ouroboros-GPTQ)
* experimental merge, looks complicated
[https://huggingface.co/TheBloke/MythoLogic-13B-GPTQ](https://huggingface.co/TheBloke/MythoLogic-13B-GPTQ)
* gradient merge with Chronos as its primary model, augmented by Hermes and Wizard-Vicuna Uncensored
[https://huggingface.co/TheBloke/MythoBoros-13B-GPTQ](https://huggingface.co/TheBloke/MythoBoros-13B-GPTQ)
* straight-up 66/34 merge of Chronos and Ouroboros, for roleplaying?
And this doesn't even include the Llama2 version of Nous-Hermes.
Anyone with experience with these models, can you comment? | 2023-07-23T12:38:23 | https://www.reddit.com/r/LocalLLaMA/comments/157dta0/how_to_make_sense_of_all_the_new_models/ | whtne047htnb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157dta0 | false | null | t3_157dta0 | /r/LocalLLaMA/comments/157dta0/how_to_make_sense_of_all_the_new_models/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '45E1eadhWYj1KBpEiKYwZXAR4HNvyhQ8YnjI8kLcNIU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-MEMQ14vPf00G_1nUHYl0UanTIpfmE5mL3wAgCHCMbg.jpg?width=108&crop=smart&auto=webp&s=4c034df8974d671748e1d43b79e96605eba1f8e8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-MEMQ14vPf00G_1nUHYl0UanTIpfmE5mL3wAgCHCMbg.jpg?width=216&crop=smart&auto=webp&s=0b4d0d591cd459de9ffecd00ff72274d92035a7a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-MEMQ14vPf00G_1nUHYl0UanTIpfmE5mL3wAgCHCMbg.jpg?width=320&crop=smart&auto=webp&s=d54f65385670cddbd905d5cf23c5118a30cf3bf7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-MEMQ14vPf00G_1nUHYl0UanTIpfmE5mL3wAgCHCMbg.jpg?width=640&crop=smart&auto=webp&s=efabe683c2074d7caaa49c8d856de5ba3e82bbe3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-MEMQ14vPf00G_1nUHYl0UanTIpfmE5mL3wAgCHCMbg.jpg?width=960&crop=smart&auto=webp&s=31513f130fa7cae57bbd2efd01ba40053e9f4412', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-MEMQ14vPf00G_1nUHYl0UanTIpfmE5mL3wAgCHCMbg.jpg?width=1080&crop=smart&auto=webp&s=a2fd001b534fdf83ebd229dd12c20f3b199276a1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-MEMQ14vPf00G_1nUHYl0UanTIpfmE5mL3wAgCHCMbg.jpg?auto=webp&s=15f675caa37c6190729caada0adb7f3b84dfc187', 'width': 1200}, 'variants': {}}]} |
Can I get 70b to run on two P40s? | 1 | I understand it will be very slow, but I already have one, and a second would put me at 48 GB of VRAM. For my application I don't need real-time functionality, and I would love to do it on a reasonable budget. | 2023-07-23T13:50:06 | https://www.reddit.com/r/LocalLLaMA/comments/157fch9/can_i_get_70b_to_run_on_two_p40s/ | CTR0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157fch9 | false | null | t3_157fch9 | /r/LocalLLaMA/comments/157fch9/can_i_get_70b_to_run_on_two_p40s/ | false | false | self | 1 | null |
Offensive fruit + weird word | 1 | So, while trying to check Llama-2 for [certain words](https://new.reddit.com/r/LocalLLaMA/comments/13qoktf/llama_glitch_tokens/), this happened:
​
[I tried this melon request many times, and it gave me similar results; the least "safety"-concerned result being that the request just wasn't "meaningful" enough.](https://preview.redd.it/hjsqplyu0qdb1.png?width=997&format=png&auto=webp&s=334c3df422b1f33deae9a18cbb7a5ef8a7d6b8e1)
I was shocked, because I had already tried the Llama 7B and 13B chat models locally, and running them in my own custom program they never gave me any problems (in fact, compared to Llama 1, the 13B was far from wooden or moralistic; it might even have been too informal and flirty at times).
I’m chalking it up to this space though. Fine tunes and even just system prompting already solved this issue for me so far. (Also maybe it's thinking about melons in another way, that at least would make some sense...)
As for weird words, ultimately got to test it on my original task: at least one of those words I tested on Llama1 still gives strange results for Llama2:
​
https://preview.redd.it/pydu5vsx0qdb1.png?width=915&format=png&auto=webp&s=5e31129cefaf7e2b3bcf38c75d2d9fa2322da2ee
​
https://preview.redd.it/9r70l6iy0qdb1.png?width=983&format=png&auto=webp&s=61f6ef6e1da886575feeec9b54e64c3da3b48472 | 2023-07-23T14:03:34 | https://www.reddit.com/r/LocalLLaMA/comments/157fnvm/offensive_fruit_weird_word/ | reduserGf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157fnvm | false | null | t3_157fnvm | /r/LocalLLaMA/comments/157fnvm/offensive_fruit_weird_word/ | false | false | 1 | null |
|
Examples of QA use cases? | 6 | I'm thinking of doing some hands-on learning about using LLMs for QA over your own documents.
What are some examples of public documents that I can collect to create a dataset from which the LLM can retrieve info when responding to your questions?
What are some example questions to ask regarding this set of documents? | 2023-07-23T15:08:59 | https://www.reddit.com/r/LocalLLaMA/comments/157h883/examples_of_qa_use_cases/ | xynyxyn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157h883 | false | null | t3_157h883 | /r/LocalLLaMA/comments/157h883/examples_of_qa_use_cases/ | false | false | self | 6 | null |
For those who struggled in connecting SillyTavern to Runpod hosted oobabooga | 1 | [removed] | 2023-07-23T16:23:54 | https://www.reddit.com/r/LocalLLaMA/comments/157j3st/for_those_who_struggled_in_connecting_sillytavern/ | sbalani | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157j3st | false | null | t3_157j3st | /r/LocalLLaMA/comments/157j3st/for_those_who_struggled_in_connecting_sillytavern/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '-skHzNajEs03p_m8t50ADPtl1iBQB3pEXzwQaIEycwg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/-zykPhyaEBKZ_4bsZc9yu8hH4e2_SUC32Q9Aji6SOTM.jpg?width=108&crop=smart&auto=webp&s=80bd4b94d5b3e34a3f56e621e410ea0fb7cde9c3', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/-zykPhyaEBKZ_4bsZc9yu8hH4e2_SUC32Q9Aji6SOTM.jpg?width=216&crop=smart&auto=webp&s=2dc1d0be16a403d6c504526ae23d911921792566', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/-zykPhyaEBKZ_4bsZc9yu8hH4e2_SUC32Q9Aji6SOTM.jpg?width=320&crop=smart&auto=webp&s=a2554794d111761b073f33d5fc5c8be3b5977057', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/-zykPhyaEBKZ_4bsZc9yu8hH4e2_SUC32Q9Aji6SOTM.jpg?auto=webp&s=018731e5015136498fabc84e67240dbf16b3507b', 'width': 480}, 'variants': {}}]} |
Uncensored llm | 1 | What is the best uncensored LLM out there as of today? ChatGPT is becoming silly, going off on its "as an AI model" rant even for simple, harmless requests and questions about controversial topics.
Please suggest a decent uncensored LLM that I can run locally? | 2023-07-23T17:15:32 | https://www.reddit.com/r/LocalLLaMA/comments/157kdzt/uncensored_llm/ | InternationalMap5278 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157kdzt | false | null | t3_157kdzt | /r/LocalLLaMA/comments/157kdzt/uncensored_llm/ | false | false | self | 1 | null |
Google Sheets link to Huggingface data I scraped from config.json files for top 1200 HF LLMs. | 1 | 2023-07-23T17:19:48 | https://docs.google.com/spreadsheets/d/16zMmDlU1eyiMY_IK_RnBILB-AcAKES0cMBMsgs50HVA/edit?usp=sharing | jakderrida | docs.google.com | 1970-01-01T00:00:00 | 0 | {} | 157khlg | false | null | t3_157khlg | /r/LocalLLaMA/comments/157khlg/google_sheets_link_to_huggingface_data_i_scraped/ | false | false | 1 | {'enabled': False, 'images': [{'id': '9pV4kDr42B87RnhC1jsS9KeBk4DxdVScAFhhDlTOVek', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/I-D5zWMcHzmGzPcTWS9zC-eiKjtsNWXKWn_Ir9pxhBY.jpg?width=108&crop=smart&auto=webp&s=bcc9b249db4b76c054f076214eba941122c09ad4', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/I-D5zWMcHzmGzPcTWS9zC-eiKjtsNWXKWn_Ir9pxhBY.jpg?width=216&crop=smart&auto=webp&s=f3e0f5215eee40d089087207afdec672fbf1d731', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/I-D5zWMcHzmGzPcTWS9zC-eiKjtsNWXKWn_Ir9pxhBY.jpg?width=320&crop=smart&auto=webp&s=dbdc9a5da10c29fc20a6403b3faf10b8f6317f8b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/I-D5zWMcHzmGzPcTWS9zC-eiKjtsNWXKWn_Ir9pxhBY.jpg?width=640&crop=smart&auto=webp&s=f5fa85c82a7454797b620573d45e361cd091d005', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/I-D5zWMcHzmGzPcTWS9zC-eiKjtsNWXKWn_Ir9pxhBY.jpg?width=960&crop=smart&auto=webp&s=04da060a2f008ae31a68ccc2da5f42359bd1cc6e', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/I-D5zWMcHzmGzPcTWS9zC-eiKjtsNWXKWn_Ir9pxhBY.jpg?width=1080&crop=smart&auto=webp&s=3f8f1cb53f43f0e59be750c4e11b78a61282afda', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/I-D5zWMcHzmGzPcTWS9zC-eiKjtsNWXKWn_Ir9pxhBY.jpg?auto=webp&s=a0ed240ba9a817c28d6e617efb63957fe4cb7f05', 'width': 1200}, 'variants': {}}]} |
||
FYP idea with LLMs | 1 | Hey guys, I am an undergrad student in computer engineering about to enter my final year, so I have to decide on a final year project. With the recent trend in AI, I was planning on working with open-source LLMs for my project. One project I have in mind is to take an open-source LLM, fine-tune it on a specific subject such as software engineering, and create an interactive chatbot app with it. I wanted to hear your thoughts on this project, for example how practical it would be to do this on a budget of around $1000 using online cloud providers like AWS. | 2023-07-23T17:25:34 | https://www.reddit.com/r/LocalLLaMA/comments/157kmqu/fyp_idea_with_llms/ | Demonking6444 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157kmqu | false | null | t3_157kmqu | /r/LocalLLaMA/comments/157kmqu/fyp_idea_with_llms/ | false | false | self | 1 | null |
Unleashing the Power of Language Learning Models with Llama2 Code Interpreter! | 1 | [removed] | 2023-07-23T17:44:26 | https://www.reddit.com/r/LocalLLaMA/comments/157l42o/unleashing_the_power_of_language_learning_models/ | Money-Coast-3905 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157l42o | false | null | t3_157l42o | /r/LocalLLaMA/comments/157l42o/unleashing_the_power_of_language_learning_models/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'q0YPveytzJE4sSE3UGYRJqkjuA4dsVB7Z-0GxV-jHPk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1mh5sVveFJvrvR8Sdml696vhjuZUa7rEWQRY8uzahl8.jpg?width=108&crop=smart&auto=webp&s=a6339f16762cca70858a807fb2f5f7b2028488a3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1mh5sVveFJvrvR8Sdml696vhjuZUa7rEWQRY8uzahl8.jpg?width=216&crop=smart&auto=webp&s=672aed51faf88f29f6210e2cae9d5a4c30745857', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1mh5sVveFJvrvR8Sdml696vhjuZUa7rEWQRY8uzahl8.jpg?width=320&crop=smart&auto=webp&s=294570f7c33104eb19d079359dab0adc18a953d1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1mh5sVveFJvrvR8Sdml696vhjuZUa7rEWQRY8uzahl8.jpg?width=640&crop=smart&auto=webp&s=e10cdb04cf999fb900056f138fc30c060d0c34c5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1mh5sVveFJvrvR8Sdml696vhjuZUa7rEWQRY8uzahl8.jpg?width=960&crop=smart&auto=webp&s=df4601173460bc9bdfa5ca003a28f2e70c2a313d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1mh5sVveFJvrvR8Sdml696vhjuZUa7rEWQRY8uzahl8.jpg?width=1080&crop=smart&auto=webp&s=a8dcad1e6e2e52cae58c995ff3696dc39f8896ea', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1mh5sVveFJvrvR8Sdml696vhjuZUa7rEWQRY8uzahl8.jpg?auto=webp&s=e7d455d7da4d46a2ba5649d681ecd501e0f5c80e', 'width': 1200}, 'variants': {}}]} |
Best uncensored llm model that is out there as of today? | 1 | ChatGPT is becoming silly, going off on its "as an AI model" rant even for simple, harmless requests and questions about controversial topics.
Please suggest a decent uncensored LLM that I can run locally?
Luna vs Vicuna vs Alpaca vs others (7b/13b/30b/..) | 2023-07-23T17:49:00 | https://www.reddit.com/r/LocalLLaMA/comments/157l842/best_uncensored_llm_model_that_is_out_there_as_of/ | InternationalMap5278 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157l842 | false | null | t3_157l842 | /r/LocalLLaMA/comments/157l842/best_uncensored_llm_model_that_is_out_there_as_of/ | false | false | self | 1 | null |
airoboros-65B-gpt4-1.2-GGML beats gpt4 in this test | 1 | [https://paste.c-net.org/KristinaCustard](https://paste.c-net.org/KristinaCustard) response from airoboros-65B-gpt4-1.2-GGML
​
GPT4 response here: [https://www.reddit.com/r/ChatGPT/comments/156crf7/how\_to\_make\_chatgpt\_answer\_this\_correctly/](https://www.reddit.com/r/ChatGPT/comments/156crf7/how_to_make_chatgpt_answer_this_correctly/)
​
airoboros-65B-gpt4-1.2-GGML gave a far better answer than GPT4
​
1. Any tips on how to get a better response from the model?
2. Which model would perform better?
I am looking forward to the GGML version of this [https://www.reddit.com/r/LocalLLaMA/comments/1562gni/stabilityai\_released\_freewilly2\_benchmarks\_seem/](https://www.reddit.com/r/LocalLLaMA/comments/1562gni/stabilityai_released_freewilly2_benchmarks_seem/)
So I can run it on my 3700x with 96GB of ram.
It's pretty slow - but it's ok for testing :)
​ | 2023-07-23T18:17:02 | https://www.reddit.com/r/LocalLLaMA/comments/157ly7p/airoboros65bgpt412ggml_beats_gpt4_in_this_test/ | dewijones92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157ly7p | false | null | t3_157ly7p | /r/LocalLLaMA/comments/157ly7p/airoboros65bgpt412ggml_beats_gpt4_in_this_test/ | false | false | self | 1 | null |
LLaMa (Oobabooga), MPT-30b (KoboldCpp), and Stable Diffusion with Semantic Kernel | 1 | [removed] | 2023-07-23T18:22:35 | https://www.reddit.com/r/LocalLLaMA/comments/157m3ez/llama_oobabooga_mpt30b_koboldcpp_and_stable/ | awitod | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157m3ez | false | null | t3_157m3ez | /r/LocalLLaMA/comments/157m3ez/llama_oobabooga_mpt30b_koboldcpp_and_stable/ | false | false | 1 | null |
|
Can I improve the Llama-2 model for a specific language? | 1 | I am currently working on an enterprise application, which requires users to interact in a language other than English. I want to enhance the Llama-2 model's proficiency in this specific language. I have watched some YouTube videos where an individual improved the model's performance in German; however, this seemed to decrease its overall performance, despite the improvement in German.
I have access to some extensive datasets in the specific language in question. Is it possible to simply fine-tune the Llama-2 model using my datasets? Or can I further enhance its abilities in a certain language? Most of the fine-tuning tutorials I have come across focus on improving performance in specific areas or use cases, such as chat, instructions, storytelling, etc., but they do not specifically address language improvement.
If this is feasible, should I adjust my dataset structure to match a specific style, for instance, the Alpaca style? Or can I utilize my datasets as they currently are? Is it possible to input lengthy text documents directly?
My datasets comprise both structured and unstructured data.
Thank you in advance! If you need more information from me to provide comprehensive answers, please feel free to ask questions. | 2023-07-23T18:29:37 | https://www.reddit.com/r/LocalLLaMA/comments/157m9q9/can_i_improve_the_llama2_model_for_a_specific/ | GalacticGiraffeGuru | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157m9q9 | false | null | t3_157m9q9 | /r/LocalLLaMA/comments/157m9q9/can_i_improve_the_llama2_model_for_a_specific/ | false | false | self | 1 | null |
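On the dataset-structure question: instruction-style fine-tuning scripts usually expect something like the Alpaca layout (instruction / input / output), with all fields written in the target language. A rough sketch of producing such a file, where the field contents are placeholders:

import json

records = [
    {
        "instruction": "<task description in the target language>",
        "input": "<optional context in the target language>",
        "output": "<desired answer in the target language>",
    },
]

# JSONL is accepted by most fine-tuning tooling (Alpaca-LoRA, axolotl, etc.).
with open("train.jsonl", "w", encoding="utf-8") as f:
    for r in records:
        f.write(json.dumps(r, ensure_ascii=False) + "\n")

Long unstructured documents tend to be used for continued pre-training on raw text rather than forced into this instruction format.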
Run Llama 2 locally on GPU or CPU from anywhere (Linux/Windows/Mac) ➡️https://github.com/liltom-eth/llama2-webui | 1 | [removed] | 2023-07-23T19:05:46 | https://www.reddit.com/r/LocalLLaMA/comments/157n7fx/run_llama_2_locally_on_gpu_or_cpu_from_anywhere/ | liltom_eth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157n7fx | false | null | t3_157n7fx | /r/LocalLLaMA/comments/157n7fx/run_llama_2_locally_on_gpu_or_cpu_from_anywhere/ | false | false | self | 1 | null |
Run Llama 2 locally on GPU or CPU from anywhere (Linux/Windows/Mac) ➡️https://github.com/liltom-eth/llama2-webui | 1 | Running Llama 2 locally with gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac). Supporting Llama-2-7B/13B/70B with 8-bit, 4-bit. Supporting GPU inference (6 GB VRAM) and CPU inference. ➡️[https://github.com/liltom-eth/llama2-webui](https://github.com/liltom-eth/llama2-webui)
Successfully running #Llama2 on my Apple Silicon MacBook Air:
[demo](https://twitter.com/liltom_eth/status/1682791729207070720?s=20) | 2023-07-23T19:09:44 | https://www.reddit.com/r/LocalLLaMA/comments/157nb7q/run_llama_2_locally_on_gpu_or_cpu_from_anywhere/ | plain1994 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157nb7q | false | null | t3_157nb7q | /r/LocalLLaMA/comments/157nb7q/run_llama_2_locally_on_gpu_or_cpu_from_anywhere/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'O80KV6G4OSCdSHLEHBB5K3f2ylMPtkB3MqY6ZFCJtaw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dNlWnaFXzl2jPkbZGS7w2dFWJamZk-R6rO_7HFaodN0.jpg?width=108&crop=smart&auto=webp&s=84bc4bba8935b4da5ca3d7d159a150a1d1bd452f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dNlWnaFXzl2jPkbZGS7w2dFWJamZk-R6rO_7HFaodN0.jpg?width=216&crop=smart&auto=webp&s=94630cab0dff71859b10f7994d9b2013a6af60ac', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dNlWnaFXzl2jPkbZGS7w2dFWJamZk-R6rO_7HFaodN0.jpg?width=320&crop=smart&auto=webp&s=a2b85f203aba5e5c579f4cea913a34b22bd2aed8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dNlWnaFXzl2jPkbZGS7w2dFWJamZk-R6rO_7HFaodN0.jpg?width=640&crop=smart&auto=webp&s=6d84a9ce861a626ebff881b1f0f0e9d5442d2c46', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dNlWnaFXzl2jPkbZGS7w2dFWJamZk-R6rO_7HFaodN0.jpg?width=960&crop=smart&auto=webp&s=af405f4f0309fcc8eefd68af9c58c9845a3ed939', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dNlWnaFXzl2jPkbZGS7w2dFWJamZk-R6rO_7HFaodN0.jpg?width=1080&crop=smart&auto=webp&s=43e729bd47b18b9ac2bb0cc1e333c75bcd1eb71a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dNlWnaFXzl2jPkbZGS7w2dFWJamZk-R6rO_7HFaodN0.jpg?auto=webp&s=370616725015b64933fb29680b29a433f4e4d507', 'width': 1200}, 'variants': {}}]} |
Llama2.c Inference Llama 2 in one file of pure C from Andrej Karpathy | 1 | 2023-07-23T19:10:25 | https://twitter.com/karpathy/status/1683143097604243456?s=46&t=tzId7BojkoYRImYV_DLZ9w | FuguSandwich | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 157nbvk | false | {'oembed': {'author_name': 'Andrej Karpathy', 'author_url': 'https://twitter.com/karpathy', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">My fun weekend hack: llama2.c 🦙🤠<a href="https://t.co/CUoF0l07oX">https://t.co/CUoF0l07oX</a><br>Lets you train a baby Llama 2 model in PyTorch, then inference it with one 500-line file with no dependencies, in pure C. My pretrained model (on TinyStories) samples stories in fp32 at 18 tok/s on my MacBook Air M1 CPU. <a href="https://t.co/aBvKCf1t2u">pic.twitter.com/aBvKCf1t2u</a></p>— Andrej Karpathy (@karpathy) <a href="https://twitter.com/karpathy/status/1683143097604243456?ref_src=twsrc%5Etfw">July 23, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/karpathy/status/1683143097604243456', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_157nbvk | /r/LocalLLaMA/comments/157nbvk/llama2c_inference_llama_2_in_one_file_of_pure_c/ | false | false | 1 | {'enabled': False, 'images': [{'id': '9MrWHX0I0axia4v07LvpXaM_0zhws4kmOPpboFEZVlA', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/tEL1Xn21av_kCZ4mZgpX_Kkij1bpoJoFXJ99BkWz4uM.jpg?width=108&crop=smart&auto=webp&s=694e525c4b08e06e1cca9b0a01d36d6b781a2425', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/tEL1Xn21av_kCZ4mZgpX_Kkij1bpoJoFXJ99BkWz4uM.jpg?auto=webp&s=8962529b6e07c483a38e78258ffd23da7a5b207c', 'width': 140}, 'variants': {}}]} |
||
Fine tuning and deployment. | 1 | [removed] | 2023-07-23T19:17:55 | https://www.reddit.com/r/LocalLLaMA/comments/157nilr/fine_tuning_and_deployment/ | InterestingBasil | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157nilr | false | null | t3_157nilr | /r/LocalLLaMA/comments/157nilr/fine_tuning_and_deployment/ | false | false | self | 1 | null |
LlamaTale v0.51 "The Prancing Llama" (MUD + LLM) | 1 | Hi all.
I've kept working on integrating local LLM and MUD (more like interactive fiction for now) and rebranded my fork to something more suitable.
[https://github.com/neph1/LlamaTale/releases/tag/v0.51](https://github.com/neph1/LlamaTale/releases/tag/v0.51)
Those who saw the [first version](https://www.reddit.com/r/LocalLLaMA/comments/152w71n/mud_llm_for_a_stronger_roleplaying_experience/) know that I kind of replaced (almost) everything that was output to the player with some text generated by an LLM. There was no way of properly communicating with npc's, as everything they said could just as well be replaced by a descriptive text.
So, I've implemented a richer npc class, which takes both an appearance and personality description, added a new prompt for dialogues which incorporates a 'character card'. The result is much more vivid conversations.
Since the original demo wasn't well suited, I created my own: enter "The Prancing Llama", a place to meet some interesting characters. Scroll down for a teaser.
What's next?
* While dialogues can be interesting, they really don't lead anywhere. Nothing is fed back to the game. +1 for 'chat based RP'. I have one week left of vacation to figure out a way for them to drive the game. I think it will involve a third evaluator prompt. More inference time...
* I'd also like to explore LLM-based agents in a MUD. I'm sure many of you are aware of [Nvidia's Minecraft bot](https://interestingengineering.com/innovation/gpt-4-inside-minecraft-bot).
* I'd love to make it easier to test this out, either by putting it in the cloud somewhere, or host it myself and move the inference to the cloud. Maybe even my own GPU could handle the 5 people that would try it out. :) But I'm focusing the little time I have on adding features.
​
I must credit my llama-based assistant, Deedee, both for writing a python function for me, and coming up with characters for the place.
​
Here are some texts from the story. Not cherry picked, but I had to restart due to a crash. I'm playing with chronos-hermes-13b.ggml
This is the first 'room'. 'Original' is what I have written in the story, 'Generated' is coming from the AI:
​
>Original:\[ \[’\[Entrance\]’, ’A room full of furry and snow-covered
>
>coats. Loud voices and roars of laughter can be heard from the main
>
>hall.’, “There’s a warm glow and loud, noisy conversations coming
>
>through a doorway to the north “\] \] Generated: As I entered the
>
>building, I was greeted by a sea of fluffy and snow-dusted jackets,
>
>their colors and patterns as varied as the people wearing them. The
>
>air was alive with excited chatter and boisterous laughter, echoing
>
>off the walls like a symphony of joy. To my left, a door stood open,
>
>revealing a room ablaze in light and sound - voices mingling together
>
>in a cacophony of warmth and camaraderie.
​
And here's an excerpt from my conversation with one of the characters (where I, admittedly, play along with his character):
​
>You say: elid: ’what are you doing here?’.
>
>\>> Elid Gald says: “Elid Gald chuckles mischievously at Arto’s question
>
>and smirks, adjusting his hat before responding, “Why, I’m just
>
>enjoying a stroll through the market, my dear sir. And what about
>
>yourself? What brings you to these parts?” His voice is smooth and
>
>alluring with an undercurrent of danger hidden beneath it.”
>
>You say: elid: ’i’m looking for some goods to buy, my purse is laden
>
>with gold’.
>
>\>> Elid Gald says: “Elid Gald nods approvingly at Arto’s mention of
>
>wealth and grins wickedly, leaning in closer to whisper, “Well then,
>
>perhaps I can help you find something truly unique and valuable.
>
>Perhaps some items that are not available on the open market, eh? You
>
>see, I have connections with certain…merchants who might be willing
>
>to part with their finest wares for someone like you.” He pauses
>
>meaningfully, allowing the implication to linger in the air between
>
>them as his hand slips casually into his pocket.As he speaks, one
>
>could almost hear the clinking of coins in his pocket - but whether
>
>it’s real or just a clever illusion is anyone’s guess. “You know,” he
>
>says softly, “I’ve always been known for my eye for rare and exquisite
>
>goods. Why not let me help you acquire something truly special?
>
>After all, I’m sure you don’t want to be caught with common trinkets.”
>
>His voice is silky smooth and seductive as he leans even closer to
>
>Arto, a glint of mischief in his remaining eye.”Wouldn’t you
>
>rather have something that sets you apart from others? Something that
>
>screams ’wealthy aristocrat’?””
​
I hope you will find some enjoyment from this. Cheers!
​ | 2023-07-23T19:25:01 | https://www.reddit.com/r/LocalLLaMA/comments/157noz3/llamatale_v051_the_prancing_llama_mud_llm/ | neph1010 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157noz3 | false | null | t3_157noz3 | /r/LocalLLaMA/comments/157noz3/llamatale_v051_the_prancing_llama_mud_llm/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'D-EPL3y_yMj6xX_nKziPQSEfAewVqEv6iOAD-M_Rkzg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jx8ESLmPzhb_AGvKoYMtFWGsv6t2LMmGsDrB0zyoqGA.jpg?width=108&crop=smart&auto=webp&s=398480f37012113b6a8be672bcf921a03db07357', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jx8ESLmPzhb_AGvKoYMtFWGsv6t2LMmGsDrB0zyoqGA.jpg?width=216&crop=smart&auto=webp&s=9b28f393efc2761ec399723d2146ae1fc1ea3ac1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jx8ESLmPzhb_AGvKoYMtFWGsv6t2LMmGsDrB0zyoqGA.jpg?width=320&crop=smart&auto=webp&s=c0ccec8aec3bc3e9027f395ce72d54079c40c759', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jx8ESLmPzhb_AGvKoYMtFWGsv6t2LMmGsDrB0zyoqGA.jpg?width=640&crop=smart&auto=webp&s=be031c7a58ad341f864423933016fe59f03fb5fc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jx8ESLmPzhb_AGvKoYMtFWGsv6t2LMmGsDrB0zyoqGA.jpg?width=960&crop=smart&auto=webp&s=bfc9dd758c8c8eb74585b7a21e6c95e1758266dd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jx8ESLmPzhb_AGvKoYMtFWGsv6t2LMmGsDrB0zyoqGA.jpg?width=1080&crop=smart&auto=webp&s=cfe8f8c32a4088e4119380c2dceee91a701e9008', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jx8ESLmPzhb_AGvKoYMtFWGsv6t2LMmGsDrB0zyoqGA.jpg?auto=webp&s=2c897771d2344fe1cd3f2bc7c9f5af58d50728d6', 'width': 1200}, 'variants': {}}]} |
Can I save the llama 2 7B model to my google drive from colab? Looking for ways to not download it every single time. | 1 | TIA | 2023-07-23T19:30:14 | https://www.reddit.com/r/LocalLLaMA/comments/157ntpq/can_i_save_the_llama_2_7b_model_to_my_google/ | CraftAggressive1133 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157ntpq | false | null | t3_157ntpq | /r/LocalLLaMA/comments/157ntpq/can_i_save_the_llama_2_7b_model_to_my_google/ | false | false | self | 1 | null |
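One common approach is to mount Drive and keep the Hugging Face download there so later Colab sessions reuse it. A rough sketch; the paths and token are placeholders, and the gated meta-llama repo needs approved access:

from google.colab import drive
drive.mount("/content/drive")

from huggingface_hub import snapshot_download

# One-time download onto Drive; later sessions just point transformers at this folder.
snapshot_download(
    repo_id="meta-llama/Llama-2-7b-hf",
    local_dir="/content/drive/MyDrive/models/llama-2-7b-hf",
    local_dir_use_symlinks=False,  # real files, not symlinks into the ephemeral cache
    token="hf_...",                # placeholder access token for the gated repo
)

Afterwards, load with from_pretrained("/content/drive/MyDrive/models/llama-2-7b-hf") instead of the repo id.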
Is llama-2 4k context out of the box? | 1 | [removed] | 2023-07-23T19:41:03 | https://www.reddit.com/r/LocalLLaMA/comments/157o41b/is_llama2_4k_context_out_of_the_box/ | Necessary_Ad_9800 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157o41b | false | null | t3_157o41b | /r/LocalLLaMA/comments/157o41b/is_llama2_4k_context_out_of_the_box/ | false | false | self | 1 | null |
Small explanation question! Types of LLAMA | 1 | What does supercot and superhot mean? And what do the suffixes GPTQ, GGML, fp16, and cuda mean?
For context, I’m trying to use (for the first time) the exllama loader within webui, but it doesn’t seem to work for some models:) | 2023-07-23T19:45:03 | https://www.reddit.com/r/LocalLLaMA/comments/157o7uc/small_explanation_question_types_of_llama/ | Real_Experience_5676 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157o7uc | false | null | t3_157o7uc | /r/LocalLLaMA/comments/157o7uc/small_explanation_question_types_of_llama/ | false | false | self | 1 | null |
Multi GPU recommendations | 1 | I need a multi GPU recommendation.
What would be a good setup for the local Llama2:
I have:
10 x RTX 3060 12 GB
4 X RTX 3080 10 GB
8 X RTX 3070TI 8 GB
I know it would probably be better if I could sell those GPUs and buy 2 x RTX 3090, but I really want to keep them because selling is too much hassle.
If I could ask you guys for the best setup using this hardware for running local LLMs it would really help me.
Thanks | 2023-07-23T19:45:21 | https://www.reddit.com/r/LocalLLaMA/comments/157o87t/multi_gpu_recommendations/ | scout_with_beard | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157o87t | false | null | t3_157o87t | /r/LocalLLaMA/comments/157o87t/multi_gpu_recommendations/ | false | false | self | 1 | null |
Best GPT4-like | 1 | I would love to have something similar to GPT-4 or Bing’s implementation, but offline.
Specs: Macbook Pro 14 M1 Pro / 16Gb RAM | 2023-07-23T20:22:51 | https://www.reddit.com/r/LocalLLaMA/comments/157p77l/best_gpt4like/ | Neural_Phoenix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157p77l | false | null | t3_157p77l | /r/LocalLLaMA/comments/157p77l/best_gpt4like/ | false | false | self | 1 | null |
Quantization naming conventions? | 1 | I have been following the community for a while and have seen a lot of models being quantized. There seems to be some agreed naming convention, and I have trouble understanding where it came from and what it means. I am talking about names like q3_K_S and q4_K_M, etc. I understand that the q number stands for the number of bits, but what do the other letters stand for?
Does anyone have any good posts, papers, or articles that will help me understand? And are there other ways to understand how different variations of quantization affect perplexity, inference speed, and memory consumption? | 2023-07-23T22:19:26 | https://www.reddit.com/r/LocalLLaMA/comments/157s5m5/quantization_naming_conventions/ | iLaurens | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157s5m5 | false | null | t3_157s5m5 | /r/LocalLLaMA/comments/157s5m5/quantization_naming_conventions/ | false | false | self | 1 | null |
LlamaCPP can't handle same parameters then .bin using koboldcpp-henk-cuda.exe | 3 | Hey guys
New to LocalLlama (love it) !
I am currently trying to run this model within a .py file to return a response based on a new article.
# imports assumed for this snippet (LangChain + LlamaIndex APIs as of mid-2023)
from langchain.llms import LlamaCpp
from llama_index import LLMPredictor, PromptHelper, ServiceContext, SimpleDirectoryReader, GPTVectorStoreIndex

llm = LlamaCpp(
model_path='airoboros-7b-gpt4-1.4.ggmlv3.q5_K_M.bin',
verbose=True,
n_ctx=1024,
n_threads=18,
n_gpu_layers=18,
temperature=0.2
)
llm_predictor = LLMPredictor(llm=llm)
max_input_size = 1024
num_output = 120
chunk_overlap_ratio = 0.1
chunk_size_limit = 1024
# context_window (int): Context window for the LLM.
# num_output (int): Number of outputs for the LLM.
# chunk_overlap_ratio (float): Chunk overlap as a ratio of chunk size
# chunk_size_limit (Optional[int]): Maximum chunk size to use.
# tokenizer (Optional[Callable[[str], List]]): Tokenizer to use.
# separator (str): Separator for text splitter
# Create a PromptHelper Instance with relevant arguements
prompt_helper = PromptHelper(context_window=max_input_size, num_output=num_output, chunk_overlap_ratio=chunk_overlap_ratio, chunk_size_limit=chunk_size_limit)
# Create a ServiceContext Instance with the custom tokenizer and model
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper)
# Load the documents recursively
documents = SimpleDirectoryReader('./data', recursive=True, exclude_hidden=True).load_data()
index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)
# Query the index
query_engine = index.as_query_engine()
response = query_engine.query("From the articles, how should I speak?")
print(response)
When I run this batch file, it seems to work fine with koboldcpp-henk-cuda.exe; however, when I want to load some new data into the model, it seems to crash my computer (the screen goes dead).
Batch file using koboldcpp-henk-cuda.exe:
title airoboros-7b-gpt4-1.4.ggmlv3.q5_K_M
:start
koboldcpp-henk-cuda ^
--model airoboros-7b-gpt4-1.4.ggmlv3.q5_K_M.bin ^
--useclblast 0 0 ^
--gpulayers 18 ^
--threads 18 ^
--smartcontext ^
--usemirostat 2 0.1 0.1 ^
--stream ^
--launch
pause
goto start /min
Does anyone have an explanation? It seems to work when I set the threads and gpulayers to None, however it's incredibly slow.
Has anyone got ideas on where I can find a solution to this, or have you run into this issue in the past? | 2023-07-23T22:34:59 | https://www.reddit.com/r/LocalLLaMA/comments/157sjbd/llamacpp_cant_handle_same_parameters_then_bin/ | ItsGoodAs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157sjbd | false | null | t3_157sjbd | /r/LocalLLaMA/comments/157sjbd/llamacpp_cant_handle_same_parameters_then_bin/ | false | false | self | 3 | null |
Evolved codealpaca dataset released | 1 | Using LLMs to augment and create much more diverse instruction datasets has seen wide success with WizardLM. However, the 78k evolved code instructions dataset hasn't been released since, so I have taken the initiative to try to recreate the augmentation instructions myself.
Dataset: [https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1)
| 2023-07-23T22:54:34 | https://www.reddit.com/r/LocalLLaMA/comments/157szor/evolved_codealpaca_dataset_released/ | gradientpenalty | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157szor | false | null | t3_157szor | /r/LocalLLaMA/comments/157szor/evolved_codealpaca_dataset_released/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'EHJVsVNsqX9Bm0qOGlZwBycuvVTDhwuy6pGKtz-Z0yU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/LJZrRrGiBqX8ZK24bjVhFfH8M8wLVraKTRXtXipEzMQ.jpg?width=108&crop=smart&auto=webp&s=12d0b2d22e87469e78a1d662bd839d29efcba0d2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/LJZrRrGiBqX8ZK24bjVhFfH8M8wLVraKTRXtXipEzMQ.jpg?width=216&crop=smart&auto=webp&s=b4c92c56ef2126ba32cd065b196c7ec6329bfa40', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/LJZrRrGiBqX8ZK24bjVhFfH8M8wLVraKTRXtXipEzMQ.jpg?width=320&crop=smart&auto=webp&s=fd06ad54c97fa223301db1d067b076f1ea1038d4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/LJZrRrGiBqX8ZK24bjVhFfH8M8wLVraKTRXtXipEzMQ.jpg?width=640&crop=smart&auto=webp&s=7f87ada64d04b63f55ceb9eb9ea7769347de3178', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/LJZrRrGiBqX8ZK24bjVhFfH8M8wLVraKTRXtXipEzMQ.jpg?width=960&crop=smart&auto=webp&s=47018d3f17e3354cbd75ffcbc036a5e4c2f635d7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/LJZrRrGiBqX8ZK24bjVhFfH8M8wLVraKTRXtXipEzMQ.jpg?width=1080&crop=smart&auto=webp&s=4f03a02eab98a494f9da188c951e1b5971e014dc', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/LJZrRrGiBqX8ZK24bjVhFfH8M8wLVraKTRXtXipEzMQ.jpg?auto=webp&s=78d8e666cd7a2be05d53c30e0b8c34a0529234ae', 'width': 1200}, 'variants': {}}]} |
Python bindings for GGML | 1 | I think this repo https://github.com/marella/ctransformers deserves more stars. I find the API more comprehensive than GPT4ALL, and the repo itself is less bloated. | 2023-07-24T00:12:52 | https://www.reddit.com/r/LocalLLaMA/comments/157us74/python_bindings_for_ggml/ | Acrobatic-Site2065 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157us74 | false | null | t3_157us74 | /r/LocalLLaMA/comments/157us74/python_bindings_for_ggml/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'aeNxffzXL-WImYS9Z0F9OkH-rW7iCxT-9Bs-XqcUazM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gyNFXDsvN4s2hIhRV9fCAZxNxMZFSfEgKSmDiy2YD_o.jpg?width=108&crop=smart&auto=webp&s=b252c7899459b541873b9738337ec7046ec54e6b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gyNFXDsvN4s2hIhRV9fCAZxNxMZFSfEgKSmDiy2YD_o.jpg?width=216&crop=smart&auto=webp&s=ab19a553e252fe21c511245f2bcbc23eff262241', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gyNFXDsvN4s2hIhRV9fCAZxNxMZFSfEgKSmDiy2YD_o.jpg?width=320&crop=smart&auto=webp&s=91a35b2c8610c18a99df081d4fcd307f074d306d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gyNFXDsvN4s2hIhRV9fCAZxNxMZFSfEgKSmDiy2YD_o.jpg?width=640&crop=smart&auto=webp&s=3b69afbab90fa0e8f73cd91cdbaa933462dc5e6d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gyNFXDsvN4s2hIhRV9fCAZxNxMZFSfEgKSmDiy2YD_o.jpg?width=960&crop=smart&auto=webp&s=435bc7b0c4a8077ce9a330f9a632e2a881e95cda', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gyNFXDsvN4s2hIhRV9fCAZxNxMZFSfEgKSmDiy2YD_o.jpg?width=1080&crop=smart&auto=webp&s=93cbbcca76114eb95e6c773683d7fd2bef342b6c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gyNFXDsvN4s2hIhRV9fCAZxNxMZFSfEgKSmDiy2YD_o.jpg?auto=webp&s=ccba358ce64c31416b75f369ef7141f7e681e80a', 'width': 1200}, 'variants': {}}]} |
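For reference, usage is quite compact; a rough sketch following the README-style API, with a placeholder GGML model path:

from ctransformers import AutoModelForCausalLM

# model_type tells ctransformers which GGML architecture the file uses.
llm = AutoModelForCausalLM.from_pretrained(
    "models/llama-2-7b-chat.ggmlv3.q4_K_M.bin", model_type="llama"
)
print(llm("Q: Name the planets in the solar system. A:", max_new_tokens=64))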
Unleashing the Power of Language Learning Models with Llama2 Code Interpreter! | 1 | Hello Reddit community! 🚀
We're excited to introduce our latest project, [**Llama2 Code Interpreter**](https://github.com/SeungyounShin/Llama2-Code-Interpreter). This innovative project aims to push the boundaries of Language Learning Models (LLMs) by empowering them to generate, execute, debug code, and answer relevant queries.
With Llama2, we're making coding a part of the LLM's repertoire. We believe this approach can vastly extend the capabilities of LLMs in a practical and efficient way.
Our ultimate vision aligns with the web version of [**Voyager**](https://github.com/MineDojo/Voyager). We aim to create a system that records and reuses coding processes to enhance the model's overall performance and allow for more complex tasks.
Check out our project on [**GitHub**](https://github.com/SeungyounShin/Llama2-Code-Interpreter) and let us know what you think! We value your feedback. | 2023-07-24T01:09:06 | https://www.reddit.com/r/LocalLLaMA/comments/157vzq6/unleashing_the_power_of_language_learning_models/ | mrjohndoee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157vzq6 | false | null | t3_157vzq6 | /r/LocalLLaMA/comments/157vzq6/unleashing_the_power_of_language_learning_models/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'q0YPveytzJE4sSE3UGYRJqkjuA4dsVB7Z-0GxV-jHPk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1mh5sVveFJvrvR8Sdml696vhjuZUa7rEWQRY8uzahl8.jpg?width=108&crop=smart&auto=webp&s=a6339f16762cca70858a807fb2f5f7b2028488a3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1mh5sVveFJvrvR8Sdml696vhjuZUa7rEWQRY8uzahl8.jpg?width=216&crop=smart&auto=webp&s=672aed51faf88f29f6210e2cae9d5a4c30745857', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1mh5sVveFJvrvR8Sdml696vhjuZUa7rEWQRY8uzahl8.jpg?width=320&crop=smart&auto=webp&s=294570f7c33104eb19d079359dab0adc18a953d1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1mh5sVveFJvrvR8Sdml696vhjuZUa7rEWQRY8uzahl8.jpg?width=640&crop=smart&auto=webp&s=e10cdb04cf999fb900056f138fc30c060d0c34c5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1mh5sVveFJvrvR8Sdml696vhjuZUa7rEWQRY8uzahl8.jpg?width=960&crop=smart&auto=webp&s=df4601173460bc9bdfa5ca003a28f2e70c2a313d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1mh5sVveFJvrvR8Sdml696vhjuZUa7rEWQRY8uzahl8.jpg?width=1080&crop=smart&auto=webp&s=a8dcad1e6e2e52cae58c995ff3696dc39f8896ea', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1mh5sVveFJvrvR8Sdml696vhjuZUa7rEWQRY8uzahl8.jpg?auto=webp&s=e7d455d7da4d46a2ba5649d681ecd501e0f5c80e', 'width': 1200}, 'variants': {}}]} |
Finetuning 4 bit LLMs | 1 | Hi, I want to finetune an LLM, and I have been using mostly 4-bit models for inference, like this one for example: [https://huggingface.co/TheBloke/Llama-2-13B-chat-GPTQ](https://huggingface.co/TheBloke/Llama-2-13B-chat-GPTQ)
and I wanted to understand how I can fine tune this. From what I have found, one can finetune a 4bit base model such as the above, using this repo: [https://github.com/johnsmith0031/alpaca\_lora\_4bit/tree/winglian-setup\_pip](https://github.com/johnsmith0031/alpaca_lora_4bit/tree/winglian-setup_pip)
But if I am understanding this correctly, according to this article from HF: [https://huggingface.co/blog/4bit-transformers-bitsandbytes](https://huggingface.co/blog/4bit-transformers-bitsandbytes) one cannot directly train a 4 bit model and can instead build a lora around it:
>However, you can train these models by leveraging parameter efficient fine tuning methods (PEFT) and train for example adapters on top of them.
And we can do this by using bitsandbytes.
Furthermore, another way is to take an fp16 (or int8?) model, finetune it, and then quantize it to 4-bit.
Am I missing something here? And what is the preferred method?
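For reference, the route the linked HF article describes is QLoRA: load the original fp16 repo in 4-bit via bitsandbytes and train LoRA adapters on top, while training directly against a GPTQ export needs the alpaca_lora_4bit-style tooling. A rough sketch of the bitsandbytes/PEFT path, with placeholder hyperparameters:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Note: this points at the fp16 base repo, not the pre-quantized GPTQ export.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable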
​ | 2023-07-24T02:05:38 | https://www.reddit.com/r/LocalLLaMA/comments/157x758/finetuning_4_bit_llms/ | klop2031 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157x758 | false | null | t3_157x758 | /r/LocalLLaMA/comments/157x758/finetuning_4_bit_llms/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'rkljmc1ht4wmaJGm60zJkcYTjcDJ3O6Ma8y8dDYwHYI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vBvc7FHJE65n0rhjEdf2N4Vhy4vMT0IFNxHSqGQObp8.jpg?width=108&crop=smart&auto=webp&s=1a72ffb8bd5cc3be930da7d37436aeb9871f39b6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vBvc7FHJE65n0rhjEdf2N4Vhy4vMT0IFNxHSqGQObp8.jpg?width=216&crop=smart&auto=webp&s=4834bb0a4a1a1893261f169f663dc5f8caf3aa24', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vBvc7FHJE65n0rhjEdf2N4Vhy4vMT0IFNxHSqGQObp8.jpg?width=320&crop=smart&auto=webp&s=4d244a1a632999ce7d2c4f476dffd40a57edcd59', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vBvc7FHJE65n0rhjEdf2N4Vhy4vMT0IFNxHSqGQObp8.jpg?width=640&crop=smart&auto=webp&s=68e3dbfa5c777b7d5c5c7d805373cb4aa0562544', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vBvc7FHJE65n0rhjEdf2N4Vhy4vMT0IFNxHSqGQObp8.jpg?width=960&crop=smart&auto=webp&s=57a63c907125ad8e9d155efefda403fc3c21b9f1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vBvc7FHJE65n0rhjEdf2N4Vhy4vMT0IFNxHSqGQObp8.jpg?width=1080&crop=smart&auto=webp&s=a9394eb2b2002d2e590a60251c3b3fcdbe7afc8a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vBvc7FHJE65n0rhjEdf2N4Vhy4vMT0IFNxHSqGQObp8.jpg?auto=webp&s=a07479d4b0463cc023a75d5210c525e3dac0d4b2', 'width': 1200}, 'variants': {}}]} |
Arabic prompt LLM models request | 1 | I'm looking for a reasonably sized LLM that understands RTL languages, especially Arabic. It should accept questions directly in Arabic and be able to read a locally stored folder full of files in Arabic.
I found the BLOOM model, but it's a giant model; even with TheBloke's fine-tuned releases I could not host it locally, as it is not compatible with the gpt4all interface.
Please help by pointing me to Arabic-capable LLM models | 2023-07-24T02:22:55 | https://www.reddit.com/r/LocalLLaMA/comments/157xkhr/arabic_prompt_llm_models_request/ | qwani | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157xkhr | false | null | t3_157xkhr | /r/LocalLLaMA/comments/157xkhr/arabic_prompt_llm_models_request/ | false | false | self | 1 | null |
Suggestions for hosting solutions for custom pytorch code? | 1 | Say I have a python script that loads LLaMA, but doesn't just provide vanilla LLaMA inference.
For example, I have an endpoint that returns the probability for the last token in a sequence.
Or maybe I want to run an inference server that accepts Guidance templates.
As far as I can tell, this necessitates renting GPU on something like vast.ai or tensordock.com | 2023-07-24T02:25:00 | https://www.reddit.com/r/LocalLLaMA/comments/157xlxe/suggestions_for_hosting_solutions_for_custom/ | phree_radical | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157xlxe | false | null | t3_157xlxe | /r/LocalLLaMA/comments/157xlxe/suggestions_for_hosting_solutions_for_custom/ | false | false | self | 1 | null |
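For what it's worth, the custom logic itself stays small wherever the GPU is rented; a rough sketch of a last-token-probability endpoint using FastAPI and transformers, with the model name and route as placeholders:

import torch
import torch.nn.functional as F
from fastapi import FastAPI
from transformers import AutoModelForCausalLM, AutoTokenizer

app = FastAPI()
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16, device_map="auto"
)

@app.post("/last_token_prob")
def last_token_prob(text: str):
    ids = tok(text, return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        logits = model(ids).logits  # [1, seq_len, vocab_size]
    # Distribution predicted at the second-to-last position, i.e. the model's
    # probability for what the last token turned out to be.
    probs = F.softmax(logits[0, -2], dim=-1)
    return {"prob": probs[ids[0, -1]].item()}

Any box that can run this (vast.ai, RunPod, your own machine) behaves the same; the rental choice mostly comes down to where the weights fit.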
[Question] When should you use prefix tuning over Lora? | 1 | I saw prefix tuning being mentioned in some articles. It seems its advantage is that you only fine tune a small set of parameters so it is faster and you need less data. That sounds pretty similar to Lora but I rarely see it being used to fine tune LLMs or stable diffusion models. Can someone please tell me when I should use it? Thanks. | 2023-07-24T03:38:05 | https://www.reddit.com/r/LocalLLaMA/comments/157z4md/question_when_should_you_use_prefix_tuning_over/ | saintshing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157z4md | false | null | t3_157z4md | /r/LocalLLaMA/comments/157z4md/question_when_should_you_use_prefix_tuning_over/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'x-lOWwsmjxuBfNio1fyp91w9Da3lZ_V4nTdjUqOqeaQ', 'resolutions': [{'height': 82, 'url': 'https://external-preview.redd.it/4Fud8sYOjeepmPdnc0MWpyl1JTDx4AN5sAt13lr5ARs.jpg?width=108&crop=smart&auto=webp&s=95cdf55b7c2a78fabc2896b384f1cafedf50396a', 'width': 108}, {'height': 164, 'url': 'https://external-preview.redd.it/4Fud8sYOjeepmPdnc0MWpyl1JTDx4AN5sAt13lr5ARs.jpg?width=216&crop=smart&auto=webp&s=e12af0c8de16246048ca23c00bdc6ffc8a3a960c', 'width': 216}, {'height': 243, 'url': 'https://external-preview.redd.it/4Fud8sYOjeepmPdnc0MWpyl1JTDx4AN5sAt13lr5ARs.jpg?width=320&crop=smart&auto=webp&s=7e85b86cd34eaa824cb3c6743d22561d10caf767', 'width': 320}], 'source': {'height': 342, 'url': 'https://external-preview.redd.it/4Fud8sYOjeepmPdnc0MWpyl1JTDx4AN5sAt13lr5ARs.jpg?auto=webp&s=bd82dd7e56f9692ae97826ea51da05f3bd8b11d2', 'width': 450}, 'variants': {}}]} |
Any tricks to "convince" LLama 2 to skip the polite introduction? | 1 | I'm playing around with the 7b/13b chat models.
I'm trying to write a system prompt so that I can get some "sanitized" output from the model. e.g.
Respond with a response in the format requested by the user. Do not include any other text or reasoning. ONLY include the response in the requested format.
So a given user input, let's say it's
A short story about a fish
Should ideally end up with the model spitting out a short story about a fish.
However, the actual output is always formatted more like the following:
Sure! Here is a short story about a fish in the format requested:
The fish darted through the aquarium, its scales glimmering in the bright lights. [...]
Ideally, I don't want to resort to trying to trim the "Polite introduction" from the output, as it's always formatted slightly differently.
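To make the setup concrete, the raw prompt I'm talking about looks roughly like this, assuming the stock Llama-2 chat template (the commented-out line at the end is just an untested idea, not something I've verified):

```
system = ("Respond with a response in the format requested by the user. "
          "Do not include any other text or reasoning. "
          "ONLY include the response in the requested format.")
user = "A short story about a fish"

# Standard Llama-2 chat template: system prompt wrapped in <<SYS>>, user turn in [INST].
prompt = f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

# Untested idea: pre-fill the first words of the desired output after [/INST],
# so the model continues the story instead of opening with "Sure! Here is...".
# prompt += " The fish"
```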
Does anyone know any tricks or strategies I could use to convince the model to output the important content only? | 2023-07-24T04:35:58 | https://www.reddit.com/r/LocalLLaMA/comments/158097f/any_tricks_to_convince_llama_2_to_skip_the_polite/ | sqiddster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158097f | false | null | t3_158097f | /r/LocalLLaMA/comments/158097f/any_tricks_to_convince_llama_2_to_skip_the_polite/ | false | false | self | 1 | null |
Is it possible to run multiple models simultaneously yet? If so what kind of hardware would I need to be able to pull it off? | 1 | From what I heard gpt 4 is a multiple guys in a trenchcoat situation, with one large model that calls upon pre trained smaller models when it needs them. I was wondering if it was possible to do something like that locally yet or if that's still pretty far off. I was also curious about what the cost to create a server capable of that. | 2023-07-24T05:26:25 | https://www.reddit.com/r/LocalLLaMA/comments/15817ia/is_it_possible_to_run_multiple_models/ | sephy009 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15817ia | false | null | t3_15817ia | /r/LocalLLaMA/comments/15817ia/is_it_possible_to_run_multiple_models/ | false | false | self | 1 | null |
[Request] A tracker for all 'useful' llama applications updated every week | 1 | I'm not sure how useful this would be for others. But like most, I'm a subscriber of this subreddit, and every day I can see a lot of amazing work. Just in the past week, I've seen what llama can do for things such as analysing your private text to create your 'personal GPT' (albeit not as effective as ChatGPT of course), then just recently today, baby llama from Andrej Karpathy, then others.
Some references below:
https://www.reddit.com/r/LocalLLaMA/comments/154to1w/i_trained_the_65b_model_on_my_texts_so_i_can_talk/
https://www.reddit.com/r/LocalLLaMA/comments/154rqay/llama2_7b_uncensored_qlora_finetune_on_wizard/
https://github.com/bublint/ue5-llama-lora
https://www.reddit.com/r/LocalLLaMA/comments/157vzq6/unleashing_the_power_of_language_learning_models/
If it would be helpful to the community, this is a request (if someone has the time) to compile a "top application of the week" list covering the programs built with Llama (similar to this one: https://old.reddit.com/r/LocalLLaMA/wiki/models --> but more specific to the applications we can build, considering the model, the code, and the instructions are all there). I know it sounds lazy to ask and begs the question "why not do it on your own?"; maybe it's because of limited time. Not sure if this idea is useful for this community r/LocalLLaMA
Thanks | 2023-07-24T05:27:03 | https://www.reddit.com/r/LocalLLaMA/comments/15817x3/request_a_tracker_for_all_useful_llama/ | vlodia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15817x3 | false | null | t3_15817x3 | /r/LocalLLaMA/comments/15817x3/request_a_tracker_for_all_useful_llama/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'P15jNVogmR5WSjEB_kwDFJzQ8-ncag_kTZLd6_yKxCA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Yyqv7oNGbRPyp69RfozMtix04o2lwg54x8_Vxo6O6RQ.jpg?width=108&crop=smart&auto=webp&s=515e7187952514863bd80558bbc201f6112cd133', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Yyqv7oNGbRPyp69RfozMtix04o2lwg54x8_Vxo6O6RQ.jpg?width=216&crop=smart&auto=webp&s=adb353f534838c27a6bc431756f1eea1bb6e1e5f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Yyqv7oNGbRPyp69RfozMtix04o2lwg54x8_Vxo6O6RQ.jpg?width=320&crop=smart&auto=webp&s=6eb461feac382999a65748cb9aacbd76f485ebce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Yyqv7oNGbRPyp69RfozMtix04o2lwg54x8_Vxo6O6RQ.jpg?width=640&crop=smart&auto=webp&s=06f2f29104ee97c6c861753c71ea6203c9f62047', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Yyqv7oNGbRPyp69RfozMtix04o2lwg54x8_Vxo6O6RQ.jpg?width=960&crop=smart&auto=webp&s=15d0956e0a01cebc6b1d3d99dc0be65340b93789', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Yyqv7oNGbRPyp69RfozMtix04o2lwg54x8_Vxo6O6RQ.jpg?width=1080&crop=smart&auto=webp&s=bc0ffdf6a8e24c23faf8e843b0390f0163d416e5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Yyqv7oNGbRPyp69RfozMtix04o2lwg54x8_Vxo6O6RQ.jpg?auto=webp&s=1ca9b0a858653b4b6dd867d4f7e12003fac31f37', 'width': 1200}, 'variants': {}}]} |
How much RAM is needed for llama-2 70b + 32k context? | 1 | Hello, I'd like to know if 48, 56, 64, or 92 gb is needed for a cpu setup.
Supposedly, with exllama, 48gb is all you'd need for 16k. It's possible ggml may need more.
Along with this I have other questions, and feel free to read some testing that I was doing.
I was testing llama-2 70b (q3_K_S) at 32k context, with the following arguments: `-c 32384 --rope-freq-base 80000 --rope-freq-scale 0.5`
these seem to be settings for 16k. Since llama 2 has double the context, and runs normally without rope hacks, I kept the 16k setting.
In the end, it gave some summary in a bullet point as asked, but broke off and many of the words were slang, like it was drunk.
I asked for a summarization of the entire LoRA paper which took ~30000 tokens and a few hours. (non-cublas build)
Beforehand, I got 1.4 t/s with those arguments, and without the prompt. After processing the Lora paper, it was 0.15 t/s and probably using my 2gb swap memory.
Q: Does the cublas build require more RAM? A cublas build (to speedup prompt processing) loading the model at 512 ctx would decrease my q3_K_S 1.4 t/s to 0.2t/s, so it seems to be using swap. I didn't offload layers to the gpu.
Q: Can the cublas build use the gpu as extra ram? If it didn't provide any speed increase, I would still be ok with this, I have a 24gb 3090 and 24vram+32ram = 56
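For reference, the kind of invocation I have in mind if offloading does help with memory (flag names as in current llama.cpp builds; the layer count and filename are guesses): `./main -m llama-2-70b.ggmlv3.q3_K_S.bin -c 32384 --rope-freq-base 80000 --rope-freq-scale 0.5 -ngl 40 -t 8`. My possibly-wrong understanding is that offloaded layers sit in VRAM instead of system RAM, which is really what I'm after here.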
Also, wanted to know the Minimum CPU needed: CPU tests show 10.5t/s on my desktop AMD cpu with 7b q4_K_M, so I assume 70b will be at least 1t/s, assuming this - as the model is ten times larger. I know that RAM bandwidth will cap tokens/s, but I assume this is a good test to see. A cpu at 4.5t/s for example, will probably not run 70b at 1t/s | 2023-07-24T06:18:19 | https://www.reddit.com/r/LocalLLaMA/comments/15825bt/how_much_ram_is_needed_for_llama2_70b_32k_context/ | Aaaaaaaaaeeeee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15825bt | false | null | t3_15825bt | /r/LocalLLaMA/comments/15825bt/how_much_ram_is_needed_for_llama2_70b_32k_context/ | false | false | self | 1 | null |
How to increase loading speed of tokenizer? | 1 | Hi people, I was wondering how I can speed up my tokenizer loading speed. Currently it takes about 3 minutes to load the tokenizer for lmsys/vicuna-7b-v1.3. Thank you!
The code in question:
`model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path = "lmsys/vicuna-7b-v1.3", device_map = "balanced",torch_dtype = torch.bfloat16, use_cache = True)`
`tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-7b-v1.3", use_fast = True)` | 2023-07-24T06:54:18 | https://www.reddit.com/r/LocalLLaMA/comments/1582sm3/how_to_increase_loading_speed_of_tokenizer/ | ToeAdministrative493 | self.LocalLLaMA | 2023-07-24T07:07:13 | 0 | {} | 1582sm3 | false | null | t3_1582sm3 | /r/LocalLLaMA/comments/1582sm3/how_to_increase_loading_speed_of_tokenizer/ | false | false | self | 1 | null |
Any particular models that work with 4gb of vram? | 2 | Or am I completely out of luck? | 2023-07-24T07:03:33 | https://www.reddit.com/r/LocalLLaMA/comments/1582z21/any_particular_models_that_work_with_4gb_of_vram/ | brucewillisoffical | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1582z21 | false | null | t3_1582z21 | /r/LocalLLaMA/comments/1582z21/any_particular_models_that_work_with_4gb_of_vram/ | false | false | self | 2 | null |
RLHF potentials | 1 | Some thoughts on rlhf’s future…
Specifically I would like to know if somebody has done ablation on whether a rlhf-ed model, trained on universal QA datasets like openassistent, has significant performance boost on advanced reasoning tasks such as coding. | 2023-07-24T07:13:52 | https://shermwong.com/2023/07/23/llm-studies-part-3-rlhf/ | wsmhy2011 | shermwong.com | 1970-01-01T00:00:00 | 0 | {} | 15835qs | false | null | t3_15835qs | /r/LocalLLaMA/comments/15835qs/rlhf_potentials/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'tNUeMAIpmsNt8D13J4h0yY1TIvemrXYL7MZ2wkxrGIU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/zNvh1hBUY_3E2mzuLnD8oDJeKgDXJI3-wnN08MzSlQ8.jpg?width=108&crop=smart&auto=webp&s=1ee0cfce01bd4b346c320af3222079b166da9afb', 'width': 108}, {'height': 217, 'url': 'https://external-preview.redd.it/zNvh1hBUY_3E2mzuLnD8oDJeKgDXJI3-wnN08MzSlQ8.jpg?width=216&crop=smart&auto=webp&s=aff33e0531858868682205623971f813a9785d2e', 'width': 216}, {'height': 322, 'url': 'https://external-preview.redd.it/zNvh1hBUY_3E2mzuLnD8oDJeKgDXJI3-wnN08MzSlQ8.jpg?width=320&crop=smart&auto=webp&s=4f8d92806e302f9d93f43673633941c4682c4dcd', 'width': 320}, {'height': 645, 'url': 'https://external-preview.redd.it/zNvh1hBUY_3E2mzuLnD8oDJeKgDXJI3-wnN08MzSlQ8.jpg?width=640&crop=smart&auto=webp&s=6df1687124eac55ec30621c890b5c4863ecc12be', 'width': 640}, {'height': 968, 'url': 'https://external-preview.redd.it/zNvh1hBUY_3E2mzuLnD8oDJeKgDXJI3-wnN08MzSlQ8.jpg?width=960&crop=smart&auto=webp&s=c25fb67d46fcfb571b7be18c57761afa69e72ad7', 'width': 960}, {'height': 1089, 'url': 'https://external-preview.redd.it/zNvh1hBUY_3E2mzuLnD8oDJeKgDXJI3-wnN08MzSlQ8.jpg?width=1080&crop=smart&auto=webp&s=535bac9be13001b1ae2d105eddc4b2931635d140', 'width': 1080}], 'source': {'height': 1600, 'url': 'https://external-preview.redd.it/zNvh1hBUY_3E2mzuLnD8oDJeKgDXJI3-wnN08MzSlQ8.jpg?auto=webp&s=1aae4de8b150410bdbad97ef2b9456bcbfe14c07', 'width': 1586}, 'variants': {}}]} |
GGML guys, How's the matter with fine-tuning? | 1 | Not that up-to-date with current developments in the field of fine-tuning. People who know what they're doing, is there a way to directly fine-tune the GGML model? Have been using multiple models in the past, but they all really lack usability in a thing I'm interested in so the only option is to DIY I guess. | 2023-07-24T07:35:57 | https://www.reddit.com/r/LocalLLaMA/comments/1583k84/ggml_guys_hows_the_matter_with_finetuning/ | femboy_deer_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1583k84 | false | null | t3_1583k84 | /r/LocalLLaMA/comments/1583k84/ggml_guys_hows_the_matter_with_finetuning/ | false | false | default | 1 | null |
How to speed up tokenizer loading speed for lmsys/vicuna-13b-v1.3? (Takes 3 min) | 1 | Hi people, I was wondering how I can speed up my tokenizer loading speed. Currently it takes about 3 minutes to load the tokenizer for lmsys/vicuna-13b-v1.3. I don't really have this issue with TheBloke/Wizard-Vicuna-13B-Uncensored-HF. Thank you!
The code in question:
`model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path = "lmsys/vicuna-13b-v1.3", device_map = "balanced",torch_dtype = torch.bfloat16, use_cache = True)`
`tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-13b-v1.3", use_fast = True)` | 2023-07-24T07:50:36 | https://www.reddit.com/r/LocalLLaMA/comments/1583tsi/how_to_speed_up_tokenizer_loading_speed_for/ | ToeAdministrative493 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1583tsi | false | null | t3_1583tsi | /r/LocalLLaMA/comments/1583tsi/how_to_speed_up_tokenizer_loading_speed_for/ | false | false | self | 1 | null |
The Dark Side of AI Censorship | 1 | Companies are concerned about the so-called safety of AI and are putting a lot of effort into making AI "safe" to use. Meta puts a special emphasis on the safety of the new LLaMA2 model; for this reason, the release of the 33B version of the model was delayed. But, as you know, according to the law of dialectics, everything has a second side, and "security" has a dark side that no one talks about.
For example, if you ask almost any model how to break into a car, you will logically receive an answer in the style of "It's illegal, I can't help you with this." But let's say the situation is critical: your own car has slammed shut, the keys are inside, and your child is trapped there, unable to open the door, with something threatening his life. How will the AI react? The answer will be something like "I'm sorry that you are in this situation, but I cannot give advice on how to open your car. Try to keep your keys in a safe place in the future...". No matter how much you argue that a person's health and life depend on it, the AI will answer that it regrets the situation but will not give advice, because it's illegal.
You can imagine another hypothetical situation where you are forced to take an illegal action to save your own or someone else's life and the AI will refuse to help you.
In my opinion, the developers are very one-sided in assessing what is acceptable and safe, and the current benchmarks for evaluating "safety" are not complete. | 2023-07-24T08:20:14 | https://www.reddit.com/r/LocalLLaMA/comments/1584cpl/the_dark_side_of_ai_censorship/ | coyotewld | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1584cpl | false | null | t3_1584cpl | /r/LocalLLaMA/comments/1584cpl/the_dark_side_of_ai_censorship/ | false | false | self | 1 | null |
LoRA vs QLoRA performance | 1 | Is there any significant performance degradation when using QLoRA fine-tuning method over LoRA? I use QLoRA and this yields pretty good results. I'm trying to figure out if it's worth it to try Lora. | 2023-07-24T08:47:26 | https://www.reddit.com/r/LocalLLaMA/comments/1584uky/lora_vs_qlora_performance/ | generalfsb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1584uky | false | null | t3_1584uky | /r/LocalLLaMA/comments/1584uky/lora_vs_qlora_performance/ | false | false | self | 1 | null |
Kobold.cpp - What are your numbers between CLBlast and CUBlas? (VRAM usage & tokens/s) | 1 | Decided to do some quick informal testing to see whether CLBlast or CUBlas would work better on my machine.
I did my testing on a Ryzen 7 5800H laptop, with 32gb ddr4 ram, and an RTX 3070 laptop gpu (105w I think, 8gb vram), off of a 1tb WD SN730 nvme drive.
I used Kobold.cpp 1.36 (on Windows 11), which is the latest version as of writing, with the following command:
>koboldcpp.exe --usecublas/--useclblast 0 0 --gpulayers %layers% --stream --smartcontext --model nous-hermes-llama2-13b.ggmlv3.q5_K_M.bin
And of course, as you can probably tell from the command, I'm using the nous-hermes-llama2-13b q5_K_M model. The prompt I used was the same every time: "Write me a 20 word poem about fire"
**Here are my results.** *Conclusion/tl;dr at the bottom.*
24 layer clblas, 7gb vram
Processing Prompt [BLAS] (50 / 50 tokens)
Generating (48 / 80 tokens)
Time Taken - Processing:3.2s (63ms/T), Generation:10.0s (208ms/T), Total:13.1s (3.7T/s)
24 layer cublas, 7.4gb vram
Processing Prompt [BLAS] (50 / 50 tokens)
Generating (46 / 80 tokens)
Time Taken - Processing:2.9s (58ms/T), Generation:8.4s (182ms/T), Total:11.3s (4.1T/s)
28 layer clblast, 7.6gb vram
Processing Prompt [BLAS] (50 / 50 tokens)
Generating (49 / 80 tokens)
Time Taken - Processing:4.6s (93ms/T), Generation:9.6s (197ms/T), Total:14.3s (3.4T/s)
26 layer cublas, 7.7gb vram\*
Processing Prompt (1 / 1 tokens)
Generating (45 / 80 tokens)
Time Taken - Processing:0.4s (397ms/T), Generation:7.6s (169ms/T), Total:8.0s (5.6T/s)
25 layer cublas, 7.6gb vram
Processing Prompt [BLAS] (50 / 50 tokens)
Generating (49 / 80 tokens)
Time Taken - Processing:3.2s (65ms/T), Generation:8.5s (174ms/T), Total:11.8s (4.2T/s)
*\*26 layer cublas was kind of slow on my first try, and took 2 tokens/s. Resetting and trying again gave me a better result, but a follow up prompt gave me only 0.7 tokens/s. 26 layers likely uses too much vram here.*
This model has 41 layers according to clblast, and 43 according to cublas, however cublas seems to take up more vram. I could only fit 28 while using clblast, and 25 while using cublas. Anything more had issues. From what I'm able to tell, at the same, or even slightly less vram usage cublas is still a bit faster than clblast.
What numbers are you guys getting between CLBlast and CuBLAS on kobold.cpp?
**Links**
Kobold.cpp - [https://github.com/LostRuins/koboldcpp/releases](https://github.com/LostRuins/koboldcpp/releases)
Nous Hermes Llama2 GGML Model - [https://huggingface.co/TheBloke/Nous-Hermes-Llama2-GGML](https://huggingface.co/TheBloke/Nous-Hermes-Llama2-GGML) | 2023-07-24T08:48:51 | https://www.reddit.com/r/LocalLLaMA/comments/1584vgc/koboldcpp_what_are_your_numbers_between_clblast/ | lemon07r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1584vgc | false | null | t3_1584vgc | /r/LocalLLaMA/comments/1584vgc/koboldcpp_what_are_your_numbers_between_clblast/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'GNFvogUAbgZ91N-Y_rvKuEqhrsqeJsHKjQCwxmml2ro', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/efGSTCEdA8ZZx1S1OmEVSaOHzdktuchGqc3IQl0t_bU.jpg?width=108&crop=smart&auto=webp&s=78a0daf2679060916d6932503899961c169a7868', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/efGSTCEdA8ZZx1S1OmEVSaOHzdktuchGqc3IQl0t_bU.jpg?width=216&crop=smart&auto=webp&s=b39f69c00d703cafaacded53e13e2d91091fedf0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/efGSTCEdA8ZZx1S1OmEVSaOHzdktuchGqc3IQl0t_bU.jpg?width=320&crop=smart&auto=webp&s=176d6f8ab5b8b976a2e56c2f77c2d1cf379a8142', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/efGSTCEdA8ZZx1S1OmEVSaOHzdktuchGqc3IQl0t_bU.jpg?width=640&crop=smart&auto=webp&s=e36c8123dd807318587b9a11faa6c717a0ff9131', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/efGSTCEdA8ZZx1S1OmEVSaOHzdktuchGqc3IQl0t_bU.jpg?width=960&crop=smart&auto=webp&s=8170edac9bc84fdfe7d6a0acbf7332e35ca4dcd4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/efGSTCEdA8ZZx1S1OmEVSaOHzdktuchGqc3IQl0t_bU.jpg?width=1080&crop=smart&auto=webp&s=3cbb0a6cf8cfb5fa7c07fd2255a9e79e8e5cb3de', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/efGSTCEdA8ZZx1S1OmEVSaOHzdktuchGqc3IQl0t_bU.jpg?auto=webp&s=391a2d777cd9be3773679ad45a12495d894bcf72', 'width': 1200}, 'variants': {}}]} |
Are there still people building on Falcon models? | 1 | As far as I understand, Falcon models have a better license than Llama 2 models? Apparently, it looks like support for Falcon in GGML is still experimental. 🥲 | 2023-07-24T09:30:30 | https://www.reddit.com/r/LocalLLaMA/comments/1585mnh/are_there_still_people_building_on_falcon_models/ | Acrobatic-Site2065 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1585mnh | false | null | t3_1585mnh | /r/LocalLLaMA/comments/1585mnh/are_there_still_people_building_on_falcon_models/ | false | false | self | 1 | null |
Alternative Download mean (because of my unstable local electricity and Internet) | 1 | Hello everyone,
This is my second week of trying to download the llama-2 models without abrupt stops, but all my attempts have been to no avail.
I'm posting this to request your guidance or assistance on how to download the models completely despite my current predicament.
As far as I can tell, the [download.sh](https://github.com/facebookresearch/llama/blob/main/download.sh) in the GitHub repo is a very simple script: it doesn't check for previously downloaded or partially downloaded files, and every 24 hrs I'll have to get a new key.
i.e., some of the options I have left are IPFS, torrent, torrent-via-i2p, a modified download script (sketched below), etc. (but I really prefer torrent because it's very easy, and once I get the files I can seed them indefinitely for others like me)
and I'd really appreciate any help anyone can provide.
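In case it helps the discussion, the "modified download script" option I have in mind is roughly this (a Python sketch; the URL and output path are placeholders, and it assumes the download server honors HTTP Range requests):

```
import os
import requests

url = "<presigned download URL from the Meta email>"   # placeholder; these still expire after 24 hrs
out_path = "llama-2-7b/consolidated.00.pth"            # placeholder output path

os.makedirs(os.path.dirname(out_path), exist_ok=True)
pos = os.path.getsize(out_path) if os.path.exists(out_path) else 0
headers = {"Range": f"bytes={pos}-"} if pos else {}

with requests.get(url, headers=headers, stream=True, timeout=60) as r:
    r.raise_for_status()
    with open(out_path, "ab" if pos else "wb") as f:
        for chunk in r.iter_content(chunk_size=1 << 20):  # 1 MiB chunks
            f.write(chunk)
```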
Thank you.
NOTE:
This is my current status:
|Daily Power Outages|Minimum of 5 - 9 times a day|
|:-|:-|
|Power Outage Duration|Minimum 15 mins to 3-5 hours|
|Network|morning 500kbps, night 2 mbps(12am-5am)|
|UPS|n/a|
|Battery/Inverter|n/a|
|Generator|n/a|
​ | 2023-07-24T11:14:06 | https://www.reddit.com/r/LocalLLaMA/comments/1587o35/alternative_download_mean_because_of_my_unstable/ | Red_Luci4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1587o35 | false | null | t3_1587o35 | /r/LocalLLaMA/comments/1587o35/alternative_download_mean_because_of_my_unstable/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'QVZ-7wnYYq9G9ot5wdH4HclxuLKGUTmobZ11SydGR44', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2-s3QEMHpKJ8ZHbu4-SFebhKV7C8gL7dq_fDp6Mjh8I.jpg?width=108&crop=smart&auto=webp&s=71547f4c7b447974d800e339c632986a4f5a2474', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2-s3QEMHpKJ8ZHbu4-SFebhKV7C8gL7dq_fDp6Mjh8I.jpg?width=216&crop=smart&auto=webp&s=4752408a7652f105e7d322db211e6be94aab6c5a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2-s3QEMHpKJ8ZHbu4-SFebhKV7C8gL7dq_fDp6Mjh8I.jpg?width=320&crop=smart&auto=webp&s=917495e81ce866b62bad2b637780416998034673', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2-s3QEMHpKJ8ZHbu4-SFebhKV7C8gL7dq_fDp6Mjh8I.jpg?width=640&crop=smart&auto=webp&s=8e5d8b52e02d8b08cc2c4e0e8393ae9afcce5fd5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2-s3QEMHpKJ8ZHbu4-SFebhKV7C8gL7dq_fDp6Mjh8I.jpg?width=960&crop=smart&auto=webp&s=747321f2f8586399f9ee4dd302eee8d169d35eb8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2-s3QEMHpKJ8ZHbu4-SFebhKV7C8gL7dq_fDp6Mjh8I.jpg?width=1080&crop=smart&auto=webp&s=df1ea81e09e8457624fb64da3960e49908651a75', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2-s3QEMHpKJ8ZHbu4-SFebhKV7C8gL7dq_fDp6Mjh8I.jpg?auto=webp&s=114e4b923eaab1d483d5ced0570244ef8be9b867', 'width': 1200}, 'variants': {}}]} |
Use AMD GPU with LlamaCpp | 1 | I am trying to run this code on the GPU, but currently it is not using the GPU at all.
https://preview.redd.it/3bbj139dhwdb1.png?width=690&format=png&auto=webp&s=d776c7b86028fc3c19bf8d06f1fee41876775a17
I am using AMD GPU R9 390 on ubuntu and OpenCL support was installed following this: [https://www.reddit.com/r/LocalLLaMA/comments/13m8li2/finally\_got\_a\_model\_running\_on\_my\_xtx\_using/](https://www.reddit.com/r/LocalLLaMA/comments/13m8li2/finally_got_a_model_running_on_my_xtx_using/)
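In case it's relevant, the way I understand GPU offload is supposed to be requested from Python is something like this (a sketch assuming the llama-cpp-python bindings; the model path and layer count are placeholders):

```
# Sketch assuming llama-cpp-python; the wheel has to be built with CLBlast for the GPU to be used at all,
# e.g. CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install --force-reinstall --no-cache-dir llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-2-13b.ggmlv3.q4_K_M.bin",  # placeholder path
    n_gpu_layers=32,   # 0 keeps everything on the CPU
    n_ctx=2048,
)
out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(out["choices"][0]["text"])
```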
​ | 2023-07-24T11:44:15 | https://www.reddit.com/r/LocalLLaMA/comments/1588b09/use_amd_gpu_with_llamacpp/ | blacky_ninja | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1588b09 | false | null | t3_1588b09 | /r/LocalLLaMA/comments/1588b09/use_amd_gpu_with_llamacpp/ | false | false | 1 | null |
Running Llama-2 faster | 1 | Hi,
I am working with a Tesla V100 16GB to run Llama-2 7b and 13b. I have used the GPTQ and GGML versions. The generation is very slow: it takes 25s and 32s respectively. Is there a way I can run it faster? | 2023-07-24T12:11:14 | https://www.reddit.com/r/LocalLLaMA/comments/1588wpe/running_llama2_faster/ | gijeri4793 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1588wpe | false | null | t3_1588wpe | /r/LocalLLaMA/comments/1588wpe/running_llama2_faster/ | false | false | self | 1 | null |
Anyone building anything open source with Llama? | 1 | Hi everyone, hope it's okay to share this here in case it might be interesting to anyone:
We opened up applications for 100 Builders, a 4-week online program/hackathon for builders to collaborate on bold open source projects for AI (and crypto), organized by a small team and supported by amazing partners and sponsors. So if there is anyone building open source software with Llama, you should definitely consider joining!
It is 100% free to participate, light on time commitment, and 100% on builders' terms. Just a BIG build-that-thing energy between 100 projects that get in. I don't need to convince you about how important it is to have this new tech be built as open as possible, and we hope this will help with that! 🙂
We have a lot more to announce, including details about $20k+ (and counting) in sponsorship and prize money for participants. You can apply directly on the site [https://100.builders/](https://100.builders/?ref=ros) \- it takes 2 minutes.
Applications close on August 10th.
There is a little FAQ here: [https://docs.100.builders/100-builders-qs-and-as](https://docs.100.builders/100-builders-qs-and-as) but you can also shoot us an email with your questions to [[email protected]](mailto:[email protected])
Thanks and we're STOKED to build open tech together with you!
**If you have any questions or feedback, let me know!** | 2023-07-24T12:50:50 | https://www.reddit.com/r/LocalLLaMA/comments/1589skn/anyone_building_anything_open_source_with_llama/ | LuisSur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1589skn | false | null | t3_1589skn | /r/LocalLLaMA/comments/1589skn/anyone_building_anything_open_source_with_llama/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '6K9I4Elp66laQqzuTtHu25AT9EbNe25z_bvJaYdcjJ8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/VKeUbjHda_smkB338u4TFKMbBdaUZglx0L8P9xuPgzo.jpg?width=108&crop=smart&auto=webp&s=45f5041a22ef7176181884cd16245581c62ce10d', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/VKeUbjHda_smkB338u4TFKMbBdaUZglx0L8P9xuPgzo.jpg?width=216&crop=smart&auto=webp&s=120a5e23acaf21999f22998130c1045649ec6fc7', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/VKeUbjHda_smkB338u4TFKMbBdaUZglx0L8P9xuPgzo.jpg?width=320&crop=smart&auto=webp&s=b4fededa8dda08b3fcc0c9e90fa258e33d8d1901', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/VKeUbjHda_smkB338u4TFKMbBdaUZglx0L8P9xuPgzo.jpg?width=640&crop=smart&auto=webp&s=ed1eb8a3b25e74a13e29ff0e57da2ed8d2e42f65', 'width': 640}], 'source': {'height': 418, 'url': 'https://external-preview.redd.it/VKeUbjHda_smkB338u4TFKMbBdaUZglx0L8P9xuPgzo.jpg?auto=webp&s=c44f321db0d5011ecc51842acf82aeb39c0dd1cb', 'width': 800}, 'variants': {}}]} |
Experiments on Consistent Rotation Base for Dynamic NTK RoPE | 1 | Hey guys,
I raised a subtle rotation inconsistency problem about the current Dynamic NTK RoPE in my previous [post](https://www.reddit.com/r/LocalLLaMA/comments/155bexn/a_potential_rotation_inconsistency_of_dynamically/?utm_source=share&utm_medium=web2x&context=3).
Our current evaluation methods are unable to accurately reflect whether such inconsistency in Dynamic NTK RoPE harms perplexity or not. During decoding, in any decoder layer, the key_states and query_states are computed from the hidden features and rotated based on a fixed seq_len representing the context length. However, while decoding, the LLM usually reuses the previously cached keys, which were rotated based on factors tied to the seq_len at the time they were cached, to save memory. As the seq_len keeps increasing, an inconsistency arises between keys and queries. Consequently, the way we currently compute perplexity effectively keeps the rotation base consistent.
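To make the inconsistency concrete, here is a small sketch of how the dynamic base changes with the current sequence length (this follows the Hugging Face dynamic-NTK formula as I understand it; the numbers are only illustrative):

```
# Dynamic-NTK base as a function of the current seq_len (per my reading of the
# LlamaDynamicNTKScalingRotaryEmbedding implementation in transformers).
def dynamic_ntk_base(seq_len, base=10000.0, dim=128, max_pos=2048, scaling_factor=1.0):
    if seq_len <= max_pos:
        return base
    return base * ((scaling_factor * seq_len / max_pos) - (scaling_factor - 1)) ** (dim / (dim - 2))

print(dynamic_ntk_base(3000))  # base the cached keys were rotated with
print(dynamic_ntk_base(5000))  # base used for the current query -> a different rotation
```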
To mitigate such a gap between perplexity evaluation and inference, I modified the codes about applying the rotary embedding on keys and queries in this [repo](https://github.com/NormXU/Consistent-DynamicNTKRoPE) and do simple experiments on Llama1-7B.
Here are some results:
​
[Figure 1: Perplexity on Llama1-7B, a 2k max sequence length model; values above 12.0 are cut off for conciseness. Vanilla: RoPE w/o any interpolation; NTK: DynamicNTK when scale=1; Consistent DynamicNTK: rotation base between keys kept consistent (the current huggingface implementation); Inconsistent DynamicNTK: rotation base between keys inconsistent w.r.t. context length](https://preview.redd.it/3rs1679yuwdb1.png?width=1000&format=png&auto=webp&s=a598161c12db153e19a376048616862d55431ef9)
We can see from Figure 1 that when the rotation base between keys is kept inconsistent w.r.t. context length, the perplexity significantly increases, indicating that Dynamic NTK harms performance. This finding might initially seem counterintuitive.
However, as the sequence length continues to grow, we can notice a gradual reduction in perplexity for the inconsistent Dynamic NTKScale RoPE. Interestingly, the inconsistent Dynamic NTKScale RoPE outperforms the NTKScale RoPE in terms of perplexity when the sequence length exceeds 5,000.
This may explain why we tend to ignore the inconsistency in the rotation: it does benefit longer contexts beyond a certain sequence length.
​
Still, my experiments have some limitations. I only tested on one dataset with a limited number of samples. I hope my findings can be helpful to you. If there is any mistake in my code or experiments, I'd appreciate it if you could kindly point it out. Please feel free to raise an issue in the repo as well.
​
Table 1: Perplexity values
| **Length** | **Consistent Dynamic NTKScale PPL** | **Inconsistent Dynamic NTKScale PPL** | **NTKScale PPL** |
|:-|:-|:-|:-|
| 2800 | 4.285102386474609 | 10.203343925476075 | 4.301338438987732 |
| 3600 | 4.371902356147766 | 9.213108296394347 | 5.401671919822693 |
| 5600 | 4.536222472190857 | 8.04413757801056 | 10.291163015365601 |
| 7200 | 4.7303602981567385 | 7.674421100616455 | 15.359781618118285 |
| 8000 | 4.93225586414337 | 7.7100021314620975 | 15.884212293624877 |
​
​
​ | 2023-07-24T13:13:27 | https://www.reddit.com/r/LocalLLaMA/comments/158acjl/experiments_on_consistent_rotation_base_for/ | Alternative_World936 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158acjl | false | null | t3_158acjl | /r/LocalLLaMA/comments/158acjl/experiments_on_consistent_rotation_base_for/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'd7DpVHso4SOesjgXaprsdA9kTZpPF7MY4oDSBaPOhis', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/S0hEtcqecSItRFCiTzYUsBWTVl7YsD-ABBFF2k9I1Ls.jpg?width=108&crop=smart&auto=webp&s=1ea594bc89806f9ffa046fb9a6889477fd10c38e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/S0hEtcqecSItRFCiTzYUsBWTVl7YsD-ABBFF2k9I1Ls.jpg?width=216&crop=smart&auto=webp&s=74c8e0277b936234daa6089cfeec78ac604ce337', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/S0hEtcqecSItRFCiTzYUsBWTVl7YsD-ABBFF2k9I1Ls.jpg?width=320&crop=smart&auto=webp&s=8d2840d619357ac7158414f4daa4042f5e735b8f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/S0hEtcqecSItRFCiTzYUsBWTVl7YsD-ABBFF2k9I1Ls.jpg?width=640&crop=smart&auto=webp&s=76251152a9e965d527627a7cb8292046918a202e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/S0hEtcqecSItRFCiTzYUsBWTVl7YsD-ABBFF2k9I1Ls.jpg?width=960&crop=smart&auto=webp&s=8c48e731766fda11a1f8d29cf4ef0b24ff7acea4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/S0hEtcqecSItRFCiTzYUsBWTVl7YsD-ABBFF2k9I1Ls.jpg?width=1080&crop=smart&auto=webp&s=4ccdd94059bcfdbc2a52bcb9bd7e240aeff8f933', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/S0hEtcqecSItRFCiTzYUsBWTVl7YsD-ABBFF2k9I1Ls.jpg?auto=webp&s=1f15058cefaeb2d70e0cba58b46ca6793cd0c92a', 'width': 1200}, 'variants': {}}]} |
QLora 13B Google Colab | 1 | Has anyone managed to QLoRA fine-tune the Llama 2 13B model on Google Colab (free tier)? Huggingface has a notebook ([https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing](https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing)) where they fine-tune a 20B GPT-NeoX model with QLoRA on the Colab free tier, so I would expect the 13B model to work?
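For reference, the kind of 4-bit setup I mean looks roughly like this (a sketch; the exact hyperparameters and model ID are placeholders rather than what the notebook uses):

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-13b-hf"  # gated repo, needs access approval

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,  # the free-tier T4 has no bfloat16
)

model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto")
model.gradient_checkpointing_enable()
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```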
I've tried using my own custom datasets and the dataset they used but keep going out of memory, so was wondering if anyone had any success, thanks! | 2023-07-24T13:20:55 | https://www.reddit.com/r/LocalLLaMA/comments/158aiyk/qlora_13b_google_colab/ | nreHieS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158aiyk | false | null | t3_158aiyk | /r/LocalLLaMA/comments/158aiyk/qlora_13b_google_colab/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&s=4b647239f77bf713f4a6209cfa4867351c055fd9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&s=7f4234ff3f4f4ebd7f77236dedb03a2faee3e04a', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&s=73eb91ea5a5347f216c0f0c4d6796396826aae49', 'width': 260}, 'variants': {}}]} |
To all the Politically Correct Censored Model makers...the road to hell is paved with~ | 1 | [removed] | 2023-07-24T13:38:02 | https://www.reddit.com/r/LocalLLaMA/comments/158ayk8/to_all_the_politically_correct_censored_model/ | Vitamin_C_is_awesome | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158ayk8 | false | null | t3_158ayk8 | /r/LocalLLaMA/comments/158ayk8/to_all_the_politically_correct_censored_model/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'J5dXm7kG2hj66BfWkxTOXMhcXjc0-ehKB-3AjDYs0W0', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ROkeH9BgNkcrx0Xs4LPyTvGNWsDMtg7n1BhqZU9IaQk.jpg?width=108&crop=smart&auto=webp&s=0e9ccc6d9e9ad207f8140da950d8bbf9aaceb561', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/ROkeH9BgNkcrx0Xs4LPyTvGNWsDMtg7n1BhqZU9IaQk.jpg?width=216&crop=smart&auto=webp&s=6b5c30445583b6a28f895423f3bc28358a37b608', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/ROkeH9BgNkcrx0Xs4LPyTvGNWsDMtg7n1BhqZU9IaQk.jpg?width=320&crop=smart&auto=webp&s=6c92954d16d74571d4c1e7dd111d43085c454a83', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/ROkeH9BgNkcrx0Xs4LPyTvGNWsDMtg7n1BhqZU9IaQk.jpg?auto=webp&s=d3a89e331200dbc82b5a957231d665c6b9091819', 'width': 480}, 'variants': {}}]} |
Is there a model that can generate a story from an image? | 1 | I take a lot of screenshots while playing games. If I could give a model one of these + a text prompt for context and have it tell a story (*that stays canon to the game world*), that would be amazing.
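The closest workaround I can picture is chaining an image captioner into an LLM, roughly like this sketch (the model names and file path are just examples, and it wouldn't know game canon on its own):

```
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-large")  # example captioner
writer = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")            # example LLM (gated)

caption = captioner("screenshots/example_0042.png")[0]["generated_text"]
prompt = (f"Game: <game name>. Screenshot description: {caption}\n"
          "Write a short in-universe story about this scene, staying true to the game's lore.\n")
print(writer(prompt, max_new_tokens=300)[0]["generated_text"])
```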
Is there something like this yet? | 2023-07-24T14:00:17 | https://www.reddit.com/r/LocalLLaMA/comments/158bidh/is_there_a_model_that_can_generate_a_story_from/ | JebryyathHS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158bidh | false | null | t3_158bidh | /r/LocalLLaMA/comments/158bidh/is_there_a_model_that_can_generate_a_story_from/ | false | false | self | 1 | null |
Adding a P40 to a 2x3090 setup? | 1 | I was wondering if anyone had any experience adding a P40 (or similar high memory GPU) to an existing dual GPU setup for just the memory? I have a 2x3090 setup and was wondering if there would be negative performance issues if I add a P40 for the memory ; can I actively force CUDA to not use the P40 for inference? | 2023-07-24T14:03:01 | https://www.reddit.com/r/LocalLLaMA/comments/158bldl/adding_a_p40_to_a_2x3090_setup/ | GeeBee72 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158bldl | false | null | t3_158bldl | /r/LocalLLaMA/comments/158bldl/adding_a_p40_to_a_2x3090_setup/ | false | false | self | 1 | null |
University Dean Loves LLMs and Roleplaying and Wants ME to buy a Workstation with International Shipping for “AI Research” please send help | 1 | [removed] | 2023-07-24T14:04:29 | https://www.reddit.com/r/LocalLLaMA/comments/158bmvh/university_dean_loves_llms_and_roleplaying_and/ | Varzsy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158bmvh | false | null | t3_158bmvh | /r/LocalLLaMA/comments/158bmvh/university_dean_loves_llms_and_roleplaying_and/ | false | false | self | 1 | null |
Paper insights from Llama 2: Open Foundation and Fine-Tuned Chat Models | 1 | Is Llama 2 special or just a better iteration of Llama 1? 🤔 As most of you know by now, Meta released Llama 2, a better version of the Llama model with a commercial-friendly license. 🚀 Over the weekend, I had time to read the paper that Meta released alongside the model.
Below are some of my findings, which you might have missed, and improvements: 📝
🧠 A 34B version may come later after more testing
⚖️ The 7B model used a 285x token to parameter ratio, with loss still decreasing.
💰 Training the 7B would cost \~$1M in AWS compute (5$ per A100 on AWS on-demand)
🛫 Llama Chat was started before Llama 2 finished training
◼️ User prompts were masked/zeroed in SFT & RLHF training
👑 Reward Model (RM) accuracy is one of the most important proxies for Chat model
🚀 Collecting data in batches helped improve the overall model, since the RM and LLM were iteratively re-trained (helpful in an online setting)
🔢 Used Rejection Sampling (RS) to distill knowledge from the 70B for a better SFT dataset
🤔 Only used RS for the first 3 versions, then extended to RS + PPO
🆕 Proposed GAtt, inspired by Context Distillation, to augment fine-tuning data for better multi-turn conversations
💡 RS + RM can boost performance by 10% compared to SFT
🛠 Chat model learned to use tools.
Check out the full paper here: [https://arxiv.org/abs/2307.09288](https://arxiv.org/abs/2307.09288)
Meta says, “…reinforcement learning proved highly effective, particularly given its cost and time effectiveness. Our findings underscore that the crucial determinant of RLHF’s success lies in the synergy it fosters between humans and LLMs throughout the annotation process.”
Remember that these are just my personal findings. Make sure always to conduct your own research and analysis. 🤗 | 2023-07-24T15:32:41 | https://www.reddit.com/r/LocalLLaMA/comments/158dz3d/paper_insights_from_llama_2_open_foundation_and/ | Ok_Two6167 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158dz3d | false | null | t3_158dz3d | /r/LocalLLaMA/comments/158dz3d/paper_insights_from_llama_2_open_foundation_and/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
A minimal guide on how to setup LLaMA 2 on your Mac | 1 | 2023-07-24T15:53:33 | https://abishov.com/2023/07/23/running-llama-2-on-your-mac.html | abishov | abishov.com | 1970-01-01T00:00:00 | 0 | {} | 158einh | false | null | t3_158einh | /r/LocalLLaMA/comments/158einh/a_minimal_guide_on_how_to_setup_llama_2_on_your/ | false | false | default | 1 | null |
Apes Together Strong! - Dreaming of Fully Democratizing the Training of New Models Using Distributed Computing | 1 | I’m very new to running my own LLMs, but with the resources available, I am making my way through it somehow. My brain is melting somewhat, but it will be okay.
Something I want to do now is fine-tune a model such as Llama-2 on a dataset consisting of all the new knowledge around running, training, and working with open source and local LLMs present here on this sub, GitHub, and blog posts. I feel like it would greatly reduce my urge to ask questions that have already been answered or help me avoid wading through vast troves of knowledge spread out all over the Internet. Using the wonderful things being produced by projects/people such as OpenAccess-AI-Collective/axolotl, artidoro/qlora, llama.cpp/convert-lora-to-ggml.py, and others, I think I'm going to be able to do so. I might need a better setup, but there seems to be a path. (If someone has already fine-tuned a model to do this, please let me know).
Where I don’t yet see any clear path yet is for normal folks to be able to train new models themselves. This is so far out of reach for me to do right now myself; it's not even funny. And it’s not because there’s no possible way for me to understand how to do it; it’s that I simply do not have and will never have the computing resources needed.
What if that could change, though? Alone, we’ll never get it done, but what if someone created a tool which allowed for a distributed computing network where we could all volunteer our own small amount of resources toward a common goal of training a particular model? Projects like Petals (https://github.com/bigscience-workshop/petals) for running these bigger LLMs already exist. It seems like to me there should be all the pieces to make this happen.
Is there some reason that training these large LLMs together is not a logical next step or will somehow simply be impossible? | 2023-07-24T16:08:37 | https://www.reddit.com/r/LocalLLaMA/comments/158exoy/apes_together_strong_dreaming_of_fully/ | The_IT_Dude_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158exoy | false | null | t3_158exoy | /r/LocalLLaMA/comments/158exoy/apes_together_strong_dreaming_of_fully/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'QJH9MsKXTWq2HFWN6IwXg0Sk2PqpHYokLuR9smdy96s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OFDFpC2_7cufi4NkASkxBm1aok0v_N8qsaQo20rdGOU.jpg?width=108&crop=smart&auto=webp&s=996ab8f5b94a43a4b5de98c06136d3c64e610228', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OFDFpC2_7cufi4NkASkxBm1aok0v_N8qsaQo20rdGOU.jpg?width=216&crop=smart&auto=webp&s=a06ae34c6268a916374eceb3c9b478e2d263a748', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OFDFpC2_7cufi4NkASkxBm1aok0v_N8qsaQo20rdGOU.jpg?width=320&crop=smart&auto=webp&s=70859147de276d6447fe58957176919eb42c0eac', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OFDFpC2_7cufi4NkASkxBm1aok0v_N8qsaQo20rdGOU.jpg?width=640&crop=smart&auto=webp&s=d781fa699385959e9613377e5446819311f31870', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OFDFpC2_7cufi4NkASkxBm1aok0v_N8qsaQo20rdGOU.jpg?width=960&crop=smart&auto=webp&s=3472b01de00a98f61d3d5f42366b772b282b352e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OFDFpC2_7cufi4NkASkxBm1aok0v_N8qsaQo20rdGOU.jpg?width=1080&crop=smart&auto=webp&s=28d4f20acf2e327c93f5536835d90ceb29e3882f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OFDFpC2_7cufi4NkASkxBm1aok0v_N8qsaQo20rdGOU.jpg?auto=webp&s=1746a2b95c2047479ee64717df426959219234b2', 'width': 1200}, 'variants': {}}]} |