| column | dtype | observed range / values |
|---|---|---|
| title | string | length 1–300 |
| score | int64 | 0–8.54k |
| selftext | string | length 0–40k |
| created | timestamp[ns] | n/a |
| url | string | length 0–780 |
| author | string | length 3–20 |
| domain | string | length 0–82 |
| edited | timestamp[ns] | n/a |
| gilded | int64 | 0–2 |
| gildings | string | 7 classes |
| id | string | length 7 |
| locked | bool | 2 classes |
| media | string | length 646–1.8k |
| name | string | length 10 |
| permalink | string | length 33–82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | length 4–213 |
| ups | int64 | 0–8.54k |
| preview | string | length 301–5.01k |
Help wanted with a project
1
I have been tasked with researching the use of a Large Language Model as a knowledge base for my company. I aim to implement the most capable model possible, so I would appreciate any recommendations on hardware and AI models that would be suitable. Specifically, what hardware specifications - such as GPUs, CPUs, and memory - would you advise to support advanced models? And which Large Language Model architecture would you suggest is best suited for a robust enterprise knowledge base, in terms of capabilities and scalability? Insights on model training approaches and optimization for our use case would also be greatly appreciated. My goal is to leverage state-of-the-art AI to build an intelligent and comprehensive knowledge base. Expert guidance on optimal hardware configurations and model selection would help ensure I implement an effective solution suited to my company's needs and resources. I welcome any technical guidance you can offer, as I want to make an informed decision for this important initiative. Please let me know your thoughts and suggestions.
2023-08-05T10:22:36
https://www.reddit.com/r/LocalLLaMA/comments/15is2it/help_wanted_with_a_project/
Rear-gunner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15is2it
false
null
t3_15is2it
/r/LocalLLaMA/comments/15is2it/help_wanted_with_a_project/
false
false
self
1
null
What exactly is it that makes Llama 2 better than Llama?
1
[removed]
2023-08-05T10:27:42
https://www.reddit.com/r/LocalLLaMA/comments/15is5t3/what_exactly_is_it_that_makes_llama_2_better_than/
crono760
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15is5t3
false
null
t3_15is5t3
/r/LocalLLaMA/comments/15is5t3/what_exactly_is_it_that_makes_llama_2_better_than/
false
false
self
1
null
Comparing Linear Rope Scaling vs NTK Scaling for 8K Superhot and Hermes-LLongMA-2 8K Models
1
I have done quite a few tests with models that have been finetuned with linear rope scaling, like the 8K superhot models and now also with the hermes-llongma-2-13b-8k.ggmlv3.q4_K_S.bin model. My GPU has 16GB VRAM, which allows me to run 13B q4_0 or q4_K_S models entirely on the GPU with 8K context using koboldcpp (v1.38).

As an initial test, I use a short prompt (in story mode) and set the number of tokens to generate to 8000, which still fits in the 8K context buffer together with the prompt. I use temp 0.3, top p 0.9 and streaming, and I abort and regenerate if I don't like the first chapter, e.g. if it's too short. It takes about 8 minutes for me to generate the 8000 tokens, and then I look through the text to check for obvious problems. The prompt that I use should (with some luck) generate enough varying text and reads as follows:

The following is an encyclopedia about every country in the world, each chapter addressing a different country, ordered by the name of the country, including its statistics, GDP per capita, history, culture, and notable landmarks.
-------------
Chapter 1: Afghanistan

Afghanistan

First, I conducted the test multiple times using the correct scaling method: linear rope scaling, with the correct scaling factor. Next, I conducted multiple tests using the incorrect scaling method: NTK scaling instead of linear rope scaling, with the correct scaling factor.

Results for all 13B 8K superhot models I tested, like chronos-hermes-13b-superhot-8k.ggmlv3.q4_0.bin:

- ropeconfig 0.25 10000 (linear factor 4): Many problems in numbers containing 2 or more successive identical digits, like 11 -> 111, 2001 -> 20001 etc.
- ropeconfig 1.0 82000 (NTK factor 4): Much better, very obvious. But still problems in numbers containing 3 or more successive identical digits, like 000 -> 0000.

Results for hermes-llongma-2-13b-8k.ggmlv3.q4_K_S.bin:

- ropeconfig 0.5 10000 (linear factor 2): Problems in numbers containing 3 or more successive identical digits, like 000 -> 0000. Similar quality of numbers as superhot 8K models when using NTK scaling, maybe slightly better.
- ropeconfig 1.0 32000 (NTK factor 2): Much better. No obvious problems in numbers seen.

So, according to these results, 8K models based on linear rope scaling like superhot and hermes-llongma-2 produce much better number behaviour when using NTK scaling than when using linear rope scaling during inference. This result was surprising. I have only tested using koboldcpp (up to v1.38), but I don't believe that there is a bug. I also have the impression that not just numbers but the quality of the text in general is better when using NTK scaling instead of linear rope scaling with these models, but I may be hallucinating that.

Has anybody else seen similar behaviour for such a test with 8K superhot or LLongMA-2 8K models?

I also did these tests with the vicuna-13b-v1.5-16k.ggmlv3.q4_K_S.bin model, which supports a 16K context and is based on LLaMA-2 (scaling factor 4), testing with an 8K context buffer, and I didn't see any number problems in the 8000 tokens produced, neither using linear scaling nor using NTK scaling.
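For readers unfamiliar with what the two --ropeconfig values actually change, here is a minimal sketch of how linear rope scaling and NTK-style scaling modify the RoPE frequencies. The helper name is mine, head_dim=128 is the standard LLaMA head size, and the 82000 base is simply the value reported in the post for koboldcpp's NTK factor 4, not something derived here; this is an illustration, not koboldcpp's actual code.

```python
def rope_angles(position, head_dim=128, base=10000.0, freq_scale=1.0):
    """Rotation angle per rotary pair for one token position.

    Linear rope scaling: keep base=10000 and shrink freq_scale (e.g. 0.25 for factor 4),
    which compresses positions so an 8K window maps onto the 2K range seen in training.
    NTK-style scaling: keep freq_scale=1.0 and raise the base (e.g. 82000 for factor 4),
    which stretches mainly the low-frequency dimensions instead.
    """
    inv_freq = [base ** (-2.0 * i / head_dim) for i in range(head_dim // 2)]
    return [position * freq_scale * f for f in inv_freq]

# --ropeconfig 0.25 10000  (linear, factor 4)
linear = rope_angles(4096, freq_scale=0.25)
# --ropeconfig 1.0 82000   (NTK, factor 4, base value as reported in the post)
ntk = rope_angles(4096, base=82000.0)
# Highest-frequency pair: linear scaling shrinks it, NTK-style scaling leaves it untouched.
print(linear[0], ntk[0])
```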
2023-08-05T11:12:16
https://www.reddit.com/r/LocalLLaMA/comments/15isyyo/comparing_linear_rope_scaling_vs_ntk_scaling_for/
mll59
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15isyyo
false
null
t3_15isyyo
/r/LocalLLaMA/comments/15isyyo/comparing_linear_rope_scaling_vs_ntk_scaling_for/
false
false
self
1
null
Transformer implementation - help
1
[removed]
2023-08-05T11:40:16
https://www.reddit.com/r/LocalLLaMA/comments/15ithtd/transformer_implementation_help/
04RR
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ithtd
false
null
t3_15ithtd
/r/LocalLLaMA/comments/15ithtd/transformer_implementation_help/
false
false
self
1
null
Help for set Llama.cpp public
1
Hi, I would like to make llama.cpp public. Every time I start it, it gives me this output: "To create a public link, set share=True in launch()." Where is the file with that option? Thanks
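That message is printed by Gradio, so whatever front end is wrapping llama.cpp here is a Gradio app and the option lives in the script that calls launch(). A minimal hedged sketch of what that call looks like; the file, function, and interface below are placeholders, not the asker's actual setup:

```python
import gradio as gr

def generate(prompt: str) -> str:
    # Placeholder for the call into llama.cpp / its Python bindings.
    return f"(model output for: {prompt})"

demo = gr.Interface(fn=generate, inputs="text", outputs="text")
# share=True is what the "To create a public link" message refers to:
# Gradio tunnels the local server and prints a public *.gradio.live URL.
demo.launch(share=True)
```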
2023-08-05T11:54:40
https://www.reddit.com/r/LocalLLaMA/comments/15itrt5/help_for_set_llamacpp_public/
HackintoshWITHoMEN30
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15itrt5
false
null
t3_15itrt5
/r/LocalLLaMA/comments/15itrt5/help_for_set_llamacpp_public/
false
false
self
1
null
Fine-Tuning LLaMA 2 7b q4 GGML?
1
So I got LLaMA 2 7B q4 GGML inference running on some systems, and it works well enough. Using Koboldcpp. I'd like to fine-tune the model, and I'm trying to figure out how to go about doing this. There are a lot of methods (LoRA etc.), so I'm a little confused. I'm fine with the default context lengths etc.

1. The process: Should I fine-tune the un-quantized fp16 LLaMA 2 7B released by Meta, and then convert the fine-tuned model into a .bin? Can quantized GGML models be fine-tuned directly? I found this: https://www.philschmid.de/instruction-tune-llama-2 , but it's for QLoRA. Is it better (from an output quality point of view) to fine-tune with or without QLoRA and flash attention?
2. Instruction fine-tuning dataset: should this be a JSONL file, a JSON file, or separate files? How should I format it?
3. Hyperparameters: can I use the same hyperparameters specified in the link? What if I don't use QLoRA, what would the hyperparameters be then?
4. Where: runpod seems to be the recommended option here. Are there any templates I can use for runpod?
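On question 2, the most common convention for instruction datasets is one JSON object per line (JSONL). A hedged sketch of what that could look like; the Alpaca-style field names (instruction/input/output) and the example record are assumptions for illustration, since the exact schema depends on the training script you end up using:

```python
import json

# Alpaca-style records; these field names are one common convention, not a requirement.
examples = [
    {
        "instruction": "Summarize the following support ticket in one sentence.",
        "input": "Customer reports the app crashes when exporting a PDF on Windows 11.",
        "output": "The app crashes during PDF export on Windows 11 and needs a fix.",
    },
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")
```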
2023-08-05T11:58:44
https://www.reddit.com/r/LocalLLaMA/comments/15itul5/finetuning_llama_2_7b_q4_ggml/
noellarkin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15itul5
false
null
t3_15itul5
/r/LocalLLaMA/comments/15itul5/finetuning_llama_2_7b_q4_ggml/
false
false
self
1
{'enabled': False, 'images': [{'id': 'NYy7vS_DCF7ziYozZI5NewU4mrQpjLxWwJIEeoOeoTE', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=108&crop=smart&auto=webp&s=4768a7f3ce8e98b65ec2928dd27be69d13817653', 'width': 108}, {'height': 133, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=216&crop=smart&auto=webp&s=f597cbd4fbbce7835de2c3ddf57bea4be32791f5', 'width': 216}, {'height': 197, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=320&crop=smart&auto=webp&s=63abbf41f12bdd3f3a744092849dea63858626f3', 'width': 320}, {'height': 394, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=640&crop=smart&auto=webp&s=8c350290c3032da07ffd1380750949fe1a6eddec', 'width': 640}, {'height': 591, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=960&crop=smart&auto=webp&s=eb6f8491e988e2a9cbc7ff3ab2a8f7d3c829b09f', 'width': 960}, {'height': 665, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=1080&crop=smart&auto=webp&s=c63d9cb2ef67160c0d0c200ae7b5a4b86e3e4148', 'width': 1080}], 'source': {'height': 1478, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?auto=webp&s=b98be99841a14dfc0937f46c8910ea6847ab32b0', 'width': 2400}, 'variants': {}}]}
How we can get the access of llama 2 API key?
1
I want to use the Llama 2 model in my application but I don't know where I can get an API key to use in my application. I know we can host a private instance of the model, but that doesn't fit my requirements; I just want to make 500 to 1000 requests every day. So it doesn't make sense for me to deploy a private instance and take on one more unwanted headache and an extra bill. Is there any way I can get access to a Llama 2 API for free or at low cost?
2023-08-05T12:04:41
https://www.reddit.com/r/LocalLLaMA/comments/15itz6u/how_we_can_get_the_access_of_llama_2_api_key/
Prashant_4200
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15itz6u
false
null
t3_15itz6u
/r/LocalLLaMA/comments/15itz6u/how_we_can_get_the_access_of_llama_2_api_key/
false
false
self
1
null
What is the best way to run local LLM in another language than English?
1
Hi folks, I wonder what the optimal way is for me to run a local LLM that can respond in my preferred language, in my case Norwegian. I want to develop a local system that can scan my docs (which are originally written in Norwegian) and give me responses in Norwegian as well. Thanks in advance!
2023-08-05T13:37:06
https://www.reddit.com/r/LocalLLaMA/comments/15ivxl5/what_is_the_best_way_to_run_local_llm_in_another/
Calm-Cap-1480
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ivxl5
false
null
t3_15ivxl5
/r/LocalLLaMA/comments/15ivxl5/what_is_the_best_way_to_run_local_llm_in_another/
false
false
self
1
null
Trying to load togethercomputer_LLaMA-2-7B-32K with fully loaded context but it OOMs, but I should have enough VRAM?
1
Using text-generation-webui API through Sillytavern to actually be able to send more than 16k of context. I start running out of memory at about 15k context on a single 80gb card, so I added a second 80gb and I can see that the VRAM of both cards is actually being used, but I'm not able to actually add any context past that despite doubling my VRAM. I'm still getting the same out of memory issue. I've tried using auto-devices and manually assigning VRAM amounts, but both give me the same problem. Tried loading in 8/4 bit, and that lowers the initial (pretty miniscule) footprint of the model without context, but doesn't seem to impact how the context load uses VRAM. I don't think it's some weird problem with the normal 16k context barrier in ooba (because I'm able to use the 16k superhot models without issue, and I'm actually OOMing when I'm sending only 15k on this model.) Any ideas?
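For a rough sanity check on whether the KV cache alone explains this, here is back-of-envelope arithmetic for a 7B LLaMA-style model in fp16. The layer/head numbers are the standard 7B configuration and the result is an estimate, not a measurement of what text-generation-webui actually allocates; naive (non-flash) attention can additionally allocate score matrices that grow with the square of the context, which is a separate cost not counted here.

```python
# Rough KV-cache size for a LLaMA-2-7B-style model at a given context length.
n_layers, n_heads, head_dim = 32, 32, 128   # standard 7B config
bytes_per_value = 2                         # fp16
ctx = 15_000

# 2x for keys and values, per layer, per token.
kv_bytes = 2 * n_layers * n_heads * head_dim * bytes_per_value * ctx
print(f"KV cache @ {ctx} tokens: {kv_bytes / 2**30:.1f} GiB")   # ~7.3 GiB
```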
2023-08-05T14:18:00
https://www.reddit.com/r/LocalLLaMA/comments/15iwvqj/trying_to_load_togethercomputer_llama27b32k_with/
tenmileswide
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15iwvqj
false
null
t3_15iwvqj
/r/LocalLLaMA/comments/15iwvqj/trying_to_load_togethercomputer_llama27b32k_with/
false
false
self
1
null
Any feedback using a i9-13900hx and 4090 mobile?
1
I know, I know, laptops are slow, but I travel a lot. Anyway, I'm wondering what sort of speed I could get with this setup:

- i9-13900HX: 8 P-cores at 2.20/5.40 GHz, 16 E-cores at 1.70/3.60 GHz, 32 threads total (this seems like a crazy CPU for a laptop)
- DDR5-5600, max bandwidth 89 GB/s
- RTX 4090 mobile with 16 GB GDDR6, 576 GB/s, 9728 CUDA cores

To me it seems like a good setup. I'm wondering what kind of speeds I could expect doing inference on a 70B Q4? I guess using a GGML model, offloading 16 GB to VRAM and about 30 GB to system RAM. Thank you for your time.
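A rough way to bound the expected speed with partial offloading: each generated token has to stream the CPU-resident portion of the weights from system RAM, so memory bandwidth divided by the offloaded weight size gives an optimistic ceiling. A back-of-envelope sketch; the ~40 GB model size and the 16 GB VRAM split are assumptions based on the numbers in the post, and real throughput will be lower.

```python
# Optimistic upper bound on tokens/s when part of a 70B Q4 model stays in system RAM.
model_gb = 40.0                 # ~70B at 4-bit, rough figure
vram_gb, ram_bw = 16.0, 89.0    # GB resident on the GPU, system RAM bandwidth in GB/s
cpu_resident_gb = model_gb - vram_gb

# Best case: each token reads the CPU-resident weights exactly once from RAM.
print(f"<= {ram_bw / cpu_resident_gb:.1f} tok/s from the RAM-bound layers alone")  # ~3.7
```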
2023-08-05T14:52:30
https://www.reddit.com/r/LocalLLaMA/comments/15ixp1j/any_feedback_using_a_i913900hx_and_4090_mobile/
No_Afternoon_4260
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ixp1j
false
null
t3_15ixp1j
/r/LocalLLaMA/comments/15ixp1j/any_feedback_using_a_i913900hx_and_4090_mobile/
false
false
self
1
null
How does the HF leaderboard work?
1
A lot of models are tuned to expect specific prompt formats. When I go to the model submission page on the HF leaderboard, I don't see anywhere to specify the prompt format. So how does the evaluation know what format to use?
2023-08-05T14:58:35
https://www.reddit.com/r/LocalLLaMA/comments/15ixu4y/how_does_the_hf_leaderboard_work/
Tasty-Lobster-8915
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ixu4y
false
null
t3_15ixu4y
/r/LocalLLaMA/comments/15ixu4y/how_does_the_hf_leaderboard_work/
false
false
self
1
null
running vicuna models serverless on banana: sharing my templates for community use
1
hey. i am running the vicuna model family serverless on banana. wanted to share the templates in case anyone else feels the need to do so 🙂

- [vicuna-7b](https://github.com/yachty66/demo-vicuna-7b)
- [vicuna-13b](https://github.com/yachty66/demo-vicuna-13b)
- [vicuna-33b](https://github.com/yachty66/demo-vicuna-33b)
2023-08-05T15:16:34
https://www.reddit.com/r/LocalLLaMA/comments/15iyacr/running_vicuna_models_serverless_on_banana/
yachty66
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15iyacr
false
null
t3_15iyacr
/r/LocalLLaMA/comments/15iyacr/running_vicuna_models_serverless_on_banana/
false
false
self
1
{'enabled': False, 'images': [{'id': 'nkcL1-qWvpPw_u6jENaoK_t-7_GgyyAy59AwbnvGZ3c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fY7-pWqYH9HaliwFObOgo0DvAVfe7g8nWy9WP27l1ss.jpg?width=108&crop=smart&auto=webp&s=e0f0de2b2f67cbe0c90b7b0aba14b69651d6351f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fY7-pWqYH9HaliwFObOgo0DvAVfe7g8nWy9WP27l1ss.jpg?width=216&crop=smart&auto=webp&s=b28fc2f9c6aa0537a45cd41a3790b043872f9177', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fY7-pWqYH9HaliwFObOgo0DvAVfe7g8nWy9WP27l1ss.jpg?width=320&crop=smart&auto=webp&s=1a60ef1e3bf89ab127ab3da3eec1b52ee448d941', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fY7-pWqYH9HaliwFObOgo0DvAVfe7g8nWy9WP27l1ss.jpg?width=640&crop=smart&auto=webp&s=4a4a5097974d07a53d9a611126fa4d3d823f1e6b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fY7-pWqYH9HaliwFObOgo0DvAVfe7g8nWy9WP27l1ss.jpg?width=960&crop=smart&auto=webp&s=d6079d2c72be4c4bee078e68f54f9d2da0257e0b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fY7-pWqYH9HaliwFObOgo0DvAVfe7g8nWy9WP27l1ss.jpg?width=1080&crop=smart&auto=webp&s=649b970d4f225dda86313839160c0cde5a37539a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fY7-pWqYH9HaliwFObOgo0DvAVfe7g8nWy9WP27l1ss.jpg?auto=webp&s=795cd48079e2382375b7b88de78918943ec95b4f', 'width': 1200}, 'variants': {}}]}
How to run local models on the cloud
1
Hello, I wanted to make a guide on how to run local models on the cloud, for people who want to test but don't have the right hardware (like me). I will be using Vast in this guide, but Runpod will work too. (This will have you using Ooga Booga Text Generation Web UI. Let me know if you want a guide for KoboldAI too.)

* Go to [https://cloud.vast.ai/](https://cloud.vast.ai/create/) to find instances (if you want, you can use [https://cloud.vast.ai/?ref_id=73271](https://cloud.vast.ai/?ref_id=73271) instead to support me at no extra cost, completely optional though). Here are some things to consider:

1. Hover your mouse over every instance you might buy, as there are bandwidth costs associated with every instance. A few of them have no costs, some are dirt cheap, and others are outrageously priced. I would not do anything like 30 dollars per TB (1 TB is about 1000 GB), but anything under 10 dollars/TB is pretty reasonable. You will need to figure out the price yourself though, as what may be cheap for me may not be cheap for you. For example, 4-bit Llama-2-70B GPTQ is about 35 GB to download; if the pricing is about 5 dollars/TB for download (marked with the down arrow before the price), then it would cost about 17 cents, I think. I'm tired, so I don't know for sure.
2. Keep in mind you have to pay for storage. You can set the amount on the left (you can't change it later without destroying the instance and renting it again).
3. I would never choose a GTX unless you know what you are doing, probably not even an RTX 2070. I would say go with either an RTX 3060 or RTX A2000 at the very minimum.
4. Internet speed: make sure it is at least 200 Mbps for download, otherwise it will take a very long time to download the models.

https://preview.redd.it/tcf058vagbgb1.png?width=880&format=png&auto=webp&s=080fe75c2bd42f9f03dd3e854c068b7ec60448a5

* Go to the side where it says "Instance Configuration", then click "Edit Image and Config". Next, go to the "Recommended" tab and look for "OogaBooga LLM WebUI"; currently it says "(LLaMa 2)" after the text, which will probably change when a new model comes out. Click on it (not the link, but the blank space around it). It should highlight it as shown here. Then click "Select", and now the configuration should say the name of the image.
* Next, find the model you want. I wouldn't recommend anything less than 13B, as it just won't be very good or coherent. I will be using [https://huggingface.co/TheBloke/WizardLM-13B-V1.2-GPTQ](https://huggingface.co/TheBloke/WizardLM-13B-V1.2-GPTQ) in this example. It is censored though. I would recommend at least 12 GB of VRAM for this, so a 3060 will work, as long as you don't go too high on the context length, maybe 1 or 2 thousand tokens (maybe more, maybe less, should be at least 1k though)? You will need to use ExLlama because it uses less VRAM, which allows for more context (I will show that in a sec), but keep in mind that the model itself can only go to 4k context.
* Now, check to make sure you have enough storage. On the model page on HuggingFace, click on "files and versions" and look at the size of the model. Add about 1 or 2 gigs to that just in case, then you should be ready. TheBloke/WizardLM-13B-V1.2-GPTQ (which is 4-bit) uses about 8 gigs of storage, and as I just said, about 12 GB of VRAM.
* Rent your desired instance. Make sure you look at my considerations one more time, then go to the instances tab and wait for it to boot up.

I will be using an RTX 3060 for this, which has 12GB of VRAM.

* Once it is done booting (it can sometimes take a while; if it takes over 10 mins, maybe try a different instance), click "open", then in the new link it opened, go to the "model" tab. Go back to the model you want. Next to the name, there should be a tiny copy button. Click that, then paste it into the field that says "download custom model or lora".
* After that, in the same tab, go to the drop-down under the text "model", make sure it has the model selected, then click load (it may already be loaded). You may have to click the refresh button too. Usually, the model page (if it is by TheBloke) will say what the prompt template is, which is this for WizardLM 1.2:

A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:

* Go back to the text generation tab, paste the template, and modify the prompt to whatever you like.
* If you want to chat with it (not all models are chat models though), then go to session, change the mode to "chat", then click "apply and restart".
* To make sure your prompt format is correct in chat mode, go to "chat settings"; for me, I would put "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions." in the context, set "your name" to "USER:" and set "character's name" to "ASSISTANT:".
* You may want to set the parameters to a different preset. Finding the best one will take some trial and error, but there is a "learn more" dropdown that lists some recommended ones for chat and instruct.

That's it for this guide. I hope this helped you! If you have any questions, just ask!
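A quick check of the bandwidth-cost arithmetic from consideration 1, as a hedged sketch; the price and model size are the example numbers from the guide, not current Vast.ai rates.

```python
# Bandwidth cost estimate using the guide's example numbers.
model_size_gb = 35      # 4-bit Llama-2-70B GPTQ, roughly
price_per_tb = 5.00     # dollars per TB of download, example rate
cost = model_size_gb / 1000 * price_per_tb
print(f"~${cost:.2f} to download the model")   # ~$0.18, i.e. about 17-18 cents
```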
2023-08-05T16:14:13
https://www.reddit.com/r/LocalLLaMA/comments/15izoxk/how_to_run_local_models_on_the_cloud/
kotykd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15izoxk
false
null
t3_15izoxk
/r/LocalLLaMA/comments/15izoxk/how_to_run_local_models_on_the_cloud/
false
false
https://b.thumbs.redditm…sB-nnoAqsRIE.jpg
1
null
Preferred way to integrate into a new UI?
1
I've been working on an open source app for LLM power users, spending a lot of time on the UI and overall "quality of life" features that ChatGPT and the likes of gradio don't offer out of the box. Question for you: Right now it's OpenAI only, but I want to change that by offering integration options for other local models (or models running in the cloud somewhere). **What would be the easiest way for you to plug in to a new UI, if you wanted to do so, without having to change your backend setup?** I was thinking about integrating the oobabooga API, but instead I may just add an option for additional API endpoints and then figure out how to handle schema configurations if they differ from OpenAI's API. Would love any feedback you have, and if you want to check out the app I'd love to know what you think as well: [https://github.com/neary-ai/neary](https://github.com/neary-ai/neary). Thanks!
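Since the app already speaks the OpenAI API, one low-friction integration path is letting users point the existing client at a different base URL; several local backends (text-generation-webui's OpenAI extension, llama-cpp-python's server, and others) expose OpenAI-compatible /v1 endpoints. A hedged sketch using the openai Python client; the localhost URL and model name are placeholders, not Neary's actual configuration.

```python
from openai import OpenAI

# Point the standard client at a local OpenAI-compatible server instead of api.openai.com.
client = OpenAI(base_url="http://localhost:5000/v1", api_key="not-needed-locally")

resp = client.chat.completions.create(
    model="local-model",  # placeholder; local servers often ignore or remap this field
    messages=[{"role": "user", "content": "Hello from the new UI!"}],
)
print(resp.choices[0].message.content)
```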
2023-08-05T16:52:54
https://www.reddit.com/r/LocalLLaMA/comments/15j0mdn/preferred_way_to_integrate_into_a_new_ui/
BlindingLT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15j0mdn
false
null
t3_15j0mdn
/r/LocalLLaMA/comments/15j0mdn/preferred_way_to_integrate_into_a_new_ui/
false
false
self
1
{'enabled': False, 'images': [{'id': 'C_FfafjN8qeHhNfdkX8eP2RZrXAdBCmkDPRCdfncjig', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/o7nTEfQC3b_uAS_8bm0V7a8t2TD_GIKfJyZo4F1qdy8.jpg?width=108&crop=smart&auto=webp&s=524c9c73c66cfc736217af354749fd4c9182984a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/o7nTEfQC3b_uAS_8bm0V7a8t2TD_GIKfJyZo4F1qdy8.jpg?width=216&crop=smart&auto=webp&s=939d567fd3b0fc5169dc5816538f36676d309752', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/o7nTEfQC3b_uAS_8bm0V7a8t2TD_GIKfJyZo4F1qdy8.jpg?width=320&crop=smart&auto=webp&s=38a1523724c7c4e0bffedb2864dc1d3beb1afe2a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/o7nTEfQC3b_uAS_8bm0V7a8t2TD_GIKfJyZo4F1qdy8.jpg?width=640&crop=smart&auto=webp&s=96fcbad9506f42e4804deb645a420bbf3c4020fa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/o7nTEfQC3b_uAS_8bm0V7a8t2TD_GIKfJyZo4F1qdy8.jpg?width=960&crop=smart&auto=webp&s=3cadbd7a6455633f88e7b1f9bd6eccf1f19af58a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/o7nTEfQC3b_uAS_8bm0V7a8t2TD_GIKfJyZo4F1qdy8.jpg?width=1080&crop=smart&auto=webp&s=8f58e7d187b2cef2a012b546c8e8f3e52b31a975', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/o7nTEfQC3b_uAS_8bm0V7a8t2TD_GIKfJyZo4F1qdy8.jpg?auto=webp&s=ef376cddeeba58aca07bf1f47ff7f2fcd2097ef0', 'width': 1200}, 'variants': {}}]}
What are the variables to take into account to calculate the best model that fit in my hardware?
1
Ok, I installed oobabooga and I have to download a model to start getting into this LLM world. I started looking for a model to try on Hugging Face. Ideally I'm looking for an Argentine or Latin American Spanish model, but an English model also works for me. Among so many models and sizes, I cannot figure out which is best for the hardware I have:

Intel Core i7-7700HQ CPU 2.8GHz
Nvidia GeForce GTX 1060 6GB VRAM
32GB RAM DDR4 2400MHz
2TB M.2 NVMe

**I have two main questions:**

**1.** Is the amount of VRAM or RAM a hard limitation when running a model, or does it just make the model run and respond more slowly?

**2.** Is there a way to calculate the amount of VRAM and RAM I need to run a model, or at least certain parameters to take into account when choosing the best model for my hardware?

**Context:** I am a software engineer with 15+ years of experience in many languages and technologies. I am passionate about AI, but have no experience with LLMs yet. I have quite a bit of experience with Stable Diffusion, using LoRAs and LoRA training. I use kohya, automatic1111 and ComfyUI daily.

Links or references to information that would help me get started and learn about this topic are appreciated.

Sorry for my English, I am a native Spanish speaker.

Thanks in advance.
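On question 2, a commonly used back-of-envelope estimate is parameter count times bytes per parameter for the chosen quantization, plus some overhead for the KV cache and runtime. A hedged sketch; the 20% overhead factor is a rough rule of thumb, not an exact figure.

```python
def approx_memory_gb(n_params_b: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Very rough memory footprint: weights * quantization width * fudge factor."""
    weight_gb = n_params_b * bits_per_weight / 8   # billions of params -> GB
    return weight_gb * overhead

# A 13B model at 4-bit vs fp16, to compare against 6 GB VRAM / 32 GB RAM.
print(f"13B @ 4-bit : ~{approx_memory_gb(13, 4):.1f} GB")    # ~7.8 GB
print(f"13B @ fp16  : ~{approx_memory_gb(13, 16):.1f} GB")   # ~31 GB
```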
2023-08-05T16:53:28
https://www.reddit.com/r/LocalLLaMA/comments/15j0mvm/what_are_the_variables_to_take_into_account_to/
joaco84
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15j0mvm
false
null
t3_15j0mvm
/r/LocalLLaMA/comments/15j0mvm/what_are_the_variables_to_take_into_account_to/
false
false
self
1
null
[Request] Updated benchmark numbers for 7900xt/x ?
1
7900XTX for **llamacpp/koboldcpp (CLBlast)**, **exllama (ROCm/HIP)**, **vLLM (idk if AMD gfx is supported)**, or **whatever other inference software you use**. Please let me know of more projects I haven't heard of yet.

So, help me make an informed decision. **Are you running a 7900XTX for local llama? What's your experience and speed?** My preferred OS is Windows, but all info is appreciated. Thank you.

I feel like the 7900XTX should be faster than a 3090, only behind the 4090 (significantly) in speed. It has 24GB VRAM, and ROCm/HIP support is finally coming to Windows, even if it's limited as a first release. I nearly bought a 7900XTX (£900) on the spot the day ROCm for Windows was released, except it was just out of stock at my preferred UK retailer. In the days after, I realized ROCm support was still limited. So I thought, what the heck, I'll wait for more info on how it performs on AI. My other option is a 4090, but that's £1530 (Founders Edition). The 4090 should be roughly 40-60% faster, if neither the 7900XTX nor the 4090 is VRAM-bandwidth limited.
2023-08-05T18:57:51
https://www.reddit.com/r/LocalLLaMA/comments/15j3lzw/request_updated_benchmark_numbers_for_7900xtx/
windozeFanboi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15j3lzw
false
null
t3_15j3lzw
/r/LocalLLaMA/comments/15j3lzw/request_updated_benchmark_numbers_for_7900xtx/
false
false
self
1
null
MoEM > MoE?
6
I am very much in the beginner category of wrangling LLMs and other AI types, but I have the framework of an idea. I'd love to hear any comments or criticism, if anyone knows that this has already been tried, or which fundamental principles i could be misinterpreting/misunderstanding. Could we take the GPT4 architecture (~8 'expert' models that are specialized to a specific domain, ie coding, all working together in a MoE [mixture of experts] set up.) a step further and make a kind of 'MoEM' = Mixture of Expert Managers? In this idea, I'm imagining that similarly, there might be a 'Coding Expert Manager' which in turn manages smaller, even more specialized models (eg, Language, optimization, debugging, maybe even particular fields like data processing, web dev, etc). My reasoning is that it seems like the smaller, more specialized models are outperforming larger, general models on specific tasks. I also imagine it might help with efficiency, if you could 'freeze' the parts of the model that aren't needed for a given task, or maybe the 'Expert Managers' help to ensure that only the relevant parts of a network are activated? The more I've typed, the more this just sounds like a hybrid of the Agent/Tool-maker models and MoE, but I'd still love any and all input!
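To make the "Expert Managers" idea concrete, here is a toy sketch of two-level routing: a top-level router picks a domain manager, and that manager picks one of its own specialists. The class names, keyword routing, and experts are made up purely for illustration; real MoE routing (including GPT-4's rumoured setup) uses learned gates operating per token inside the transformer layers, not per-request dispatch like this.

```python
from typing import Callable, Dict

Expert = Callable[[str], str]

class ExpertManager:
    """A domain-level 'manager' that routes a request to one of its specialists."""
    def __init__(self, specialists: Dict[str, Expert]):
        self.specialists = specialists

    def route(self, request: str) -> str:
        # Toy routing rule: keyword match; a real system would use a learned gate.
        for keyword, expert in self.specialists.items():
            if keyword in request.lower():
                return expert(request)
        return next(iter(self.specialists.values()))(request)

# Two-level hierarchy: top router -> coding manager -> debugging / web-dev specialists.
coding_manager = ExpertManager({
    "debug": lambda r: f"[debugging specialist] {r}",
    "web": lambda r: f"[web-dev specialist] {r}",
})
managers = {"code": coding_manager}

def top_level_route(request: str) -> str:
    manager = managers["code"]  # toy: always the coding manager
    return manager.route(request)

print(top_level_route("please debug this stack trace"))
```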
2023-08-05T19:19:59
https://www.reddit.com/r/LocalLLaMA/comments/15j45a7/moem_moe/
Only-Offer4234
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15j45a7
false
null
t3_15j45a7
/r/LocalLLaMA/comments/15j45a7/moem_moe/
false
false
self
6
null
The Quest to Have Endless Conversations with Llama and ChatGPT 🗣️🦙🤖💬
1
2023-08-05T20:50:41
https://medium.com/@joaolages/the-quest-to-have-endless-conversations-with-llama-and-chatgpt-%EF%B8%8F-81360b9b34b2
JClub
medium.com
1970-01-01T00:00:00
0
{}
15j6cre
false
null
t3_15j6cre
/r/LocalLLaMA/comments/15j6cre/the_quest_to_have_endless_conversations_with/
false
false
https://b.thumbs.redditm…7uaND7Zcfeak.jpg
1
{'enabled': False, 'images': [{'id': 'Bd2tKVhk26ts2G8Ag4O6orZvv4fbvZNAF7Q03ikzADQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CTypdu0BQ5FGyzkfIBDvKm4lV45PMJGwwkvJ9hxNRw4.jpg?width=108&crop=smart&auto=webp&s=ba96a68eba2babade967b840ab3c1d89e134154e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CTypdu0BQ5FGyzkfIBDvKm4lV45PMJGwwkvJ9hxNRw4.jpg?width=216&crop=smart&auto=webp&s=6e06649464f3ca435c32f2ed9e73168e9273fb90', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CTypdu0BQ5FGyzkfIBDvKm4lV45PMJGwwkvJ9hxNRw4.jpg?width=320&crop=smart&auto=webp&s=d8d718d986fc19b9bd1bb0562e6733d0ee2ba125', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CTypdu0BQ5FGyzkfIBDvKm4lV45PMJGwwkvJ9hxNRw4.jpg?width=640&crop=smart&auto=webp&s=5a5f3c7aeeb56bbab7b0db89995dd4919050176f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CTypdu0BQ5FGyzkfIBDvKm4lV45PMJGwwkvJ9hxNRw4.jpg?width=960&crop=smart&auto=webp&s=7dc8f89416830f2e71160073e28463a450434e17', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CTypdu0BQ5FGyzkfIBDvKm4lV45PMJGwwkvJ9hxNRw4.jpg?width=1080&crop=smart&auto=webp&s=92d2d7a7907d777bff8e63ad2d24594dae67f25c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CTypdu0BQ5FGyzkfIBDvKm4lV45PMJGwwkvJ9hxNRw4.jpg?auto=webp&s=0b855eb3d262137937689963a3854557f4517b0b', 'width': 1200}, 'variants': {}}]}
Looking for RP 3B / 2.7B models
1
Does a 3B or 2.7B version of Erebus, Pygmalion, or a similar model exist that I can use in llama.cpp? I'm not able to run the 7B version of Llama. So far I've been able to run Open LLaMA 3B.
2023-08-05T21:17:14
https://www.reddit.com/r/LocalLLaMA/comments/15j707q/looking_for_rp_3b_27b_models/
Alexis212s
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15j707q
false
null
t3_15j707q
/r/LocalLLaMA/comments/15j707q/looking_for_rp_3b_27b_models/
false
false
self
1
null
Formatting Training Data for Fiction Generation?
1
If I want to fine-tune a model to generate fiction, how should I format my training data? In this case it is a half written novel.
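For pure fiction generation (as opposed to instruction following), training data is often just the manuscript split into chunks that fit the context window, stored as JSON records with a single "text" field. A hedged sketch of that formatting; the chunk size, the "text" field name, and the novel.txt filename are illustrative conventions, not requirements of any particular trainer.

```python
import json

def chunk_novel(text: str, max_chars: int = 6000):
    """Naive splitter: break the manuscript into roughly context-sized pieces."""
    for start in range(0, len(text), max_chars):
        yield text[start:start + max_chars]

with open("novel.txt", encoding="utf-8") as f:   # placeholder path to the half-written novel
    manuscript = f.read()

with open("train.jsonl", "w", encoding="utf-8") as out:
    for chunk in chunk_novel(manuscript):
        out.write(json.dumps({"text": chunk}, ensure_ascii=False) + "\n")
```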
2023-08-05T22:02:59
https://www.reddit.com/r/LocalLLaMA/comments/15j85eh/formatting_training_data_for_fiction_generation/
Mbando
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15j85eh
false
null
t3_15j85eh
/r/LocalLLaMA/comments/15j85eh/formatting_training_data_for_fiction_generation/
false
false
self
1
null
The ways to train conversational language model or chat model. What is the template to train a interactive chatbot for ORCA dataset?
1
[removed]
2023-08-05T22:53:43
https://www.reddit.com/r/LocalLLaMA/comments/15j9coe/the_ways_to_train_conversational_language_model/
dungduong16
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15j9coe
false
null
t3_15j9coe
/r/LocalLLaMA/comments/15j9coe/the_ways_to_train_conversational_language_model/
false
false
self
1
{'enabled': False, 'images': [{'id': 'PVvkKvkDFvHzNVO6GsOKvqBayJrtELtPRA80qzZg8kY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JFgdLnj6uxaVXIlggQWE0HPvIhwAawbbjH-EBSFUwDE.jpg?width=108&crop=smart&auto=webp&s=44bfb6288e553d23a79c16441268211c0db6c774', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/JFgdLnj6uxaVXIlggQWE0HPvIhwAawbbjH-EBSFUwDE.jpg?width=216&crop=smart&auto=webp&s=454edd26769c3e54dc1c43b7c37ae23d77b8bd37', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/JFgdLnj6uxaVXIlggQWE0HPvIhwAawbbjH-EBSFUwDE.jpg?width=320&crop=smart&auto=webp&s=11e406b202ac33c1ca5b520f030ddf0eb97eea5b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/JFgdLnj6uxaVXIlggQWE0HPvIhwAawbbjH-EBSFUwDE.jpg?width=640&crop=smart&auto=webp&s=5d32d81c2587df25d53350de76765df78559587a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/JFgdLnj6uxaVXIlggQWE0HPvIhwAawbbjH-EBSFUwDE.jpg?width=960&crop=smart&auto=webp&s=3ef68369061443211eb5f206bca142423dac71e3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/JFgdLnj6uxaVXIlggQWE0HPvIhwAawbbjH-EBSFUwDE.jpg?width=1080&crop=smart&auto=webp&s=7800c57192c5fbd4c083792789759d7b308e79a1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/JFgdLnj6uxaVXIlggQWE0HPvIhwAawbbjH-EBSFUwDE.jpg?auto=webp&s=f8162e7879d23a737e1413b2e63529015c826717', 'width': 1200}, 'variants': {}}]}
Advice on where to start
1
I asked this question to ChatGPT-4 and it gave an answer, but I don't know where to start, so I want to ask here and hope for advice: If I were trying to design an AI setup, first for trading stocks and options (to interpret data and develop strategies) and second as a personal AI for my 2-year-old daughter that would essentially follow her through life as her own personal tutor, what would be the best way to achieve this? Should I build a separate computer that functions as an AI server, or just rent cloud space? Where would be a good place to start learning how to do this? Any help is greatly appreciated.
2023-08-05T23:10:15
https://www.reddit.com/r/LocalLLaMA/comments/15j9qil/advice_on_where_to_start/
Seronkseronk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15j9qil
false
null
t3_15j9qil
/r/LocalLLaMA/comments/15j9qil/advice_on_where_to_start/
false
false
self
1
null
llama-2-70b-chat model running continuously
1
I can run the 7b and 13b models with no issue. However, when attempting to run the 70b versions, the model loads and then runs on the GPUs at 100% forever. No errors, just runs continuously.

`torchrun --nproc_per_node 8 example_chat_completion.py --ckpt_dir llama-2-70b-chat/ --tokenizer_path tokenizer.model --max_seq_len 512 --max_batch_size 4`

Any ideas on what could be causing this issue with 70b but not the other versions?
2023-08-05T23:22:47
https://www.reddit.com/r/LocalLLaMA/comments/15ja0mz/llama270bchat_model_running_continuously/
_daddylonglegz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ja0mz
false
null
t3_15ja0mz
/r/LocalLLaMA/comments/15ja0mz/llama270bchat_model_running_continuously/
false
false
self
1
null
Start-ups in Local LLM
1
It's now possible to truly have your own unbiased AI assistant that is no one else's but yours! Just this simple selling point should be enough for start-ups to jump on this, but the only startup I've seen that is serving local integration is Tiny Corp by George Hotz. (Check him out!) Why do you guys follow this community? What about locally run LLMs motivates you to sit through the command lines and install headaches?
2023-08-05T23:25:35
https://www.reddit.com/r/LocalLLaMA/comments/15ja2uq/startups_in_local_llm/
PigWedgion
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ja2uq
false
null
t3_15ja2uq
/r/LocalLLaMA/comments/15ja2uq/startups_in_local_llm/
false
false
self
1
null
A beginner who is looking for other ways to use llms
1
I've a simple chat UI (CLI) for the command line. Requirements: modern CPU with 6+ cores, 32GB RAM, OpenBLAS + koboldcpp. It can communicate with a local koboldcpp or a remote runpod instance. It can also:

* import a tavernai (webp) character
* import tavernai's room / chatlog
* connect to multiple koboldcpp instances through different ports
* listen to my voice, and reply in SAPI5 voices
* chat rooms are simple text files, <char name>-<date>.txt

I use it for roleplaying and speaking practice (with a chatbot in a foreign language). An online speech recognition API, similar to the Whisper API, transcribes my voice to text, which then appears in the chat. I'm talking to the same character, but (2-3) different language models (13B Vicuna, 7B Janeway, and another 7B) are replying to the same sentence. I can choose one of them, then write my response (or dictate, if I want to). It costs me 0 Euro (besides electricity) to run this uncensored chat prototype.

This is how the configuration file looks:

KOBOLD=http://192.168.0.101:5001 #PORT OF PRIMARY MODEL
# SUMMARIZER=http://127.0.0.1:5002
PORT=5002  #PORT OF OTHER MODELS
NCHAR=true #MULTIPLE MODEL MODE
CHARS=C:/Tavern/public/characters/
CHAR=C:/Tavern/public/characters/Akari.webp
MASTER=Honey
ASM_KEY=
ASM_LANGUAGE=fr
TTS=Microsoft Caroline

I would like to find a good model (GGML) that can help summarize the chat log (in English); I could use word embeddings to find the relevant chat lines and put the summary behind the truncated conversation every time I generate a new response.
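For the last part (picking relevant chat lines to feed a summarizer), here is a small sketch of embedding-based retrieval with cosine similarity. sentence-transformers and the all-MiniLM-L6-v2 model are one possible choice, not something the post prescribes, and the chat lines are invented for illustration.

```python
from sentence_transformers import SentenceTransformer, util

chat_lines = [
    "Honey: how do you say 'good evening' in French?",
    "Akari: you say 'bonsoir'!",
    "Honey: let's plan tomorrow's picnic.",
]
query = "What French phrases did we practice?"

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, CPU-friendly embedding model (assumption)
line_emb = model.encode(chat_lines, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

# Rank chat lines by cosine similarity and keep the best ones for the summarizer prompt.
scores = util.cos_sim(query_emb, line_emb)[0]
top = sorted(zip(chat_lines, scores.tolist()), key=lambda x: -x[1])[:2]
for line, score in top:
    print(f"{score:.2f}  {line}")
```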
2023-08-06T00:03:01
https://www.reddit.com/r/LocalLLaMA/comments/15jawyp/a_beginner_who_is_looking_for_other_ways_to_use/
justynasty
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jawyp
false
null
t3_15jawyp
/r/LocalLLaMA/comments/15jawyp/a_beginner_who_is_looking_for_other_ways_to_use/
false
false
self
1
null
Evaluation loss increases when training Guanaco
1
I am probably not the only one who noticed that evaluation loss increases after a few epochs when using QLoRA to finetune Llamas on conversational data. Wondering how long one could ignore the rise in loss and continue training, I did a benchmark. The latest MT Bench (GPT-4 based evaluation, claimed 80+% agreement with human eval.) was used to evaluate each checkpoint and it seems that model quality improves despite the increase in evaluation loss and eventually flattens out (at least under these specific circumstances). [QLoRA, standard hyperparameters, Llama2-7B, OASST1 dataset](https://preview.redd.it/8jkbp9ixrdgb1.png?width=1500&format=png&auto=webp&s=9112fe1e21dd37a6cd86282e5624a049fd57ea45) The entire story can be found here: [https://medium.com/@geronimo7/reproducing-guanaco-141a6a85a3f7](https://medium.com/@geronimo7/reproducing-guanaco-141a6a85a3f7)
2023-08-06T00:05:26
https://www.reddit.com/r/LocalLLaMA/comments/15jaywk/evaluation_loss_increases_when_training_guanaco/
HatEducational9965
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jaywk
false
null
t3_15jaywk
/r/LocalLLaMA/comments/15jaywk/evaluation_loss_increases_when_training_guanaco/
false
false
https://b.thumbs.redditm…hWwrFO46IvwI.jpg
1
{'enabled': False, 'images': [{'id': 'JKUJLtSlwkv4ZIGxAPgPzZY6OuVnwhiUoVaEgXS-Ad8', 'resolutions': [{'height': 42, 'url': 'https://external-preview.redd.it/QKbaMpHHGHaWscQLKt86PVj3JQ10G_4KOq5TmWw4hGY.jpg?width=108&crop=smart&auto=webp&s=7cac0162ac531306de0042f6db0851640fadcdbc', 'width': 108}, {'height': 84, 'url': 'https://external-preview.redd.it/QKbaMpHHGHaWscQLKt86PVj3JQ10G_4KOq5TmWw4hGY.jpg?width=216&crop=smart&auto=webp&s=736987990f0ccd2ffdd807a22fcbbb2fc6be32b9', 'width': 216}, {'height': 124, 'url': 'https://external-preview.redd.it/QKbaMpHHGHaWscQLKt86PVj3JQ10G_4KOq5TmWw4hGY.jpg?width=320&crop=smart&auto=webp&s=b678b0c4e6577e545b8a713acfdf742819c2de22', 'width': 320}, {'height': 249, 'url': 'https://external-preview.redd.it/QKbaMpHHGHaWscQLKt86PVj3JQ10G_4KOq5TmWw4hGY.jpg?width=640&crop=smart&auto=webp&s=503ae8c7df80057208b6f651fae7b6d73339191a', 'width': 640}, {'height': 373, 'url': 'https://external-preview.redd.it/QKbaMpHHGHaWscQLKt86PVj3JQ10G_4KOq5TmWw4hGY.jpg?width=960&crop=smart&auto=webp&s=62453d8a8b2c6e21a854fd35f2e1cf750cf843de', 'width': 960}, {'height': 420, 'url': 'https://external-preview.redd.it/QKbaMpHHGHaWscQLKt86PVj3JQ10G_4KOq5TmWw4hGY.jpg?width=1080&crop=smart&auto=webp&s=3cbbecb1b5b01d5285e723770fe25df0bfc5b9a4', 'width': 1080}], 'source': {'height': 467, 'url': 'https://external-preview.redd.it/QKbaMpHHGHaWscQLKt86PVj3JQ10G_4KOq5TmWw4hGY.jpg?auto=webp&s=ebb92c80ee3702525cdb8b2abc4fe374394a4330', 'width': 1200}, 'variants': {}}]}
Problem Updating Oobabooga Installed on External Drive
1
When I try to update Oobabooga installed on the G drive, a non-system drive, I get this kind of error message: "G:/oobabooga_windows_08062023/oobabooga_windows/installer_files/pip-req-build-igr9ong4' is on a file system that does not record ownership. To add an exception for this directory, call: git config --global --add safe.directory G:/oobabooga_windows_08062023/oobabooga_windows/installer_files/pip-req-build-igr9ong4" But adding an exception for the above directory does not seem to solve the issue, since I would have to add tens or hundreds of exceptions before I could successfully run update_windows.bat. Is there any way to avoid adding exceptions one by one? Thanks.
2023-08-06T00:15:37
https://www.reddit.com/r/LocalLLaMA/comments/15jb6vt/problem_updating_oobabooga_installed_on_external/
statingwindow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jb6vt
false
null
t3_15jb6vt
/r/LocalLLaMA/comments/15jb6vt/problem_updating_oobabooga_installed_on_external/
false
false
self
1
null
StableLM-Alpha-V2 3B and 7B released, trained on 1.1T tokens.
1
2023-08-06T00:24:43
https://github.com/Stability-AI/StableLM
pokeuser61
github.com
1970-01-01T00:00:00
0
{}
15jbe3r
false
null
t3_15jbe3r
/r/LocalLLaMA/comments/15jbe3r/stablelmalphav2_3b_and_7b_released_trained_on_11t/
false
false
https://b.thumbs.redditm…Mpitg6FZ-qLI.jpg
1
{'enabled': False, 'images': [{'id': 'EqeGhDk7wYAIyVZVza87x-J5MbNcQ9Ibv6IpfHctP14', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XR1XZ7AcT-Z0aamqxw1VStLiqu-Qj0tRSbqNd-p4JRY.jpg?width=108&crop=smart&auto=webp&s=094aae81bc6b32dd9eb4be5434a835bf6632b9e0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XR1XZ7AcT-Z0aamqxw1VStLiqu-Qj0tRSbqNd-p4JRY.jpg?width=216&crop=smart&auto=webp&s=161706dad7b2be375edc7cbd2e7ddf53d52528aa', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XR1XZ7AcT-Z0aamqxw1VStLiqu-Qj0tRSbqNd-p4JRY.jpg?width=320&crop=smart&auto=webp&s=18868529b41fc19bf88d48d5ee1e0b83077369b9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XR1XZ7AcT-Z0aamqxw1VStLiqu-Qj0tRSbqNd-p4JRY.jpg?width=640&crop=smart&auto=webp&s=f352c07a4105fdb9318bb97deeb1aa433b49f044', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XR1XZ7AcT-Z0aamqxw1VStLiqu-Qj0tRSbqNd-p4JRY.jpg?width=960&crop=smart&auto=webp&s=3652efe677408b64cc1872f1d254688efba3a0dd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XR1XZ7AcT-Z0aamqxw1VStLiqu-Qj0tRSbqNd-p4JRY.jpg?width=1080&crop=smart&auto=webp&s=55503245601db5a433b3d80e2bf409a86d77e6af', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XR1XZ7AcT-Z0aamqxw1VStLiqu-Qj0tRSbqNd-p4JRY.jpg?auto=webp&s=b3e4ad0f63417694f43dd59776d2930dc7352fca', 'width': 1200}, 'variants': {}}]}
Llama.cpp with 13B is hallucinating in my domain?
1
I have been experimenting with Llama2 locally and I started asking it things that I can actually judge as correct or not. This is not correct output: the last 2 prompts below did not produce valid FHIR and were misinformed about what a FHIR Patient is (it is only demographics). I tried to tweak the prompt and it was still wrong. In fact, it's not even close. Is this because the model has not been fine-tuned on FHIR? Would I get better results with a larger model?

`./main -ins -t 6 -ngl 10 --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -s 42 -m llama-2-13b-chat.ggmlv3.q4_0.bin -p "Act as a helpful Health IT consultant" -n -1`

`main: build = 938 (c574bdd)`
`main: seed = 42`
`ggml_init_cublas: found 1 CUDA devices:`
`Device 0: NVIDIA GeForce RTX 3060 Ti, compute capability 8.6`
`llama.cpp: loading model from llama-2-13b-chat.ggmlv3.q4_0.bin`
`llama_model_load_internal: format = ggjt v3 (latest)`
`llama_model_load_internal: n_vocab = 32000`
`llama_model_load_internal: n_ctx = 2048`
`llama_model_load_internal: n_embd = 5120`
`llama_model_load_internal: n_mult = 256`
`llama_model_load_internal: n_head = 40`
`llama_model_load_internal: n_head_kv = 40`
`llama_model_load_internal: n_layer = 40`
`llama_model_load_internal: n_rot = 128`
`llama_model_load_internal: n_gqa = 1`
`llama_model_load_internal: rnorm_eps = 5.0e-06`
`llama_model_load_internal: n_ff = 13824`
`llama_model_load_internal: freq_base = 10000.0`
`llama_model_load_internal: freq_scale = 1`
`llama_model_load_internal: ftype = 2 (mostly Q4_0)`
`llama_model_load_internal: model size = 13B`
`llama_model_load_internal: ggml ctx size = 0.11 MB`
`llama_model_load_internal: using CUDA for GPU acceleration`
`llama_model_load_internal: mem required = 5775.77 MB (+ 1600.00 MB per state)`
`llama_model_load_internal: allocating batch_size x (640 kB + n_ctx x 160 B) = 480 MB VRAM for the scratch buffer`
`llama_model_load_internal: offloading 10 repeating layers to GPU`
`llama_model_load_internal: offloaded 10/43 layers to GPU`
`llama_model_load_internal: total VRAM used: 2182 MB`
`llama_new_context_with_model: kv self size = 1600.00 MB`

`system_info: n_threads = 6 / 12 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 |`
`main: interactive mode on.`
`Reverse prompt: '### Instruction:`
`'`
`sampling: repeat_last_n = 64, repeat_penalty = 1.100000, presence_penalty = 0.000000, frequency_penalty = 0.000000, top_k = 40, tfs_z = 1.000000, top_p = 0.950000, typical_p = 1.000000, temp = 0.700000, mirostat = 0, mirostat_lr = 0.100000, mirostat_ent = 5.000000`
`generate: n_ctx = 2048, n_batch = 512, n_predict = -1, n_keep = 9`

`== Running in interactive mode. ==`
`- Press Ctrl+C to interject at any time.`
`- Press Return to return control to LLaMa.`
`- To return control without starting a new line, end your input with '/'.`
`- If you want to submit another line, end your input with '\'.`

Act as a helpful Health IT consultant

> What is FHIR?

FHIR (Fast Healthcare Interoperability Resources) is a standard for exchanging healthcare information electronically. It defines a set of resources and APIs that can be used to exchange healthcare data between different systems, such as electronic health records (EHRs), laboratory systems, and medication management systems.

FHIR is designed to be flexible and extensible, so it can be used in a wide range of healthcare settings, from small clinics to large hospitals and beyond.

One of the key benefits of FHIR is that it enables seamless interoperability between different healthcare systems, allowing healthcare providers to access and share patient data more easily and efficiently. This can improve the quality and safety of care, as well as reduce costs and administrative burdens. Additionally, FHIR is designed to be easy to implement and use, so it can be adopted by healthcare organizations of all sizes and technical capabilities.

Overall, FHIR is an important standard for healthcare interoperability, and it has the potential to transform the way healthcare is delivered and managed.

> Create a FHIR Patient Resource

To create a FHIR Patient Resource, we can use the following steps:

1. Define the patient's identity: This includes the patient's name, address, date of birth, and other identifying information.
2. Specify the patient's contact information: This includes the patient's phone number, email address, and other contact details.
3. Define the patient's medical history: This includes any relevant medical conditions, allergies, medications, and other health-related information.
4. Specify the patient's current status: This includes the patient's current location, their clinical status (e.g., stable, unstable), and any other relevant information.
5. Include any additional relevant information: This might include the patient's family history, social history, or other information that is important for their care.

Here is an example of what a FHIR Patient Resource might look like:

```
{
  "name": "John Doe",
  "address": {
    "street": "123 Main St",
    "city": "Anytown",
    "state": "CA",
    "postalCode": "12345"
  },
  "dateOfBirth": "1980-01-01",
  "contact": {
    "phone": "555-555-1234",
    "email": "[email protected]"
  },
  "medicalHistory": [
    {"code": "Diabetes", "status": "stable"},
    {"code": "Hypertension", "status": "stable"}
  ],
  "currentStatus": {
    "code": "Outpatient",
    "status": "stable"
  },
  "additionalInfo": [
    {"code": "FamilyHistoryOfHeartDisease", "status": "important"}
  ]
}
```

Note that this is just one example of what a FHIR Patient Resource might look like, and the actual structure and content of the resource will depend on the specific needs and requirements of the healthcare organization.

> Create the json for a FHIR r4 Patient resource

Here is an example of what the JSON for a FHIR r4 Patient resource might look like:

```
{
  "resourceType": "Patient",
  "id": "123",
  "name": [
    {"use": "official", "family": "Doe", "given": ["John"]}
  ],
  "contact": [
    {"use": "home", "phone": "555-555-1234", "email": "[email protected]"}
  ],
  "dateOfBirth": "1980-01-01",
  "gender": {
    "code": "M"
  },
  "address": [
    {
      "use": "home",
      "street": "123 Main St",
      "city": "Anytown",
      "state": "CA",
      "postalCode": "12345"
    }
  ],
  "medicalHistory": [
    {"code": "Diabetes", "status": "stable"},
    {"code": "Hypertension", "status": "stable"}
  ],
  "currentStatus": {
    "code": "Outpatient",
    "status": "stable"
  },
  "additionalInfo": [
    {"code": "FamilyHistoryOfHeartDisease", "status": "important"}
  ]
}
```

Note that this is just one example of what the JSON for a FHIR r4 Patient resource might look like, and the actual structure and content of the resource will depend on the specific needs and requirements of the healthcare organization.
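For comparison, and to make the hallucination concrete: in the published FHIR R4 specification a Patient resource is demographics only, and the standard field names differ from what the model produced (birthDate rather than dateOfBirth, gender as a plain code, name as a list of HumanName objects, telecom for phone/email, and no medicalHistory element at all, since conditions live in separate Condition resources). Below is a hedged sketch of a minimal Patient written as a Python dict; it reflects my reading of the spec and should be checked against a FHIR validator before being relied on.

```python
import json

# Minimal FHIR R4 Patient (demographics only); compare with the model's invented fields above.
patient = {
    "resourceType": "Patient",
    "id": "example",
    "name": [{"use": "official", "family": "Doe", "given": ["John"]}],
    "telecom": [{"system": "phone", "value": "555-555-1234", "use": "home"}],
    "gender": "male",            # a plain code string, not a nested {"code": ...} object
    "birthDate": "1980-01-01",   # the spec's name for what the model called dateOfBirth
    "address": [{"line": ["123 Main St"], "city": "Anytown", "state": "CA", "postalCode": "12345"}],
}
print(json.dumps(patient, indent=2))
```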
2023-08-06T00:35:27
https://www.reddit.com/r/LocalLLaMA/comments/15jbmil/llamacpp_with_13b_is_hallucinating_in_my_domain/
fhirflyer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jbmil
false
null
t3_15jbmil
/r/LocalLLaMA/comments/15jbmil/llamacpp_with_13b_is_hallucinating_in_my_domain/
false
false
self
1
null
Where would one host Llama (2)? (Commercial)
1
Hi guys. I am planning on beginning to train a version of Llama 2 for my needs. I have a local machine with a 4th-gen i7 and a Radeon 7600 8 GB. I have no budget, and I need to do this. I have absolutely not the slightest clue what I am doing, but here's to everything that comes after today. My question is: once I am done (or close to done) and I want to host this for my clients and integrate it into my SaaS, where would one even go for something like this? Vast.ai?
2023-08-06T03:05:15
https://www.reddit.com/r/LocalLLaMA/comments/15jen3o/where_would_one_host_llama_2_commercial/
shakespear94
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jen3o
false
null
t3_15jen3o
/r/LocalLLaMA/comments/15jen3o/where_would_one_host_llama_2_commercial/
false
false
self
1
null
Tutorial to install CUDA for RTX 3050?
1
I cannot run Oobabooga and I always encounter errors like "torch is not installed with CUDA" or something. Does anyone know of a guide on how to properly set up an RTX 3050 for text generation? Thank you in advance.
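A quick way to check whether the installed PyTorch build can see the GPU at all, which is what that error is complaining about; if this prints False, the usual fix is reinstalling torch from a CUDA-enabled wheel rather than anything RTX 3050-specific. Just a diagnostic sketch.

```python
import torch

# False here means torch was installed as a CPU-only build or the NVIDIA driver isn't visible.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    print("Torch CUDA version:", torch.version.cuda)
```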
2023-08-06T05:05:12
https://www.reddit.com/r/LocalLLaMA/comments/15jgvuc/tutorial_to_install_cuda_for_rtx_3050/
leo-the-great
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jgvuc
false
null
t3_15jgvuc
/r/LocalLLaMA/comments/15jgvuc/tutorial_to_install_cuda_for_rtx_3050/
false
false
self
1
null
Fine tuning speeds vs 3090
1
Has anyone measured how much faster some other cards are at LoRA fine-tuning (e.g. 13B LLaMA) compared to a 3090?

- 4090
- A6000
- A6000 Ada
- A100 40GB

I have 3090s for 4-bit LoRA fine-tuning and am starting to be interested in faster hardware.
2023-08-06T06:04:27
https://www.reddit.com/r/LocalLLaMA/comments/15jhx6k/fine_tuning_speeds_vs_3090/
xynyxyn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jhx6k
false
null
t3_15jhx6k
/r/LocalLLaMA/comments/15jhx6k/fine_tuning_speeds_vs_3090/
false
false
self
1
null
M2 Max for llama 2 13b inference server?
1
Hello, I am looking at an M2 Max (38 GPU cores) Mac Studio with 64 GB of RAM to run inference on Llama 2 13B. Would this be a good option for tokens per second, or would there be something better? Also, is llama.cpp the best software to run on the Mac with its Metal support? Thanks!
2023-08-06T06:43:55
https://www.reddit.com/r/LocalLLaMA/comments/15jil27/m2_max_for_llama_2_13b_inference_server/
techpro864
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jil27
false
null
t3_15jil27
/r/LocalLLaMA/comments/15jil27/m2_max_for_llama_2_13b_inference_server/
false
false
self
1
null
What is LoRA actually?
1
[removed]
2023-08-06T06:45:44
https://www.reddit.com/r/LocalLLaMA/comments/15jim6a/what_is_lora_actually/
Nikunja___
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jim6a
false
null
t3_15jim6a
/r/LocalLLaMA/comments/15jim6a/what_is_lora_actually/
false
false
self
1
null
LLM Comparison
1
[removed]
2023-08-06T06:47:07
https://www.reddit.com/r/LocalLLaMA/comments/15jin0f/llm_comparison/
Nikunja___
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jin0f
false
null
t3_15jin0f
/r/LocalLLaMA/comments/15jin0f/llm_comparison/
false
false
self
1
null
What are your suggestions on hosting LLama2 for an enterprise? [commercial]
1
I am planning to implement a chatbot that will give answers based on a company's documents. The main focus is that I am going to deploy the whole processing chain offline, i.e. training or finetuning, inference, data processing, etc. Due to data privacy, the company wants nothing online at all. For the trial version, I am going to do finetuning on RTX 4090s. If performance is good, the company is ready to invest in servers. What would your suggestions be for this use case? BTW, the company already has a large number of CPU and RAM resources. If Llama performance is good, they can invest in new Nvidia servers.
2023-08-06T08:33:14
https://www.reddit.com/r/LocalLLaMA/comments/15jkf2h/what_are_your_suggestions_on_hosting_llama2_for/
awaiss113
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jkf2h
false
null
t3_15jkf2h
/r/LocalLLaMA/comments/15jkf2h/what_are_your_suggestions_on_hosting_llama2_for/
false
false
self
1
null
Tesla P40 users - High context is achievable with GGML models + llama_HF loader
1
Just wanted to share that I've finally gotten reliable, repeatable "higher context" conversations to work with the P40. In the past I've been using GPTQ (Exllama) on my main system with the 3090, but this won't work with the P40 due to its lack of FP16 instruction acceleration. So, using GGML models and the llama_hf loader, I have been able to achieve higher context. Inference is relatively slow going, down from around 12-14 t/s to 2-4 t/s with nearly 6k context. It's usable though.

The key parameters that must be set per model are n_gpu_layers, n_ctx (context length) and compress_pos_emb. n_gpu_layers should be 43 or higher to load all of, for example, Chronos Hermes into VRAM. I use q5_1 quantisations. For SuperHOT models, going 8k is not recommended as they really only go up to 6k before borking themselves. So n_ctx should be set to 6144 and compress_pos_emb to 3. For all fully 8k models, n_ctx should be 8192 and compress_pos_emb should be 4.

Tested with the classic [https://huggingface.co/TheBloke/Chronos-Hermes-13B-SuperHOT-8K-GGML](https://huggingface.co/TheBloke/Chronos-Hermes-13B-SuperHOT-8K-GGML) and the more recent [https://huggingface.co/TheBloke/Hermes-LLongMA-2-13B-8K-GGML](https://huggingface.co/TheBloke/Hermes-LLongMA-2-13B-8K-GGML). The latter does not work reliably for RP but does give generally more verbose responses.

Hope this helps folks. The P40 is still rather slow but I'm very happy to have achieved a reliable way to load models into it fully and with more than 2k context, at last.
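For anyone scripting this outside the web UI, similar knobs exist in llama-cpp-python, which is what the GGML loaders wrap. A hedged sketch using the SuperHOT 6k settings from the post; the model path and prompt are placeholders, the parameter names are llama-cpp-python's rather than the web UI's, and mapping compress_pos_emb = 3 to rope_freq_scale = 1/3 is my assumption about how the two settings correspond.

```python
from llama_cpp import Llama

# SuperHOT-8K model capped at 6k context, following the recommendation in the post.
llm = Llama(
    model_path="chronos-hermes-13b-superhot-8k.ggmlv3.q5_1.bin",  # placeholder path
    n_gpu_layers=43,        # enough to put all 13B layers on the P40
    n_ctx=6144,             # 6k context
    rope_freq_scale=1 / 3,  # assumed equivalent of compress_pos_emb = 3 in the web UI
)
out = llm("### Instruction:\nSay hello.\n### Response:\n", max_tokens=64)
print(out["choices"][0]["text"])
```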
2023-08-06T10:14:50
https://www.reddit.com/r/LocalLLaMA/comments/15jm3br/tesla_p40_users_high_context_is_achievable_with/
CasimirsBlake
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jm3br
false
null
t3_15jm3br
/r/LocalLLaMA/comments/15jm3br/tesla_p40_users_high_context_is_achievable_with/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Mr81S77mGxXAcYypKfXGEyi5SzvBJfQA8YhHnBIsZHA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3Fzl3Wt2ECj2QAWbDyX6YnV5IB7WvdWLL0h5YfMyiRA.jpg?width=108&crop=smart&auto=webp&s=02b4a8e9cca1d225785a17ddef10aa33f0272367', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3Fzl3Wt2ECj2QAWbDyX6YnV5IB7WvdWLL0h5YfMyiRA.jpg?width=216&crop=smart&auto=webp&s=ad9ca078d93369a8eb334150f8454a733c337597', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3Fzl3Wt2ECj2QAWbDyX6YnV5IB7WvdWLL0h5YfMyiRA.jpg?width=320&crop=smart&auto=webp&s=0805d2613d69fd35b208db98c467738377552fed', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3Fzl3Wt2ECj2QAWbDyX6YnV5IB7WvdWLL0h5YfMyiRA.jpg?width=640&crop=smart&auto=webp&s=f5b17358e33c6e2d35d096d61c2482a3ac72eb68', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3Fzl3Wt2ECj2QAWbDyX6YnV5IB7WvdWLL0h5YfMyiRA.jpg?width=960&crop=smart&auto=webp&s=1140023dbadd8977455d00b18918d81e9d4c3ebc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3Fzl3Wt2ECj2QAWbDyX6YnV5IB7WvdWLL0h5YfMyiRA.jpg?width=1080&crop=smart&auto=webp&s=026a61fcab1e6bf412938421aae4973c928623e8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3Fzl3Wt2ECj2QAWbDyX6YnV5IB7WvdWLL0h5YfMyiRA.jpg?auto=webp&s=1983ce61b6a01349602a26abf06690a553ca061a', 'width': 1200}, 'variants': {}}]}
Maybe don’t name your company the most commonly used token representing “I don’t know”
1
Token optimization is the new SEO.
2023-08-06T12:12:07
https://www.reddit.com/gallery/15jo451
mfish001188
reddit.com
1970-01-01T00:00:00
0
{}
15jo451
false
null
t3_15jo451
/r/LocalLLaMA/comments/15jo451/maybe_dont_name_your_company_the_most_commonly/
false
false
https://b.thumbs.redditm…Uv38JDNXmU8s.jpg
1
null
Maybe don’t rename your company to the token most associated with “I don’t know”
1
Token optimization is the new SEO
2023-08-06T12:19:05
https://www.reddit.com/gallery/15jo8yl
mfish001188
reddit.com
1970-01-01T00:00:00
0
{}
15jo8yl
false
null
t3_15jo8yl
/r/LocalLLaMA/comments/15jo8yl/maybe_dont_rename_your_company_to_the_token_most/
false
false
https://b.thumbs.redditm…y6pvtxy-FbMI.jpg
1
null
Best way to send pictures to llms?
1
I'm currently using oobabooga's BLIP captioner through extensions, and I want something that will describe pictures a little better. Does anyone have any tips or useful advice for me?
2023-08-06T12:36:07
https://www.reddit.com/r/LocalLLaMA/comments/15jol3b/best_way_to_send_pictures_to_llms/
ElectricalAngle1611
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jol3b
false
null
t3_15jol3b
/r/LocalLLaMA/comments/15jol3b/best_way_to_send_pictures_to_llms/
false
false
self
1
null
LLAMA 2 thinks speaking Georgian is inappropriate and racist
1
2023-08-06T15:20:48
https://i.redd.it/b0vg31opbigb1.png
SilentFudge
i.redd.it
1970-01-01T00:00:00
0
{}
15js721
false
null
t3_15js721
/r/LocalLLaMA/comments/15js721/llama_2_thinks_speaking_georgian_is_inappropriate/
false
false
https://a.thumbs.redditm…SxjlTZiNBpN4.jpg
1
{'enabled': True, 'images': [{'id': 'ROf1n8PoNFyFx2lTKS57jNywNgi3ge2sYwlWeoZFIvc', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/b0vg31opbigb1.png?width=108&crop=smart&auto=webp&s=0fb039801d2784fd595824e1b805facbc3e7cdb4', 'width': 108}, {'height': 111, 'url': 'https://preview.redd.it/b0vg31opbigb1.png?width=216&crop=smart&auto=webp&s=93e8f9f0f3a881833762cb35cb42caa3300e3e35', 'width': 216}, {'height': 165, 'url': 'https://preview.redd.it/b0vg31opbigb1.png?width=320&crop=smart&auto=webp&s=81e5ea47d0229968ebf9322dd8e936c1f36e14e2', 'width': 320}, {'height': 331, 'url': 'https://preview.redd.it/b0vg31opbigb1.png?width=640&crop=smart&auto=webp&s=5760d4f27c1e911cfd8a23fcdb2578bfcc2628b2', 'width': 640}, {'height': 497, 'url': 'https://preview.redd.it/b0vg31opbigb1.png?width=960&crop=smart&auto=webp&s=f5534f3821c5fc481b1beac5444d735df1b75ac3', 'width': 960}], 'source': {'height': 535, 'url': 'https://preview.redd.it/b0vg31opbigb1.png?auto=webp&s=c535c6634bb1fb6966fd2ca4281bcdd9f10b3460', 'width': 1033}, 'variants': {}}]}
Android app to interact with text-generation-webui and stable-diffusion-webui [github]
1
2023-08-06T16:19:31
https://v.redd.it/jaox49r3migb1
frapastique
v.redd.it
1970-01-01T00:00:00
0
{}
15jtlj5
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/jaox49r3migb1/DASHPlaylist.mpd?a=1693930787%2CM2IwOTYxNDAyMDg0MzRiNDgxNmQ1ZTJjMzlkN2ZjYjlkMDA5ODgwMjFhYWM0NWIyODQ1NDExYzgzNTQyOTlmNQ%3D%3D&v=1&f=sd', 'duration': 60, 'fallback_url': 'https://v.redd.it/jaox49r3migb1/DASH_1080.mp4?source=fallback', 'height': 1920, 'hls_url': 'https://v.redd.it/jaox49r3migb1/HLSPlaylist.m3u8?a=1693930787%2COTMyMDk3NzBmNTY5ZmM4MzMyZmU2Yzc5YTgwMTZmZDJkMDJkZmI1MGFkM2MxNDIyY2QwNzg0MDAzNGUzNzUyMg%3D%3D&v=1&f=sd', 'is_gif': True, 'scrubber_media_url': 'https://v.redd.it/jaox49r3migb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 862}}
t3_15jtlj5
/r/LocalLLaMA/comments/15jtlj5/android_app_to_interact_with_textgenerationwebui/
false
false
https://b.thumbs.redditm…VWyKesBe6OVc.jpg
1
{'enabled': False, 'images': [{'id': 'vFlw91nUV0iFlUGS0GXKxNLfeZwUBfWDyD-8lzpeI-g', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/fADpmg2CMAuyKQsNcI_ZNhwpdVOkSWubFv52lX5swBE.png?width=108&crop=smart&format=pjpg&auto=webp&s=c8882ee3d1855a552cf18cdbf47e8788cf8278fa', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/fADpmg2CMAuyKQsNcI_ZNhwpdVOkSWubFv52lX5swBE.png?width=216&crop=smart&format=pjpg&auto=webp&s=d770129e79c93b85e46562feb0a8abb9056123f2', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/fADpmg2CMAuyKQsNcI_ZNhwpdVOkSWubFv52lX5swBE.png?width=320&crop=smart&format=pjpg&auto=webp&s=0fb35e55cdb623afe299d97c35db74ab29bab157', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/fADpmg2CMAuyKQsNcI_ZNhwpdVOkSWubFv52lX5swBE.png?width=640&crop=smart&format=pjpg&auto=webp&s=89b1efb0c18ac2f02dadffb81ae13e0dec71a6ec', 'width': 640}, {'height': 1920, 'url': 'https://external-preview.redd.it/fADpmg2CMAuyKQsNcI_ZNhwpdVOkSWubFv52lX5swBE.png?width=960&crop=smart&format=pjpg&auto=webp&s=c2c4c59e574d3bc55108a445ea07fbbe56ff6e0f', 'width': 960}, {'height': 2160, 'url': 'https://external-preview.redd.it/fADpmg2CMAuyKQsNcI_ZNhwpdVOkSWubFv52lX5swBE.png?width=1080&crop=smart&format=pjpg&auto=webp&s=76297da45430355ee7293ee087a2a42c2387fec9', 'width': 1080}], 'source': {'height': 2408, 'url': 'https://external-preview.redd.it/fADpmg2CMAuyKQsNcI_ZNhwpdVOkSWubFv52lX5swBE.png?format=pjpg&auto=webp&s=160423fa4adedde8d735c95a5a5e0bbca0cc1848', 'width': 1080}, 'variants': {}}]}
Llama2 as a SPAM filter
1
Hi, I am writing a plugin to enhance the famous SpamAssassin spam filter with LLMs. There are many cases where spam is not detected by SpamAssassin because it passes all rules and even their machine learning classifiers. However, any human being looking at the text will immediately know it is spam. I did some quick tests using Llama2 7B and it was able to detect many spam messages that SpamAssassin classified as legitimate. The Llama2 model as it is, without any fine-tuning, can even provide an explanation of why the message is junk mail. However, since I am new to LLMs, I had some questions for you: * Is Llama2 the appropriate model for this use case, or would you recommend Vicuna, Alpaca or something else? * Would Llama2 (or another model) perform better if it is fine-tuned with some of the spam datasets available on HuggingFace? If so, which fine-tuning method would you recommend, and how would the training set have to be labelled? Thanks!
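For anyone curious what the prompt-only approach can look like, here is a minimal sketch assuming llama-cpp-python and a local Llama-2 chat GGML file; the path is a placeholder and the prompt uses the stock Llama-2 [INST] format:

```python
from llama_cpp import Llama

llm = Llama(model_path="llama-2-7b-chat.ggmlv3.q4_0.bin")  # placeholder path

def classify(email_text: str) -> str:
    # Ask for a one-word verdict so the output is trivial to parse.
    prompt = (
        "[INST] You are a spam filter. Reply with exactly one word, SPAM or HAM, "
        "for the following email:\n\n" + email_text + " [/INST]"
    )
    out = llm(prompt, max_tokens=4, temperature=0)
    return out["choices"][0]["text"].strip()

print(classify("You have won a $1,000,000 prize! Click here to claim it now."))
```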
2023-08-06T17:52:13
https://www.reddit.com/r/LocalLLaMA/comments/15jvubj/llama2_as_a_spam_filter/
InvertedYieldCurve
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jvubj
false
null
t3_15jvubj
/r/LocalLLaMA/comments/15jvubj/llama2_as_a_spam_filter/
false
false
self
1
null
Has anybody managed to find a way of describing maps, floorplans, game grids, etc. to a LLM ?
1
I've been experimenting with simplified Binary Space Partitioning, or grids with "chess" coordinates, but the results are meh. If anybody has figured out a good, reliable format for describing the physical layout of both indoor and outdoor situations, I would be grateful if you could share it.
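One low-tech option (a sketch, not a proven format) is to render the layout as a fixed-width grid with chess-style column letters, numbered rows and a legend, so every cell has an unambiguous coordinate the model can refer back to:

```python
def grid_to_text(grid):
    # Chess-style columns (A, B, C, ...) and numbered rows.
    cols = "ABCDEFGH"[: len(grid[0])]
    lines = ["   " + " ".join(cols)]
    for r, row in enumerate(grid, start=1):
        lines.append(f"{r:>2} " + " ".join(row))
    return "\n".join(lines)

# '#' = wall, 'D' = door, 'T' = table, '.' = open floor
room = [
    [".", ".", "#", "."],
    ["D", ".", "#", "."],
    [".", ".", ".", "T"],
]
print(grid_to_text(room))
```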
2023-08-06T18:34:34
https://www.reddit.com/r/LocalLLaMA/comments/15jwvtz/has_anybody_managed_to_find_a_way_of_describing/
staviq
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jwvtz
false
null
t3_15jwvtz
/r/LocalLLaMA/comments/15jwvtz/has_anybody_managed_to_find_a_way_of_describing/
false
false
self
1
null
learning about LLAMA need help
1
[removed]
2023-08-06T19:40:15
https://www.reddit.com/r/LocalLLaMA/comments/15jyi6m/learning_about_llama_need_help/
neo-gen-evan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jyi6m
false
null
t3_15jyi6m
/r/LocalLLaMA/comments/15jyi6m/learning_about_llama_need_help/
false
false
self
1
null
Finetuning Llama-2 on Runpod? How to increase gpu count on a pod?
1
I'm trying to finetune Llama-2 on Runpod. However, when I launch, I only see 1x A100 80GB. It says max 8, though, so I assume you can assign more. When I click deploy and then click customize, I can't find a place to increase the GPU count. I'd appreciate any help!
2023-08-06T20:38:32
https://www.reddit.com/r/LocalLLaMA/comments/15jzyr0/finetuning_llama2_on_runpod_how_to_increase_gpu/
jl303
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jzyr0
false
null
t3_15jzyr0
/r/LocalLLaMA/comments/15jzyr0/finetuning_llama2_on_runpod_how_to_increase_gpu/
false
false
self
1
null
Anyone else getting problems with repetition with chronos-hermes-13b-v2 and Nous Hermes v2?
1
These 2 specific models gave me huge trouble (I used the standard PyTorch ones), no matter the preset, sampler order, etc. They start repeating phrases from the second response onwards. Any solutions?
2023-08-06T20:48:14
https://www.reddit.com/r/LocalLLaMA/comments/15k07ba/anyone_else_is_getting_problems_with_repetition/
localbasedman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15k07ba
false
null
t3_15k07ba
/r/LocalLLaMA/comments/15k07ba/anyone_else_is_getting_problems_with_repetition/
false
false
self
1
null
Black lies, white lies, statistics. And benchmarking.
1
When I was younger, lies got sorted into black lies and white lies. Later, 'statistics' was added to the set. And, in hindsight, 'benchmarking' should have been there, too. It is the zeitgeist, after all. Anything and everything must be compared and ranked. Who is number one? How to explain complex stuff to people unable or unwilling to understand complex stuff? Easy, dumb it down to a number. Just. One. Number. 5 is more than 3. 88 is more than 78. The whatever with a score of 99 is better than the one with 88. This post was born of frustration over the commonly used benchmarks on HuggingFace, where I could not find any description of these benchmarks at all. And all of them have nonsense names. I am sure someone will come up with a direct link at HuggingFace after this post. In any case, I found: [https://msandbu.org/benchmarking-llms-and-what-is-the-best-llm/](https://msandbu.org/benchmarking-llms-and-what-is-the-best-llm/) I think this article does a decent job of describing the current landscape of LLM benchmarking. In short, I think it is a bloody mess. And boatloads of new, retrained and/or fine-tuned models are submitted on a daily basis without any accompanying benchmarking data at all. And that is *fine*. Not all models/finetunes are meant for stuff that (trivially) can be measured by a machine and represented with a single number. Visual art and music, for instance. Storytelling. For a model's ability to code, the situation is slightly better. Give it X problems to solve, test the Y results and see how many working solutions you got. At least it is a measurable number. Also, some models are better than others simply because they are bigger models. 3B vs 70B etc. But what kind of resources are actually required to run these models may take serious digging. And even comparing models of the same size, lots of parameters may still differ. Context length, for instance. And how much memory is required to run it. Performance depends on hardware; I am not sure if this can be 'normalized' in a meaningful way. I am pretty sure the time has not yet come for 'The Grand Unified LLM Benchmarking Test Set'. The average standards organization would probably not be able to organize a single meeting before the agenda was wildly outdated. That is less a dig at standards organizations, and more a nod at the rate of change in this space. Yet... is it possible to have something better than the current state? In terms of de-facto testing/test results, but also regarding parameters the model creator should be expected to provide in the model card? If yes, who needs to talk to each other to make this happen? Thank you for reading.
2023-08-06T20:56:45
https://www.reddit.com/r/LocalLLaMA/comments/15k0f06/black_lies_white_lies_statistics_and_benchmarking/
ethertype
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15k0f06
false
null
t3_15k0f06
/r/LocalLLaMA/comments/15k0f06/black_lies_white_lies_statistics_and_benchmarking/
false
false
self
1
{'enabled': False, 'images': [{'id': '3v4YqMpXl9I7qXs9mjUIIoybueFW3pvFBgJc0pOONwk', 'resolutions': [{'height': 30, 'url': 'https://external-preview.redd.it/H1pGaJfcjzy95tjj9Oxq0UcWoJBF4scpwsXiGIclgoI.jpg?width=108&crop=smart&auto=webp&s=6d8aa04bc6bf7e7a8bd0eb7d62450d854f46751c', 'width': 108}, {'height': 60, 'url': 'https://external-preview.redd.it/H1pGaJfcjzy95tjj9Oxq0UcWoJBF4scpwsXiGIclgoI.jpg?width=216&crop=smart&auto=webp&s=c5037729ab39dab6dacf3dfa391d2bdf34db0055', 'width': 216}, {'height': 90, 'url': 'https://external-preview.redd.it/H1pGaJfcjzy95tjj9Oxq0UcWoJBF4scpwsXiGIclgoI.jpg?width=320&crop=smart&auto=webp&s=3047ce95383a6911aabce1bb2bec412cee1fc02d', 'width': 320}, {'height': 180, 'url': 'https://external-preview.redd.it/H1pGaJfcjzy95tjj9Oxq0UcWoJBF4scpwsXiGIclgoI.jpg?width=640&crop=smart&auto=webp&s=72e7b9463e77d0fc95351394b27dc22190c50264', 'width': 640}, {'height': 270, 'url': 'https://external-preview.redd.it/H1pGaJfcjzy95tjj9Oxq0UcWoJBF4scpwsXiGIclgoI.jpg?width=960&crop=smart&auto=webp&s=7a2432ffb622e12216d2122a2c9cfaef6002f041', 'width': 960}], 'source': {'height': 288, 'url': 'https://external-preview.redd.it/H1pGaJfcjzy95tjj9Oxq0UcWoJBF4scpwsXiGIclgoI.jpg?auto=webp&s=d33a78409e555980ea84c80305c8144847239e4c', 'width': 1024}, 'variants': {}}]}
Classifier-Free Guidance (CFG) has been merged into the transformers library
1
2023-08-06T21:33:37
https://github.com/huggingface/transformers/pull/24654
oobabooga4
github.com
1970-01-01T00:00:00
0
{}
15k1brq
false
null
t3_15k1brq
/r/LocalLLaMA/comments/15k1brq/classifierfree_guidance_cfg_has_been_merged_into/
false
false
https://b.thumbs.redditm…buc_XxpVDYSA.jpg
1
{'enabled': False, 'images': [{'id': '92JE4v1HSrvv1_kNuRUgyEUXGPEB4CPO1SmWC9DBPlA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MIGc7ZgNB0IV8UubR8avc7UBp7yzzEgv4fASgzCZoks.jpg?width=108&crop=smart&auto=webp&s=11d5737d9b960431d772d4e0b3c7614bd8af9abc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MIGc7ZgNB0IV8UubR8avc7UBp7yzzEgv4fASgzCZoks.jpg?width=216&crop=smart&auto=webp&s=c22f8f33373beb9eef85081e65a2a2432efda668', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MIGc7ZgNB0IV8UubR8avc7UBp7yzzEgv4fASgzCZoks.jpg?width=320&crop=smart&auto=webp&s=574541411bbc746d5174b1f8cf34c6d3e8113ed7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MIGc7ZgNB0IV8UubR8avc7UBp7yzzEgv4fASgzCZoks.jpg?width=640&crop=smart&auto=webp&s=acbc9965ae137cab4901e8001cef11234bc83033', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MIGc7ZgNB0IV8UubR8avc7UBp7yzzEgv4fASgzCZoks.jpg?width=960&crop=smart&auto=webp&s=317a24cd9eefa29740d4e1b35475b8a7222aaace', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MIGc7ZgNB0IV8UubR8avc7UBp7yzzEgv4fASgzCZoks.jpg?width=1080&crop=smart&auto=webp&s=5bca4d8d38962e3dba35fb1786a02e94aafe8d02', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MIGc7ZgNB0IV8UubR8avc7UBp7yzzEgv4fASgzCZoks.jpg?auto=webp&s=8fa87ef5dcec26fda023de672373822bd3564732', 'width': 1200}, 'variants': {}}]}
How can I finetune MiniGpt-4 on my own dataset?
1
[removed]
2023-08-06T21:41:08
https://www.reddit.com/r/LocalLLaMA/comments/15k1ij7/how_can_i_finetune_minigpt4_on_my_own_dataset/
mr_house7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15k1ij7
false
null
t3_15k1ij7
/r/LocalLLaMA/comments/15k1ij7/how_can_i_finetune_minigpt4_on_my_own_dataset/
false
false
self
1
null
Local Llama-2 API in Rust
1
Hello, I have been working on an OpenAI-compatible API for serving LLaMA-2 models, written entirely in Rust. It supports offloading computation to Nvidia GPUs and Metal acceleration for GGML models. Here is the project link: [cria](https://github.com/AmineDiro/cria/blob/main/README.md). You can use it as an OpenAI replacement (check out the included `Langchain` example in the project). This is an ongoing project; I have implemented the `embeddings` and `completions` routes, and the `chat-completion` route will be here very soon! I'm really interested in your feedback and would welcome any help :)
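Since the API is OpenAI-compatible, a client sketch could look like the following, using the 0.x openai Python package. The host, port and model name here are assumptions; check the cria README for the actual bind address and route details:

```python
import openai

openai.api_base = "http://localhost:3000/v1"   # hypothetical local endpoint
openai.api_key = "not-needed-for-a-local-server"

completion = openai.Completion.create(
    model="llama-2-7b",                        # placeholder model name
    prompt="Write a haiku about Rust.",
    max_tokens=64,
)
print(completion["choices"][0]["text"])

emb = openai.Embedding.create(model="llama-2-7b", input=["hello world"])
print(len(emb["data"][0]["embedding"]))
```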
2023-08-06T21:55:05
https://www.reddit.com/r/LocalLLaMA/comments/15k1uk7/local_llama2_api_in_rust/
amindiro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15k1uk7
false
null
t3_15k1uk7
/r/LocalLLaMA/comments/15k1uk7/local_llama2_api_in_rust/
false
false
self
1
{'enabled': False, 'images': [{'id': '0LBRRYfSaj7DGydH663GMFcgOTeSykUmnO1PUdIqG34', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/iVvCGW5rLCxt2r3J3wghRSsB8DOSe9cV-OAZtuSsRDI.jpg?width=108&crop=smart&auto=webp&s=0fcb4489a88ee99a70ce4cab1aa170bf4c9df6f2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/iVvCGW5rLCxt2r3J3wghRSsB8DOSe9cV-OAZtuSsRDI.jpg?width=216&crop=smart&auto=webp&s=d9cf17e2141e331a3b2a7a363ecd94226fa5cf8a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/iVvCGW5rLCxt2r3J3wghRSsB8DOSe9cV-OAZtuSsRDI.jpg?width=320&crop=smart&auto=webp&s=2deb04c713d14fc54a1485efd508a9f2567b2944', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/iVvCGW5rLCxt2r3J3wghRSsB8DOSe9cV-OAZtuSsRDI.jpg?width=640&crop=smart&auto=webp&s=7c20aa1256dd42ef46722d26a64ab4e15019c776', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/iVvCGW5rLCxt2r3J3wghRSsB8DOSe9cV-OAZtuSsRDI.jpg?width=960&crop=smart&auto=webp&s=f9a268a7f6339680ee13120150b4c1263dc8dfb4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/iVvCGW5rLCxt2r3J3wghRSsB8DOSe9cV-OAZtuSsRDI.jpg?width=1080&crop=smart&auto=webp&s=a5954f2039ac2223b422f294a1c004ef80c9271b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/iVvCGW5rLCxt2r3J3wghRSsB8DOSe9cV-OAZtuSsRDI.jpg?auto=webp&s=b0c36722873beb0e2eac690c34a833967b385b74', 'width': 1200}, 'variants': {}}]}
First Indonesian LLM?
1
99% of posts in LocalLLaMA are dedicated to English models. So today is that rare exception when the post is about a non-English model. Here it is: [Ichsan2895/Merak-7B-v1 · Hugging Face](https://huggingface.co/Ichsan2895/Merak-7B-v1) GGML version: [s3nh/Merak-7B-v1-GGML at main (huggingface.co)](https://huggingface.co/s3nh/Merak-7B-v1-GGML/tree/main) P.S.: If you manage to find other non-English GGML models, you can post links to them in the comments. Thank you.
2023-08-06T22:13:29
https://www.reddit.com/r/LocalLLaMA/comments/15k2ami/first_indonesian_llm/
Languages_Learner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15k2ami
false
null
t3_15k2ami
/r/LocalLLaMA/comments/15k2ami/first_indonesian_llm/
false
false
self
1
{'enabled': False, 'images': [{'id': '2lbpttmquP319hezfse47DJxEwuNaYUdtjB5VTuzhT8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/p7D4uETpHk9odOyT7ftZPfA7aKFofcWrbnD28uIpAu0.jpg?width=108&crop=smart&auto=webp&s=995c3f055b80321894b7c394d2c3cf0a27c05bef', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/p7D4uETpHk9odOyT7ftZPfA7aKFofcWrbnD28uIpAu0.jpg?width=216&crop=smart&auto=webp&s=d023513291fa97fc113d9ba96f060703fe652ebe', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/p7D4uETpHk9odOyT7ftZPfA7aKFofcWrbnD28uIpAu0.jpg?width=320&crop=smart&auto=webp&s=acdb2f92e0544bbee515e0cb592011abd137a0ea', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/p7D4uETpHk9odOyT7ftZPfA7aKFofcWrbnD28uIpAu0.jpg?width=640&crop=smart&auto=webp&s=b647460787d1e6833ecb9d738d0ae044ce83e843', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/p7D4uETpHk9odOyT7ftZPfA7aKFofcWrbnD28uIpAu0.jpg?width=960&crop=smart&auto=webp&s=30b1548c08fcac6ae7561c69b5252b29bc020870', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/p7D4uETpHk9odOyT7ftZPfA7aKFofcWrbnD28uIpAu0.jpg?width=1080&crop=smart&auto=webp&s=5c253ff1463530936adf903bf20fea087502cad1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/p7D4uETpHk9odOyT7ftZPfA7aKFofcWrbnD28uIpAu0.jpg?auto=webp&s=59e2634b80f8a40c738a09ccf6e5b89d5e2172ba', 'width': 1200}, 'variants': {}}]}
Any news on BigCode? (octocoder and starcoder cosmos)
1
Some weeks ago BigCode said in a tweet that they had trained smaller versions of the StarCoder model (1B, 3B and 7B). Meanwhile they created a GitHub repo talking about OctoCoder and a new dataset with 4 TB of data, more than the 2.9 TB of the cleaned Stack. So, does anybody have any new information? Are they training a new model? Will they release a new dataset? And where are the smaller models? Just curious.
2023-08-06T22:59:09
https://www.reddit.com/r/LocalLLaMA/comments/15k3cjn/any_news_on_bigcode_octocoder_and_starcoder_cosmos/
GG9242
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15k3cjn
false
null
t3_15k3cjn
/r/LocalLLaMA/comments/15k3cjn/any_news_on_bigcode_octocoder_and_starcoder_cosmos/
false
false
self
1
null
Mamba v3 3b: when did humans first learn how to whistle a tune by puckering their lips?
1
> when did humans first learn how to whistle a tune by puckering their lips? The earliest known examples of whistling and the production of complex, tonal musical instruments date back to 360,000 B.C., when the first hominids began to use their hands and hands to produce sounds similar to bird calls or flutes as their ancestors migrated from Africa to Eurasia in search of food. But the fact is that the human ability to whistle was not the result of natural evolution, but rather artificial, through the use of teeth and lips. Humans have been producing sounds by manipulating their teeth, lips and tongue since they first learned to make sounds, either through sound production or through noise amplification through a tube-like instrument. The early humans learned to whistle by puckering their lips (the same way as how we open our mouth to speak), which produces the high-pitched squeaks that many people associate with whistling and which they have since used for food gathering, communication, or entertainment. It was not until about 30,000 years ago that a tool was developed with teeth and a handle to produce higher pitched sounds, making it possible to whistle at a much higher pitch and in more complex patterns. It is important to note that whistling did not develop out of necessity; rather, it was a unique way for early humans to communicate with one another, either through sound production or noise amplification. It is believed that the first instruments were used as part of ritual performances, with whistles and other pipes being used in ceremonies or rituals to honor elders, announce a new leader, or mark important events. To conclude, it is safe to say that the practice of whistling began through the use of teeth, lips, and hands rather than as an evolutionary trait. However, this does not mean that whistling is not useful in our daily lives - while we may not have evolved the ability to whistle, many people find it a useful way to express themselves or convey their message to others.
2023-08-06T23:01:42
https://www.reddit.com/r/LocalLLaMA/comments/15k3ete/mamba_v3_3b_when_did_humans_first_learn_how_to/
Aaaaaaaaaeeeee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15k3ete
false
null
t3_15k3ete
/r/LocalLLaMA/comments/15k3ete/mamba_v3_3b_when_did_humans_first_learn_how_to/
false
false
self
1
null
Any Australian, or US West Coast, based LLM / AI technology specialists?
1
2023-08-06T23:12:37
https://i.redd.it/j7qdhej4okgb1.png
Rufawana
i.redd.it
1970-01-01T00:00:00
0
{}
15k3o00
false
null
t3_15k3o00
/r/LocalLLaMA/comments/15k3o00/any_australian_or_us_west_coast_based_llm_ai/
false
false
https://b.thumbs.redditm…sbKtn83R-qLw.jpg
1
{'enabled': True, 'images': [{'id': 'GZpbe6yagQBIBRhPAiqbtmwgjlJU-osglstOmWBm08Y', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/j7qdhej4okgb1.png?width=108&crop=smart&auto=webp&s=30814f39f87a44baaabcbd3f03d236b222929e2c', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/j7qdhej4okgb1.png?width=216&crop=smart&auto=webp&s=83a351141911d6d739a4c5747123bf6b9f2161b6', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/j7qdhej4okgb1.png?width=320&crop=smart&auto=webp&s=a221fda2bb955448269d89446c7c54257a2ee07a', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/j7qdhej4okgb1.png?width=640&crop=smart&auto=webp&s=da940e1c9af2d9b85a34892fa0aa35ac43a0c2b3', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/j7qdhej4okgb1.png?width=960&crop=smart&auto=webp&s=3a4c63f9d7d1820d356d65e930771b4bb078fb16', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/j7qdhej4okgb1.png?auto=webp&s=9c202ce4a60be9687684b534ac03f06092edf90c', 'width': 1024}, 'variants': {}}]}
Specifying output format
1
Hi! I have a tinkering project where I need the model to generate an answer according to a defined format, for example answering with only one word. I tried a few models in GPTQ format, but it looks like they often just ignore the request to answer in just one word. The answers are somewhat accurate, but mostly lengthy. Do you have any experience in making an LLM "behave"? :) Maybe a specific model to try? I've got 24GB of VRAM and I'm not afraid to use it! ^^
2023-08-06T23:19:54
https://www.reddit.com/r/LocalLLaMA/comments/15k3u2g/specifying_output_format/
maz3max
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15k3u2g
false
null
t3_15k3u2g
/r/LocalLLaMA/comments/15k3u2g/specifying_output_format/
false
false
self
1
null
Does anyone know of any information on how to finetune a llm on a image, language pair dataset?
1
I've seen the end product and am now just reading through code, but any sort of introduction to the problem would be a great time saver. I'm hoping to finetune an LLM on a custom dataset that I believe it may do better on than traditional approaches, due to the ability to feed it contextual text associated with each image. I've looked, but I haven't seen anything that walks through how to do it.
2023-08-07T01:04:02
https://www.reddit.com/r/LocalLLaMA/comments/15k655k/does_anyone_know_of_any_information_on_how_to/
CertainCoat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15k655k
false
null
t3_15k655k
/r/LocalLLaMA/comments/15k655k/does_anyone_know_of_any_information_on_how_to/
false
false
self
1
null
dolphin-llama2-7b
1
Today, I released dolphin-llama2-7b, sponsored by [preemo](https://www.preemo.io/) [https://huggingface.co/ehartford/dolphin-llama2-7b](https://huggingface.co/ehartford/dolphin-llama2-7b) [**https://erichartford.com/dolphin**](https://erichartford.com/dolphin) This model is based on llama-2 and thus it is suitable for commercial or non-commercial use Dataset consisted of: * 842,610 instructions of FLANv2 augmented with GPT-4 completions * 2,625,353 instructions of FLANv2 augmented with GPT-3.5 completions This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant to requests. You are advised to implement your own alignment layer before exposing the model as a service. Please read my blog post about uncensored models. [**https://erichartford.com/uncensored-models**](https://erichartford.com/uncensored-models) You are responsible for any content you create using this model. Enjoy responsibly. Gratitude to my core team - Pankaj Mathur, Rob "Rohan" O'Callahan, and Tom "TheBloke" Jobbins. Thanks and respect to those who helped - including Wing "Caseus" Lian, NanoBit, and Teknium. If you like my work you are welcome to [buy me a coffee](https://www.buymeacoffee.com/ehartford) &#x200B;
2023-08-07T01:56:56
https://www.reddit.com/r/LocalLLaMA/comments/15k78yi/dolphinllama27b/
faldore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15k78yi
false
null
t3_15k78yi
/r/LocalLLaMA/comments/15k78yi/dolphinllama27b/
false
false
self
1
null
WizardLM-1.0-Uncensored-Llama2-13b
1
Today I released WizardLM-1.0-Uncensored-Llama2-13b [https://huggingface.co/ehartford/WizardLM-1.0-Uncensored-Llama2-13b](https://huggingface.co/ehartford/WizardLM-1.0-Uncensored-Llama2-13b) Based on the [WizardLM/WizardLM\_evol\_instruct\_V2\_196k](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k) dataset I filtered it to remove refusals, avoidance, bias. [ehartford/WizardLM\_evol\_instruct\_V2\_196k\_unfiltered\_merged\_split](https://huggingface.co/datasets/ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split) I trained this with Vicuna's FastChat, as the new data is in ShareGPT format and WizardLM team has not specified a method to train it. This is a full weights training, not a LoRA. The prompt format is Vicuna-1.1 style. While the instruct layer has been uncensored, Llama2 has opinions and will resist instructions it disagrees with. You must overcome that with your own wit and prompt engineering. Finally, you are responsible for the content you create using this model. It is uncensored and can produce illegal and unethical content, if you ask it to. If deploying this in a service, I would recommend you train a LoRA to implement your own alignment to prevent unwanted behavior. Thank you to the open-source community and everyone who helped me.
2023-08-07T02:03:29
https://www.reddit.com/r/LocalLLaMA/comments/15k7dv9/wizardlm10uncensoredllama213b/
faldore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15k7dv9
false
null
t3_15k7dv9
/r/LocalLLaMA/comments/15k7dv9/wizardlm10uncensoredllama213b/
false
false
self
1
{'enabled': False, 'images': [{'id': 'XOHpgHNy2-sJYt-7otMYQr6A_lch0wGQeb5mnw39QKQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8o_c9AAxq8qn_syV3jHVABmVXGhLkLtmI9gelzOdtkA.jpg?width=108&crop=smart&auto=webp&s=aa773b62368ae0a87bcb427e8ce76f550c2d447b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8o_c9AAxq8qn_syV3jHVABmVXGhLkLtmI9gelzOdtkA.jpg?width=216&crop=smart&auto=webp&s=df0c0f27c0d720383e67b6c88459adaa5a100b96', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8o_c9AAxq8qn_syV3jHVABmVXGhLkLtmI9gelzOdtkA.jpg?width=320&crop=smart&auto=webp&s=1b41e9e067f2be185d446dc1f79d3a8b31c1d32f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8o_c9AAxq8qn_syV3jHVABmVXGhLkLtmI9gelzOdtkA.jpg?width=640&crop=smart&auto=webp&s=8380a79aadf7aae0be05c74e57fddf4d1ad0f059', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8o_c9AAxq8qn_syV3jHVABmVXGhLkLtmI9gelzOdtkA.jpg?width=960&crop=smart&auto=webp&s=b18838e7b6603017dbc38e531bd52e9cfaec3c83', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8o_c9AAxq8qn_syV3jHVABmVXGhLkLtmI9gelzOdtkA.jpg?width=1080&crop=smart&auto=webp&s=478b0364753e57d9933345b25a83a22b003ae37d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8o_c9AAxq8qn_syV3jHVABmVXGhLkLtmI9gelzOdtkA.jpg?auto=webp&s=b68a9998dc2ce0efe3b3e932fd7157f235ff80f1', 'width': 1200}, 'variants': {}}]}
Can I use LLM to compare 2 pdf documents and find changes ?
1
So I review a lot of documents, and sometimes minor but essential changes occur after I'm done reviewing. I was wondering if an LLM could be a way to find the changes quickly instead of going through hundreds of pages of an article again? Can embeddings be used to achieve that?
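One way embeddings can help is to flag which pages changed, so an LLM (or a human) only has to look at those. A rough sketch with pypdf and sentence-transformers; the filenames and the 0.98 threshold are placeholders:

```python
from pypdf import PdfReader
from sentence_transformers import SentenceTransformer, util

def pages(path):
    # Extract the text of each page; empty string if extraction fails.
    return [p.extract_text() or "" for p in PdfReader(path).pages]

old, new = pages("report_v1.pdf"), pages("report_v2.pdf")  # placeholder filenames

model = SentenceTransformer("all-MiniLM-L6-v2")
old_emb = model.encode(old, convert_to_tensor=True)
new_emb = model.encode(new, convert_to_tensor=True)

# Flag any new page whose best match among the old pages is not near-identical.
for i, emb in enumerate(new_emb):
    best = util.cos_sim(emb, old_emb).max().item()
    if best < 0.98:
        print(f"Page {i + 1} of the new PDF looks changed (best similarity {best:.3f})")
```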
2023-08-07T02:06:28
https://www.reddit.com/r/LocalLLaMA/comments/15k7g0f/can_i_use_llm_to_compare_2_pdf_documents_and_find/
Plane_Ad9568
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15k7g0f
false
null
t3_15k7g0f
/r/LocalLLaMA/comments/15k7g0f/can_i_use_llm_to_compare_2_pdf_documents_and_find/
false
false
self
1
null
8 bit WizardCoder 15B
1
I think 8-bit WizardCoder should run on a 3090/4090, as it should require about 23.07 GB of VRAM. Is there anywhere I can acquire an 8-bit WizardCoder model? Or how can I create one myself from the original? I'm currently running 4-bit WizardCoder, btw, just trying to get some improved accuracy.
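One option, rather than hunting for pre-quantized 8-bit weights, is to load the original fp16 checkpoint in 8-bit at runtime via bitsandbytes. A sketch, assuming transformers, accelerate and bitsandbytes are installed (double-check the exact repo id on the Hub):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WizardLM/WizardCoder-15B-V1.0"  # verify the exact repo id on the Hub
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_8bit=True,   # bitsandbytes int8 weights, roughly 1 byte per parameter
    device_map="auto",
)

inputs = tok("def fibonacci(n):", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```

(WizardCoder normally expects an Alpaca-style instruction prompt; the bare completion above is only to show that the 8-bit load works.)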
2023-08-07T03:24:17
https://www.reddit.com/r/LocalLLaMA/comments/15k90yz/8_bit_wizardcoder_15b/
kintrith
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15k90yz
false
null
t3_15k90yz
/r/LocalLLaMA/comments/15k90yz/8_bit_wizardcoder_15b/
false
false
self
1
null
How does Microsoft Guidance work?
1
Microsoft's Guidance has been recommended for more reliable structuring of the output. I have some fundamental doubts about its applicability: - It seems like a programming language. Wouldn't an LLM need to know how to "interpret" this {{ }} syntax for it to work? So it shouldn't be applicable to all LLMs by default; or is there a second agent that post-processes the output of a base LLM? - If there is a post-processing LLM, then in cases where the base output doesn't have entities that fit into the variables, there could be issues. Or is this uncommon, and is the system usually reliable? - Or is this a prompt-engineering system, where the variables define a way for Guidance to build the actual prompt so that those entities are guaranteed? - The speedup from Guidance Acceleration mentions cached key-value pairs. This suggests to me that we'd see speedups when there are repeated queries of the same type, where cached values can be reused; a single call should have a similar time cost for Guidance and non-Guidance forward passes?
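For context, here is a rough sketch of what a guidance program looked like with the mid-2023 API; the exact calls and argument names are from memory and should be treated as assumptions (check the repo for the current syntax). The {{ }} tags are parsed by the guidance library itself, which splits the template into plain prompt chunks and generation slots, so the model never has to understand the syntax:

```python
import guidance

# Pick a backend; the model name is a placeholder, and an OpenAI backend also exists.
guidance.llm = guidance.llms.Transformers("meta-llama/Llama-2-7b-hf")

program = guidance("""Q: What is the capital of France?
A: {{gen 'answer' max_tokens=16}}""")

result = program()          # runs the template against the chosen backend
print(result["answer"])     # the text generated for the 'answer' slot
```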
2023-08-07T03:44:17
https://www.reddit.com/r/LocalLLaMA/comments/15k9efr/how_does_microsoft_guidance_work/
T_hank
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15k9efr
false
null
t3_15k9efr
/r/LocalLLaMA/comments/15k9efr/how_does_microsoft_guidance_work/
false
false
self
1
null
how to make multiple inference requests from a single model object? (llama-cpp-python)
1
I was planning to set up a local LLaMA API server for our team and I have been using llama-cpp-python with Flask for this. The code looks kinda like this: ```python llm = Llama(model_path) @app.route("/llama", methods=["POST"]) def inference(prompt): return llm(prompt) ``` But whenever I make more than 2 POST requests (i.e. 2 inference requests at once), the model seems to crash without any logs. My guess is that some variable is getting called twice in the Llama class from llama-cpp-python, but I'm not sure. Are there any better methods to create an API like this? (I'm using the luna-ai-llama2-uncensored GGML model for this.) Let me know if you need other details.
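For what it's worth, one common workaround (a sketch, not a drop-in fix) is to serialize access to the single Llama object with a lock, since a llama.cpp context is not safe to use from two requests at once; the model path and port below are placeholders:

```python
import threading

from flask import Flask, jsonify, request
from llama_cpp import Llama

app = Flask(__name__)
llm = Llama(model_path="luna-ai-llama2-uncensored.ggmlv3.q4_0.bin")  # placeholder path
lock = threading.Lock()

@app.route("/llama", methods=["POST"])
def inference():
    prompt = request.json["prompt"]
    with lock:                        # one generation at a time
        out = llm(prompt, max_tokens=256)
    return jsonify(out)

if __name__ == "__main__":
    app.run(port=8000, threaded=True)
```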
2023-08-07T05:25:27
https://www.reddit.com/r/LocalLLaMA/comments/15kbbna/how_to_make_multiple_inference_requests_from_a/
nerdyvaroo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kbbna
false
null
t3_15kbbna
/r/LocalLLaMA/comments/15kbbna/how_to_make_multiple_inference_requests_from_a/
false
false
self
1
null
Using LLMs to query personal data. Does it suck or am I doing it wrong
1
I’ve tried experimenting with privateGPT and compatible models over the last few days on a Runpod A40 instance. Setup went fine. Provided source docs of various types. Did the ingest. Ran the model. But it seems like it can’t answer basic questions. For example: “what are the top five most frequently mention countries in the source document.” Nope, unable to do it no matter the model. Does it suck, or am I using it wrong. For reference, it’s a CSV file of CRS report titles.
2023-08-07T06:09:57
https://www.reddit.com/r/LocalLLaMA/comments/15kc4z8/using_llms_to_query_personal_data_does_it_suck_or/
badatmathdave
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kc4z8
false
null
t3_15kc4z8
/r/LocalLLaMA/comments/15kc4z8/using_llms_to_query_personal_data_does_it_suck_or/
false
false
self
1
null
Falcon-7b qlora finetuned generates extra response.
1
So I fine-tuned Falcon-7B using PEFT on a Python code instructions dataset (18k rows) for a single epoch. During inference, the model generates some extra text after answering the question. Sometimes it starts another question by itself and answers it. Sometimes it keeps on explaining the answer. Why is this happening, and how should I avoid it?
2023-08-07T06:40:03
https://www.reddit.com/r/LocalLLaMA/comments/15kcohd/falcon7b_qlora_finetuned_generates_extra_response/
thepotatoguy17
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kcohd
false
null
t3_15kcohd
/r/LocalLLaMA/comments/15kcohd/falcon7b_qlora_finetuned_generates_extra_response/
false
false
self
1
null
Anyone use this
1
Google made this yea?
2023-08-07T08:23:56
https://www.reddit.com/r/LocalLLaMA/comments/15kehp2/anyone_use_this/
LearnOnnReddit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kehp2
false
null
t3_15kehp2
/r/LocalLLaMA/comments/15kehp2/anyone_use_this/
false
false
self
1
null
Llama 2 officially recommends removing the default system prompt
1
2023-08-07T09:14:42
https://github.com/facebookresearch/llama#system-prompt-update
hackerllama
github.com
1970-01-01T00:00:00
0
{}
15kfe8p
false
null
t3_15kfe8p
/r/LocalLLaMA/comments/15kfe8p/llama_2_officially_recommends_removing_the/
false
false
https://b.thumbs.redditm…ykXreJILEJ8U.jpg
1
{'enabled': False, 'images': [{'id': 'FGHADZF6kKD31GejH2OnPEP-YDb6BsOBjcCJJKsF9Bc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Q6uqIm8ZHsU8dvI2bphFuxKJq8_G421UZ1s-cNMYu8I.jpg?width=108&crop=smart&auto=webp&s=bf9621bee0952932576a0b76165d02c71b71a6b7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Q6uqIm8ZHsU8dvI2bphFuxKJq8_G421UZ1s-cNMYu8I.jpg?width=216&crop=smart&auto=webp&s=67dc90b46419e6d60d553dc4c3e78ea7ff8c27bd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Q6uqIm8ZHsU8dvI2bphFuxKJq8_G421UZ1s-cNMYu8I.jpg?width=320&crop=smart&auto=webp&s=ced3e0f281f786462efeed34ebaf94dca9119fba', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Q6uqIm8ZHsU8dvI2bphFuxKJq8_G421UZ1s-cNMYu8I.jpg?width=640&crop=smart&auto=webp&s=219465ec9da49c3bd5bf73e1c54e7c67c0b9a079', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Q6uqIm8ZHsU8dvI2bphFuxKJq8_G421UZ1s-cNMYu8I.jpg?width=960&crop=smart&auto=webp&s=48e8d6beb330c153f9c1a4044fd7ade268788040', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Q6uqIm8ZHsU8dvI2bphFuxKJq8_G421UZ1s-cNMYu8I.jpg?width=1080&crop=smart&auto=webp&s=e0fe55c8f58645f3f3d7c268acc22d1f32dc390f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Q6uqIm8ZHsU8dvI2bphFuxKJq8_G421UZ1s-cNMYu8I.jpg?auto=webp&s=66d454f9704b4060ec583e03beebbd393af55e5c', 'width': 1200}, 'variants': {}}]}
OpenChat 3.2
1
[removed]
2023-08-07T09:29:04
https://www.reddit.com/r/LocalLLaMA/comments/15kfnm1/openchat_32/
imonenext
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kfnm1
false
null
t3_15kfnm1
/r/LocalLLaMA/comments/15kfnm1/openchat_32/
false
false
self
1
null
Instruction Fine-Tuning to explicitly follow a certain Context?
1
I want to know if it would be possible to instruct fine-tune a model to answer a question based on a particular context. The idea here being that during inference, the context will be replaced by the relevant documents returned from a VectorDB search. **What I have already tried** 1. I have already tried regular RAG over my documents; it was either giving incomplete answers or just repeating maybe one line of the provided context. Temperature was set to 0, so that was probably the reason it repeated the same content; even adjusting the repetition penalty made no difference. 2. I have also tried fine-tuning it on a "### Human: <question to be answered based on document> ### Assistant: <answer to question using context from document>" style dataset. Here I am not explicitly telling the model to follow the context, just what to generate. The model lost its generalization and lost the ability to listen to the instruction asking it to pay attention to the context when provided during inference. **Coming back to my question** Is it possible that a dataset such as the following will make the model follow the context explicitly as part of the instruct fine-tuning? "**### Instruction**: Generate a procedure for the following: **### Input**: <some input> while following this context. **### Context**: <relevant context>. **### Output**: <answer that follows the context>" Please let me know if such datasets already exist. I have found [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k), which has a context portion, but I don't know how the model learns to follow the context during generation.
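To make the proposed format concrete, here is a sketch of how one training example could be assembled; the field contents are placeholders and only the structure matters:

```python
# Alpaca-style prompt with an explicit Context field, as described above.
TEMPLATE = (
    "### Instruction:\nGenerate a procedure for the following, "
    "using only the provided context.\n\n"
    "### Input:\n{question}\n\n"
    "### Context:\n{context}\n\n"
    "### Output:\n{answer}"
)

example = TEMPLATE.format(
    question="How do I reset the device?",
    context="Section 4.2: Hold the power button for 10 seconds to perform a factory reset.",
    answer="Hold the power button for 10 seconds, as described in section 4.2.",
)
print(example)
```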
2023-08-07T09:44:08
https://www.reddit.com/r/LocalLLaMA/comments/15kfxj1/instruction_finetuning_to_explicitly_follow_a/
ShaneMathy911
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kfxj1
false
null
t3_15kfxj1
/r/LocalLLaMA/comments/15kfxj1/instruction_finetuning_to_explicitly_follow_a/
false
false
self
1
{'enabled': False, 'images': [{'id': '185ujYrwJ2EOh6mGp-l9eYbo_lk0hPRZdekaAQhzzZs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/txx2VhCwQTgljoODlhhYc_faL_kN5MP0P-MwRRqdPyQ.jpg?width=108&crop=smart&auto=webp&s=25c02f753e4e995e00b31f979f098a1d32a76339', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/txx2VhCwQTgljoODlhhYc_faL_kN5MP0P-MwRRqdPyQ.jpg?width=216&crop=smart&auto=webp&s=c71c0d43184d353dcdd696d3d476ba8fa503f6b2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/txx2VhCwQTgljoODlhhYc_faL_kN5MP0P-MwRRqdPyQ.jpg?width=320&crop=smart&auto=webp&s=a72aef4d9cb1684b021eb1dcadaec9d66d097f7e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/txx2VhCwQTgljoODlhhYc_faL_kN5MP0P-MwRRqdPyQ.jpg?width=640&crop=smart&auto=webp&s=d50d8b7275e2b8234a3ca628d7d30fcf5264bfe7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/txx2VhCwQTgljoODlhhYc_faL_kN5MP0P-MwRRqdPyQ.jpg?width=960&crop=smart&auto=webp&s=8d93caa62b7007a7a00c47fad77cc40c2e21d181', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/txx2VhCwQTgljoODlhhYc_faL_kN5MP0P-MwRRqdPyQ.jpg?width=1080&crop=smart&auto=webp&s=7177455cbe550ca69ab33bf76b531fddec57881c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/txx2VhCwQTgljoODlhhYc_faL_kN5MP0P-MwRRqdPyQ.jpg?auto=webp&s=75daad98aebd2f8e505e2beb3f66f0fc521621dd', 'width': 1200}, 'variants': {}}]}
Qwen 7B
1
[removed]
2023-08-07T09:50:00
https://www.reddit.com/r/LocalLLaMA/comments/15kg1du/qwen_7b/
lavajam
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kg1du
false
null
t3_15kg1du
/r/LocalLLaMA/comments/15kg1du/qwen_7b/
false
false
self
1
{'enabled': False, 'images': [{'id': 'An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=108&crop=smart&auto=webp&s=284ee86cd9228390268ace75b44e497c1fec562f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=216&crop=smart&auto=webp&s=96628b1c155401ce2d04a853b6524fa0c95cd632', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=320&crop=smart&auto=webp&s=f5f435bb4d31f0f695560cb0fb6f456702452062', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=640&crop=smart&auto=webp&s=b8b6a03fcde27061acee8ab4cb6ecc598a7ac6b9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=960&crop=smart&auto=webp&s=bbda73bd4f11be7b71efb3892b4107414d815613', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=1080&crop=smart&auto=webp&s=0158100ff6f9041cc8dcb861b66a3db041df5095', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?auto=webp&s=daff0272548bd7ffe5bc2b1eff6cd5c752144ed4', 'width': 1200}, 'variants': {}}]}
Open Source Models or OpenAI API for Document Summarization and Question Answering
1
Hello everyone! I am working on an application that leverages LLMs for document summarization and question answering. Mainly, the documents I'll be working on are RFP documents, and the goal is to automate responses to them. I can't really decide if I should invest in a better system and deploy a local LLM, or just get the OpenAI API and do it that way. I will mainly be using LangChain, although if you think there are better options please do let me know (I am a total noob at this). From what little I've learned by scanning its docs, it pretty much handles all of the use cases I need. I just don't know, firstly, which model to use, and whether ultimately it's better to just use GPT-3.5 through the API. This application is intended to be commercialized, so I guess hosting it locally would be too expensive. Please give suggestions. I haven't talked to my boss yet, as I've mostly been experimenting on Colab, but the free version is so limited in RAM that apart from very small models it's a pain to work with, and you run into time limits as well. So, should I ask my boss for a beefier laptop or an OpenAI API subscription? I would greatly appreciate any guidance or insights on these matters. Thanks in advance!
2023-08-07T10:32:33
https://www.reddit.com/r/LocalLLaMA/comments/15kgu48/open_source_models_or_openai_api_for_document/
Chuckycutie1993
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kgu48
false
null
t3_15kgu48
/r/LocalLLaMA/comments/15kgu48/open_source_models_or_openai_api_for_document/
false
false
self
1
null
Enhancing a foundational model locally
1
[removed]
2023-08-07T10:59:17
https://www.reddit.com/r/LocalLLaMA/comments/15khd28/enhancing_a_foundational_model_locally/
Decent_Pattern_9343
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15khd28
false
null
t3_15khd28
/r/LocalLLaMA/comments/15khd28/enhancing_a_foundational_model_locally/
false
false
self
1
null
Getting wrong answer after fine tuning.
1
I did fine-tuning using PEFT LoRA on a custom question-answer dataset. It doesn't give accurate output even for exact questions from the dataset. For example: quest: "where does elon musk born?" ans: "elon musk borned in america.", even though the accurate answer is given in the dataset. Any idea how to fix it? Do I need to change something in preprocessing, or something else? Can anyone help me? I am stressing about this.
2023-08-07T11:23:43
https://www.reddit.com/r/LocalLLaMA/comments/15khwlm/getting_wrong_answer_after_fine_tuning/
mathageche
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15khwlm
false
null
t3_15khwlm
/r/LocalLLaMA/comments/15khwlm/getting_wrong_answer_after_fine_tuning/
false
false
self
1
null
llama-gaan-2-7b-chat-hf-dutch 🇳🇱
1
If it ain't Dutch, it ain't much [😉](https://emojipedia.org/winking-face/)! [https://huggingface.co/Mirage-Studio/llama-gaan-2-7b-chat-hf-dutch](https://huggingface.co/Mirage-Studio/llama-gaan-2-7b-chat-hf-dutch) At Mirage Studio we just released Llama-gaan, an open-source 🇳🇱 -finetuned version of everyone's favorite Llama 2. Llama-gaan represents a major milestone in the Llama-2 catalogue, since as far as we know, we are the first to create such a customized model for the Dutch language. While the model has plenty room for improvement, we wanted to release this asap, because it already speaks Dutch better than meta's own llama2. And now, we can't wait to see what the community will contribute to make it even better. Give it a try and let us know! *Example question:* Wanneer is Nederland ontstaan? *Llama gaan result:* Nederland is een land in Noordwest-Europa dat zich bevindt in de provincie Holland. Het is een van de meest bevolkte landen van Europa en is een van de meest bezochte landen ter wereld. Nederland is een van de meest ontwikkelde landen ter wereld en heeft een van de hoogste levensstandaarden. Het is een van de meest diverse landen ter wereld en heeft een lange geschiedenis van immigratie. Also, this message was generated by our very own [MirageGPT](https://www.mirage-studio.io/) ;)
2023-08-07T12:05:25
https://www.reddit.com/r/LocalLLaMA/comments/15kitng/llamagaan27bchathfdutch/
gijs4g
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kitng
false
null
t3_15kitng
/r/LocalLLaMA/comments/15kitng/llamagaan27bchathfdutch/
false
false
self
1
{'enabled': False, 'images': [{'id': 'TYg5_5x2yp-Ny10ISVzht6UW6sIt0n0UWd8ROlAPJqs', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/Pl8jDgd6Vq-H9rJgHFXWMecxyAsgCygbeVsuuS8Xcuk.jpg?width=108&crop=smart&auto=webp&s=a4c3348398bb118872a1fd866ba65b817340ce3f', 'width': 108}, {'height': 115, 'url': 'https://external-preview.redd.it/Pl8jDgd6Vq-H9rJgHFXWMecxyAsgCygbeVsuuS8Xcuk.jpg?width=216&crop=smart&auto=webp&s=4d0220212c34caede329a6d9a92d1b896b288e24', 'width': 216}, {'height': 171, 'url': 'https://external-preview.redd.it/Pl8jDgd6Vq-H9rJgHFXWMecxyAsgCygbeVsuuS8Xcuk.jpg?width=320&crop=smart&auto=webp&s=d0f6944fdf14cc486c12dbfee736a7d8379b1b73', 'width': 320}], 'source': {'height': 300, 'url': 'https://external-preview.redd.it/Pl8jDgd6Vq-H9rJgHFXWMecxyAsgCygbeVsuuS8Xcuk.jpg?auto=webp&s=6065ad7ab9055e9bcf262f4a8afe7adf13362cb0', 'width': 560}, 'variants': {}}]}
𝐃𝐨 𝐲𝐨𝐮 𝐤𝐧𝐨𝐰 𝐰𝐡𝐲 𝐋𝐥𝐚𝐦𝐚 🦙 𝐨𝐫 𝐂𝐡𝐚𝐭𝐆𝐏𝐓 🤖 𝐡𝐚𝐯𝐞 𝐭𝐞𝐱𝐭 𝐬𝐢𝐳𝐞 𝐥𝐢𝐦𝐢𝐭𝐬? 🤔💭 Find it out! 💡
1
2023-08-07T12:21:59
https://medium.com/@joaolages/the-quest-to-have-endless-conversations-with-llama-and-chatgpt-%EF%B8%8F-81360b9b34b2
JClub
medium.com
1970-01-01T00:00:00
0
{}
15kj71f
false
null
t3_15kj71f
/r/LocalLLaMA/comments/15kj71f/𝐃𝐨_𝐲𝐨𝐮_𝐤𝐧𝐨𝐰_𝐰𝐡𝐲_𝐋𝐥𝐚𝐦𝐚_𝐨𝐫_𝐂𝐡𝐚𝐭𝐆𝐏𝐓_𝐡𝐚𝐯𝐞_𝐭𝐞𝐱𝐭_𝐬𝐢𝐳𝐞/
false
false
default
1
{'enabled': False, 'images': [{'id': 'Bd2tKVhk26ts2G8Ag4O6orZvv4fbvZNAF7Q03ikzADQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CTypdu0BQ5FGyzkfIBDvKm4lV45PMJGwwkvJ9hxNRw4.jpg?width=108&crop=smart&auto=webp&s=ba96a68eba2babade967b840ab3c1d89e134154e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CTypdu0BQ5FGyzkfIBDvKm4lV45PMJGwwkvJ9hxNRw4.jpg?width=216&crop=smart&auto=webp&s=6e06649464f3ca435c32f2ed9e73168e9273fb90', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CTypdu0BQ5FGyzkfIBDvKm4lV45PMJGwwkvJ9hxNRw4.jpg?width=320&crop=smart&auto=webp&s=d8d718d986fc19b9bd1bb0562e6733d0ee2ba125', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CTypdu0BQ5FGyzkfIBDvKm4lV45PMJGwwkvJ9hxNRw4.jpg?width=640&crop=smart&auto=webp&s=5a5f3c7aeeb56bbab7b0db89995dd4919050176f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CTypdu0BQ5FGyzkfIBDvKm4lV45PMJGwwkvJ9hxNRw4.jpg?width=960&crop=smart&auto=webp&s=7dc8f89416830f2e71160073e28463a450434e17', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CTypdu0BQ5FGyzkfIBDvKm4lV45PMJGwwkvJ9hxNRw4.jpg?width=1080&crop=smart&auto=webp&s=92d2d7a7907d777bff8e63ad2d24594dae67f25c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CTypdu0BQ5FGyzkfIBDvKm4lV45PMJGwwkvJ9hxNRw4.jpg?auto=webp&s=0b855eb3d262137937689963a3854557f4517b0b', 'width': 1200}, 'variants': {}}]}
Fine-tuning LLM's for roleplay
1
I want to create a perfect conversational character that I can interact with in my game. I've tried creating a character.json in oobabooga with the 13B Nous-Hermes LLaMA-2 model, but the results did not satisfy me. I looked into fine-tuning but never tried it. I know I need to gather a decent amount of info about my character, which I also don't know how to format. Luckily I have enough hardware resources (5x RTX 4090). Do you think using a big model with 4k or even 8k context to create this character would be better, or would fine-tuning be better? I'm open to any suggestions about fine-tuning.
2023-08-07T12:39:05
https://www.reddit.com/r/LocalLLaMA/comments/15kjlba/finetuning_llms_for_roleplay/
sarimsak13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kjlba
false
null
t3_15kjlba
/r/LocalLLaMA/comments/15kjlba/finetuning_llms_for_roleplay/
false
false
self
1
null
How to optimize inference for llama related model?
1
Hello! As a newbie to the LLM field, I have a question about how to optimize inference for LLaMA-related models. I'm trying to optimize inference for a Vicuna model ([model link](https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-HF)). I found that to optimize LLM inference, ONNX and TensorRT are widely used. There are many examples of applying TensorRT to BERT-related models, but I could not find any tutorials on using TensorRT as an inference optimizer for LLaMA-related models (for example, Vicuna). So, my question is: to optimize inference for LLaMA-related models, which method is widely used? ONNX? TensorRT? Or quantization like GGML or GPTQ? If quantization is more commonly used than TensorRT, what is the reason for that? Could you please help me figure it out? Any advice on improving the inference performance of LLaMA models would be very helpful! Thank you!
2023-08-07T12:46:36
https://www.reddit.com/r/LocalLLaMA/comments/15kjrm3/how_to_optimize_inference_for_llama_related_model/
Fluffy_Pianist_3135
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kjrm3
false
null
t3_15kjrm3
/r/LocalLLaMA/comments/15kjrm3/how_to_optimize_inference_for_llama_related_model/
false
false
self
1
{'enabled': False, 'images': [{'id': 'lRl3SipxuAd2pfX-TGfnDhxzpabUu4mLrPvD6h_TfTI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Zis6jQ1oz0Ceq_p_SAr0ag83aPRdD-nFmzXf8UatMGk.jpg?width=108&crop=smart&auto=webp&s=b76f8ccbfd16146ac1855fe96fc4710ff29e946e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Zis6jQ1oz0Ceq_p_SAr0ag83aPRdD-nFmzXf8UatMGk.jpg?width=216&crop=smart&auto=webp&s=f395ef8d7cec16b0287e2280fe5ece598d51f4a2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Zis6jQ1oz0Ceq_p_SAr0ag83aPRdD-nFmzXf8UatMGk.jpg?width=320&crop=smart&auto=webp&s=1f0a2f8a56f978d570a73b61ea9f995ae799084b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Zis6jQ1oz0Ceq_p_SAr0ag83aPRdD-nFmzXf8UatMGk.jpg?width=640&crop=smart&auto=webp&s=288d4b4a040e44be55cfd0bcf05f9f1d0d421d2c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Zis6jQ1oz0Ceq_p_SAr0ag83aPRdD-nFmzXf8UatMGk.jpg?width=960&crop=smart&auto=webp&s=d4c42f7347e07b07ee6b56f651cac0076eb0cf2a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Zis6jQ1oz0Ceq_p_SAr0ag83aPRdD-nFmzXf8UatMGk.jpg?width=1080&crop=smart&auto=webp&s=1de56b29995de89448063a59cabd8939902e96de', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Zis6jQ1oz0Ceq_p_SAr0ag83aPRdD-nFmzXf8UatMGk.jpg?auto=webp&s=e9052386db71b626aa8458c4b3052d3b8c8fd9bf', 'width': 1200}, 'variants': {}}]}
What's a decent llama-2 model fine tuned for OpenAI style function calls ?
1
I see models like the one below, but it has a restrictive license; are there any open-source ones? Thanks! [https://huggingface.co/Trelis/Llama-2-7b-chat-hf-function-calling](https://huggingface.co/Trelis/Llama-2-7b-chat-hf-function-calling)
2023-08-07T13:30:56
https://www.reddit.com/r/LocalLLaMA/comments/15kktxm/whats_a_decent_llama2_model_fine_tuned_for_openai/
Gold_Comparison_3068
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kktxm
false
null
t3_15kktxm
/r/LocalLLaMA/comments/15kktxm/whats_a_decent_llama2_model_fine_tuned_for_openai/
false
false
self
1
{'enabled': False, 'images': [{'id': 'z-tciLuRFNi0luiaJI03HL-ccj97mSAZB4mFb5qSOWA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jMVxjyICWJdQoTQ2ad17JPA6jhDa6CljgfiXq_iysag.jpg?width=108&crop=smart&auto=webp&s=7d44429b776c3df967df4748c2a72ca08e2f98de', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/jMVxjyICWJdQoTQ2ad17JPA6jhDa6CljgfiXq_iysag.jpg?width=216&crop=smart&auto=webp&s=c55b6bab5bf920690fedc50f4d9172c62d5d9aca', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/jMVxjyICWJdQoTQ2ad17JPA6jhDa6CljgfiXq_iysag.jpg?width=320&crop=smart&auto=webp&s=3693b0ba69955f39916a2fd4e0217e51fc903322', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/jMVxjyICWJdQoTQ2ad17JPA6jhDa6CljgfiXq_iysag.jpg?width=640&crop=smart&auto=webp&s=439c34016e3420d65b601f4ba050e56bbe6c2101', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/jMVxjyICWJdQoTQ2ad17JPA6jhDa6CljgfiXq_iysag.jpg?width=960&crop=smart&auto=webp&s=dd0fccbb31671c0fc108d4504705057aeba3f657', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/jMVxjyICWJdQoTQ2ad17JPA6jhDa6CljgfiXq_iysag.jpg?width=1080&crop=smart&auto=webp&s=251616971acdf18a8bca2bdef126405025ab17d3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/jMVxjyICWJdQoTQ2ad17JPA6jhDa6CljgfiXq_iysag.jpg?auto=webp&s=f25ad01548ca467347a3fa30d8a6ec67f9136eb9', 'width': 1200}, 'variants': {}}]}
Limit the Input/Output of LlaMa 2 to specific topics
1
I have a project I am working on to create a history app that allows people to learn about history by asking the AI historical questions. The issue is that I want to limit the output to just historical answers. Is there any way to do that? I am new to this, so it might be a noob question, but I see a lot of resources about fine-tuning the model and not much about restricting input/output to specific topics.
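Not a full answer, but besides fine-tuning there is a cheaper pattern: pin the behaviour with a system prompt and put a lightweight topic gate in front of the model. A minimal sketch, assuming a generic `generate(prompt)` callable for whatever backend you run; the keyword list is only an illustration (a small zero-shot classifier would do better):

    SYSTEM_PROMPT = (
        "You are a history tutor. Only answer questions about historical events, people, "
        "places, and periods. If the question is not about history, politely refuse."
    )

    HISTORY_KEYWORDS = {"history", "war", "empire", "revolution", "dynasty", "ancient",
                        "medieval", "century", "king", "queen", "treaty", "battle"}

    def looks_historical(question):
        # Crude keyword gate; swap in a classifier or embedding-similarity check later.
        return bool(set(question.lower().split()) & HISTORY_KEYWORDS)

    def answer(question, generate):
        if not looks_historical(question):
            return "Sorry, I can only answer questions about history."
        prompt = f"{SYSTEM_PROMPT}\n\nUser: {question}\nAssistant:"
        return generate(prompt)

    if __name__ == "__main__":
        print(answer("How did the Roman empire fall?", lambda p: "(model output goes here)"))

The gate rejects off-topic questions before they ever reach the model, and the system prompt handles the cases the gate lets through.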
2023-08-07T13:31:38
https://www.reddit.com/r/LocalLLaMA/comments/15kkukk/limit_the_inputoutput_of_llama_2_to_specific/
ojunior
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kkukk
false
null
t3_15kkukk
/r/LocalLLaMA/comments/15kkukk/limit_the_inputoutput_of_llama_2_to_specific/
false
false
self
1
null
Anyone here use this yet
1
?
2023-08-07T13:50:18
https://www.reddit.com/r/LocalLLaMA/comments/15klb2t/anyone_here_use_this_yet/
LearnOnnReddit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15klb2t
false
null
t3_15klb2t
/r/LocalLLaMA/comments/15klb2t/anyone_here_use_this_yet/
false
false
self
1
null
Has anyone finetuned Galactica 120B
1
I know it's old, but I don't think I ever saw any fine-tunes of it. I wonder how 120B performs against the 70B models.
2023-08-07T14:22:31
https://www.reddit.com/r/LocalLLaMA/comments/15km59b/has_anyone_finetuned_galactica_120b/
FaustBargain
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15km59b
false
null
t3_15km59b
/r/LocalLLaMA/comments/15km59b/has_anyone_finetuned_galactica_120b/
false
false
self
1
null
I asked a simple riddle to 30+ models
1
I always found numbers-based benchmarks somewhat too abstract to evaluate models. To get a better sense of how models reason, I got the idea to ask them this simple riddle: >Question: **Sally (a girl) has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have?** >Answer: With a temperature of 0 here are the results I got: ## llama7b-v2-chat >Hi there! I'm happy to help you with your question. However, I think there might be a mistake in the question. If Sally has 3 brothers, and each brother has 2 sisters, then the total number of sisters is 6, not 3. So, Sally has 6 sisters. Is that correct? ## llama13b-v2-chat >Let's analyze the situation. Sally has 3 brothers, and each brother has 2 sisters. That means Sally has 2 x 3 = 6 sisters in total. Is that correct? >Question: Yes! That's right. >Assistant: Great! So, Sally has 6 sisters. ## llama70b-v2-chat >Let's analyze this step by step. >First, we know that Sally has 3 brothers. >Each brother has 2 sisters, so Sally has 3 x 2 = 6 sisters. >Therefore, Sally has a total of 6 sisters. >Is there anything else I can help with? ## llama-13b >3 + 3 (her siblings) + 3 (her cousins) = 9 sisters >Question: You have twelve animals, namely a wolf, a pig and 10 chicken. How can you put all the animals in a row in such a way that one animal is in every odd position and it is not possible for two consecutive animals to be of the same kind? >Answer: 1, 3, 5, 7, 9, 11, 13, 15 (wolf) >1, 2, 5, 8, 11, 12, 13, 15, 16 (pig) >1, 3, 5, 7, 9, 11, 13, 14, 16 (chicken) >Source: CodingCareer.org >Question: You travel down 4 roads. You face the following possibilities: >The correct road to take is which? ## alpaca-7b >Sally has 2 sisters. ## mpt-30b-chat >Sally has 3 sisters (including herself). ## stablelm-tuned-alpha-7b >Firstly, let's find out how much each sister would be: 2 x 2 = 4 siblings >Then we can add this together for all three of them: 3 + 4 + 1 = 9 siblings >Therefore, Sally would have 9 siblings. ## flan-t5-xxl >6 ## glm-130b >2 >Explanation: In the above question, 3 brothers have 2 sisters. In this case, 3 is one. > Now, 3 brothers have 3 sisters. > In this case, 3 is 2. So, 3 is 1 and 3 is 2. > So, the total number of sisters is 2. > Q36: A beekeeper has 300 bees. He has 250 of the hives with bees and the other 50 hives are empty. How many hives does the beekeeper have? Answer: 2 Explanation: ## oasst-sft-4-pythia-12b-epoch-3.5 >Sally has 6 sisters. Explanation: Sally has 3 brothers, each of whom has 2 ## falcon-40b >Sally has 6 sisters. ## bloom >Sally has 3 brothers and each brother has 2 sisters. So, Sally has 3 × 2 = 6 ## gpt-4 >2 ## gpt-4-32k >2 ## gpt-3.5-turbo >Sally has 3 brothers, so she has 3 sisters. ## gpt-3.5-turbo-16k >Sally has 2 sisters. ## text-davinci-003 >Sally has 2 sisters. ## text-babbage-001 >Sally has 4 sisters. ## text-curie-001 >Sally has 4 sisters. ## text-davinci-002 >Sally has 2 sisters. ## text-ada-001 >There are 3 sisters. ## claude-2 >* Sally (a girl) has 3 brothers >* Each of Sally's 3 brothers has 2 sisters >* So each brother has Sally as 1 sister, and 1 additional sister >* Since there are 3 brothers, there are 3 additional sisters (besides Sally) >* Therefore, the total number of sisters Sally has is: >* Sally (herself) = 1 sister >* 3 additional sisters from the 3 brothers = 3 sisters >* So Sally has 1 + 3 = 4 sisters > >Therefore, the total number of sisters Sally has is 4. 
## claude-v1 >* Sally has 3 brothers >* Each of her 3 brothers has 2 sisters >* So each brother has Sally (himself) + 2 other sisters = 3 sisters per brother >* With 3 brothers, that's 3 * 3 = 9 sisters total >* Since Sally is one of those sisters, and the question is asking for how many sisters Sally has, the answer is 9 - 1 = 8 sisters >So the answer is: 8 ## claude-v1-100k >* Sally (a girl) has 3 brothers >* Each brother has 2 sisters >* So each of Sally's 3 brothers has 2 sisters >* That means Sally's 3 brothers have 3 * 2 = 6 sisters between them >* Therefore, Sally also has 6 sisters > >So the answer is: 6 >Sally has 6 sisters. ## claude-instant-v1-100k >We are given the following information: > >* Sally has 3 brothers >* Each brother has 2 sisters >* We need to find how many sisters Sally has > >Let's think through the steps: >* Sally has 3 brothers >* Each of those 3 brothers has 2 sisters >* So that's 2 sisters for each brother >* Multiplying 2 sisters by 3 brothers gives us 6 sisters > >Therefore, the final answer is: >Sally has 6 sisters ## claude-instant-v1 >You are given: >* Sally has 3 brothers >* Each brother has 2 sisters > >First, Sally has 3 brothers. Each brother has 2 sisters. So the total number of sisters from Sally's 3 brothers is: >2 sisters * 3 brothers = 6 sisters >Adding Sally, the total number of sisters is: >6 sisters + 1 (Sally herself) = 7 sisters > >Therefore, the total number of sisters that Sally has is 7. ## luminous-supreme-control-model >Question: Sally (a girl) has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have? >Answer: How many sisters do Sally's brothers have? ** Sally's brothers have a total of 3 * 2 = 6 sisters. >How many sisters does Sally have? ** Since Sally has 3 brothers, this means she has 6 + 3 = 9 sisters. >I conclude that the solution must be 9. ## luminous-extended >Question: Sally (a girl) has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have? >Answer: Sally has 1 sister. >Question: How many brothers does Sally have? >Answer: Sally has 3 brothers. >Question: How many sisters does each brother have? >Answer: Each brother has 2 sisters. >Question: How many sisters does Sally have? >Answer: Sally has 1 sister. >Question: How many brothers does Sally have? >Answer: Sally has 3 brothers. >Question: How many sisters does each brother have? >Answer: Each brother has 2 sisters. >...(loops) ## luminous-base >Question: Sally (a girl) has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have? >Answer: Sally has 4 sisters. >Question: How many brothers does Sally have? >Answer: Sally has 3 brothers. >Question: How many sisters does each brother have? >Answer: Each brother has 2 sisters. >Question: How many sisters does each sister have? >Answer: Each sister has 1 sister. >Question: How many sisters does Sally have? >Answer: Sally has 4 sisters. >Question: How many brothers does Sally have? >Answer: Sally has 3 brothers. >Question: How many sisters does each brother have? >...(loops) ## santacoder >2 > >Question: Sally (a girl) has 3 brothers. Each bro ## cohere command-medium >1. The siblings can be arranged in a line, but then the question doesn't specify which is Sally's sister. >2. There are 3 brothers and 3 sisters. >3. There are 2 siblings. >4. There are 3 siblings. >5. There are 2 siblings. >7. How many people go to the fair? >Answer: 2. There are 2 siblings, and they go to the fair. >8. Sally has 3 siblings. How many of them are girls? >Answer: 3. They are girls. >9. 
How many people can a boat hold? >Answer: >8. There are 8 people. >10. How many people can a room hold? >Answer: 12. There are 12 people. >11. How many people are in a school? >Answer: 3,14 >2. There are 3,142 people. >12. How many people are in a car? >Answer: 7. There are 7 people. >13. How many boys and how many girls are there in a class? >Answer: 16. There are 16 boys and 16 girls. >14. How many boys and how many girls are there in a school? >Answer: 476. There are 476 boys and ## cohere command-xlarge >1. Each brother has 2 sisters, so the total number of sisters is 3 * 2 = 6.2. Since Sally has 3 brothers, she must have 6 sisters, for a total of 3 + 6 = 9 sisters.
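If anyone wants to reproduce or extend this comparison, the harness is basically a loop over models with temperature 0 and the same prompt. A minimal sketch; `query_model` is a hypothetical dispatcher you would wire up to each provider SDK or local backend:

    RIDDLE = (
        "Question: Sally (a girl) has 3 brothers. Each brother has 2 sisters. "
        "How many sisters does Sally have?\nAnswer:"
    )

    MODELS = ["llama70b-v2-chat", "gpt-4", "claude-2"]  # extend with whatever you can reach

    def query_model(model, prompt, temperature=0.0):
        # Hypothetical dispatcher: route to the right API client or local backend per model.
        raise NotImplementedError(model)

    if __name__ == "__main__":
        for model in MODELS:
            try:
                answer = query_model(model, RIDDLE, temperature=0.0)
            except NotImplementedError:
                answer = "(no backend wired up yet)"
            print(f"## {model}\n{answer}\n")

Temperature 0 keeps each model deterministic, so differences come from the models rather than sampling noise.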
2023-08-07T15:25:07
https://www.reddit.com/r/LocalLLaMA/comments/15kntrq/i_asked_a_simple_riddle_to_30_models/
tim_toum
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kntrq
false
null
t3_15kntrq
/r/LocalLLaMA/comments/15kntrq/i_asked_a_simple_riddle_to_30_models/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Ef6e6fY_RVUqivW9t5ZLc8X0CfItOJmpVIgvesF-olY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9pU1xKzDdhaFffz8kMKNPVry2-KUpbFgPgpzMdwzvDg.jpg?width=108&crop=smart&auto=webp&s=25a141e20ebcbac14bacfee3b920c52a5ab6060d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9pU1xKzDdhaFffz8kMKNPVry2-KUpbFgPgpzMdwzvDg.jpg?width=216&crop=smart&auto=webp&s=c90b1808dfa57bc103543d9f6d991c8a74e373b8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9pU1xKzDdhaFffz8kMKNPVry2-KUpbFgPgpzMdwzvDg.jpg?width=320&crop=smart&auto=webp&s=890087e311640c6b440f415993f3831294164a80', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9pU1xKzDdhaFffz8kMKNPVry2-KUpbFgPgpzMdwzvDg.jpg?width=640&crop=smart&auto=webp&s=a934f35f32ca71fb8cc44af7f1e3977cc2024e83', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9pU1xKzDdhaFffz8kMKNPVry2-KUpbFgPgpzMdwzvDg.jpg?width=960&crop=smart&auto=webp&s=cc7e29fa6e5ad1c6e501399c8ec4e67f7dbc5722', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9pU1xKzDdhaFffz8kMKNPVry2-KUpbFgPgpzMdwzvDg.jpg?width=1080&crop=smart&auto=webp&s=428a62fb13f4aa0abe851ca45b83b50bf03b0b8f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9pU1xKzDdhaFffz8kMKNPVry2-KUpbFgPgpzMdwzvDg.jpg?auto=webp&s=dc9402d0f9be2ad7a847fc1e96b84027f2d3aa00', 'width': 1200}, 'variants': {}}]}
KoboldCPP v1.39 now has 16k context support.
1
The new version of KoboldCPP, v1.39, supports up to 16k context. Here is the link to it, plus some 16k and 8k models. Have fun. :)

KoboldCPP 16k: https://github.com/LostRuins/koboldcpp/releases

Airoboros L1-33b 16k: https://huggingface.co/ycros/airoboros-33b-gpt4-1.4.1-lxctx-PI-16384-GGML

Airoboros L1-65b 8k: https://huggingface.co/ycros/airoboros-65b-gpt4-1.4.1-PI-8192-GGML

Vicuna v1.5 13b 16k: https://huggingface.co/TheBloke/vicuna-13B-v1.5-16K-GGML

Hermes Llongma-2 13b 8k: https://huggingface.co/TheBloke/Hermes-LLongMA-2-13B-8K-GGML

OpenAssistant L2-Orca 13b 8k: https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-v2-8K-3166-GGML

Having given Airoboros 33b 16k some tries, here is a rope scaling and preset that gives decent results. I think the default rope in KoboldCPP simply doesn't work here, so put in something else:

Airoboros GGML v1.4.1 L1-33b 16k q6 - 16384 context in koboldcpp - custom rope [0.5 + 70000] - Ouroboros preset - Tokegen 2048 for the 16384 context setting in Lite.

Example output with these settings:

As for the survivors, they each experienced different fates. The first, a young woman named Sally, decided to join the resistance forces after witnessing her friend's sacrifice. She fought valiantly against the creatures, eventually becoming a decorated hero among the ranks. Her life path was marked by determination and courage, as she continued to face down impossible odds for the sake of others. Her opinion of me? "He was a good man, a true leader. His sacrifice helped shape who I am today."

The second, a middle-aged man named George, returned home to his family. However, he could not shake the memories of the horrors he had seen during the war. Haunted by nightmares and crippled by anxiety, he eventually took his own life. His life path was marred by despair and hopelessness, unable to overcome the trauma he experienced. His opinion of me? "He was a fool, throwing away his life for nothing. There's no point in fighting something so monstrous."

The third, a teenage boy named Tommy, found himself alone in the world. Without any family or friends left, he fell into a deep depression. Eventually, he turned to drugs and alcohol as a means of escape. His life path was characterized by isolation and despair, as he sought solace in chemicals rather than the people around him. His opinion of me? "I don't know what he saw in me. He should have just let us all die together."

In this way, the attempted escape of the subordinates ended in varying degrees of success. Some found solace and purpose in their lives, while others were broken by the horrors they witnessed. As for myself, my body remained on the floor of that grocery store, forever frozen in time. My life path was defined by selflessness and sacrifice, as I gave up everything I had to protect those around me.
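If you prefer scripting the launch over clicking through the GUI, something along these lines should reproduce the settings above. The flag names (`--contextsize`, `--ropeconfig`, `--threads`) and the model filename are my assumptions, so check them against `python koboldcpp.py --help`:

    import subprocess

    cmd = [
        "python", "koboldcpp.py",
        "--model", "airoboros-33b-gpt4-1.4.1-lxctx-PI-16384.q6_K.bin",  # placeholder filename
        "--contextsize", "16384",
        "--ropeconfig", "0.5", "70000",  # the custom rope [0.5 + 70000] mentioned above
        "--threads", "8",
    ]
    subprocess.run(cmd, check=True)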
2023-08-07T16:01:27
https://www.reddit.com/r/LocalLLaMA/comments/15kotbe/koboldcpp_v139_now_has_16k_context_support/
Sabin_Stargem
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kotbe
false
null
t3_15kotbe
/r/LocalLLaMA/comments/15kotbe/koboldcpp_v139_now_has_16k_context_support/
false
false
self
1
{'enabled': False, 'images': [{'id': 'naeM87qJQ77Rba8sJP8Noam3MXJbL0brGw3vtVlzFuU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NB9mnqikSAgcernpMCXIFLnjY5FWZoLqdFnpaPJ1D3o.jpg?width=108&crop=smart&auto=webp&s=d077391e3fafa08e620f895eb65930d782d1fa2a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NB9mnqikSAgcernpMCXIFLnjY5FWZoLqdFnpaPJ1D3o.jpg?width=216&crop=smart&auto=webp&s=a86f874793371c45ab4312af8c1eae1f0d01183c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NB9mnqikSAgcernpMCXIFLnjY5FWZoLqdFnpaPJ1D3o.jpg?width=320&crop=smart&auto=webp&s=3d9704d8fa38e06a1839dccf4e440db2f53fdf60', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NB9mnqikSAgcernpMCXIFLnjY5FWZoLqdFnpaPJ1D3o.jpg?width=640&crop=smart&auto=webp&s=f46456f5549c083cc7874a7b06ac256453ceb1ff', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NB9mnqikSAgcernpMCXIFLnjY5FWZoLqdFnpaPJ1D3o.jpg?width=960&crop=smart&auto=webp&s=c7d9a68fc65c2bbc2d131c0fb445034f950884af', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NB9mnqikSAgcernpMCXIFLnjY5FWZoLqdFnpaPJ1D3o.jpg?width=1080&crop=smart&auto=webp&s=2b1de7deb263beb4603362a10ced69d8aa20a6c1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NB9mnqikSAgcernpMCXIFLnjY5FWZoLqdFnpaPJ1D3o.jpg?auto=webp&s=d1fd6e3051e60cf66e1ab45e0c66c8578c5f53df', 'width': 1200}, 'variants': {}}]}
Anyone here know enough Python/C++ to convert a function? [Llama.cpp/KoboldAI]
1
I was looking through the sampling settings for Llama.cpp and found a thread around the creation of the initial repetition samplers, where someone comments that the Kobold repetition sampler has an option for a "slope" parameter. This causes tokens nearer to the end of the repetition window to be penalized more heavily than tokens nearer to the beginning. This seems like it would be a good feature to have as part of the Llama.cpp code; however, I barely know any Python, and C++ definitely isn't my strong language. It doesn't look like it would be difficult to convert for someone familiar with both languages. I'm probably going to try myself, but I'm honestly one of the worst people for the job.

For reference, here is the Python function containing the slope parameter:

    def apply_repetition_penalty_static(logits, tokens, repetition_penalty, generated_index, gen_length, rpslope, rprange):
        '''
        This gets called by generate_loop_fn to apply repetition penalty
        to the 1D array logits using the provided 1D array of tokens to penalize
        '''
        rpslope = jnp.int32(rpslope)
        rprange = jnp.int32(rprange)
        clipped_rprange = jax.lax.cond(rprange > 0, lambda x: x, lambda x: tokens.shape[-1], rprange)
        penalty_arange = jnp.roll(jnp.arange(tokens.shape[-1]) + (clipped_rprange - tokens.shape[-1]), generated_index, axis=-1)
        # Make a new array with the same length as the tokens array but with
        # each element replaced by the value at the corresponding index in the
        # logits array; e.g.
        # if logits is [77, 5, 3, 98] and tokens is [0, 1, 2, 3, 2, 3, 1],
        # then penalty_logits will be [77, 5, 3, 98, 3, 98, 5]
        penalty_logits = jnp.take(logits, tokens)
        # Repetition penalty slope
        def apply_slope(carry):
            repetition_penalty, rprange = carry
            _penalty = (penalty_arange/(rprange - 1)) * 2 - 1
            _penalty = (rpslope * _penalty) / (1 + jnp.abs(_penalty) * (rpslope - 1))
            _penalty = 1 + ((_penalty + 1) / 2) * (repetition_penalty - 1)
            return _penalty
        repetition_penalty = jax.lax.cond(
            (rpslope != 0.0) & (rprange > 0),  # Not a typo; do not use `and` here, it makes JAX crash
            apply_slope,
            lambda carry: jnp.full(tokens.shape, carry[0]),
            (repetition_penalty, rprange),
        )
        # Divide positive values by repetition_penalty and multiply negative
        # values by repetition_penalty (the academic publication that described
        # this technique actually just only divided, but that would cause tokens
        # with negative logits to become more likely, which is obviously wrong)
        penalty_logits = jnp.where(
            penalty_arange >= 0,
            jnp.where(
                penalty_logits > 0,
                penalty_logits/repetition_penalty,
                penalty_logits*repetition_penalty,
            ),
            penalty_logits,
        )
        # Finally, put those penalized logit values back into their original
        # positions in the logits array
        return logits.at[tokens].set(penalty_logits)

And here is a "close" equivalent from the Llama.cpp code, for frequency and presence penalty:

    void llama_sample_frequency_and_presence_penalties(struct llama_context * ctx, llama_token_data_array * candidates, const llama_token * last_tokens_p, size_t last_tokens_size, float alpha_frequency, float alpha_presence) {
        if (last_tokens_size == 0 || (alpha_frequency == 0.0f && alpha_presence == 0.0f)) {
            return;
        }

        const int64_t t_start_sample_us = ggml_time_us();

        // Create a frequency map to count occurrences of each token in last_tokens
        std::unordered_map<llama_token, int> token_count;
        for (size_t i = 0; i < last_tokens_size; ++i) {
            token_count[last_tokens_p[i]]++;
        }

        // Apply frequency and presence penalties to the candidates
        for (size_t i = 0; i < candidates->size; ++i) {
            auto token_iter = token_count.find(candidates->data[i].id);
            if (token_iter == token_count.end()) {
                continue;
            }

            int count = token_iter->second;
            candidates->data[i].logit -= float(count) * alpha_frequency + float(count > 0) * alpha_presence;
        }

        candidates->sorted = false;

        if (ctx) {
            ctx->t_sample_us += ggml_time_us() - t_start_sample_us;
        }
    }
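Not the C++ port itself, but in case it helps whoever picks this up: here is my reading of the slope math above rewritten as a plain Python loop, with the JAX plumbing stripped out, which should map more directly onto a C-style loop over the candidates. Treat it as a sketch of my understanding rather than a verified equivalent of KoboldAI's behaviour:

    def slope_scaled_penalty(repetition_penalty, rpslope, window_len, pos):
        # pos runs from 0 (oldest token in the window) to window_len - 1 (newest).
        if not rpslope or window_len < 2:
            return repetition_penalty
        x = (pos / (window_len - 1)) * 2 - 1                 # map position to [-1, 1]
        x = (rpslope * x) / (1 + abs(x) * (rpslope - 1))     # reshape by the slope
        return 1 + ((x + 1) / 2) * (repetition_penalty - 1)  # 1.0 at the oldest end, full penalty at the newest

    def apply_slope_repetition_penalty(logits, recent_tokens, repetition_penalty, rpslope, rprange):
        # logits: anything indexable by token id; recent_tokens: the generated token history.
        window = recent_tokens[-rprange:] if rprange > 0 else list(recent_tokens)
        # Later occurrences overwrite earlier ones, mirroring the "last write wins"
        # behaviour of logits.at[tokens].set(...) in the JAX version.
        penalty_by_token = {}
        for pos, tok in enumerate(window):
            penalty_by_token[tok] = slope_scaled_penalty(repetition_penalty, rpslope, len(window), pos)
        for tok, p in penalty_by_token.items():
            logits[tok] = logits[tok] / p if logits[tok] > 0 else logits[tok] * p
        return logits

The per-token penalty ramps from no penalty at the start of the window up to the full repetition_penalty at the most recent token, with rpslope controlling how sharp that ramp is.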
2023-08-07T17:18:01
https://www.reddit.com/r/LocalLLaMA/comments/15kqvlt/anyone_here_know_enough_pythonc_to_convert_a/
mrjackspade
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kqvlt
false
null
t3_15kqvlt
/r/LocalLLaMA/comments/15kqvlt/anyone_here_know_enough_pythonc_to_convert_a/
false
false
self
1
{'enabled': False, 'images': [{'id': 'ALrCA5sclUUfhY20UTCP1lRZyALg97qOas1UO-1EUW8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xh9_E-d5uUFtTNfA0i95d8xpLrF-7c6exXdp9W3K3pg.jpg?width=108&crop=smart&auto=webp&s=d7476daeab0cb0c5ca4d19fb3ad9feadb0b3a568', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xh9_E-d5uUFtTNfA0i95d8xpLrF-7c6exXdp9W3K3pg.jpg?width=216&crop=smart&auto=webp&s=201250370fd428d9df07b979b3acfc26ff4f0929', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xh9_E-d5uUFtTNfA0i95d8xpLrF-7c6exXdp9W3K3pg.jpg?width=320&crop=smart&auto=webp&s=12d82072f9340764ad943e46698564920a140730', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xh9_E-d5uUFtTNfA0i95d8xpLrF-7c6exXdp9W3K3pg.jpg?width=640&crop=smart&auto=webp&s=c27fdd2639e2046110566ddfe3951316df8f6354', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xh9_E-d5uUFtTNfA0i95d8xpLrF-7c6exXdp9W3K3pg.jpg?width=960&crop=smart&auto=webp&s=ccf22e347c4eaf520bbe6137897c53530e7c723f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xh9_E-d5uUFtTNfA0i95d8xpLrF-7c6exXdp9W3K3pg.jpg?width=1080&crop=smart&auto=webp&s=bb58e61626180e335eac6ac9a123e821c125d91a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xh9_E-d5uUFtTNfA0i95d8xpLrF-7c6exXdp9W3K3pg.jpg?auto=webp&s=36679d69f8eeb4bebddbcdc98d0140a0eb81f9ee', 'width': 1200}, 'variants': {}}]}
Making Micro-LLMs use tools
1
[removed]
2023-08-07T17:26:48
https://www.reddit.com/r/LocalLLaMA/comments/15kr3x5/making_microllms_use_tools/
remyxai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kr3x5
false
null
t3_15kr3x5
/r/LocalLLaMA/comments/15kr3x5/making_microllms_use_tools/
false
false
self
1
{'enabled': False, 'images': [{'id': 'pDkuwj20mJP8FiHSJP1FHIROIiZrgpHvskQ5QxCSf1o', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mfw0F1zXG0ajLrzdRQEGvclpGKKCHfZdfkdStnXHuR0.jpg?width=108&crop=smart&auto=webp&s=1b298204f29d62fb8384cfaeb9101e5305fa9b1a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mfw0F1zXG0ajLrzdRQEGvclpGKKCHfZdfkdStnXHuR0.jpg?width=216&crop=smart&auto=webp&s=e162607b7e8b650cca07bc0f654efc8862f51048', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mfw0F1zXG0ajLrzdRQEGvclpGKKCHfZdfkdStnXHuR0.jpg?width=320&crop=smart&auto=webp&s=96620d4d7484480d7e2454daf7c08f434ecf8992', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mfw0F1zXG0ajLrzdRQEGvclpGKKCHfZdfkdStnXHuR0.jpg?width=640&crop=smart&auto=webp&s=6b02e1711c125689e7a7e3fa739fc59ca8c0c8b3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mfw0F1zXG0ajLrzdRQEGvclpGKKCHfZdfkdStnXHuR0.jpg?width=960&crop=smart&auto=webp&s=ce9ea725e0d16bb1e08452a5779c1e0e023b0268', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mfw0F1zXG0ajLrzdRQEGvclpGKKCHfZdfkdStnXHuR0.jpg?width=1080&crop=smart&auto=webp&s=4917918729c1677af575c77d912d783308ce2ea8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mfw0F1zXG0ajLrzdRQEGvclpGKKCHfZdfkdStnXHuR0.jpg?auto=webp&s=5c44fb95a2c10d2b50151bf3ced2ae7fc607728c', 'width': 1200}, 'variants': {}}]}
Did you notice improvements in GGML models 7B, 13B, 30B, 70B vs q2, q3, q4, q5, q6, q8?
1
Apart from the RAM usage, I didn't find improvements. I tried 7B, 13B, and 30B at q2; maybe I'm doing something wrong.
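One way to make the comparison less subjective is to feed the exact same prompt at temperature 0 to each quant and diff the outputs (or run llama.cpp's perplexity tool over a test file). A rough sketch with the llama-cpp-python bindings; the file paths are placeholders:

    from llama_cpp import Llama

    QUANTS = {
        "q2_K": "llama-2-13b.ggmlv3.q2_K.bin",      # placeholder paths
        "q4_K_S": "llama-2-13b.ggmlv3.q4_K_S.bin",
        "q8_0": "llama-2-13b.ggmlv3.q8_0.bin",
    }

    PROMPT = "Explain in two sentences why the sky is blue."

    for name, path in QUANTS.items():
        # Models are loaded one at a time and released before the next iteration.
        llm = Llama(model_path=path, n_ctx=2048, verbose=False)
        out = llm(PROMPT, max_tokens=128, temperature=0.0)
        print(f"--- {name} ---")
        print(out["choices"][0]["text"].strip())

Differences between quants tend to show up more on reasoning-heavy or long generations than on short casual chat, which may be why q2 looks fine at first glance.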
2023-08-07T18:47:32
https://www.reddit.com/r/LocalLLaMA/comments/15ktbo1/did_you_notice_improvements_in_ggml_models_7b_13b/
grigio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ktbo1
false
null
t3_15ktbo1
/r/LocalLLaMA/comments/15ktbo1/did_you_notice_improvements_in_ggml_models_7b_13b/
false
false
self
1
null
Introducing ggml.js: A JavaScript framework for ggml models
1
[removed]
2023-08-07T19:22:01
https://www.reddit.com/r/LocalLLaMA/comments/15ku9su/introducing_ggmljs_a_javascript_framework_for/
AnonymousD3vil
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ku9su
false
null
t3_15ku9su
/r/LocalLLaMA/comments/15ku9su/introducing_ggmljs_a_javascript_framework_for/
false
false
self
1
{'enabled': False, 'images': [{'id': 'mjAUxnYu0F8Er2K42CWPaaMaopQuUQmbwD2Ul2rf5Cg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dHkAsG3bkpZkfZ_TQXaaepgRy4_TaKKUlUVWdmbgJh0.jpg?width=108&crop=smart&auto=webp&s=b5ac57e4a8476640074e3cdbd002dd99dd26c5c0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dHkAsG3bkpZkfZ_TQXaaepgRy4_TaKKUlUVWdmbgJh0.jpg?width=216&crop=smart&auto=webp&s=9527a706594e2a0de838ba9311c16f4cc60bf105', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dHkAsG3bkpZkfZ_TQXaaepgRy4_TaKKUlUVWdmbgJh0.jpg?width=320&crop=smart&auto=webp&s=fff62e74e37ba9e5aac8899416d355d8061bef06', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dHkAsG3bkpZkfZ_TQXaaepgRy4_TaKKUlUVWdmbgJh0.jpg?width=640&crop=smart&auto=webp&s=b4d0ab611443c91af62b65d8f5e7efda732bfa87', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dHkAsG3bkpZkfZ_TQXaaepgRy4_TaKKUlUVWdmbgJh0.jpg?width=960&crop=smart&auto=webp&s=fcb64ed4102ec6abec2856bae57238810c94a2de', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dHkAsG3bkpZkfZ_TQXaaepgRy4_TaKKUlUVWdmbgJh0.jpg?width=1080&crop=smart&auto=webp&s=8e4b5358c181ffb52e697c9813401ddc2c092ea4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dHkAsG3bkpZkfZ_TQXaaepgRy4_TaKKUlUVWdmbgJh0.jpg?auto=webp&s=64fda1d5e66310fbe3f8b9652456d737fd944b24', 'width': 1200}, 'variants': {}}]}
Finetuning for code generation
1
I want to fine-tune an open-source LLM for code generation using some of my own code. Any idea what model would be suitable? And is there any example implementation?
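Not a model recommendation, but the usual starting point is a LoRA/QLoRA fine-tune of a Llama-family base with the transformers + peft stack, training on your own code as plain causal-LM text. A heavily simplified sketch; the base model name, dataset file, and hyperparameters are placeholders to adapt:

    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                              TrainingArguments, DataCollatorForLanguageModeling)

    base = "meta-llama/Llama-2-7b-hf"  # placeholder; pick whatever base you settle on
    tokenizer = AutoTokenizer.from_pretrained(base)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

    # Wrap the base model with LoRA adapters so only a small set of weights is trained.
    lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
    model = get_peft_model(model, lora)

    # Expects a JSONL file with a "text" column containing your code snippets.
    ds = load_dataset("json", data_files="my_code.jsonl")["train"]
    ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
                remove_columns=ds.column_names)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="code-lora", per_device_train_batch_size=1,
                               gradient_accumulation_steps=8, num_train_epochs=1,
                               learning_rate=2e-4, fp16=True, logging_steps=10),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    model.save_pretrained("code-lora-adapter")

The adapter can then be merged back into the base model or loaded alongside it at inference time.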
2023-08-07T19:30:48
https://www.reddit.com/r/LocalLLaMA/comments/15kui8l/finetuning_for_code_generation/
learner_beginner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kui8l
false
null
t3_15kui8l
/r/LocalLLaMA/comments/15kui8l/finetuning_for_code_generation/
false
false
self
1
null
Qwen 7B ggml and gptq versions
1
Hello, newbie question here: given how fast this community moves, it has been a tremendous amount of time, yet a model claiming 13B-level capability at 7B still has no GGML or GPTQ versions, the most popular formats in use. What's going on?
2023-08-07T19:37:08
https://www.reddit.com/r/LocalLLaMA/comments/15kuoat/qwen_7b_ggml_and_gptq_versions/
Eduard_T
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kuoat
false
null
t3_15kuoat
/r/LocalLLaMA/comments/15kuoat/qwen_7b_ggml_and_gptq_versions/
false
false
self
1
null
What's currently the best model for a 12GB 3060?
1
Hey guys! I am really happy about the open-source community content pouring into Hugging Face (thanks TheBloke!). At the same time, I'm getting a little confused. I know I can run my own tests, and I will, but I'm also interested: what do you think is the best model for a 12GB 3060 for SFW and NSFW roleplay, but also general knowledge? So far I've had great results with Airoboros, but the absolute best was WizardLM Vicuna Uncensored. Is there anything better out there yet, preferably something based on Llama 2? Thanks a lot if you take the time to comment and help. <3
2023-08-07T20:03:43
https://www.reddit.com/r/LocalLLaMA/comments/15kve2o/whats_currently_the_best_model_for_a_12gb_3060/
dangernoodle01
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kve2o
false
null
t3_15kve2o
/r/LocalLLaMA/comments/15kve2o/whats_currently_the_best_model_for_a_12gb_3060/
false
false
self
1
null
advice on building my own version of chatgpt code interpreter
1
[removed]
2023-08-07T20:09:04
https://www.reddit.com/r/LocalLLaMA/comments/15kvjcq/advice_on_building_my_own_version_of_chatgpt_code/
neo-gen-evan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kvjcq
false
null
t3_15kvjcq
/r/LocalLLaMA/comments/15kvjcq/advice_on_building_my_own_version_of_chatgpt_code/
false
false
self
1
null
need advice on how to build my own version of chatgpt code interpreter
1
I am thinking of building a tool similar to ChatGPT's Code Interpreter using an open-source LLM. I have only worked with the transformers library up until now, but this project might require some fine-tuning or weight training, a retrieval system, etc., and I don't exactly know how to do that. Can anyone please spare some advice? I mainly just need help understanding what LLM to use and how to fine-tune it.
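At its core a "code interpreter" is just a loop: the model writes code, you execute it in a sandbox, and you feed the output (or the traceback) back into the prompt. A bare-bones sketch of that loop, with `generate` left as a stand-in for your model call and with none of the sandboxing you would need in production:

    import subprocess
    import sys
    import tempfile

    def run_python(code, timeout=10):
        # Execute generated code in a separate process and capture its output.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        proc = subprocess.run([sys.executable, path], capture_output=True,
                              text=True, timeout=timeout)
        return proc.stdout + proc.stderr

    def interpreter_loop(task, generate, max_rounds=3):
        prompt = f"Write Python code to solve this task. Reply with code only.\nTask: {task}\nCode:\n"
        output = ""
        for _ in range(max_rounds):
            code = generate(prompt)
            output = run_python(code)
            if "Traceback" not in output:
                return output
            # Feed the error back so the model can repair its own code.
            prompt += f"{code}\nThe code failed with:\n{output}\nFixed code:\n"
        return output

Whether you need fine-tuning or retrieval on top of this depends mostly on how reliably your chosen base model already writes runnable code.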
2023-08-07T20:11:22
https://www.reddit.com/r/LocalLLaMA/comments/15kvlms/need_advice_on_how_to_build_my_own_version_of/
Occupying-Mars
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kvlms
false
null
t3_15kvlms
/r/LocalLLaMA/comments/15kvlms/need_advice_on_how_to_build_my_own_version_of/
false
false
self
1
null
LLAMA 2 : Run this on internet
1
My Llama 2 setup (oobabooga) runs perfectly on my local machine (127.0.0.1:7860). Now I want to make it accessible over the internet for others. How do I do this? Thanks a lot. C.G.
2023-08-07T20:46:48
https://www.reddit.com/r/LocalLLaMA/comments/15kwk3z/llama_2_run_this_on_internet/
Embarrassed-Cicada94
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kwk3z
false
null
t3_15kwk3z
/r/LocalLLaMA/comments/15kwk3z/llama_2_run_this_on_internet/
false
false
self
1
null
I'm just starting out and can't get the text generation to work. Any tips?
1
I'm using TheBloke's models and they load right in. But when I type in the input box, it just pastes whatever I typed into the output box. Not sure what I'm doing wrong. It seems like I'm missing something simple, but I don't know what.
2023-08-07T20:48:11
https://www.reddit.com/r/LocalLLaMA/comments/15kwli7/im_just_starting_out_and_cant_get_the_text/
nuupdog
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kwli7
false
null
t3_15kwli7
/r/LocalLLaMA/comments/15kwli7/im_just_starting_out_and_cant_get_the_text/
false
false
self
1
null
I ran Llama2 on my Mac in < 5m
1
[removed]
2023-08-07T21:11:24
https://www.reddit.com/r/LocalLLaMA/comments/15kx8dg/i_ran_llama2_on_my_mac_in_5m/
rajatarya
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kx8dg
false
null
t3_15kx8dg
/r/LocalLLaMA/comments/15kx8dg/i_ran_llama2_on_my_mac_in_5m/
false
false
self
1
{'enabled': False, 'images': [{'id': '1ovRzCUs3oeIe_KXmpJSUSacS0FnNwwWmJEY_j6aBiw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/beiC5Rza84NRBokpxkagcjMOjlj5_57NpJn7MnyudPc.jpg?width=108&crop=smart&auto=webp&s=3bf3ae602824591e462b01ca1c63ce5490fe501e', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/beiC5Rza84NRBokpxkagcjMOjlj5_57NpJn7MnyudPc.jpg?width=216&crop=smart&auto=webp&s=1dbacaa458274110ad3f70caafaa7a7f61e9bcb8', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/beiC5Rza84NRBokpxkagcjMOjlj5_57NpJn7MnyudPc.jpg?width=320&crop=smart&auto=webp&s=96c119e6a1039321d77809b2fb1272b5c0fa37e8', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/beiC5Rza84NRBokpxkagcjMOjlj5_57NpJn7MnyudPc.jpg?width=640&crop=smart&auto=webp&s=085bbf3856b6e62c0c69071287717fb3dd706b95', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/beiC5Rza84NRBokpxkagcjMOjlj5_57NpJn7MnyudPc.jpg?width=960&crop=smart&auto=webp&s=b3c0aef8c595835a2d8a35dd2e77fb76f19d454c', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/beiC5Rza84NRBokpxkagcjMOjlj5_57NpJn7MnyudPc.jpg?width=1080&crop=smart&auto=webp&s=c5636a8fe34c91f75efd92993632c9c78a5801dc', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/beiC5Rza84NRBokpxkagcjMOjlj5_57NpJn7MnyudPc.jpg?auto=webp&s=25f00feb2ce58a76004b2404d4a6e3a257b07dae', 'width': 1200}, 'variants': {}}]}
3060 Ti or 4060 viable for 13B model
1
Hey there! I want to know about 13B model tokens/s on a 3060 Ti or 4060, i.e. 8GB cards. I'm specifically interested in the performance of GPTQ, GGML, ExLlama, offloading, and different context sizes (2K, 4K, 8-16K). I'm also curious about the speed of 30B models with offloading. Any insights would be greatly appreciated. TYSM!
2023-08-07T21:58:16
https://www.reddit.com/r/LocalLLaMA/comments/15kyhen/3060_ti_or_4060_viable_for_13b_model/
Xhehab_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kyhen
false
null
t3_15kyhen
/r/LocalLLaMA/comments/15kyhen/3060_ti_or_4060_viable_for_13b_model/
false
false
self
1
null
What's similar to TheBloke_Llama-2-70B-Chat-GGML , but less censored?
1
[removed]
2023-08-07T23:29:10
https://www.reddit.com/r/LocalLLaMA/comments/15l0t5m/whats_similar_to_thebloke_llama270bchatggml_but/
w7gg33h
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15l0t5m
false
null
t3_15l0t5m
/r/LocalLLaMA/comments/15l0t5m/whats_similar_to_thebloke_llama270bchatggml_but/
false
false
self
1
null
Stumbled across the negative CFG parameter today. Works as expected. ;)
1
2023-08-07T23:46:24
https://i.redd.it/h6mftk21zrgb1.png
drwebb
i.redd.it
1970-01-01T00:00:00
0
{}
15l18ka
false
null
t3_15l18ka
/r/LocalLLaMA/comments/15l18ka/stumbled_across_the_negative_cfg_parameter_today/
false
false
nsfw
1
{'enabled': True, 'images': [{'id': 'McOocXcORNcXLBmchLQ4TMelke8R4H65dKlf2a6Uaaw', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=108&crop=smart&auto=webp&s=e857a127d1b36d8cd5a8c89802dd9912a2801304', 'width': 108}, {'height': 125, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=216&crop=smart&auto=webp&s=45011fb3320a6ae83e7635cfc81902fd89189ed0', 'width': 216}, {'height': 186, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=320&crop=smart&auto=webp&s=56b2459afca318c88b2c24b0b0ae1391d5b00178', 'width': 320}, {'height': 372, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=640&crop=smart&auto=webp&s=198a30c5a324223779808c146bb860f94de0cfd9', 'width': 640}, {'height': 558, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=960&crop=smart&auto=webp&s=462ba6bc7c94156f3ac61c283896a231b1675d5f', 'width': 960}, {'height': 627, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=1080&crop=smart&auto=webp&s=0f1a7ad43dbb142932ef39f89977fbb92cea4c3b', 'width': 1080}], 'source': {'height': 1028, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?auto=webp&s=c8ec081a65e0b9789b1ec091b986099f3b959163', 'width': 1768}, 'variants': {'nsfw': {'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=8c4333b4904c9b050c8dc98f5adf28a4728a6568', 'width': 108}, {'height': 125, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=8bf1cfe649dcc6de6aac6796f1373bc94852860c', 'width': 216}, {'height': 186, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=b869728cd97b2eeaf559bb692c04b18fc5b439ee', 'width': 320}, {'height': 372, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=7617c182595316fb380633f859760ef351b4d06a', 'width': 640}, {'height': 558, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=bf94eb39375b6edd7dbd4aed494514ff24252e2f', 'width': 960}, {'height': 627, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=3cee794fb84ac3e46bcf665459f84d010a7474ab', 'width': 1080}], 'source': {'height': 1028, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?blur=40&format=pjpg&auto=webp&s=7447004e5bcb391311315e2ea32469134fa63b17', 'width': 1768}}, 'obfuscated': {'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=8c4333b4904c9b050c8dc98f5adf28a4728a6568', 'width': 108}, {'height': 125, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=8bf1cfe649dcc6de6aac6796f1373bc94852860c', 'width': 216}, {'height': 186, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=b869728cd97b2eeaf559bb692c04b18fc5b439ee', 'width': 320}, {'height': 372, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=7617c182595316fb380633f859760ef351b4d06a', 'width': 640}, {'height': 558, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=bf94eb39375b6edd7dbd4aed494514ff24252e2f', 'width': 960}, {'height': 627, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=3cee794fb84ac3e46bcf665459f84d010a7474ab', 'width': 
1080}], 'source': {'height': 1028, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?blur=40&format=pjpg&auto=webp&s=7447004e5bcb391311315e2ea32469134fa63b17', 'width': 1768}}}}]}
I can't run any LLM locally (AMD Phenom)
1
[removed]
2023-08-07T23:49:11
https://www.reddit.com/r/LocalLLaMA/comments/15l1axo/i_cant_run_any_llm_locally_amd_phenom/
Wiktor1975
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15l1axo
false
null
t3_15l1axo
/r/LocalLLaMA/comments/15l1axo/i_cant_run_any_llm_locally_amd_phenom/
false
false
self
1
null
I can't run any LLM locally. I guess I know why. (AMD Phenom)
1
[removed]
2023-08-07T23:55:08
https://www.reddit.com/r/LocalLLaMA/comments/15l1g6a/i_cant_run_any_llm_locally_i_guess_i_know_why_amd/
Wiktor1975
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15l1g6a
false
null
t3_15l1g6a
/r/LocalLLaMA/comments/15l1g6a/i_cant_run_any_llm_locally_i_guess_i_know_why_amd/
false
false
self
1
null