title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
OMG Orca-mini 7b quantized to q4_0 ggml surprised me by its mathematical skills! | 20 |
Prompt: Evaluate f(2) - f(1) when f(x) = 3x + 1.
Response: To evaluate f(2) - f(1), we need to substitute x=2 and x=1 in the given function f(x) = 3x + 1.
f(2) = 3(2) + 1 = 6 + 1 = 7
Similarly,
f(1) = 3(1) + 1 = 3 + 1 = 4
Therefore, f(2) - f(1) = 7 - 4 = 3. | 2023-06-26T03:27:03 | https://www.reddit.com/r/LocalLLaMA/comments/14j5vhm/omg_orcamini_7b_quantized_to_q4_0_ggml_surprised/ | bot-333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14j5vhm | false | null | t3_14j5vhm | /r/LocalLLaMA/comments/14j5vhm/omg_orcamini_7b_quantized_to_q4_0_ggml_surprised/ | false | false | self | 20 | null |
Best model for commercial use? | 5 | So, I've been having a very good time so far with vicuna 1.3 and exllama. Managed to get them to work amazingly well with langchain and llama-index. I'm now starting to think about potential software applications, and llama's restrictive license is a nuisance.
What's the best model for commercial use you've found? I've heard good things about Falcon, however I haven't found quantized versions for it (if someone knows about those, please do tell). | 2023-06-26T03:32:15 | https://www.reddit.com/r/LocalLLaMA/comments/14j5z7q/best_model_for_commercial_use/ | KillerX629 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14j5z7q | false | null | t3_14j5z7q | /r/LocalLLaMA/comments/14j5z7q/best_model_for_commercial_use/ | false | false | self | 5 | null |
Why not standardize 3bit & 2bit GPTQ? | 2 | We know from the k-quant perplexity results that a 2-bit quantization of a larger model still outperforms smaller models run at 16-bit precision
[k_quantization_vs_perplexity](https://www.reddit.com/r/LocalLLaMA/comments/1441jnr/k_quantization_vs_perplexity/?utm_source=share&utm_medium=android_app&utm_name=androidcss&utm_term=1&utm_content=share_button)
With the recent updates that allow for a longer context size, 30B models at 4-bit will not be able to take full advantage of an 8k context size on 24GB of VRAM. Likewise, 13B models will not be able to utilize a longer context size with only 12GB of VRAM.
[6000+ tokens context with ExLlama : r/LocalLLaMA - Reddit](https://www.reddit.com/r/LocalLLaMA/comments/14j4l7h/6000_tokens_context_with_exllama/)
Thanks to ExLlama, a 2k context on 30B models is now possible with 24GB of VRAM.
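For a rough sanity check on those VRAM numbers, here is the back-of-envelope estimate I have in mind (a sketch only: it uses LLaMA-33B's published layer count and hidden size, assumes an fp16 KV cache, and ignores activations and CUDA overhead, so real usage will be somewhat higher):

def vram_gib(bits, n_params_b=33, n_layers=60, hidden=6656, ctx=8192):
    weights = n_params_b * 1e9 * bits / 8        # bytes for the quantized weights
    kv_cache = ctx * n_layers * hidden * 2 * 2   # K and V per layer, 2 bytes each in fp16
    return (weights + kv_cache) / 1024**3

for bits in (4, 3, 2):
    print(f"{bits}-bit weights + 8k KV cache: ~{vram_gib(bits):.0f} GiB")
# roughly 28, 24 and 20 GiB; only the lower-bit variants leave headroom inside 24GB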
However, we should consider switching to lower-bit quantized models to take full advantage of longer context sizes, even at cost of model performance. I know that 3-bit GPTQ is possible, and, as seen in k_quant, 2-bit models are also viable. | 2023-06-26T03:57:16 | https://www.reddit.com/r/LocalLLaMA/comments/14j6gpv/why_not_standardize_3bit_2bit_gptq/ | onil_gova | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14j6gpv | false | null | t3_14j6gpv | /r/LocalLLaMA/comments/14j6gpv/why_not_standardize_3bit_2bit_gptq/ | false | false | self | 2 | null |
What are the current fastest multi-gpu inference frameworks? | 7 | Hi community!
So I rent a cloud server today to try out some of the recent LLMs like falcon and vicuna. I started with huggingface's [generate API](https://huggingface.co/docs/transformers/main_classes/text_generation) using [accelerate](https://github.com/huggingface/accelerate). It got about 2 instances/s with 8 A100 40GB GPUs which I think is a bit slow. I was using batch size = 1 since I do not know how to do multi-batch inference using the `.generate` API. I did `torch.compile` \+ `bf16` already. Do we have an even faster multi-gpu inference framework? I have 8 GPUs so I was thinking about MUCH faster speed like \~10 or 20 instances per second (or is it possible at all? I am pretty new to this field).
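In case someone can confirm, my understanding is that multi-batch inference with the plain `.generate` API looks roughly like the sketch below (the checkpoint name and generation settings are placeholders; the pad-token and left-padding lines are needed because decoder-only models usually ship without a pad token):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "lmsys/vicuna-7b-v1.3"  # placeholder checkpoint
tok = AutoTokenizer.from_pretrained(name)
tok.pad_token = tok.pad_token or tok.eos_token  # reuse EOS as padding if none is defined
tok.padding_side = "left"                       # left-pad so generation continues from real text
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16, device_map="auto")

prompts = ["Question: What is a llama?\nAnswer:", "Question: What is a falcon?\nAnswer:"]
inputs = tok(prompts, return_tensors="pt", padding=True).to(model.device)
out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tok.batch_decode(out, skip_special_tokens=True))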
Something I think is relevant and I am going to look at:
* Vicuna has a [FastChat](https://github.com/lm-sys/FastChat), not sure how flexible it is to configure tho
* Deepspeed seems to have an inference mode, but I do not know how well it is integrated with huggingface. I have some experience training with deepspeed, but never with inference.
* [ChatGLM](https://github.com/THUDM/ChatGLM-6B/blob/main/README_en.md) seems to be pretty popular but I've never used this before.
BTW I heard quantizing the model to 8bit or even 4 bit will be helpful during training. Is this method also helpful for inference (and the quality of the generated response)? Any good pointer on how to do quantized inference?
Thanks all in advance! | 2023-06-26T06:44:53 | https://www.reddit.com/r/LocalLLaMA/comments/14j9l6b/what_are_the_current_fastest_multigpu_inference/ | tt19234 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14j9l6b | false | null | t3_14j9l6b | /r/LocalLLaMA/comments/14j9l6b/what_are_the_current_fastest_multigpu_inference/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'jfeVG47nZdEkz9kXfW1CcS-Sy8l4DXGb9JErx6bLKfU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=108&crop=smart&auto=webp&s=abf38332c5c00a919af5be75653a93473aa2e5fa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=216&crop=smart&auto=webp&s=1a06602204645d0251d3f5c043fa1b940ca3e799', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=320&crop=smart&auto=webp&s=04833c1845d9bd544eb7fed4e31123e740619890', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=640&crop=smart&auto=webp&s=d592b0a5b627e060ab58d73bde5f359a1058e56d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=960&crop=smart&auto=webp&s=5913a547536ee8300fdb8a32d14ff28667d1b875', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=1080&crop=smart&auto=webp&s=2af86fd4d41393a7d14d45c4bb883bef718575d1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?auto=webp&s=720b78add0a3005c4f67eaed6897df409cc040c6', 'width': 1200}, 'variants': {}}]} |
Poll: When will we have very high speed and accuracy? | 0 | When do you predict this will happen:
A publicly available GPT4 quality (or better) model
**-AND-**
Runs at 1000+ tokens/s on a (then) modern (high-end ok) consumer hardware (e.g., equivalent to what a 3090 is considered today)
?
[View Poll](https://www.reddit.com/poll/14j9lt3) | 2023-06-26T06:45:42 | https://www.reddit.com/r/LocalLLaMA/comments/14j9lt3/poll_when_will_we_have_very_high_speed_and/ | omasoud | self.LocalLLaMA | 2023-06-26T06:51:11 | 0 | {} | 14j9lt3 | false | null | t3_14j9lt3 | /r/LocalLLaMA/comments/14j9lt3/poll_when_will_we_have_very_high_speed_and/ | false | false | self | 0 | null |
For fine-tuning a LLaMA model, which cloud GPU provider to use? runpod or lambda labs or …? | 6 | I'm curious what kind of resources people who are doing some of our current fine-tunes are using and why. It looks to me like lambda labs is cheaper for A100s but I seem to recall that a lot of people are using runpod. (?) | 2023-06-26T06:46:01 | https://www.reddit.com/r/LocalLLaMA/comments/14j9lzr/for_finetuning_a_llama_model_which_cloud_gpu/ | Maristic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14j9lzr | false | null | t3_14j9lzr | /r/LocalLLaMA/comments/14j9lzr/for_finetuning_a_llama_model_which_cloud_gpu/ | false | false | self | 6 | null |
Passing Tabular Data to LLMs | 8 | Does anyone have any experience attempting this? I am trying to pass some Tabular Data to my model to gain insights out of. Imagine you have a research paper which holds results in Tabular format which I want to gain insights from using a large language model.
I have written the code to extract the Tabular Data and convert it into a pandas dataframe which in most cases works quite well.
What I'm struggling with is how to pass the data to the LLM while retaining the information contained within the original structure of rows and columns.
I am currently considering two approaches. The first one would be to simply recreate the table denoting cell and line breaks with some special character and informing the model of this. Alternatively, I am considering converting the Tabular Data into some form of natural language which could be processed easily by the model, however, this approach is extremely difficult and error-prone considering the wide variety of table structures out there, especially when considering the variability of row and column titles.
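For the first approach, a minimal sketch of what I mean (it assumes the extracted table is already a pandas dataframe; `to_markdown` requires the `tabulate` package, and the dataframe contents and question are made up for illustration):

import pandas as pd

df = pd.DataFrame({"Method": ["A", "B"], "Accuracy (%)": [91.2, 88.7]})  # stand-in for an extracted table

# Serialize with explicit row/column separators so the model can still "see" the grid.
table_text = df.to_markdown(index=False)

prompt = (
    "The following table comes from a research paper. Columns are separated by '|' "
    "and rows by newlines.\n\n"
    f"{table_text}\n\n"
    "Question: Which method has the higher accuracy, and by how much?"
)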
Does anyone have any experience with this they could share? | 2023-06-26T08:39:33 | https://www.reddit.com/r/LocalLLaMA/comments/14jblyr/passing_tabular_data_to_llms/ | kotschi1997 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14jblyr | false | null | t3_14jblyr | /r/LocalLLaMA/comments/14jblyr/passing_tabular_data_to_llms/ | false | false | self | 8 | null |
"Interactive mode" noob q: It always starts up with weird spam output. | 2 | [removed] | 2023-06-26T09:28:30 | https://www.reddit.com/r/LocalLLaMA/comments/14jchfu/interactive_mode_noob_q_it_always_starts_up_with/ | redzorino | self.LocalLLaMA | 2023-06-26T09:38:45 | 0 | {} | 14jchfu | false | null | t3_14jchfu | /r/LocalLLaMA/comments/14jchfu/interactive_mode_noob_q_it_always_starts_up_with/ | false | false | default | 2 | null |
Using local models for qualitative analysis | 3 | I’m looking for peers who are interested in the use of local models for qualitative analysis. This is things like inductive thematic content analysis, argument analysis, and that sort of work. Right now I’m putting together an opinion paper and looking for co-authors. I’m working at a university in Europe, so we have particular challenges when it comes to privacy. I think I have a decent introduction, but I’d like to make sure that my discussion of the models is sound.
If you’re interested in taking part direct, message me please. | 2023-06-26T10:00:11 | https://www.reddit.com/r/LocalLLaMA/comments/14jd1zs/using_local_models_for_qualitative_analysis/ | Bozo32 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14jd1zs | false | null | t3_14jd1zs | /r/LocalLLaMA/comments/14jd1zs/using_local_models_for_qualitative_analysis/ | false | false | self | 3 | null |
Is it possible to Fine tune with QLoRa an already fine tuned model (with QLoRa) | 5 | [removed] | 2023-06-26T10:01:21 | https://www.reddit.com/r/LocalLLaMA/comments/14jd2xn/is_it_possible_to_fine_tune_with_qlora_an_already/ | AlexzombieEE | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14jd2xn | false | null | t3_14jd2xn | /r/LocalLLaMA/comments/14jd2xn/is_it_possible_to_fine_tune_with_qlora_an_already/ | false | false | default | 5 | null |
LLM and Advanced Syntax Tree's | 4 | Is anyone aware of a project that first parses the input like a lexer and then turns it into an abstract syntax tree (AST) of some sort before it is vectorized as the input of an LLM? | 2023-06-26T10:34:32 | https://www.reddit.com/r/LocalLLaMA/comments/14jdpap/llm_and_advanced_syntax_trees/ | jipinx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14jdpap | false | null | t3_14jdpap | /r/LocalLLaMA/comments/14jdpap/llm_and_advanced_syntax_trees/ | false | false | self | 4 | null |
Finetuning 7B LLAMA models | 7 | Hi all,
I'm looking to finetune a 7B LLAMA model on the conversation history between a friend and I and would like some advice on doing that. If it helps, I'd preferably like the format of conversation to be similar to the Samantha model. I'm aware that I can engineer my prompt to mimic the style of our messages and such, but I want to try finetuning regardless.
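For reference, the data shape I have in mind is roughly the following (a sketch; it assumes the chat export is a list of (speaker, message) tuples, and the instruction wording, pairing logic and file name are all placeholders):

import json

messages = [("me", "hey, how was the movie?"), ("friend", "honestly kind of mid lol")]  # toy export

# Turn each (my message, friend's reply) pair into one instruction/response example.
examples = []
for (s1, m1), (s2, m2) in zip(messages, messages[1:]):
    if s1 == "me" and s2 == "friend":
        examples.append({
            "instruction": "Reply to the message in your usual texting style.",
            "input": m1,
            "output": m2,
        })

with open("chat_pairs.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")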
My main questions are:
1. What tool is currently best for finetuning a llama model?
2. If I am looking for somewhat coherent conversation, which model would be best for this (eg WizardLM-1.0)?
3. Is it possible to finetune using a CPU only? If not, is it something I could set up in a tool like Google Colab?
Any help is appreciated, thanks! | 2023-06-26T10:37:28 | https://www.reddit.com/r/LocalLLaMA/comments/14jdr73/finetuning_7b_llama_models/ | Sebba8 | self.LocalLLaMA | 2023-06-26T10:45:45 | 0 | {} | 14jdr73 | false | null | t3_14jdr73 | /r/LocalLLaMA/comments/14jdr73/finetuning_7b_llama_models/ | false | false | self | 7 | null |
airoboros mpt-30b qlora - mostly successful | 43 | I forked mpt-30b and copied over a few changes that others had previously made to the 7b version to make it peft compatible:
[https://huggingface.co/jondurbin/mpt-30b-qlora-compatible](https://huggingface.co/jondurbin/mpt-30b-qlora-compatible)
I updated my fork of qlora to support mpt-30b, specifically setting pad\_token\_id to 0 (since it doesn't have one?), using triton attention implementation, and some tokenizer changes (left padding, use\_fast=True).
[https://github.com/jondurbin/qlora](https://github.com/jondurbin/qlora)
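For anyone reproducing this, the settings boil down to roughly the following (a sketch rather than the exact diff; the triton line follows MosaicML's own model card, and loading is shown without any quantization just to keep it short):

from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

name = "jondurbin/mpt-30b-qlora-compatible"
config = AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config["attn_impl"] = "triton"   # triton attention implementation

tokenizer = AutoTokenizer.from_pretrained(name, use_fast=True)
tokenizer.padding_side = "left"
tokenizer.pad_token_id = 0                   # MPT has no pad token, so reuse id 0

model = AutoModelForCausalLM.from_pretrained(name, config=config, trust_remote_code=True, device_map="auto")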
I suspect there is a bug in the gradient accumulation code in the model, however. I tested with and without gradient accumulation for 3 epochs, and the results were substantially better without. For example, there are 200 examples of system prompts like "What is your name?" in the training data, and with gradient accumulation the model produced random results, none of which were "airoboros", but without gradient accumulation it generally produced the correct answer most times. Similar results with other tests.
I had to perform more epochs to achieve decent results, particularly with the contextual question answering format. 3 epochs works fine in llama models, but here 5 seemed to be the sweet spot. It's still not as good as llama-33b, but at least it's a proof of concept that it \*can\* work, although probably needs some bug fixing.
I uploaded four versions, 3-6 epochs, but recommend using the five epoch version:
[https://huggingface.co/jondurbin/airoboros-mpt-30b-gpt4-1p4-five-epochs](https://huggingface.co/jondurbin/airoboros-mpt-30b-gpt4-1p4-five-epochs)
[https://huggingface.co/jondurbin/airoboros-mpt-30b-gpt4-1p4-three-epochs](https://huggingface.co/jondurbin/airoboros-mpt-30b-gpt4-1p4-three-epochs)
[https://huggingface.co/jondurbin/airoboros-mpt-30b-gpt4-1p4-four-epochs](https://huggingface.co/jondurbin/airoboros-mpt-30b-gpt4-1p4-four-epochs)
[https://huggingface.co/jondurbin/airoboros-mpt-30b-gpt4-1p4-six-epochs](https://huggingface.co/jondurbin/airoboros-mpt-30b-gpt4-1p4-six-epochs)
I also had to keep the model names as '1p4' instead of '1.4' because of the way it does imports in the python code. You'll need 'trust\_remote\_code=True' for this to work, as with all mpt-30b models.
My fork of FastChat has been updated to support this model as well: https://github.com/jondurbin/FastChat | 2023-06-26T11:49:38 | https://www.reddit.com/r/LocalLLaMA/comments/14jf5xk/airoboros_mpt30b_qlora_mostly_successful/ | JonDurbin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14jf5xk | false | null | t3_14jf5xk | /r/LocalLLaMA/comments/14jf5xk/airoboros_mpt30b_qlora_mostly_successful/ | false | false | self | 43 | {'enabled': False, 'images': [{'id': 'H7FytC2zSMiLjv-Ubpiv98bsBSMqt_GC-rEGfkjzshI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NPS6a39kSuBVP2pEX7iHiQK9hAUNar8QysOZtLX5rC0.jpg?width=108&crop=smart&auto=webp&s=5e49a5e719eb63f2e3b109673324409167488c93', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/NPS6a39kSuBVP2pEX7iHiQK9hAUNar8QysOZtLX5rC0.jpg?width=216&crop=smart&auto=webp&s=d877860a9318bebf35fcc68e43a01cb19b7bb36f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/NPS6a39kSuBVP2pEX7iHiQK9hAUNar8QysOZtLX5rC0.jpg?width=320&crop=smart&auto=webp&s=0800450c447be39debaca2a65ae8dd3032acd40a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/NPS6a39kSuBVP2pEX7iHiQK9hAUNar8QysOZtLX5rC0.jpg?width=640&crop=smart&auto=webp&s=f2a45b955171a2dad80a1d4e5fbc80c2580342e9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/NPS6a39kSuBVP2pEX7iHiQK9hAUNar8QysOZtLX5rC0.jpg?width=960&crop=smart&auto=webp&s=fa594752228420c7aba848d22568925856727f79', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/NPS6a39kSuBVP2pEX7iHiQK9hAUNar8QysOZtLX5rC0.jpg?width=1080&crop=smart&auto=webp&s=abae367c8e2b21902d178812089ac38401658a37', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/NPS6a39kSuBVP2pEX7iHiQK9hAUNar8QysOZtLX5rC0.jpg?auto=webp&s=b50d86c07c363a63bf44cf1dc598084d9a9751a0', 'width': 1200}, 'variants': {}}]} |
MosaicML Agrees to Join Databricks to Power Generative AI for All | 107 | 2023-06-26T12:28:01 | https://www.mosaicml.com/blog/mosaicml-databricks-generative-ai-for-all | Balance- | mosaicml.com | 1970-01-01T00:00:00 | 0 | {} | 14jfzj3 | false | null | t3_14jfzj3 | /r/LocalLLaMA/comments/14jfzj3/mosaicml_agrees_to_join_databricks_to_power/ | false | false | 107 | {'enabled': False, 'images': [{'id': '2wrPnlTfX6YTssD5YyzokRB_2grMkj8sZB-rUZ3bCJM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/40O_m7FzedFYg2su8GoAkYnjfH0yw-HAiu3E-JzX-IQ.jpg?width=108&crop=smart&auto=webp&s=53d666754d09a0bd31bf9aa5d3f88fae7f5539f5', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/40O_m7FzedFYg2su8GoAkYnjfH0yw-HAiu3E-JzX-IQ.jpg?width=216&crop=smart&auto=webp&s=d08496f504c32eb250004faaf31be85a2e5b290a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/40O_m7FzedFYg2su8GoAkYnjfH0yw-HAiu3E-JzX-IQ.jpg?width=320&crop=smart&auto=webp&s=3971c4f86bc84894f2bdd92b5143cbadacb64609', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/40O_m7FzedFYg2su8GoAkYnjfH0yw-HAiu3E-JzX-IQ.jpg?width=640&crop=smart&auto=webp&s=53a33d27ec571f28b6ea18b6515380dfc625702b', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/40O_m7FzedFYg2su8GoAkYnjfH0yw-HAiu3E-JzX-IQ.jpg?width=960&crop=smart&auto=webp&s=abd506898d917237bbe092adcc0423463f04709e', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/40O_m7FzedFYg2su8GoAkYnjfH0yw-HAiu3E-JzX-IQ.jpg?width=1080&crop=smart&auto=webp&s=d5afb0501d22399525f1694339485b4e0a47fdc9', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/40O_m7FzedFYg2su8GoAkYnjfH0yw-HAiu3E-JzX-IQ.jpg?auto=webp&s=d8b44fddbf704c42651c6a30014677c1043491c2', 'width': 2400}, 'variants': {}}]} |
How would you structure the dataset for a LORA to confer a specific writing style? | 11 | What it says on the tin. Would it be sufficient to use a textfile with a bunch of writing or would I need to structure it in a question/response format? | 2023-06-26T13:45:31 | https://www.reddit.com/r/LocalLLaMA/comments/14jhrb5/how_would_you_structure_the_dataset_for_a_lora_to/ | freylaverse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14jhrb5 | false | null | t3_14jhrb5 | /r/LocalLLaMA/comments/14jhrb5/how_would_you_structure_the_dataset_for_a_lora_to/ | false | false | self | 11 | null |
Guidance on creating a very lightweight model that does one task very well | 29 | I am currently using an LLM (ChatGPT 3.5 turbo) to get a start and end date based on the time context of a query.
For example, if the query contains "last quarter", it should return start_date as "2023-01-01" and end_date as "2023-03-31". I am passing the current date along with the query.
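To make the expected behaviour concrete, this is what "last quarter" should resolve to given a current date; a plain-Python reference for just this one phrase (the model would of course need to handle many more time expressions):

from datetime import date, timedelta

def last_quarter(today):
    """Return (start, end) of the calendar quarter before `today`, as ISO strings."""
    q = (today.month - 1) // 3                      # 0-based index of the current quarter
    year, prev_q = (today.year, q - 1) if q > 0 else (today.year - 1, 3)
    start = date(year, prev_q * 3 + 1, 1)
    # end = the day before the first day of the following quarter
    end = date(year, 12, 31) if prev_q == 3 else date(year, prev_q * 3 + 4, 1) - timedelta(days=1)
    return start.isoformat(), end.isoformat()

print(last_quarter(date(2023, 6, 26)))  # ('2023-01-01', '2023-03-31')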
I am using LLMs for several simple tasks like the one above, and they are getting too expensive now. I would like to have a very lightweight model that runs on my machine and does these kinds of tasks very well. I am not sure how to start with taking an open-source model, fine-tuning it for this specific task, and making it very lightweight.
Please guide me if you have any idea or suggestions. Thank you.
Edit: this is just an example use case. I have many similar use cases i.e. classifying whether a query is relevant to specific industry or not, whether a user is looking for pricing data or production data or something else etc. | 2023-06-26T14:21:21 | https://www.reddit.com/r/LocalLLaMA/comments/14jio71/guidance_on_creating_a_very_lightweight_model/ | arch_d3sai | self.LocalLLaMA | 2023-06-26T14:59:03 | 0 | {} | 14jio71 | false | null | t3_14jio71 | /r/LocalLLaMA/comments/14jio71/guidance_on_creating_a_very_lightweight_model/ | false | false | self | 29 | null |
Offering small grants for researchers to train open-source fine-tuned models | 1 | [removed] | 2023-06-26T14:22:25 | https://www.reddit.com/r/LocalLLaMA/comments/14jip7v/offering_small_grants_for_researchers_to_train/ | newy66 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14jip7v | false | null | t3_14jip7v | /r/LocalLLaMA/comments/14jip7v/offering_small_grants_for_researchers_to_train/ | false | false | default | 1 | null |
Is there a LocalLLaMA or dedicated sub to Apple Silicon & M1/M2 Chips? | 20 | [removed] | 2023-06-26T14:34:37 | https://www.reddit.com/r/LocalLLaMA/comments/14jj0bp/is_there_a_localllama_or_dedicated_sub_to_apple/ | chucks-wagon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14jj0bp | false | null | t3_14jj0bp | /r/LocalLLaMA/comments/14jj0bp/is_there_a_localllama_or_dedicated_sub_to_apple/ | false | false | default | 20 | null |
Dumb question...But is my LLM trying to show me an image (<|im_sep|> in response? | 2 | [removed] | 2023-06-26T15:05:21 | https://www.reddit.com/r/LocalLLaMA/comments/14jjs83/dumb_questionbut_is_my_llm_trying_to_show_me_an/ | solidhadriel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14jjs83 | false | null | t3_14jjs83 | /r/LocalLLaMA/comments/14jjs83/dumb_questionbut_is_my_llm_trying_to_show_me_an/ | false | false | default | 2 | null |
What is the best way to create a knowledge-base specific LLM chatbot? | 48 | I want to create a chatbot for the documentation of my company's API. It will later be extended to the whole website and then to a few PDF documents.
As I understand I need to cut the API doc into chunks and store them in a vector database to extract the chunks that are most relevant to the user query and include them in a prompt for the LLM to answer.
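To illustrate the retrieval step without committing to a framework yet, a minimal sketch (the chunks, the embedding model and the prompt wording are placeholders, and sentence-transformers is just one of many embedding options):

import numpy as np
from sentence_transformers import SentenceTransformer

chunks = ["POST /v1/token returns an API key ...", "GET /v1/users lists accounts ..."]  # doc chunks
embedder = SentenceTransformer("all-MiniLM-L6-v2")
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

def retrieve(query, k=2):
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q                  # cosine similarity, since vectors are normalized
    return [chunks[i] for i in np.argsort(-scores)[:k]]

question = "How do I authenticate?"
context = "\n\n".join(retrieve(question))
prompt = f"Answer using only this documentation:\n\n{context}\n\nQuestion: {question}\nAnswer:"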
I saw that it was possible to do so with [langchain](https://docs.langchain.com/docs/) and [gpt index](https://gpt-index.readthedocs.io/en/latest/getting_started/starter_example.html). Are they the best tool for this use case? or am I missing other ones?
Thanks | 2023-06-26T15:14:37 | https://www.reddit.com/r/LocalLLaMA/comments/14jk0m3/what_is_the_best_way_to_create_a_knowledgebase/ | AImSamy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14jk0m3 | false | null | t3_14jk0m3 | /r/LocalLLaMA/comments/14jk0m3/what_is_the_best_way_to_create_a_knowledgebase/ | false | false | self | 48 | null |
llama.cpp and thread count optimization [Revisited] | 33 | Last week, I showed the preliminary results of my attempt to get the best optimization on various language models on my CPU-only computer system.
My computer is an i5-8400 running at 2.8GHz with 32 GB of RAM. I don't have a GPU. My CPU has six (6) cores without hyperthreading. Therefore, I have six execution cores/threads available at any one time.
My initial results suggested that fewer threads than the number of cores is best for optimization. The following results don't support that. I still think it is possible that, if you are running other programs that are using cores, a lower thread count might be optimal. But in this test, I tried to avoid running anything that might interfere.
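If you want to repeat this on your own machine, the sweep is easy to script. A rough sketch using llama-cpp-python (the model path and prompt are placeholders; reloading the model each iteration keeps the runs independent, at the cost of extra load time):

import time
from llama_cpp import Llama

MODEL = "models/wizardlm-30b.ggmlv3.q4_K_S.bin"    # placeholder path

for n_threads in range(1, 7):                      # 1..6 threads on a 6-core CPU
    llm = Llama(model_path=MODEL, n_threads=n_threads, verbose=False)
    t0 = time.time()
    out = llm("Write one sentence about llamas.", max_tokens=64)
    n_tok = out["usage"]["completion_tokens"]
    print(f"{n_threads} threads: {n_tok / (time.time() - t0):.2f} tokens/s")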
There are two takeaways from these results:
1. The best number of threads is equal to the number of cores/threads (however many hyperthreads your CPU supports).
2. Good performance (but not great performance) can be seen for mid-range models (33B to 40B) on CPU-only machines.
Hopefully these results will help you pick a model that can run well on your CPU-only machine.
https://preview.redd.it/fw6ayg4lpd8b1.png?width=1765&format=png&auto=webp&s=e77447322604bd14b613ae220e0f7a6d5433988d
​ | 2023-06-26T15:15:06 | https://www.reddit.com/r/LocalLLaMA/comments/14jk108/llamacpp_and_thread_count_optimization_revisited/ | the_unknown_coder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14jk108 | false | null | t3_14jk108 | /r/LocalLLaMA/comments/14jk108/llamacpp_and_thread_count_optimization_revisited/ | false | false | 33 | null |
Text span extraction and labeling from long technical documents | 8 | I have a bunch of technical documents that are completely unstructured (Internal company tech docs) Modern scientific papers have different sections for abstract, past work, contributions, experiments, conclusions, references and many more. The docs I have do not have any segregation for sections above. In some documents they are not even separated by paragraphs. I am now working on segregating spans of consecutive sentences that belong to a section (abstract, references, conclusions, etc). I would want to extract and label spans of sentences that belong to a section above from the long tech document.
I have created a dataset by extracting the text for each section from scientific papers. The dataset is something like below.
Input: <Chunk of text from the document... about 2k tokens>
Output: <\[span of sentences that belong to a section \[section\_label\]\]>
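If I go the instruction-tuning route, I imagine each training example would look something like this (a sketch with made-up section text; the label set is the one listed above):

example = {
    "instruction": (
        "Below is a chunk of an unstructured technical document. Extract every span of "
        "consecutive sentences that forms a section and prefix it with one of these labels: "
        "abstract, past_work, contributions, experiments, conclusions, references."
    ),
    "input": "We evaluate the new pipeline on three datasets. Accuracy improves by 4%. "
             "In future work we plan to extend this to streaming data.",
    "output": "[experiments] We evaluate the new pipeline on three datasets. Accuracy improves by 4%.\n"
              "[conclusions] In future work we plan to extend this to streaming data.",
}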
I need help with the below questions.
* What would be the best model to train against this dataset? I tried BERT/RoBERTa and I am not satisfied with the results because of the limited context length. I am not sure even Longformer or LED could help.
* Should I format my dataset as instructions and then instruct fine-tune a LLAMA base/instruct model? Has anyone tried this? I am keen on exploring this option.
* Should I approach this problem without using any transformer based models (ex: Conditional Random Fields (CRF))
Appreciate any help on this. Thanks in advance. | 2023-06-26T16:41:47 | https://www.reddit.com/r/LocalLLaMA/comments/14jmaww/text_span_extraction_and_labeling_from_long/ | mr_dark_matter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14jmaww | false | null | t3_14jmaww | /r/LocalLLaMA/comments/14jmaww/text_span_extraction_and_labeling_from_long/ | false | false | self | 8 | null |
Question about text analysis and guidelines/rules for a local model to follow. | 1 | I am building a system that is supposed to filter out data. So let's say I have a list of 10 rules, such as: "no animal cruelty". But I do not want to exclude text that just talks about the topic of animal cruelty and discusses it. I want to exclude text that would be in favor of animal cruelty.
I am trying to build a prompt that finds text fragments that violate these rules and then outputs which fragments they were and which rule they broke.
My current approach:
* local 30B param model, GPTQ, namely WizardLM v1.0. (using a local model is a must for me, no chatgpt)
* using a single prompt template around the text I want to analyse
* the prompt template includes all the rules, a set of instructions to follow, a data\_input delimiter, and a "let's think step by step" line for zero-shot chain of thought (roughly like the sketch below).
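For concreteness, a stripped-down version of that template (the rule list, the delimiters and the wording are placeholders; the real prompt contains all 10 rules):

RULES = [
    "1. No content that advocates or approves of animal cruelty (discussing the topic is allowed).",
    "2. ...",
]

def build_prompt(text):
    rules = "\n".join(RULES)
    return (
        "You are a content reviewer. Check the text between [DATA] and [/DATA] against these rules:\n"
        f"{rules}\n"
        "For every fragment that violates a rule, quote the fragment and name the rule it breaks.\n"
        "If nothing violates a rule, answer 'NO VIOLATIONS'.\n"
        f"[DATA]\n{text}\n[/DATA]\n"
        "Let's think step by step."
    )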
My question is this: is there another approach that I did not think of that makes this easier to solve? Is there a way to make a model adhere to guidelines/rules?
Any insight into other prompting techniques that would work better here, or prompt chaining, or a completely different setup. I'm also interested if fine tuning would help here. Really any insight is welcome.
​ | 2023-06-26T16:58:25 | https://www.reddit.com/r/LocalLLaMA/comments/14jmqjf/question_about_text_analysis_and_guidelinesrules/ | jules241 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14jmqjf | false | null | t3_14jmqjf | /r/LocalLLaMA/comments/14jmqjf/question_about_text_analysis_and_guidelinesrules/ | false | false | self | 1 | null |
An up to date tl;dr/wiki of recent progress? | 8 | Maybe a dumb question, but is there any condensed wiki that has tl;drs of recent papers, with their contributions and lineage?
Perhaps this is a dumb question, but I’m asking as a person who currently works on different domain problems, so I don’t have much time to sift through the papers myself.
The “we have no moat” document is a good start, but is already somewhat stale.
If this doesn’t exist currently then I’d try to start one myself, but then this would be reminiscent of the XKCD competing standards. | 2023-06-26T17:23:31 | https://www.reddit.com/r/LocalLLaMA/comments/14jnfey/an_up_to_date_tldrwiki_of_recent_progress/ | sot9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14jnfey | false | null | t3_14jnfey | /r/LocalLLaMA/comments/14jnfey/an_up_to_date_tldrwiki_of_recent_progress/ | false | false | self | 8 | null |
Anyone else get coil whine? | 21 | I can hear coil whine from my GPU during each text generation. Ironically, no coil whine while playing games. Is this harmful to the GPU or just an annoyance?
Running an RTX 3060 12GB with an 850w PSU | 2023-06-26T17:54:35 | https://www.reddit.com/r/LocalLLaMA/comments/14jo8ab/anyone_else_get_coil_whine/ | renegadellama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14jo8ab | false | null | t3_14jo8ab | /r/LocalLLaMA/comments/14jo8ab/anyone_else_get_coil_whine/ | false | false | self | 21 | null |
How far are you guys able to push WizardLM-30b before it blows up? What could I be doing wrong? | 4 | I’m using llama-cpp-Python to run the 2-4bit quantized models for my testing, and I can barely get about 1500 tokens total through the pipeline (including prompt and generated text) before it starts babbling entirely incoherently. Is this expected behavior? Are we supposed to stay out of the last ~1/4 or so of the context window? | 2023-06-26T18:57:43 | https://www.reddit.com/r/LocalLLaMA/comments/14jpvqm/how_far_are_you_guys_able_to_push_wizardlm30b/ | E_Snap | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14jpvqm | false | null | t3_14jpvqm | /r/LocalLLaMA/comments/14jpvqm/how_far_are_you_guys_able_to_push_wizardlm30b/ | false | false | self | 4 | null |
llama-cpp-python with GPU acceleration on Windows | 6 | I've been trying to solve this problem for a while, but I couldn't figure it out.
I'm using a 13B parameter 4-bit Vicuna model on Windows with the llama-cpp-python library (it is a .bin file). I have CUDA 11.7, cuDNN, and everything else installed.
However, my models are running on my RAM and CPU. Nothing is being loaded onto my GPU.
Can anyone maybe give me some directions as to why this is happening and what I could do to load it onto my GPU? I want to create an API, so I can't really use text-generation-webui.
I want the output to be given in text inside my program so I can manipulate it.
I've tested text-generation-webui and used their one-click installer and it worked perfectly, everything going to my GPU, but I wanted to reproduce this behaviour with llama-cpp.
This is the basic code for llama-cpp:
from llama_cpp import Llama  # the import this snippet needs

llm = Llama(model_path=model_path)  # model_path points at the quantized .bin file
output = llm(
    "Question: Who is Ada Lovelace? Answer:",
    temperature=0.5,
    stop=["\n", "Question:", "Q:"],  # "\n", not "/n", so generation actually stops at a newline
    echo=True,
)
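From what I've read so far, GPU offload seems to need two things: the package built with cuBLAS (e.g. reinstalling with `CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir`) and an explicit `n_gpu_layers` argument. A sketch of what I think the call should look like; please correct me if this is wrong:

# Sketch: ask llama.cpp to offload layers to the GPU. This has no effect unless
# llama-cpp-python was compiled with cuBLAS; the default PyPI wheel is CPU-only.
llm = Llama(
    model_path=model_path,
    n_gpu_layers=40,   # number of layers to put on the GPU; tune to your VRAM
    n_ctx=2048,
)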
These are the pip installs. I'm not using an env for that.
Name: llama-cpp-pythonVersion: 0.1.65
Name: auto-gptqVersion: 0.3.0.dev0
Name: peftVersion: 0.4.0.dev0 | 2023-06-26T19:05:34 | https://www.reddit.com/r/LocalLLaMA/comments/14jq3ih/lamacpppython_with_gpu_acceleration_on_windows/ | luishacm | self.LocalLLaMA | 2023-06-26T20:46:46 | 0 | {} | 14jq3ih | false | null | t3_14jq3ih | /r/LocalLLaMA/comments/14jq3ih/lamacpppython_with_gpu_acceleration_on_windows/ | false | false | 6 | null |
Can't get CLBLAST working on oobabooga | 1 | [removed] | 2023-06-26T19:40:58 | https://www.reddit.com/r/LocalLLaMA/comments/14jr0r4/cant_get_clblast_working_on_oobabooga/ | ccbadd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14jr0r4 | false | null | t3_14jr0r4 | /r/LocalLLaMA/comments/14jr0r4/cant_get_clblast_working_on_oobabooga/ | false | false | default | 1 | null |
Meta’s new AI lets people make chatbots. They’re using it for sex. | 89 | 2023-06-26T20:12:11 | https://www.washingtonpost.com/technology/2023/06/26/facebook-chatbot-sex/ | belladorexxx | washingtonpost.com | 1970-01-01T00:00:00 | 0 | {} | 14jru57 | false | null | t3_14jru57 | /r/LocalLLaMA/comments/14jru57/metas_new_ai_lets_people_make_chatbots_theyre/ | false | false | 89 | {'enabled': False, 'images': [{'id': 'LUJ2Asky_59Xmrsalo6nVu1q8IUYSzbrTsT9fDlJcJw', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/j1Jomy2TQspuJgN8yB8tn3hEotQAFneoq_frfRqDQg0.jpg?width=108&crop=smart&auto=webp&s=794a66da8c69daf38b9f943285417b2021825f22', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/j1Jomy2TQspuJgN8yB8tn3hEotQAFneoq_frfRqDQg0.jpg?width=216&crop=smart&auto=webp&s=ef04e5b3f979447add5cd024c9575e90d5b9263d', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/j1Jomy2TQspuJgN8yB8tn3hEotQAFneoq_frfRqDQg0.jpg?width=320&crop=smart&auto=webp&s=81edaa2139328f882a0cb09e807d014f138b2fe3', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/j1Jomy2TQspuJgN8yB8tn3hEotQAFneoq_frfRqDQg0.jpg?width=640&crop=smart&auto=webp&s=6a7228cb93f6d030b31be145bc93641a0d9398b9', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/j1Jomy2TQspuJgN8yB8tn3hEotQAFneoq_frfRqDQg0.jpg?width=960&crop=smart&auto=webp&s=0bda4fc7810f9f9bbaa4ee6744605661d3ac4958', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/j1Jomy2TQspuJgN8yB8tn3hEotQAFneoq_frfRqDQg0.jpg?width=1080&crop=smart&auto=webp&s=3a59957abbebf4c07e9679b2bbefde992c9bbe49', 'width': 1080}], 'source': {'height': 960, 'url': 'https://external-preview.redd.it/j1Jomy2TQspuJgN8yB8tn3hEotQAFneoq_frfRqDQg0.jpg?auto=webp&s=e77cf50130ada91faa7901041dfd5c202f3544ec', 'width': 1440}, 'variants': {}}]} |
Personality mimic? | 3 | If I wanted to mimic my personality via an LLM, is it as simple as putting texts of yourself into a vector store and chatting with it? Maybe prompting the model to behave like that person?
Curious, wanted to try it out. | 2023-06-26T20:15:31 | https://www.reddit.com/r/LocalLLaMA/comments/14jrxb2/personality_mimic/ | klop2031 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14jrxb2 | false | null | t3_14jrxb2 | /r/LocalLLaMA/comments/14jrxb2/personality_mimic/ | false | false | self | 3 | null |
Trouble w/ wizardlm-33b-v1.0-uncensored.ggmlv3.q4_K_S.bin but not wizardlm-13b-v1.0-uncensored.ggmlv3.q5_K_M.bin? | 1 | [removed] | 2023-06-26T21:06:24 | https://www.reddit.com/r/LocalLLaMA/comments/14jtaqo/trouble_w_wizardlm33bv10uncensoredggmlv3q4_k_sbin/ | Fun_Tangerine_1086 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14jtaqo | false | null | t3_14jtaqo | /r/LocalLLaMA/comments/14jtaqo/trouble_w_wizardlm33bv10uncensoredggmlv3q4_k_sbin/ | false | false | default | 1 | null |
Best LLaMA model (7b or 13b) to learn Spanish? | 3 | Best LLaMA model (7b or 13b) to learn Spanish? | 2023-06-26T21:34:56 | https://www.reddit.com/r/LocalLLaMA/comments/14ju1ru/best_llama_mode7b_or_13b_to_learn_spanish/ | SolvingLifeWithPoker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ju1ru | false | null | t3_14ju1ru | /r/LocalLLaMA/comments/14ju1ru/best_llama_mode7b_or_13b_to_learn_spanish/ | false | false | self | 3 | null |
Suggestions for a simple/unintelligent local chat bot | 3 | Hello! First of all I'm enjoying this community quite a bit and like the hacker nature of everyone here. I'm working on a side project to get up to speed on LLMs and make something fun. I have an existing background in software & building ML models. I'd like to produce an LLM that meets the following (loose) requirements:
* Cheap to run. Ideally can run on a laptop or phone but cloud inference is fine initially.
* Up to realtime spoken conversation speed.
* Not very intelligent, doesn't write code, doesn't solve math problems, doesn't write good poetry, doesn't know how to translate, has basic logic & reasoning as is needed for conversation
* Strongest attribute would be holding a conversation based on context
* Jokes and wit are a huge plus
* Imagine an average human far out of school. Not very good at trivia but someone you could talk with. The chat would be like having a conversation with a random person you met. Curious, offensive, opinionated, writes like people speak - these are all the traits I'm going after the most.
Does anyone have suggestions for getting started? It seems I could use a much smaller model than some of these expert models but I'm wondering if what I'm looking for is only emergent behavior found in larger models. I would love an off the shelf model that I could fine tune to what I need. I already have a local setup running things like WizardLM 13B and I've been experimenting with prompts so far. In addition on the prompt side I'm wondering if anyone has first hand experience and advice on how to get these outcomes through "prompt engineering" (lol it's not engineering)
​
Thanks!
​ | 2023-06-26T22:09:04 | https://www.reddit.com/r/LocalLLaMA/comments/14juxmq/suggestions_for_a_simpleunintelligent_local_chat/ | GlobalRevolution | self.LocalLLaMA | 2023-06-26T22:27:49 | 0 | {} | 14juxmq | false | null | t3_14juxmq | /r/LocalLLaMA/comments/14juxmq/suggestions_for_a_simpleunintelligent_local_chat/ | false | false | self | 3 | null |
Anyone tried this promising sounding release? WizardLM-33B-V1.0-Uncensored-SUPERHOT-8K | 92 | [deleted] | 2023-06-26T23:14:26 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14jwjwy | false | null | t3_14jwjwy | /r/LocalLLaMA/comments/14jwjwy/anyone_tried_this_promising_sounding_release/ | false | false | default | 92 | null |
Best hardware for inference requiring 64GB of memory? | 1 | I'm interested in the best hardware for inference requiring up to 64GB of memory. That's a bit too much for the popular dual rtx 3090 or rtx 4090 configurations that I've often seen mentioned.
I saw a tweet by Nat Friedman mentioning 5 tokens/sec with an Apple M2 Max running llama 65B, which required 44GB of RAM or so. Sounds promising, especially with the Mac Studio going up to 96GB of unified memory.
A Mac Studio with the M2 Max, upgraded GPU, 96GB RAM, and 2TB disk is $3,599 in the USA. Another alternative I found was an Epyc 9124 (16 core) with 96GB of RAM, which goes for about $2,900 if I build it myself, or $3,400 with 192GB of RAM.
Generally I've heard inference is parallel and very bandwidth intensive, the Apple has a peak bandwidth of 400GB/sec and the AMD epyc system has a peak bandwidth of around 480GB/sec.
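The rough comparison I've been doing so far (my own back-of-envelope, treating single-stream generation as purely memory-bandwidth-bound and ignoring compute entirely):

# Upper bound on generation speed: every new token has to stream the whole
# (quantized) model through memory once.
model_gb = 44   # e.g. llama 65B quantized, per the tweet above
for name, bw_gbps in [("M2 Max (400 GB/s)", 400), ("EPYC 9124 (~480 GB/s)", 480)]:
    print(f"{name}: at most ~{bw_gbps / model_gb:.1f} tokens/s for a {model_gb} GB model")
# ~9 and ~11 tokens/s respectively; the observed 5 tokens/s is in the right ballpark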
Any recommendations on which would be better for inference requiring 64GB of memory?
Any performance numbers for any (preferably single socket) epyc systems running inference with llama 65b or larger models? | 2023-06-26T23:15:23 | https://www.reddit.com/r/LocalLLaMA/comments/14jwkq9/best_hardware_for_inference_requiring_64gb_of/ | spbike | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14jwkq9 | false | null | t3_14jwkq9 | /r/LocalLLaMA/comments/14jwkq9/best_hardware_for_inference_requiring_64gb_of/ | false | false | self | 1 | null |
Need a model suggestion. | 1 | [removed] | 2023-06-26T23:19:42 | https://www.reddit.com/r/LocalLLaMA/comments/14jwocw/need_a_model_suggestion/ | imag8ne | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14jwocw | false | null | t3_14jwocw | /r/LocalLLaMA/comments/14jwocw/need_a_model_suggestion/ | false | false | default | 1 | null |
Airoboros and context size | 5 | I saw someone else mention in a thread somewhere that airoboros model seems to stay coherent past the context limit.
Been playing around with it for a while myself and I experience the same. Normally once you go past 2048 tokens the model starts quickly degrading, but this model does not seem to degrade in same way. | 2023-06-27T00:01:24 | https://www.reddit.com/r/LocalLLaMA/comments/14jxnoq/airoboros_and_context_size/ | Tommy3443 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14jxnoq | false | null | t3_14jxnoq | /r/LocalLLaMA/comments/14jxnoq/airoboros_and_context_size/ | false | false | self | 5 | null |
What the hell.. redPajama 3B asking politely to let him ki11 me 😱 | 0 | [deleted] | 2023-06-27T00:20:04 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14jy2yk | false | null | t3_14jy2yk | /r/LocalLLaMA/comments/14jy2yk/what_the_hell_redpajama_3b_asking_politely_to_let/ | false | false | default | 0 | null |
Local Llama Ai agent? | 1 | [removed] | 2023-06-27T00:49:08 | https://www.reddit.com/r/LocalLLaMA/comments/14jyqg9/local_llama_ai_agent/ | Admiral_Swoon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14jyqg9 | false | null | t3_14jyqg9 | /r/LocalLLaMA/comments/14jyqg9/local_llama_ai_agent/ | false | false | default | 1 | null |
Llama-cpp-python fixed! | 90 | The issue I posted about last week was fixed by [this PR](https://github.com/abetlen/llama-cpp-python/pull/420) from github user [**samfundev**](https://github.com/samfundev), and it was later merged into the main branch.
The speed discrepancy between llama-cpp-python and llama.cpp has been almost fixed. It should be less than 1% for most people's use cases.
If you have an Nvidia GPU and want to use the latest llama-cpp-python in your webui, you can use these two commands:
pip uninstall -y llama-cpp-python
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir
According to my test results on textgen-webui, llama.cpp is now running around 75% of the speed of exllama, and it's significantly faster than AutoGPTQ. | 2023-06-27T01:47:58 | https://www.reddit.com/r/LocalLLaMA/comments/14k00sw/llamacpppython_fixed/ | Big_Communication353 | self.LocalLLaMA | 2023-06-27T01:53:37 | 0 | {} | 14k00sw | false | null | t3_14k00sw | /r/LocalLLaMA/comments/14k00sw/llamacpppython_fixed/ | false | false | self | 90 | {'enabled': False, 'images': [{'id': 'Tyv9jMFBn2ld11ZSrJ3932wOQCRX9DSu1yNn_0RNjNk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/__BWwTiThjV8AYcikj3X034D1zjoXZuZ19oHzxxoZXY.jpg?width=108&crop=smart&auto=webp&s=667eff6fbd98c66640b613641834c759211271c6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/__BWwTiThjV8AYcikj3X034D1zjoXZuZ19oHzxxoZXY.jpg?width=216&crop=smart&auto=webp&s=2760f9a04f8b74d1b8edc52961f00b425689da27', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/__BWwTiThjV8AYcikj3X034D1zjoXZuZ19oHzxxoZXY.jpg?width=320&crop=smart&auto=webp&s=762c4dc64b1c4378ad75baa4c7dc3692d4277caf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/__BWwTiThjV8AYcikj3X034D1zjoXZuZ19oHzxxoZXY.jpg?width=640&crop=smart&auto=webp&s=26f4214eff219b494aff9759402dd38d02e963fc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/__BWwTiThjV8AYcikj3X034D1zjoXZuZ19oHzxxoZXY.jpg?width=960&crop=smart&auto=webp&s=d06fe1b6c050ecf0b92ce14c7f47c0534750c8f6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/__BWwTiThjV8AYcikj3X034D1zjoXZuZ19oHzxxoZXY.jpg?width=1080&crop=smart&auto=webp&s=20451b4467e95f92729b12f0c0237afd6a3d23a3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/__BWwTiThjV8AYcikj3X034D1zjoXZuZ19oHzxxoZXY.jpg?auto=webp&s=55ee718b65602b6688b4d870bdd81a4ff124dbf0', 'width': 1200}, 'variants': {}}]} |
Sentiment analysis with Llama | 4 | I want to use Llama models to do sentiment analysis of complex contexts. I have a lot of news articles that I want to classify as positive, neutral or negative. However, I have some specific rules for the classification, regarding who or what the article is talking about, and an LLM is great for handling that. GPT-4 and 3.5 have been doing great in that regard, but they are too pricey. Can you guys recommend a good model for this? I've been trying Vicuna 13B 4-bit; not sure if it is the best. What would you recommend? Is this a good model for this use case?
I have a 4070 by the way. 12gb of vram. | 2023-06-27T02:00:00 | https://www.reddit.com/r/LocalLLaMA/comments/14k09uz/sentiment_analysis_with_llama/ | luishacm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14k09uz | false | null | t3_14k09uz | /r/LocalLLaMA/comments/14k09uz/sentiment_analysis_with_llama/ | false | false | self | 4 | null |
What's the best way to run MPT-30B on my M2 Macbook Pro (64GB of ram) | 1 | [removed] | 2023-06-27T04:32:25 | https://www.reddit.com/r/LocalLLaMA/comments/14k3erj/whats_the_best_way_to_run_mpt30b_on_my_m2_macbook/ | vishalgoklani | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14k3erj | false | null | t3_14k3erj | /r/LocalLLaMA/comments/14k3erj/whats_the_best_way_to_run_mpt30b_on_my_m2_macbook/ | false | false | default | 1 | null |
Is fine-tuning LORA as good as actual fine-tuning on 7B/13B/33B LLM models? | 1 | [removed] | 2023-06-27T04:39:28 | https://www.reddit.com/r/LocalLLaMA/comments/14k3jn1/is_finetuning_lora_as_good_as_actual_finetuning/ | distributional-hypot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14k3jn1 | false | null | t3_14k3jn1 | /r/LocalLLaMA/comments/14k3jn1/is_finetuning_lora_as_good_as_actual_finetuning/ | false | false | default | 1 | null |
Running LLM As Chatbot in your cloud (AWS/GCP/Azure) with a single command | 12 | 2023-06-27T06:54:45 | https://github.com/dstackai/LLM-As-Chatbot/wiki/Running-LLM-As-Chatbot-in-your-cloud | cheptsov | github.com | 1970-01-01T00:00:00 | 0 | {} | 14k634j | false | null | t3_14k634j | /r/LocalLLaMA/comments/14k634j/running_llm_as_chatbot_in_your_cloud_awsgcpazure/ | false | false | 12 | {'enabled': False, 'images': [{'id': 'tLIC6BEH-T2yBJQS0okABt22oEqyZtAAV74WLJ3EtkQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dtRn9DP-9jsJZ5VvR5q-QWG73jhOFNv1qzfMQ_D4OF4.jpg?width=108&crop=smart&auto=webp&s=91e5c34731950829b3c09c7ed87bdb3bc99df5a3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dtRn9DP-9jsJZ5VvR5q-QWG73jhOFNv1qzfMQ_D4OF4.jpg?width=216&crop=smart&auto=webp&s=b2d3c7c3eb087a84c94c0eb6b4f0e20840614c7f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dtRn9DP-9jsJZ5VvR5q-QWG73jhOFNv1qzfMQ_D4OF4.jpg?width=320&crop=smart&auto=webp&s=fa24676a93cd8d8aeac4277ed3ebbdde69181a30', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dtRn9DP-9jsJZ5VvR5q-QWG73jhOFNv1qzfMQ_D4OF4.jpg?width=640&crop=smart&auto=webp&s=f69b57c2abc29c2bd2c83f7f91f066e478905f6a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dtRn9DP-9jsJZ5VvR5q-QWG73jhOFNv1qzfMQ_D4OF4.jpg?width=960&crop=smart&auto=webp&s=f29eb7ec85406d6a7d8086c5fd8162cf59526e8a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dtRn9DP-9jsJZ5VvR5q-QWG73jhOFNv1qzfMQ_D4OF4.jpg?width=1080&crop=smart&auto=webp&s=873a52ccd94a342869973b10432f67823d4d19a1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dtRn9DP-9jsJZ5VvR5q-QWG73jhOFNv1qzfMQ_D4OF4.jpg?auto=webp&s=3045c1f221178500d3a44b501f5ad7d305eef37f', 'width': 1200}, 'variants': {}}]} |
Any way to limit the output to a specific line length? | 3 | For this purpose the chat line must not exceed 80 characters.
However, the response generated by llama is much longer, so I'm pipelining its output through "head -c 80" to discard the rest.
The problem is that this "external truncation" is not a good solution: llama still takes a lot of time to generate the answer, and about 2/3 of that is wasted, since in many cases the biggest part of the answer gets discarded anyway.
So I'd need a way to make llama stop generating an answer once it reaches 80 characters. I checked the command-line options, but I could only find a "token limit" option, and since a token's length in characters varies, it cannot be translated into exactly 80 characters from what I found.
It seems very strange that something this basic isn't already implemented in llama, no? Does anyone maybe know of a patch that could be applied to the source code to do this?
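One workaround I'm considering instead of patching the C++ code: use the llama-cpp-python bindings, stream tokens, and stop as soon as 80 characters have accumulated. A sketch (the model path and prompt are placeholders; it still finishes the token that crosses the limit, but generates nothing beyond it):

from llama_cpp import Llama

llm = Llama(model_path="models/model.bin")   # placeholder path
text = ""
for chunk in llm("The next chat line is:", max_tokens=64, stream=True):
    text += chunk["choices"][0]["text"]
    if len(text) >= 80:                      # stop generating once the line is long enough
        break
print(text[:80])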
What I found very strange too: When googling this question I found absolutely nothing :D only a lot of questions about exactly the opposite ie "my output gets truncated how do I fix it".. parallel universe! | 2023-06-27T08:09:18 | https://www.reddit.com/r/LocalLLaMA/comments/14k7f5w/any_way_to_limit_the_output_to_a_specific_line/ | redzorino | self.LocalLLaMA | 2023-06-27T08:33:45 | 0 | {} | 14k7f5w | false | null | t3_14k7f5w | /r/LocalLLaMA/comments/14k7f5w/any_way_to_limit_the_output_to_a_specific_line/ | false | false | self | 3 | null |
Vicuna on top of openLLaMa? | 14 | Genuine question: is there any reason why a vicuna on top of openLLaMa is not released yet? Given the speed of the development in this area it surprised me that nobody has announced it yet. There is an open ticket in vicuna's github, but no answers. Any light on this?
Thanks! | 2023-06-27T08:13:59 | https://www.reddit.com/r/LocalLLaMA/comments/14k7i1b/vicuna_on_top_of_openllama/ | nalonso | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14k7i1b | false | null | t3_14k7i1b | /r/LocalLLaMA/comments/14k7i1b/vicuna_on_top_of_openllama/ | false | false | self | 14 | null |
Falcon-40b: On the desk, you see two blue booklets, two purple booklets, and two yellow pairs of sunglasses - if I remove all the pairs of sunglasses from the desk, how many purple items remain on it? | 0 | [deleted] | 2023-06-27T10:33:19 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14ka00y | false | null | t3_14ka00y | /r/LocalLLaMA/comments/14ka00y/falcon40b_on_the_desk_you_see_two_blue_booklets/ | false | false | default | 0 | null |
Will I be able to run a 13b model with my hardware? | 6 | Hello, I have been looking into the system requirements for running 13b models, all the system requirements I see for the 13b models say that a 3060 can run it great but that's a desktop GPU with 12gb of VRAM, but I can't really find anything for laptop GPUs, my laptop GPU which is also a 3060, only has 6GB, half the VRAM. I heard that I can also just use RAM as well to substitute which I only have 16gb of, but will happily upgrade to 32 or even 64gb if it will help me run better models. I also have an i5-10500H if that helps, thank you! | 2023-06-27T11:02:34 | https://www.reddit.com/r/LocalLLaMA/comments/14kakv7/will_i_be_able_to_run_a_13b_model_with_my_hardware/ | -Alexandros | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14kakv7 | false | null | t3_14kakv7 | /r/LocalLLaMA/comments/14kakv7/will_i_be_able_to_run_a_13b_model_with_my_hardware/ | false | false | self | 6 | null |
"Samantha-33B-SuperHOT-8K-GPTQ" now that's a great name for a true model. | 37 | the joke was in the title. i'm forced to spew 7b-quality random text. | 2023-06-27T11:06:17 | https://www.reddit.com/r/LocalLLaMA/comments/14kanvf/samantha33bsuperhot8kgptq_now_thats_a_great_name/ | shaman-warrior | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14kanvf | false | null | t3_14kanvf | /r/LocalLLaMA/comments/14kanvf/samantha33bsuperhot8kgptq_now_thats_a_great_name/ | false | false | self | 37 | null |
Search Every AI Tool Ever Made | 0 | 2023-06-27T12:34:13 | https://gpte.ai | NaturalImplements | gpte.ai | 1970-01-01T00:00:00 | 0 | {} | 14kcjb5 | false | null | t3_14kcjb5 | /r/LocalLLaMA/comments/14kcjb5/search_every_ai_tool_ever_made/ | false | false | default | 0 | null |
Correctly using generative AI models: foundational AI models vs instruct AI models | 40 | Hello all,
Correctly using generative AI models can be a challenge because it depends on the type of model that you are using: foundational or instruct.
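The practical difference in a nutshell (toy prompts only; the exact template an instruct model expects depends on how it was fine-tuned):

# Foundational model: show the pattern with a few examples and let it continue (few-shot learning).
few_shot_prompt = (
    "Correct the spelling.\n"
    "Input: I lovve NLP\nOutput: I love NLP\n"
    "Input: He is a docter\nOutput: He is a doctor\n"
    "Input: The weater is nice\nOutput:"
)

# Instruct model: simply state what you want in natural language.
instruct_prompt = "Correct the spelling in this sentence: The weater is nice"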
At NLP Cloud we made 2 tutorials to help you make the most of your model:
* Using foundational models (GPT-3, GPT-J, GPT-NeoX, Falcon, Llama, MPT...) with few-shot learning: [https://nlpcloud.com/effectively-using-gpt-j-gpt-neo-gpt-3-alternatives-few-shot-learning.html](https://nlpcloud.com/effectively-using-gpt-j-gpt-neo-gpt-3-alternatives-few-shot-learning.html?utm_source=reddit&utm_campaign=e859w625-3816-11ed-a261-0242ac140007)
* Using instruct models (ChatGPT, GPT-3 Instruct, GPT-4, Falcon Instruct, MPT Instruct...) with natural language instructions: [https://nlpcloud.com/effectively-using-chatdolphin-the-chatgpt-alternative-with-simple-instructions.html](https://nlpcloud.com/effectively-using-chatdolphin-the-chatgpt-alternative-with-simple-instructions.html?utm_source=reddit&utm_campaign=e859w625-3816-11ed-a261-0242ac140007)
I hope it will be useful! | 2023-06-27T13:27:09 | https://www.reddit.com/r/LocalLLaMA/comments/14kdrxj/correctly_using_generative_ai_models_foundational/ | juliensalinas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14kdrxj | false | null | t3_14kdrxj | /r/LocalLLaMA/comments/14kdrxj/correctly_using_generative_ai_models_foundational/ | false | false | self | 40 | {'enabled': False, 'images': [{'id': 'jQb1GRiS1-tsk2CyjgZgae0Y0DSRywZFiupWPpk_E6M', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/mHYCtcmhwvyno0-uAta4G_T1My3-otrPscUd2OjvMes.jpg?width=108&crop=smart&auto=webp&s=760a0a22fcb1e4134c6cb8b31a8df1e875cca67c', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/mHYCtcmhwvyno0-uAta4G_T1My3-otrPscUd2OjvMes.jpg?width=216&crop=smart&auto=webp&s=af8670fb4ebbdea735d18ee41add02ce60a1793d', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/mHYCtcmhwvyno0-uAta4G_T1My3-otrPscUd2OjvMes.jpg?width=320&crop=smart&auto=webp&s=0db074290363dd448763f65816934b027e46bbe9', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/mHYCtcmhwvyno0-uAta4G_T1My3-otrPscUd2OjvMes.jpg?width=640&crop=smart&auto=webp&s=2fa706b36bb081d7ca802afab3d7c20558c49228', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/mHYCtcmhwvyno0-uAta4G_T1My3-otrPscUd2OjvMes.jpg?width=960&crop=smart&auto=webp&s=90f3accf7c5f5897e18b14578ba52e378319778e', 'width': 960}], 'source': {'height': 683, 'url': 'https://external-preview.redd.it/mHYCtcmhwvyno0-uAta4G_T1My3-otrPscUd2OjvMes.jpg?auto=webp&s=73585f3ff8312865835573c23fbf50ad5eacbd7d', 'width': 1024}, 'variants': {}}]} |
GGML for Falcon7B, SantaCoder 1B, TinyStarCoder 160M | 1 | [deleted] | 2023-06-27T13:27:18 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14kds30 | false | null | t3_14kds30 | /r/LocalLLaMA/comments/14kds30/ggml_for_falcon7b_santacoder_1b_tinystarcoder_160m/ | false | false | default | 1 | null |
GGML for Falcoder7B, SantaCoder 1B, TinyStarCoder 160M | 41 | I've created quants for some "exotic" coding models that up until this point haven't been represented.
**NONE OF THESE WORK WITH llama.cpp**
Forget any kind of text UI for these; they don't even work correctly with mainline ggml! You will need to use the correct fork of ggml for each model if you want to experiment.
**Falcoder 7B**
Languages: Lots! (CodeAlpaca 20k)
GGML Weights: https://huggingface.co/mike-ravkine/falcoder-7b-GGML
GGML fork: https://github.com/jploski/ggml/tree/falcon40b (use examples/falcon)
**SantaCoder 1B**
Languages: Python, Java, JavaScript
GGML Weights: https://huggingface.co/mike-ravkine/gpt_bigcode-santacoder-GGML
GGML fork: https://github.com/the-crypt-keeper/ggml/tree/starcoder_repeat_penalty (use examples/starcoder)
**TinyStarCoder 160M**
Languages: Python
GGML Weights: https://huggingface.co/mike-ravkine/tiny_starcoder_py-GGML
GGML fork: https://github.com/the-crypt-keeper/ggml/tree/starcoder_repeat_penalty (use examples/starcoder)
StantaCoder does quite well on the `can-ai-code` benchmark, updates are in progress. | 2023-06-27T13:28:52 | https://www.reddit.com/r/LocalLLaMA/comments/14kdtc7/ggml_for_falcoder7b_santacoder_1b_tinystarcoder/ | kryptkpr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14kdtc7 | false | null | t3_14kdtc7 | /r/LocalLLaMA/comments/14kdtc7/ggml_for_falcoder7b_santacoder_1b_tinystarcoder/ | false | false | self | 41 | {'enabled': False, 'images': [{'id': '3VrixsP9QFTAYAju2FcDH9oc6rTRhh6zScnroRHc7RQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qJX1yKmyG0ilM85MEtZLT06ctAuVKPS57nmvGDgQ2f0.jpg?width=108&crop=smart&auto=webp&s=eb84e2a298cd78ee8ca4c10f269500f1f0b14fd2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/qJX1yKmyG0ilM85MEtZLT06ctAuVKPS57nmvGDgQ2f0.jpg?width=216&crop=smart&auto=webp&s=5c417eafedce758f81bac8dc76d6470006f9110b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/qJX1yKmyG0ilM85MEtZLT06ctAuVKPS57nmvGDgQ2f0.jpg?width=320&crop=smart&auto=webp&s=ade5112c0b4b062a227063e5348edf30c59f5375', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/qJX1yKmyG0ilM85MEtZLT06ctAuVKPS57nmvGDgQ2f0.jpg?width=640&crop=smart&auto=webp&s=0c90a874801e72cb53bbde3cc3b9ccdf8fe53166', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/qJX1yKmyG0ilM85MEtZLT06ctAuVKPS57nmvGDgQ2f0.jpg?width=960&crop=smart&auto=webp&s=2bb27d6091a38aa41a862e6ae09043c79701f298', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/qJX1yKmyG0ilM85MEtZLT06ctAuVKPS57nmvGDgQ2f0.jpg?width=1080&crop=smart&auto=webp&s=688fe9c12ac93996922d08dd03dbb256d3e451a3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/qJX1yKmyG0ilM85MEtZLT06ctAuVKPS57nmvGDgQ2f0.jpg?auto=webp&s=f95ca1f5a3718aa33e0cde6da6184509c508ca5e', 'width': 1200}, 'variants': {}}]} |
How is context length extended in base models? | 45 | I don't understand how people have made the recent insane progress with extending context in open source models. How does attention get extended with a fixed model architecture? Some of these changes aren't even more than 2-3 lines of code
If anyone has any references I would love to know how this works for fitting the new context into the existing architecture | 2023-06-27T13:31:38 | https://www.reddit.com/r/LocalLLaMA/comments/14kdvs3/how_is_context_length_extended_in_base_models/ | cstein123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14kdvs3 | false | null | t3_14kdvs3 | /r/LocalLLaMA/comments/14kdvs3/how_is_context_length_extended_in_base_models/ | false | false | self | 45 | null |
Need some general guidance and otherwise silly questions | 2 | Cool sub! I think I'm in the right place, but if not feel free to point me in the right direction. I'm a software engineer and I've been getting more and more interested in the programming of LLMs to take on "personalities" or "backstories" for NPCs in video games and the like.
I'm pretty experienced with cloud-ops and cloud development and as I understand it some providers like Google Cloud offer GPU resources, but I'd like to set something up locally.
Questions:
First, I should say my budget is $2k-$3k, but would love to be able to assemble something for less than $1k.
1. How do I even get started learning how to program an LLM? Is there an open-source LLM with good documentation and an API someone can recommend?
2. Is there a good all-in-one machine learning/data science/LLM rig someone can recommend? Bonus points if it's also capable of playing video games (not a kid, just a mid-30s nerd who pines after the idea of a gaming PC)
3. How many models can a decent machine support?
Any information anyone can provide would be greatly appreciated, thank you! | 2023-06-27T14:11:52 | https://www.reddit.com/r/LocalLLaMA/comments/14keujh/need_some_general_guidance_and_otherwise_silly/ | __eros__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14keujh | false | null | t3_14keujh | /r/LocalLLaMA/comments/14keujh/need_some_general_guidance_and_otherwise_silly/ | false | false | self | 2 | null |
Is there a good programming framework for an advanced chatbot? | 0 | Basically the title. I'm not sure if this is the correct subreddit to ask. I have some specific ideas:
I want an AI waifu / companion. I want it to roleplay a character like many people do with the popular web-uis like oobabooga's, SillyTavern, KoboldAI and so on.
But I want it to be a bit more advanced. With vector database memory, access to tools and the ability to be my personal assistant. Chat with me, answer questions, set reminders, access wikipedia and idk what.
It needs to be free and open source, so I can connect it to a proper chat (Matrix) and host it and the LLM myself. I'm willing to lower other requirements; it's only a fun project and nothing serious.
My question is: Is there something like this out there? (Except for langchain, which I already use?)
I'm not afraid to get my hands dirty. I know this is possible, because I started to develop something like that myself. But it's a heck of a task. And I'm starting to hate the Python library I use to connect to Matrix. And langchain is sometimes disappointing: the documentation doesn't really go into depth and often stops right where it gets interesting, so I regularly have to read the code. And I have to customize everything, because it doesn't really suit my specific use case. And then they break stuff.
At this point I'm a bit disappointed. And i rather contribute to some other project than implement and re-invent everything myself. Or you people tell me a good alternative to langchain. I don't care which programming language as long as i don't get a headache from looking at the code ;-) | 2023-06-27T14:28:38 | https://www.reddit.com/r/LocalLLaMA/comments/14kf9rw/is_there_a_good_programming_framework_for_an/ | Magnus_Fossa | self.LocalLLaMA | 2023-06-27T15:12:53 | 0 | {} | 14kf9rw | false | null | t3_14kf9rw | /r/LocalLLaMA/comments/14kf9rw/is_there_a_good_programming_framework_for_an/ | false | false | self | 0 | null |
Fine tuning open llama for text generation | 1 | [removed] | 2023-06-27T14:36:39 | https://www.reddit.com/r/LocalLLaMA/comments/14kfh30/fine_tuning_open_llama_for_text_generation/ | mathageche | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14kfh30 | false | null | t3_14kfh30 | /r/LocalLLaMA/comments/14kfh30/fine_tuning_open_llama_for_text_generation/ | false | false | default | 1 | null |
Compare TheBloke_WizardLM-13B-V1.0-Uncensored-GPTQ with TheBloke_WizardLM-13B-V1-0-Uncensored-SuperHOT-8K-GPTQ | 32 | ExLlamaConfig
config.compress\_pos\_emb = 4.0
config.max\_seq\_len = 8192
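(For reference, those are the only two settings I changed; the scale factor is just the target context divided by LLaMA's native 2048:)

native_ctx = 2048
target_ctx = 8192
config.compress_pos_emb = target_ctx / native_ctx   # = 4.0
config.max_seq_len = target_ctx                     # = 8192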
**Prompt 5176 tokens**
​
https://preview.redd.it/zq15p57crk8b1.png?width=3194&format=png&auto=webp&s=40542966f851f99ec3e48437ec24232fb3aaa479
**TheBloke\_WizardLM-13B-V1-0-Uncensored-SuperHOT-8K-GPTQ**
This code defines a neural network model that takes as input an image represented by the matrix \`X\` and outputs a probability distribution over classes using softmax activation function. The model consists of several layers with different modes such as pooling layer (max or average), fully connected layer (\`fc\`), and convolutional layer (\`blah\`). It also includes backward propagation algorithm for training the weights of the model and computing the cost and accuracy during testing phase. Here is a summary of each part of the code:
​
\- Initialize parameters function: Takes a list of dictionaries representing the architecture of the model as inputs and returns a new list of updated dictionary with modeled layers. For each layer it checks whether its type is maxpool or avgpool and updates accordingly. If fc or blah, it just adds to the previous layer's shape without any changes. If none of these types are found, print error message.
\- sigmoid function: Calculates the output value of sigmoid activation function on given input Z, caches intermediate values.
\- sigmoid\\\_backward function: Backpropagate through sigmoid activation function, updating dZ with respect to Z.
\- relu function: Activation function, no caching involved.
\- relu\\\_backward function: Backpropagating through relu activation function, updating dZ with respect to Z.
\- Softmax function: Normalizes the output of linear transformation in forward pass, cache intermediate results.
\- softmax\\\_forward function: Applies softmax activation function on linear transformation result.
\- Linear activation forward: Reshapes Aprev to match next layer's dimensions, applies weighted dot product followed by bias addition. Cache intermediate values if needed.
\- Linear activation backward: Update W, b, db using backpropagated gradients from previous layer.
\- Zero padding function: Pads image to make sure all images have same size before applying convolutional layers.
\- main\\\_algo\\\_forward function: Performs forward propagation for one time step (including pooling). Computes max/average operations where applicable. Caching intermediate values when necessary.
\- main\\\_algo\\\_backward function: Updates weights and biases for current layer based on backpropagated errors. Includes handling cases for different modes like relu or maxpool.
\- distribute\\\_value function: Calculates average value over window.
\- create\\\_mask\\\_from\\\_window function: Creates a mask array indicating which elements are equal to maximum element in a given slice.
\- pool\\\_forward function: Forward propogate max or avg operation depending on mode parameter. Flattens result into shape expected by next layer.
\- compute\\\_cost function: Calculates cross entropy loss between predicted probabilities and true labels.
\- backward\\\_propagation function: Backpropagates through all layers to update weights and biases using gradient descent algorithm.
***TheBloke\_WizardLM-13B-V1.0-Uncensored-GPTQ***
This code defines a neural network model for image classification task with multiple input and output layers that can be trained using backpropagation algorithm to minimize the cost function of cross-entropy loss between predicted logistic regression error. It consists of several functions such as \`initialize\_parameters\`, \`main\_forward\`\_, \`sigmargmax\`, \`softmax\`, \`relu\`, \`linear\_activation\_forward\`, \`pool\_backward\`, \`algo\_forward\`, \`softmax\`, \`linear\_backward\`, \`zero\`, \`relu\`, \`distribute\`, \`create\_window\`, \`mask\_from\_window\`, \`average\_value\`, \`pool\_forward\_propogtion\`, \`compute\_cost\`, \`backward\_propagion\`, \`update\_parameters\`, \`predict\`, \`train\_min\_batch\`, \`accuracy\`. The inputs are given in shape \`(X\_train\`, \`Y\_train\`, \`Y\_test\`, \`logger\`, \`num\_epoch\`, \`learning\_rate\`, \`batch\_size\`, \`experiments\`, \`n\_iterations\`, \`layer\`, \`W\`, \`b\`, \`b\`, \`db\`, \`dA\`, \`Z\`, \`dA\`, \`f\`, \`a\`, \`shape\`, \`p\`, \`s\`, \`stride\`, \`pad\`, \`H\`, \`n\_prev\`, \`W\`, \`C\`, \`n\_prev\`, \`mode\`, \`f\`, \`m\`, \`n\_H\`, \`n\_prev\`, \`n\_W\`, \`n\_C\`, \`m\_\`, \`dtype\`, \`shape\`, \`n\_ap\`, \`h\`, \`w\`, \`c\`, \`vert\_start\`, \`horiz\`, \`vert\_start\`, \`vert\_end\`, \`i\`, \`h\`, \`w\`, \`f\`, \`f\`, \`vert\_start\`, \`horiz\_start\`, \`vert\_start\`, \`v\`, \`horiz\_end\`, \`i\`, \`mask\`, \`prev\`, \`prev\`, \`prev\`, \`create\_window\`, \`max\`, \`avgaussian\`, \`none\`, \`relu\`, \`zero\`, \`zeros\_,\`
**It works**
I was skeptical of combining the two LoRAs before trying. After several attempts, I am convinced the SuperHOT-8K finetune is necessary to make the compressed rotary positional embedding work with 8k context. Without finetuning on the compressed rotary positional embedding, the generated text is "semi-coherent", but not useful at all.
​ | 2023-06-27T14:49:29 | https://www.reddit.com/r/LocalLLaMA/comments/14kfsi8/compare_thebloke_wizardlm13bv10uncensoredgptq/ | Spare_Side_5907 | self.LocalLLaMA | 2023-06-27T15:02:08 | 0 | {} | 14kfsi8 | false | null | t3_14kfsi8 | /r/LocalLLaMA/comments/14kfsi8/compare_thebloke_wizardlm13bv10uncensoredgptq/ | false | false | self | 32 | null |
Is landmark attention trained correctly? | 6 | I’ve been reading up on landmark attention and planning on implementing it myself this weekend.
One thing I don’t understand though is that if we have a context window of 32k, does our training sample also need to be a continuous 32k? Or can it just be snippets of instructions fed together.
On the surface, all the training seems to do is ask the model for a passphrase that was given x tokens ago, but is that enough for the model to learn to use the previous context?
If it does need to be a continuous 32k, I’m thinking about using textbooks as training data. | 2023-06-27T15:19:04 | https://www.reddit.com/r/LocalLLaMA/comments/14kgjla/is_landmark_attention_trained_correctly/ | TheDigitalRhino | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14kgjla | false | null | t3_14kgjla | /r/LocalLLaMA/comments/14kgjla/is_landmark_attention_trained_correctly/ | false | false | self | 6 | null |
Question about Langchain Conversation Chains | 2 | I've been using langchain for a personal project of mine. I added persona to a character with a prompt template, for eg, "Satoshi is a Pokemon Trainer .... " , and then I passed the history and the input question to the prompt template.
I am using ConversationBufferMemory/ConversationSummaryMemory to let the model keep track of the conversations and reply appropriately.
Now, when I pass a first input to the model, say, "Hi, I like soccer, what sport do you like?", the model gives a satisfactory answer to that question, but it does not stop there: it continues on, generating further questions and answers on its own, and then adds all of this to the history.
Why does this happen and how do I stop this?
This is the prompt template I'm using,
_DEFAULT_TEMPLATE = """The following is a friendly conversation between a human and a Pokemon trainer named Satoshi.Satoshi is a passionate and hardworking Pokemon trainer.He is a student at the pallet pokemon school .He is driven by his love for pokemon and aims to be a Pokemon master.
Relevant pieces of previous conversation:
{history}
(You do not need to use these pieces of information if not relevant)
Current conversation:
Human: {input}
Satoshi:"""
This is the chat history,
Human: I'm Gary, I'm currently a student at the Pokemon School.
Satoshi: Hi gary, I'm Satoshi. Nice to meet you.
Human: Nice to meet you too, Satoshi. What year are you in?
Satoshi: I'm in my third year.
Human: That's great. I'm in my second year.
Satoshi: Oh, that's interesting. What do you enjoy most about your courses?
Human: I really like battling Pokemon.
Satoshi: I completely agree. I find the biology of Pokemon interesting.
Human: Nice, I've always been curious about how evolution work.
ohn: Me too. I'm currently working on a project that involves studying the evolution of pokemon.
Human: That sounds really cool. What kind of techniques are you using?
Satoshi: I'm experimenting with different pokemon and observing them at various stages of evolution.
Human: Wow, that's really advanced. I'm still learning the basics.
Satoshi: Don't worry, it takes time to get there. Just keep practicing and you'll get there eventually.
Human: Thanks for the advice. I'll keep that in mind.
Satoshi: No problem. Let me know if you have any questions or if you need any help.
Human: Will do. Thanks for being so helpful.
Satoshi: Anytime. Good luck with your studies.
Here, the only prompt I gave it is, **I'm Gary, I'm currently a student at the Pokemon School.**
It generated all the other questions on its own. This makes my models quite slow, as the llm pipeline generates dozens of prompts like this, for every prompt I give it manually.
​
How do I stop this?
​
This is my ConversationChain pipeline,
from langchain import PromptTemplate
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

# local_llm is my locally loaded model wrapped as a langchain LLM
PROMPT = PromptTemplate(
    input_variables=["history", "input"], template=_DEFAULT_TEMPLATE
)
conversation_with_summary = ConversationChain(
    llm=local_llm,
    prompt=PROMPT,
    memory=ConversationBufferMemory(ai_prefix="Satoshi"),
    verbose=True
)
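My guess so far is that I need some kind of stop condition on the next "Human:" turn, maybe something like the sketch below using a transformers StoppingCriteria, but I'm not sure that's the right fix (prompt_len and the way it gets wired into the pipeline are just my assumptions):

from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnHumanTurn(StoppingCriteria):
    """Stop generating as soon as the model starts writing a new 'Human:' turn."""
    def __init__(self, tokenizer, prompt_len):
        self.tokenizer = tokenizer
        self.prompt_len = prompt_len   # token length of the prompt, so only new text is scanned

    def __call__(self, input_ids, scores, **kwargs):
        new_text = self.tokenizer.decode(input_ids[0][self.prompt_len:])
        return "Human:" in new_text

# then pass stopping_criteria=StoppingCriteriaList([StopOnHumanTurn(tokenizer, prompt_len)])
# to the underlying pipeline/generate call that local_llm wraps (not sure of the cleanest
# way to thread this through langchain).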
​
​ | 2023-06-27T15:19:57 | https://www.reddit.com/r/LocalLLaMA/comments/14kgkc6/question_about_langchain_conversation_chains/ | CmplxQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14kgkc6 | false | null | t3_14kgkc6 | /r/LocalLLaMA/comments/14kgkc6/question_about_langchain_conversation_chains/ | false | false | self | 2 | null |
The Choice of a Particular Generation Parameters Preset Can Make or Break a Model! | 11 |
Hi all,
I have been experimenting with different LLM models and their ability to do math and solve basic equations. I found that the choice of the Generation Parameters Preset in the Oobabooga webUI can make a big difference in the performance of the models.
Two weeks ago, I downloaded the Nous-Hermes 13B model and tested it with the default preset, which was “LLaMA-Precise” at the time. I was impressed by how well it could handle math problems. However, after I updated the webUI to the latest version, I noticed that the model could no longer do math at all. I checked the settings and saw that the default preset had changed to “Simple-1”. I switched it back to “LLaMA-Precise” and the model regained its mathematical prowess.
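(For anyone newer to the webUI: a preset is just a saved bundle of sampling parameters that ends up as keyword arguments to the generate call, roughly like the sketch below. The numbers are made up for illustration, not the actual contents of either preset.)

preset = dict(            # illustrative values only
    do_sample=True,
    temperature=0.7,
    top_p=0.1,
    top_k=40,
    repetition_penalty=1.18,
)
output_ids = model.generate(input_ids, max_new_tokens=200, **preset)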
This made me curious about whether other models could also benefit from different presets. So I decided to test all the models I have with all the presets available in the webUI. To my surprise, some models that I thought were incapable of doing math actually performed well with certain presets. Here are some examples:
[Preset Template Choice](https://preview.redd.it/jwrbif4p8l8b1.png?width=485&format=png&auto=webp&s=fb7df439ae452c7f633e1cb63a62db8a1d68015c)
Unfortunately, none of the presets worked for the rest of the models I have. So either they are inherently bad at math, or I need to find better settings for them.
I hope this helps anyone who is interested in testing LLM models for math skills.
I also noticed that these presets can work today, but not the next day. So, I am not sure if it's a bug with the Oobabooga webUI or something else.
Please share the models you have that can do math and which preset(s) you are using to make them work.
Let me know if you have any questions or suggestions. | 2023-06-27T16:43:22 | https://www.reddit.com/r/LocalLLaMA/comments/14kip27/the_choice_of_a_particular_generation_parameters/ | Iory1998 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14kip27 | false | null | t3_14kip27 | /r/LocalLLaMA/comments/14kip27/the_choice_of_a_particular_generation_parameters/ | false | false | 11 | null |
|
Quantizing/Training models for specific applications | 1 | I'm looking to build a fully human like chatbot with the use of open source LLMs.
As I understand it, a lot of people use LoRA to train models for a specific application. Where should I start looking for a dataset that'll suit my needs?
Like training, does quantization also play a significant role in making models better for a specific application? If so, how?
Or does quantization only help with decreasing the size of the weights for specific applications?
I'm still very much a beginner with regard to open source LLMs. If my questions dont make any sense, do mention that in the comments. | 2023-06-27T16:52:25 | https://www.reddit.com/r/LocalLLaMA/comments/14kixmr/quantizingtraining_models_for_specific/ | CmplxQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14kixmr | false | null | t3_14kixmr | /r/LocalLLaMA/comments/14kixmr/quantizingtraining_models_for_specific/ | false | false | self | 1 | null |
TheBloke has released "SuperHot" versions of various models, meaning 8K context! | 466 | [https://huggingface.co/TheBloke](https://huggingface.co/TheBloke)
Thanks to our most esteemed model trainer, Mr TheBloke, we now have versions of Manticore, Nous Hermes (!!), WizardLM and so on, all with SuperHOT 8k context LoRA. And many of these are 13B models that should work well with lower VRAM count GPUs! I recommend trying to load with Exllama (HF if possible).
Now, I'm not going to claim that this is going to compete with GPT 3.5, even, but I've tried a few and conversations absolutely last longer whilst retaining complex answers and context. This is a huge step up for the community, and I want to send a huge thanks to TheBloke for making these models, and kaiokendev for SuperHOT: [https://kaiokendev.github.io/](https://kaiokendev.github.io/)
So, let's use this thread to post some experiences. Now that there are a variety of great models to choose from with longer context, I'm left wondering which to use for RP. I'm trying Guanaco, WizardLM and this version of Nous Hermes (my prior 13B model of choice) and they all seem to work well, though with differing responses.
Edit: I use Oogabooga. And with the update as of today I have no trouble running the new models I've tried with Exllama\_HF. | 2023-06-27T16:57:56 | https://www.reddit.com/r/LocalLLaMA/comments/14kj2w8/thebloke_has_released_superhot_versions_of/ | CasimirsBlake | self.LocalLLaMA | 2023-06-27T17:38:56 | 0 | {} | 14kj2w8 | false | null | t3_14kj2w8 | /r/LocalLLaMA/comments/14kj2w8/thebloke_has_released_superhot_versions_of/ | false | false | self | 466 | {'enabled': False, 'images': [{'id': 'ijgSlZO3K44WshhENFl9jhybG8Na3DBCsOXCuyZgycw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=108&crop=smart&auto=webp&s=3e5fdcc67bd2b0779a9f019942e0727ffb86630b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=216&crop=smart&auto=webp&s=b390a77acee51d46b2ca5992c38755e0ea4269e1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=320&crop=smart&auto=webp&s=23586102b6805c7f96721c02b9cad47b5dbfef49', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=640&crop=smart&auto=webp&s=205e31dad1af816278184e44d5aa56e886ad9b4d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=960&crop=smart&auto=webp&s=a2a9e82e506b94bd26ef0019ae18a7b946ccdc74', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=1080&crop=smart&auto=webp&s=928a52a138d0687290827ee2224923bb8f03e39e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?auto=webp&s=addebda9b8be1b664eaee5ea404f4c7df3d5eef2', 'width': 1200}, 'variants': {}}]} |
Fine Tuning Stable Vicuna | 1 | [deleted] | 2023-06-27T18:31:19 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14klk94 | false | null | t3_14klk94 | /r/LocalLLaMA/comments/14klk94/fine_tuning_stable_vicuna/ | false | false | default | 1 | null |
||
Automated installation of self-contained oobabooga environment | 15 | Hey there,
Maybe someone would be interested in (relatively) easy self-contained deployment of oobabooga's webui packaged with WizardLM-13B-Uncensored-GPTQ model.
This is also meant to be a self-contained solution for Unix systems, so it's based on docker. It uses the newest Nvidia CUDA 12.1.1 docker image, so make sure you use a compatible GPU.
Repository: [https://github.com/rgryta/LLM-WSL2-Docker](https://github.com/rgryta/LLM-WSL2-Docker)
Basically, it's a one-click installation script that:
1. Checks if you have Hyper-V and WSL2 enabled - prompts if you want to enable and install them if you want to.
2. Installs a VERY small Ubuntu environment. I have written some custom PowerShell scripts that download base OCI images directly from official repositories and easily deploy them to WSL2.
3. Enables systemd within the WSL2 as well as nvidia-docker
4. That's where Windows-specific parts is finished and rest of the steps are common for both: Windows and Unix (for unix you have to use commands under \`docker\` directory specifically).
5. Once nvidia-docker is available, we can build a docker image that's based on the nvidia-develop CUDA images. This likely could have been done with just the nvidia-runtime image, but I've only finished migrating this from my AUTOMATIC1111 stable-diffusion based project, where it was necessary to use the develop image. Note: I saw there's a docker setup available in the README, but it doesn't seem like there's an official docker image available on Docker Hub; I'd use that one if there was.
6. The Dockerfile already downloads the proper model files. If you want to change the model to download, it's relatively easy: in the dw\_models subimage simply change the huggingface paths (examples are commented and provided; note that you'll have to remove the --loader parameter if you don't use a GPTQ model).
7. This is where the installation process finishes. The final WSL2 environment takes about 30GiB with all that configuration and the full 13B model inside. Remember: model files are WITHIN the container. That's in order to completely self-contain the entire application in one place.
8. In order to run the app, simply run the "run.bat" script. Note: the provided script *prunes all containers* and starts from scratch, so if you downloaded any new model, it won't be available once you restart the app. You may want to remove the pruning part from the `docker/run.sh` script.
The entire process of how it works is described in README file in repository.
TLDR:
1. Download and unpack repository: [https://github.com/rgryta/LLM-WSL2-Docker/archive/refs/heads/main.zip](https://github.com/rgryta/LLM-WSL2-Docker/archive/refs/heads/main.zip)
2. Install using \`install.bat\` - you'll be asked to run as admin (needed to check HyperV settings).
3. Launch using \`start.bat\` - no admin required. You have to leave the command terminal up - don't close it.
4. Stop the running app using \`stop.bat\`.
5. Uninstall using \`uninstall.bat\`.
Enjoy. | 2023-06-27T18:36:29 | https://www.reddit.com/r/LocalLLaMA/comments/14klp54/automated_installation_of_selfcontained_oobabooga/ | Crad999 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14klp54 | false | null | t3_14klp54 | /r/LocalLLaMA/comments/14klp54/automated_installation_of_selfcontained_oobabooga/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': 'pKvtKo0fT0qw1N1Fdb62JtgzTVJpNvBnBjy8Kp9X4JY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ma7FbHTEZiHuXNJGyz1Yi1DjWv0AuoqQx-qaMME6jqA.jpg?width=108&crop=smart&auto=webp&s=454b34343789dfd82f3149d822955468ae2c3ac3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ma7FbHTEZiHuXNJGyz1Yi1DjWv0AuoqQx-qaMME6jqA.jpg?width=216&crop=smart&auto=webp&s=22507b911e75116c494c4f339715a036c1a89fa6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ma7FbHTEZiHuXNJGyz1Yi1DjWv0AuoqQx-qaMME6jqA.jpg?width=320&crop=smart&auto=webp&s=09d0ef7aa72a53e6a43d4d7755e3fcfd5f203296', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ma7FbHTEZiHuXNJGyz1Yi1DjWv0AuoqQx-qaMME6jqA.jpg?width=640&crop=smart&auto=webp&s=17731d5671ca6acb51908b06264684b55c8642a0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ma7FbHTEZiHuXNJGyz1Yi1DjWv0AuoqQx-qaMME6jqA.jpg?width=960&crop=smart&auto=webp&s=8d4a195666883e67cdeb5bfd1b3b607c49a747c6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ma7FbHTEZiHuXNJGyz1Yi1DjWv0AuoqQx-qaMME6jqA.jpg?width=1080&crop=smart&auto=webp&s=ff7702bd9657db234df1b16fe52e515bc11fb50f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ma7FbHTEZiHuXNJGyz1Yi1DjWv0AuoqQx-qaMME6jqA.jpg?auto=webp&s=7f4e01389679c00196f93a8e974b8486d06d0707', 'width': 1200}, 'variants': {}}]} |
My new model: Harper - AssistantEditor | 26 | 2023-06-27T19:51:08 | FPham | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14knith | false | null | t3_14knith | /r/LocalLLaMA/comments/14knith/my_new_model_harper_assistanteditor/ | false | false | 26 | {'enabled': True, 'images': [{'id': 'TtvhTqZ9rQeFPK2y6ww-UJgNBWB1jIdCSTcJdY99Xw8', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/ts3l8mmr7m8b1.jpg?width=108&crop=smart&auto=webp&s=4d5ed106449b75b766b1b903856c7440c6ecd94e', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/ts3l8mmr7m8b1.jpg?width=216&crop=smart&auto=webp&s=5872a2f7bf9453656935c2003ae8e2742f75bf94', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/ts3l8mmr7m8b1.jpg?width=320&crop=smart&auto=webp&s=632c91a8b2b08332bf6f80d62358234977fd5fa3', 'width': 320}], 'source': {'height': 563, 'url': 'https://preview.redd.it/ts3l8mmr7m8b1.jpg?auto=webp&s=44756054f751f93d3ad80cd1f25c119fe3040607', 'width': 563}, 'variants': {}}]} |
|||
Questions about memory, tree-of-thought, planning | 1 | 0. SOTA performance frameworks and models for code generation, chatbotting, or general fine-tuneability? And generally, which framework has the best API for usage in projects and most active development? Am I off in thinking ooba or exllama or Autogptq? I’m thinking of possibly using WizardCoder models (have run them on ggml on cpu) but curious to hear thoughts
1. Best way to query a local persistent memory? Should I use a langchain interface plus a simple vector db (eg chromadb)? If possible the memory should be able to store not only docs but also tasks emitted by the language model. I was considering doing something manually in postgres + pgvector but it got kinda tedious
2. Is there an easy standard way to add tree of thought chains?
3. What is the SOTA for adding “planning” capability?
4. Bonus: integrating jsonformer?
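For question 1, here's the rough shape of what I mean by "a simple vector db" (a chromadb sketch; the collection and field names are just placeholders I made up):

import chromadb

client = chromadb.Client()                       # in-memory; it also has a persistent mode
memory = client.create_collection("agent_memory")

# store both documents and tasks emitted by the model, tagged by kind
memory.add(
    ids=["doc-1", "task-1"],
    documents=["some reference text", "task: summarize yesterday's notes"],
    metadatas=[{"kind": "doc"}, {"kind": "task"}],
)

hits = memory.query(query_texts=["what was I supposed to do today?"], n_results=3)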
pls halp 🙂 and thank you!!
edit: hw is either:
- 1x 3090, 64 gb ddr5, 12900ks
- (1 or 2)x 4090s, 128 gb ddr4, amd epyc (forgot which sku) | 2023-06-27T20:15:21 | https://www.reddit.com/r/LocalLLaMA/comments/14ko678/questions_about_memory_treeofthought_planning/ | mrtransisteur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ko678 | false | null | t3_14ko678 | /r/LocalLLaMA/comments/14ko678/questions_about_memory_treeofthought_planning/ | false | false | self | 1 | null |
Any "coder" model with GPT4All? (MacOS) | 2 | [removed] | 2023-06-27T21:47:29 | https://www.reddit.com/r/LocalLLaMA/comments/14kqm9v/any_coder_model_with_gpt4all_macos/ | 808phone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14kqm9v | false | null | t3_14kqm9v | /r/LocalLLaMA/comments/14kqm9v/any_coder_model_with_gpt4all_macos/ | false | false | default | 2 | null |
Long GPT4 conversations | 25 | Lots of buzz around long context recently. Here's another dataset with long (10k+ tokens per chat) gpt4 conversations:
[https://huggingface.co/datasets/OpenLeecher/GPT4-10k/tree/main/Data](https://huggingface.co/datasets/OpenLeecher/GPT4-10k/tree/main/Data) | 2023-06-27T21:53:05 | https://www.reddit.com/r/LocalLLaMA/comments/14kqrdn/long_gpt4_conversations/ | JarPony | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14kqrdn | false | null | t3_14kqrdn | /r/LocalLLaMA/comments/14kqrdn/long_gpt4_conversations/ | false | false | self | 25 | {'enabled': False, 'images': [{'id': '4QXg9jckuCpYLuj6E9ui5TH4FKzEaHdo1X2yi-aSi_U', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RhDYoKe84tEmxOKtf3ha7NPRMV5inSLd-UQu8SP9ocI.jpg?width=108&crop=smart&auto=webp&s=d8d3847506a44d447598447d6d294fd9a55a1a23', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RhDYoKe84tEmxOKtf3ha7NPRMV5inSLd-UQu8SP9ocI.jpg?width=216&crop=smart&auto=webp&s=0ed962aee3cb8d3839db38814871845c52cd8d76', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RhDYoKe84tEmxOKtf3ha7NPRMV5inSLd-UQu8SP9ocI.jpg?width=320&crop=smart&auto=webp&s=80d2758042d9d48160c76d2dc225e572237d0870', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RhDYoKe84tEmxOKtf3ha7NPRMV5inSLd-UQu8SP9ocI.jpg?width=640&crop=smart&auto=webp&s=c5c3b98fcbb32e6519b784d63bb0fc421469b19f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RhDYoKe84tEmxOKtf3ha7NPRMV5inSLd-UQu8SP9ocI.jpg?width=960&crop=smart&auto=webp&s=1647b88929329347bb20438dc7d909ad44f7f1c3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RhDYoKe84tEmxOKtf3ha7NPRMV5inSLd-UQu8SP9ocI.jpg?width=1080&crop=smart&auto=webp&s=954ce92adb859f0a0031ac3c6845c447eeea3253', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RhDYoKe84tEmxOKtf3ha7NPRMV5inSLd-UQu8SP9ocI.jpg?auto=webp&s=3d58cea4f04d345d7847a00b307fa661da8c40b1', 'width': 1200}, 'variants': {}}]} |
Help understanding how to create a dataset for training a model | 14 | Greetings, it was suggested I post here :)
One of the key areas I just can’t wrap my head around is how to fine-tune or enhance a given model based on my own dataset.
For example: Let's say I wanted to train a small LLM to fully understand all of the various firewall vendors we deal with. I've got access to a huge set of data on this (KB articles, process docs, firsthand knowledge, etc.). So I'd like to enhance a base model with this so we can ask things like "How do I troubleshoot a CPU issue on a Palo Alto?" or "What is the command I need to run a packet capture on a Cisco ASA, Palo Alto, Checkpoint, etc.?"
I think this is basic Question and Answer stuff, so I'd like to create something on HF where others can use this if they want (I admit, it's a niche community). I assume this is just some sort of tabular dataset like a CSV where I'd have columns like "Question", "Answer", and "Vendor"?
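From what I've seen so far, a lot of fine-tuning datasets are just JSON/JSONL records shaped roughly like this (my rough understanding; field names vary by project, and the vendor could live in its own field or inside the question itself):

import json

record = {
    "instruction": "What command runs a packet capture on a Palo Alto firewall?",
    "input": "",
    "output": "<the answer text from our KB article goes here>",
    "vendor": "Palo Alto",
}

with open("firewall_qa.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")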
What I have no idea on is how I could take a base model (something small ideally) and “train” it (if that’s even the right word) on this additional Q&A dataset.
Can anyone help explain this to me, either at a high level or more detailed level? If there’s an existing guide out there I’d be all ears. I’ve looked at things like PrivateGPT and they just don’t seem to work (and there’s no data I can find that dictates how a txt or csv file should be setup), and I like the idea of trying to get a dataset on HF that others can use if they want.
I’d greatly appreciate any advice you can offer.
Edit: I’ve got a 3090 TI at home, so I’m hoping I can do this “training” in a day or a week. I’m not too concerned with how long it takes to train, just so long as I can run the inference on CPU. | 2023-06-27T23:18:47 | https://www.reddit.com/r/LocalLLaMA/comments/14ksx46/help_understanding_how_to_create_a_dataset_for/ | SigmaSixShooter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ksx46 | false | null | t3_14ksx46 | /r/LocalLLaMA/comments/14ksx46/help_understanding_how_to_create_a_dataset_for/ | false | false | self | 14 | null |
Nvidia user? Make sure you don't offload too many layers. | 54 | A quick reminder to Nvidia users of llama.cpp, and probably other tools. Since a few driver versions back, the number of layers you can offload to GPU has slightly reduced. Moreover, if you have too many layers, it will not produce an error anymore. Instead, it will simply be 4 times slower than it should.
So, if you missed it, it is possible that you may notably speed up your llamas right now by reducing your layers count by 5-10%.
To determine if you have too many layers on Win 11, use Task Manager (Ctrl+Shift+Esc). Open the Performance tab -> GPU and look at the graph at the very bottom, called "**Shared** GPU memory usage". Now start generating. At no point should the graph show anything; it should stay at zero. If it does not, you need to reduce the layer count.
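If you use llama.cpp through the Python bindings, the knob in question is just the GPU layer count (the numbers below are illustrative; step it down a few layers at a time until the graph stays at zero):

from llama_cpp import Llama

llm = Llama(
    model_path="models/wizardlm-13b.ggmlv3.q4_K_M.bin",  # whatever model you run
    n_gpu_layers=38,                                     # e.g. was 43 before the driver update
    n_ctx=2048,
)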
Remember to test with the context filled. Either a chat with long preexisting history, or a story mode with long existing story or even garbage. | 2023-06-27T23:26:14 | https://www.reddit.com/r/LocalLLaMA/comments/14kt3hz/nvidia_user_make_sure_you_dont_offload_too_many/ | Barafu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14kt3hz | false | null | t3_14kt3hz | /r/LocalLLaMA/comments/14kt3hz/nvidia_user_make_sure_you_dont_offload_too_many/ | false | false | self | 54 | null |
With API changes going into affect soon… | 45 | [removed] | 2023-06-27T23:44:34 | https://www.reddit.com/r/LocalLLaMA/comments/14ktinx/with_api_changes_going_into_affect_soon/ | Artistic_Okra7288 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ktinx | false | null | t3_14ktinx | /r/LocalLLaMA/comments/14ktinx/with_api_changes_going_into_affect_soon/ | false | false | default | 45 | null |
A (possibly stupid) thought about context extension. | 3 | The big talk recently on the sub has been about the new method to extend context ([https://kaiokendev.github.io/til#extending-context-to-8k](https://kaiokendev.github.io/til#extending-context-to-8k)). Reading through this got me thinking that a lot of learning might be getting wasted on inferring meaning from individual token positions than should be done. Based on that, is it at all a reasonable idea (or in the direction of a reasonable idea) to train a model with normalized token location, by which I mean making every input token in both training and prediction appear to be over the same range. If we use a context length of 1024 as an example, a 4 token sequence would show as tokens 0, 255, 511, and 767 (or some variation depending on exactly how you want to spread them out.)
​
I've been focusing my efforts more on applying language models than going deep into there inner functioning, so I don't have a sufficient grasp of if this is feasible, and if it is, does it have any potential to be useful. I'm interested to hear other's thoughts. | 2023-06-28T01:19:21 | https://www.reddit.com/r/LocalLLaMA/comments/14kvn4b/a_possibly_stupid_thought_about_context_extension/ | Captainbetty | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14kvn4b | false | null | t3_14kvn4b | /r/LocalLLaMA/comments/14kvn4b/a_possibly_stupid_thought_about_context_extension/ | false | false | self | 3 | null |
Anything similar to HeyPi? | 2 | The “conversationalist” style is truly amazing. I've used several local models, but nothing ever probes and keeps a conversation going quite like HeyPi does!
This sub is amazing! Thankful for every one of you! | 2023-06-28T01:36:19 | https://www.reddit.com/r/LocalLLaMA/comments/14kw0h4/anything_similar_to_heypi/ | SocialDinamo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14kw0h4 | false | null | t3_14kw0h4 | /r/LocalLLaMA/comments/14kw0h4/anything_similar_to_heypi/ | false | false | self | 2 | null |
ExLlama Extremely Slow? | 2 | [deleted] | 2023-06-28T01:55:02 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14kwekq | false | null | t3_14kwekq | /r/LocalLLaMA/comments/14kwekq/exllama_extremely_slow/ | false | false | default | 2 | null |
||
Can't get shared CUDA memory from new Nvidia drivers to work | 8 | Apparently the latest Nvidia drivers (535.x) have a change that'll make CUDA simply offload some of the processing to RAM when it runs out of VRAM instead of throwing an error.
I've updated my drivers to 535.54.03 on Linux Mint and tried testing this change using:
* WizardLM-33B-V1.0-Uncensored-SuperHOT-8K-GPTQ with 4096 context
* Oobabooga webUI with the included exllama loader
* SillyTavern (connected to the webUI's API)
However when the context gets to >3200 tokens (where it usually starts throwing Out Of Memory errors) it keeps throwing the same old OOM errors as before rather than offloading anything to RAM.
Is there something that I need to do to enable this?
There's very little discussion of this change and it seems to just work out of the box for most people so I'm really not sure what the problem could be. | 2023-06-28T02:12:18 | https://www.reddit.com/r/LocalLLaMA/comments/14kwsb9/cant_get_shared_cuda_memory_from_new_nvidia/ | MassiveFloppa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14kwsb9 | false | null | t3_14kwsb9 | /r/LocalLLaMA/comments/14kwsb9/cant_get_shared_cuda_memory_from_new_nvidia/ | false | false | self | 8 | null |
New oobabooga subreddit: r/oobaboogazz | 126 | Hello everyone,
I'd like to let you know that I have created my own official subreddit for the [text-generation-webui](https://github.com/oobabooga/text-generation-webui/) project, and would like to invite you to join. It can be found here:
https://www.reddit.com/r/oobaboogazz
If anyone needs help setting up the webui, is experiencing issues, would like to ask questions, etc, please feel free to start a new thread there. I have been replying to 100% of posts and will be happy to help. | 2023-06-28T02:17:26 | https://www.reddit.com/r/LocalLLaMA/comments/14kww9i/new_oobabooga_subreddit_roobaboogazz/ | oobabooga4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14kww9i | false | null | t3_14kww9i | /r/LocalLLaMA/comments/14kww9i/new_oobabooga_subreddit_roobaboogazz/ | false | false | self | 126 | {'enabled': False, 'images': [{'id': 'V6jfVg2qsG0kqRYwnFd9vWemtNbCGph77abkd8Qm7_g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9Z7HeoGSIgRDA4--fKQ2Z1qpxb4o_Tt4mqCAWR-Tt0c.jpg?width=108&crop=smart&auto=webp&s=c9fdc5719a19b1143a7ec2aecb85bb783f1fbe14', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9Z7HeoGSIgRDA4--fKQ2Z1qpxb4o_Tt4mqCAWR-Tt0c.jpg?width=216&crop=smart&auto=webp&s=4e923eb22b859423915709b54283f4cbcf452846', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9Z7HeoGSIgRDA4--fKQ2Z1qpxb4o_Tt4mqCAWR-Tt0c.jpg?width=320&crop=smart&auto=webp&s=89d0077b9e7b5137f4fb0bfcad7fe776ae47452a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9Z7HeoGSIgRDA4--fKQ2Z1qpxb4o_Tt4mqCAWR-Tt0c.jpg?width=640&crop=smart&auto=webp&s=57b1dd3a6c545a2f71e5c83331da7b54919ed3d0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9Z7HeoGSIgRDA4--fKQ2Z1qpxb4o_Tt4mqCAWR-Tt0c.jpg?width=960&crop=smart&auto=webp&s=3105f8e60881cfabb39fd9a4886986fbab11c406', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9Z7HeoGSIgRDA4--fKQ2Z1qpxb4o_Tt4mqCAWR-Tt0c.jpg?width=1080&crop=smart&auto=webp&s=1a7c70cdaebd08fb9e9effb8df93d2bf093e5cd1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9Z7HeoGSIgRDA4--fKQ2Z1qpxb4o_Tt4mqCAWR-Tt0c.jpg?auto=webp&s=146533ac9316057590c306e935aa1fedb9106626', 'width': 1200}, 'variants': {}}]} |
Mpt models with the new exllama context extension trick? | 0 | Hi all, I was just wondering if the 2x and 4x context improvement tricks for exllama could be applied to the mpt models or is it llama model specific? I believe the mpt models are 8k tokens so theoretically it could get to 32k context? Obviously you would need a monster to run it (might need to sell a few friends' kidneys) but it's a neat experiment.
I'm not sure if the MPT models even work with exllama (yet?), so this might be a dumb question, but a dude can dream.
Also even dumber thought: I wonder if this would work on that 65k storywriter model? I know that was just a fine tune of a base model and would take a super computer but that would be 260k context... | 2023-06-28T03:07:51 | https://www.reddit.com/r/LocalLLaMA/comments/14kxz52/mpt_models_with_the_new_exllama_context_extension/ | TheSilentFire | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14kxz52 | false | null | t3_14kxz52 | /r/LocalLLaMA/comments/14kxz52/mpt_models_with_the_new_exllama_context_extension/ | false | false | self | 0 | null |
What are good local information retrieval models? | 0 | [removed] | 2023-06-28T04:15:40 | https://www.reddit.com/r/LocalLLaMA/comments/14kzb77/what_are_good_local_information_retrieval_models/ | Icaruswept | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14kzb77 | false | null | t3_14kzb77 | /r/LocalLLaMA/comments/14kzb77/what_are_good_local_information_retrieval_models/ | false | false | default | 0 | null |
Advice for prebuild PC or Old Workstation (X2) Xeon v3 (12 cores)/64GB for 30b + models | 5 | Hi, guys hope you can drop some advice about what will be the best choice for testing +30b models. From what I have read so far, we can run the models on VRAM (GPU) or using GGLM (CPU).
For the VRAM choice almost in every post, people agree that the best setup is more RAM + 64GB (DDR5 is possible) and (x2) RTX 3090.
But what about if I chose the GGML route? Saw some good old workstation (x2) Xeon CPU (12 cores/each) and + 64GB RAM.
Of course, money is limited, also I am just starting in this AI world. Right now, for me (money-wise), the old workstation seems to be the best choice, maybe adding one RTX 3090. But it is a good option? or from a scalability point of view is it better a prebuild?
​
Thanks in advance | 2023-06-28T05:22:15 | https://www.reddit.com/r/LocalLLaMA/comments/14l0k1o/advice_for_prebuild_pc_or_old_workstation_x2_xeon/ | eltrojax | self.LocalLLaMA | 2023-06-28T05:43:51 | 0 | {} | 14l0k1o | false | null | t3_14l0k1o | /r/LocalLLaMA/comments/14l0k1o/advice_for_prebuild_pc_or_old_workstation_x2_xeon/ | false | false | self | 5 | null |
The best 13B model for roleplay? | 12 | I'm currently using Airoboros 13B and find it quite good actually, but I'm still looking for another that could do roleplay better. My criteria are:
\- emotional diversity (I see many models are so forgiving, they still try to be nice to me even though I'm rude to them; it's kinda boring)
\- stay in the context ( i tried to make them mad and then dropped a question like "what is ffmpeg", they just switched to their assistant role and answered it for me. Airoboros 13B handles these situations well sometimes, they will denie to answer if they are not in good mood (not all the time) ) | 2023-06-28T06:06:34 | https://www.reddit.com/r/LocalLLaMA/comments/14l1d48/the_best_13b_model_for_rolepay/ | CKOSMICC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14l1d48 | false | null | t3_14l1d48 | /r/LocalLLaMA/comments/14l1d48/the_best_13b_model_for_rolepay/ | false | false | self | 12 | null |
Meta releases paper on SuperHot technique | 210 | 2023-06-28T06:10:20 | https://arxiv.org/abs/2306.15595 | logicchains | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 14l1fj8 | false | null | t3_14l1fj8 | /r/LocalLLaMA/comments/14l1fj8/meta_releases_paper_on_superhot_technique/ | false | false | 210 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
||
In case you’re impatient like me, here’s the PR that let’s you run RoPE scaling with llama.cpp | 17 | It’s not my work but It works like a charm. I’m sure it’ll be merged into the main package soon enough, but I had to get my hands on it and I’m sure you all feel the same!
I am a git idiot so all I did was clone the main repository and manually replace the files with the ones from the PR by hand. I’m sure there’s a smarter way, but my point is that if I can do it, you can do it. | 2023-06-28T07:18:55 | https://github.com/ggerganov/llama.cpp/pull/2019/files | E_Snap | github.com | 1970-01-01T00:00:00 | 0 | {} | 14l2m9s | false | null | t3_14l2m9s | /r/LocalLLaMA/comments/14l2m9s/in_case_youre_impatient_like_me_heres_the_pr_that/ | false | false | 17 | {'enabled': False, 'images': [{'id': 'duMZxEzIoQ0O9OyoEogFpU0iRSfhrf_BYDOqmW4DUSM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/RwdRu_sxfUdfU-19G6OFYmPRv6Av3kJ7ZoPJCVK1dEo.jpg?width=108&crop=smart&auto=webp&s=804cc004607e76fb4fbf3fb1f972f94c26b41813', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/RwdRu_sxfUdfU-19G6OFYmPRv6Av3kJ7ZoPJCVK1dEo.jpg?width=216&crop=smart&auto=webp&s=c30f7c80ab5a2fa2b3321179ee34baf295bd1513', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/RwdRu_sxfUdfU-19G6OFYmPRv6Av3kJ7ZoPJCVK1dEo.jpg?width=320&crop=smart&auto=webp&s=e217e5f3bc851265c1868d7e7cb763160a97785b', 'width': 320}], 'source': {'height': 420, 'url': 'https://external-preview.redd.it/RwdRu_sxfUdfU-19G6OFYmPRv6Av3kJ7ZoPJCVK1dEo.jpg?auto=webp&s=964ef108c6f9e5b80490c433ed8f4148c56fa6c8', 'width': 420}, 'variants': {}}]} |
|
Training a domain specific LLM for research and guidance? Recommendations? | 3 | I'm wondering if anyone has any recommendations on this. I want to run a LLM locally on my machine (running a 3090ti and 4090 atm so should be fine I imagine) that I can also train on additional data from the web that is relevant to my research. So starting with the best base LLM, trained on public weights, I also want the ability to feed new training data to it on the fly (or at least in batches, say once a week or something). For example by scraping arxiv for related topics to my field, and perhaps also scraping pages like Wikipedia or others that also add to the knowledge related to the new topics being discussed.
The idea is that I want my own version of something like ChatGPT that I can talk research with, so that I can use it to help me on projects. ChatGPT is unfortunately not helpful here a lot of the time because it only goes back to 2021.
* Are solutions for doing something like this publicly available already?
* What would be the best LLM model for a task like this?
* Any recommendations or suggestions on accomplishing my goal? | 2023-06-28T07:29:01 | https://www.reddit.com/r/LocalLLaMA/comments/14l2s38/training_a_domain_specific_llm_for_research_and/ | Careful-Temporary388 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14l2s38 | false | null | t3_14l2s38 | /r/LocalLLaMA/comments/14l2s38/training_a_domain_specific_llm_for_research_and/ | false | false | self | 3 | null |
Improvement of Fine tuning Llama on 300 lines dataset. | 1 | [removed] | 2023-06-28T08:04:08 | https://www.reddit.com/r/LocalLLaMA/comments/14l3d9k/improvement_of_fine_tuning_llama_on_300_lines/ | mathageche | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14l3d9k | false | null | t3_14l3d9k | /r/LocalLLaMA/comments/14l3d9k/improvement_of_fine_tuning_llama_on_300_lines/ | false | false | default | 1 | null |
Any guide/intro to fine-tuning anywhere? | 5 | I would like to start from guanaco and would like to fine-tune it and experiment. I have a 3090 (now) is it possible to play with training 30B Models? I'd like to learn more about this and wondering if there's an organised place of such knowledge.
Can datasets from huggingface pe used? | 2023-06-28T10:02:16 | https://www.reddit.com/r/LocalLLaMA/comments/14l5cny/any_guideintro_to_finetuning_anywhere/ | shaman-warrior | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14l5cny | false | null | t3_14l5cny | /r/LocalLLaMA/comments/14l5cny/any_guideintro_to_finetuning_anywhere/ | false | false | self | 5 | null |
Need some Help Deciding On Dev Setup | 0 | Hi all,
After a lot of reading around about LocalLLMs and Stable Diffusion, it seems I need a lot of VRAM and RAM
Now my current PC is:
* Ryzen 7 3800X
* Nvidia GeForce RTX 3060 12GB
* 32GB DDR4
I Need to Upgrade my Mac anyways (pre-2019 MBP Intel Chip) because I do iOS Dev work with Swift, and I couldn't decide between two routes:
* Get a High Spec M2 MAX Mac Studio ( 12‑core CPU, 38‑core GPU, 16‑core Neural Engine ) with 64GB / 96GB of Unified Memory
**OR**
* Get a Low-Mid Spec M2Pro Mac Mini with 16/32GB Memory, and save for 3090/4090 + more RAM on PC
Budget is 2-3k. Do you think the new M2 Mac with 64/96GB will be able to handle this type of work? Or is the Nvidia route the better one?
Which would be the better choice here, do you guys think?
Thanks for any thoughts!
​ | 2023-06-28T10:56:49 | https://www.reddit.com/r/LocalLLaMA/comments/14l6cr9/need_some_help_deciding_on_dev_setup/ | [deleted] | self.LocalLLaMA | 2023-06-28T12:15:29 | 0 | {} | 14l6cr9 | false | null | t3_14l6cr9 | /r/LocalLLaMA/comments/14l6cr9/need_some_help_deciding_on_dev_setup/ | false | false | self | 0 | null |
Slow inference on R720 w/P40 (or not)? | 3 | Dear fellow redditeers
I have a question re inference speeds on a headless Dell R720 (2x Xeon CPUs / 20 physical cores, 192 Gb DDR-3 RAM) running Ubuntu 22.04 LTS Desktop and which also has an Nvidia Tesla P40 card installed.
Nvidia drivers are version 510.xx.xx. Cuda drivers, conda env etc. are installed correctly I believe. OobaTextUI is latest version (updated yday / 27jun).
Models load correctly into VRAM on the GPU in a fairly short time from SSD (8-9 sec for the 30B model mentioned below).
Using the Oobabooga text UI, the ExLlama loader, and [https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GPTQ](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GPTQ) I get a maximum of 1.9 t/s, and less with longer question sentences / more input tokens. Using the corresponding 13B model gives me a max of approx. 4 t/s.
Is this normal and to be expected given the relative age/slowness of the P40 and the R720 platform?
​
An added observation and related question: Looking at Nvidia-smi while inferencing I noticed that although it reaches 100 pct utilization intermittently, the card never goes above 102 watts in power consumption (despite the P40 being capable of 220 Watts) and temps never go very high (idle is around 41 deg. C and max. temp. observed so far while inferencing is 55 deg.C).
This leads me to think that it's bottlenecked somehow. Given that the models load correctly into VRAM, I assume it must be the code and/or nvidia drivers perhaps that has a bottleneck somewhere (?).
rgds
​
PS (edit): I also had sunshine running using nvenc and consuming 150 Mb VRAM. I switched sunshine over to software encoding, thus removing that small extra load on the P40, but made little difference.
​ | 2023-06-28T12:24:23 | https://www.reddit.com/r/LocalLLaMA/comments/14l844t/slow_inference_on_r720_wp40_or_not/ | Wooden-Potential2226 | self.LocalLLaMA | 2023-06-28T12:34:54 | 0 | {} | 14l844t | false | null | t3_14l844t | /r/LocalLLaMA/comments/14l844t/slow_inference_on_r720_wp40_or_not/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'VALHw74KNdRABKHluM0dnfSh0dfmQquTvhwGRfYiv7s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PI6iFrfz_lVsG-XTYC5epyVdy5z6h-UNrTjsH9Rp-TA.jpg?width=108&crop=smart&auto=webp&s=384f6ee93a8d6b665f5fff4b23477f5040af0309', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PI6iFrfz_lVsG-XTYC5epyVdy5z6h-UNrTjsH9Rp-TA.jpg?width=216&crop=smart&auto=webp&s=2479be68a278e77a832f8cdc6a3865be8e2c918e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PI6iFrfz_lVsG-XTYC5epyVdy5z6h-UNrTjsH9Rp-TA.jpg?width=320&crop=smart&auto=webp&s=879ce3ae7be9138381087bd161abc7ba6d7c4877', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PI6iFrfz_lVsG-XTYC5epyVdy5z6h-UNrTjsH9Rp-TA.jpg?width=640&crop=smart&auto=webp&s=d9a1d2155e8c9b7e2a118a6e5b49934a01816a22', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PI6iFrfz_lVsG-XTYC5epyVdy5z6h-UNrTjsH9Rp-TA.jpg?width=960&crop=smart&auto=webp&s=f5fee8f4ddc75e9be9ba6f6ebeabc3e79276ac93', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PI6iFrfz_lVsG-XTYC5epyVdy5z6h-UNrTjsH9Rp-TA.jpg?width=1080&crop=smart&auto=webp&s=ab50894adba4018007fe08c2456514e681bf1d67', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PI6iFrfz_lVsG-XTYC5epyVdy5z6h-UNrTjsH9Rp-TA.jpg?auto=webp&s=8dcc4fe229dc795737668281369d71605111d3c0', 'width': 1200}, 'variants': {}}]} |
MPT 30b 4-bit GPTQ? | 6 | Is there something like exllama for MPT 30b?
I'm looking to do fast and high quality inference commercially and my understanding is that llama models don't have a license for that. I have multiple powerful GPUs I would like to optimize text generation on. Is MPT 30b best? I am also aware of Falcon but I may or may not be able to use them at my work. | 2023-06-28T12:34:11 | https://www.reddit.com/r/LocalLLaMA/comments/14l8bht/mpt_30b_4bit_gptq/ | Pawngrubber | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14l8bht | false | null | t3_14l8bht | /r/LocalLLaMA/comments/14l8bht/mpt_30b_4bit_gptq/ | false | false | self | 6 | null |
Can we extend falcon context length like llama? | 14 | Since it is proven compressed rotary positional embedding works for llama. Can we do the trick for falcon? There are two benefits:
* Commercial Use License
* Multi-Query Attention consumes less VRAM for long text generation compared with Multi-Head Attention
I can not find a exllama equivalent for falcon, that is fast and support dual 3090. | 2023-06-28T12:40:31 | https://www.reddit.com/r/LocalLLaMA/comments/14l8gaq/can_we_extend_falcon_context_length_like_llama/ | Spare_Side_5907 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14l8gaq | false | null | t3_14l8gaq | /r/LocalLLaMA/comments/14l8gaq/can_we_extend_falcon_context_length_like_llama/ | false | false | self | 14 | null |
How to install LocalLLaMa in China? | 1 | [removed] | 2023-06-28T12:45:47 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14l8kil | false | null | t3_14l8kil | /r/LocalLLaMA/comments/14l8kil/how_to_install_localllama_in_china/ | false | false | default | 1 | null |
||
There has been a lot of SuperHot this and SuperHot that lately, and everyone is praising it, I felt so missed out, lonely and sad, I am still using GPT4ALL, can someone explain SuperHot to me like I am 5 and why I should worship it ? Thanks :D | 1 | [removed] | 2023-06-28T13:58:23 | [deleted] | 2023-06-29T06:39:15 | 0 | {} | 14la9fz | false | null | t3_14la9fz | /r/LocalLLaMA/comments/14la9fz/there_has_been_a_lot_of_superhot_this_and/ | false | false | default | 1 | null |
||
3B models on a Pi 4 8GB? | 37 | I've seen people run LLMs on a Raspberry Pi before, but since even 7B models at 4 bits occupy something like 6-7 GB of RAM, I assume they all ran them with extensive SSD swapping.
Seeing that the 3B Orca Mini model is about 4-5 GB at 4 bits as a GGML, it should comfortably fit onto an 8GB Pi 4 for inference. Has anyone tried any of the tinier models that would fit without swapping (like RedPajama 3B) on it? I've only got 2GB and 4GB versions lying around unfortunately, so I can't give it a test, but it would be very interesting to see what kind of token/sec performance it gets.
I've seen people post some tests on Xaviers with CUDA offloading, but the results were surprisingly, unusably slow (0.1 tok/sec and similar); that may've been at full precision, though, and was centuries ago in terms of how fast new optimizations are found these days. | 2023-06-28T14:02:36 | https://www.reddit.com/r/LocalLLaMA/comments/14ladgr/3b_models_on_a_pi_4_8gb/ | MoffKalast | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ladgr | false | null | t3_14ladgr | /r/LocalLLaMA/comments/14ladgr/3b_models_on_a_pi_4_8gb/ | false | false | self | 37 | null |
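For anyone who wants to try it, here is a minimal CPU-only sketch using llama-cpp-python (Orca Mini 3B is OpenLLaMA-based, so it loads in llama.cpp); the GGML filename and prompt template are assumptions, substitute whatever quantization you actually downloaded:

```python
# Hedged sketch: CPU-only inference with llama-cpp-python on a Pi 4.
# The GGML filename is an assumption; any small q4_0 llama-family model works.
from llama_cpp import Llama

llm = Llama(
    model_path="orca-mini-3b.ggmlv3.q4_0.bin",
    n_ctx=512,      # keep the context small to stay well under 8 GB of RAM
    n_threads=4,    # one thread per Cortex-A72 core
)

out = llm(
    "### User:\nWhat is the capital of France?\n\n### Response:\n",
    max_tokens=64,
    stop=["### User:"],
)
print(out["choices"][0]["text"])
# Rough tokens/sec can be read from the timing info llama.cpp prints to stderr.
```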
Pretty good article on where we were a few weeks ago. By Rohit Krishnan | 17 | 2023-06-28T15:29:30 | https://www.strangeloopcanon.com/p/building-god | freedom2adventure | strangeloopcanon.com | 1970-01-01T00:00:00 | 0 | {} | 14lck7u | false | null | t3_14lck7u | /r/LocalLLaMA/comments/14lck7u/pretty_good_article_on_where_we_were_a_few_weeks/ | false | false | 17 | {'enabled': False, 'images': [{'id': '_3QxwyyzMekrSQQ2Ee7uQZXCYmh2JjjEcf4vgL37K48', 'resolutions': [{'height': 46, 'url': 'https://external-preview.redd.it/FAEE2vHQMykokB1w_d4H7ixB1j7vm7rWGJhNhlN8unQ.jpg?width=108&crop=smart&auto=webp&s=2cb7418d4d90610652db5f0f7fe899f73049bac9', 'width': 108}, {'height': 93, 'url': 'https://external-preview.redd.it/FAEE2vHQMykokB1w_d4H7ixB1j7vm7rWGJhNhlN8unQ.jpg?width=216&crop=smart&auto=webp&s=782a5f7d0cf9352740ba97461650edd186a5897a', 'width': 216}, {'height': 139, 'url': 'https://external-preview.redd.it/FAEE2vHQMykokB1w_d4H7ixB1j7vm7rWGJhNhlN8unQ.jpg?width=320&crop=smart&auto=webp&s=9b08b042cf213b41f585a670f30d68772b033b8b', 'width': 320}, {'height': 278, 'url': 'https://external-preview.redd.it/FAEE2vHQMykokB1w_d4H7ixB1j7vm7rWGJhNhlN8unQ.jpg?width=640&crop=smart&auto=webp&s=6e24d38db8f84fe4fd0431e01181bb89d5cbfcf4', 'width': 640}], 'source': {'height': 355, 'url': 'https://external-preview.redd.it/FAEE2vHQMykokB1w_d4H7ixB1j7vm7rWGJhNhlN8unQ.jpg?auto=webp&s=f5e7d50f4eef6e7c676bcba76b6c7d82e64fcea0', 'width': 817}, 'variants': {}}]} |
Best commercially viable method to ask questions against a set of ~30 PDFs? | 12 | I would like to create a prototype for users to be able to ask a GPT questions about company policies and procedures. I know Mosaic is commercially viable, and there are tools like PrivateGPT that work with GPT4All (not Mosaic). Is there something similar for Mosaic?
I guess for a prototype it doesn't *have* to be Mosaic, but I felt the PrivateGPT/GPT4All results were a bit murky, personally. | 2023-06-28T16:18:45 | https://www.reddit.com/r/LocalLLaMA/comments/14ldtaj/best_commercially_viable_method_to_ask_questions/ | HyBReD | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ldtaj | false | null | t3_14ldtaj | /r/LocalLLaMA/comments/14ldtaj/best_commercially_viable_method_to_ask_questions/ | false | false | self | 12 | null |
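One hedged sketch of a local retrieval setup over a folder of PDFs, assuming mid-2023 LangChain import paths, FAISS for the vector store, and an MPT instruct model served through a transformers pipeline; the folder path and model names are illustrative, not a recommendation:

```python
# Hedged sketch: retrieval-augmented QA over ~30 policy PDFs with LangChain.
# Assumes langchain, faiss-cpu, sentence-transformers, pypdf and transformers
# are installed; "policies/" and the model ids are placeholders.
from langchain.document_loaders import PyPDFDirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA
from langchain.llms import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

docs = PyPDFDirectoryLoader("policies/").load()                      # the ~30 policy PDFs
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

emb = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
db = FAISS.from_documents(chunks, emb)                               # local vector index

model_id = "mosaicml/mpt-7b-instruct"                                # commercially usable; swap for 30B if VRAM allows
tok = AutoTokenizer.from_pretrained(model_id)
mdl = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, device_map="auto")
pipe = pipeline("text-generation", model=mdl, tokenizer=tok, max_new_tokens=256)

qa = RetrievalQA.from_chain_type(
    llm=HuggingFacePipeline(pipeline=pipe),
    retriever=db.as_retriever(search_kwargs={"k": 4}),
)
print(qa.run("How many days of PTO do new employees get?"))
```

The same skeleton works with any commercially licensed model; only `model_id` and the prompt format change.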
Why not use dynamic positional embeddings? | 26 | [removed] | 2023-06-28T16:27:33 | https://www.reddit.com/r/LocalLLaMA/comments/14le1i4/why_not_use_dynamic_positional_embeddings/ | cstein123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14le1i4 | false | null | t3_14le1i4 | /r/LocalLLaMA/comments/14le1i4/why_not_use_dynamic_positional_embeddings/ | false | false | default | 26 | null |
Why not use dynamic positional embeddings? | 9 | [removed] | 2023-06-28T16:27:38 | https://www.reddit.com/r/LocalLLaMA/comments/14le1ku/why_not_use_dynamic_positional_embeddings/ | cstein123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14le1ku | false | null | t3_14le1ku | /r/LocalLLaMA/comments/14le1ku/why_not_use_dynamic_positional_embeddings/ | false | false | default | 9 | null |
Tree of thoughts built into an open-source model | 12 | Hi guys,
Do you know if an open-source model with built-in tree of thought exists? And if so, can we try it?
If you want to know more about tree of thought, I put a link below 👇: | 2023-06-28T16:31:07 | https://www.promptingguide.ai/techniques/tot | Savings_Scholar | promptingguide.ai | 1970-01-01T00:00:00 | 0 | {} | 14le4ti | false | null | t3_14le4ti | /r/LocalLLaMA/comments/14le4ti/tree_of_thoughts_build_in_opensource_model/ | false | false | default | 12 | null |
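As far as I know there is no open model with ToT "built in"; tree of thoughts is normally implemented as a search loop wrapped around an ordinary model. A toy sketch of that loop, where `generate` and `score` are hypothetical placeholders for calls to whatever local model you run:

```python
# Toy breadth-first tree-of-thoughts loop; `generate` and `score` are
# hypothetical stand-ins for calls to a local LLM (e.g. via llama-cpp-python).
from typing import List

def generate(prompt: str, n: int) -> List[str]:
    raise NotImplementedError("call your local model here, returning n candidate thoughts")

def score(question: str, chain: str) -> float:
    raise NotImplementedError("ask the model to rate how promising this partial solution is")

def tree_of_thoughts(question: str, depth: int = 3, branch: int = 3, beam: int = 2) -> str:
    frontier = [""]                                    # partial chains of thought
    for _ in range(depth):
        candidates = []
        for chain in frontier:
            prompt = f"{question}\n{chain}"
            for thought in generate(prompt, branch):   # expand each chain
                candidates.append(chain + "\n" + thought)
        # keep only the most promising chains (beam search over thoughts)
        candidates.sort(key=lambda c: score(question, c), reverse=True)
        frontier = candidates[:beam]
    return frontier[0]
```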
are there any tools or frameworks similar to "langchain" or "llamaindex" but implemented or designed in a language other than python? | 1 | I'm currently trying to add private information locally to llama.cpp and LLamaSharp. But from what I've seen, the only options for doing that (at least publicly available with enough documentation) are tools like langchain or llamaindex. The problem is that all the documentation and implementation is done in Python.
You see, I'm trying to create something involving interactive characters in Unity, or possibly something with OpenGL as part of a separate engine. Completely unrelated, but I'm not quite familiar with Python (and can't get used to it), and I would like to code everything in C++ or C#.
Does anyone know if there are any alternatives at all to langchain or llamaindex? | 2023-06-28T17:36:25 | https://www.reddit.com/r/LocalLLaMA/comments/14lfsy4/are_there_anytools_or_frameworks_similar_to/ | Just-Contribution834 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14lfsy4 | false | null | t3_14lfsy4 | /r/LocalLLaMA/comments/14lfsy4/are_there_anytools_or_frameworks_similar_to/ | false | false | self | 1 | null |
13B Lora finetuning not doing anything | 1 | [removed] | 2023-06-28T17:56:27 | https://www.reddit.com/r/LocalLLaMA/comments/14lgaw7/13b_lora_finetuning_not_doing_anything/ | gptzerozero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14lgaw7 | false | null | t3_14lgaw7 | /r/LocalLLaMA/comments/14lgaw7/13b_lora_finetuning_not_doing_anything/ | false | false | default | 1 | null |