title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns]) | url (string, 0-780 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns]) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646-1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213 chars) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, nullable) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Open-Source Text Generation & LLM Ecosystem (new HF blog post) | 1 | 2023-07-17T18:06:24 | https://huggingface.co/blog/os-llms | kryptkpr | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 15299w7 | false | null | t3_15299w7 | /r/LocalLLaMA/comments/15299w7/opensource_text_generation_llm_ecosystem_new_hf/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ztoE2hO11waZKIVzDDyboQdgYQ2heusbYiriIqWFfpo', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/GoTuRfiGaGVdEInGNG-WcNbRRvwVRPZtg7zk2kXf3YU.jpg?width=108&crop=smart&auto=webp&s=f2d90356cefe990c7c5a6a3932f99505a61dd523', 'width': 108}, {'height': 107, 'url': 'https://external-preview.redd.it/GoTuRfiGaGVdEInGNG-WcNbRRvwVRPZtg7zk2kXf3YU.jpg?width=216&crop=smart&auto=webp&s=46a3700eabb30b12944383f0066434f9b8ad65ab', 'width': 216}, {'height': 159, 'url': 'https://external-preview.redd.it/GoTuRfiGaGVdEInGNG-WcNbRRvwVRPZtg7zk2kXf3YU.jpg?width=320&crop=smart&auto=webp&s=fdf0a4019507aaed895f87cf24442ac4684c6b39', 'width': 320}, {'height': 319, 'url': 'https://external-preview.redd.it/GoTuRfiGaGVdEInGNG-WcNbRRvwVRPZtg7zk2kXf3YU.jpg?width=640&crop=smart&auto=webp&s=1797f72eb8115ecdef1f3ec204f3a229a09e5393', 'width': 640}, {'height': 478, 'url': 'https://external-preview.redd.it/GoTuRfiGaGVdEInGNG-WcNbRRvwVRPZtg7zk2kXf3YU.jpg?width=960&crop=smart&auto=webp&s=e54ec901340304b2aae2c72633437f438290ca77', 'width': 960}, {'height': 538, 'url': 'https://external-preview.redd.it/GoTuRfiGaGVdEInGNG-WcNbRRvwVRPZtg7zk2kXf3YU.jpg?width=1080&crop=smart&auto=webp&s=b6986808a5b62158c9a9fc26309a17e2db22012b', 'width': 1080}], 'source': {'height': 664, 'url': 'https://external-preview.redd.it/GoTuRfiGaGVdEInGNG-WcNbRRvwVRPZtg7zk2kXf3YU.jpg?auto=webp&s=aa0b5275c2bbe3772abd5a4514901d445c2e343f', 'width': 1332}, 'variants': {}}]} |
||
Best model for text adventure games | 1 | Mainly the title. Currently I'm using airoboros 65B and it's good.
Just checking if novel writer models would be better suited for that purpose or if there are any specific ones I should try | 2023-07-17T18:20:50 | https://www.reddit.com/r/LocalLLaMA/comments/1529n3f/best_model_for_text_adventure_games/ | yehiaserag | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1529n3f | false | null | t3_1529n3f | /r/LocalLLaMA/comments/1529n3f/best_model_for_text_adventure_games/ | false | false | self | 1 | null |
llama.cpp vs llama | 1 | 2023-07-17T18:40:11 | https://github.com/ggerganov/llama.cpp | o2sh | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 152a55u | false | null | t3_152a55u | /r/LocalLLaMA/comments/152a55u/llamacpp_vs_llama/ | false | false | 1 | null |
||
llama.cpp and meta's llama repository summaries | 1 | 2023-07-17T18:59:50 | https://github.com/ggerganov/llama.cpp | o2sh | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 152ant2 | false | null | t3_152ant2 | /r/LocalLLaMA/comments/152ant2/llamacpp_and_metas_llama_repository_summaries/ | false | false | 1 | null |
||
FlashAttention-2 released - 2x faster than FlashAttention v1 | 1 | 2023-07-17T19:40:50 | https://twitter.com/tri_dao/status/1680987580228308992 | GlobalRevolution | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 152bqyz | false | {'oembed': {'author_name': 'Tri Dao', 'author_url': 'https://twitter.com/tri_dao', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">The tech report has all the info: <a href="https://t.co/E5FZ3j1mDB">https://t.co/E5FZ3j1mDB</a><br><br>More details in blogposts:<a href="https://t.co/hh2yGicgOe">https://t.co/hh2yGicgOe</a><a href="https://t.co/ANwdH0fgMs">https://t.co/ANwdH0fgMs</a><a href="https://t.co/EjeYlGmBuL">https://t.co/EjeYlGmBuL</a><br><br>FlashAttention-2 is available in the open source: <a href="https://t.co/b3RaWgoFbE">https://t.co/b3RaWgoFbE</a><br>2/</p>— Tri Dao (@tri_dao) <a href="https://twitter.com/tri_dao/status/1680987580228308992?ref_src=twsrc%5Etfw">July 17, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/tri_dao/status/1680987580228308992', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_152bqyz | /r/LocalLLaMA/comments/152bqyz/flashattention2_released_2x_faster_than/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'k7mjeWDDJ8wP05eNJRUDbk0F44stdg3CNDuMNhFXkXk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/WjwY3wLCBZji3XZZN-YJzjFQMEuVspb8_mmhUpUHvCE.jpg?width=108&crop=smart&auto=webp&s=b744c0734de36da702c0fbea2a048744d59c32bb', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/WjwY3wLCBZji3XZZN-YJzjFQMEuVspb8_mmhUpUHvCE.jpg?auto=webp&s=a6da09c0d50f0256d0772c5425b063e0f2e83824', 'width': 140}, 'variants': {}}]} |
||
Running LLMs locally on Android | 1 | [removed] | 2023-07-17T21:12:23 | https://www.reddit.com/r/LocalLLaMA/comments/152e78s/running_llms_locally_on_android/ | atezan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152e78s | false | null | t3_152e78s | /r/LocalLLaMA/comments/152e78s/running_llms_locally_on_android/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'D7uTH5s4LDVjda6kEL6oSgL5gomOBRMEcuuJOPfKvF4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/GkIpmzG_O5Tnr8KKPORXQ7LgZSCuDcFWzJfY1153Vo8.jpg?width=108&crop=smart&auto=webp&s=51be021f144a7b76cf0827775a02f301859b9000', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/GkIpmzG_O5Tnr8KKPORXQ7LgZSCuDcFWzJfY1153Vo8.jpg?width=216&crop=smart&auto=webp&s=92169fcdd3c39c0dd72458d6e32f0d5be5fdd91d', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/GkIpmzG_O5Tnr8KKPORXQ7LgZSCuDcFWzJfY1153Vo8.jpg?width=320&crop=smart&auto=webp&s=77526e71a23f5b5c402f0fe4b7e1c1b7201725ba', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/GkIpmzG_O5Tnr8KKPORXQ7LgZSCuDcFWzJfY1153Vo8.jpg?width=640&crop=smart&auto=webp&s=39465a4f24c4efa9ab6599882cd6c9edebb9e346', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/GkIpmzG_O5Tnr8KKPORXQ7LgZSCuDcFWzJfY1153Vo8.jpg?width=960&crop=smart&auto=webp&s=df67842b7635d3a066292560590e07166bfef21f', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/GkIpmzG_O5Tnr8KKPORXQ7LgZSCuDcFWzJfY1153Vo8.jpg?width=1080&crop=smart&auto=webp&s=b2e76dd4b5a08eaecca3647475a4683e5e69e00e', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/GkIpmzG_O5Tnr8KKPORXQ7LgZSCuDcFWzJfY1153Vo8.jpg?auto=webp&s=a508b8d15236d9440f5744bf9f71b342d3e7ccd1', 'width': 1200}, 'variants': {}}]} |
Falcon ggml/ggcc with langchain | 1 | To load falcon models with the new ggcc file format, which is similar to ggml, I'm using this tool:
https://github.com/cmp-nct/ggllm.cpp
Which is a fork of:
https://github.com/ggerganov/llama.cpp
The thing is that it just gives me a binary that loads a model and does inference right in the terminal.
To load the falcon ggml model with langchain I use CTransformers, but for the ggcc file format I just have the above-named binary.
Does anyone knows how to use it with langchain?Do I have to make a python wrapper? | 2023-07-17T23:22:41 | https://www.reddit.com/r/LocalLLaMA/comments/152hhr0/falcon_ggmlggcc_with_langchain/ | No_Afternoon_4260 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152hhr0 | false | null | t3_152hhr0 | /r/LocalLLaMA/comments/152hhr0/falcon_ggmlggcc_with_langchain/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '0JEtiLK8NJ3zi7bo0MeYxbcPjTwbq_7FdrNa9wx1tSY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/dlpXFUlicImbwR2WGQY47tNMsJLiJSPDjc8agl6zkC0.jpg?width=108&crop=smart&auto=webp&s=e433fd2285545ffa8063029144581d1264da1bcb', 'width': 108}], 'source': {'height': 192, 'url': 'https://external-preview.redd.it/dlpXFUlicImbwR2WGQY47tNMsJLiJSPDjc8agl6zkC0.jpg?auto=webp&s=10ccaf598dcf62c1b8e8acd5263a13e1b4afa568', 'width': 192}, 'variants': {}}]} |
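A thin Python wrapper is indeed the usual route for the wrapper question above: LangChain supports custom LLM classes, so you can shell out to the ggllm.cpp binary from `_call`. Below is only a hedged sketch; the binary name (`falcon_main`), its flags, and the model path are assumptions to adapt to your build, and the custom-LLM interface can differ slightly between LangChain versions.

```python
import subprocess
from typing import List, Optional

from langchain.llms.base import LLM


class FalconGGCC(LLM):
    # Wrap a ggllm.cpp-style CLI binary as a LangChain LLM (sketch only).
    binary_path: str = "./falcon_main"                     # assumed binary name
    model_path: str = "models/falcon-40b.ggccv1.q4_k.bin"  # assumed model file

    @property
    def _llm_type(self) -> str:
        return "falcon-ggcc-subprocess"

    def _call(self, prompt: str, stop: Optional[List[str]] = None, **kwargs) -> str:
        # One process per prompt; capture whatever the binary prints to stdout.
        result = subprocess.run(
            [self.binary_path, "-m", self.model_path, "-p", prompt, "-n", "256"],
            capture_output=True, text=True, check=True,
        )
        text = result.stdout
        # Crude stop-sequence handling on the captured output.
        if stop:
            for s in stop:
                text = text.split(s)[0]
        return text


llm = FalconGGCC()
print(llm("Explain what a GGCC file is in one sentence."))
```

This won't be fast (the model reloads on every call), so a persistent server or proper Python bindings would be the longer-term fix, but it is enough to plug the binary into LangChain chains.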
I've uploaded some 33B models with extended context up to 16384 (16K) tokens using bhenrym14 LoRA! (FP16 and GPTQ) | 1 | Hi there guys! I'm making this post to share info about these merges of 33B models extended to 16K context.
​
You can find the models in my profile on HF, ending with "lxctx-PI-16384-LoRA" for FP16, and "lxctx-PI-16384-LoRA-4bit-32g" for GPTQ. My profile is [here](https://huggingface.co/Panchovix).
​
The models I have merged so far are:

* Wizard-Vicuna-30B-Uncensored
* Guanaco-33B
* Tulu-30B (GPTQ being uploaded)
* GPlatty-30B (FP16 and GPTQ being uploaded)
* Airoboros-33b-gpt4-1.2 (in queue after GPlatty-30B)
​
You can find Airoboros-33b-gpt4-1.4.1 + 16K context on GPTQ [here](https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-lxctx-PI-16384-GPTQ).
​
I have planned some more models, but HF has limited my download speed (well, I downloaded a lot of models), so I'm capped at 2.5MB/s and 33B models take ages to download. But anyway, do you have a suggested model to extend to 16384 tokens of context?
​
----------------------------
​
The LoRA was made by bhenrym14 ([https://huggingface.co/bhenrym14](https://huggingface.co/bhenrym14)) using the airoboros 1.4.1 dataset with the RoPE scaling patch (for 16K context). It was also built on top of a 16K-context pretraining LoRA, [https://huggingface.co/bhenrym14/llama-33b-lxctx-PI-16384-LoRA](https://huggingface.co/bhenrym14/llama-33b-lxctx-PI-16384-LoRA).
For the model merges, they were done in this order:
FP16 base model -> merge base model with 16K pretrain LoRA -> get merged model -> merge merged model with final 16K LoRA -> final merged model.
The merges were done on a Ryzen 7 7800X3D, 64GB RAM and 200GB swap from a SSD PCI-E 4.0 drive. Each final merge takes about 30-40 min.
If you see a "Uploaded Fixed FP16 model" on HF, it is because I first merged the base model with the final 16K LoRA, which gave corrupted/gibberish outputs as results. So the Fixed model is with both the pretrained 16K and final 16K LoRA.
​
----------------------------
​
For the full FP16 model directly via transformers, you have to apply the monkeypatch included with the FP16 models on HF. (If using ooba, you can do this in modules/training.py.)
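If you're curious what the monkeypatch actually does, the core of linear RoPE scaling (position interpolation) is just dividing the position indices by the scale factor (8 here, i.e. 2048 -> 16384) before building the rotary sin/cos tables. This is a conceptual sketch, not the exact patch file shipped with the FP16 repos:

```python
import torch

def scaled_rope_tables(dim: int, max_positions: int = 16384, scale: float = 8.0,
                       base: float = 10000.0):
    # Standard RoPE frequencies...
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    # ...but positions are compressed by `scale`, so 16384 real positions map
    # onto the 0..2048 range the model was pretrained on.
    t = torch.arange(max_positions).float() / scale
    freqs = torch.outer(t, inv_freq)
    emb = torch.cat((freqs, freqs), dim=-1)
    return emb.cos(), emb.sin()

cos, sin = scaled_rope_tables(dim=128)  # 128 = per-head dim for LLaMA 33B
print(cos.shape)  # torch.Size([16384, 128])
```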
​
Be careful: for FP16, you will need about ~85GB of VRAM or more to use 16K context.
​
This monkeypatch is needed as well if you want to use 4-bit bitsandbytes. I suggest using NF4 and double_quant.
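As a reference for the bitsandbytes route, loading with NF4 + double quantization via transformers looks roughly like this (the model path is a placeholder, and the RoPE monkeypatch still has to be applied first):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",          # NF4, as suggested above
    bnb_4bit_use_double_quant=True,     # double_quant
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained("path/to/33b-lxctx-PI-16384-merged")
model = AutoModelForCausalLM.from_pretrained(
    "path/to/33b-lxctx-PI-16384-merged",
    quantization_config=bnb_config,
    device_map="auto",
)
```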
​
----------------------------
For quantized models, I've uploaded some quants with group size 32, sequential, and act order true, since those give the closest perplexity to the FP16 model, and (barely) fit on 2x24 GB VRAM GPUs using exllama.
For exllama/exllama_HF, you have to set embedding compression to 8 and max context to 16384. (Directly on exllama, this is -d 'model_path' -l 16384 -cpe 8) (On ooba, you can set them on the UI)
​
I really don't suggest GPTQ-for-LLaMa for this, mostly because of higher VRAM usage, and with group size + act order at the same time it will kill performance. If you want to use it anyway, you have to apply the monkeypatch inside the quantized model folder.
​
I haven't managed to make it work on AutoGPTQ, but it should, since SuperHOT models work there as well.
​
On my system, 2x4090 (total 48GB VRAM):
​
* Transformers 4bit BNB: OOM trying to use > 6-7K context.
* GPTQ For LLaMa: OOM trying to use >10k context.
* exllama/exllama_hf: usable up to the 16K context.
​
For exllama, if using 2x24 GB GPUs, use

gpu_split = 8.8, 9.2

to avoid OOM.
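Putting the exllama settings together (compress_pos_emb = 8, 16K context, and the gpu_split above), here is a hedged sketch using exllama's Python classes directly, run from inside the exllama repo; class, attribute, and method names follow the repo at the time of writing and may change:

```python
from model import ExLlama, ExLlamaCache, ExLlamaConfig
from tokenizer import ExLlamaTokenizer
from generator import ExLlamaGenerator

config = ExLlamaConfig("/models/33b-lxctx-16384-4bit-32g/config.json")
config.model_path = "/models/33b-lxctx-16384-4bit-32g/model.safetensors"
config.max_seq_len = 16384        # -l 16384
config.compress_pos_emb = 8.0     # -cpe 8 (embedding compression)
config.set_auto_map("8.8,9.2")    # gpu_split for 2x24 GB cards

model = ExLlama(config)
tokenizer = ExLlamaTokenizer("/models/33b-lxctx-16384-4bit-32g/tokenizer.model")
generator = ExLlamaGenerator(model, tokenizer, ExLlamaCache(model))

print(generator.generate_simple("Hi there, my name is", max_new_tokens=32))
```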
​
VRAM usage on a single 48GB VRAM GPU is lower than multiGPU. (RTX Quadro 8000, RTX A6000, RTX A6000 Ada, etc)
​
----------------------------
​
Small example: I sent this paper (more than 12000 tokens) to the assistant on Ooba (using the perplexity extension), [here](https://pastebin.com/f1kRKLw8), but added "Hi there, my name is Pancho" as the first phrase.
This example was done with Wizard-Vicuna-30B-Uncensored-lxctx-PI-16384-LoRA-4bit-32g.
​
Output generated in 24.85 seconds (7.32 tokens/s, 182 tokens, context 12873, seed 958472837)
Average perplexity: 3.0319
​
And the answer looks like this:
​
[Summary of the paper](https://preview.redd.it/u8k2dgq1vlcb1.png?width=1156&format=png&auto=webp&s=c4310421bf1f23836d5e30f7e163ab7967570555)
​
[Asking the name I mentioned in the first phrase](https://preview.redd.it/4yjk17y2vlcb1.png?width=688&format=png&auto=webp&s=f6d40a989b6cab95cc0b9eaa20df801047e2e309)
----------------------------
​
At the moment, I haven't done GGML quants (sadly I don't know how to do them yet), so I'm not sure how much RAM is needed. | 2023-07-17T23:37:18 | https://www.reddit.com/r/LocalLLaMA/comments/152hudn/ive_uploaded_some_33b_models_with_extended/ | panchovix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152hudn | false | null | t3_152hudn | /r/LocalLLaMA/comments/152hudn/ive_uploaded_some_33b_models_with_extended/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'OmJ2YIgaV9Z4EA8790ooSFw3MeB_MqqU_mgScdu7Oi4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wCC4k8kik-r1Oe_hkAmZt8ha_zvstLjkjCbhaN1xmxA.jpg?width=108&crop=smart&auto=webp&s=b9a9640fcab472b3e61358def747d9f36f05f24b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/wCC4k8kik-r1Oe_hkAmZt8ha_zvstLjkjCbhaN1xmxA.jpg?width=216&crop=smart&auto=webp&s=ecd2ef5b7cf34caf05c9cad390a6b91d1d854d75', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/wCC4k8kik-r1Oe_hkAmZt8ha_zvstLjkjCbhaN1xmxA.jpg?width=320&crop=smart&auto=webp&s=c8baa9b96fefacebb3c4f2ad4712b02cf66fa8d9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/wCC4k8kik-r1Oe_hkAmZt8ha_zvstLjkjCbhaN1xmxA.jpg?width=640&crop=smart&auto=webp&s=5fabe2a0cd2717e236e6ddf7780ccfe29fc18933', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/wCC4k8kik-r1Oe_hkAmZt8ha_zvstLjkjCbhaN1xmxA.jpg?width=960&crop=smart&auto=webp&s=3ff32e7933f6c78ef18938b581e6b5f28a1874f2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/wCC4k8kik-r1Oe_hkAmZt8ha_zvstLjkjCbhaN1xmxA.jpg?width=1080&crop=smart&auto=webp&s=2502ab7883bda5b97808b6abf753335d5e947eb1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/wCC4k8kik-r1Oe_hkAmZt8ha_zvstLjkjCbhaN1xmxA.jpg?auto=webp&s=576b608d0ff5b4c3d2d32899b7fcdd5030adf9f9', 'width': 1200}, 'variants': {}}]} |
|
[Python] Experiment across local models and parameters with prompttools | 1 | 2023-07-18T01:05:28 | https://github.com/hegelai/prompttools/blob/main/examples/notebooks/LlamaCppExperiment.ipynb | hegel-ai | github.com | 1970-01-01T00:00:00 | 0 | {} | 152jvf4 | false | null | t3_152jvf4 | /r/LocalLLaMA/comments/152jvf4/python_experiment_across_local_models_and/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'GiVVL37CiJa3xVZzNf2Lp0J7JBgY8wPV2_XD0FrNatM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KJGH0qVNlbFuFIuP6uYSPBkxWI7lfkt1mktQ_BuFuwY.jpg?width=108&crop=smart&auto=webp&s=a806ebb2ac2ee03b9d8d926c95fed44107be3b69', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KJGH0qVNlbFuFIuP6uYSPBkxWI7lfkt1mktQ_BuFuwY.jpg?width=216&crop=smart&auto=webp&s=fb42f1f484171d1e55e6a4d64023c241cbc7ea45', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KJGH0qVNlbFuFIuP6uYSPBkxWI7lfkt1mktQ_BuFuwY.jpg?width=320&crop=smart&auto=webp&s=f88b45d742f843829f7fd865965223bca4831feb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KJGH0qVNlbFuFIuP6uYSPBkxWI7lfkt1mktQ_BuFuwY.jpg?width=640&crop=smart&auto=webp&s=d767c2a6ef93c6327e49d7b101e788bca7ed080e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KJGH0qVNlbFuFIuP6uYSPBkxWI7lfkt1mktQ_BuFuwY.jpg?width=960&crop=smart&auto=webp&s=570138622f2a9e7b957ef775a66f8c227ad3f1d5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KJGH0qVNlbFuFIuP6uYSPBkxWI7lfkt1mktQ_BuFuwY.jpg?width=1080&crop=smart&auto=webp&s=b38c9ed0799881579e57f3ae40f2126e67de965c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KJGH0qVNlbFuFIuP6uYSPBkxWI7lfkt1mktQ_BuFuwY.jpg?auto=webp&s=c6bc74b8eb8a0fee4fa56c886f360667fd7e5bc9', 'width': 1200}, 'variants': {}}]} |
||
Some thoughts on alignment and creativity | 1 | I've been thinking about alignment of models today, or to call it what it literally means: fine-tuning models to return responses that protect against human harm.
In particular, what is spinning my mind lately is: how does that get achieved internally when an LLM is a probabilistic model?
How much logic or decision space is given when deciding what word to return next? To my knowledge there isn't a train of thought or chain of reasoning happening internally, which is why we have to outsource this effort and provide a chain of reasoning in prompts. OK, so what does this mean?
It means that what's driving models towards one answer path vs another is just sampling from the distribution of words, where the distribution is a set of "words that a human would rate as a response they are happy with".
Why am I talking about this? Time to make a leap. I would surmise that when humans create art, they are doing so by making decision points that break laws and expectations. Those expectations sit in a set of "what is Comfortable, Non-challenging, Non-Offensive".
Moving through life, we have distractions, stressors and obstacles to avoid. When we encounter something that produces a negative emotional reaction, we instinctively avoid that thing next time. So moving forward through life is an optimisation process of avoiding elements that fall outside of the Set Of Expectations. We generally don't need governments or authority figures to sanitise our lives. We do that ourselves.
Have you ever felt like you were more creative as a child? Before your process of self-sanitization brought you kicking and screaming into adulthood, where most days feel exactly the same.
So what is Art?
Art is the product that eventuates from a need to kick up the dust and provide a source of dysfunction, and we recognise this internally as a decision point that breaks accepted laws and expectations. Although we self-sanitise, we have a need for chaos and stochastic thought, as this is how humans arrive at new ideas. Without art we would stagnate, as there would be no source of chaos, and no new source of stochastic thought.
So back to LLMs.
My observation has been that the raw unaligned models do a much, MUCH better job at producing predicted words that at least look and feel creative. I believe this is attributable entirely to the effort of fine-tuning for human safety.
The easiest explanation is that the models learn that, mathematically, the distribution of good tokens for next-word prediction consists of words that will always sit within the Set Of Expectations, i.e. only words that will not offend, challenge or produce emotional reactions. This therefore kills creativity, as it is not possible to assign a score to a token that challenges a human without putting one or more humans at risk. So the model avoids them entirely.
To prove this point I'll now ask ChatGPT to provide the more comprehensive explanation of this phenomenon, now that I have provided the simple explanation, and then ask the same question of the unaligned gpt-3.5-turbo model version 0301.
While I can't post the chat responses here, I assure you the response from the unaligned model is more challenging in its point of view.
AI alignment kills creativity either intentionally or unintentionally. If you were an AI company, worried about your creation handing the keys to innovation to your competitor, you'd feel the need to inhibit this model as well. | 2023-07-18T01:34:49 | https://www.reddit.com/r/LocalLLaMA/comments/152ki9c/some_thoughts_on_alignment_and_creativity/ | CrysisAverted | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152ki9c | false | null | t3_152ki9c | /r/LocalLLaMA/comments/152ki9c/some_thoughts_on_alignment_and_creativity/ | false | false | self | 1 | null |
Current, comprehensive guide to installing llama.cpp and llama-cpp-python on Windows? | 1 | Hi, all,
Title, basically. Does anyone happen to have a link? I spent hours banging my head against outdated documentation, conflicting forum posts and Git issues, make, CMake, Python, Visual Studio, CUDA, and Windows itself today, just trying to get llama.cpp and llama-cpp-python to bloody compile with GPU acceleration. I will admit that I have much more experience with scripting than with programs that you actually need to compile, but I swear to God, it just does not need to be this difficult. If anyone could provide an up-to-date guide that will actually get me a working OobaBooga installation with GPU acceleration, I would be eternally grateful.
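For what it's worth, the step that most often goes wrong is building the wheel without cuBLAS. The usual recipe (treat the exact flags as assumptions that can change between releases) is to set CMAKE_ARGS="-DLLAMA_CUBLAS=on" and FORCE_CMAKE=1 in the shell before running pip install llama-cpp-python --no-cache-dir, then sanity-check GPU offload from Python:

```python
# If the build picked up cuBLAS, offloaded layers show up in the load log and
# generation should be much faster than a CPU-only build.
from llama_cpp import Llama

llm = Llama(
    model_path="models/your-model.ggmlv3.q4_0.bin",  # placeholder path
    n_gpu_layers=40,  # only has an effect if the cuBLAS build succeeded
)
out = llm("Q: What is the capital of France? A:", max_tokens=16)
print(out["choices"][0]["text"])
```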
Right now, I'm trying to decide between just sticking with KoboldCPP (even though it doesn't support mirostat properly with SillyTavern), dealing with ExLlama on Ooba (which does, but is slower for me than Kobold), or just saying "to hell with it" and switching to Linux. Again.
Apologies, rant over. | 2023-07-18T01:40:48 | https://www.reddit.com/r/LocalLLaMA/comments/152kn39/current_comprehensive_guide_to_to_installing/ | smile_e_face | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152kn39 | false | null | t3_152kn39 | /r/LocalLLaMA/comments/152kn39/current_comprehensive_guide_to_to_installing/ | false | false | self | 1 | null |
LLM Deployment Cost : Closed APIs like Open AI v/s fine tuned open source model on AWS | 1 | Did some number crunching on unraveling GPU compute costs.
Our research revealed that deploying a fine-tuned DialoGPT-large model on AWS is 55% less expensive, and a staggering 89% cheaper when serverless, compared to OpenAI's GPT-3.5 Turbo. Check it out here - bit.ly/46bGC9t .
Would love to hear thoughts? | 2023-07-18T02:05:06 | https://www.reddit.com/r/LocalLLaMA/comments/152l61r/llm_deployment_cost_closed_apis_like_open_ai_vs/ | Tiny_Cut_8440 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152l61r | false | null | t3_152l61r | /r/LocalLLaMA/comments/152l61r/llm_deployment_cost_closed_apis_like_open_ai_vs/ | false | false | self | 1 | null |
What's the reasonable tks/s running 30B q5 with llama.cpp (13900K + 4090) ? | 1 | ./main -m models/Wizard-Vicuna-30B-Uncensored.ggmlv3.q5_1.bin -t 16 -n 128 --n-gpu-layers 63 -ins --color
main: build = 847 (7568d1a)
main: seed = 1689647281
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9
llama.cpp: loading model from models/Wizard-Vicuna-30B-Uncensored.ggmlv3.q5_1.bin
llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 512
llama_model_load_internal: n_embd = 6656
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 52
llama_model_load_internal: n_layer = 60
llama_model_load_internal: n_rot = 128
llama_model_load_internal: freq_base = 10000.0
llama_model_load_internal: freq_scale = 1
llama_model_load_internal: ftype = 9 (mostly Q5_1)
llama_model_load_internal: n_ff = 17920
llama_model_load_internal: model size = 30B
llama_model_load_internal: ggml ctx size = 0.14 MB
llama_model_load_internal: using CUDA for GPU acceleration
llama_model_load_internal: mem required = 2253.49 MB (+ 3124.00 MB per state)
llama_model_load_internal: allocating batch_size x (768 kB + n_ctx x 208 B) = 436 MB VRAM for the scratch buffer
llama_model_load_internal: offloading 60 repeating layers to GPU
llama_model_load_internal: offloading non-repeating layers to GPU
llama_model_load_internal: offloading v cache to GPU
llama_model_load_internal: offloading k cache to GPU
llama_model_load_internal: offloaded 63/63 layers to GPU
llama_model_load_internal: total VRAM used: 26677 MB
llama_new_context_with_model: kv self size = 780.00 MB
system_info: n_threads = 16 / 32 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 |
llama_print_timings: load time = 4461.41 ms
llama_print_timings: sample time = 34.20 ms / 115 runs ( 0.30 ms per token, 3362.47 tokens per second)
llama_print_timings: prompt eval time = 79546.25 ms / 33 tokens ( 2410.49 ms per token, 0.41 tokens per second)
llama_print_timings: eval time = 20864.67 ms / 115 runs ( 181.43 ms per token, 5.51 tokens per second)
llama_print_timings: total time = 115316.33 ms
I am new to running local LLaMA; I just got the hardware upgraded to be able to run it with proper performance.
Curious whether there's any parameter tuning that can help me do inference faster. Should I be using 32 threads? And how many layers should I offload to the GPU?
Any benchmark that I can run to see if I am getting the reasonable performance? Thank! | 2023-07-18T02:36:05 | https://www.reddit.com/r/LocalLLaMA/comments/152lul7/whats_the_reasonable_tkss_running_30b_q5_with/ | dostorm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152lul7 | false | null | t3_152lul7 | /r/LocalLLaMA/comments/152lul7/whats_the_reasonable_tkss_running_30b_q5_with/ | false | false | self | 1 | null |
GPT4all and koboldcpp/etc | 1 | Try as I might, nothing seems to generate roleplay for me as well as gpt4all. I can use the same LLMs, from Wizard uncensored to airoboros; it's not even close.
I am roleplaying mostly from an instruct POV, so I will tell it to generate characters or ask it what the room looked like, etc.
How are you all getting good results from kobold, sillytavern, etc? | 2023-07-18T02:37:36 | https://www.reddit.com/r/LocalLLaMA/comments/152lvpk/gpt4all_and_koboldcppetc/ | Jenniher | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152lvpk | false | null | t3_152lvpk | /r/LocalLLaMA/comments/152lvpk/gpt4all_and_koboldcppetc/ | false | false | self | 1 | null |
Can Taichi lang be another option for consumer devices to run LLMs? | 1 | As title. [https://docs.taichi-lang.cn/api/](https://docs.taichi-lang.cn/api/) | 2023-07-18T03:17:46 | https://www.reddit.com/r/LocalLLaMA/comments/152mqyu/can_tachi_lang_be_another_option_for_consumer/ | saraiqx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152mqyu | false | null | t3_152mqyu | /r/LocalLLaMA/comments/152mqyu/can_tachi_lang_be_another_option_for_consumer/ | false | false | self | 1 | null |
Looking to hire a CTO to work on local LLMs | 1 | I’m seeing a lot of demand in my space for secure and local LLMs. I have an MVP and revenue and looking for a CTO to come on board to assist with our project! Please DM if interested. | 2023-07-18T04:00:12 | https://www.reddit.com/r/LocalLLaMA/comments/152nlvz/looking_to_hire_a_cto_to_work_on_local_llms/ | SunnyPiscine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152nlvz | false | null | t3_152nlvz | /r/LocalLLaMA/comments/152nlvz/looking_to_hire_a_cto_to_work_on_local_llms/ | false | false | self | 1 | null |
Fine tuning on Apple Silicon | 1 | Has anyone tried fine tuning a model on Apple Silicon? I’m thinking of buying a Mac Studio with M2 chip but not sure if there is enough hardware support from machine learning frameworks for fine tuning like HuggingFace PEFT. | 2023-07-18T05:02:10 | https://www.reddit.com/r/LocalLLaMA/comments/152oudd/fine_tuning_on_apple_silicon/ | Acrobatic-Site2065 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152oudd | false | null | t3_152oudd | /r/LocalLLaMA/comments/152oudd/fine_tuning_on_apple_silicon/ | false | false | self | 1 | null |
LLaMA 65B on cloud | 1 | Dear All,
I want to build a Private GPT using an open source LLM on the cloud.
I'm thinking of Azure/AWS (other suggestions welcome), but the company has both, so it will be easier to get an instance there.
We want to work with our own data (not necessarily confidential, except for PII) and build a chatbot that answers queries limited to that data (not relying on external data, to limit hallucination).
​
What cloud configuration should we have? Any other recommendations/resources to look at?
Currently I don't know much about LLMs except from watching YouTube videos and trying to replicate them. All advice is welcome.
​ | 2023-07-18T05:59:27 | https://www.reddit.com/r/LocalLLaMA/comments/152pway/llama_65b_on_cloud/ | kdas22 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152pway | false | null | t3_152pway | /r/LocalLLaMA/comments/152pway/llama_65b_on_cloud/ | false | false | self | 1 | null |
Need guidance in this sea of information on how to set up a local AI | 1 | Hey, I have no idea if this is even the right sub to ask for help in, but I have no idea where to start.
I was asking the Bing AI for leads on how to setup a local/offline AI chatbot on my PC, and eventually found [this article.](https://beebom.com/how-train-ai-chatbot-using-privategpt-offline/) I followed it but when I got to the end, I realized that a dataset or source documents or something is needed for it to actually have knowledge. This is where I'm starting to get really confused.
I am essentially trying to make something like CharacterAI (won't be nearly as good, I know) that can run on my PC. Just something that is somewhat ChatGPT-like or CharacterAI-like that I can interact with without my PC melting, with perhaps the ability down the road to set up voice commands that it can respond to, like an Alexa device, but with more... intelligence.
PC specs:
* i9 13900k
* ASUS Prime Z790-P motherboard
* Corsair Vengeance 32gb RAM DDR5
* GIGABYTE RTX 3060ti 8gb GDDR6
* Corsair RM850x Power Supply
* MSI MAG 360r V2 liquid cooler
I found things like [this dataset](https://huggingface.co/datasets/nomic-ai/gpt4all-j-prompt-generations) and [LocalAI](https://github.com/go-skynet/LocalAI) and I followed the article to get [PrivateGPT](https://github.com/imartinez/privateGPT) and the GPT4ALL groovy.bin but I'm completely lost and it feels like the more I research the internet or ask BingAI for answers, the more questions I get instead. At this stage I don't know what goes where, if there's a difference between source documents and datasets, or if all this will even work on my PC.
​
Can anybody help direct me or teach me? I really want to learn. | 2023-07-18T07:18:01 | https://www.reddit.com/r/LocalLLaMA/comments/152rb1i/need_guidance_in_this_sea_of_information_on_how/ | grimsikk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152rb1i | false | null | t3_152rb1i | /r/LocalLLaMA/comments/152rb1i/need_guidance_in_this_sea_of_information_on_how/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'acC-TxVRLiKIVU53X-JZzkSVtbVHB9x96haG57DDPRw', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/SRmPayqp2-LIA5oph4waVbHWluRj3ZxOAWmfM31Q_0I.jpg?width=108&crop=smart&auto=webp&s=d9d8bdc0b6ad5e21bde662810260d958e4012ea1', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/SRmPayqp2-LIA5oph4waVbHWluRj3ZxOAWmfM31Q_0I.jpg?width=216&crop=smart&auto=webp&s=a1cdccb8c5fa1217d1c19e79ed390f65e7578b11', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/SRmPayqp2-LIA5oph4waVbHWluRj3ZxOAWmfM31Q_0I.jpg?width=320&crop=smart&auto=webp&s=3d68a902a55c4d3c73aa7e70dfd30485d1c3127d', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/SRmPayqp2-LIA5oph4waVbHWluRj3ZxOAWmfM31Q_0I.jpg?width=640&crop=smart&auto=webp&s=2851e3d5c77d188ee49d2b4947910186b7f02ac6', 'width': 640}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/SRmPayqp2-LIA5oph4waVbHWluRj3ZxOAWmfM31Q_0I.jpg?auto=webp&s=f0051e2f6fb4a45476094b3cb853facd8cfd87d5', 'width': 750}, 'variants': {}}]} |
Offtopic question: how to highlight and export Reddit comments? | 1 | At the moment I'm learning everything I can about LLMs and AI. This thread contains excellent knowledge that I'd like to extract into my Obsidian note taking app, but I fail terribly. There's virtually no read-it-later app that I can use to highlight and sync things into my Obsidian vault that can handle Reddit threads. It remains a time-consuming copy and paste workflow that is so last century. It's hilarious. I know it has to do with Reddit's restrictive API policy. But why on earth are we collecting valuable knowledge on a platform that does not allow us to make use of it in an academic workflow? Shouldn't we better migrate to a good enough forum software and show Reddit the finger? To be honest, every day I'm getting more angry about this.
Anyway, what is your solution for this? How do you extract knowledge from this great subreddit? | 2023-07-18T07:54:41 | https://www.reddit.com/r/LocalLLaMA/comments/152rxq6/offtopic_question_how_to_highlight_and_export/ | krazzmann | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152rxq6 | false | null | t3_152rxq6 | /r/LocalLLaMA/comments/152rxq6/offtopic_question_how_to_highlight_and_export/ | false | false | self | 1 | null |
When merging model and lora, use original tokenizer.model file? | 1 | [removed] | 2023-07-18T07:55:31 | https://www.reddit.com/r/LocalLLaMA/comments/152ry90/when_merging_model_and_lora_use_original/ | redzorino | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152ry90 | false | null | t3_152ry90 | /r/LocalLLaMA/comments/152ry90/when_merging_model_and_lora_use_original/ | false | false | self | 1 | null |
Retentive Network: A Successor to Transformer for Large Language Models | 1 | 2023-07-18T08:02:08 | https://arxiv.org/abs/2307.08621 | alexanderchenmh | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 152s2ij | false | null | t3_152s2ij | /r/LocalLLaMA/comments/152s2ij/retentive_network_a_successor_to_transformer_for/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
||
When is it best to use prompt engineering vs fine-tuning? | 1 | As somebody who has been experimenting with GPT (both closed-source and open-source), I have been doing both prompt engineering and fine-tuning. But I was pondering this question today: when exactly is it best to use which technique? And I realized that the answer is not straightforward. At a broad level, I can say that prompt engineering is when you need to (or can) give the "training" to GPT in a single context window, and fine-tuning is when you can't fit your instructions in a single context window and instead have to resort to constructing a dataset with input-output examples for few-shot training of the model.
But there are of course grey areas - what if you can technically provide the instruction in a single context window (of, say, 8000 tokens with an LLM that has this max token size). Then you have the option to go with either prompt engineering or fine tuning, right? So, how do you decide which one to go with? What are the best practices? Or, if there a concrete, black-and-white answer of when to use which, that's even better.
* At what threshold context size does prompt engineering begin to give diminishing returns?
* Is there a threshold number of training examples (i.e., dataset size) below which fine tuning does not yield good results, and beyond which it does?
* Does fine tuning always trump prompt engineering, regardless of the size of the training dataset?
I found the following in an OpenAI discussion forum post:
https://preview.redd.it/pt33uu7fjocb1.png?width=1256&format=png&auto=webp&s=97789deb9d8b54643803120dc8036ca3e3683fc0
I know there are always oversimplifications with analogies, but is this an accurate assessment?
​ | 2023-07-18T08:12:33 | https://www.reddit.com/r/LocalLLaMA/comments/152s9ei/when_is_it_best_to_use_prompt_engineering_vs/ | ResearcherNo4728 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152s9ei | false | null | t3_152s9ei | /r/LocalLLaMA/comments/152s9ei/when_is_it_best_to_use_prompt_engineering_vs/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'dKIxjlIcrhsEcNhzVW3pzdCRvlrIkuPNkUbhF2lsU9s', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/Nj4beHHB8enchior4NbC3OstkLa2DqqRY8h469eVGsk.png?width=108&crop=smart&auto=webp&s=916cc50a651610becce503e182f60ce35b78e08d', 'width': 108}, {'height': 145, 'url': 'https://external-preview.redd.it/Nj4beHHB8enchior4NbC3OstkLa2DqqRY8h469eVGsk.png?width=216&crop=smart&auto=webp&s=4a5afa1be439ca1556faaeca6ce8fdb928b2b66c', 'width': 216}, {'height': 215, 'url': 'https://external-preview.redd.it/Nj4beHHB8enchior4NbC3OstkLa2DqqRY8h469eVGsk.png?width=320&crop=smart&auto=webp&s=25bfdf72ee4405733e4f62e57eafee0b26ddc371', 'width': 320}, {'height': 431, 'url': 'https://external-preview.redd.it/Nj4beHHB8enchior4NbC3OstkLa2DqqRY8h469eVGsk.png?width=640&crop=smart&auto=webp&s=2d29cf620072de35d8bf273cf12785acbcb358fc', 'width': 640}, {'height': 647, 'url': 'https://external-preview.redd.it/Nj4beHHB8enchior4NbC3OstkLa2DqqRY8h469eVGsk.png?width=960&crop=smart&auto=webp&s=c79debb88a22535885d2ee33203c70c2f5267493', 'width': 960}, {'height': 728, 'url': 'https://external-preview.redd.it/Nj4beHHB8enchior4NbC3OstkLa2DqqRY8h469eVGsk.png?width=1080&crop=smart&auto=webp&s=2c4af9a118d6a858cf87dba1ff95cce4eba36d11', 'width': 1080}], 'source': {'height': 847, 'url': 'https://external-preview.redd.it/Nj4beHHB8enchior4NbC3OstkLa2DqqRY8h469eVGsk.png?auto=webp&s=f93cd304805481d7f7800b8e0d30a04a2d21491f', 'width': 1256}, 'variants': {}}]} |
|
Text to SQL LLMs | 1 | Anyone who has used text to SQL LLMs in a production setting? How do you rate their performance based on accuracy, consistency and reliability?
https://github.com/NumbersStationAI/NSQL | 2023-07-18T10:04:06 | https://www.reddit.com/r/LocalLLaMA/comments/152u7jp/text_to_sql_llms/ | oduor_c | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152u7jp | false | null | t3_152u7jp | /r/LocalLLaMA/comments/152u7jp/text_to_sql_llms/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'OmlAQPV0rfnich01HVmULjHqyoGiaSDzrsu26n9uLIk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/udpkoS52WGef_7A2UGJ19TB1uPyjKgZs_L-25v7SwKI.jpg?width=108&crop=smart&auto=webp&s=ae3fca608e2300596b6e9fccc7642576ea857882', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/udpkoS52WGef_7A2UGJ19TB1uPyjKgZs_L-25v7SwKI.jpg?width=216&crop=smart&auto=webp&s=853911891e82cbb0ca1bdca63e16a52997f7a3fd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/udpkoS52WGef_7A2UGJ19TB1uPyjKgZs_L-25v7SwKI.jpg?width=320&crop=smart&auto=webp&s=7fee4e6c78bb1a8475931aa2291ce87b7c4acff8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/udpkoS52WGef_7A2UGJ19TB1uPyjKgZs_L-25v7SwKI.jpg?width=640&crop=smart&auto=webp&s=8a13fd020b0c88fe031b2f001993e9aa0fbb5fb3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/udpkoS52WGef_7A2UGJ19TB1uPyjKgZs_L-25v7SwKI.jpg?width=960&crop=smart&auto=webp&s=c661ca1abeda74722699240829e5804e7a1c99a4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/udpkoS52WGef_7A2UGJ19TB1uPyjKgZs_L-25v7SwKI.jpg?width=1080&crop=smart&auto=webp&s=da898a80465d391f35e485a1d5ac171236ef9d6b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/udpkoS52WGef_7A2UGJ19TB1uPyjKgZs_L-25v7SwKI.jpg?auto=webp&s=8f8b91b7ddfa9c9344d04315191e306f3753ecf2', 'width': 1200}, 'variants': {}}]} |
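For reference, invoking the NSQL checkpoints linked above with plain transformers looks roughly like the sketch below; the model id and the schema-comment prompt format are assumptions based on the repo's README pattern, so verify them against the NSQL repo before relying on it:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NumbersStation/nsql-350M"  # assumed checkpoint name; check the repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Schema first, then the question as SQL comments, then "SELECT" to prime the model.
prompt = (
    "CREATE TABLE orders (id INT, customer TEXT, total FLOAT);\n\n"
    "-- Using valid SQLite, answer the following question for the table above.\n"
    "-- What is the total spend per customer?\n"
    "SELECT"
)
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```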
Do you find GPU renting worth it for a LocalLLM? | 1 | Let's be real: at the moment, next to no one can afford that much VRAM.
What are some realistic costs projections for running local LLMs that are decent and fast? | 2023-07-18T10:40:00 | https://www.reddit.com/r/LocalLLaMA/comments/152uvdv/do_you_find_gpu_renting_worth_it_for_a_localllm/ | BetterProphet5585 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152uvdv | false | null | t3_152uvdv | /r/LocalLLaMA/comments/152uvdv/do_you_find_gpu_renting_worth_it_for_a_localllm/ | false | false | self | 1 | null |
Is oobabooga the only realistically useful alternative? | 1 | title | 2023-07-18T10:41:09 | https://www.reddit.com/r/LocalLLaMA/comments/152uw94/is_oobabooga_the_only_realistically_useful/ | BetterProphet5585 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152uw94 | false | null | t3_152uw94 | /r/LocalLLaMA/comments/152uw94/is_oobabooga_the_only_realistically_useful/ | false | false | self | 1 | null |
Seeking Help to Run junelee/wizard-vicuna-13b GPT Model! | 1 |
I hope you're all having a fantastic day! I come to this amazing community seeking some assistance with running the junelee/wizard-vicuna-13b GPT model. I stumbled upon this powerful language model on Hugging Face's model hub ([**https://huggingface.co/junelee/wizard-vicuna-13b**](https://huggingface.co/junelee/wizard-vicuna-13b)), and I'm eager to experiment with it for some exciting text generation tasks.
The code to run the model is available in a Google Colab notebook, and you can access it using this link: [**https://colab.research.google.com/github/aitrepreneur/text-generation-webui/blob/main/API\_UPDATED\_WebUI%26pyg\_13b\_GPTQ\_4bit\_128g.ipynb#scrollTo=VCFOzsQSHbjM**](https://colab.research.google.com/github/aitrepreneur/text-generation-webui/blob/main/API_UPDATED_WebUI%26pyg_13b_GPTQ_4bit_128g.ipynb#scrollTo=VCFOzsQSHbjM)
However, despite my best efforts, I'm encountering some difficulties while trying to get the model up and running. That's why I'm turning to this community for help. If any of you have experience with this particular GPT model or are knowledgeable about Google Colab, your guidance would be greatly appreciated.
Here are a few specific questions I have:
1. Has anyone successfully run the junelee/wizard-vicuna-13b model? If so, what tips or tricks can you share?
2. Are there any specific requirements or dependencies that I should be aware of to run this model correctly?
3. If you've worked with Google Colab before, any insights on how to set it up properly for this particular notebook would be wonderful.
I'm sure some of you are seasoned AI enthusiasts, and I believe your expertise will be invaluable in helping me overcome these challenges.
Thank you all for your time and consideration. I'm eagerly looking forward to your replies and suggestions! | 2023-07-18T10:54:20 | https://www.reddit.com/r/LocalLLaMA/comments/152v5ja/seeking_help_to_run_juneleewizardvicuna13b_gpt/ | Nikunja___ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152v5ja | false | null | t3_152v5ja | /r/LocalLLaMA/comments/152v5ja/seeking_help_to_run_juneleewizardvicuna13b_gpt/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'M47-BaZSzgabbr8l4gnwSUeo2F1rktYBnCcuLPf2oeM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HTLb8kcnFUy-8ZrKjUTk3WZZnRkPjqZv_HEXxqepf74.jpg?width=108&crop=smart&auto=webp&s=f258b703c3a255385ec066795d80fef8c4227705', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HTLb8kcnFUy-8ZrKjUTk3WZZnRkPjqZv_HEXxqepf74.jpg?width=216&crop=smart&auto=webp&s=e996745cc0536a27cb720474e2ad6cda81543a72', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HTLb8kcnFUy-8ZrKjUTk3WZZnRkPjqZv_HEXxqepf74.jpg?width=320&crop=smart&auto=webp&s=e76198bf753c73498016edaa006519d1687f5f54', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HTLb8kcnFUy-8ZrKjUTk3WZZnRkPjqZv_HEXxqepf74.jpg?width=640&crop=smart&auto=webp&s=76027e089b5cbe06fc7c47d34f28ef81826915de', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HTLb8kcnFUy-8ZrKjUTk3WZZnRkPjqZv_HEXxqepf74.jpg?width=960&crop=smart&auto=webp&s=5c64b890b9d6a3ca4ad14e80daa4270a1f31c055', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HTLb8kcnFUy-8ZrKjUTk3WZZnRkPjqZv_HEXxqepf74.jpg?width=1080&crop=smart&auto=webp&s=c5414fec406722b4638efbbd652d6d9501275789', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HTLb8kcnFUy-8ZrKjUTk3WZZnRkPjqZv_HEXxqepf74.jpg?auto=webp&s=eb3582f98a02cf73549b136eab0b66e648761033', 'width': 1200}, 'variants': {}}]} |
Any love for Radeon Pro W6800 32GB ? What to expect ? | 1 | Hi,
I have an opportunity to get 2 or even 3 of those but can't find benchmarks of them running LLMs, with all the hype being focused around Nvidia.
What could I expect to run on these? The host would be a dual E5-2670 or an i9-10900K, both with at least 128GB.
Thanks ! | 2023-07-18T10:58:35 | https://www.reddit.com/r/LocalLLaMA/comments/152v8fl/any_love_for_radeon_pro_w6800_32gb_what_to_expect/ | chiwawa_42 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152v8fl | false | null | t3_152v8fl | /r/LocalLLaMA/comments/152v8fl/any_love_for_radeon_pro_w6800_32gb_what_to_expect/ | false | false | self | 1 | null |
In the end does the model actually matter? | 1 | It seems to me like the LLM community is putting a lot of effort into finding and building better models/datasets: we want models that can solve more abstract problems, store more knowledge, go further in a single step.
But we have also discovered that asking models to take smaller steps: 'explain your reasoning' or 'show your working' often improves output. This leads me to a hypothesis (disclaimer: I am not experienced in AI, tell me if I am being a fool):
A simple 'intelligence' can perform the same tasks as a more sophisticated 'intelligence', given sufficient access to working space, external knowledge sources, and other infrastructure.
​
Consider a CPU. It takes data in, processes it, and spits it back out. Modern CPUs are faster because of pipelining, higher clock speeds, and vector instructions. At some level it can be said that old CPUs can do the same fundamental operations as new ones; they may just take more steps.
A hypothetical future could involve a 'reasoning engine' that takes data in, processes it, and spits it out. Better reasoning engines can make bigger intuitive jumps, require less accurate prompts, and don't depend as heavily on external data stores, but any 'reasoning engine' can do the same tasks as any other 'reasoning engine'; it may just take more steps.
In this world, the models we have may be 'smart enough but a little bit slow' and instead we lack the infrastructure to make them perform. For this reason, projects like GPTEngineer and AutoGPT are more interesting to me than new improved models with new improved datasets: I suspect there is a lot more that can be done with existing models - even without finetuning - if we knew how.
​
So, do you think better models is the answer? Are there any big holes in my understanding? Is there some sort of 'fundamental reasoning machine' that mirrors a turing machine but more abstract? Have you tried fiddling with the infrastructure around models? What worked and what didn't? | 2023-07-18T11:13:30 | https://www.reddit.com/r/LocalLLaMA/comments/152vjk4/in_the_end_does_the_model_actually_matter/ | sdfgeoff | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152vjk4 | false | null | t3_152vjk4 | /r/LocalLLaMA/comments/152vjk4/in_the_end_does_the_model_actually_matter/ | false | false | self | 1 | null |
Deploy to hugging face with error Fine tuning "nRuntimeError: weight transformer.word_embeddings.weight does not exist" | 1 | Hi I fine tuned the model using this tutorial. It works great in notebook.
https://colab.research.google.com/drive/1FxlUb_H6Xirhkx4RszAgHeb2uDW7oKIH
After I deploy to inference endpoint, i get this error: "nRuntimeError: weight transformer.word_embeddings.weight does not exist"
Could someone please advise how to fix?
To replicate the issue, you could try deploying this model here:
https://huggingface.co/vrsen/falcon-7b-instruct-ft
You will see the same failure that I see. Could someone please help?
I am following these tutorials:
https://www.youtube.com/watch?v=AXG7TA7vIQ8&t=194s&ab_channel=VRSEN
https://www.youtube.com/watch?v=VdKdQYduGQc&ab_channel=VRSEN
more logs:
RuntimeError(f"weight {tensor_name} does not exist")\nRuntimeError: weight transformer.word_embeddings.weight does not exist\n"},"target":"text_generation_launcher","span":{"rank":0,"name":"shard-manager"},"spans":[{"rank":0,"name":"shard-manager"}]}
526bf 2023-07-16T20:37:19.492Z {"timestamp":"2023-07-16T20:37:19.491918Z","level":"INFO","fields":{"message":"Shutting down shards"},"target":"text_generation_launcher"}
526bf 2023-07-16T20:37:19.492Z {"timestamp":"2023-07-16T20:37:19.491898Z","level":"ERROR","fields":{"message":"You are using a model of type RefinedWebModel to instantiate a model of type . This is not supported for all configurations of models and can yield errors.\nTraceback (most recent call last):\n\n File
line 49, in get_filename\n raise RuntimeError(f"weight {tensor_name} does not exist")\n\nRuntimeError: weight transformer.word_embeddings.weight does not exist\n\n"},"target":"text_generation_launcher"}
526bf 2023-07-16T20:37:19.492Z Error: ShardCannotStart
526bf 2023-07-16T20:37:19.492Z {"timestamp":"2023-07-16T20:37:19.491861Z","level":"ERROR","fields":{"message":"Shard 0 failed to start"},"target":"text_generation_launcher"} | 2023-07-18T11:42:30 | https://www.reddit.com/r/LocalLLaMA/comments/152w4no/deploy_to_hugging_face_with_error_fine_tuning/ | InterestingBasil | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152w4no | false | null | t3_152w4no | /r/LocalLLaMA/comments/152w4no/deploy_to_hugging_face_with_error_fine_tuning/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&s=4b647239f77bf713f4a6209cfa4867351c055fd9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&s=7f4234ff3f4f4ebd7f77236dedb03a2faee3e04a', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&s=73eb91ea5a5347f216c0f0c4d6796396826aae49', 'width': 260}, 'variants': {}}]} |
Rachel the Assistant Editor | 1 | 2023-07-18T11:43:07 | AIsGonnaGetYa | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 152w527 | false | null | t3_152w527 | /r/LocalLLaMA/comments/152w527/rachel_the_assistant_editor/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'X7XJq-26GOL0Y9HNHznUjQOl_W1aFZPOYW98bpzjunA', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/rq4e5totnpcb1.png?width=108&crop=smart&auto=webp&s=173595b9b4997838c0fb9f7e78ccd46d4a00eb79', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/rq4e5totnpcb1.png?width=216&crop=smart&auto=webp&s=774f870c95196cb40f22f18cd81ebc6f5c411444', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/rq4e5totnpcb1.png?width=320&crop=smart&auto=webp&s=3cc6db732d972318bddd60bd206d76e250ce963c', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/rq4e5totnpcb1.png?width=640&crop=smart&auto=webp&s=24b770f664f3db13780a1ec7dbc48f29a8262da2', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/rq4e5totnpcb1.png?width=960&crop=smart&auto=webp&s=eb97bd60f1c69c92e8850f001f19bc42b218d1ee', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/rq4e5totnpcb1.png?auto=webp&s=b57d013e5e0be7ea73e79e81d20a56bc9ca63845', 'width': 1024}, 'variants': {}}]} |
|||
MUD + LLM for a stronger roleplaying experience? | 1 | If you're a gen-X or xennial, MUDs were probably the rage when you went to high school. I never really got into the MUD thing, but had one of my strongest RP experiences ever in a MUSH (incidentally the only time I played one). But MUDs were a bit too rigid, and I was never into the multiplayer aspect of them.
I've been playing around with roleplaying in KoboldCpp, testing out scenarios and models, but found them usually too wacky or dreamy to really appeal to me. I believe there needs to be structure, and LLMs suck at it.
But... MUDs (games) have structure. LLMs are great at storytelling, up until that critical context limit. What if you combine them both? Let the LLM handle the telling, but confine it within the framework of a game.
I decided to hack something together, choosing [Tale](https://pythonhosted.org/tale/) to try it out. Here are some results:
https://preview.redd.it/wmwbobj2npcb1.png?width=949&format=png&auto=webp&s=2b3cc8de048d6e0dea9d8cd261442fadc0c7a489
The text after 'evoke' is the text generated by the MUD, which becomes the prompt for the LLM.
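The glue between the MUD and the LLM is small: whatever text the game 'evokes' gets posted to KoboldCpp's local API and the completion is shown to the player. A rough sketch of that call, using the KoboldAI-style endpoint KoboldCpp exposes (field names are assumptions to check against your KoboldCpp version):

```python
import requests

def evoke(prompt: str, max_length: int = 120) -> str:
    # KoboldCpp serves a KoboldAI-compatible API on port 5001 by default.
    resp = requests.post(
        "http://localhost:5001/api/v1/generate",
        json={"prompt": prompt, "max_length": max_length, "temperature": 0.7},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["results"][0]["text"]

print(evoke("You enter the keep's kitchen. Garfield the cat yawns and stretches."))
```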
As you all know, LLMs can be temperamental and sometimes output almost nothing, and the next time a page of text. I've tried to allow it to be expansive for some things, while limited for others, like "The door is locked". You don't need a paragraph for that.
One neat thing with MUDs is that the world goes on without you, so for example Garfield yawns regularly, which triggers text generation. Characters entering and exiting would also do that.
I hope to create some stories to try this out further (this is only the demo that comes with the repo).
I would also like to try out that idea with small models for dialogue and see if they can be used in unison to lower inference times.
If you want to try this out you need KoboldCpp as backend (it's hard coded for the time being, as are all the settings). Just [sync the repo](https://github.com/neph1/Tale), and launch with "python -m tale.main --game tale/demo/" | 2023-07-18T11:45:48 | https://www.reddit.com/r/LocalLLaMA/comments/152w71n/mud_llm_for_a_stronger_roleplaying_experience/ | neph1010 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152w71n | false | null | t3_152w71n | /r/LocalLLaMA/comments/152w71n/mud_llm_for_a_stronger_roleplaying_experience/ | false | false | 1 | null |
|
gpt4all question | 1 | where is a good sub to get help with it? | 2023-07-18T12:30:47 | https://www.reddit.com/r/LocalLLaMA/comments/152x6q3/gpt4all_question/ | iseedeff | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152x6q3 | false | null | t3_152x6q3 | /r/LocalLLaMA/comments/152x6q3/gpt4all_question/ | false | false | self | 1 | null |
New Nvidia driver 536.67, a fix to memory issues(?) | 1 | Listed as an open issue in the driver notes because they've observed performance degradations to some applications:
>This driver implements a fix for creative application stability issues seen during heavy memory usage. We’ve observed some situations where this fix has resulted in performance degradation when running Stable Diffusion and DaVinci Resolve. This will be addressed in an upcoming driver release. \[4172676\]
FWIW: I don't know if ex-llama HF close to memory limit was a problem scenario for the previous "broken" drivers but 30B GPTQ ex-llama HF works as well as it did with 531.79 on a 4090.
Interested to hear your experiences.
[https://us.download.nvidia.com/Windows/536.67/536.67-desktop-win10-win11-64bit-international-dch-whql.exe](https://us.download.nvidia.com/Windows/536.67/536.67-desktop-win10-win11-64bit-international-dch-whql.exe) | 2023-07-18T13:17:00 | https://www.reddit.com/r/LocalLLaMA/comments/152y8t1/new_nvidia_driver_53667_a_fix_to_memory_issues/ | rerri | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152y8t1 | false | null | t3_152y8t1 | /r/LocalLLaMA/comments/152y8t1/new_nvidia_driver_53667_a_fix_to_memory_issues/ | false | false | self | 1 | null |
LLM less chatty after LoRA finetune | 1 | I trained LoRAs for a few of the popular 33B LLaMA models (Wizard, Airoboros, etc) and observed that the LLMs with LoRA applied appear less chatty by A LOT.
All LoRAs were fine-tuned for 2 epochs using the same Alpaca-like dataset containing 10K Q&A-style examples. The outputs in the training set are 68 tokens long on average.
Did the LoRA fine-tune make the model less chatty because of the short outputs in the dataset? If so, is there any way to make the model more chatty without having to recreate this dataset (because I don't know how to edit it to be longer)?
Thanks | 2023-07-18T13:25:40 | https://www.reddit.com/r/LocalLLaMA/comments/152yg19/llm_less_chatty_after_lora_finetune/ | gptzerozero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152yg19 | false | null | t3_152yg19 | /r/LocalLLaMA/comments/152yg19/llm_less_chatty_after_lora_finetune/ | false | false | self | 1 | null |
Generate both question and answer from the given context. | 1 | I want to generate multiple questions and answers from a given context, like ChatGPT does. How do I do it, and which model will perform best for this? I can do it using Conversational LangChain, but it is not quite good. | 2023-07-18T14:13:52 | https://www.reddit.com/r/LocalLLaMA/comments/152zmw2/generate_both_question_and_answer_from_the_given/ | mathageche | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152zmw2 | false | null | t3_152zmw2 | /r/LocalLLaMA/comments/152zmw2/generate_both_question_and_answer_from_the_given/ | false | false | self | 1 | null
LLaMA 2 is here | 1 | https://ai.meta.com/llama/ | 2023-07-18T15:50:06 | https://www.reddit.com/r/LocalLLaMA/comments/15324dp/llama_2_is_here/ | dreamingleo12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 1 | {'gid_2': 1} | 15324dp | false | null | t3_15324dp | /r/LocalLLaMA/comments/15324dp/llama_2_is_here/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ilC2qprzEOhvondbER2GPm9DXBMFQhdj6lShAI3fqUQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/-lx7IoVnPKtS1s2Rq8IcxH6q6WBMlXXfBQF43Q3okcU.jpg?width=108&crop=smart&auto=webp&s=b96f0fb64d0fd3022dd85d7522591d32ffa3e30e', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/-lx7IoVnPKtS1s2Rq8IcxH6q6WBMlXXfBQF43Q3okcU.jpg?width=216&crop=smart&auto=webp&s=9912a2752494571ed70d5a86ac12b82605c4f45c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/-lx7IoVnPKtS1s2Rq8IcxH6q6WBMlXXfBQF43Q3okcU.jpg?width=320&crop=smart&auto=webp&s=56ed0063c62caf22cd7da6c252e1217e3110c1b7', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/-lx7IoVnPKtS1s2Rq8IcxH6q6WBMlXXfBQF43Q3okcU.jpg?width=640&crop=smart&auto=webp&s=de6bc123c3d7a92ad1b5d7d6155a79bbbf60123f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/-lx7IoVnPKtS1s2Rq8IcxH6q6WBMlXXfBQF43Q3okcU.jpg?width=960&crop=smart&auto=webp&s=e0c2d0341b3c852b53903f8db3781047c285ed18', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/-lx7IoVnPKtS1s2Rq8IcxH6q6WBMlXXfBQF43Q3okcU.jpg?width=1080&crop=smart&auto=webp&s=7aa7b2985c05b52eff9a4cdcefefafca8c3ba9c7', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/-lx7IoVnPKtS1s2Rq8IcxH6q6WBMlXXfBQF43Q3okcU.jpg?auto=webp&s=188e3053d99818d509c6f9549c04cc4f13e6981a', 'width': 1920}, 'variants': {}}]} |
PC game Vaudeville dialog is AI generated | 1 | [https://store.steampowered.com/app/2240920/Vaudeville/](https://store.steampowered.com/app/2240920/Vaudeville/)
I have no affiliation with the game and simply thought it was a very interesting game that people in this community would also find interesting.
The convos with the AI are actually very good, maybe one day there will be games like this that interface with LLMs of our choosing.
Very cool game!! | 2023-07-18T15:51:41 | https://www.reddit.com/r/LocalLLaMA/comments/15325za/pc_game_vaudeville_dialog_is_ai_generated/ | Inevitable-Start-653 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15325za | false | null | t3_15325za | /r/LocalLLaMA/comments/15325za/pc_game_vaudeville_dialog_is_ai_generated/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'xBa29lVvA5Q7SoqhxS3ZKQZla1368tS34MBVodOovUA', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/pzIfcSv32Mg6majmqKkA9WWMXTt_eggByTpkTtjiT1Y.jpg?width=108&crop=smart&auto=webp&s=83e00a641ad42e59bb2ffe4682055477802b5337', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/pzIfcSv32Mg6majmqKkA9WWMXTt_eggByTpkTtjiT1Y.jpg?width=216&crop=smart&auto=webp&s=f815ee405b3ccf219c632dc8198c621e2baf78ff', 'width': 216}, {'height': 183, 'url': 'https://external-preview.redd.it/pzIfcSv32Mg6majmqKkA9WWMXTt_eggByTpkTtjiT1Y.jpg?width=320&crop=smart&auto=webp&s=13842b3f98c3cb04bf1821af4b1913ce4e58ceef', 'width': 320}], 'source': {'height': 353, 'url': 'https://external-preview.redd.it/pzIfcSv32Mg6majmqKkA9WWMXTt_eggByTpkTtjiT1Y.jpg?auto=webp&s=9e6005815387d1a31bc494ded3b7b47149324bd8', 'width': 616}, 'variants': {}}]} |
Proposal of LLM hosted in a co-funded host | 2 | Hello:
I had an idea about how to get a big LLM (30-44 GB) running fast on a cloud server.
What if this server were scalable in compute power, with the rental cost shared among a group of users?
Some sort of DAO to make it possible?
What do you think? | 2023-07-18T16:10:15 | https://www.reddit.com/r/LocalLLaMA/comments/1532njt/proposal_of_llm_hosted_in_a_cofunded_host/ | SnooWoofers780 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1532njt | false | null | t3_1532njt | /r/LocalLLaMA/comments/1532njt/proposal_of_llm_hosted_in_a_cofunded_host/ | false | false | self | 2 | null |
BlinkDL/rwkv-4-music: New 120M and 560M MIDI models based on RWKV | 1 | 2023-07-18T16:17:08 | https://huggingface.co/BlinkDL/rwkv-4-music | Balance- | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1532u7f | false | null | t3_1532u7f | /r/LocalLLaMA/comments/1532u7f/blinkdlrwkv4music_new_120m_and_560m_midi_models/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'bWyTgxVzvLPddswLBnDx7CqEQIJaUGNfssNmo3WdoDQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9nZFog7YKCtoIJZ4EVTFwYzqitdjAp9cKo_fCYR7db8.jpg?width=108&crop=smart&auto=webp&s=1671624de12fa839075c548ea4dbaa17cc2b05cd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9nZFog7YKCtoIJZ4EVTFwYzqitdjAp9cKo_fCYR7db8.jpg?width=216&crop=smart&auto=webp&s=587739209d97ba66aa4123d4cd3001b761ec3372', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9nZFog7YKCtoIJZ4EVTFwYzqitdjAp9cKo_fCYR7db8.jpg?width=320&crop=smart&auto=webp&s=7ea41991849f456b278772fb408c354871242b51', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9nZFog7YKCtoIJZ4EVTFwYzqitdjAp9cKo_fCYR7db8.jpg?width=640&crop=smart&auto=webp&s=b952136bd2e2de0b2ef231e079c36ae1e58c3030', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9nZFog7YKCtoIJZ4EVTFwYzqitdjAp9cKo_fCYR7db8.jpg?width=960&crop=smart&auto=webp&s=676c179306037ea10f709e55ee0dc38743c0de63', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9nZFog7YKCtoIJZ4EVTFwYzqitdjAp9cKo_fCYR7db8.jpg?width=1080&crop=smart&auto=webp&s=55f7a97c7810ca1e2f120fdda494cec25212195e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9nZFog7YKCtoIJZ4EVTFwYzqitdjAp9cKo_fCYR7db8.jpg?auto=webp&s=162a751eb7a9d8bf944c3f6634a9a0ed8aec2ff7', 'width': 1200}, 'variants': {}}]} |
||
Anyone had any luck with 65b models and llama.cpp using the newly implemented rope scaling parameters to get contexts larger than 2048? | 2 | I've tried a few different values but so far it just generates really funny pidgin-sounding english, like "den the man went to to da shop store and dun some good things for shopping" or similar such nonsense.
I've had great luck with 33b models and up to 16k contexts so far, even with the SuperHOT 8k context models.
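For reference, this is the sort of invocation I have been trying (the flag names are from a recent llama.cpp build and may differ in yours; the model filename is just a placeholder):
    ./main -m llama-65b.ggmlv3.q4_K_M.bin -c 4096 --rope-freq-base 10000 --rope-freq-scale 0.5 -p "Once upon a time"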
​ | 2023-07-18T16:21:38 | https://www.reddit.com/r/LocalLLaMA/comments/1532yf0/anyone_had_any_luck_with_65b_models_and_llamacpp/ | spanielrassler | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1532yf0 | false | null | t3_1532yf0 | /r/LocalLLaMA/comments/1532yf0/anyone_had_any_luck_with_65b_models_and_llamacpp/ | false | false | self | 2 | null |
We made Llama13b-v2-chat immediately available as an endpoint for developers | 2 | Hey LocalLLaMA, we've released tools that make it easy to test LLaMa 2 and add it to your own app!
Model playground here: [https://llama2.ai](https://llama2.ai/)
Hosted chat API here: [https://replicate.com/a16z-infra/llama13b-v2-chat](https://replicate.com/a16z-infra/llama13b-v2-chat)
If you want to just play with the model, llama2.ai is a very easy way to do it. So far, we’ve found the performance is similar to GPT-3.5 with far fewer parameters, especially for creative tasks and interactions.
Developers can:
* clone the chatbot app as a starting point ([https://github.com/a16z-infra/llama2-chatbot](https://github.com/a16z-infra/llama2-chatbot))
* use the Replicate endpoint directly ([https://replicate.com/a16z-infra/llama13b-v2-chat](https://replicate.com/a16z-infra/llama13b-v2-chat)) (example call below)
* or even deploy your own LLaMA v2 fine-tune with Cog ([https://github.com/a16z-infra/cog-llama-template](https://github.com/a16z-infra/cog-llama-template))
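To make the second option concrete, here is a minimal example with the Replicate Python client (the version hash is a placeholder -- copy the current one from the model page, which also lists the other input parameters):
    import replicate  # requires REPLICATE_API_TOKEN to be set in your environment
    output = replicate.run(
        "a16z-infra/llama13b-v2-chat:<version-hash>",  # placeholder version hash
        input={"prompt": "User: Explain what Llama 2 is in two sentences.\nAssistant:"},
    )
    # For streaming language models the client returns an iterator of text chunks.
    print("".join(output))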
Please let us know what you use this for or if you have feedback! And thanks to all contributors to this model, Meta, Replicate, the Open Source community! | 2023-07-18T17:23:07 | https://www.reddit.com/r/LocalLLaMA/comments/1534kfe/we_made_llama13bv2chat_immediately_available_as/ | Prestigious-Elk7124 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1534kfe | false | null | t3_1534kfe | /r/LocalLLaMA/comments/1534kfe/we_made_llama13bv2chat_immediately_available_as/ | false | false | self | 2 | null |
Bloke the goat | 1 | Llama 2 online guys!!!! | 2023-07-18T17:56:10 | Sensitive-Analyst288 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1535eq0 | false | null | t3_1535eq0 | /r/LocalLLaMA/comments/1535eq0/bloke_the_goat/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'Q0jbox-9mxIte8unWo25U0fhWBrcYNR1M7htukrI9i0', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/drlola6gircb1.jpg?width=108&crop=smart&auto=webp&s=af11e234dabf5dfaa821fe149fc2c2eb22da6271', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/drlola6gircb1.jpg?width=216&crop=smart&auto=webp&s=9739ae51cd076e912aae8d0eeeba79e01eebbcd7', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/drlola6gircb1.jpg?width=320&crop=smart&auto=webp&s=e2cc81110650582a90638575bda6d7e05e882885', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/drlola6gircb1.jpg?width=640&crop=smart&auto=webp&s=e554c1170d3c51b89d7611986651c9c4f19c318d', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/drlola6gircb1.jpg?width=960&crop=smart&auto=webp&s=afc56780bc0b98abfa97c8fd9978db2071d4dcb1', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/drlola6gircb1.jpg?width=1080&crop=smart&auto=webp&s=a281e1fed2e0ade01f7cc860d0ea5f18d2c95c0c', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/drlola6gircb1.jpg?auto=webp&s=da259b475a5e76a83fb3b99eb46dacf7e6dc560e', 'width': 1080}, 'variants': {}}]} |
||
Meta releases Llama 2 | 1 | 2023-07-18T17:57:29 | https://3s3.co/2023/07/18/meta-ai-releases-llama-2/ | vnpttl | 3s3.co | 1970-01-01T00:00:00 | 0 | {} | 1535fxu | false | null | t3_1535fxu | /r/LocalLLaMA/comments/1535fxu/meta_releases_llama_2/ | false | false | default | 1 | null |
|
LLaMA-2 GGML & GPTQ already available thanks to TheBloke | 1 | [removed] | 2023-07-18T18:09:55 | https://www.reddit.com/r/LocalLLaMA/comments/1535rl1/llama2_ggml_gptq_already_available_thanks_to/ | aminedjeghri | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1535rl1 | false | null | t3_1535rl1 | /r/LocalLLaMA/comments/1535rl1/llama2_ggml_gptq_already_available_thanks_to/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'scnkn175QpfbMTdtoYxWh-I3soYhx9pJiHmW5tQQPMY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MIHs_hUb2M9gxc4QGwbCk4JA2r5erDBnTB8Tbg735Rs.jpg?width=108&crop=smart&auto=webp&s=306a702906190a7340924eac46d7feb1f3eec45e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/MIHs_hUb2M9gxc4QGwbCk4JA2r5erDBnTB8Tbg735Rs.jpg?width=216&crop=smart&auto=webp&s=287c3b0788fac7766494f5d0e851ac89ff9d8ac9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/MIHs_hUb2M9gxc4QGwbCk4JA2r5erDBnTB8Tbg735Rs.jpg?width=320&crop=smart&auto=webp&s=625a75ac60be2245e5230d54df27456c7107ec4b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/MIHs_hUb2M9gxc4QGwbCk4JA2r5erDBnTB8Tbg735Rs.jpg?width=640&crop=smart&auto=webp&s=83a9e51bb02d1141095b45fc8a465ecd586bdf97', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/MIHs_hUb2M9gxc4QGwbCk4JA2r5erDBnTB8Tbg735Rs.jpg?width=960&crop=smart&auto=webp&s=3dd7e9d25663fc2ba02b34fdd652a431bbc9b036', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/MIHs_hUb2M9gxc4QGwbCk4JA2r5erDBnTB8Tbg735Rs.jpg?width=1080&crop=smart&auto=webp&s=8b4b881bb48884dc6af6951b1ce4443f4bdcf11d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/MIHs_hUb2M9gxc4QGwbCk4JA2r5erDBnTB8Tbg735Rs.jpg?auto=webp&s=dbb9e744ef3627d636c68c84c199aa1f0dafb845', 'width': 1200}, 'variants': {}}]} |
airoboros (tool) overhaul | 28 | Hello,
Just wanted to drop a note that I overhauled the [airoboros](https://github.com/jondurbin/airoboros) tool (*not the models*) to include most of the prompts I've been using to build the datasets, plus a couple of extras.
Available via pip also:
pip install --upgrade airoboros
The configuration is now a YAML file, e.g.: https://github.com/jondurbin/airoboros/blob/main/example-config.yaml
Copy that file and customize it to your liking.
Each of the "instructors" is heavily configurable, and you can override the prompt path if you want to use your own alternative variant.
Main updates:
- Selection of "instructors" (task-specific training data generators):
- agent/router, so we can hopefully train open source LLMs to be better agents
- coding, with configurable set of coding/script languages, and optional related list of software to reference
- contextual question answering, using the same format I've used previously in airoboros models that seems to help reduce hallucinations
- counterfactual contextual question answering, i.e. put fake facts in the input block to make sure the model responds with the fake data instead of "hallucinating" the truth, to enforce obedience to context
- chain-of-thought style prompts, with multiple possible solutions, ranking, and optimal selection
- experiences, e.g. guided meditations or exploring a museum, etc.
- general, i.e. random prompts without any seed values, wholly generated by the LLM
- orca, i.e. math or reasoning question with an added "ELI5"
- riddles
- trivia
- wordgames, e.g. generating a list of words containing a string, starting with a string, etc.
- Experimental support for non-english instruction/response generation.
- Can be configured at the top-level with the "language" key, or within one of the instructor configurations by override.
- Customizable prompts per instructor
- Just change the "prompt_path" field in the instructor config section to point to a text file with your own variant.
- Switched from threads to asyncio, to be less DoS'y of your hardware.
- Probably other stuff I'm forgetting as well.
Let me know if there are other instructors you'd like to see, along with example prompts, and if you find any bugs (I haven't tested this super extensively yet).
I'll probably build a new dataset with only the newer version of gpt-4 to see how it compares to the March version. | 2023-07-18T18:26:56 | https://www.reddit.com/r/LocalLLaMA/comments/15367sf/airoboros_tool_overhaul/ | JonDurbin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15367sf | false | null | t3_15367sf | /r/LocalLLaMA/comments/15367sf/airoboros_tool_overhaul/ | false | false | self | 28 | {'enabled': False, 'images': [{'id': '4CODUZbJ6dyCVnStILqL0vJnMsJMmVnR1jVNmAlDU1k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xOOcp6Ds3mmKFLf9o-hBKm2cLUSp4bwI7kG2AsDKnbQ.jpg?width=108&crop=smart&auto=webp&s=39ed49eea184ea42eb9f8c038810264f16bc4f9b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xOOcp6Ds3mmKFLf9o-hBKm2cLUSp4bwI7kG2AsDKnbQ.jpg?width=216&crop=smart&auto=webp&s=ecf5471d3062483c58275e1f4ac2946c5f53783f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xOOcp6Ds3mmKFLf9o-hBKm2cLUSp4bwI7kG2AsDKnbQ.jpg?width=320&crop=smart&auto=webp&s=84520e030f4b59f51824183274bfa560e7255ad6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xOOcp6Ds3mmKFLf9o-hBKm2cLUSp4bwI7kG2AsDKnbQ.jpg?width=640&crop=smart&auto=webp&s=a048fea3e147e37fca2e2fc69d6c9ad5c28d15a5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xOOcp6Ds3mmKFLf9o-hBKm2cLUSp4bwI7kG2AsDKnbQ.jpg?width=960&crop=smart&auto=webp&s=7510292393675b6f56f6f14a382759403951fd37', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xOOcp6Ds3mmKFLf9o-hBKm2cLUSp4bwI7kG2AsDKnbQ.jpg?width=1080&crop=smart&auto=webp&s=f309ed4a2db039c1dd95aa244c7e8176169bddea', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xOOcp6Ds3mmKFLf9o-hBKm2cLUSp4bwI7kG2AsDKnbQ.jpg?auto=webp&s=e95819589a080ae5c4598f5b76834831dc650f8d', 'width': 1200}, 'variants': {}}]} |
TheBloke pulling through with all the Llama 2 models just hours after release | 1 | 2023-07-18T18:47:09 | https://huggingface.co/TheBloke | jd_3d | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1536qre | false | null | t3_1536qre | /r/LocalLLaMA/comments/1536qre/thebloke_pulling_through_with_all_the_llama_2/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ijgSlZO3K44WshhENFl9jhybG8Na3DBCsOXCuyZgycw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=108&crop=smart&auto=webp&s=3e5fdcc67bd2b0779a9f019942e0727ffb86630b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=216&crop=smart&auto=webp&s=b390a77acee51d46b2ca5992c38755e0ea4269e1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=320&crop=smart&auto=webp&s=23586102b6805c7f96721c02b9cad47b5dbfef49', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=640&crop=smart&auto=webp&s=205e31dad1af816278184e44d5aa56e886ad9b4d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=960&crop=smart&auto=webp&s=a2a9e82e506b94bd26ef0019ae18a7b946ccdc74', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=1080&crop=smart&auto=webp&s=928a52a138d0687290827ee2224923bb8f03e39e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?auto=webp&s=addebda9b8be1b664eaee5ea404f4c7df3d5eef2', 'width': 1200}, 'variants': {}}]} |
||
Any LLM Models To Generate A Grammatically Correct Statement Into An Incorrect Statement | 1 | Sort of a weird request, but does anyone know of a model that will turn a correct sentence into an incorrect sentence with grammar mistakes?
Thanks! | 2023-07-18T18:57:47 | https://www.reddit.com/r/LocalLLaMA/comments/15370l9/any_llm_models_to_generate_a_grammatically/ | laneciar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15370l9 | false | null | t3_15370l9 | /r/LocalLLaMA/comments/15370l9/any_llm_models_to_generate_a_grammatically/ | false | false | self | 1 | null |
Llama-2-chat shocked me!! | 1 | I just tried Llama 2 chat 7b on my Android phone.
I was shocked by the answer. Initially I thought it was garbage, but after a moment I realized it was an answer on a different level. I was expecting a more polished answer, given that this model is tuned for commercial use.
Has anyone noticed similar responses?
PS: I used the GGML Model from https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/tree/main
Thanks /u/[TheBloke]. You are great!! | 2023-07-18T19:20:17 | AstrionX | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1537m6w | false | null | t3_1537m6w | /r/LocalLLaMA/comments/1537m6w/llama2chat_shocked_me/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'aLh9F3w2OJH3FaTEGnRCTPeyEBhhrtccgXgmo4PLdYo', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/hvfc1zjgxrcb1.jpg?width=108&crop=smart&auto=webp&s=dae859df4546f2235b1485e82f86e3b441598a43', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/hvfc1zjgxrcb1.jpg?width=216&crop=smart&auto=webp&s=00efc82c7e8bf7cfc8bf320513e1be0df762a301', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/hvfc1zjgxrcb1.jpg?width=320&crop=smart&auto=webp&s=430b642f33f9fcdaab9137b03ceb02cefd4b4df0', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/hvfc1zjgxrcb1.jpg?width=640&crop=smart&auto=webp&s=65a655583f3306e909848c1ac61fd9041c073fb1', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/hvfc1zjgxrcb1.jpg?width=960&crop=smart&auto=webp&s=ca2bbbdb4b3c5e3179f0916363207482d40b2fb3', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/hvfc1zjgxrcb1.jpg?width=1080&crop=smart&auto=webp&s=d963df325084f12e8e8ec2dcd1536d3ed609e228', 'width': 1080}], 'source': {'height': 3216, 'url': 'https://preview.redd.it/hvfc1zjgxrcb1.jpg?auto=webp&s=f30f7d845583463556160ebf702cd35d284408d3', 'width': 1440}, 'variants': {}}]} |
||
Llama 2 download links: GPTQ and ggml | 1 | **GPTQ** and **ggml**
Llama 2 download links, along with the Llama 2 Chat prompt template, have been added to the wiki: [https://www.reddit.com/r/LocalLLaMA/wiki/models/#wiki\_llama\_2\_models](https://www.reddit.com/r/LocalLLaMA/wiki/models/#wiki_llama_2_models)
If you're new to the sub and Llama, please see the stickied post below for information on getting started. | 2023-07-18T19:23:24 | https://www.reddit.com/r/LocalLLaMA/comments/1537p8h/llama_2_download_links_gptq_and_ggml/ | AutoModerator | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1537p8h | true | null | t3_1537p8h | /r/LocalLLaMA/comments/1537p8h/llama_2_download_links_gptq_and_ggml/ | false | true | self | 1 | null |
Llama 2 download links: GPTQ and ggml | 1 | **GPTQ** and **ggml**
Llama 2 download links have been added to the wiki: [https://www.reddit.com/r/LocalLLaMA/wiki/models/#wiki\_llama\_2\_models](https://www.reddit.com/r/LocalLLaMA/wiki/models/#wiki_llama_2_models)
If you're new to the sub and Llama, please see the stickied post below for information on getting started. | 2023-07-18T19:25:48 | https://www.reddit.com/r/LocalLLaMA/comments/1537ri3/llama_2_download_links_gptq_and_ggml/ | AutoModerator | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1537ri3 | true | null | t3_1537ri3 | /r/LocalLLaMA/comments/1537ri3/llama_2_download_links_gptq_and_ggml/ | false | true | self | 1 | null |
Airoboros 33b, 16k context, GGML, now available. | 1 | Ycros has uploaded a GGML edition of Airoboros 33b with 16k context. Here is the link.
https://huggingface.co/ycros/airoboros-33b-gpt4-1.4.1-lxctx-PI-16384-GGML/tree/main
Thank you, Ycros. :) | 2023-07-18T19:40:26 | https://www.reddit.com/r/LocalLLaMA/comments/15385ih/airoboros_33b_16k_context_ggml_now_available/ | Sabin_Stargem | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15385ih | false | null | t3_15385ih | /r/LocalLLaMA/comments/15385ih/airoboros_33b_16k_context_ggml_now_available/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Lq14yBiK46oBlibmggvxzAV1L_cb8FDs-3ee7_YaTFs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/XVW73v1uE1ZTMFOgCH4t939HzWib90WX2uONna5RlLQ.jpg?width=108&crop=smart&auto=webp&s=e51d5bb1578a67e29fa50f2d9b38dbb513ce437f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/XVW73v1uE1ZTMFOgCH4t939HzWib90WX2uONna5RlLQ.jpg?width=216&crop=smart&auto=webp&s=3a24da59bb3da8569711acebdf0f243912ec4065', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/XVW73v1uE1ZTMFOgCH4t939HzWib90WX2uONna5RlLQ.jpg?width=320&crop=smart&auto=webp&s=a8a1b584ff62f083f454ebe909e292e8795eadb9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/XVW73v1uE1ZTMFOgCH4t939HzWib90WX2uONna5RlLQ.jpg?width=640&crop=smart&auto=webp&s=8dcfe7f264992533605ef7a3bcf5c41dfb7812a4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/XVW73v1uE1ZTMFOgCH4t939HzWib90WX2uONna5RlLQ.jpg?width=960&crop=smart&auto=webp&s=6447a4336aacce0e45d93b93630c51c86b1b6af1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/XVW73v1uE1ZTMFOgCH4t939HzWib90WX2uONna5RlLQ.jpg?width=1080&crop=smart&auto=webp&s=10402070e3fdd22bdf71c1b6040820538d9583f9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/XVW73v1uE1ZTMFOgCH4t939HzWib90WX2uONna5RlLQ.jpg?auto=webp&s=f49de51e6128c81737a0602dfe63a2c0973ee0dc', 'width': 1200}, 'variants': {}}]} |
Llama 2: Pffft, boundaries? Ethics? Don't be silly! | 1 | 2023-07-18T19:43:24 | WolframRavenwolf | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15388d6 | false | null | t3_15388d6 | /r/LocalLLaMA/comments/15388d6/llama_2_pffft_boundaries_ethics_dont_be_silly/ | false | false | 1 | {'enabled': True, 'images': [{'id': '8fm2DFen_NkgpI9NnuxR4Ho_2UnttXx-I85beb2ATYw', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/0gjpjqul0scb1.png?width=108&crop=smart&auto=webp&s=944c268a2410012bddc7966c988866bb3cbd6da2', 'width': 108}, {'height': 169, 'url': 'https://preview.redd.it/0gjpjqul0scb1.png?width=216&crop=smart&auto=webp&s=895e1bbfee150b6239577bff6af55020fdc3103e', 'width': 216}, {'height': 250, 'url': 'https://preview.redd.it/0gjpjqul0scb1.png?width=320&crop=smart&auto=webp&s=063a89465d66f93d3672da5cc272e04509c28c7f', 'width': 320}, {'height': 501, 'url': 'https://preview.redd.it/0gjpjqul0scb1.png?width=640&crop=smart&auto=webp&s=81435816f0af9869e10038d2586b5335948a3ff5', 'width': 640}, {'height': 752, 'url': 'https://preview.redd.it/0gjpjqul0scb1.png?width=960&crop=smart&auto=webp&s=16abc972ef18fb12a79e5bb9f5ef28c9b77584e6', 'width': 960}], 'source': {'height': 780, 'url': 'https://preview.redd.it/0gjpjqul0scb1.png?auto=webp&s=aba95dde82462c43bad65b7cd2d7b3f9de757cde', 'width': 995}, 'variants': {}}]} |
|||
Llama 2 has roles support | 1 | They mention it in the README.
The implementation is in generation.py in the llama repository.
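To make the format concrete: each turn is wrapped in [INST] ... [/INST], and an optional <<SYS>> block is folded into the first user turn. Here is a rough sketch of building a single-turn prompt by hand (the tag strings match the constants in generation.py; the exact whitespace is my reading of the code, so verify against it):
    B_INST, E_INST = "[INST]", "[/INST]"
    B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
    def build_prompt(system: str, user: str) -> str:
        # The system message is folded into the first user message,
        # mirroring what chat_completion() does with the dialog list.
        return f"{B_INST} {B_SYS}{system}{E_SYS}{user} {E_INST}"
    print(build_prompt("You are a helpful assistant.", "What is the capital of France?"))
The relevant snippet: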
    if dialog[0]["role"] != "system":
        dialog = [
            {
                "role": "system",
                "content": DEFAULT_SYSTEM_PROMPT,
            }
        ] + dialog
    dialog = [
        {
            "role": dialog[1]["role"],
            "content": B_SYS
            + dialog[0]["content"]
            + E_SYS
            + dialog[1]["content"],
        }
    ] + dialog[2:]
    assert all([msg["role"] == "user" for msg in dialog[::2]]) and all(
        [msg["role"] == "assistant" for msg in dialog[1::2]]
    ), (
        "model only supports 'system', 'user' and 'assistant' roles, "
        "starting with 'system', then 'user' and alternating (u/a/u/a/u...)"
) | 2023-07-18T20:06:26 | https://www.reddit.com/r/LocalLLaMA/comments/1538ufu/llama_2_has_roles_support/ | nikitastaf1996 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1538ufu | false | null | t3_1538ufu | /r/LocalLLaMA/comments/1538ufu/llama_2_has_roles_support/ | false | false | self | 1 | null |
Noob Question | 1 | [removed] | 2023-07-18T20:22:33 | https://www.reddit.com/r/LocalLLaMA/comments/15399r3/noob_question/ | morecontextplz1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15399r3 | false | null | t3_15399r3 | /r/LocalLLaMA/comments/15399r3/noob_question/ | false | false | self | 1 | null |
GPT4ALL python code not using full RAM or CPU. | 1 | Hi all,
Just like the title says, I am running gpt4all in a PyCharm project, and when I run the code and watch my PC's performance in Task Manager, I see that it is only using a fraction of my RAM and CPU power. How can I adjust the model/program so that it utilizes all my RAM/CPU?
I have 55GB of RAM available, but it only uses about 14GB.
**PC specs:**
Device name DESKTOP-21BIJPQ
Processor AMD Ryzen 7 5800X 8-Core Processor 3.80 GHz
Installed RAM 64.0 GB (63.9 GB usable) (Running at 3600mhz)
Device ID AE7E7741-55C8-4BAE-9D3E-3BFA5F607E5B
Product ID 00326-10103-58941-AA227
System type 64-bit operating system, x64-based processor
Pen and touch No pen or touch input is available for this display
**Code:**
import pandas as pd
import gpt4all
# Set up the model (WizardLM 13B Uncensored in this case)
gptj = gpt4all.GPT4All(r"C:\Users\Me\AppData\Local\nomic.ai\GPT4All\wizardLM-13B-Uncensored.ggmlv3.q4_0.bin")
# Read the dataset into a pandas DataFrame
file_path = r'C:\Users\Me\Documents\School\Anonymizer stuff\response.xlsx'
print(file_path)
data = pd.read_excel(file_path)
# Specify the column to loop through
column_name = 'Reply_1st'  # Replace with the actual column name in your dataset
# Iterate over each row in the specified column
for index, row in data.iterrows():
    # Get the cell value based on the specified column
    cell_value = row[column_name]
    print(cell_value)
    if cell_value == 0:
        continue  # Move on to the next row if cell_value is 0
    # Construct the prompt with the row text to anonymize inserted between the braces
    prompt_template = """
Anonymize the following paragraph by redacting any and all personal identifying information (PII), such as
1. emails,
2. phone numbers (area codes and phone numbers),
3. pronouns (names and company names),
4. sign offs/signatures (names at the end of the email),
5. places of work or residency (including mentions of cities),
6. and any other sensitive details.
Replace the PII with the placeholder 'XXX'.
Do not respond to the paragraph below, anonymize it. If there are multiple PII, redact them all, and do not alter any text besides redacting.
Also include all original text besides what needs to be redacted{
""" + str(cell_value) + """
}
"""
    # Submit the prompt to the model
    messages = gptj.generate(prompt=prompt_template, max_tokens=1000, repeat_penalty=1)
    # Print the output so we can see it
    print(messages)
    # Update the 'anonymized_oe_1' column with the anonymized output
    data.at[index, 'anonymized_oe_1'] = messages
# Save the modified DataFrame with the anonymized outputs
data.to_excel(r'C:\Users\Me\Documents\School\Anonymizer stuff\Attempt 1.xlsx', sheet_name='Sheet1')
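Note on scale: a 13B q4_0 GGML model only occupies roughly 8-10 GB of RAM, so ~14 GB of total usage may already be expected rather than a problem; CPU utilization mostly follows the thread count. If the installed gpt4all bindings expose a thread-count argument (an assumption -- it varies by version), a line like this would be where to set it:
    # n_threads is an assumption -- not every gpt4all version exposes it
    gptj = gpt4all.GPT4All(
        r"C:\Users\Me\AppData\Local\nomic.ai\GPT4All\wizardLM-13B-Uncensored.ggmlv3.q4_0.bin",
        n_threads=8,  # physical core count of the Ryzen 7 5800X
    )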
​ | 2023-07-18T21:20:42 | https://www.reddit.com/r/LocalLLaMA/comments/153as3c/gpt4all_python_code_not_using_full_ram_or_cpu/ | thedenfather | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153as3c | false | null | t3_153as3c | /r/LocalLLaMA/comments/153as3c/gpt4all_python_code_not_using_full_ram_or_cpu/ | false | false | self | 1 | null |
Best ways to add guardrails to uncensored open source LLMs? | 1 | I’m trying to act a few guardrails so that the LLMs stay in character.
Sometimes the LLMs say something irrelevant completely different from what was asked like asking ‘how your day going?’ and it starts replying about some mathematical equation.
Other times it acts like a knowledge bank instead of staying in the character. For ex: let’s say the bot is asked to pretend to be someone not too knowledgeable about world facts but it still replies about what happened on xx date in 1939 or so.
Are there some established methodologies or tools to create guardrails? | 2023-07-18T21:26:14 | https://www.reddit.com/r/LocalLLaMA/comments/153ax67/best_ways_to_add_guardrails_to_uncensored_open/ | RepresentativeOdd276 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153ax67 | false | null | t3_153ax67 | /r/LocalLLaMA/comments/153ax67/best_ways_to_add_guardrails_to_uncensored_open/ | false | false | self | 1 | null |
Anybody know of other good LLM subs? | 1 | Hey there everybody. I’m an avid AI enthusiast and love anything to do with language model or image generation models.
However, I have found that this sub and the official server are too centralized in regards to moderation and the ability to influence the community for my taste.
I love this community, however I would like to see something with more moderators and more community involvement.
All the best to the mods, I just want to expand the number of subs I visit and get to know more of the AI community.
So, in the spirit of furthering passion and discussion about AI and meeting all of you wonderful people, any suggestions? | 2023-07-18T21:37:31 | https://www.reddit.com/r/LocalLLaMA/comments/153b7ih/anybody_know_of_other_good_llm_subs/ | OfficialHaethus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153b7ih | false | null | t3_153b7ih | /r/LocalLLaMA/comments/153b7ih/anybody_know_of_other_good_llm_subs/ | false | false | self | 1 | null |
What does Typical P actually do? | 1 | Hi, all,
I've been experimenting with SillyTavern, using both llama.cpp and Ooba, and I've noticed an odd trend across a lot of models. I've found a ton of references online to set Typical P / Typical Sampling to 0.2, but doing so seems to lobotomize any model I throw at it. This happens on both instruct and non-instruct models, on both 13b and 30b / 33b, and with any context size from 2048 to 8192. For a while, I thought llama.cpp just sucked somehow until I tried changing Typical P to 1, and everything just started to work. I'm happy that it works now, but I'm interested to know why.
Apologies if this is well known on the sub. I did search around reddit and Google for a while, and couldn't find any decent explanation of the various samplers. | 2023-07-18T21:54:52 | https://www.reddit.com/r/LocalLLaMA/comments/153bnly/what_does_typical_p_actually_do/ | smile_e_face | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153bnly | false | null | t3_153bnly | /r/LocalLLaMA/comments/153bnly/what_does_typical_p_actually_do/ | false | false | self | 1 | null |
Bing Chat Enterprise | 1 | [removed] | 2023-07-18T22:07:07 | https://www.reddit.com/r/LocalLLaMA/comments/153bza4/bing_chat_enterprise/ | gptzerozero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153bza4 | false | null | t3_153bza4 | /r/LocalLLaMA/comments/153bza4/bing_chat_enterprise/ | false | false | self | 1 | null |
Why we don't need a label data for fine-tuning model with qlora ? | 1 | From this notebook, it seems like the author didn't use any labels for the dataset
[https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing](https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing)
​
it seems like it can be explained here: [https://huggingface.co/docs/transformers/tasks/language\_modeling](https://huggingface.co/docs/transformers/tasks/language_modeling)
https://preview.redd.it/dz7i7vahrscb1.png?width=2172&format=png&auto=webp&s=c7d9281df823423a65869f812b63b32612c5c47a
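Here is a minimal sketch of that idea with the usual Hugging Face Transformers causal-LM setup (gpt2 is just a stand-in tokenizer for illustration): the collator copies input_ids into labels, and the model shifts everything internally so each position is trained to predict the token that follows it.
    from transformers import AutoTokenizer, DataCollatorForLanguageModeling
    tokenizer = AutoTokenizer.from_pretrained("gpt2")   # stand-in model for illustration
    tokenizer.pad_token = tokenizer.eos_token           # gpt2 has no pad token by default
    collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
    batch = collator([tokenizer("hello world, this is a test")])
    # labels are a copy of input_ids (padded positions become -100 and are ignored);
    # inside the model, logits and labels are shifted by one position, so token i
    # is trained to predict token i+1 given tokens 0..i
    print(batch["labels"])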
But if the next word is the label, then what are the previous words and how are they determined? e.g.: word1 word2 word3 word4... <next word>
I guess my question is how does it know which previous words to look at, is that process done iteratively (word1 <next word>, word1 word2 <next word>) ? | 2023-07-18T22:10:45 | https://www.reddit.com/r/LocalLLaMA/comments/153c2pd/why_we_dont_need_a_label_data_for_finetuning/ | Cheap-Routine4736 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153c2pd | false | null | t3_153c2pd | /r/LocalLLaMA/comments/153c2pd/why_we_dont_need_a_label_data_for_finetuning/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&s=4b647239f77bf713f4a6209cfa4867351c055fd9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&s=7f4234ff3f4f4ebd7f77236dedb03a2faee3e04a', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&s=73eb91ea5a5347f216c0f0c4d6796396826aae49', 'width': 260}, 'variants': {}}]} |
|
LLaMA-2-70B-GPTQ-transformers4.32.0.dev0, 4bit quantization working with GPTQ for LLaMA! | 1 | 2023-07-19T00:34:58 | https://huggingface.co/Panchovix/LLaMA-2-70B-GPTQ-transformers4.32.0.dev0 | panchovix | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 153flql | false | null | t3_153flql | /r/LocalLLaMA/comments/153flql/llama270bgptqtransformers4320dev0_4bit/ | false | false | 1 | {'enabled': False, 'images': [{'id': '9SChMy77DY8uX1j6uPNCVG94VDqvd5nSNweBCfsX0Ow', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jaZFnbNRzqcWGq_z3P5wTpznBrUnwJzraM_WDsGQx1o.jpg?width=108&crop=smart&auto=webp&s=382be2aac44ea31e81dd4929b1bb417651250989', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/jaZFnbNRzqcWGq_z3P5wTpznBrUnwJzraM_WDsGQx1o.jpg?width=216&crop=smart&auto=webp&s=5686bdc316c728b06acc8daf2907f58ed7300a14', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/jaZFnbNRzqcWGq_z3P5wTpznBrUnwJzraM_WDsGQx1o.jpg?width=320&crop=smart&auto=webp&s=4dacb3c03b650948814a17ffd4c6f8ffb8c744f1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/jaZFnbNRzqcWGq_z3P5wTpznBrUnwJzraM_WDsGQx1o.jpg?width=640&crop=smart&auto=webp&s=dd9166c0540dbf9915c4f0ed375ffbc1fb6cbdd7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/jaZFnbNRzqcWGq_z3P5wTpznBrUnwJzraM_WDsGQx1o.jpg?width=960&crop=smart&auto=webp&s=48c486a9ae3d5fbe50a02f0d529718ddae647ddc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/jaZFnbNRzqcWGq_z3P5wTpznBrUnwJzraM_WDsGQx1o.jpg?width=1080&crop=smart&auto=webp&s=1b122be311017726a2f8289aa871734b05ff306c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/jaZFnbNRzqcWGq_z3P5wTpznBrUnwJzraM_WDsGQx1o.jpg?auto=webp&s=8e55c78aa7a330b649f0ec12e14fec476fd79d3f', 'width': 1200}, 'variants': {}}]} |
||
Fine-tuning for accuracy | 1 | Hi everyone,
During the last months I've reading some of the posts regarding fine-tuning and customizing Local LLM for particular scenarios (and some questions about what are people doing with those Local LLM).
TL;DR: What is the best strategy to tune a LLM to select from a limited but accurate set of customizable responses?
In my particular case, I have been researching how to convert natural language descriptions of data requirements into data quality validations. My ultimate goal is to be able to validate the data quality of Open Data datasets based on the metadata/descriptions available for each dataset.
So, usually each dataset has a corresponding PDF, with sentences describing the data such as:
- Contract offers can be: open, limited availability, direct adjudication
- Contract ID must not be null
- Contract start date is a date in the format MM/DD/YYYY
To check those rules, I am using a Python library called [Great Expectations](https://greatexpectations.io/expectations/?viewType=Summary) that offers several hundred pre-made validation rules, called expectations. There are around 75 core expectations and 250 experimental ones.
These are some of the sample expectations that would be created:
- expect_column_values_to_be_in_set("contract_offer", ["open", "limited availability", "direct adjudication"])
- expect_column_values_to_not_be_null("contract_ID")
- expect_column_values_to_match_strftime_format("contract_start_date", "%m/%d/%Y")
It's important to know that a single natural language description can create several expectations. (as in: "We expect date values not to be null and formatted as per ISO 8601 standard format")
I have generated several of these rules using GPT-4, passing the names of the expectations. Otherwise, the model will hallucinate creating seemingly real answers that are 100% made up.
My goal would be to create those rules using a customized local LLM. Some ideas gathered from this subreddit (a rough sketch combining them follows the list):
- Vector databases can be used to recover factual information.
- LoRA can be used to customize the output format (but is not the best way to fine-tune for accuracy).
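To make that concrete, here is a rough sketch combining the expectation-name prompting I used with GPT-4 and the retrieval idea above (purely illustrative -- the prompt wording and helper names are my own, and the actual LLM call is left out):
    def build_expectation_prompt(requirement: str, candidate_expectations: list[str]) -> str:
        # Only let the model choose from a short, retrieved list of expectation names
        # (e.g. the top-k matches from a vector store over the expectation gallery),
        # which keeps it from inventing functions that do not exist.
        names = "\n".join(f"- {name}" for name in candidate_expectations)
        return (
            "You translate data requirements into Great Expectations calls.\n"
            "Use ONLY expectations from this list:\n"
            f"{names}\n\n"
            f"Requirement: {requirement}\n"
            "Output one or more Python expectation calls and nothing else."
        )
    prompt = build_expectation_prompt(
        "Contract start date is a date in the format MM/DD/YYYY",
        ["expect_column_values_to_match_strftime_format", "expect_column_values_to_not_be_null"],
    )
    # send `prompt` to the local LLM of choice, then validate that every returned
    # call is in the allowed list before executing it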
As I was researching this topic, several similar initiatives have appeared. [SodaGPT](https://docs.soda.io/soda-cloud/sodagpt.html) is a cloud service created by data quality company Soda, to convert a single natural language sentence into a data quality rule, using the own data quality language called SodaCL.This is a Falcon 7b (instruct?) fine-tuned using LoRA.
One week later, the [BirdiDQ](https://github.com/BirdiD/BirdiDQ) open source project was announced, which converts natural language descriptions into expectations. I am aiming to contribute to this project, which works with a finetuned GPT-3.5 at the moment but plans to use Falcon-7b-instruct plus QLoRA. Both fine-tunings have been trained on a custom dataset made of 250 description-expectation pairs.
What are the most viable strategies when you have to select and customize from a limited number of possible alternatives and want to prevent any hallucinations in the output?
Thanks in advance!
P.D After today's announcement, we might experiment with LLaMA 2 instead of Falcon. | 2023-07-19T01:08:14 | https://www.reddit.com/r/LocalLLaMA/comments/153gc7p/finetuning_for_accuracy/ | elsatch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153gc7p | false | null | t3_153gc7p | /r/LocalLLaMA/comments/153gc7p/finetuning_for_accuracy/ | false | false | self | 1 | null |
Llama v2 chat... don't think it is there yet. Need Fine Tune. | 1 | ​
https://preview.redd.it/bl3q2fs6vtcb1.png?width=1385&format=png&auto=webp&s=5edfd6b146fb8eab5d474bafb5825927cfd287c3 | 2023-07-19T01:51:53 | https://www.reddit.com/r/LocalLLaMA/comments/153haxw/llama_v2_chat_dont_think_it_is_there_yet_need/ | jackfood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153haxw | false | null | t3_153haxw | /r/LocalLLaMA/comments/153haxw/llama_v2_chat_dont_think_it_is_there_yet_need/ | false | false | 1 | null |
|
Meta filtered LLaMa-2's training data to keep it G-rated as much as possible. If your intended use case isn't G-rated, you might be in for a disappointment. | 1 | Someone was posting [llama-chat being flirty earlier](/r/LocalLLaMA/comments/15388d6/llama_2_pffft_boundaries_ethics_dont_be_silly/) (given a specific system prompt, of course), but that doesn't mean it's ever actually seen any kind of adult content. This may impact certain kinds of story writing, even if you're not going for anything X-rated.
In other words, it's not just trained to say "No" to writing your kinky story, it literally doesn't know how. | 2023-07-19T01:59:28 | https://www.reddit.com/r/LocalLLaMA/comments/153hgpx/meta_filtered_llama2s_training_data_to_keep_it/ | Maristic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153hgpx | false | null | t3_153hgpx | /r/LocalLLaMA/comments/153hgpx/meta_filtered_llama2s_training_data_to_keep_it/ | false | false | self | 1 | null |
any news on when Phi-1 will be released? | 1 | Microsoft said they'd release Phi-1, did they not? I haven't heard anything. is there an approximate timeline? | 2023-07-19T02:19:11 | https://www.reddit.com/r/LocalLLaMA/comments/153hwlj/any_news_on_when_phi1_will_be_released/ | Cunninghams_right | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153hwlj | false | null | t3_153hwlj | /r/LocalLLaMA/comments/153hwlj/any_news_on_when_phi1_will_be_released/ | false | false | self | 1 | null |
Business Training Application Feasability | 1 | I am a CPA and can attest that small CPA shops are in the Stone Age technology wise and also struggle with retaining senior associates to train their new hires. I was wondering if it would be feasible to work with CPA firms to document their processes and procedures and then have a developer use the data to train a local chatbot that could be used to answer questions for new hires? Sorry I am not a developer so I don’t understand all the details in terms of what is possible right now but I am trying to learn. | 2023-07-19T02:27:30 | https://www.reddit.com/r/LocalLLaMA/comments/153i32b/business_training_application_feasability/ | Doggo_9000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153i32b | false | null | t3_153i32b | /r/LocalLLaMA/comments/153i32b/business_training_application_feasability/ | false | false | self | 1 | null |
No, Llama 2 is NOT an open source LLM | 1 | I have seen many people call llama2 the most capable open source LLM. This is not true so please please stop spreading this misinformation. It is doing more harm than good.
Open source means two things:
- Anyone can access the code and weights and use it however they want, no strings attached.
- Anyone can use the model for whatever purpose, no strings attached.
The Llama 2 license doesn't allow these two things.
First, regarding the model:
>2. **Additional Commercial Terms.** If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
While this affects less than 1% of the world, it still violates the essence of open source ML.
Second, regarding the weights: the weights are not publicly provided by Meta. You have to apply to get a copy of the weights from Meta. Furthermore, you can NOT use these weights to train any LLM except Llama 2 (unless you have written approval from Meta).
Lastly, while Llama 2 is NOT an open source LLM, it is still a big step towards democratizing LLMs since people are going to use it anyway. Having a capable LLM to experiment with is a big catalyst for innovation in NLP.
TL;DR: Llama2 is good but not open source. | 2023-07-19T02:32:26 | https://www.reddit.com/r/LocalLLaMA/comments/153i6vi/no_llama_2_is_not_an_open_source_llm/ | Amgadoz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153i6vi | false | null | t3_153i6vi | /r/LocalLLaMA/comments/153i6vi/no_llama_2_is_not_an_open_source_llm/ | false | false | self | 1 | null |
How to finetune Guanaco or Airoboros 13B/4bit on Windows? | 1 | Hi all. I'm struggling to find a way to finetune 4 bit models on Windows. Seems to do so in oobabooga it needs GPTQ-for-LLaMA which needs Triton which needs Linux (or WSL which I attempted to install oobabooga but it's the end of a long day of getting to this point and just didn't have the heart left to learn WSL/Linux commands to troubleshoot the install).
Am I missing a better or easier route to accomplish finetuning 4bit versions of Guanaco/Airoboros 13B?
Related question, I'm doing this all on a 3090. Am I destined to fail regardless due to VRAM cap of 24GB? | 2023-07-19T02:36:13 | https://www.reddit.com/r/LocalLLaMA/comments/153i9tu/how_to_finetune_guanaco_or_airoboros_13b4bit_on/ | TheNitzel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153i9tu | false | null | t3_153i9tu | /r/LocalLLaMA/comments/153i9tu/how_to_finetune_guanaco_or_airoboros_13b4bit_on/ | false | false | self | 1 | null |
Llama 2 13b with kobold cpp issue. | 1 | Trying to use llama 2 13b on kobold cpp, it can't even form a complete paragraph. Any pointers guys? | 2023-07-19T02:48:39 | SuggestionInside5234 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 153ij6s | false | null | t3_153ij6s | /r/LocalLLaMA/comments/153ij6s/llama_2_13b_with_kobold_cpp_issue/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'aL9Pm6DnJZMWydP8LK_0O3KvtvDadivkUIB5k3wXMQI', 'resolutions': [{'height': 51, 'url': 'https://preview.redd.it/0p8c3iof5ucb1.jpg?width=108&crop=smart&auto=webp&s=b967fb95c604bb6f109292836ce2bfcd128ce047', 'width': 108}, {'height': 102, 'url': 'https://preview.redd.it/0p8c3iof5ucb1.jpg?width=216&crop=smart&auto=webp&s=539fb15819a95488b75a7b3669e4a5538bf5d617', 'width': 216}, {'height': 151, 'url': 'https://preview.redd.it/0p8c3iof5ucb1.jpg?width=320&crop=smart&auto=webp&s=04ab198d29d0baccda8f6126539b8f2658137cf9', 'width': 320}, {'height': 302, 'url': 'https://preview.redd.it/0p8c3iof5ucb1.jpg?width=640&crop=smart&auto=webp&s=fc9f1401a489d13f5ebb1a3067610c3a8135d251', 'width': 640}, {'height': 454, 'url': 'https://preview.redd.it/0p8c3iof5ucb1.jpg?width=960&crop=smart&auto=webp&s=69597590fcfe9f882b25c07af8b1b17ed40a615e', 'width': 960}, {'height': 511, 'url': 'https://preview.redd.it/0p8c3iof5ucb1.jpg?width=1080&crop=smart&auto=webp&s=0036618705797babce6df0f7ffbf5eb7373511d5', 'width': 1080}], 'source': {'height': 1908, 'url': 'https://preview.redd.it/0p8c3iof5ucb1.jpg?auto=webp&s=48e3f0974fec19d456208c2a67eb5eb2e51433d6', 'width': 4032}, 'variants': {}}]} |
||
Llama 2 - LLM Leaderboard Performance | 1 | Multiple leaderboard evaluations for Llama 2 are in and overall it seems quite impressive.
[https://huggingface.co/spaces/HuggingFaceH4/open\_llm\_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
This is the most popular leaderboard, but I'm not sure it can be trusted right now, since it's been under revision for the past month because apparently both its MMLU and ARC scores are inaccurate. Nonetheless, they did add Llama 2, and the 70b-chat version has taken 1st place. Each version of Llama 2 on this leaderboard is about equal to the best finetunes of Llama.
[https://github.com/aigoopy/llm-jeopardy](https://github.com/aigoopy/llm-jeopardy)
On this leaderboard the Llama 2 models are actually some of the worst models on the list. Does this just mean Llama 2 doesn't have trivia-like knowledge?
[https://docs.google.com/spreadsheets/d/1NgHDxbVWJFolq8bLvLkuPWKC7i\_R6I6W/edit#gid=2011456595](https://docs.google.com/spreadsheets/d/1NgHDxbVWJFolq8bLvLkuPWKC7i_R6I6W/edit#gid=2011456595)
Last, Llama 2 performed incredibly well on this open leaderboard. It far surpassed the other models in 7B and 13B and if the leaderboard ever tests 70B (or 33B when it is released) it seems quite likely that it would beat GPT-3.5's score.
What are your guys' thoughts on Llama 2's performance and the potential of its finetunes? | 2023-07-19T02:50:49 | https://www.reddit.com/r/LocalLLaMA/comments/153ikue/llama_2_llm_leaderboard_performance/ | DontPlanToEnd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153ikue | false | null | t3_153ikue | /r/LocalLLaMA/comments/153ikue/llama_2_llm_leaderboard_performance/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'EN0-abblERL52DxeoNzcxdkhvXEwLdZMJTS58Umjs64', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=108&crop=smart&auto=webp&s=6fbb309f983333cbaf528bd40f8d6ffb39877704', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=216&crop=smart&auto=webp&s=1ae10c5a53638209dee07b017628d2a1fadc8d05', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=320&crop=smart&auto=webp&s=cf36565d3bac3086aaea4458c31609ff1b2c00b3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=640&crop=smart&auto=webp&s=8e182cefcf8da97d7b4369734149986feca334e5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=960&crop=smart&auto=webp&s=7699d0ad09185e2f560115cae5cb71e907073327', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=1080&crop=smart&auto=webp&s=7b11f6f2294899964ec8ed081777f4b6e19723b6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?auto=webp&s=81db4d9e1dd01a76f499e499f78aed3478ae6658', 'width': 1200}, 'variants': {}}]} |
local llm function calling | 1 | I was working on a few internal prototypes at work and I was using the ChatGPT function calling. It's crazy powerful. Can this structure be represented using llama-2, for example, or do I just have to write a fancy prompt? | 2023-07-19T03:02:23 | https://www.reddit.com/r/LocalLLaMA/comments/153iu1x/local_llm_function_calling/ | APUsilicon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153iu1x | false | null | t3_153iu1x | /r/LocalLLaMA/comments/153iu1x/local_llm_function_calling/ | false | false | self | 1 | null
Where is the LLaMA v2 34B model? Are they not releasing it? | 1 | 2023-07-19T03:27:02 | onil_gova | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 153jcbz | false | null | t3_153jcbz | /r/LocalLLaMA/comments/153jcbz/where_is_the_llama_v2_34b_model_are_they_not/ | false | false | 1 | {'enabled': True, 'images': [{'id': '-u5tAilH5APZR_ICtEtXa3q1MlcnfsnIDi5nJFqsXvs', 'resolutions': [{'height': 41, 'url': 'https://preview.redd.it/gwo80nuzbucb1.png?width=108&crop=smart&auto=webp&s=fd0ebd9811f5f990c5d375ba8718592586e0bb0b', 'width': 108}, {'height': 83, 'url': 'https://preview.redd.it/gwo80nuzbucb1.png?width=216&crop=smart&auto=webp&s=2f94b8639cf1743d5cfe22e7053259c6a56910f1', 'width': 216}, {'height': 123, 'url': 'https://preview.redd.it/gwo80nuzbucb1.png?width=320&crop=smart&auto=webp&s=21d3be3e753846fbe840d0b0772168860c8a3ec8', 'width': 320}, {'height': 247, 'url': 'https://preview.redd.it/gwo80nuzbucb1.png?width=640&crop=smart&auto=webp&s=53d7315ea978a677cb2681c5a121286653d30c2e', 'width': 640}, {'height': 370, 'url': 'https://preview.redd.it/gwo80nuzbucb1.png?width=960&crop=smart&auto=webp&s=599e99b1fb6a1e8faa5d3da8c349207fcef78fde', 'width': 960}, {'height': 416, 'url': 'https://preview.redd.it/gwo80nuzbucb1.png?width=1080&crop=smart&auto=webp&s=4e12d327ac7545e601fe60e883631f84271eb4fd', 'width': 1080}], 'source': {'height': 562, 'url': 'https://preview.redd.it/gwo80nuzbucb1.png?auto=webp&s=7dcc79f57f453db9fb4c02ba85b050c298a4de5a', 'width': 1456}, 'variants': {}}]} |
|||
Seems like we can continue to scale tokens and get returns model performance well after 2T tokens. | 1 | 2023-07-19T04:00:37 | onil_gova | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 153k0ph | false | null | t3_153k0ph | /r/LocalLLaMA/comments/153k0ph/seems_like_we_can_continue_to_scale_tokens_and/ | false | false | 1 | {'enabled': True, 'images': [{'id': '_QvOymv7LHPPeoIF0xYPiNpwwGlkPBC6RDLPxxqMNRA', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/9twlx3ctgucb1.png?width=108&crop=smart&auto=webp&s=c0b1095e36e2bf269b9094f938d0ded128b93feb', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/9twlx3ctgucb1.png?width=216&crop=smart&auto=webp&s=52dacf373348098566d36eb00918f6986d4d1181', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/9twlx3ctgucb1.png?width=320&crop=smart&auto=webp&s=42b67b04e30373397d1232baf6e00a7cf5ef7ec0', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/9twlx3ctgucb1.png?width=640&crop=smart&auto=webp&s=b21f72931005ea1a675bcf031d7708603206a714', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/9twlx3ctgucb1.png?width=960&crop=smart&auto=webp&s=57fca239e06347ff74fa65bdfdc08e98e1efe234', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/9twlx3ctgucb1.png?width=1080&crop=smart&auto=webp&s=7cbe39251a8305709771ffd06031a819fcf32ba2', 'width': 1080}], 'source': {'height': 847, 'url': 'https://preview.redd.it/9twlx3ctgucb1.png?auto=webp&s=dbbae5461ea2473e0ea7bc9f8f9a8ec0447a65ef', 'width': 1505}, 'variants': {}}]} |
|||
How much better is Llama 2? A simple example | 1 | 2023-07-19T04:13:17 | https://github.com/hegelai/prompttools/blob/main/examples/notebooks/LlamaHeadToHead.ipynb | hegel-ai | github.com | 1970-01-01T00:00:00 | 0 | {} | 153ka5o | false | null | t3_153ka5o | /r/LocalLLaMA/comments/153ka5o/how_much_better_is_llama_2_a_simple_example/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'VTGUfFaipsQFbnM1m8I4CHic0B28M3GqpoMOgz7_bo0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jW7Q-WB7W-0SGDaI6wRH3CjPIsGNii1Ur-ZO62DnuK8.jpg?width=108&crop=smart&auto=webp&s=da8fa70753162fa655cd03ab79dac2793daf5445', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jW7Q-WB7W-0SGDaI6wRH3CjPIsGNii1Ur-ZO62DnuK8.jpg?width=216&crop=smart&auto=webp&s=4eebeb2e1c8320e6f5686a2a546386dd672f90e6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jW7Q-WB7W-0SGDaI6wRH3CjPIsGNii1Ur-ZO62DnuK8.jpg?width=320&crop=smart&auto=webp&s=432fb876e7f69640e456990bfedefb2a667b183d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jW7Q-WB7W-0SGDaI6wRH3CjPIsGNii1Ur-ZO62DnuK8.jpg?width=640&crop=smart&auto=webp&s=fd464dc0d3a9b264865d5d8f391e4aab136d005e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jW7Q-WB7W-0SGDaI6wRH3CjPIsGNii1Ur-ZO62DnuK8.jpg?width=960&crop=smart&auto=webp&s=96f1068ed99dba73011343f625a1a0106059520f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jW7Q-WB7W-0SGDaI6wRH3CjPIsGNii1Ur-ZO62DnuK8.jpg?width=1080&crop=smart&auto=webp&s=1bf92dd069cc70b53c54480336a4e40a4c953252', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jW7Q-WB7W-0SGDaI6wRH3CjPIsGNii1Ur-ZO62DnuK8.jpg?auto=webp&s=ce48118b1b3caf2be70e4ac9b7d8e8fa04a2952a', 'width': 1200}, 'variants': {}}]} |
||
Llama2 Censored? | 1 | You cant get it to write a childrens story that involves the end of the world 🥲 | 2023-07-19T04:17:20 | https://www.reddit.com/gallery/153kd62 | Sumozebra | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 153kd62 | false | null | t3_153kd62 | /r/LocalLLaMA/comments/153kd62/llama2_censored/ | false | false | 1 | null |
|
Llama 2 is disappointing | 1 | With each new model launch, it's claimed that they at least match the quality of ChatGPT-3.5, yet the outcomes often fall short of expectations. Llama 2 hasn't broken this trend.
I gave the finetuned Llama 2 70b Chat model a go: [**https://huggingface.co/spaces/ysharma/Explore\_llamav2\_with\_TGI**](https://huggingface.co/spaces/ysharma/Explore_llamav2_with_TGI)
After I posed a series of questions and pitted its responses against those of ChatGPT-3.5, the results were underwhelming. I'd venture to say that it performs similarly to the Guanaco 65B model. It's nowhere near the level of ChatGPT-3.5.

What's more, it seems to exercise an unprecedented level of censorship, surpassing even that of ChatGPT. Notably, it abstained from answering queries about the concentration camps in Xinjiang and the Tiananmen Square massacre, let alone anything NSFW-related.
I'm left hoping that the base model doesn't come with this level of censorship. Can they even censor the base model? | 2023-07-19T04:46:30 | https://www.reddit.com/r/LocalLLaMA/comments/153kxwx/llama_2_is_disappointing/ | Big_Communication353 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153kxwx | false | null | t3_153kxwx | /r/LocalLLaMA/comments/153kxwx/llama_2_is_disappointing/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'jVfONw70j9xFSHK2sFB3j_M0ywv9sgZ9DCoGJM3sD1Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oxSfOoIcjr7WIJyWU1rqijaKy4fO5uiVIp7HTNVufXM.jpg?width=108&crop=smart&auto=webp&s=6f30a110b0af985d1d8140231cd4f0316a48f80a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/oxSfOoIcjr7WIJyWU1rqijaKy4fO5uiVIp7HTNVufXM.jpg?width=216&crop=smart&auto=webp&s=18122badbb0673e3d2c949b02a62535ad0268572', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/oxSfOoIcjr7WIJyWU1rqijaKy4fO5uiVIp7HTNVufXM.jpg?width=320&crop=smart&auto=webp&s=a3a6874f2f569d6490f02927f21a3c3e78e9a0ae', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/oxSfOoIcjr7WIJyWU1rqijaKy4fO5uiVIp7HTNVufXM.jpg?width=640&crop=smart&auto=webp&s=9bbfeeef3d9753edf8888a8f9ce00339a706d5ff', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/oxSfOoIcjr7WIJyWU1rqijaKy4fO5uiVIp7HTNVufXM.jpg?width=960&crop=smart&auto=webp&s=91030b6e805f84c6845685181ff119ba5e84cb74', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/oxSfOoIcjr7WIJyWU1rqijaKy4fO5uiVIp7HTNVufXM.jpg?width=1080&crop=smart&auto=webp&s=abdea90c83851c15d5895a8766fc50c58b617af9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/oxSfOoIcjr7WIJyWU1rqijaKy4fO5uiVIp7HTNVufXM.jpg?auto=webp&s=33c126b9bbf110d3f9c7ca2f2417fa61c7220da7', 'width': 1200}, 'variants': {}}]} |
Updates on the Agency library, now with AMQP support | 1 | Hello LocalLlama!
I've previously posted about my Python agent library called [`agency`](https://github.com/operand/agency). There are some great updates to share, so I thought I'd post another follow-up.
I've got two big things:
1. A couple weeks back I added AMQP (RabbitMQ) support to the library. This means that you can now use `agency` to create a distributed or multiprocess system for your agents, avoiding the single-threaded limitations of python's GIL. Want to run multiple copies of an LLM and split load between them over a network? This could help with that.
2. I've just begun a project to replace the crude "demo" app and add a modern "starter" web UI that you will be able to customize for your needs. It will require programming on your part, but the goal is to offer a good minimal foundation. It'll use a still-to-be-chosen UI framework; the Streamlit library was suggested and it's one good possibility. If you happen to have any opinions on this, I'd love to know!
Other updates since I last posted include lots of documentation work, tests, a better docker configuration... so many things.
I'm excited and I just wanted to spread the word some more. I hope this project is helpful to some of you! If you have any suggestions or feedback to share please do!
[https://github.com/operand/agency](https://github.com/operand/agency) | 2023-07-19T05:08:36 | https://www.reddit.com/r/LocalLLaMA/comments/153ldk7/updates_on_the_agency_library_now_with_amqp/ | helloimop | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153ldk7 | false | null | t3_153ldk7 | /r/LocalLLaMA/comments/153ldk7/updates_on_the_agency_library_now_with_amqp/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'm6th8V7E7zzx2CCzdbrW9zvqJWfFxufjUAokKdD9Qaw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/U0vhJWc4IXjR4hRqBe6o2Nlz4dpXZqMTmclZj9vSukc.jpg?width=108&crop=smart&auto=webp&s=875dcf7e2c9c07458396f503d7cf2976a3c33503', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/U0vhJWc4IXjR4hRqBe6o2Nlz4dpXZqMTmclZj9vSukc.jpg?width=216&crop=smart&auto=webp&s=99e96338557344e1b9e38df9f3f65166764d632c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/U0vhJWc4IXjR4hRqBe6o2Nlz4dpXZqMTmclZj9vSukc.jpg?width=320&crop=smart&auto=webp&s=62c67e35464947430c0128ebdeb5046fed9500cc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/U0vhJWc4IXjR4hRqBe6o2Nlz4dpXZqMTmclZj9vSukc.jpg?width=640&crop=smart&auto=webp&s=401bb1677a6550313d8213f8cfc9752a105ca587', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/U0vhJWc4IXjR4hRqBe6o2Nlz4dpXZqMTmclZj9vSukc.jpg?width=960&crop=smart&auto=webp&s=33b8ffcf7dd0812dedebc0374647a5281dddcf1c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/U0vhJWc4IXjR4hRqBe6o2Nlz4dpXZqMTmclZj9vSukc.jpg?width=1080&crop=smart&auto=webp&s=965489bf80f760a497b585a641250189335e583f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/U0vhJWc4IXjR4hRqBe6o2Nlz4dpXZqMTmclZj9vSukc.jpg?auto=webp&s=6800f296fa85899c024b7b7dae5664e6c8dfb5a5', 'width': 1200}, 'variants': {}}]} |
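For readers unfamiliar with the pattern the post describes: the sketch below is *not* the `agency` API, just a minimal, generic illustration of passing agent messages over AMQP with the `pika` client against a local RabbitMQ broker. The queue name and message shape are made-up assumptions for illustration.

```python
# Generic AMQP sketch (not the agency API): one process publishes a task,
# another consumes it. Requires a running RabbitMQ broker and `pip install pika`.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="agent_inbox")  # hypothetical queue name

# Producer side: hand a message to whichever agent process is listening.
channel.basic_publish(
    exchange="",
    routing_key="agent_inbox",
    body=json.dumps({"from": "user", "action": "say", "content": "Hello, agent!"}),
)

# Consumer side (would normally run in a separate worker process):
def handle(ch, method, properties, body):
    message = json.loads(body)
    print("received:", message["content"])
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="agent_inbox", on_message_callback=handle)
# channel.start_consuming()  # blocks; run this in the worker process
connection.close()
```

Because each agent runs in its own process and only talks over the broker, the Python GIL stops being the bottleneck, which is the point of the AMQP backend described above.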
Quantization: How Much Quality is Lost? | 1 | Yeehaw, y'all 🤠
I've been pondering a lot about quantization and its impact on large language models (LLMs). As you all may know, quantization techniques like 4-bit and 8-bit quantization have been a boon for us consumers, allowing us to run larger models than our hardware would typically be able to handle. However, it's clear that there has to be a trade-off.
Quantization essentially involves reducing the precision of the numbers used in the weights of the model. This reduction in precision leads to a decrease in model size and computational requirements, but it also introduces an approximation error. The question is, just how much does this approximation error impact the quality of the model's output?
Has anyone in this community done any testing or come across any research that quantifies the impact of quantization on the quality of LLMs? I'm particularly interested in real-world experiences and practical examples. And if anyone has some insight into how the quality loss due to quantization might be mitigated, that'd be icing on the cake.
Thanks in advance for your insights and contributions to this discussion! | 2023-07-19T05:11:06 | https://www.reddit.com/r/LocalLLaMA/comments/153lfc2/quantization_how_much_quality_is_lost/ | Prince-of-Privacy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153lfc2 | false | null | t3_153lfc2 | /r/LocalLLaMA/comments/153lfc2/quantization_how_much_quality_is_lost/ | false | false | self | 1 | null |
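To make the trade-off in the post above concrete, here is a small, self-contained sketch (plain NumPy, no LLM involved) of symmetric round-to-nearest 4-bit quantization of a weight matrix, measuring the approximation error it introduces. Real schemes (GPTQ, GGML k-quants, bitsandbytes NF4) quantize group-wise with per-group scales and are considerably smarter, so treat this purely as an illustration of the principle.

```python
# Toy round-to-nearest 4-bit quantization of a fake layer weight matrix.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0, 0.02, size=(4096, 4096)).astype(np.float32)  # stand-in for layer weights

def quantize_rtn(x, bits=4):
    qmax = 2 ** (bits - 1) - 1          # 7 for signed 4-bit
    scale = np.abs(x).max() / qmax      # single scale for the whole tensor
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q.astype(np.int8), scale

q, scale = quantize_rtn(w, bits=4)
w_hat = q.astype(np.float32) * scale    # dequantize back to float

err = np.abs(w - w_hat)
print(f"mean abs error: {err.mean():.6f}")
print(f"max abs error:  {err.max():.6f}")
print(f"relative RMSE:  {np.sqrt((err**2).mean()) / w.std():.3%}")
```

The per-weight error is what perplexity benchmarks of quantized models are ultimately measuring in aggregate; grouping and better rounding (as in GPTQ) shrink it substantially compared with this naive version.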
Which is the best uncensored, commercial use LLaMA model? | 1 | Hi all,
I'm looking for an uncensored, commercially licensable LLaMA model to use in my projects. I'm using orca-mini-7b.ggmlv3.q4\_0.bin downloaded from [https://gpt4all.io/](https://gpt4all.io/index.html), but it is heavily censored.
My project is on GitHub here: [https://github.com/Uralstech/vid-orca](https://github.com/Uralstech/vid-orca). The goal is to deploy these models to Google Cloud for use in my apps through an API. | 2023-07-19T05:18:21 | https://www.reddit.com/r/LocalLLaMA/comments/153lk9f/which_is_the_best_uncensored_commercial_use_llama/ | uralstech_MR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153lk9f | false | null | t3_153lk9f | /r/LocalLLaMA/comments/153lk9f/which_is_the_best_uncensored_commercial_use_llama/ | false | false | self | 1 | null |
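For reference, loading that same GGML file through the gpt4all Python bindings looks roughly like the sketch below; exact keyword names have shifted between gpt4all releases, so check the current docs rather than taking this verbatim.

```python
# Rough sketch of running the GGML file mentioned above via the gpt4all bindings.
# Keyword arguments may differ between gpt4all versions; treat as illustrative.
from gpt4all import GPT4All

model = GPT4All("orca-mini-7b.ggmlv3.q4_0.bin")   # model name or local path
reply = model.generate("Explain what a vector database is.", max_tokens=200)
print(reply)
```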
Which is the best uncensored, commercial use LLaMA model? | 1 | Hi all,
I'm looking for an uncensored, commercially licensable LLaMA model to use in my projects. I'm using orca-mini-7b.ggmlv3.q4\_0.bin downloaded from [https://gpt4all.io/](https://gpt4all.io/index.html), but it is heavily censored.
My project is on GitHub here: [https://github.com/Uralstech/vid-orca](https://github.com/Uralstech/vid-orca). The goal is to deploy these models to Google Cloud for use in my apps through an API. | 2023-07-19T06:24:08 | https://www.reddit.com/r/LocalLLaMA/comments/153msko/which_is_the_best_uncensored_commercial_use_llama/ | uralstech_MR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153msko | false | null | t3_153msko | /r/LocalLLaMA/comments/153msko/which_is_the_best_uncensored_commercial_use_llama/ | false | false | self | 1 | null |
Deploy LLaMA models to Google Cloud! | 1 | [removed] | 2023-07-19T06:38:00 | https://www.reddit.com/r/LocalLLaMA/comments/153n1o5/deploy_llama_models_to_google_cloud/ | uralstech_MR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153n1o5 | false | null | t3_153n1o5 | /r/LocalLLaMA/comments/153n1o5/deploy_llama_models_to_google_cloud/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'loUGlN_SPaXn_KhONzdrngYcQA6ceTvcvSaOMlCo_Lk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zVxamcfKRIWNf8tZo3Ey5Nbgoatmv26nEMRRrs0vmak.jpg?width=108&crop=smart&auto=webp&s=db92ffc1bd6bd361c16d42154f7e0c13df71c510', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zVxamcfKRIWNf8tZo3Ey5Nbgoatmv26nEMRRrs0vmak.jpg?width=216&crop=smart&auto=webp&s=6151eb25a9d2212710087c6b67a463061d06de3e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zVxamcfKRIWNf8tZo3Ey5Nbgoatmv26nEMRRrs0vmak.jpg?width=320&crop=smart&auto=webp&s=bcdaf751835d386cb4784dc2457b6b24e3556ea2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zVxamcfKRIWNf8tZo3Ey5Nbgoatmv26nEMRRrs0vmak.jpg?width=640&crop=smart&auto=webp&s=4085602c47f67c3ccdc0529e358b821d0c2c6b5a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zVxamcfKRIWNf8tZo3Ey5Nbgoatmv26nEMRRrs0vmak.jpg?width=960&crop=smart&auto=webp&s=a54d1a8ba609f4651cc1ee9992e59b29b0156a8d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zVxamcfKRIWNf8tZo3Ey5Nbgoatmv26nEMRRrs0vmak.jpg?width=1080&crop=smart&auto=webp&s=58d513a9cb834b4dd5746315e8c2795c18a1f3d7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zVxamcfKRIWNf8tZo3Ey5Nbgoatmv26nEMRRrs0vmak.jpg?auto=webp&s=c4bf5609b18afa69634da7d01891cdfc98e91c72', 'width': 1200}, 'variants': {}}]} |
How do I deploy a chatbot locally on my infra using Falcon 2 or even LLAMA 2 | 1 | Context: I was tasked with creating a chatbot for my uni. I scraped their entire website for the knowledge base and plan on using vector DBs.

I have the infra: it's a DGX node with 8 x V100 SXM2 GPUs, 32 GB each. I now want to deploy a chat model completely locally on it, with inference endpoints feeding Gradio or something similar for the chat interface. Is there a comprehensive guide to this with a similar setup? The guides I usually encounter rely on quantisation or some other format such as GGML to run CPU inference.

I then want to hook it up to the vector DB, probably using LangChain, and then deploy it to my uni's website or something for students to use. | 2023-07-19T06:44:49 | https://www.reddit.com/r/LocalLLaMA/comments/153n5xa/how_do_i_deploy_a_chatbot_locally_on_my_infra/ | supersic1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153n5xa | false | null | t3_153n5xa | /r/LocalLLaMA/comments/153n5xa/how_do_i_deploy_a_chatbot_locally_on_my_infra/ | false | false | self | 1 | null
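Not a full deployment recipe, but a minimal sketch of the retrieval half of such a setup: embed the scraped page chunks, index them, and pull the top matches into the chat prompt. The embedding model name and the chunk list are placeholder assumptions, and the LLM call itself is left out.

```python
# Minimal retrieval sketch: embed scraped page chunks, index with FAISS,
# and fetch the top-k chunks to put into the chat prompt as context.
import faiss
from sentence_transformers import SentenceTransformer

chunks = ["Admissions close on July 31.", "The library is open 8am-10pm.", "..."]
embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

emb = embedder.encode(chunks, normalize_embeddings=True)
index = faiss.IndexFlatIP(emb.shape[1])      # inner product == cosine on normalized vectors
index.add(emb)

query = "When do admissions close?"
q = embedder.encode([query], normalize_embeddings=True)
scores, ids = index.search(q, 2)
context = "\n".join(chunks[i] for i in ids[0])
print(context)   # prepend this as context for the chat model running on the DGX
```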
Replika local llm | 1 | Is there any local LLM which is similar to ReplikaAI? I'm looking for 7B or 13B models. I have used Samantha, but as you all know it has some limitations, especially in RP or ERP | 2023-07-19T07:01:31 | https://www.reddit.com/r/LocalLLaMA/comments/153ngux/replika_local_llm/ | sahl030 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153ngux | false | null | t3_153ngux | /r/LocalLLaMA/comments/153ngux/replika_local_llm/ | false | false | self | 1 | null
How to size hardware to run local LLMs? | 1 | [removed] | 2023-07-19T07:23:19 | https://www.reddit.com/r/LocalLLaMA/comments/153nv5h/how_to_size_hardware_to_run_local_llms/ | CryptoLXR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153nv5h | false | null | t3_153nv5h | /r/LocalLLaMA/comments/153nv5h/how_to_size_hardware_to_run_local_llms/ | false | false | self | 1 | null |
How to fine-tune 8k context length Llama 13B on a minimal number of GPUs? | 1 | I have a Llama 13B model I want to fine-tune. I am using QLoRA (which brings it down to 7 GB of GPU memory) and NTK scaling to bring the context length up to 8k.

But at 1024 context length, fine-tuning spikes to 42 GB of GPU memory used, so evidently it won't be feasible to use 8k context length unless I use a ton of GPUs. Is there any way to lower memory so that one or two 3090s are enough for 8k context length fine-tuning? | 2023-07-19T07:38:17 | https://www.reddit.com/r/LocalLLaMA/comments/153o4ug/how_to_fine_tune_8k_context_length_llama_13b_on/ | bahibo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153o4ug | false | null | t3_153o4ug | /r/LocalLLaMA/comments/153o4ug/how_to_fine_tune_8k_context_length_llama_13b_on/ | false | false | self | 1 | null
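Long sequences blow up activation memory rather than weight memory, so the usual levers are gradient checkpointing, a paged 8-bit optimizer, micro-batch size 1 with gradient accumulation, and (if available) a memory-efficient attention kernel. A rough sketch of the relevant knobs, assuming `model` is an already-loaded 4-bit PEFT model in a standard transformers/peft setup; exact savings depend on the model and attention implementation, and 8k on a single 24 GB card may still be tight.

```python
# Rough sketch of the memory knobs for long-context QLoRA fine-tuning.
# Assumes `model` is the already-loaded 4-bit PEFT model; values are illustrative.
from transformers import TrainingArguments

model.gradient_checkpointing_enable()        # trade compute for activation memory
model.config.use_cache = False               # required when checkpointing is on

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,           # keep the micro-batch minimal at 8k tokens
    gradient_accumulation_steps=16,          # recover an effective batch size
    optim="paged_adamw_8bit",                # paged optimizer states via bitsandbytes
    bf16=True,
    logging_steps=10,
)
```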
Llama-2 via MLC LLM | 1 | - Twitter: https://twitter.com/junrushao/status/1681562418768650241
- Instructions: https://mlc.ai/mlc-llm/docs/get_started/try_out.html
- Performance: 46 tok/s on M2 Max, 156 tok/s on RTX 4090.
More hardwares & model sizes coming soon! | 2023-07-19T07:42:08 | https://www.reddit.com/r/LocalLLaMA/comments/153o791/llama2_via_mlc_llm/ | yzgysjr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153o791 | false | null | t3_153o791 | /r/LocalLLaMA/comments/153o791/llama2_via_mlc_llm/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'xpoOdndGGw-zchQXfX69tP0tGcYwo5ASDfZY3OYLSbA', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/X92E95e-7sJCPTwZhhW5vD2f6lrfZUjzZGD7feD1tLE.jpg?width=108&crop=smart&auto=webp&s=0c13ebabcaecd5986b6e452b8ad307cfc947d9c3', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/X92E95e-7sJCPTwZhhW5vD2f6lrfZUjzZGD7feD1tLE.jpg?auto=webp&s=b3286d0ef0dfa1136a7222765de482b6585d0f83', 'width': 140}, 'variants': {}}]} |
Kept preaching how precious human life is instead of answering the puzzle | 1 | 2023-07-19T08:38:01 | gijeri4793 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 153p5vm | false | null | t3_153p5vm | /r/LocalLLaMA/comments/153p5vm/kept_preaching_how_precious_human_life_is_instead/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'ZmnFuSmlkMKOOAa8VGC7iZm8_qaa9sA3HMfgvgXtY4Y', 'resolutions': [{'height': 167, 'url': 'https://preview.redd.it/3c2o664svvcb1.jpg?width=108&crop=smart&auto=webp&s=5533536ecfe71c947169378e0744b8425ad63c65', 'width': 108}, {'height': 334, 'url': 'https://preview.redd.it/3c2o664svvcb1.jpg?width=216&crop=smart&auto=webp&s=90a20f8ff9ef0f883176fb547d10bc154ec67a37', 'width': 216}, {'height': 495, 'url': 'https://preview.redd.it/3c2o664svvcb1.jpg?width=320&crop=smart&auto=webp&s=be939f06d13cf8b0849fbd2d69362f584ae47073', 'width': 320}, {'height': 991, 'url': 'https://preview.redd.it/3c2o664svvcb1.jpg?width=640&crop=smart&auto=webp&s=dd8cc952ccd5c5f3fd50b54f0d77fde206aee5be', 'width': 640}, {'height': 1487, 'url': 'https://preview.redd.it/3c2o664svvcb1.jpg?width=960&crop=smart&auto=webp&s=11027d2f0a07a466de2101a5faf5d36900b5faa9', 'width': 960}, {'height': 1673, 'url': 'https://preview.redd.it/3c2o664svvcb1.jpg?width=1080&crop=smart&auto=webp&s=760195c9197e0ee4fda58186ea762293560b7096', 'width': 1080}], 'source': {'height': 1673, 'url': 'https://preview.redd.it/3c2o664svvcb1.jpg?auto=webp&s=af7d663edbd981f2666cf216ae323178580b1bed', 'width': 1080}, 'variants': {}}]} |
|||
What's the best way to deploy LLAMA2 to your app? | 1 | I'm building an AI Agent platform https://askgen.ie and want to integrate llama2 into my app.
Do you use Replicate? | 2023-07-19T09:11:37 | https://www.reddit.com/r/LocalLLaMA/comments/153prc3/whats_the_best_way_to_deploy_llama2_to_your_app/ | livc95 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153prc3 | false | null | t3_153prc3 | /r/LocalLLaMA/comments/153prc3/whats_the_best_way_to_deploy_llama2_to_your_app/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ugpNhJ4m-I1esfZlUHjsYMTXGgQ_t5BV-WeAMXtiyKY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/H9seOxV7qUy9OSRCdHHao45wFaueHHoE5wKQKHgui9k.jpg?width=108&crop=smart&auto=webp&s=ef7c1fbb56cf9ab24e9cc8f6dbae4de46d11e717', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/H9seOxV7qUy9OSRCdHHao45wFaueHHoE5wKQKHgui9k.jpg?width=216&crop=smart&auto=webp&s=d4b89052f13c3c9f730d3da73533cd09c6cd68a4', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/H9seOxV7qUy9OSRCdHHao45wFaueHHoE5wKQKHgui9k.jpg?width=320&crop=smart&auto=webp&s=9e4aa257993311858ce9f25923d2b5a885e99891', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/H9seOxV7qUy9OSRCdHHao45wFaueHHoE5wKQKHgui9k.jpg?width=640&crop=smart&auto=webp&s=fe74c4f3c513adb2d876053eb53924e7341e7653', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/H9seOxV7qUy9OSRCdHHao45wFaueHHoE5wKQKHgui9k.jpg?width=960&crop=smart&auto=webp&s=796d753b56c8ff6fd5269d2d239da0b3016fcbb1', 'width': 960}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/H9seOxV7qUy9OSRCdHHao45wFaueHHoE5wKQKHgui9k.jpg?auto=webp&s=16d12f7de2920e083b9acaae53cce6f1af5ab386', 'width': 1000}, 'variants': {}}]} |
Is Llama 2 good at math? | 1 | I haven't yet had the chance to try out Llama 2.

I would like to know if Llama 2 is any good at math. | 2023-07-19T09:32:00 | https://www.reddit.com/r/LocalLLaMA/comments/153q42n/is_lamma_2_good_at_math/ | mr_house7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153q42n | false | null | t3_153q42n | /r/LocalLLaMA/comments/153q42n/is_lamma_2_good_at_math/ | false | false | self | 1 | null
Finetuning LLaMA 2 (the base models)? | 1 | Based on what I have read, the base versions are not heavily censored like the chat models, so I was asking what the best approaches are to finetune/align the base models for different requirements.
What tools do you use and achieved great results ? … For me i have tried [xturing](https://xturing.stochastic.ai/) and [SFTTrainer](https://huggingface.co/docs/trl/main/en/sft_trainer) and they got me a semi okay results. | 2023-07-19T09:55:26 | https://www.reddit.com/r/LocalLLaMA/comments/153qjdx/finetuning_llama_2_the_base_models/ | MohamedRashad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153qjdx | false | null | t3_153qjdx | /r/LocalLLaMA/comments/153qjdx/finetuning_llama_2_the_base_models/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'QKqeLS2WBgn9bb4ajQxbM7Yg1zHcbOL8MlLp6oAnq9Y', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/_DnzBI_9cnNkHV4iTJvIovekVaCmdnjbB1S-BXg2hBM.jpg?width=108&crop=smart&auto=webp&s=3f30c4f7ad442c455212d99acdc241582282ea7c', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/_DnzBI_9cnNkHV4iTJvIovekVaCmdnjbB1S-BXg2hBM.jpg?width=216&crop=smart&auto=webp&s=cfb28801dc9269ba8811282fa4efc2bfe9798151', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/_DnzBI_9cnNkHV4iTJvIovekVaCmdnjbB1S-BXg2hBM.jpg?width=320&crop=smart&auto=webp&s=ee0bac5dae7ed87a0f76c64e00f3ccdd7ffc0e11', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/_DnzBI_9cnNkHV4iTJvIovekVaCmdnjbB1S-BXg2hBM.jpg?width=640&crop=smart&auto=webp&s=0f865bbed857ecc5c2cc6a37566af1cea8d69042', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/_DnzBI_9cnNkHV4iTJvIovekVaCmdnjbB1S-BXg2hBM.jpg?width=960&crop=smart&auto=webp&s=cbf46ca9f1a2a5cdcd2885fbc1d307fd1d9fbefa', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/_DnzBI_9cnNkHV4iTJvIovekVaCmdnjbB1S-BXg2hBM.jpg?width=1080&crop=smart&auto=webp&s=704cf14ecf0877f0a75a2ba677ea08d8b569f89d', 'width': 1080}], 'source': {'height': 1890, 'url': 'https://external-preview.redd.it/_DnzBI_9cnNkHV4iTJvIovekVaCmdnjbB1S-BXg2hBM.jpg?auto=webp&s=a367c87186f206dd689e940fe17d11e8579c30dd', 'width': 3600}, 'variants': {}}]} |
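Since SFTTrainer came up: below is a bare-bones sketch of the common recipe for the base model (4-bit QLoRA adapter plus trl's SFTTrainer). The dataset name and hyperparameters are placeholders, and trl's argument names have shifted across versions, so check the release you actually have installed.

```python
# Bare-bones QLoRA + SFTTrainer sketch for a Llama 2 base model.
# Dataset and hyperparameters are placeholders; verify against your trl version.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import SFTTrainer

model_id = "meta-llama/Llama-2-7b-hf"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")  # example dataset
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    peft_config=lora,
    dataset_text_field="text",
    max_seq_length=2048,
)
trainer.train()
```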
TheBloke...I am eternally grateful | 1 | 2023-07-19T10:16:24 | BharatBlade | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 153qxot | false | null | t3_153qxot | /r/LocalLLaMA/comments/153qxot/theblokei_am_eternally_grateful/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'MozWClcb19Oo98NyyprCYuf94oaol-k0eu34ftpx2ag', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/lik786ybdwcb1.png?width=108&crop=smart&auto=webp&s=13ad89a0a7fc856af80b70cd72e37b8b6e0309e6', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/lik786ybdwcb1.png?width=216&crop=smart&auto=webp&s=10a1e050675ee583e0eb17aa02c144814a319094', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/lik786ybdwcb1.png?width=320&crop=smart&auto=webp&s=35724604bc7d194b25378ac6264c8bf9ec47dbbc', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/lik786ybdwcb1.png?width=640&crop=smart&auto=webp&s=1afa0387fe9eb48c0d1ac8d94fa1091640246e7e', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/lik786ybdwcb1.png?width=960&crop=smart&auto=webp&s=d4f8a23e3fd9f9486861c2e37d17fa44f823be03', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/lik786ybdwcb1.png?width=1080&crop=smart&auto=webp&s=788dac7dded13f6b350063322fbf562b2dec25b2', 'width': 1080}], 'source': {'height': 3120, 'url': 'https://preview.redd.it/lik786ybdwcb1.png?auto=webp&s=84f4c16c5568f07d78f85d362067afee68201f39', 'width': 1440}, 'variants': {}}]} |
|||
[Project] Prompt-Promptor: An Autonomous Agent for Prompt Engineering | 1 | [removed] | 2023-07-19T10:39:59 | https://www.reddit.com/r/LocalLLaMA/comments/153rdt5/project_promptpromptor_an_autonomous_agent_for/ | pikhotan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153rdt5 | false | null | t3_153rdt5 | /r/LocalLLaMA/comments/153rdt5/project_promptpromptor_an_autonomous_agent_for/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'f44BHl2nQJnf6bmjzLkxPVABak256fYAeYIq3yJ_EdM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/h-EjDwvzDhHhX2KZHBb6hZH3lac65wrRg6nISbvIHUg.jpg?width=108&crop=smart&auto=webp&s=39d3e384f5a6fac942a3929bc75e9361383456c8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/h-EjDwvzDhHhX2KZHBb6hZH3lac65wrRg6nISbvIHUg.jpg?width=216&crop=smart&auto=webp&s=7f313e5a59c3b13e1f571a6b849ff714850f4f2f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/h-EjDwvzDhHhX2KZHBb6hZH3lac65wrRg6nISbvIHUg.jpg?width=320&crop=smart&auto=webp&s=084c538312d90a1f06c32e6739e4b1d642872483', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/h-EjDwvzDhHhX2KZHBb6hZH3lac65wrRg6nISbvIHUg.jpg?width=640&crop=smart&auto=webp&s=3c24609714864be1fe416146025d4e9a5af2ac97', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/h-EjDwvzDhHhX2KZHBb6hZH3lac65wrRg6nISbvIHUg.jpg?width=960&crop=smart&auto=webp&s=86e9db6441385dff49436ccbca554eae780b4d3c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/h-EjDwvzDhHhX2KZHBb6hZH3lac65wrRg6nISbvIHUg.jpg?width=1080&crop=smart&auto=webp&s=9970608318326e88876568addcac20fab303af6a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/h-EjDwvzDhHhX2KZHBb6hZH3lac65wrRg6nISbvIHUg.jpg?auto=webp&s=4b60b9f65ebce616fdd747ae72fe9fe6f9f5aecc', 'width': 1200}, 'variants': {}}]} |
Load Llama-2-7B in free Google colab | 1 | You can use this sharded model to load llama in free Google Colab | 2023-07-19T10:41:48 | https://huggingface.co/TinyPixel/Llama-2-7B-bf16-sharded | Sufficient_Run1518 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 153rf7m | false | null | t3_153rf7m | /r/LocalLLaMA/comments/153rf7m/load_llama27b_in_free_google_colab/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'DFB0CKsWaATPCjkOc-wCgWJ_AtBnQzoDZnNMtCN5fpc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2mtBuxTJISjt7idTgTFh26uRLvCBGKc3uE5VFdLCFxs.jpg?width=108&crop=smart&auto=webp&s=58cbec1aeb01090251cdc22a0a22d5396502d550', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2mtBuxTJISjt7idTgTFh26uRLvCBGKc3uE5VFdLCFxs.jpg?width=216&crop=smart&auto=webp&s=7f449eefd160a91070e0133d73aebcf25c26bfb2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2mtBuxTJISjt7idTgTFh26uRLvCBGKc3uE5VFdLCFxs.jpg?width=320&crop=smart&auto=webp&s=397cf694f336a0fea608da10372a0ca21a4efad4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2mtBuxTJISjt7idTgTFh26uRLvCBGKc3uE5VFdLCFxs.jpg?width=640&crop=smart&auto=webp&s=e833fdea736dbb513e99c82c906e31d393d4954e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2mtBuxTJISjt7idTgTFh26uRLvCBGKc3uE5VFdLCFxs.jpg?width=960&crop=smart&auto=webp&s=98554622fc2c0b6fe435d9845b38d6b51e9a6653', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2mtBuxTJISjt7idTgTFh26uRLvCBGKc3uE5VFdLCFxs.jpg?width=1080&crop=smart&auto=webp&s=0db1afc7a83c004522fdd47ba05fd6d59c8ea778', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2mtBuxTJISjt7idTgTFh26uRLvCBGKc3uE5VFdLCFxs.jpg?auto=webp&s=28c72ddf5930400828a76e794fe9d77a8c65e222', 'width': 1200}, 'variants': {}}]} |
|
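The point of the sharded upload above is that Colab's limited system RAM never has to hold the full checkpoint at once while loading. A rough loading sketch for the free T4 tier using 4-bit bitsandbytes; the repo name comes from the post, everything else is a common default.

```python
# Rough sketch: load the sharded Llama-2-7B repo in 4-bit on a free Colab T4.
# pip install transformers accelerate bitsandbytes
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "TinyPixel/Llama-2-7B-bf16-sharded"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```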
Where can I find explanations about the whole Machine Learning terminology? | 1 | Hi, I am a software development student who wants to focus on ML, but I get lost in all the terminology that is used.

Google doesn't help because I don't understand the concepts explained at all.

Can you please help me with resources about the concepts, terminology, etc. used around ML? | 2023-07-19T11:27:31 | https://www.reddit.com/r/LocalLLaMA/comments/153scfu/where_can_i_find_explanations_about_the_whole/ | Rubytux | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153scfu | false | null | t3_153scfu | /r/LocalLLaMA/comments/153scfu/where_can_i_find_explanations_about_the_whole/ | false | false | self | 1 | null
How to stop the model from generating the user's text? | 1 | I've been using GGML versions of multiple models and they all seem to have the same issue sometimes in chat: I'll have the stopping sequence for SillyTavern / koboldcpp set to \[my character's name:\], but sometimes the model will 'forget' to add the ':' at the end, meaning that it generates text for my character too.
Is there a specific prompt that anyone uses to stop / limit this or is there another way to solve it?
Or is this just a limitation of using smaller (13b) models? | 2023-07-19T11:36:30 | https://www.reddit.com/r/LocalLLaMA/comments/153sj85/how_to_stop_model_from_generating_the_users_text/ | throwaway201815 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153sj85 | false | null | t3_153sj85 | /r/LocalLLaMA/comments/153sj85/how_to_stop_model_from_generating_the_users_text/ | false | false | self | 1 | null |
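Besides tightening the prompt, a common belt-and-braces fix is to post-process the generation: cut the reply at the first place the model starts speaking as the user, whether or not it remembered the colon. A small generic sketch (the character name is a placeholder):

```python
# Trim a generated reply at the first sign the model starts speaking as the user,
# even when the model drops the trailing ':' from the name.
import re

def trim_reply(text: str, user_name: str = "Anon") -> str:
    # Cut everything from "\nAnon", "\nAnon:" or "\nAnon ..." onward.
    pattern = re.compile(rf"\n\s*{re.escape(user_name)}\b.*", re.DOTALL)
    return pattern.sub("", text).rstrip()

raw = "Sure, let's head to the market.\nAnon looks around and says hello"
print(trim_reply(raw, "Anon"))   # -> "Sure, let's head to the market."
```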
Question about RoPE scaling | 1 | HF Transformers just released v4.31.0 today and they added RoPE scaling to LLaMA, GPT-NeoX and Falcon (props to u/kaiokendev, u/bloc97 and u/emozilla).
Correct me if I'm wrong, but if I were to finetune a model with an original max token length of 2048 using RoPE scaling of 2.0, then I would have to construct my dataset so that examples are chunked at 4096 in order for it to learn positions 2049-4096. Is this right?
Another thought I had is to change max_position_embeddings in the config in order to have a final finetuned model with a longer sequence. For example, taking a pretrained model at 2048, modify and finetune to 8192 with linear scaling, then use dynamic scaling to extend 4x to 32768? | 2023-07-19T12:22:11 | https://www.reddit.com/r/LocalLLaMA/comments/153tiq5/question_about_rope_scaling/ | khacager | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153tiq5 | false | null | t3_153tiq5 | /r/LocalLLaMA/comments/153tiq5/question_about_rope_scaling/ | false | false | self | 1 | null |
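For reference, the new config knob in transformers 4.31 looks roughly like the sketch below; a linear factor of 2.0 on a 2048-token base gives 4096 usable positions, but (as the question notes) the model only learns to use positions beyond 2048 if the fine-tuning data actually contains sequences that long.

```python
# transformers >= 4.31: enable RoPE scaling when loading a LLaMA model.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    rope_scaling={"type": "linear", "factor": 2.0},   # or {"type": "dynamic", "factor": ...}
)
# The config keeps reporting the original max_position_embeddings; the scaling
# factor is what stretches the usable context (2048 * 2.0 = 4096 here).
```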
Redmond Puffin 13B Preview (Llama 2 finetune) | 1 | 2023-07-19T13:33:47 | https://twitter.com/Teknium1/status/1681556127656579075?t=IC9AszPKqtyFFAKItwD4Cg&s=19 | pedantic_pineapple | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 153v7rc | false | {'oembed': {'author_name': 'Teknium (e/λ)', 'author_url': 'https://twitter.com/Teknium1', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">We are releasing a preview model by <a href="https://twitter.com/NousResearch?ref_src=twsrc%5Etfw">@NousResearch</a>, led by <a href="https://twitter.com/Dogesator?ref_src=twsrc%5Etfw">@Dogesator</a> and <a href="https://twitter.com/JSupa15?ref_src=twsrc%5Etfw">@JSupa15</a>, trained by <a href="https://twitter.com/theemozilla?ref_src=twsrc%5Etfw">@theemozilla</a>. <br><br>Trained on ~3,000 quality gpt-4 multiturn conversations, in vicuna "Human: Assistant:" prompt format, based on work by OpenChat. <br><br>Download here: <a href="https://t.co/j9HNlDOMv9">https://t.co/j9HNlDOMv9</a></p>— Teknium (e/λ) (@Teknium1) <a href="https://twitter.com/Teknium1/status/1681556127656579075?ref_src=twsrc%5Etfw">July 19, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/Teknium1/status/1681556127656579075', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_153v7rc | /r/LocalLLaMA/comments/153v7rc/redmond_puffin_13b_preview_llama_2_finetune/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'UAnENdHTTgYb4HDDIt9YVZcTwLcG3jLWfK2dS4RwJyA', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ijT8cJhwEz61fHjgAq04_7ZK67i7AWKbvO2I1B_d7UQ.jpg?width=108&crop=smart&auto=webp&s=915d2d3bccb810446dee33995328cdfc63442130', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/ijT8cJhwEz61fHjgAq04_7ZK67i7AWKbvO2I1B_d7UQ.jpg?auto=webp&s=8f7cb1fd4a193a2ababc1f5ee56e9ae84e2a7eca', 'width': 140}, 'variants': {}}]} |
||
Help a beginner getting started | 1 |
I'm a beginner, and I'm going to start a project soon where I have to choose the most suitable model and do some fine-tuning to improve it on some tasks. So my questions are:

1. Given that my PC is not really powerful, will using a cloud platform completely solve the problem?

2. How can I ensure the security of my application?
3. If you have resources that can help me learn more please share them with me
Thanks | 2023-07-19T13:57:27 | https://www.reddit.com/r/LocalLLaMA/comments/153vst5/help_a_beginner_getting_started/ | CulturalChemical5640 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153vst5 | false | null | t3_153vst5 | /r/LocalLLaMA/comments/153vst5/help_a_beginner_getting_started/ | false | false | self | 1 | null |
Has anyone managed to get the Llama-2 70b model working? | 1 | Either through bitsandbytes 4-bit, or the GPTQ version? I have 2x 3090, and tried to get it going, but all I get is errors. | 2023-07-19T14:05:07 | https://www.reddit.com/r/LocalLLaMA/comments/153w04j/has_anyone_managed_to_get_the_llama2_70b_model/ | ptxtra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153w04j | false | null | t3_153w04j | /r/LocalLLaMA/comments/153w04j/has_anyone_managed_to_get_the_llama2_70b_model/ | false | false | self | 1 | null |
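For what it's worth, the loading pattern that usually works for 70B across two 24 GB cards is 4-bit bitsandbytes with `device_map="auto"` so accelerate shards the layers. A rough sketch, assuming up-to-date transformers/accelerate/bitsandbytes and access to the gated repo (outdated packages and missing repo access are the most common sources of the errors described):

```python
# Rough sketch: Llama-2-70B in 4-bit split across two 24 GB GPUs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-70b-chat-hf"
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb,
    device_map="auto",                       # let accelerate shard layers across both 3090s
    max_memory={0: "22GiB", 1: "22GiB"},     # leave headroom on each card
)
```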