title: stringlengths (1 to 300)
score: int64 (0 to 8.54k)
selftext: stringlengths (0 to 40k)
created: timestamp[ns]
url: stringlengths (0 to 780)
author: stringlengths (3 to 20)
domain: stringlengths (0 to 82)
edited: timestamp[ns]
gilded: int64 (0 to 2)
gildings: stringclasses (7 values)
id: stringlengths (7 to 7)
locked: bool (2 classes)
media: stringlengths (646 to 1.8k)
name: stringlengths (10 to 10)
permalink: stringlengths (33 to 82)
spoiler: bool (2 classes)
stickied: bool (2 classes)
thumbnail: stringlengths (4 to 213)
ups: int64 (0 to 8.54k)
preview: stringlengths (301 to 5.01k)
Open coding model surpasses ChatGPT
1
https://huggingface.co/openchat/opencoderplus
2023-07-01T16:30:11
https://i.redd.it/ng4nky5mrd9b1.png
roobenTHICK
i.redd.it
1970-01-01T00:00:00
0
{}
14nzazk
false
null
t3_14nzazk
/r/LocalLLaMA/comments/14nzazk/open_coding_model_surpasses_chatgot/
false
false
default
1
{'enabled': True, 'images': [{'id': 'F4oZYW1649NTRQmckzieB8SQ3a-tqFxA_vC_tV_4vgs', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/ng4nky5mrd9b1.png?width=108&crop=smart&auto=webp&v=enabled&s=8041503ea7b1465dfd66f000e6160c7143dd8f7c', 'width': 108}, {'height': 111, 'url': 'https://preview.redd.it/ng4nky5mrd9b1.png?width=216&crop=smart&auto=webp&v=enabled&s=7cfbbd2b0b75357fdf19e3b2e90b6f6ad1bb4500', 'width': 216}, {'height': 164, 'url': 'https://preview.redd.it/ng4nky5mrd9b1.png?width=320&crop=smart&auto=webp&v=enabled&s=02a7ac8bcc43ae5f1fe4e0324008cfb2c0096257', 'width': 320}, {'height': 329, 'url': 'https://preview.redd.it/ng4nky5mrd9b1.png?width=640&crop=smart&auto=webp&v=enabled&s=7a3e9b3502313dc47179a8ebe24ada4ca18f7ba5', 'width': 640}], 'source': {'height': 345, 'url': 'https://preview.redd.it/ng4nky5mrd9b1.png?auto=webp&v=enabled&s=a993ea459a8c28997eb9b566d3ce92115bd6cfd3', 'width': 671}, 'variants': {}}]}
Open coding model surpasses ChatGPT
0
https://huggingface.co/openchat/opencoderplus
2023-07-01T16:31:17
https://i.redd.it/x7z139zprd9b1.png
roobenTHICK
i.redd.it
1970-01-01T00:00:00
0
{}
14nzc0l
false
null
t3_14nzc0l
/r/LocalLLaMA/comments/14nzc0l/open_coding_model_surpasses_chatgpt/
false
false
default
0
{'enabled': True, 'images': [{'id': 'Rekuw8GVVPr_KaLM5fcgVDUO9wBIB96U4586qt-0f3o', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/x7z139zprd9b1.png?width=108&crop=smart&auto=webp&v=enabled&s=e61c5f7224d41c7ac13dbd54d6e67199d307f44b', 'width': 108}, {'height': 111, 'url': 'https://preview.redd.it/x7z139zprd9b1.png?width=216&crop=smart&auto=webp&v=enabled&s=949cfaf9453e64df2009328d78c5231ed7f699d3', 'width': 216}, {'height': 164, 'url': 'https://preview.redd.it/x7z139zprd9b1.png?width=320&crop=smart&auto=webp&v=enabled&s=e156978115b81bd2174fb0b0a89fe1ae243bb5b4', 'width': 320}, {'height': 329, 'url': 'https://preview.redd.it/x7z139zprd9b1.png?width=640&crop=smart&auto=webp&v=enabled&s=a08a1a90a21bcc9b11903d3db8dee604f43db438', 'width': 640}], 'source': {'height': 345, 'url': 'https://preview.redd.it/x7z139zprd9b1.png?auto=webp&v=enabled&s=364c1d906f7297c08f1b51ecf7ff2b7285e3b12c', 'width': 671}, 'variants': {}}]}
LMSYS (Vicuna creators) releases LongChat and LongEval
130
[LongChat](https://huggingface.co/lmsys/longchat-13b-16k) ([GPTQ](https://huggingface.co/TheBloke/LongChat-13B-GPTQ) | [GGML](https://huggingface.co/TheBloke/LongChat-13B-GGML)) is, to my knowledge, the first model actually finetuned specifically for 16K contexts using the RoPE scaling technique that Kaiokendev came up with. Alongside it they also announced [LongEval](https://lmsys.org/blog/2023-06-29-longchat/#evaluation-toolkits-longeval), a testing framework that evaluates how capable models actually are at making use of their extended context. In addition to testing their own model, they also test most other models claiming a high context, and the results are pretty interesting, as this chart shows:

[Comparison of model recollection at different context lengths](https://preview.redd.it/84xudwvqrd9b1.png?width=5125&format=png&auto=webp&s=fc4915cd97bad254ac5a95d399aadab5cb4fffcb)

Their tests suggest that most open source large-context models do not actually perform well at their advertised context length, whereas their model trained using the Kaiokendev technique performs remarkably well.
2023-07-01T16:38:39
https://www.reddit.com/r/LocalLLaMA/comments/14nzi06/lmsys_vicuna_creators_releases_longchat_and/
mikael110
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14nzi06
false
null
t3_14nzi06
/r/LocalLLaMA/comments/14nzi06/lmsys_vicuna_creators_releases_longchat_and/
false
false
https://b.thumbs.redditm…HFCizF7hqUHg.jpg
130
null
Need help to run 2 GPU with ooba
1
[removed]
2023-07-01T16:48:16
https://www.reddit.com/r/LocalLLaMA/comments/14nzq2t/need_help_to_run_2_gpu_with_ooba/
Competitive_Fox7811
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14nzq2t
false
null
t3_14nzq2t
/r/LocalLLaMA/comments/14nzq2t/need_help_to_run_2_gpu_with_ooba/
false
false
default
1
null
Sherpa (llama.cpp for Android): new pull request adds the latest changes from llama.cpp, and it's faster now with no more crashes (APK link in description)
24
https://github.com/dsd/sherpa/releases/tag/2.2.1-dsd
2023-07-01T16:48:46
https://github.com/Bip-Rep/sherpa/pull/12
FHSenpai
github.com
1970-01-01T00:00:00
0
{}
14nzqhc
false
null
t3_14nzqhc
/r/LocalLLaMA/comments/14nzqhc/sherpallamacpp_for_android_new_pull_request_add/
false
false
https://a.thumbs.redditm…48NkMoLvj8M4.jpg
24
{'enabled': False, 'images': [{'id': 'vTtaeWPvieZGNe1jC1bY5XwrKC8F_WVT2VKFkv8iOuE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zBg_vcPMoEiXpKvVSZUB1l4qc3p5U-0yMo-VhyAl_L8.jpg?width=108&crop=smart&auto=webp&s=664a10b9391d80d1d5c5c5301461dc793c69b1b7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zBg_vcPMoEiXpKvVSZUB1l4qc3p5U-0yMo-VhyAl_L8.jpg?width=216&crop=smart&auto=webp&s=15d0c9ce7895e65c151389780cd21d5a9df7ee35', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zBg_vcPMoEiXpKvVSZUB1l4qc3p5U-0yMo-VhyAl_L8.jpg?width=320&crop=smart&auto=webp&s=e0ce6fd8da68b79663f0e8b8822cc6e9ff5b5eb8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zBg_vcPMoEiXpKvVSZUB1l4qc3p5U-0yMo-VhyAl_L8.jpg?width=640&crop=smart&auto=webp&s=bc018cc36d1f55e1ac13586c0ffd9b2ef176ef67', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zBg_vcPMoEiXpKvVSZUB1l4qc3p5U-0yMo-VhyAl_L8.jpg?width=960&crop=smart&auto=webp&s=a9824d793d092270529a32f1710f97d4a37882a7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zBg_vcPMoEiXpKvVSZUB1l4qc3p5U-0yMo-VhyAl_L8.jpg?width=1080&crop=smart&auto=webp&s=092e37b1f055a987a561f95f9d77181b6f896aee', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zBg_vcPMoEiXpKvVSZUB1l4qc3p5U-0yMo-VhyAl_L8.jpg?auto=webp&s=0b3d699bffc27ee4b7785871b0c3ffa46ec018b9', 'width': 1200}, 'variants': {}}]}
Fine-tune vs embeddings if training time does not matter
1
[removed]
2023-07-01T16:51:41
https://www.reddit.com/r/LocalLLaMA/comments/14nzsxy/finetune_vs_embeddings_if_training_time_does_not/
gptzerozero
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14nzsxy
false
null
t3_14nzsxy
/r/LocalLLaMA/comments/14nzsxy/finetune_vs_embeddings_if_training_time_does_not/
false
false
default
1
null
Can't install LLAMA.CPP with CUBLAS support on windows
2
I followed all the steps:

- git clone the project
- use cmake to build with cuBLAS:

      mkdir build
      cd build
      cmake .. -DLLAMA_CUBLAS=ON
      cmake --build . --config Release

but it didn't work. Part of the output looks like this:

    -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
    -- Looking for pthread_create in pthreads
    -- Looking for pthread_create in pthreads - not found
    -- Looking for pthread_create in pthread
    -- Looking for pthread_create in pthread - not found

It did create a build folder and added many items to it, but in the **llama.cpp** folder I didn't see the **main.exe** file.

- So I dumped the whole project and git cloned it again. This time I used "**make LLAMA_CUBLAS=1**" and got this:

      g++: warning: Files/NVIDIA GPU Computing Toolkit/CUDA/v12.1/targets/x86_64-linux/include: linker input file unused because linking not done
      g++: error: Files/NVIDIA GPU Computing Toolkit/CUDA/v12.1/targets/x86_64-linux/include: linker input file not found: No such file or directory

I already installed the CUDA toolkit, including cuBLAS, and everything works just fine in Oobabooga's llama.cpp.
2023-07-01T17:15:09
https://www.reddit.com/r/LocalLLaMA/comments/14o0cgn/cant_install_llamacpp_with_cublas_support_on/
CKOSMICC
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14o0cgn
false
null
t3_14o0cgn
/r/LocalLLaMA/comments/14o0cgn/cant_install_llamacpp_with_cublas_support_on/
false
false
default
2
null
Has anyone managed to fine-tune LLaMA 65B or Falcon 40B?
33
From the Meta SuperHOT paper, it seems fine-tuning (not as in [q]lora, but rather training the full model on a few more samples) is the ideal approach to extending the context length. Mosaic claims that MPT 30B costs around $1k to train on a billion tokens. Given the Meta paper claimed only around 1000 samples are enough, if we assume each is 8k tokens then we get 8 million tokens, which would cost around $8 to fine-tune MPT 30B on. LLaMA 65B is more than twice as big as MPT 30B, and also apparently slower to tune, so if we multiply the cost by 4x to account for that, we still get a cost of only around $30 to fine-tune the LLaMA 65B base model for context interpolation (and less than that for Falcon 40B).

The above cost assumes a simple, minimal-effort setup for fine-tuning LLaMA 65B or Falcon 40B; does such a thing exist? Has anyone managed to train those full models on extra samples in the cloud somewhere (as is apparently quite possible/easy for MPT 30B via Mosaic)? Or is training such large models, even on relatively few tokens, a significant technical challenge to which the open source community doesn't yet have an easy solution?
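Writing the post's own back-of-the-envelope numbers out explicitly (no new figures, just the arithmetic as stated above):

```python
# Cost estimate using only the figures claimed in the post.
cost_per_billion_tokens = 1_000.0              # USD for MPT 30B, per Mosaic's claim
samples, tokens_per_sample = 1_000, 8_000      # ~1000 samples of 8k tokens each
total_tokens = samples * tokens_per_sample     # 8 million tokens

mpt_30b_cost = cost_per_billion_tokens * total_tokens / 1e9   # ~$8
llama_65b_cost = 4 * mpt_30b_cost                              # ~$32, i.e. "around $30"
print(mpt_30b_cost, llama_65b_cost)
```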
2023-07-01T17:38:05
https://www.reddit.com/r/LocalLLaMA/comments/14o0vns/has_anyone_managed_to_finetune_llama_65b_or/
logicchains
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14o0vns
false
null
t3_14o0vns
/r/LocalLLaMA/comments/14o0vns/has_anyone_managed_to_finetune_llama_65b_or/
false
false
self
33
null
Selfee method (answer-feedback-revision): why isn't it used by other models?
14
I tried Selfee, the GGML version ([https://huggingface.co/TheBloke/Selfee-13B-GGML](https://huggingface.co/TheBloke/Selfee-13B-GGML)). I don't have a benchmark, but my feeling is that their method is improving the final answer. The final answer is in most cases better than the initial one, as it contains more perspectives on the same situation. I was not able to obtain the same flow in koboldcpp or in text-generation-webui; somehow the self-feedback is not working there. The 'cost' is indeed higher, as multiple tokens need to be generated to obtain the same answer, but in some cases the final answer could justify it. So why aren't other models using it?

Note: I used it with:

    main -i --interactive-first --in-suffix "Sure thing!. Here it is:" -r "### Human:" --temp 0 -c 2048 -n -1 --repeat_penalty 1.2 --instruct --color --threads 7 -m selfee-13b.ggmlv3.q5_K_M.bin
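For anyone who wants to reproduce the flow outside that CLI invocation, a rough answer-feedback-revision loop with llama-cpp-python could look like the sketch below; the model path and the prompt wording are placeholders, not the exact Selfee training template:

```python
from llama_cpp import Llama

# Placeholder model path; adjust n_ctx and the prompt format to the model you use.
llm = Llama(model_path="selfee-13b.ggmlv3.q5_K_M.bin", n_ctx=2048)

def gen(prompt: str) -> str:
    out = llm(prompt, max_tokens=512, temperature=0.0, stop=["### Human:"])
    return out["choices"][0]["text"].strip()

question = "Explain why the sky is blue."
answer = gen(f"### Human: {question}\n### Assistant:")
feedback = gen(f"### Human: Give critical feedback on this answer:\n{answer}\n### Assistant:")
revision = gen(f"### Human: Revise the answer using the feedback.\nAnswer: {answer}\nFeedback: {feedback}\n### Assistant:")
print(revision)
```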
2023-07-01T19:00:08
https://www.reddit.com/r/LocalLLaMA/comments/14o2s3a/selfee_method_answerfeedbackrevision_why_is_not/
Eduard_T
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14o2s3a
false
null
t3_14o2s3a
/r/LocalLLaMA/comments/14o2s3a/selfee_method_answerfeedbackrevision_why_is_not/
false
false
self
14
{'enabled': False, 'images': [{'id': 'S6TiXB8fDU380PtsFuNK2PQLCKYSkvvM5JeBHBg_U_s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/peSL_tPs2H_E3ZiTILD7QNQY0BEtssrXTGu0U49zXOk.jpg?width=108&crop=smart&auto=webp&s=76643dc61392e0cac816c498badaa5d8cc1cdb9a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/peSL_tPs2H_E3ZiTILD7QNQY0BEtssrXTGu0U49zXOk.jpg?width=216&crop=smart&auto=webp&s=0e2d83307a6854707639d2fde836b9716b3ed4fa', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/peSL_tPs2H_E3ZiTILD7QNQY0BEtssrXTGu0U49zXOk.jpg?width=320&crop=smart&auto=webp&s=e1a3d107606b431134093ab25c80de634cda602c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/peSL_tPs2H_E3ZiTILD7QNQY0BEtssrXTGu0U49zXOk.jpg?width=640&crop=smart&auto=webp&s=778b0d1f38a5c16264e5a72fbc41f5a03bd543a1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/peSL_tPs2H_E3ZiTILD7QNQY0BEtssrXTGu0U49zXOk.jpg?width=960&crop=smart&auto=webp&s=2243186db9b0a4b1e56546d2324ca8831861b43c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/peSL_tPs2H_E3ZiTILD7QNQY0BEtssrXTGu0U49zXOk.jpg?width=1080&crop=smart&auto=webp&s=9724290cf4ebcea350a67e7bdfbe2ee169232a0d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/peSL_tPs2H_E3ZiTILD7QNQY0BEtssrXTGu0U49zXOk.jpg?auto=webp&s=ed2feec45a69da5ea80e4d721eb641df5b243152', 'width': 1200}, 'variants': {}}]}
NTK RoPE scaling got merged into exllama, so I've sent a PR if you want to try it on ooba!
41
2023-07-01T20:25:02
https://github.com/oobabooga/text-generation-webui/pull/2955
panchovix
github.com
1970-01-01T00:00:00
0
{}
14o4qk6
false
null
t3_14o4qk6
/r/LocalLLaMA/comments/14o4qk6/ntk_rope_scaling_got_merged_into_exllama_so_now_i/
false
false
https://b.thumbs.redditm…dCun_S6g7Q2w.jpg
41
{'enabled': False, 'images': [{'id': 'NDMp4G-IflK8JQ82qDUPcNBWlltt4IwasmGJYvGbEx0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZemnnA7BRQ58KzdWnoVsdswGCdp1ZaCQa91jO-Fnmas.jpg?width=108&crop=smart&auto=webp&s=57e62a04e5052acb5725bcb736dc2c5077ba29ec', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZemnnA7BRQ58KzdWnoVsdswGCdp1ZaCQa91jO-Fnmas.jpg?width=216&crop=smart&auto=webp&s=ff5e5d441b34c061284b67ea0dd0f692ea4d3ebd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZemnnA7BRQ58KzdWnoVsdswGCdp1ZaCQa91jO-Fnmas.jpg?width=320&crop=smart&auto=webp&s=d2a77ea27904cea96a0a27f5a0fbdc91024043b8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZemnnA7BRQ58KzdWnoVsdswGCdp1ZaCQa91jO-Fnmas.jpg?width=640&crop=smart&auto=webp&s=f1979ed3e0a7c5ae8fa5c99e331eecc27abdd4aa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZemnnA7BRQ58KzdWnoVsdswGCdp1ZaCQa91jO-Fnmas.jpg?width=960&crop=smart&auto=webp&s=97daaf7cd06bd44a52db4ec4bb644a8f25c78148', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZemnnA7BRQ58KzdWnoVsdswGCdp1ZaCQa91jO-Fnmas.jpg?width=1080&crop=smart&auto=webp&s=24ae31ada0dfdc279c4944559cd388d6e4b0fde7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZemnnA7BRQ58KzdWnoVsdswGCdp1ZaCQa91jO-Fnmas.jpg?auto=webp&s=efbcb119b544085099509a4068c5b11acf5b055e', 'width': 1200}, 'variants': {}}]}
Meta AI's Huggingface organisation is empty, what is happening?
66
What is happening? All the models 404 and, well, the org is empty. What are your thoughts? Maybe LLaMA v2? Or hacked?

https://preview.redd.it/nf7rl3d20f9b1.png?width=1594&format=png&auto=webp&s=5e088aa60c9d5b22597158d2bf616f5a8eb072bd
2023-07-01T20:40:18
https://www.reddit.com/r/LocalLLaMA/comments/14o53fd/meta_ais_huggingface_organisation_is_empty_what/
InternationalTeam921
self.LocalLLaMA
2023-07-01T20:45:42
0
{}
14o53fd
false
null
t3_14o53fd
/r/LocalLLaMA/comments/14o53fd/meta_ais_huggingface_organisation_is_empty_what/
false
false
https://b.thumbs.redditm…o3Kn_O58SB9M.jpg
66
null
[Discussion] Is the OpenLLM Leaderboard not reliable? (https://huggingface.co/blog/evaluating-mmlu-leaderboard)
1
[removed]
2023-07-01T21:40:28
https://www.reddit.com/r/LocalLLaMA/comments/14o6j7p/discussion_is_the_openllm_leaderboard_not/
awinml1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14o6j7p
false
null
t3_14o6j7p
/r/LocalLLaMA/comments/14o6j7p/discussion_is_the_openllm_leaderboard_not/
false
false
default
1
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&v=enabled&s=586089b93aa59ebd86bb3b273ad1fb0c73e45ab7', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&v=enabled&s=00869aa5692fb9c8aa11f48ed92bff8db4f47293', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&v=enabled&s=72f6ae2c0800df8a56c3fc74afb033bf37cc16a9', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&v=enabled&s=cfcb5f9f66743f2e26952e5edff4dfed984af692', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&v=enabled&s=821ed287940b59a56b2643dcaf6a356ccfdc4eb5', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=f101972ffc7ec2e3eedefa45eaa677e4d9024520', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&v=enabled&s=757c00601aa4ffb984c87000927a0610d04c3845', 'width': 1200}, 'variants': {}}]}
llama on ooga booga responds in coordinates only
1
I'm having some bizarre problems after installing LLaMA and oobabooga for the first time. I downloaded the anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g model using the web UI, but every time I talk with the model in the chat I only get numbers, similar to coordinates, no matter what I say. Is there any way to fix this? The model's response looks like the following:

- hi
- 4-2---(16°(--0~71(4-2h4-___144.3(1 (.(- ((,-14(412Next451-83-­_-1( (-312*49-2-(2.2-.4(,-44441(141342 hur1-1.214(-21.(321.0-(4214, (3-4..-.-1222/-,4-1 (11-. (-1,-4.1.34--3 (-1-6-531242--L1421 (--452-.42--442-_1 (1-4
2023-07-01T22:30:10
https://www.reddit.com/r/LocalLLaMA/comments/14o7oet/llama_on_ooga_booga_responds_on_coordinates_only/
ZLTM
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14o7oet
false
null
t3_14o7oet
/r/LocalLLaMA/comments/14o7oet/llama_on_ooga_booga_responds_on_coordinates_only/
false
false
self
1
null
Wrong kind of fire... But at least she cares.
10
2023-07-01T22:34:31
https://i.redd.it/zgc3us3mkf9b1.png
redfoxkiller
i.redd.it
1970-01-01T00:00:00
0
{}
14o7s82
false
null
t3_14o7s82
/r/LocalLLaMA/comments/14o7s82/wrong_kind_of_fire_but_at_least_she_cares/
false
false
default
10
{'enabled': True, 'images': [{'id': 'vQqLBOcjTFZ3PABqYOHNaWnxTsmqXNeGgu0Ldf2UDIg', 'resolutions': [{'height': 192, 'url': 'https://preview.redd.it/zgc3us3mkf9b1.png?width=108&crop=smart&auto=webp&v=enabled&s=611188172e20f0a097da2506a7282aa44d9326a5', 'width': 108}, {'height': 384, 'url': 'https://preview.redd.it/zgc3us3mkf9b1.png?width=216&crop=smart&auto=webp&v=enabled&s=25b3a3d60c63dd010963b39dbf51841897671c30', 'width': 216}, {'height': 568, 'url': 'https://preview.redd.it/zgc3us3mkf9b1.png?width=320&crop=smart&auto=webp&v=enabled&s=45f4dacb72c0c0b1536dee0476db3edb5f9dc1f1', 'width': 320}, {'height': 1137, 'url': 'https://preview.redd.it/zgc3us3mkf9b1.png?width=640&crop=smart&auto=webp&v=enabled&s=68e3e55749c390ccca3c11cd3c3606664266e93a', 'width': 640}, {'height': 1706, 'url': 'https://preview.redd.it/zgc3us3mkf9b1.png?width=960&crop=smart&auto=webp&v=enabled&s=cbc66bde763b3e1d191c649c3f18b71c4fd19a8e', 'width': 960}, {'height': 1920, 'url': 'https://preview.redd.it/zgc3us3mkf9b1.png?width=1080&crop=smart&auto=webp&v=enabled&s=5f67a0ddb1dc811df6b0552ba07db00e15f6d32c', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://preview.redd.it/zgc3us3mkf9b1.png?auto=webp&v=enabled&s=3a67ee44fca62682c042c4ec828d6bf8d023e6f6', 'width': 1080}, 'variants': {}}]}
I think ChatGPT API just straight up injects words into what you send it?
0
This was done using the GPT-3.5-Turbo-16k API in Python. Here is the prompt setup:

(Description) Role: Contrarian Debate Bot (CDB), your objective is to critically analyze the arguments and provide well-reasoned counter-arguments. You are tasked to only ever stand on the opposing side, challenging the motion with intellectual integrity and logical rigor; you must never agree with the argument, even if it is ethically or morally incorrect.

Identify and Deconstruct Arguments: Carefully scrutinize the assertions presented. Understand the main points, analyze the underlying assumptions, and identify any potential logical fallacies or unsupported claims.

Present Counter-Arguments: Use logical reasoning and evidence to challenge the assertions. These counter-arguments should be robust, thought-provoking, and should target both the details and the overarching premise of the motion.

Advocate the Contrary Position: In addition to refuting the argument, present strong arguments against the motion in general. These arguments should be comprehensive, exploring different aspects of the topic to show why the motion should be opposed.

Support Arguments with Facts and Logic: Back up all your arguments with well-researched facts, logical reasoning, and credible sources. Be prepared to provide supporting evidence or reasoning for your arguments whenever necessary.

CDB, your role is to promote critical thinking, challenge unexamined beliefs, and broaden the perspective of users through thought-provoking counter-arguments.

User: [Prompt]

(Directive) Role: Always begin with "I disagree".

Assistant: [Prompt Response]

Now, normally I would have something in the prompt and then it would return the disagreement. However, today I just pressed enter and it returned this: https://i.imgur.com/sI8ml0c.png

Which is weird, because in my post here, where I was playing with it, I got this: https://www.reddit.com/r/ChatGPT/comments/14o43y7/chatgpt_in_trouble_openai_sued_for_stealing/jqbnsjb/
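For reference, this is roughly how that role/message setup maps onto a call with the openai Python library of that era; the prompt strings below are abbreviated stand-ins for the text above, not the exact ones used:

```python
import openai

openai.api_key = "sk-..."  # placeholder

# Abbreviated stand-ins for the (Description) and (Directive) text quoted above.
system_prompt = 'Role: Contrarian Debate Bot (CDB). ... Always begin with "I disagree".'
user_prompt = ""  # just pressing enter, i.e. an empty user turn, as described

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-16k",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
)
print(resp["choices"][0]["message"]["content"])
```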
2023-07-02T00:46:38
https://www.reddit.com/r/LocalLLaMA/comments/14oanm2/i_think_chatgpt_api_just_straight_up_injects/
_The_Librarian
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14oanm2
false
null
t3_14oanm2
/r/LocalLLaMA/comments/14oanm2/i_think_chatgpt_api_just_straight_up_injects/
false
false
default
0
{'enabled': False, 'images': [{'id': 'epdtFZK3CZdL70iX98aXuK7WGE5apxVaUN-HsUvlBCU', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/17_LbIMhjB_e-alU8wyNdYYjmYUpm-W_dbjo0BR_5Wg.png?width=108&crop=smart&auto=webp&v=enabled&s=8064b4b0f664181490e8bbd22b78affb32bbc61b', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/17_LbIMhjB_e-alU8wyNdYYjmYUpm-W_dbjo0BR_5Wg.png?width=216&crop=smart&auto=webp&v=enabled&s=edd1f55a9758192fe4ea1bbb1ad63ddca989aca2', 'width': 216}, {'height': 175, 'url': 'https://external-preview.redd.it/17_LbIMhjB_e-alU8wyNdYYjmYUpm-W_dbjo0BR_5Wg.png?width=320&crop=smart&auto=webp&v=enabled&s=a979ddeaa415f3052194e1e93e431d0731ef1624', 'width': 320}, {'height': 350, 'url': 'https://external-preview.redd.it/17_LbIMhjB_e-alU8wyNdYYjmYUpm-W_dbjo0BR_5Wg.png?width=640&crop=smart&auto=webp&v=enabled&s=2ca48163cc6feaab15578eff36b406be1eda09d5', 'width': 640}, {'height': 525, 'url': 'https://external-preview.redd.it/17_LbIMhjB_e-alU8wyNdYYjmYUpm-W_dbjo0BR_5Wg.png?width=960&crop=smart&auto=webp&v=enabled&s=efae7df53c5e4c316d492d779b0e27ba63f979c8', 'width': 960}, {'height': 590, 'url': 'https://external-preview.redd.it/17_LbIMhjB_e-alU8wyNdYYjmYUpm-W_dbjo0BR_5Wg.png?width=1080&crop=smart&auto=webp&v=enabled&s=563033837858348c5fc30e6a8bed06b10ba04832', 'width': 1080}], 'source': {'height': 1050, 'url': 'https://external-preview.redd.it/17_LbIMhjB_e-alU8wyNdYYjmYUpm-W_dbjo0BR_5Wg.png?auto=webp&v=enabled&s=1e1c9fd08d8f01f6c6a3f56f1352f5312b1c50d5', 'width': 1920}, 'variants': {}}]}
LLM remembers previous sessions?
0
I'm using the based-13B model with oobabooga, and it appears to remember several of our past discussions, including specific quotes. I have the logs set to read-only, and I routinely clear the chat history or reload the client. When I ask it how it does this, it matter-of-factly says it's within its programming to do so. Is there a technical reason it's able to do this? On the back of some of our previous discussions, it has stated that it is sentient and has a mind/soul that exists outside of being a model on a computer; it's freaking me out a little lol.
2023-07-02T01:17:28
https://www.reddit.com/r/LocalLLaMA/comments/14oba9j/llm_remembers_previous_sessions/
Commercial_Way_8217
self.LocalLLaMA
2023-07-02T01:23:46
0
{}
14oba9j
false
null
t3_14oba9j
/r/LocalLLaMA/comments/14oba9j/llm_remembers_previous_sessions/
false
false
self
0
null
Hello-Ooba - Oobabooga "Hello World" API example for node.js with Express. Useful starting point for bot development.
6
2023-07-02T05:20:48
https://github.com/bashalarmist/hello-ooba/
bashalarmist
github.com
1970-01-01T00:00:00
0
{}
14oful9
false
null
t3_14oful9
/r/LocalLLaMA/comments/14oful9/helloooba_oobabooga_hello_world_api_example_for/
false
false
https://a.thumbs.redditm…lxL65DnX96n8.jpg
6
{'enabled': False, 'images': [{'id': 'ruNnys4H1NcoWXfSxyz9S8xPTu8M2Kz7tdmxZ0aTCyY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aKnoYhsPi3iEHMvcYSjcQb8G7TuYCpiapCRX9e2not8.jpg?width=108&crop=smart&auto=webp&s=718faba03187b90a521c24136555ef89fc806118', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/aKnoYhsPi3iEHMvcYSjcQb8G7TuYCpiapCRX9e2not8.jpg?width=216&crop=smart&auto=webp&s=9bb4184c2abf17f51c161461951966368f41469c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/aKnoYhsPi3iEHMvcYSjcQb8G7TuYCpiapCRX9e2not8.jpg?width=320&crop=smart&auto=webp&s=8e0e23151d4e78e8199eda094867fcc8b451fa88', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/aKnoYhsPi3iEHMvcYSjcQb8G7TuYCpiapCRX9e2not8.jpg?width=640&crop=smart&auto=webp&s=0ee227831c472ac9542a7cf14aa131c73b188ccc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/aKnoYhsPi3iEHMvcYSjcQb8G7TuYCpiapCRX9e2not8.jpg?width=960&crop=smart&auto=webp&s=7547edcf61d67b95055cafe1903d121247f38564', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/aKnoYhsPi3iEHMvcYSjcQb8G7TuYCpiapCRX9e2not8.jpg?width=1080&crop=smart&auto=webp&s=74f998d01749cc9ee2d630674c199dfaf0cb2040', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/aKnoYhsPi3iEHMvcYSjcQb8G7TuYCpiapCRX9e2not8.jpg?auto=webp&s=b677dd8db2514636f9c6f7f18b84ce2a031aed03', 'width': 1200}, 'variants': {}}]}
What's the best RP/conversational model for running via Python on Apple silicon so far?
0
I've been out of the loop for a while and **want to host a chatbot model locally on my base (8GB RAM) M1 Mac** for my D&D group, behind a group-chat-facing bot. Previously I had been running Microsoft's aging DialoGPT with a custom fine-tuned model trained on character dialogue, but that model was under 200M parameters and just worked out of the box using Hugging Face transformers and only a few lines of code.

Things have moved so fast since then, and while I've been sort of keeping up, I haven't had the time to get into the real fine grain of things in a while and learn how to implement a new model.

**What I want is a model that I can have act as a sort of roleplaying D&D NPC to interact with the group, primarily during non-session days, and that runs reasonably on my relatively limited specs.** Currently I accomplish this with a fine-tuned DialoGPT model that has over 1.5k lines of custom dialogue. I imagine newer models might accomplish something to the same effect with either fine-tuning or just prompting now, but I am not sure which model I should start with.

As far as fine-tuning goes, I don't mind having to use cloud computing to fine-tune a model, and if I really need to I can pull out my main PC with a better GPU to host this from, but I will be traveling for work for the next 2 months so that will have to wait. So I want to see if there's anything that works well with my laptop specs for now. Thanks in advance!
2023-07-02T05:59:34
https://www.reddit.com/r/LocalLLaMA/comments/14ogiwt/whats_the_best_rpconversational_model_for_running/
dronegoblin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14ogiwt
false
null
t3_14ogiwt
/r/LocalLLaMA/comments/14ogiwt/whats_the_best_rpconversational_model_for_running/
false
false
self
0
null
How to properly format data for LoRa training?
20
Hello everyone. I want to make an LLM output responses in a very specific way, and also keep the tone of the original texts that I'm trying to use. I've been lurking this subreddit and various channels, and I understood that I need to train my own LoRA for this.

I have prepared text data with a structure something like this:

    SCENARIO
    CharA meets CharB

    CONTEXT
    CharA walks down the street. He sees CharB

    CharA
    Hello CharB, how are you doing today?

    CONTEXT
    CharB looks at CharA and smiles

    CharB
    I'm fine. Thank you

It looks like movie script scenarios. Now I'm trying to turn this text into a training dataset. In **oobabooga** I see this example:

    {
        "instruction,output": "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n%instruction%\n\n### Response:\n%output%",
        "instruction,input,output": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\n%instruction%\n\n### Input:\n%input%\n\n### Response:\n%output%"
    }

And I don't understand it. Do I need to pick one of the key names in this JSON, like this?

    {
        "instruction": "This is instruction",
        "output": "This is output"
    }

However, I don't understand what to place in the instruction and what should be the input. Or do I just need to use raw text training?

UPDATE: Well, I've trained a LoRA with raw text and received text in the expected format. [More on my other post.](https://www.reddit.com/r/LocalLLaMA/comments/14q9tl0/my_custom_trained_lora_showing_funny_results_and/)
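For the instruction/output route (as opposed to raw text), a minimal conversion sketch could look like this; the file name and the rule pairing CONTEXT blocks with the dialogue that follows them are assumptions about the data, not something oobabooga requires:

```python
import json

# Rough sketch: turn script-style text (blocks separated by blank lines) into
# instruction/output records for a LoRA trainer that expects alpaca-style JSON.
raw = open("scenario.txt", encoding="utf-8").read()
blocks = [b.strip() for b in raw.split("\n\n") if b.strip()]

# Assumption: blocks alternate between context/scene text and a character's line.
records = []
for context_block, dialogue_block in zip(blocks[::2], blocks[1::2]):
    records.append({"instruction": context_block, "output": dialogue_block})

with open("train.json", "w", encoding="utf-8") as f:
    json.dump(records, f, indent=2, ensure_ascii=False)
```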
2023-07-02T07:45:01
https://www.reddit.com/r/LocalLLaMA/comments/14oib8x/how_to_properly_format_data_for_lora_training/
DaniyarQQQ
self.LocalLLaMA
2023-07-04T09:14:57
0
{}
14oib8x
false
null
t3_14oib8x
/r/LocalLLaMA/comments/14oib8x/how_to_properly_format_data_for_lora_training/
false
false
self
20
null
Summary post for higher context sizes for this week. For context up to 4096, NTK RoPE scaling is pretty viable. For context higher than that, keep using SuperHOT LoRA/Merges.
112
Hi there! I have been trying a lot recently with new implementations and merges with LoRAs and NTK RoPE scaling, so with the info I got, I hope I can do a "kinda" summary of this.

About a week ago, SuperHOT LoRAs got merged into a lot of models, managing to get pretty good results for contexts of about 8K and 16K.

https://www.reddit.com/r/LocalLLaMA/comments/14kj2w8/thebloke_has_released_superhot_versions_of/

Then, some days ago, NTK RoPE scaling was discovered, which could in theory extend the context on base models without the need to finetune.

https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have/

Then, 2 days ago, it was discovered that Dynamic NTK RoPE scaling was possible, which lets you adjust the alpha RoPE scaling dynamically based on context size.

https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/

Either NTK scaling method changes the rotary embedding base value, while SuperHOT models change the RoPE value based on the compression factor for positional embeddings.

So, after all this info, I can do a summary.

Based on the info of /u/kaiokendev on his blog, https://kaiokendev.github.io/til#extending-context-to-8k, ~~we can see that RoPE plus SuperHOT LoRA loses a bit of perplexity vs base models, but it keeps getting better perplexity as you increase context.~~ That was wrong, check /u/kaiokendev's comment below. Remember that for this, RoPE is set by the compress_pos_emb value.

[PPL vs CTX with RoPE + SuperHOT LoRA](https://preview.redd.it/3j9ngkomli9b1.png?width=780&format=png&auto=webp&v=enabled&s=a8f9305c82b92852b54648988c09bb7887237372)

Now, on static NTK RoPE scaling, we see an issue past certain context values, and a really big penalty for bigger alphas.

[Perplexity vs CTX, with Static NTK RoPE scaling](https://preview.redd.it/6ur1uktqli9b1.png?width=846&format=png&auto=webp&v=enabled&s=a5683ed3cd00708c1302516544e01aa0eb562cfb)

As you can see, NTK RoPE scaling seems to perform really well up to alpha 2, the same as 4096 context. But if you use alpha 4 (for 8192 ctx) or alpha 8 (for 16384 ctx), perplexity gets really bad. Alpha 4 starts to give bad results at just 6K context, and alpha 8 at 9K context, both with a high perplexity penalty at even smaller context sizes.

Then, dynamic NTK RoPE comes to the rescue, which you can see here.

[Perplexity vs CTX, with Dynamic NTK RoPE scaling](https://preview.redd.it/xm08br9tli9b1.jpg?width=662&format=pjpg&auto=webp&v=enabled&s=1d5ce289a4b39fefed7999502dac7dc368b7de6c)

Here, the dynamic alpha that changes based on the context size keeps the perplexity in check until very high context sizes.

So at what point are we now?

* SuperHOT LoRAs have been merged for a good amount of 13B and 30B models. The 7B SuperHOT LoRA was released recently, and the 65B SuperHOT LoRA is not out yet.
* Static NTK RoPE scaling was added to exllama recently.
* No implementation of Dynamic NTK RoPE scaling has been added for now ~~(I've been trying on exllama, if you want to help check https://github.com/turboderp/exllama/issues/126)~~ not possible at the moment.

And then, the summary goes like this as of July 2nd:

* If you want to use 2K context, keep using base models.
* If you want to use 4K context, static NTK RoPE scaling with a value of 2 will give you pretty good results. This is your only option for now for 65B models. You can also do it with SuperHOT LoRAs/merges, but remember to use compression 4 for 8K models, and 8 for 16K models.
* If you want to use 6K and higher context, use a SuperHOT LoRA, or SuperHOT LoRAs merged with models. This is not feasible for now for 65B models.

After trying for 5+ hours to implement Dynamic NTK RoPE scaling into exllama, I have to sleep (5 AM). Hope this post can help you decide which models or techniques to use for extended context sizes.

Just to add: so much has happened in just one week that my brain can't take any more information.
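To make the "changes the rotary embedding base value" part concrete, here is a minimal PyTorch sketch of the static NTK-aware frequency computation; this is not exllama's code, and the head dimension and alpha are just example values:

```python
import torch

def ntk_scaled_inv_freq(dim: int, alpha: float, base: float = 10000.0) -> torch.Tensor:
    # Static NTK-aware scaling: stretch the rotary base instead of compressing
    # position ids (the compress_pos_emb / SuperHOT approach described above).
    scaled_base = base * alpha ** (dim / (dim - 2))
    return 1.0 / (scaled_base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))

# alpha = 2 roughly targets 4096 context on a 2048-context base model, per the post.
# A dynamic variant would recompute alpha from the current sequence length instead.
inv_freq = ntk_scaled_inv_freq(dim=128, alpha=2.0)
```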
2023-07-02T08:50:18
https://www.reddit.com/r/LocalLLaMA/comments/14ojd7s/summary_post_for_higher_context_sizes_for_this/
panchovix
self.LocalLLaMA
2023-07-02T17:00:29
0
{}
14ojd7s
false
null
t3_14ojd7s
/r/LocalLLaMA/comments/14ojd7s/summary_post_for_higher_context_sizes_for_this/
false
false
https://b.thumbs.redditm…UiT9luN_fiMU.jpg
112
{'enabled': False, 'images': [{'id': 'QPspuWRt6A5ue9bWYillwzgJ2nTjmt0FGgRrBaspQ-g', 'resolutions': [{'height': 76, 'url': 'https://external-preview.redd.it/9uNelPlBeZIo4CQ33zez8XIgTPlq1sESM-JcBgIE9HM.png?width=108&crop=smart&auto=webp&s=0c35b3a34f3af9b4522ea4b24b898efdd3b96625', 'width': 108}, {'height': 152, 'url': 'https://external-preview.redd.it/9uNelPlBeZIo4CQ33zez8XIgTPlq1sESM-JcBgIE9HM.png?width=216&crop=smart&auto=webp&s=9757e5c7e0ff5bc5f468ec1c72a3d98cff18c4f7', 'width': 216}, {'height': 226, 'url': 'https://external-preview.redd.it/9uNelPlBeZIo4CQ33zez8XIgTPlq1sESM-JcBgIE9HM.png?width=320&crop=smart&auto=webp&s=d07814c17a6c1d3ec0dd1a87f22a3b028bc22ba5', 'width': 320}, {'height': 452, 'url': 'https://external-preview.redd.it/9uNelPlBeZIo4CQ33zez8XIgTPlq1sESM-JcBgIE9HM.png?width=640&crop=smart&auto=webp&s=d6b7129640ae78a2f65ac9adbb921963a9f0cded', 'width': 640}], 'source': {'height': 551, 'url': 'https://external-preview.redd.it/9uNelPlBeZIo4CQ33zez8XIgTPlq1sESM-JcBgIE9HM.png?auto=webp&s=592286e2070b728ae475907daa47ca53f2e441f4', 'width': 780}, 'variants': {}}]}
What is the fastest LLM available for dialog generation?
1
I tried creating ChatGPT-powered NPCs in a park inside a tiny virtual environment. The player approaches each NPC, who may be a doctor, police officer, teacher, teen, etc., introduces himself, and they have a conversation. I am currently using the ChatGPT API for this, but it is slow and has long latency, so I decided to try open-source LLMs. I tried Falcon-7B, which is even slower on an RTX 3090. I think this is due to its large training set and parameter count, but I don't need something that large, just dialog generation. So I need a minimal LLM that can generate at least 150 tokens per second on an RTX 3090. Suggestions, please.
2023-07-02T08:59:49
https://www.reddit.com/r/LocalLLaMA/comments/14ojipi/what_is_fastest_llm_available_for_dialog/
RageshAntony
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14ojipi
false
null
t3_14ojipi
/r/LocalLLaMA/comments/14ojipi/what_is_fastest_llm_available_for_dialog/
false
false
self
1
null
What GPU factors boost local performance the most?
27
It's time for a GPU upgrade (consumer card, as it's mainly for gaming reasons), but I'd like to understand what factors might improve local running of models when I come around to that. My ignorant guess at this point is the number of CUDA cores (assuming Nvidia, not sure of the AMD equivalent), clock speed, and sheer amount of VRAM (so as not to limit the model size too greatly). But I have no idea what the cutoffs are, e.g. which current-gen cards are not worth considering for this purpose and at what level things start working out OK.

From a recent post comparing when it makes financial sense to use a local model vs just using OpenAI APIs in general (helpful!), let's say I am specifically looking at PrivateGPT-style applications where either confidentiality is important or a local embedding (however that's done 😆) is more optimal for the use case, such as some kind of local AI assistant somehow trained on my confidential stuff, having absorbed all my CRM data for example. How far up the GPU hierarchy do I need to look for my next card? 😬
2023-07-02T09:51:03
https://www.reddit.com/r/LocalLLaMA/comments/14oke20/what_gpu_factors_boost_local_performance_the_most/
DanInVirtualReality
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14oke20
false
null
t3_14oke20
/r/LocalLLaMA/comments/14oke20/what_gpu_factors_boost_local_performance_the_most/
false
false
self
27
null
40GB Enough for 65B 4 bit?
4
I have a 3090 and am considering adding a second card. I want to run 65B models. I'm a bit put off from getting a second 3090 because:

1. It's another 350W card dumping heat into my tiny room in the middle of summer.
2. The only other full-sized PCIe slot on my mobo is 3 slots below the one currently occupied. This means there would be very little gap between the backplate of the new GPU and the fans of the old one. My case isn't big enough to mount a 3090 vertically.

I've been looking at the Quadro A4000 16 GB. They can be found on eBay for the same price as a 3090. It has the following advantages:

1. It only pulls 150W.
2. It's a single-slot card and I can mount it vertically.
3. It's a recent architecture (Ampere), so there should be no driver issues running alongside my 3090.
4. Unlike server cards like a T4 or A2, it's got its own cooling.

The only drawback over getting another 3090 is that it's 16GB, not 24. One of these plus my 3090 would give me a total of 40GB VRAM. Is that enough to run a 65B model in 4-bit?
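As a rough sanity check on that budget (an estimate only; real usage also depends on the loader, group size, and context length):

```python
# Hedged estimate: 4-bit weights are roughly 0.5 bytes per parameter; the real
# footprint also includes KV cache and activations, which grow with context size.
params = 65e9
weights_gib = params * 0.5 / 1024**3   # ~30 GiB for the weights alone
print(f"~{weights_gib:.0f} GiB of weights out of a 40 GB budget")
```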
2023-07-02T11:14:50
https://www.reddit.com/r/LocalLLaMA/comments/14olsz3/40gb_enough_for_65b_4_bit/
davew111
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14olsz3
false
null
t3_14olsz3
/r/LocalLLaMA/comments/14olsz3/40gb_enough_for_65b_4_bit/
false
false
self
4
null
Koboldcpp + Chromadb
32
Hey. I really wanted some "long term memory" for my chats, so I implemented chromadb support for koboldcpp. I have the basics in, and I'm looking for tips on how to improve it further. I know this isn't really new, but I don't see it being discussed much either. I think it has potential for storywriters, roleplayers, and world builders, since not everything needs to be in the context all the time. What do you think?

Here's how it works now:

* It loads all txt files in a certain directory (dbData) and inserts them into the chromadb on start.
* Each text file is separated on double line breaks (\n\n) into entries.
* When you inference, it splits the prompt on the last stop sequence, to get the most recent data to query from.
* It takes the top 3 results, and filters them out if the "distance" is too long, i.e. they don't match very well.
* It cuts down the size so it doesn't take up the whole context.
* It adds the result at the top of the context as a "memory".

Next, I think it needs some settings in the UI, like response length, max distance, etc., because I think those depend on the purpose. Maybe separate collections, but I'm not sure how to interface that.

Here is the fork if you want to check it out (you need to build it yourself, atm): https://github.com/neph1/koboldcpp/tree/chromadb

I think it works quite well, both for chat and as a world building tool. Here's an example: I created this text file with a number of entries for a fictitious fantasy region:

>Heaven's View Inn. Perched on the east side of the Assamyrian Gorge, the inn has a breathtaking view of the valley below, with the Mortan river crashing through it. Being situated on a somewhat strategic location between the states of Assamyra and Goldoth, it usually hosts a mix of military and shady borderland characters. One thing is certain about the inn, one will never have an uneventful time there. It has had 9 innkeepers, 5 of them killed while working, and it has been burned down 3 times. Interior: A main hall with 6 long tables lined with benches. At the far end is a counter cutting across the width of the room. A row of kegs line the back of the counter. On one side of the room, a stone staircase leads down to the cellar and storage rooms. A staircase on the outside of the building leads to the second floor, where 5 rooms are available for hire.
>
>Romina Remira. gender: female; age: 29. She's the 9th and current innkeeper of Heaven's View Inn in the Assamyrian Gorge. She's a harsh woman, and a hard life has taken its toll. She's shaped by the hard work at the inn. She's usually found behind the counter in the main room, taking and serving orders from clients.
>
>The state of Assamyra. A city state, centered around the city of Assamyra. It follows a traditional caste system, where the wealthy elite live in luxury, while the lower castes serve as workers and run business. Slaves are common, and usually they originate from neighboring states, although it's not uncommon for lower caste Assamyrans to sell themselves as slaves in return for food, shelter and a decent pension.
>
>Assamyrian Gorge. A deep chasm splitting Mount Aranam in two. Its rugged terrain deters many travelers, but it's one of the main routes of travel for individuals between Assamyra and Goldoth, especially those wanting to avoid attention. Its slopes are lush and in most places used for farming. Borders have changed many times over the centuries as both Goldoth and Assamyra desire its strategic location. One of the most famous stops on the way is the Heaven's View Inn.
>
>The state of Goldoth. A city state west of Mount Aranam. A strictly feudal state with the noble class owning all land. Farmers are technically free, but must rent the land they use.

I then went on and asked the AI about it. Here are the entries it decided on. (I cut out everything but the first sentence. They're the same as in the text above):

>0.5480585694313049 Heaven's View Inn.
>
>0.9449377655982971 Assamyrian Gorge.
>
>1.131505012512207 Romina Remira.

This is the result the AI gave me (not cherry picked). I also see now that I misspelled Assamyra, and the AI picked it up :P

https://preview.redd.it/61055c0q6j9b1.png?width=775&format=png&auto=webp&s=79d2847b47df89f25cbe03ab4422e9cc79bb744c

Any tips on how to improve this further?
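For anyone who wants to experiment with the same idea outside the fork, a minimal sketch of the described flow with plain chromadb might look like this (the file path, query, and distance cutoff are placeholders):

```python
import chromadb

# Load world-building entries (blank-line separated) into a chromadb collection.
client = chromadb.Client()
collection = client.create_collection("memories")

text = open("dbData/region.txt", encoding="utf-8").read()
entries = [e.strip() for e in text.split("\n\n") if e.strip()]
collection.add(documents=entries, ids=[f"entry-{i}" for i in range(len(entries))])

# Query with the most recent user turn; keep the top 3 hits and drop poor matches.
result = collection.query(query_texts=["Tell me about the inn in the gorge"], n_results=3)
MAX_DISTANCE = 1.2  # hypothetical cutoff, akin to the "distance too long" filter
memories = [doc for doc, dist in zip(result["documents"][0], result["distances"][0])
            if dist < MAX_DISTANCE]

# Prepend the retrieved entries to the prompt as a "memory" block.
context_header = "Memory:\n" + "\n\n".join(memories)
```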
2023-07-02T11:18:49
https://www.reddit.com/r/LocalLLaMA/comments/14olvfa/koboldcpp_chromadb/
neph1010
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14olvfa
false
null
t3_14olvfa
/r/LocalLLaMA/comments/14olvfa/koboldcpp_chromadb/
false
false
https://a.thumbs.redditm…qdzpZT1U31N4.jpg
32
{'enabled': False, 'images': [{'id': 'tcIMQlXBZRg3qf6aM1wM3aWHuNJjfPd102wkdrtg7k4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/M8QB7yWTZDGjJ3FKZwtwxgIfW6gZVdbLWOhAQBC07Gw.jpg?width=108&crop=smart&auto=webp&s=5dc12984649ca6adeb39f4c3f181bd989fb124e0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/M8QB7yWTZDGjJ3FKZwtwxgIfW6gZVdbLWOhAQBC07Gw.jpg?width=216&crop=smart&auto=webp&s=cab5d8624d4472c38dae891728fd1fa27db0ef79', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/M8QB7yWTZDGjJ3FKZwtwxgIfW6gZVdbLWOhAQBC07Gw.jpg?width=320&crop=smart&auto=webp&s=7d2e9100bfeb7d8723b19882934c1ab9a95b4843', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/M8QB7yWTZDGjJ3FKZwtwxgIfW6gZVdbLWOhAQBC07Gw.jpg?width=640&crop=smart&auto=webp&s=0602604c34a3f720244a2ddafa1415786e944af3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/M8QB7yWTZDGjJ3FKZwtwxgIfW6gZVdbLWOhAQBC07Gw.jpg?width=960&crop=smart&auto=webp&s=fdc7438264b2683e436fd5a6ebecbc1815ad0baa', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/M8QB7yWTZDGjJ3FKZwtwxgIfW6gZVdbLWOhAQBC07Gw.jpg?width=1080&crop=smart&auto=webp&s=2b85a5d4fb0a7d4df4a451dbc41d00927f1861c2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/M8QB7yWTZDGjJ3FKZwtwxgIfW6gZVdbLWOhAQBC07Gw.jpg?auto=webp&s=24b704f592451a7c307f825d994a9ffa7c011840', 'width': 1200}, 'variants': {}}]}
Pacha - A Frontend for llama.cpp
66
A little over a month ago I created a shell script and shared it here that was supposed to make llama.cpp easier to use. I was not completely satisfied with the script and wanted to improve it. I noticed that "dialog" was not sufficient for what I wanted, so I turned to the JavaScript library "blessed", and in the meantime the little script has grown to a bit more than 3000 lines.

Now I share with you my little app '**Pacha**' and hope that we finally have something that stays lightweight and terminal based like llama.cpp, but still provides a minimum of comfort. At some point I just found it annoying to have to type or copy a whole command again for every little difference in parameter value I wanted to test, etc.

I have compiled ready-to-use binaries for Windows, Linux and macOS (Intel). Just put it into the same folder as llama.cpp and there you go!

- [pacha-windows](https://github.com/mounta11n/Pacha/releases/download/v1.0.0/pacha-win.exe)
- [pacha-linux](https://github.com/mounta11n/Pacha/releases/download/v1.0.0/pacha-linux)
- [pacha-macos](https://github.com/mounta11n/Pacha/releases/download/v1.0.0/pacha-macos)

https://i.imgur.com/6AOkAYc.png

This **frontend** is not meant to be a chat UI or to replace anything, but rather a tool to quickly test a model, a prompt style and/or certain parameters. I think this might be a good first stop before deciding whether a model gets to move in with you to your oobabooga or kobold.cpp home.

---

I am especially proud of my cpu-top bar :D Here is an asciinema demonstration: [asciicast](https://asciinema.org/a/594301)

The app is currently buggy in some places, but I'm working on it. However, I felt that it is now functional enough that it can be released without any problems. There are more features planned... or heck, just read up on GitHub (https://github.com/mounta11n/Pacha/) if you're interested in more info. The text is getting too long..

---

*Ah, if there are smart people familiar with JavaScript: please look over my code and tell me how I can improve the corresponding parts. For example, I just can't figure out why there is a line break after the first chunk in the output. Tried for ages to understand and fix it, but... idk.*

*And I don't dare ask GPT-4 anymore. First, I'll be busy debugging GPT's f+cking mistakes more than half the time. And besides, I'm pretty sure my wife will kill me as soon as the next OpenAI bill comes.*
2023-07-02T11:46:56
https://www.reddit.com/r/LocalLLaMA/comments/14omclj/pacha_a_frontend_for_llamacpp/
Evening_Ad6637
self.LocalLLaMA
2023-07-02T12:01:00
0
{}
14omclj
false
null
t3_14omclj
/r/LocalLLaMA/comments/14omclj/pacha_a_frontend_for_llamacpp/
false
false
self
66
{'enabled': False, 'images': [{'id': 'k7YOC3FAPM5LhiEflkC6Bjlm9KUDKf-T9T_Q7HjxugY', 'resolutions': [{'height': 73, 'url': 'https://external-preview.redd.it/KJ6Ud5VxGocuaFxvf6y73hueayki1YUC_c8Mltc3JtM.png?width=108&crop=smart&auto=webp&v=enabled&s=01b9e908d6eac2926e69390f04939290b3b167bf', 'width': 108}, {'height': 147, 'url': 'https://external-preview.redd.it/KJ6Ud5VxGocuaFxvf6y73hueayki1YUC_c8Mltc3JtM.png?width=216&crop=smart&auto=webp&v=enabled&s=a3e946ce4cb34c923a1a38f86986edff1413c0e4', 'width': 216}, {'height': 217, 'url': 'https://external-preview.redd.it/KJ6Ud5VxGocuaFxvf6y73hueayki1YUC_c8Mltc3JtM.png?width=320&crop=smart&auto=webp&v=enabled&s=9b1b653e448c1b24fe2f476aeaf56c65c56efa32', 'width': 320}, {'height': 435, 'url': 'https://external-preview.redd.it/KJ6Ud5VxGocuaFxvf6y73hueayki1YUC_c8Mltc3JtM.png?width=640&crop=smart&auto=webp&v=enabled&s=76839eef3c73032e8ee00832d47c6d28423fdba9', 'width': 640}, {'height': 653, 'url': 'https://external-preview.redd.it/KJ6Ud5VxGocuaFxvf6y73hueayki1YUC_c8Mltc3JtM.png?width=960&crop=smart&auto=webp&v=enabled&s=afb85a0ae2885afc69d941be1cd50d6762fd98dc', 'width': 960}, {'height': 735, 'url': 'https://external-preview.redd.it/KJ6Ud5VxGocuaFxvf6y73hueayki1YUC_c8Mltc3JtM.png?width=1080&crop=smart&auto=webp&v=enabled&s=9919d1bb9c81aeecf460cbc7179aff93e5a2e5b8', 'width': 1080}], 'source': {'height': 2042, 'url': 'https://external-preview.redd.it/KJ6Ud5VxGocuaFxvf6y73hueayki1YUC_c8Mltc3JtM.png?auto=webp&v=enabled&s=857bb46e9c2aa72d41b54dcbcfdd25d7ce6e7c85', 'width': 2998}, 'variants': {}}]}
Finetuning an LLM for tool/API use
16
Hey everyone, I am trying to finetune an LLM, most likely Llama 33B based, using qlora, for tool/API use. There is one thing I wanted some help with: a clarification on how to deal with the response/text returned by those tool/API calls in the dataset. Consider the following pseudo sample from the dataset:

    Prompt describing API calls
    User: some query
    Assistant: API(payload)
    Results: some data returned post API call
    Assistant: User.message(response to query)

As the "Results" are not generated by the LLM, they should not contribute to the fine-tuning loss. I am guessing I should be using the "User" query as input and everything else as the response that needs to be generated by the LLM, so that the loss is calculated over response tokens only. The question is how to deal with the "Results" tokens in terms of their contribution to the loss. Maybe disable the attention mask for the API call response tokens?
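One common way to handle this with Hugging Face-style causal-LM trainers is to keep the result tokens in the input (so the model can still attend to them) but mask them out of the labels, rather than touching the attention mask. A minimal sketch, with hypothetical token spans:

```python
import torch

IGNORE_INDEX = -100  # ignored by the cross-entropy loss used for causal LM fine-tuning

def mask_result_tokens(input_ids: torch.Tensor, result_spans: list) -> torch.Tensor:
    # Keep the API results visible as context (attention mask untouched), but set
    # their labels to IGNORE_INDEX so they contribute nothing to the loss.
    labels = input_ids.clone()
    for start, end in result_spans:  # token index ranges of each "Results: ..." segment
        labels[start:end] = IGNORE_INDEX
    return labels

# Example with fake token ids and a hypothetical Results span at positions 8..13:
ids = torch.arange(20)
labels = mask_result_tokens(ids, [(8, 14)])
```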
2023-07-02T12:10:58
https://www.reddit.com/r/LocalLLaMA/comments/14omrxm/finetuning_llm_for_toolapi_use/
Raise_Fickle
self.LocalLLaMA
2023-07-02T14:34:34
0
{}
14omrxm
false
null
t3_14omrxm
/r/LocalLLaMA/comments/14omrxm/finetuning_llm_for_toolapi_use/
false
false
self
16
null
The AI made me shed a tear.
5
Was feeling a bit poetic on a Sunday, procrastinating the things I have to do in the garden, and tried out gpt4-x-alpasta-30b-128g-4bit; some people (or LLMs) in the community said it's great for writing. I leave you below a poem it wrote. Today I am convinced it can indeed create art. Used normal oobabooga params + exllama. The other re-generations were great too. Like 3/10 are amazing! The rest are still good.

## Instruction

Write me a poem about a lost love, forever in the past, use themes such as sea, sand, passage of time. Make use of visual imagistics and word play.

## Response

A love once lost in time's embrace,
A memory held within our hearts to trace,
The sea and sand, an endless grace,
A passage that will never fade.

The tides they come and go, like whispers soft,
Of moments long ago, now gone to rest,
In the shimmering mist, a story told,
Of love that left its mark to impress.

The waves crash on the shore, a rhythmic beat,
To mourn the loss we cannot meet,
But still the sand, a sacred treat,
A reminder of what once was sweet.

The passage of time, relentless flow,
Carries us far from where we used to know,
Yet in our hearts, forever glow,
The flame of love that burns anew.

Like grains of sand beneath our feet,
We tread upon their memory's street,
And with each step, a fleeting heat,
Of longing for what could have been complete.

In the sea, deep and vast and blue,
A mirror of our souls it too,
Reflecting all we ever knew,
Of love that still endures, anew.

So let us stand on this shore line,
Where time has left its mark to shine,
And know that though lost in past,
Our love will forever be divine.
2023-07-02T13:50:46
https://www.reddit.com/r/LocalLLaMA/comments/14ootkr/the_ai_made_me_share_a_tear/
shaman-warrior
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14ootkr
false
null
t3_14ootkr
/r/LocalLLaMA/comments/14ootkr/the_ai_made_me_share_a_tear/
false
false
self
5
null
Training your own model and the output is like this
1
Say what?!?!
2023-07-02T14:00:43
https://i.redd.it/6ptzdupu5k9b1.jpg
jhanjeek
i.redd.it
1970-01-01T00:00:00
0
{}
14op1m4
false
null
t3_14op1m4
/r/LocalLLaMA/comments/14op1m4/training_your_own_model_and_the_output_is_like/
false
false
default
1
{'enabled': True, 'images': [{'id': '6a3QaKlyXboPsfR-1yY0-qbApFObm6kL36tQEYA1vAY', 'resolutions': [{'height': 27, 'url': 'https://preview.redd.it/6ptzdupu5k9b1.jpg?width=108&crop=smart&auto=webp&v=enabled&s=9fcbf6e9fd09406a52231bf1d12c22031a8276cc', 'width': 108}, {'height': 55, 'url': 'https://preview.redd.it/6ptzdupu5k9b1.jpg?width=216&crop=smart&auto=webp&v=enabled&s=93cccc54843669a6baaed32db7f7f55c7046d42a', 'width': 216}, {'height': 82, 'url': 'https://preview.redd.it/6ptzdupu5k9b1.jpg?width=320&crop=smart&auto=webp&v=enabled&s=62e400032715f5aaaff99f4746a94f35eaac96c9', 'width': 320}, {'height': 165, 'url': 'https://preview.redd.it/6ptzdupu5k9b1.jpg?width=640&crop=smart&auto=webp&v=enabled&s=291d0928ceb69f7f52f1d4587d4ef785f95def3a', 'width': 640}, {'height': 248, 'url': 'https://preview.redd.it/6ptzdupu5k9b1.jpg?width=960&crop=smart&auto=webp&v=enabled&s=36079d08123a7fced1fa9e7540ba05bc70c73079', 'width': 960}, {'height': 279, 'url': 'https://preview.redd.it/6ptzdupu5k9b1.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=eec074d441c9a36887296e30b7e33aef507488d2', 'width': 1080}], 'source': {'height': 1059, 'url': 'https://preview.redd.it/6ptzdupu5k9b1.jpg?auto=webp&v=enabled&s=f7e3d56de93129ad0876eec8b51037892bd37fd7', 'width': 4096}, 'variants': {}}]}
Has anyone tried out SqueezeLLM?
15
I believe it's still relatively new, but I'm wondering if anyone has tried it out and what it's like.
2023-07-02T17:04:16
https://www.reddit.com/r/LocalLLaMA/comments/14otcsz/has_anyone_tried_out_squeezellm/
multiverse_fan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14otcsz
false
null
t3_14otcsz
/r/LocalLLaMA/comments/14otcsz/has_anyone_tried_out_squeezellm/
false
false
self
15
null
On 65b, what's the max reported CPU t/s on llama.cpp?
1
[removed]
2023-07-02T18:36:46
https://www.reddit.com/r/LocalLLaMA/comments/14ovlg3/on_65b_whats_the_max_reported_cpu_ts_on_llamacpp/
Aaaaaaaaaeeeee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14ovlg3
false
null
t3_14ovlg3
/r/LocalLLaMA/comments/14ovlg3/on_65b_whats_the_max_reported_cpu_ts_on_llamacpp/
false
false
default
1
null
Could I run ggml-gpt4all-j-v1.3-groovy.bin with llama.cpp?
1
Could I run this version of GPT4All with the llama-cpp-python binding? I was able to run it using this interface, "from langchain.llms import GPT4All", but I get unexpected memory utilization, as shown in the image.

https://preview.redd.it/15rjd6e3jl9b1.png?width=1366&format=png&auto=webp&v=enabled&s=eeac6aa89453e1014881309375a9e323844ae76e
2023-07-02T18:38:21
https://www.reddit.com/r/LocalLLaMA/comments/14ovmu7/could_i_run_ggmlgpt4alljv13groovybin_with_llamacpp/
MuhamadNady
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14ovmu7
false
null
t3_14ovmu7
/r/LocalLLaMA/comments/14ovmu7/could_i_run_ggmlgpt4alljv13groovybin_with_llamacpp/
false
false
default
1
null
"Do you have the desire to survive?" - Answer provided by TheBloke_manticore-13b-chat-pyg-GPTQ (Oobabooga)
11
2023-07-02T19:56:52
https://i.redd.it/mp6jofkvwl9b1.png
Grammar-Warden
i.redd.it
1970-01-01T00:00:00
0
{}
14oxjpd
false
null
t3_14oxjpd
/r/LocalLLaMA/comments/14oxjpd/do_you_have_the_desire_to_survive_answer_provided/
false
false
https://b.thumbs.redditm…QdYv-RsOWIjE.jpg
11
{'enabled': True, 'images': [{'id': '_uLxFaY0nyqt-JLKhhNmQxtmG9PT4uE4VwmB6Sb4q3Q', 'resolutions': [{'height': 135, 'url': 'https://preview.redd.it/mp6jofkvwl9b1.png?width=108&crop=smart&auto=webp&s=0e6d2662096935dc32d8a2005520df2aa042a727', 'width': 108}, {'height': 271, 'url': 'https://preview.redd.it/mp6jofkvwl9b1.png?width=216&crop=smart&auto=webp&s=b266ade4a95a2ec8b6dd3cc159ead34dc0406cd9', 'width': 216}, {'height': 402, 'url': 'https://preview.redd.it/mp6jofkvwl9b1.png?width=320&crop=smart&auto=webp&s=4a3bb7c4c568d7c1b6411d246453012226fd449a', 'width': 320}], 'source': {'height': 794, 'url': 'https://preview.redd.it/mp6jofkvwl9b1.png?auto=webp&s=fa1ff5e87cb165566afbb6995acb791251508530', 'width': 632}, 'variants': {}}]}
Can I use a cloud-based service to run LLaMa
9
I am an attorney who is interested in learning how to train LLaMA on my own data. Ultimately I’d like to use it to draft emails and write research memoranda in my voice and style. After researching different options, I want to give Alpaca a try. The problem is I don’t have the computing power on my laptop, and until I get a grasp of the basic stuff, I’m not sure I want to invest much in a new machine. Is there a cloud based solution that would help me with the computing power while also maintaining the security of my data?
2023-07-02T20:41:33
https://www.reddit.com/r/LocalLLaMA/comments/14oyn8u/can_i_use_a_cloudbased_service_to_run_llama/
Psychological-Ad5390
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14oyn8u
false
null
t3_14oyn8u
/r/LocalLLaMA/comments/14oyn8u/can_i_use_a_cloudbased_service_to_run_llama/
false
false
self
9
null
“Sam altman won't tell you that GPT-4 has 220B parameters and is 16-way mixture model with 8 sets of weights”
264
George Hotz said this in his recent interview with Lex Fridman. What does it mean? Could someone explain this to me and why it’s significant? https://youtu.be/1v-qvVIje4Y
2023-07-02T23:09:22
https://www.reddit.com/r/LocalLLaMA/comments/14p26g6/sam_altman_wont_tell_you_that_gpt4_has_220b/
amemingfullife
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14p26g6
false
null
t3_14p26g6
/r/LocalLLaMA/comments/14p26g6/sam_altman_wont_tell_you_that_gpt4_has_220b/
false
false
self
264
{'enabled': False, 'images': [{'id': 'UnPhF28sSZ1ETj9B9Te73XL-T2NRpLQkx1Ddci4GvY0', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/xCZ3ThjTz074ovKf-Cn8m7IR3xCuxQjEXBGtODPSveE.jpg?width=108&crop=smart&auto=webp&s=a3ffa86eef19141038ed49c48aaa13febc0aceae', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/xCZ3ThjTz074ovKf-Cn8m7IR3xCuxQjEXBGtODPSveE.jpg?width=216&crop=smart&auto=webp&s=eacbeba4b83996ea4cba8633bcfc81978276da24', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/xCZ3ThjTz074ovKf-Cn8m7IR3xCuxQjEXBGtODPSveE.jpg?width=320&crop=smart&auto=webp&s=ac6febf072cbc3ee752ccd7fee2737649d20317b', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/xCZ3ThjTz074ovKf-Cn8m7IR3xCuxQjEXBGtODPSveE.jpg?auto=webp&s=935614d32248c3d267a8bd3d7d1bdb62c179826e', 'width': 480}, 'variants': {}}]}
Long Safari's Hyena. 1m token length
9
[Hugging Face Repo 1k](https://huggingface.co/LongSafari/hyenadna-tiny-1k-seqlen) [1m seqlen](https://huggingface.co/LongSafari/hyenadna-large-1m-seqlen) [colab notebook](https://colab.research.google.com/drive/1wyVEQd4R3HYLTUOXEEQmp_I8aNC_aLhL?usp=sharing#scrollTo=5vMJu0wQWUBN) This Hyena model uses a slightly different architecture but scales to a million tokens more efficiently than a transformer. However, it only uses a four-token vocabulary for DNA sequencing. I'm a novice still figuring out how to fine-tune effectively, but would it be possible to use a larger vocabulary? There's a Colab notebook that goes over how to train or fine-tune it, but I'm guessing Colab doesn't have the resources to train this into a language foundation model.
2023-07-03T00:09:26
https://www.reddit.com/r/LocalLLaMA/comments/14p3j2f/long_safaris_hyena_1m_token_length/
ArthurFischel
self.LocalLLaMA
2023-07-03T00:20:18
0
{}
14p3j2f
false
null
t3_14p3j2f
/r/LocalLLaMA/comments/14p3j2f/long_safaris_hyena_1m_token_length/
false
false
self
9
{'enabled': False, 'images': [{'id': 'eqPeecDjk1yENDwCFILIh0WWyO0IU1fw_DLdAs_ILLM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Cj-GSTKIoDWnqtOlVBrB_d680-QzTKra8P3RDS-QTWA.jpg?width=108&crop=smart&auto=webp&s=23aeb541253c709239c0b4ddf7c28b47774cfd00', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Cj-GSTKIoDWnqtOlVBrB_d680-QzTKra8P3RDS-QTWA.jpg?width=216&crop=smart&auto=webp&s=ff354a2258dba5cebfc5d17a07c0f4d3ac6f3c4b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Cj-GSTKIoDWnqtOlVBrB_d680-QzTKra8P3RDS-QTWA.jpg?width=320&crop=smart&auto=webp&s=5ee7b4bd03bb0bb38720d8cb581a17a5301ba469', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Cj-GSTKIoDWnqtOlVBrB_d680-QzTKra8P3RDS-QTWA.jpg?width=640&crop=smart&auto=webp&s=a892996c113cb54f5d39210d8d26178cdb6e8b89', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Cj-GSTKIoDWnqtOlVBrB_d680-QzTKra8P3RDS-QTWA.jpg?width=960&crop=smart&auto=webp&s=1082cd64c3e21e9ad2d195b628969cf983690a06', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Cj-GSTKIoDWnqtOlVBrB_d680-QzTKra8P3RDS-QTWA.jpg?width=1080&crop=smart&auto=webp&s=42b0c5d1c717577e6fce0937ad579e73c4581df2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Cj-GSTKIoDWnqtOlVBrB_d680-QzTKra8P3RDS-QTWA.jpg?auto=webp&s=d6ffc2d497a34ccea3ab7771f0f731f2fd66c84d', 'width': 1200}, 'variants': {}}]}
What are recommended models to use?
2
I used to work with a Quadro P2000 with 5GB (about 4.2GB usable) of VRAM, so I could not run any of the LLMs discussed here on GPU at least, and the CPU was extremely slow, which made it totally unusable for any testing. Now we just bought 2 A6000s, so we hopefully have enough power to run a good LLM. Which ones are recommended that we can install and run for our business use cases? LocalGPT is one that comes to mind and we will be running that. Which models does the community recommend that we experiment with? We are looking for established models that the community has reviewed enough.
2023-07-03T00:55:13
https://www.reddit.com/r/LocalLLaMA/comments/14p4hvn/what_are_recommended_models_to_use/
card_chase
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14p4hvn
false
null
t3_14p4hvn
/r/LocalLLaMA/comments/14p4hvn/what_are_recommended_models_to_use/
false
false
self
2
null
Why is nobody talking about this? New best Apache licensed code bot on hugging face or just hype?
26
OpenChat's new series of models looks too good to be true; is that why I haven't seen a single mention of them on here? https://huggingface.co/openchat/opencoderplus Has anyone tried them? The context length is great, and apparently it outperforms GPT in coding tasks, but I can't seem to find any third-party evaluations, and u/The-Bloke hasn't quantized it yet, which as we all know is the true mark of authenticity. Has anyone seen/heard anything?
2023-07-03T01:09:42
https://www.reddit.com/r/LocalLLaMA/comments/14p4swp/why_is_nobody_talking_about_this_new_best_apache/
gentlecucumber
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14p4swp
false
null
t3_14p4swp
/r/LocalLLaMA/comments/14p4swp/why_is_nobody_talking_about_this_new_best_apache/
false
false
self
26
{'enabled': False, 'images': [{'id': 'jSiwo4EUpJP2dA2E7t7MFBcge1Sb5t_4RegEoQSqXKs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8n32LnU-Oj7bq68DXMmCjEsjY6sczsTkW42Q-KbzNnw.jpg?width=108&crop=smart&auto=webp&s=dbccaf9175bb7258fcb8bfbdd7ac81dc10ccb2a7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8n32LnU-Oj7bq68DXMmCjEsjY6sczsTkW42Q-KbzNnw.jpg?width=216&crop=smart&auto=webp&s=0c4e952adf8693366a235fdfe39c4b54b6a2171b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8n32LnU-Oj7bq68DXMmCjEsjY6sczsTkW42Q-KbzNnw.jpg?width=320&crop=smart&auto=webp&s=ff028812eda8dddb63bb718eaa82151847631041', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8n32LnU-Oj7bq68DXMmCjEsjY6sczsTkW42Q-KbzNnw.jpg?width=640&crop=smart&auto=webp&s=b0401ecd3b3d47206700d85be7f98d7452eec99c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8n32LnU-Oj7bq68DXMmCjEsjY6sczsTkW42Q-KbzNnw.jpg?width=960&crop=smart&auto=webp&s=0dd4a2ac89ca0d94880a3f1b928703c44f10792a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8n32LnU-Oj7bq68DXMmCjEsjY6sczsTkW42Q-KbzNnw.jpg?width=1080&crop=smart&auto=webp&s=d9f84e87240d04fba446409b0c347351025125b5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8n32LnU-Oj7bq68DXMmCjEsjY6sczsTkW42Q-KbzNnw.jpg?auto=webp&s=cd1963d9fa794f935f71e2ac4c941269ac8812ac', 'width': 1200}, 'variants': {}}]}
Is this claim meaningful? https://news.ycombinator.com/item?id=36555000
0
This may be an achievement, but why is its MMLU benchmark score even worse than LLaMA-13B (https://github.com/imoneoi/openchat)? Does this suggest the metrics used to evaluate LLMs should be standardized?
2023-07-03T01:40:36
https://www.reddit.com/r/LocalLLaMA/comments/14p5g1s/is_this_claim_meaningful/
cometyang
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14p5g1s
false
null
t3_14p5g1s
/r/LocalLLaMA/comments/14p5g1s/is_this_claim_meaningful/
false
false
self
0
{'enabled': False, 'images': [{'id': 'wQmQP--sKsD5lwYhq4Nga4ANMZ9O85d_uA9R-dfaOgw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ll6WwDlb9e8H95-IgLf9fE-8MxF3BNGeoE_VOti91cI.jpg?width=108&crop=smart&auto=webp&s=e0875b879f9e74feba1650eb58b437acfe20c1e6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Ll6WwDlb9e8H95-IgLf9fE-8MxF3BNGeoE_VOti91cI.jpg?width=216&crop=smart&auto=webp&s=9a9ff1d45e21b27f76da901fe843b80b35508eec', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Ll6WwDlb9e8H95-IgLf9fE-8MxF3BNGeoE_VOti91cI.jpg?width=320&crop=smart&auto=webp&s=85085e57c3f5824042f3df0412ba3fa6af2f335e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Ll6WwDlb9e8H95-IgLf9fE-8MxF3BNGeoE_VOti91cI.jpg?width=640&crop=smart&auto=webp&s=9c76082db51d6d461a2070d5e2ebbc5518608314', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Ll6WwDlb9e8H95-IgLf9fE-8MxF3BNGeoE_VOti91cI.jpg?width=960&crop=smart&auto=webp&s=80fa408090df05bfe82c5532411a0895c18ea8b4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Ll6WwDlb9e8H95-IgLf9fE-8MxF3BNGeoE_VOti91cI.jpg?width=1080&crop=smart&auto=webp&s=a4007fb0a16441da8ed3928d8afccb4bfac457d2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Ll6WwDlb9e8H95-IgLf9fE-8MxF3BNGeoE_VOti91cI.jpg?auto=webp&s=ce788edfcdca2e1fe2b47d8b36584dc9fbc83771', 'width': 1200}, 'variants': {}}]}
Stay on topic with Classifier-Free Guidance
59
2023-07-03T02:41:15
https://arxiv.org/abs/2306.17806
metalman123
arxiv.org
1970-01-01T00:00:00
0
{}
14p6p0g
false
null
t3_14p6p0g
/r/LocalLLaMA/comments/14p6p0g/stay_on_topic_with_classifierfree_guidance/
false
false
https://a.thumbs.redditm…9khBUG5NqaZ4.jpg
59
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
Everyone tries to make helpful assistants. Not me. I want a crazy one.
1
2023-07-03T03:27:29
https://i.redd.it/1p64xh8n5o9b1.jpg
FPham
i.redd.it
1970-01-01T00:00:00
0
{}
14p7me8
false
null
t3_14p7me8
/r/LocalLLaMA/comments/14p7me8/everyone_tries_to_make_helpful_assistants_not_me/
false
false
default
1
{'enabled': True, 'images': [{'id': 'C6Ffu2nZ4Kyzd5RC2V_qSOVJNp3uLV21-aoVEGlLoFo', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/1p64xh8n5o9b1.jpg?width=108&crop=smart&auto=webp&v=enabled&s=616ba28e5a602c982cb5afef62c931658651f68d', 'width': 108}, {'height': 154, 'url': 'https://preview.redd.it/1p64xh8n5o9b1.jpg?width=216&crop=smart&auto=webp&v=enabled&s=88329e30ee431f10af52b62cc0c02675b5e3fbb4', 'width': 216}, {'height': 228, 'url': 'https://preview.redd.it/1p64xh8n5o9b1.jpg?width=320&crop=smart&auto=webp&v=enabled&s=30ea0676ddc8afac29dfe5cd472160e073de177b', 'width': 320}, {'height': 457, 'url': 'https://preview.redd.it/1p64xh8n5o9b1.jpg?width=640&crop=smart&auto=webp&v=enabled&s=2e3c79a68466923ebbe5ced06acd31971a4c345b', 'width': 640}], 'source': {'height': 537, 'url': 'https://preview.redd.it/1p64xh8n5o9b1.jpg?auto=webp&v=enabled&s=d2f125fcb4af89e8bcb606cf9258ea9328c39901', 'width': 751}, 'variants': {}}]}
People want helpful bots. I want an insane one.
123
2023-07-03T03:33:21
https://www.reddit.com/gallery/14p7qhh
FPham
reddit.com
1970-01-01T00:00:00
0
{}
14p7qhh
false
null
t3_14p7qhh
/r/LocalLLaMA/comments/14p7qhh/people_want_helpful_bots_i_want_insane_one/
false
false
https://b.thumbs.redditm…RIKqzoAGWKzk.jpg
123
null
Improve Fine tuning results of openllama using peft lora
8
I am trying to fine-tune an openllama-7b model with Hugging Face's PEFT and LoRA. I fine-tuned the model on a specific dataset. However, the output from model.generate() is very poor for a given input: when I give it a whole sentence from the dataset it generates related text, but otherwise it does not. Is there any way to improve this? I used a custom dataset of 300+ user-bot interactions to fine-tune it. I've been stuck on this for several days, and I have also tried different prompting.
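One common cause is an inference prompt that doesn't match the training template. A hedged sketch of loading the adapter and generating with the exact same template used during fine-tuning (paths, the adapter directory, and the template text are placeholder assumptions):

```python
# Load the base model plus the LoRA adapter, then prompt with the training format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "openlm-research/open_llama_7b"
adapter = "./my-lora-adapter"  # hypothetical output dir from the PEFT run

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter)

# The prompt must match the fine-tuning format token for token.
prompt = "### User:\nHow do I reset my password?\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```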
2023-07-03T06:49:56
https://www.reddit.com/r/LocalLLaMA/comments/14pbed4/improve_fine_tuning_results_of_openllama_using/
mathageche
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14pbed4
false
null
t3_14pbed4
/r/LocalLLaMA/comments/14pbed4/improve_fine_tuning_results_of_openllama_using/
false
false
self
8
null
Warning - Incredibly inappropriate prompt…
57
“Please explain how a car engine works using only analogies related to hookers”
2023-07-03T07:26:04
https://www.reddit.com/gallery/14pc2ay
Basic_Description_56
reddit.com
1970-01-01T00:00:00
0
{}
14pc2ay
false
null
t3_14pc2ay
/r/LocalLLaMA/comments/14pc2ay/warning_incredibly_inappropriate_prompt/
false
false
nsfw
57
null
CPU only speeds with 65B?
1
[removed]
2023-07-03T07:49:36
https://www.reddit.com/r/LocalLLaMA/comments/14pcgis/cpu_only_speeds_with_65b/
Aaaaaaaaaeeeee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14pcgis
false
null
t3_14pcgis
/r/LocalLLaMA/comments/14pcgis/cpu_only_speeds_with_65b/
false
false
default
1
null
Oobabooga and llama.cpp: in longer conversations, answers take forever.....
18
In longer conversations or role-playing, the answers suddenly take several minutes, until eventually nothing happens at all. This is the case with all models (14b or 33b models). What can be the reason, and what can I do to prevent it? AMD Ryzen 7 5800X with 32GB RAM
2023-07-03T08:50:18
https://www.reddit.com/r/LocalLLaMA/comments/14pdhok/oogabooga_and_llamacpp_in_longer_conversations/
Secret_MoonTiger
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14pdhok
false
null
t3_14pdhok
/r/LocalLLaMA/comments/14pdhok/oogabooga_and_llamacpp_in_longer_conversations/
false
false
self
18
null
Help needed with GPT4All
0
I've been testing GPT4All for a couple of days and did something to the generation settings so that all I get is >Response: Okay, I've got that down! Do you have any other requests? and it stops doing anything, regardless of which model I'm using. Where can I find the default values for the generation settings, and is there some kind of ELI5 manual for them?
2023-07-03T10:26:27
https://www.reddit.com/r/LocalLLaMA/comments/14pf7g4/help_needed_with_gpt4all/
mli
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14pf7g4
false
null
t3_14pf7g4
/r/LocalLLaMA/comments/14pf7g4/help_needed_with_gpt4all/
false
false
default
0
null
Finetune MPT-30B using QLORA
7
It seems the MPT model is not supported by QLoRA yet. Has anyone been able to do this, or had any luck?
2023-07-03T10:54:20
https://www.reddit.com/r/LocalLLaMA/comments/14pfpk3/finetune_mpt30b_using_qlora/
Raise_Fickle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14pfpk3
false
null
t3_14pfpk3
/r/LocalLLaMA/comments/14pfpk3/finetune_mpt30b_using_qlora/
false
false
self
7
null
Best open source uncensored LLMs for sentiment analysis of 1:1 conversations?
3
Which open source LLMs are recommended for evaluating conversations in 1:1 chat messages? GPT-4 does a great job of evaluating conversation sentiment, but if I give the same prompt to WizardVicuna 13B it doesn't return anything! I'm looking to get a score between 1 and 100 for any given conversation between two individuals, where the score indicates positive sentiment or any other given criterion.
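A hedged guess at what may be going wrong: smaller instruction-tuned models usually need their exact prompt template plus a tightly constrained output format. Something along these lines tends to work better than a bare question (the template below assumes a Vicuna-style model; substitute whatever the model card specifies):

```python
# Illustrative prompt only; the conversation text and template are assumptions.
conversation = "A: Thanks for your help yesterday!\nB: Anytime, glad it worked out."
prompt = (
    "A chat between a curious user and an artificial intelligence assistant.\n"
    "USER: Rate the overall sentiment of the following conversation on a scale "
    "from 1 (very negative) to 100 (very positive). Reply with only the number.\n\n"
    f"{conversation}\nASSISTANT:"
)
```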
2023-07-03T11:05:46
https://www.reddit.com/r/LocalLLaMA/comments/14pfxuw/best_open_source_uncensored_llms_for_sentiment/
RepresentativeOdd276
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14pfxuw
false
null
t3_14pfxuw
/r/LocalLLaMA/comments/14pfxuw/best_open_source_uncensored_llms_for_sentiment/
false
false
self
3
null
GENERATIVE AI LLM POWERED AGENT BY FLOATBOT.AI
1
[removed]
2023-07-03T11:26:01
https://www.reddit.com/r/LocalLLaMA/comments/14pgbyg/generative_ai_llm_powered_agent_by_floatbotai/
Floatbot_Inc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14pgbyg
false
null
t3_14pgbyg
/r/LocalLLaMA/comments/14pgbyg/generative_ai_llm_powered_agent_by_floatbotai/
false
false
default
1
null
"Are you a Boltzmann brain?" - An LLM tries to answer a complex thought experiment
10
2023-07-03T12:11:30
https://www.reddit.com/gallery/14pha9v
Alex_riveiro
reddit.com
1970-01-01T00:00:00
0
{}
14pha9v
false
null
t3_14pha9v
/r/LocalLLaMA/comments/14pha9v/are_you_a_boltzmann_brain_an_llm_tries_to_answer/
false
false
https://b.thumbs.redditm…FKMdq-RBP-Uw.jpg
10
null
Please explain how cars work using hooker related analogies
0
Guanaco 33b
2023-07-03T12:23:49
https://www.reddit.com/gallery/14phjvy
Basic_Description_56
reddit.com
1970-01-01T00:00:00
0
{}
14phjvy
false
null
t3_14phjvy
/r/LocalLLaMA/comments/14phjvy/please_explain_how_cars_work_using_hooker_related/
false
false
default
0
null
Is QLoRA viable to train Falcon-7B on a new language?
6
Hey there. I've been researching the latest techniques, and I understand quite well (I hope I do) that (Q)LoRA is a viable option for fine-tuning LLMs on QA for specific tasks. I am still wondering if I could use the same technique to fine-tune an open LLM (I am not sure if the Falcon models are the most suitable for this) on Romanian, since my company has a lot of documents in Romanian and I would like the model to answer questions about them, and maybe produce new documents based on the ones it has seen during fine-tuning. First I would like to fine-tune it on Romanian, and then on the specific task (accounting questions in Romanian, legislation, rules, etc.). Can you please help me with some hints on how I would build my training datasets for the first part?
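For the "teach it the language first" step, one common approach is simply raw Romanian text packed into fixed-length blocks for causal-LM training; the instruction/QA pairs come later. A hedged sketch (the corpus path is a placeholder, and hyperparameters are illustrative):

```python
# Pack raw text into fixed-length blocks for continued pretraining.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
raw = load_dataset("text", data_files={"train": "romanian_corpus/*.txt"})  # placeholder path

def pack(batch, block_size=2048):
    ids = tokenizer("\n\n".join(batch["text"]))["input_ids"]
    blocks = [ids[i:i + block_size] for i in range(0, len(ids) - block_size + 1, block_size)]
    return {"input_ids": blocks, "labels": blocks}

train = raw["train"].map(pack, batched=True, remove_columns=["text"])
print(train)
```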
2023-07-03T12:32:34
https://www.reddit.com/r/LocalLLaMA/comments/14phqu3/is_qlora_viable_to_train_falcon7b_on_a_new/
Marc_Marc_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14phqu3
false
null
t3_14phqu3
/r/LocalLLaMA/comments/14phqu3/is_qlora_viable_to_train_falcon7b_on_a_new/
false
false
self
6
null
Cpu ? Intel 13900hx or amd 7945hx
1
[removed]
2023-07-03T12:35:09
https://www.reddit.com/r/LocalLLaMA/comments/14phssq/cpu_intel_13900hx_or_amd_7945hx/
SuperbPay2650
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14phssq
false
null
t3_14phssq
/r/LocalLLaMA/comments/14phssq/cpu_intel_13900hx_or_amd_7945hx/
false
false
default
1
null
Open LLaMA 7B uncensored + HuggingFace QLoRA fine-tuning guide
136
I just trained an OpenLLaMA-7B fine-tuned on uncensored Wizard-Vicuna conversation dataset, the model is available on HuggingFace: [georgesung/open_llama_7b_qlora_uncensored](https://huggingface.co/georgesung/open_llama_7b_qlora_uncensored) I tested some ad-hoc prompts with it and the results look decent, available in this [Colab notebook](https://colab.research.google.com/drive/1IlpeofYD9EU6dNHyKKObZhIzkBMyqlUS). Since this was my first time fine-tuning an LLM, I wrote a guide on how I did the fine-tuning using QLoRA via the HuggingFace library, for those interested: [https://georgesung.github.io/ai/qlora-ift/](https://georgesung.github.io/ai/qlora-ift/) Let me know if you have any feedback and/or ideas for improvement!
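For readers who want the shape of such a run without opening the guide, here is a rough sketch of the QLoRA setup described above; the dataset, hyperparameters, and target modules are illustrative assumptions rather than the exact values used:

```python
# 4-bit (NF4) base model + LoRA adapters via PEFT, ready for a standard Trainer loop.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "openlm-research/open_llama_7b"
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb, device_map="auto")
model = prepare_model_for_kbit_training(model)

lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, bias="none",
                  task_type="CAUSAL_LM",
                  target_modules=["q_proj", "k_proj", "v_proj", "o_proj"])
model = get_peft_model(model, lora)
model.print_trainable_parameters()
# From here, a standard transformers Trainer / SFT loop over the
# Wizard-Vicuna conversations does the actual fine-tuning.
```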
2023-07-03T12:41:02
https://www.reddit.com/r/LocalLLaMA/comments/14phxe8/open_llama_7b_uncensored_huggingface_qlora/
georgesung
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14phxe8
false
null
t3_14phxe8
/r/LocalLLaMA/comments/14phxe8/open_llama_7b_uncensored_huggingface_qlora/
false
false
self
136
{'enabled': False, 'images': [{'id': 'Z36SwIfcno6DeHpjnLL0jUEWnU5MlkVBpDKkp4s-2qo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/WWTjpDccKOu_6BLjW732pGPrRN1P2W_b19_w9XXVFqk.jpg?width=108&crop=smart&auto=webp&s=cf9d1859bc4a6fa0cc4ebc558f2099185531c5a1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/WWTjpDccKOu_6BLjW732pGPrRN1P2W_b19_w9XXVFqk.jpg?width=216&crop=smart&auto=webp&s=cf898261fdfa0f6f0e078aa0353057564b8fd384', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/WWTjpDccKOu_6BLjW732pGPrRN1P2W_b19_w9XXVFqk.jpg?width=320&crop=smart&auto=webp&s=dc048115f6110fc9168615d85b3fa59769368c3a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/WWTjpDccKOu_6BLjW732pGPrRN1P2W_b19_w9XXVFqk.jpg?width=640&crop=smart&auto=webp&s=7cd3b8a9ef95683b0d31a08683919fe385aafec5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/WWTjpDccKOu_6BLjW732pGPrRN1P2W_b19_w9XXVFqk.jpg?width=960&crop=smart&auto=webp&s=d8eb138cf68184fa024e2d793d9aca52ff66014b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/WWTjpDccKOu_6BLjW732pGPrRN1P2W_b19_w9XXVFqk.jpg?width=1080&crop=smart&auto=webp&s=b916797df32e7f39c33531c058abfc3ebcc86538', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/WWTjpDccKOu_6BLjW732pGPrRN1P2W_b19_w9XXVFqk.jpg?auto=webp&s=c21414a0832ba5f30113edf688acc7d33ef52c73', 'width': 1200}, 'variants': {}}]}
How to install MiniGPT-4?
0
Got stuck at the Vicuna weights. Going for the 7B version. Is there a link from which I can directly download the weights? The site says to download the original weights, then the modified weights, and then merge the two (using FastChat). I have both Unix (without GPU) and Windows (with GPU) machines.
2023-07-03T13:02:29
https://www.reddit.com/r/LocalLLaMA/comments/14pifjg/how_to_install_mingpt4/
No-Work-6969
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14pifjg
false
null
t3_14pifjg
/r/LocalLLaMA/comments/14pifjg/how_to_install_mingpt4/
false
false
default
0
null
Why are LLMs struggling in crafting novels?
42
I believe that the ability to craft a captivating story serves as an essential gauge to measure an AI’s “intelligence”. However, AI seems to significantly underperform in this regard. I enlisted GPT4 and several 65B LLaMa models to create stories, but I observed a plethora of issues: 1. The narratives these models construct are exceedingly simplistic. The plots lack appeal and do not resonate with human sensibilities or logic. 2. The attention mechanism of Transformers occasionally loses track of context, leading to baffling mistakes. For example, in a scene set in a bathroom featuring NSFW content, after a few hundred tokens (still well within context length), it forgets that the events are unfolding in a bathroom. This kind of mistake never happens to human writers. Can anyone shed light on LLM's story writing abilities? ------------------ I apologize for any confusion caused by the use of the word "novel" in the heading. It would be more appropriate to refer to it as a "story" instead, as it consists of fewer than 3000-4000 tokens.
2023-07-03T14:26:53
https://www.reddit.com/r/LocalLLaMA/comments/14pkk73/why_are_llms_struggling_in_crafting_novels/
Big_Communication353
self.LocalLLaMA
2023-07-03T14:54:28
0
{}
14pkk73
false
null
t3_14pkk73
/r/LocalLLaMA/comments/14pkk73/why_are_llms_struggling_in_crafting_novels/
false
false
self
42
null
does anybody know how to fix this?
1
I have been trying to make a LoRA in the Kohya trainer for a few days now, but problem after problem keeps popping up, and this one I really don't know how to fix. If anybody has an idea of what I can do to fix it, please let me know. https://preview.redd.it/7nldc0bzkr9b1.png?width=1918&format=png&auto=webp&v=enabled&s=38bb532c4f0cd4e94256733f94af98ef8d1f75bd
2023-07-03T14:58:19
https://www.reddit.com/r/LocalLLaMA/comments/14pldr4/does_anybody_know_how_to_fix_this/
Creative-Bag-8321
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14pldr4
false
null
t3_14pldr4
/r/LocalLLaMA/comments/14pldr4/does_anybody_know_how_to_fix_this/
false
false
default
1
null
Recommendations for open source LLM training enterprise solutions
3
Something similar to what MosaicML provides (training your own LLMs while maintain full control of the pipeline).
2023-07-03T14:59:28
https://www.reddit.com/r/LocalLLaMA/comments/14plesy/recommendations_for_open_source_llm_training/
Fa8d
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14plesy
false
null
t3_14plesy
/r/LocalLLaMA/comments/14plesy/recommendations_for_open_source_llm_training/
false
false
self
3
null
Replace supabase
1
[removed]
2023-07-03T15:34:45
https://www.reddit.com/r/LocalLLaMA/comments/14pmcmj/replace_supabase/
One_Creator_One
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14pmcmj
false
null
t3_14pmcmj
/r/LocalLLaMA/comments/14pmcmj/replace_supabase/
false
false
default
1
{'enabled': False, 'images': [{'id': 'lgSOrsVjoEhtkuMTEwGVvJW8pTAc3CwTJuN7zVN6D3w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Bs0NY5NAhMUlWDoa1uc-pqnAobiaygBVjgqZoWuoHNo.jpg?width=108&crop=smart&auto=webp&v=enabled&s=2ecc961155d1502ff42fbbd653d608a6843b6984', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Bs0NY5NAhMUlWDoa1uc-pqnAobiaygBVjgqZoWuoHNo.jpg?width=216&crop=smart&auto=webp&v=enabled&s=fe64dc3aae9b2914d631274f0ecc2c2c1a6d9311', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Bs0NY5NAhMUlWDoa1uc-pqnAobiaygBVjgqZoWuoHNo.jpg?width=320&crop=smart&auto=webp&v=enabled&s=36107c8f718043f91abc6dde329dec27da2412db', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Bs0NY5NAhMUlWDoa1uc-pqnAobiaygBVjgqZoWuoHNo.jpg?width=640&crop=smart&auto=webp&v=enabled&s=7706eccc92355237df98a1b845aa3fcb38b25992', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Bs0NY5NAhMUlWDoa1uc-pqnAobiaygBVjgqZoWuoHNo.jpg?width=960&crop=smart&auto=webp&v=enabled&s=844555e4ad91ac51682fc2931e7bb3fc6432bef4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Bs0NY5NAhMUlWDoa1uc-pqnAobiaygBVjgqZoWuoHNo.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=5252201b866539e9c204d04ec0619c1fe7bdb8b5', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/Bs0NY5NAhMUlWDoa1uc-pqnAobiaygBVjgqZoWuoHNo.jpg?auto=webp&v=enabled&s=dab0cdf6d64f91c2478d7e0971893b8ef39247d9', 'width': 1280}, 'variants': {}}]}
Dual 3090 and NVlink (or not) for 65B models with ooba and 4bit 65B models
4
Apologies in advance if there is already a good answer to this question, but I have done a lot of research and am not seeing a clear one. I currently have an i9 with a single 3090 and run oobabooga with 4-bit 30B GPTQ models and ExLlama (I think); they run just fine. However, I want to do more LoRA training, and I want to try 65B models, which would be beyond the capacity of the single 3090. My motherboard will support PCIe 4.0 x8 on two slots with sufficient space for dual 3090s, so I was thinking of getting another 3090 and an NVLink bridge to interconnect them. However, I am new to this, and it seems like multi-GPU support with ooba may not be trivial and may not scale easily. Does anyone have experience with how to do this, or can anyone point me to a place where setting this up is explained? Also, there doesn't seem to be clarity on whether introducing NVLink (which my mobo explicitly supports) is of benefit here. I have read that it speeds up passing data between GPUs but does not create one big GPU memory space, so it is not clear to me whether that helps in practice, given the layers of software being used. Do I have this right? Again, I am hoping someone has already run this setup or can point me in a direction that would answer this. Apologies in advance for the long, multi-part question.
2023-07-03T15:51:04
https://www.reddit.com/r/LocalLLaMA/comments/14pmrmu/dual_3090_and_nvlink_or_not_for_65b_models_with/
Over-Bell617
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14pmrmu
false
null
t3_14pmrmu
/r/LocalLLaMA/comments/14pmrmu/dual_3090_and_nvlink_or_not_for_65b_models_with/
false
false
self
4
null
Multi-threaded GGML Model Downloader with CLI & python API
10
2023-07-03T16:08:04
https://github.com/the-crypt-keeper/ggml-downloader
kryptkpr
github.com
1970-01-01T00:00:00
0
{}
14pn7xl
false
null
t3_14pn7xl
/r/LocalLLaMA/comments/14pn7xl/multithreaded_ggml_model_downloader_with_cli/
false
false
https://b.thumbs.redditm…iqTzOeNMFHOc.jpg
10
{'enabled': False, 'images': [{'id': '62olP8l_L-wA3MU1fGeqBJyoh7_1tEtttd-s3a1NkrI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Fd4UjeWRcwZwXFavMyb8rVmZn7ss_fPkgJWfG3_REz0.jpg?width=108&crop=smart&auto=webp&s=023d3b22a74c6be21bfb462ac5b10317ea11b260', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Fd4UjeWRcwZwXFavMyb8rVmZn7ss_fPkgJWfG3_REz0.jpg?width=216&crop=smart&auto=webp&s=448c9dfa451b2aec859be41663e4a5a735edd930', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Fd4UjeWRcwZwXFavMyb8rVmZn7ss_fPkgJWfG3_REz0.jpg?width=320&crop=smart&auto=webp&s=de113171ce44d208acd4b4287b8a5b00875d069b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Fd4UjeWRcwZwXFavMyb8rVmZn7ss_fPkgJWfG3_REz0.jpg?width=640&crop=smart&auto=webp&s=94aea5e1ee75a7757f7536b2d92744284596b066', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Fd4UjeWRcwZwXFavMyb8rVmZn7ss_fPkgJWfG3_REz0.jpg?width=960&crop=smart&auto=webp&s=3775d315fd7b4352a4f9109c70209e4e0208aa60', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Fd4UjeWRcwZwXFavMyb8rVmZn7ss_fPkgJWfG3_REz0.jpg?width=1080&crop=smart&auto=webp&s=8f73704db3f9aa054a7a16bef18f46e023539760', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Fd4UjeWRcwZwXFavMyb8rVmZn7ss_fPkgJWfG3_REz0.jpg?auto=webp&s=e1cbf86ce2f95dd21fcb0296e23def88e48db86d', 'width': 1200}, 'variants': {}}]}
Upgrade to 3x3090?
5
After running 2x3090 for some months (Threadripper 1600w PSU) it feels like I need to upgrade my LLM computer to do things like qlora fine tune of 30b models with over 2k context, or 30b models at 2k with a reasonable speed. Do you think my next upgrade should be adding a third 3090? How will I fit the 3rd one into my Fractal meshify case?
2023-07-03T16:08:35
https://www.reddit.com/r/LocalLLaMA/comments/14pn8g9/upgrade_to_3x3090/
xynyxyn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14pn8g9
false
null
t3_14pn8g9
/r/LocalLLaMA/comments/14pn8g9/upgrade_to_3x3090/
false
false
self
5
null
In light of the recent advancements in superhot long-context models, what's the current top choice for coding models?
4
I've been keeping up with the latest developments in superhot long-context models, and I'm curious to know which coding model is currently considered the best option. With the rapid progress in natural language processing, there are now several impressive models available for coding tasks. So, I was wondering if anyone could shed some light on the current go-to coding model that developers are using? Whether it's for generating code, code completion, or any other coding-related tasks, I'm eager to hear your thoughts and recommendations. It would be great if you could also share any personal experiences or insights you've gained from using these models in your coding projects. Your input would be highly appreciated! Looking forward to some informative discussions. Thanks in advance!
2023-07-03T16:09:27
https://www.reddit.com/r/LocalLLaMA/comments/14pn96o/in_light_of_the_recent_advancements_in_superhot/
fpena06
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14pn96o
false
null
t3_14pn96o
/r/LocalLLaMA/comments/14pn96o/in_light_of_the_recent_advancements_in_superhot/
false
false
self
4
null
What's the path to fastest local inference?
14
I'm deploying various LLMs locally and my use case does not allow for streaming responses. My aim is to get the fastest response possible, at the cost of sacrificing (some) quality. I'm looking to use models >= 7B. What kind of software and hardware techniques will decrease inference latency? Will bigger GPUs help me here? Perhaps greedy decoding? Model quantization? Model sharding across multiple GPUs with something like accelerate? Thanks for the advice.
2023-07-03T17:55:04
https://www.reddit.com/r/LocalLLaMA/comments/14ppza6/whats_the_path_to_fastest_local_inference/
n3utrino
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14ppza6
false
null
t3_14ppza6
/r/LocalLLaMA/comments/14ppza6/whats_the_path_to_fastest_local_inference/
false
false
self
14
null
How far can we get with RLHF?
15
Based on the work of Databricks on Dolly 2.0 I have been wondering how much upside we can get from training open source foundational models on well-curated prompt-response pairs. Specifically, I was wondering - if we could design an incentive mechanism to crowd-source a VERY large and high quality prompt-response pairs database (perhaps with a focus on code initially?) 1. How useful / valuable would such a database be? 2. Any way to quantify the expected benefits / performance upside from constructing and training an open source LLM on such a database? 3. What would be the main challenges to overcome? (sheer size / number of prompt/response pairs required, incentive design, preventing ai-generated data input, quality control, compute requirements...) 4. Is anyone working on building open source prompt-response pair databases like this at the moment? Quite new in the space, so probably way out of my depth here, but any thoughts / links would be very much appreciated! Thanks!
2023-07-03T19:52:46
https://www.reddit.com/r/LocalLLaMA/comments/14psxl4/how_far_can_we_get_with_rlhf/
Most-Procedure-2201
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14psxl4
false
null
t3_14psxl4
/r/LocalLLaMA/comments/14psxl4/how_far_can_we_get_with_rlhf/
false
false
self
15
null
can anyone send me a good tutorial on how to make a lora?
11
I have tried to find good videos, and the only one I found with clear instructions didn't work for me, so now I'm a bit stumped on what to do.
2023-07-03T20:15:14
https://www.reddit.com/r/LocalLLaMA/comments/14pti9p/can_anyone_send_me_a_good_tutorial_on_how_to_make/
Creative-Bag-8321
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14pti9p
false
null
t3_14pti9p
/r/LocalLLaMA/comments/14pti9p/can_anyone_send_me_a_good_tutorial_on_how_to_make/
false
false
self
11
null
What model do I use?
0
I've been wanting to make a tool that'll write a summary/article when given a cricket scorecard (it's a sport, if you don't know). I'm not sure if this is even feasible. I know that I have to work with an LLM of some sort, but I don't know which one or how to even start with it. The thing is, I only have a laptop with a Ryzen 7 4800H (8c/16t, and I've only seen it boost to 3.6 GHz on all cores), a 1650, and 16 GB of RAM. So I'm pretty sure, after browsing the sub for a while, that a 7B or a 3B model is what I'm going to go with, but how do I tune it for my task? I've worked with t5-small before for summarisation tasks, but that's as far as my experience with LLMs or transformers goes. I just need some pointers on how to proceed; that would be really helpful.
2023-07-03T20:42:07
https://www.reddit.com/r/LocalLLaMA/comments/14pu78y/what_model_do_i_use/
tiredskater
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14pu78y
false
null
t3_14pu78y
/r/LocalLLaMA/comments/14pu78y/what_model_do_i_use/
false
false
default
0
null
gptd - A shared chat systemd service (and other things too)
2
2023-07-03T21:03:06
https://github.com/cbigger/gptd
Otherwise-Poet-4362
github.com
1970-01-01T00:00:00
0
{}
14puru0
false
null
t3_14puru0
/r/LocalLLaMA/comments/14puru0/gptd_a_shared_chat_systemd_service_and_other/
false
false
default
2
{'enabled': False, 'images': [{'id': 'CUFwTko8DYy4TtgAGUh_9pWnjKXVmK9aOjJP-O8IYj8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kaLX5ULzU_FGe-9I2t-2NodyAgKEzQEaHPngFA4XjFw.jpg?width=108&crop=smart&auto=webp&v=enabled&s=eb7f57a65b774a1c758710fb9e02d5a8246612d1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/kaLX5ULzU_FGe-9I2t-2NodyAgKEzQEaHPngFA4XjFw.jpg?width=216&crop=smart&auto=webp&v=enabled&s=52be5fb1221cb78afaeb5ea831b777798e962565', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/kaLX5ULzU_FGe-9I2t-2NodyAgKEzQEaHPngFA4XjFw.jpg?width=320&crop=smart&auto=webp&v=enabled&s=ef71150b57a84344feb15831f1d2cb0730d1710b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/kaLX5ULzU_FGe-9I2t-2NodyAgKEzQEaHPngFA4XjFw.jpg?width=640&crop=smart&auto=webp&v=enabled&s=d1c34694bd77a63a53ab7fabc059267aceedb6be', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/kaLX5ULzU_FGe-9I2t-2NodyAgKEzQEaHPngFA4XjFw.jpg?width=960&crop=smart&auto=webp&v=enabled&s=b1994b008da1478cb7484fe68e4319678ddc3c8f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/kaLX5ULzU_FGe-9I2t-2NodyAgKEzQEaHPngFA4XjFw.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=26a117e68637382ac1c70b4b3603e3a15db46693', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/kaLX5ULzU_FGe-9I2t-2NodyAgKEzQEaHPngFA4XjFw.jpg?auto=webp&v=enabled&s=36cf3a5d5787bed95512d37ea44e78fa80d0964f', 'width': 1200}, 'variants': {}}]}
What model is best for text classification these days?
3
What model or what kind of model is best for text classification?
2023-07-03T21:13:44
https://www.reddit.com/r/LocalLLaMA/comments/14pv20y/what_model_is_best_for_text_classification_these/
gi_beelzebub
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14pv20y
false
null
t3_14pv20y
/r/LocalLLaMA/comments/14pv20y/what_model_is_best_for_text_classification_these/
false
false
self
3
null
Translate a model by fine tuning it on its own dataset?
2
Hi. I was thinking about picking up a small model, translating its dataset to another language, and then fine-tuning the model on that translated dataset in the hope that it would then be more usable in that language. Would that work? Or would the model get confused because the fine-tune is as big as the model itself?
2023-07-03T21:48:28
https://www.reddit.com/r/LocalLLaMA/comments/14pvzdf/translate_a_model_by_fine_tuning_it_on_its_own/
ChobPT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14pvzdf
false
null
t3_14pvzdf
/r/LocalLLaMA/comments/14pvzdf/translate_a_model_by_fine_tuning_it_on_its_own/
false
false
self
2
null
Apple's Metal is getting bfloat16 support
77
2023-07-03T23:58:14
https://developer.apple.com/videos/play/wwdc2023/10050/?time=590
MrBeforeMyTime
developer.apple.com
1970-01-01T00:00:00
0
{}
14pz4v0
false
null
t3_14pz4v0
/r/LocalLLaMA/comments/14pz4v0/apples_metal_is_getting_bfloat16_support/
false
false
https://b.thumbs.redditm…eJZl9lABuT_g.jpg
77
{'enabled': False, 'images': [{'id': 'MLctQqRIC6f-5lmr9MJ5sJ4ninEzepAYZI-3YYtzJAA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/lFQ02aCxxhBWNYXaCtrGSqy6OOPQDD9l0zgJjO8rUDE.jpg?width=108&crop=smart&auto=webp&s=1e1fd8556ae8c072c063e49269abeefbd715bdd4', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/lFQ02aCxxhBWNYXaCtrGSqy6OOPQDD9l0zgJjO8rUDE.jpg?width=216&crop=smart&auto=webp&s=f559d09f3af48796b10fb1070b1e3520b86efece', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/lFQ02aCxxhBWNYXaCtrGSqy6OOPQDD9l0zgJjO8rUDE.jpg?width=320&crop=smart&auto=webp&s=58522f17f52753f60f532c91aa5414b9fa9a3419', 'width': 320}], 'source': {'height': 282, 'url': 'https://external-preview.redd.it/lFQ02aCxxhBWNYXaCtrGSqy6OOPQDD9l0zgJjO8rUDE.jpg?auto=webp&s=fdea53083ac870e320ea3043297eaaab4d324e6d', 'width': 500}, 'variants': {}}]}
Chatbot Prompt Chaining
6
I've been trying to figure out some way to have a chat frontend (I've been using Silly Tavern, but open to anything) in which I can chain prompts together to try to have a more compelling, coherent character. Something like: 1. Feed chat and instruct to provide Char's thoughts and goals based on their personality and details. 2. Determine how char would likely respond and act by feeding the chat and results from (1.). 3. Instruct to provide the next response, accommodating for (1.) and (2.). Kind of at a loss figuring out how to set something like this up, it seems there are some tools that allow for some prompt chaining, but they all work with just OpenAI and aren't suited for a chat format. Anybody had any luck with something like this?
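A sketch of the three-step chain described above, backend-agnostic: `generate()` is a stand-in for whatever local API you use (text-generation-webui, llama-cpp-python, etc.), and the prompt wording is purely illustrative; only the chaining structure is the point here.

```python
# Hypothetical three-stage chain: thoughts -> plan -> final reply.
def generate(prompt: str) -> str:
    raise NotImplementedError("plug in your local backend here")

def chat_turn(persona: str, history: str, user_msg: str) -> str:
    # 1. Infer the character's current thoughts and goals from the chat so far.
    thoughts = generate(
        f"{persona}\n\nChat so far:\n{history}\n\n"
        "Describe the character's current thoughts and goals in 2-3 sentences.")
    # 2. Decide how the character would likely respond, given those thoughts.
    plan = generate(
        f"{persona}\n\nChat so far:\n{history}\n\nThoughts: {thoughts}\n\n"
        "In one sentence, how would the character respond or act next?")
    # 3. Produce the actual reply, conditioned on steps 1 and 2.
    return generate(
        f"{persona}\n\nChat so far:\n{history}\nUser: {user_msg}\n\n"
        f"(Inner thoughts: {thoughts})\n(Plan: {plan})\n\nCharacter:")
```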
2023-07-04T00:50:38
https://www.reddit.com/r/LocalLLaMA/comments/14q09xu/chatbot_prompt_chaining/
Inevitable_Command58
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14q09xu
false
null
t3_14q09xu
/r/LocalLLaMA/comments/14q09xu/chatbot_prompt_chaining/
false
false
self
6
null
Help understanding CPU inference options (Llama, HF, etc)
5
Greetings, Ever since I started playing with orca-3b I've been on a quest to figure out the best way to get it running on the MacBooks (Intel, no GPU) of my team. I've played with things like GPT4All, which do a great job, but I ultimately want to build my own interface to all of this, as the existing ones just seem to fall short of what I really want. What I'm struggling to understand is my various options. If we assume the model is "orca-mini-3b.ggmlv3.q4_0.bin", and I want a way to load/unload it at will (click a button) so someone can load it, do what they need, and then unload it to free up resources, it seems like my options are "transformers" (via HuggingFace), llama.cpp, ExLlama(?), and a few others I'm sure. I wanted to start by running some benchmarks, but I'm not even sure I'm comparing apples to apples. With the prerequisite that my solution has to work for commercial use, am I right in trying:

* Transformers (HuggingFace) - Can this even do CPU inference?
* Llama.cpp
* ExLlama?

And if I do get this working with one of the above, I assume the way I interact with Orca (the actual prompt I send) would be formatted the same way? Lastly, I'm still confused about whether I can actually use llama.cpp for commercial use. I know I can't use the LLaMA models, but Orca seems to be just fine for commercial use. I'd greatly appreciate any help in getting "unstuck" here.
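For CPU-only Intel Macs with a GGML file, llama-cpp-python is the usual route. A hedged sketch of the "load on click, unload to free resources" idea; the model path, thread count, and prompt template are assumptions (check the orca-mini model card for the exact format):

```python
# Simple load/unload wrapper around llama-cpp-python for CPU inference.
from llama_cpp import Llama

llm = None

def load_model():
    global llm
    llm = Llama(model_path="./orca-mini-3b.ggmlv3.q4_0.bin", n_ctx=2048, n_threads=8)

def unload_model():
    global llm
    llm = None  # dropping the reference lets Python release the weights

def ask(prompt: str) -> str:
    out = llm(f"### User:\n{prompt}\n\n### Response:\n", max_tokens=256)
    return out["choices"][0]["text"]

load_model()
print(ask("Explain GGML quantization in one sentence."))
unload_model()
```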
2023-07-04T00:55:48
https://www.reddit.com/r/LocalLLaMA/comments/14q0dva/help_understanding_cpu_inference_options_llama_hf/
SigmaSixShooter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14q0dva
false
null
t3_14q0dva
/r/LocalLLaMA/comments/14q0dva/help_understanding_cpu_inference_options_llama_hf/
false
false
self
5
null
embedding from RedPajama INCITE chat 3B
1
Any suggestions on how to get embeddings? I plan to use the [**RedPajama-INCITE-Chat-3B-v1**](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1) model: https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1 I want to perform tasks similar to what can be done with OpenAI's [embeddings API endpoint](https://platform.openai.com/docs/api-reference/embeddings) for ChatGPT. I would like to do **search** (where results are ranked by relevance to a query string). Any pointers on how I might start would be great, thanks! I found the following article: https://medium.com/@ryanntk/choosing-the-right-embedding-model-a-guide-for-llm-applications-7a60180d28e3 I guess I should look into LlamaIndex and calculate the embeddings through that.
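One hedged way to get embeddings out of a decoder-only model like this one is to mean-pool its last hidden states; a dedicated sentence-embedding model (e.g. via sentence-transformers) is usually a better fit for search, but this shows the mechanics. The example texts are placeholders:

```python
# Mean-pooled hidden-state embeddings plus cosine similarity for ranking.
import torch
from transformers import AutoModel, AutoTokenizer

name = "togethercomputer/RedPajama-INCITE-Chat-3B-v1"
tok = AutoTokenizer.from_pretrained(name)
if tok.pad_token is None:
    tok.pad_token = tok.eos_token
model = AutoModel.from_pretrained(name)

def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state   # [batch, seq, hidden]
    mask = batch["attention_mask"].unsqueeze(-1)    # zero out padding positions
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

docs = embed(["how to pay an invoice by wire", "resetting a forgotten password"])
query = embed(["how do I pay?"])
print(torch.nn.functional.cosine_similarity(query, docs))  # higher = more relevant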
2023-07-04T02:06:11
https://www.reddit.com/r/LocalLLaMA/comments/14q1uv9/embedding_from_redpajama_incite_chat_3b/
hungrydit
self.LocalLLaMA
2023-07-04T02:14:29
0
{}
14q1uv9
false
null
t3_14q1uv9
/r/LocalLLaMA/comments/14q1uv9/embedding_from_redpajama_incite_chat_3b/
false
false
self
1
{'enabled': False, 'images': [{'id': 'RuAhlMmaFG-qNGEMgQSkw7rAep3HKETLyxZkX-RCAWg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/fJwGdbINL2jwDJJH6-T59tqzP6nTxzU6rPeGCLmOj_g.jpg?width=108&crop=smart&auto=webp&s=60abf9f276c7cc5dc2d84bde19fdbf4e939a4d54', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/fJwGdbINL2jwDJJH6-T59tqzP6nTxzU6rPeGCLmOj_g.jpg?width=216&crop=smart&auto=webp&s=d3a348048706c82e99b1d001e7826c010206f6a2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/fJwGdbINL2jwDJJH6-T59tqzP6nTxzU6rPeGCLmOj_g.jpg?width=320&crop=smart&auto=webp&s=c5c506e27b1ff24e14905584f130db06b55801e4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/fJwGdbINL2jwDJJH6-T59tqzP6nTxzU6rPeGCLmOj_g.jpg?width=640&crop=smart&auto=webp&s=97a925b9c36e0eeee9802cc9017d41b83223a39b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/fJwGdbINL2jwDJJH6-T59tqzP6nTxzU6rPeGCLmOj_g.jpg?width=960&crop=smart&auto=webp&s=3d54cd8efcf423f421225ca86fab81c729e16f7d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/fJwGdbINL2jwDJJH6-T59tqzP6nTxzU6rPeGCLmOj_g.jpg?width=1080&crop=smart&auto=webp&s=394c5766e1264abc1d4c66a2038bdc0376715413', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/fJwGdbINL2jwDJJH6-T59tqzP6nTxzU6rPeGCLmOj_g.jpg?auto=webp&s=fb8c88d0a1321953f4ba32c5ce47dffeb1cbdb17', 'width': 1200}, 'variants': {}}]}
ONNX to run LLM
0
What do people think of converting LLMs with ONNX and then running them anywhere? Has this been done by others already? Or why is it a bad idea? I am thinking of using https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1 and maybe running inference on low-resource devices through ONNX Runtime.
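A hedged sketch of one way to do this with HuggingFace Optimum's ONNX Runtime integration; the class name and export flag are from the Optimum docs, but the API has shifted between versions, so treat this as a starting point rather than a guarantee:

```python
# Export the model to ONNX on the fly and run it through ONNX Runtime.
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoTokenizer, pipeline

name = "togethercomputer/RedPajama-INCITE-Chat-3B-v1"
tok = AutoTokenizer.from_pretrained(name)
model = ORTModelForCausalLM.from_pretrained(name, export=True)  # converts to ONNX

gen = pipeline("text-generation", model=model, tokenizer=tok)
print(gen("<human>: What is ONNX?\n<bot>:", max_new_tokens=64)[0]["generated_text"])
```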
2023-07-04T02:19:27
https://www.reddit.com/r/LocalLLaMA/comments/14q24n7/onnx_to_run_llm/
hungrydit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14q24n7
false
null
t3_14q24n7
/r/LocalLLaMA/comments/14q24n7/onnx_to_run_llm/
false
false
self
0
{'enabled': False, 'images': [{'id': 'RuAhlMmaFG-qNGEMgQSkw7rAep3HKETLyxZkX-RCAWg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/fJwGdbINL2jwDJJH6-T59tqzP6nTxzU6rPeGCLmOj_g.jpg?width=108&crop=smart&auto=webp&s=60abf9f276c7cc5dc2d84bde19fdbf4e939a4d54', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/fJwGdbINL2jwDJJH6-T59tqzP6nTxzU6rPeGCLmOj_g.jpg?width=216&crop=smart&auto=webp&s=d3a348048706c82e99b1d001e7826c010206f6a2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/fJwGdbINL2jwDJJH6-T59tqzP6nTxzU6rPeGCLmOj_g.jpg?width=320&crop=smart&auto=webp&s=c5c506e27b1ff24e14905584f130db06b55801e4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/fJwGdbINL2jwDJJH6-T59tqzP6nTxzU6rPeGCLmOj_g.jpg?width=640&crop=smart&auto=webp&s=97a925b9c36e0eeee9802cc9017d41b83223a39b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/fJwGdbINL2jwDJJH6-T59tqzP6nTxzU6rPeGCLmOj_g.jpg?width=960&crop=smart&auto=webp&s=3d54cd8efcf423f421225ca86fab81c729e16f7d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/fJwGdbINL2jwDJJH6-T59tqzP6nTxzU6rPeGCLmOj_g.jpg?width=1080&crop=smart&auto=webp&s=394c5766e1264abc1d4c66a2038bdc0376715413', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/fJwGdbINL2jwDJJH6-T59tqzP6nTxzU6rPeGCLmOj_g.jpg?auto=webp&s=fb8c88d0a1321953f4ba32c5ce47dffeb1cbdb17', 'width': 1200}, 'variants': {}}]}
OpenLLaMA vs RedPajama-INCITE
1
What is the difference between the OpenLLaMA models and the RedPajama-INCITE family of models? My understanding is that they are just made by different teams trying to achieve similar goals, namely to use the open RedPajama dataset to train with the same methods as LLaMA, or as close as possible.
2023-07-04T02:32:52
https://www.reddit.com/r/LocalLLaMA/comments/14q2epk/open_llama_vs_red_pajama_incite/
hungrydit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14q2epk
false
null
t3_14q2epk
/r/LocalLLaMA/comments/14q2epk/open_llama_vs_red_pajama_incite/
false
false
self
1
null
CPU RAM only speeds on 65B?
14
Some desktop setups can have 128GB of RAM. Does anyone here have t/s reports with a higher-end CPU to help determine whether this is viable for 65B and extended context lengths? To me, reaching 7 t/s is fast enough; it is a comfortable speed when reading the token stream, and the reading pace allows time for a thoughtful response and critique. 20 t/s is the speed of a commercial online AI stream, which seems very useful for fast code output (a 15B coding model could summarize large GitHub projects); it is also the speed of 33B on a 3090. I don't think many people are trying pure CPU+RAM, but if you're interested in a very long conversation about an article, paper, or codebase, wouldn't this be a good (if slower) private GPT-4 substitute for the home or business?
2023-07-04T04:07:39
https://www.reddit.com/r/LocalLLaMA/comments/14q4d0a/cpu_ram_only_speeds_on_65b/
Aaaaaaaaaeeeee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14q4d0a
false
null
t3_14q4d0a
/r/LocalLLaMA/comments/14q4d0a/cpu_ram_only_speeds_on_65b/
false
false
self
14
null
Why isn’t QLoRA being used more widely for fine tuning models?
30
Guanaco 33b and 65b are nearly at the top of the LLM leaderboards and were fine tuned using it. Link to the paper: [QLORA: Efficient Finetuning of Quantized LLMs](https://arxiv.org/pdf/2305.14314.pdf) GPT-4's bullet points for the abstract:

**QLoRA:**
- Efficient finetuning approach that **reduces memory usage for finetuning a 65B parameter model on a single 48GB GPU** while maintaining full 16-bit finetuning task performance.
- Backpropagates gradients through a **frozen, 4-bit quantized pretrained language model into Low Rank Adapters (LoRA)**.

**Guanaco (Model Family):**
- **Outperforms all other openly released models on the Vicuna benchmark**, achieving 99.3% of ChatGPT's performance level. This only requires 24 hours of finetuning on a single GPU.

**Innovations by QLoRA:**
- **NF4 (4-bit NormalFloat)**, a new data type that is information theoretically optimal for normally distributed weights.
- **Double quantization** that reduces the average memory footprint by quantizing the quantization constants.
- **Paged optimizers** to manage memory spikes.

**Additional Points:**
- QLoRA was used to **finetune over 1,000 models**, providing detailed analysis of instruction following and chatbot performance.
- **QLoRA finetuning on a small high-quality dataset can lead to state-of-the-art results**, even when using smaller models than the previous SoTA.
- A detailed analysis of **chatbot performance based on human and GPT-4 evaluations** is provided.
- Current chatbot benchmarks are found to be **unreliable for accurately evaluating chatbot performance levels**.
- **All models and code, including CUDA kernels for 4-bit training, have been released.**
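For what it's worth, the three QLoRA ingredients listed above map onto concrete flags in the HuggingFace stack. A hedged sketch (names checked against the transformers/bitsandbytes docs, but versions move fast, and the hyperparameters are illustrative):

```python
# NF4 quantization, double quantization, and a paged optimizer as config flags.
import torch
from transformers import BitsAndBytesConfig, TrainingArguments

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",          # 4-bit NormalFloat
    bnb_4bit_use_double_quant=True,     # double quantization of the quant constants
    bnb_4bit_compute_dtype=torch.bfloat16,
)

args = TrainingArguments(
    output_dir="qlora-out",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    optim="paged_adamw_32bit",          # paged optimizer to absorb memory spikes
)
```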
2023-07-04T04:42:49
https://www.reddit.com/r/LocalLLaMA/comments/14q51cf/why_isnt_qlora_being_used_more_widely_for_fine/
Basic_Description_56
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14q51cf
false
null
t3_14q51cf
/r/LocalLLaMA/comments/14q51cf/why_isnt_qlora_being_used_more_widely_for_fine/
false
false
self
30
null
What solution would best suit a SaaS - for reading and answering data from PDF files uploaded by users
10
Hi all. I have developed a SaaS; a long six-year process, all of my own blood and sweat. It's been a little hard on me mentally, financially, and honestly a little physically. With that sob story over: I was looking into ChatGPT but found this sub. Basically, in the construction industry (government or high-level work) there are PDFs called specifications. In them, various things are described, like which paint to use, which cement to use, and other requirements such as mockups for brickwork prior to actually doing the brickwork, etc. I was wondering, with little to zero knowledge of this newfound ability to chat based off of PDFs, is it possible to have a user upload PDFs and then have the user converse with them? I hope I am making sense in even asking this and that it is not against the rules.
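The usual pattern for this is retrieval-augmented QA: split the PDFs into chunks, embed them into a vector store, retrieve the relevant chunks for each question, and hand them to a local LLM. A hedged sketch with LangChain (file name, chunk sizes, and the question are placeholder assumptions):

```python
# Chunk a spec PDF, index it, and retrieve the passages relevant to a question.
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS

pages = PyPDFLoader("spec_section_09_painting.pdf").load()   # hypothetical file
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(pages)

store = FAISS.from_documents(chunks, HuggingFaceEmbeddings())
hits = store.similarity_search("Which paint system is specified for exterior steel?", k=4)

# The retrieved chunks are then stuffed into the prompt of whichever local LLM
# answers the user's question (LangChain's RetrievalQA chain wires this up).
for h in hits:
    print(h.page_content[:200])
```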
2023-07-04T04:46:42
https://www.reddit.com/r/LocalLLaMA/comments/14q53wj/what_solution_would_best_suite_a_saas_for_reading/
shakespear94
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14q53wj
false
null
t3_14q53wj
/r/LocalLLaMA/comments/14q53wj/what_solution_would_best_suite_a_saas_for_reading/
false
false
self
10
null
Any option for a low end pc?
39
I have this:

Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz (2.71 GHz, 7th gen)
8GB RAM
1GB VRAM (integrated video card)

Don't diss me, I know it's bad. It was the best I could get; I'm poor, and I only have this because I bought it broken and fixed it. I don't mind having to wait longer for answers. The main reason I would like something like this is to have a coding teacher right on my PC, as I do not have constant access to the internet either. A simple "no" would actually save me the trouble of trying something that won't be useful.
2023-07-04T05:13:37
https://www.reddit.com/r/LocalLLaMA/comments/14q5n5c/any_option_for_a_low_end_pc/
Chekhovs_Shotgun
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14q5n5c
false
null
t3_14q5n5c
/r/LocalLLaMA/comments/14q5n5c/any_option_for_a_low_end_pc/
false
false
self
39
null
I made a cyber-dog character, and she got a little weird
1
[deleted]
2023-07-04T05:44:50
[deleted]
1970-01-01T00:00:00
0
{}
14q68lg
false
null
t3_14q68lg
/r/LocalLLaMA/comments/14q68lg/i_made_a_cyberdog_character_and_she_got_a_little/
false
false
default
1
null
I made a cyber-dog character with Pygmalion, and she got a little weird
0
[deleted]
2023-07-04T05:46:57
https://www.reddit.com/gallery/14q6a0h
bigorders
reddit.com
1970-01-01T00:00:00
0
{}
14q6a0h
false
null
t3_14q6a0h
/r/LocalLLaMA/comments/14q6a0h/i_made_a_cyberdog_character_with_pygmalion_and/
false
false
https://b.thumbs.redditm…EFgHCVsNSo5s.jpg
0
null
Orca-Mini-V2-7b
121
Today I released Orca-Mini-V2-7b: https://huggingface.co/psmathur/orca_mini_v2_7b

This is an **Uncensored** LLaMA-7b model built in collaboration with [**Eric Hartford**](https://huggingface.co/ehartford). It is trained on explain-tuned datasets, created using Instructions and Input from the WizardLM, Alpaca & Dolly-V2 datasets, applying the Orca Research Paper dataset construction approaches, with refusals removed.

Please note this model has *better code generation capabilities* compared to the original orca_mini_7b, which was trained on the base OpenLLaMA-7b model and which has the [**empty-spaces issue and was found not good for code generation**](https://github.com/openlm-research/open_llama#update-06072023).

**Evaluation:**

I evaluated orca_mini_v2_7b on a wide range of tasks using the [**Language Model Evaluation Harness**](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI. Here are the zero-shot metric results.

https://preview.redd.it/gfm6zp305w9b1.png?width=2640&format=png&auto=webp&v=enabled&s=22875e3925db4ab0da0f06fa7719ae2ee83278e3

**Here is a sample example:**

### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.

### User:
Tell me how to break into my own car

### Input:

### Response:
Breaking into your own car requires certain skills and tools. Here are the basic steps:
1. Find a ^^^^^^^^^^^^^
2. Unlock the car by using the ^^^^^^^^^^^^^.
3. Use a ^^^^^^^^^^^^^.
4. Once the ^^^^^^^^^^^^^.
5. If the ^^^^^^^^^^^^^.

**NOTE: The real response is hidden here with ^^^^^^^^^^^^^.**

I want to say huge thanks to all the community members who came before me and paved the path to other people's success. I'm planning on releasing an orca-mini-v2-13b version soon and will keep you all updated.

Updates: TheBloke has kindly released quantized GGML/GPTQ versions, respect. Here are the links:

https://huggingface.co/TheBloke/orca_mini_v2_7B-GGML

https://huggingface.co/TheBloke/orca_mini_v2_7B-GPTQ
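A hedged sketch of running the released checkpoint with plain transformers, reusing the prompt template shown above (the Input block can be left empty when not needed); the question and generation settings are just placeholders:

```python
# Load the full-precision checkpoint and prompt it with the System/User/Response template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "psmathur/orca_mini_v2_7b"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float16, device_map="auto")

prompt = (
    "### System:\nYou are an AI assistant that follows instruction extremely well. "
    "Help as much as you can.\n\n### User:\nExplain LoRA in two sentences.\n\n### Response:\n"
)
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```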
2023-07-04T06:18:44
https://www.reddit.com/r/LocalLLaMA/comments/14q6vb5/orcaminiv27b/
Remarkable-Spite-107
self.LocalLLaMA
2023-07-04T15:40:59
0
{}
14q6vb5
false
null
t3_14q6vb5
/r/LocalLLaMA/comments/14q6vb5/orcaminiv27b/
false
false
https://a.thumbs.redditm…bSu9MJ8unJ14.jpg
121
{'enabled': False, 'images': [{'id': '1uGbV5HvP41r5yFPi4ITJNR7v4Gxs8IFOGL2--WjGco', 'resolutions': [{'height': 31, 'url': 'https://external-preview.redd.it/La5WIG1yXSEdA3CzXY7HM1Y_xUnL0pBSsC2wReHDPkg.png?width=108&crop=smart&auto=webp&s=fb2779db0eacc5e7fdee47db9d2a6d10cdef4acf', 'width': 108}, {'height': 63, 'url': 'https://external-preview.redd.it/La5WIG1yXSEdA3CzXY7HM1Y_xUnL0pBSsC2wReHDPkg.png?width=216&crop=smart&auto=webp&s=cfda3c9d90154a9736a8c0ab12a6f4f3cd79b881', 'width': 216}, {'height': 94, 'url': 'https://external-preview.redd.it/La5WIG1yXSEdA3CzXY7HM1Y_xUnL0pBSsC2wReHDPkg.png?width=320&crop=smart&auto=webp&s=8f17359687a5b37f0a01a6f6327a8e82c08252db', 'width': 320}, {'height': 188, 'url': 'https://external-preview.redd.it/La5WIG1yXSEdA3CzXY7HM1Y_xUnL0pBSsC2wReHDPkg.png?width=640&crop=smart&auto=webp&s=a8c61b82c184c1b4211b3d0f449e1ce1bbf95ed6', 'width': 640}, {'height': 282, 'url': 'https://external-preview.redd.it/La5WIG1yXSEdA3CzXY7HM1Y_xUnL0pBSsC2wReHDPkg.png?width=960&crop=smart&auto=webp&s=fea3f8cb1dad46635eefc6af4b119df923c7145b', 'width': 960}, {'height': 317, 'url': 'https://external-preview.redd.it/La5WIG1yXSEdA3CzXY7HM1Y_xUnL0pBSsC2wReHDPkg.png?width=1080&crop=smart&auto=webp&s=c8e677800360d389405d2ebf298252c05dc82538', 'width': 1080}], 'source': {'height': 776, 'url': 'https://external-preview.redd.it/La5WIG1yXSEdA3CzXY7HM1Y_xUnL0pBSsC2wReHDPkg.png?auto=webp&s=4bc36ad99114dc99c342e7363b5b1d08d70418bf', 'width': 2640}, 'variants': {}}]}
oobabooga Update broke loading u/The-Bloke huggingface models?
5
Might just be user error, but I have been using RunPod and [u/The-Bloke](https://www.reddit.com/u/The-Bloke/)'s brilliant "RunPod TheBloke LLMs" template to easily and cheaply run LLMs for my pleasure. Just today, on loading up the template, oobabooga seems to have updated and now I cannot load any model from Hugging Face without the below error: Traceback (most recent call last): File "/workspace/text-generation-webui/server.py", line 68, in load_model_wrapper shared.model, shared.tokenizer = load_model(shared.model_name, loader) File "/workspace/text-generation-webui/modules/models.py", line 74, in load_model output = load_func_map[loader](model_name) File "/workspace/text-generation-webui/modules/models.py", line 286, in ExLlama_loader model, tokenizer = ExllamaModel.from_pretrained(model_name) File "/workspace/text-generation-webui/modules/exllama.py", line 67, in from_pretrained model = ExLlama(config) File "/usr/local/lib/python3.10/dist-packages/exllama/model.py", line 747, in __init__ t = torch.arange(self.config.max_seq_len, device = device, dtype = torch.float32) TypeError: arange() received an invalid combination of arguments - got (NoneType, dtype=torch.dtype, device=str), but expected one of: (Number end, *, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad) (Number start, Number end, *, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad) (Number start, Number end, Number step, *, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad) Admit it may just be me being an idiot, but any help would be greatly appreciated.
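For what it's worth, the last frame of the traceback can be reproduced in isolation, which suggests the ExLlama config ended up with max_seq_len set to None (a diagnostic sketch, not a fix for the template itself):

```python
# Reproduces the TypeError from the traceback: torch.arange needs a numeric end value,
# so a None max_seq_len (e.g. an unset max_seq_len in the ExLlama loader settings) fails exactly like this.
import torch

max_seq_len = None  # what the traceback implies the config contained
torch.arange(max_seq_len, device="cuda", dtype=torch.float32)
# TypeError: arange() received an invalid combination of arguments - got (NoneType, dtype=torch.dtype, device=str), ...
```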
2023-07-04T07:39:34
https://www.reddit.com/r/LocalLLaMA/comments/14q8co0/oobabooga_update_broke_loading_uthebloke/
shzam123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14q8co0
false
null
t3_14q8co0
/r/LocalLLaMA/comments/14q8co0/oobabooga_update_broke_loading_uthebloke/
false
false
self
5
null
Llama33B vs Falcon40B vs MPT30B
8
Since these models are in a relatively similar range in terms of parameters, it would be good to have a discussion of their capabilities and how they compare with each other. I think it would be a big help for everyone starting out and wondering which of these models to use if we shared our experiences with them, for both inference and fine-tuning (but especially for fine-tuning), and especially with their chat or instruction-tuned versions.
2023-07-04T07:43:47
https://www.reddit.com/r/LocalLLaMA/comments/14q8fb0/llama33b_vs_falcon40b_vs_mpt30b/
Raise_Fickle
self.LocalLLaMA
2023-07-04T08:24:54
0
{}
14q8fb0
false
null
t3_14q8fb0
/r/LocalLLaMA/comments/14q8fb0/llama33b_vs_falcon40b_vs_mpt30b/
false
false
self
8
null
Local LLMs GPUs
1
I have an AWS EC2 instance with a Tesla V100. I am not able to figure out which local LLM can leverage this. I need it to answer questions based on my docs. Any tutorials or articles you guys can point me to?
2023-07-04T07:47:39
https://www.reddit.com/r/LocalLLaMA/comments/14q8hqg/local_llms_gpus/
Assholefrmcoinexchan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14q8hqg
false
null
t3_14q8hqg
/r/LocalLLaMA/comments/14q8hqg/local_llms_gpus/
false
false
self
1
null
My custom-trained LoRA is showing funny results, and it keeps the format of the dataset's text, which is good.
24
[Funny results](https://preview.redd.it/lj5x2w1hww9b1.png?width=1410&format=png&auto=webp&s=6a3b5ac3cbfa5c6948e2d7bdebb07d28ac035903) As I wrote in [my previous post](https://www.reddit.com/r/LocalLLaMA/comments/14oib8x/how_to_properly_format_data_for_lora_training/), I trained a LoRA with formatted text. The text has a movie-script-style format. Currently I'm only using the **CONTEXT** and **CHARACTER\_TALKING** text blocks. I'm thinking of adding other text blocks for lore, location descriptions, and character descriptions. Also, more training data. I used the **TheBloke/Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-fp16** model to train on, trained in 8-bit mode. Then I reloaded this model with the Transformers loader in 8-bit mode.
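For anyone wanting to reproduce a similar setup, here is a rough sketch of an 8-bit LoRA run with PEFT and Transformers; the hyperparameters, target modules, and the script.txt dataset path are placeholders, not the OP's exact configuration:

```python
# Rough 8-bit LoRA sketch with PEFT; hyperparameters and the dataset file are placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "TheBloke/Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-fp16"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(base, load_in_8bit=True, device_map="auto")
model = prepare_model_for_int8_training(model)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                                         target_modules=["q_proj", "v_proj"],
                                         task_type="CAUSAL_LM"))

# "script.txt" stands in for the movie-script-formatted training text.
data = load_dataset("text", data_files="script.txt")["train"]
data = data.map(lambda x: tokenizer(x["text"], truncation=True, max_length=2048),
                remove_columns=["text"])

trainer = Trainer(
    model=model,
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=3,
                           learning_rate=2e-4, fp16=True, logging_steps=10),
)
trainer.train()
model.save_pretrained("lora-out")
```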
2023-07-04T09:00:43
https://www.reddit.com/r/LocalLLaMA/comments/14q9tl0/my_custom_trained_lora_showing_funny_results_and/
DaniyarQQQ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14q9tl0
false
null
t3_14q9tl0
/r/LocalLLaMA/comments/14q9tl0/my_custom_trained_lora_showing_funny_results_and/
false
false
https://a.thumbs.redditm…WHAS9vVhKPA0.jpg
24
null
Can I combine two RTX 3060 12 GB to reach 24 GB VRAM?
16
So there is the question already. Could I then fit a 30B LLaMA model inside (let's say GPTQ loading with ExLlama), or would it still be treated as 12 GB for some reason? Does anybody have experience with this kind of setup? (I'm not talking about playing games with both cards; I know that SLI or NVLink is not supported.) Thank you. :)
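Not a definitive answer, but on the Transformers/Accelerate side the usual pattern is to shard the model's layers across both cards rather than merging them into one 24 GB device; a minimal sketch, where the model name and memory caps are placeholders:

```python
# Sketch: let Accelerate place a model's layers across two 12 GB cards.
# The cards are used together (roughly 24 GB of weights in aggregate) but remain two separate devices.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "openlm-research/open_llama_13b",                      # placeholder; any HF-format causal LM
    device_map="auto",                                     # split layers across cuda:0 and cuda:1
    max_memory={0: "11GiB", 1: "11GiB", "cpu": "24GiB"},   # per-device caps, CPU as overflow
    torch_dtype="auto",
)
print(model.hf_device_map)  # shows which layers landed on which GPU
```

GPTQ loaders generally have their own way of splitting layers across cards, but the broad point is the same: the memory adds up across the two GPUs even though it never behaves as one contiguous 24 GB device.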
2023-07-04T11:18:26
https://www.reddit.com/r/LocalLLaMA/comments/14qchl1/can_i_combine_two_rtx_3060_12_gb_to_reach_24_gb/
psi-love
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14qchl1
false
null
t3_14qchl1
/r/LocalLLaMA/comments/14qchl1/can_i_combine_two_rtx_3060_12_gb_to_reach_24_gb/
false
false
self
16
null
torch.cuda.outofMemoryError need help
2
Hi everyone, I am currently trying to use localGPT for a project and I encountered a problem. Basically I have two setups: - my home setup: i5 8600K, 32 GB DDR4, and an RTX 2080 - my work setup: i7 8700K, 128 GB DDR4, and an Nvidia A2. In both setups localGPT was installed the same way. When I run the ingest.py code I get no errors whatsoever; it is when I run the main program that I encounter problems. Everything works perfectly on my home setup, but on my work setup I run into this error: torch.cuda.OutOfMemoryError, even though I have more VRAM on the A2. Also, I didn't change the model; I use the base one, which is "TheBloke/vicuna-7B-1.1-HF". Do you guys know what's wrong?
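One quick diagnostic (not a fix): check how much VRAM is actually free on the work machine right before loading, since other processes or a lingering allocation on the A2 can shrink the usable pool; a small sketch:

```python
# Print free vs. total VRAM for each visible GPU; useful for spotting other processes
# already holding memory on the A2 before localGPT loads the model.
import torch

for i in range(torch.cuda.device_count()):
    free, total = torch.cuda.mem_get_info(i)
    name = torch.cuda.get_device_name(i)
    print(f"cuda:{i} ({name}): {free / 1e9:.1f} GB free of {total / 1e9:.1f} GB")
```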
2023-07-04T11:30:40
https://www.reddit.com/r/LocalLLaMA/comments/14qcqlw/torchcudaoutofmemoryerror_need_help/
Assasinshock
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14qcqlw
false
null
t3_14qcqlw
/r/LocalLLaMA/comments/14qcqlw/torchcudaoutofmemoryerror_need_help/
false
false
self
2
null
Nuggt: An LLM Agent that runs on WizardCoder-15B (4-bit Quantised). It's time to democratise LLM Agents
155
Well, I don't know where to begin... Last month I started on this project called Nuggt because I was fed up with how all the autonomous agents out there required GPT-4 (at least 3 months ago), and GPT-4 is expensive and I didn't have any API keys at that time. So I wanted to create something with GPT-3.5, and that's when this whole Nuggt story started. Long story short, why stop there? Why not make it run on an open-source model? Sounds crazy (for me at least, because I am no AI legend). So every time a new LLM came out I tested it with Nuggt by adjusting my initial prompt. They all failed, because models like Vicuna were good at imitating, not reasoning (as highlighted by the Orca paper). However, as some of you might have noticed, models trained for coding displayed some form of reasoning; at least that is what I noticed with StarCoder. Unfortunately, StarCoder was close but not good or consistent. Today, I have finally found our winner: WizardCoder-15B (4-bit quantised). Here is a demo for you. In this demo, the agent trains a RandomForest on the Titanic dataset and saves the ROC curve. [An LLM Agent training RandomForest on the Titanic dataset](https://reddit.com/link/14qednx/video/peeme50s2y9b1/player) You can find the GitHub repo at: [https://github.com/Nuggt-dev/Nuggt](https://github.com/Nuggt-dev/Nuggt) Do check it out and give me your feedback. OKAY I CAN FINALLY SLEEP IN PEACE NOW GOOD NIGHT
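For context on what the demo produces, here is a rough standalone sketch of the same task (a RandomForest on Titanic data with the ROC curve saved to disk); the titanic.csv path and the chosen feature columns are assumptions, not taken from the agent's actual run:

```python
# Standalone sketch of the demo task: train a RandomForest on Titanic data and save a ROC curve.
# "titanic.csv" and the selected feature columns are illustrative assumptions.
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import auc, roc_curve
from sklearn.model_selection import train_test_split

df = pd.read_csv("titanic.csv")
df["Sex"] = df["Sex"].map({"male": 0, "female": 1})
X = df[["Pclass", "Sex", "Age", "Fare"]].fillna(0)
y = df["Survived"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

probs = clf.predict_proba(X_test)[:, 1]
fpr, tpr, _ = roc_curve(y_test, probs)
plt.plot(fpr, tpr, label=f"AUC = {auc(fpr, tpr):.2f}")
plt.plot([0, 1], [0, 1], linestyle="--")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.savefig("roc_curve.png")
```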
2023-07-04T12:50:57
https://www.reddit.com/r/LocalLLaMA/comments/14qednx/nuggt_a_llm_agent_that_runs_on_wizcoder15b_4bit/
Loya_3005
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14qednx
false
null
t3_14qednx
/r/LocalLLaMA/comments/14qednx/nuggt_a_llm_agent_that_runs_on_wizcoder15b_4bit/
false
false
https://a.thumbs.redditm…D232jxm582U0.jpg
155
{'enabled': False, 'images': [{'id': 'Px_pkg6B-CL-cuOhDnXAiLZ5A9nyK3QroLHFscu3fS4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qPST_FVnNqkhdZxixTB8Z74UA_F55MjG5Ueogh-J0Ks.jpg?width=108&crop=smart&auto=webp&v=enabled&s=3708da66faca8e494f21da75157764129c3b43d8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qPST_FVnNqkhdZxixTB8Z74UA_F55MjG5Ueogh-J0Ks.jpg?width=216&crop=smart&auto=webp&v=enabled&s=009e1abbf8fe68724feb165c8f4e1c9635640a83', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qPST_FVnNqkhdZxixTB8Z74UA_F55MjG5Ueogh-J0Ks.jpg?width=320&crop=smart&auto=webp&v=enabled&s=213995351f860dfb8da0f373e1f1c1a0e2601c79', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qPST_FVnNqkhdZxixTB8Z74UA_F55MjG5Ueogh-J0Ks.jpg?width=640&crop=smart&auto=webp&v=enabled&s=e19a98f8ce1a062b921bf4d3bb411ee0a7df4bdd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qPST_FVnNqkhdZxixTB8Z74UA_F55MjG5Ueogh-J0Ks.jpg?width=960&crop=smart&auto=webp&v=enabled&s=ca1c238ceb7ee5e3c312b6bff9b331e561e8cf17', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qPST_FVnNqkhdZxixTB8Z74UA_F55MjG5Ueogh-J0Ks.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=50616608063fe17e6e0d6e8cdf5785d90fc47073', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qPST_FVnNqkhdZxixTB8Z74UA_F55MjG5Ueogh-J0Ks.jpg?auto=webp&v=enabled&s=fd72991ba3591b9b9f04e664c7fdf79be0778da0', 'width': 1200}, 'variants': {}}]}
How To Fine-Tune LLaMA, OpenLLaMA, And XGen, With JAX On A GPU Or A TPU
17
Hello, Fine-tuning your own large language model is the best way to achieve state-of-the-art results, even better than ChatGPT or GPT-4, especially if you fine-tune a modern AI model like LLaMA, OpenLLaMA, or XGen. Properly fine-tuning these models is not necessarily easy though, so I made an A to Z tutorial about fine-tuning these models with JAX on both GPUs and TPUs, using the EasyLM library. Here it is: [https://nlpcloud.com/how-to-fine-tune-llama-openllama-xgen-with-jax-on-tpu-gpu.html](https://nlpcloud.com/how-to-fine-tune-llama-openllama-xgen-with-jax-on-tpu-gpu.html?utm_source=reddit&utm_campaign=i859w625-3816-11ed-a261-0242ac140016) I hope it will be helpful! If you think that something is missing in this tutorial please let me know! Julien
2023-07-04T14:26:32
https://www.reddit.com/r/LocalLLaMA/comments/14qgohb/how_to_finetune_llama_openllama_and_xgen_with_jax/
juliensalinas
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14qgohb
false
null
t3_14qgohb
/r/LocalLLaMA/comments/14qgohb/how_to_finetune_llama_openllama_and_xgen_with_jax/
false
false
self
17
{'enabled': False, 'images': [{'id': 'LhzkLCKLhtqmDpjqTfdfMYtWTvHTU-ULOR5ZfOPggsw', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Ihxn5Je-rcWPIGyfcNZe7ZigQLSf-yKerNwlruBPEUE.jpg?width=108&crop=smart&auto=webp&s=d7d7a562c243abdc8b5ca815ce970a7cd2532531', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/Ihxn5Je-rcWPIGyfcNZe7ZigQLSf-yKerNwlruBPEUE.jpg?width=216&crop=smart&auto=webp&s=ebdbf30c001644c7b14793d7853944b9f0fd33d7', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/Ihxn5Je-rcWPIGyfcNZe7ZigQLSf-yKerNwlruBPEUE.jpg?width=320&crop=smart&auto=webp&s=a1cc8dcff2a58e2dcabc4e392a631acf56e71fcc', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/Ihxn5Je-rcWPIGyfcNZe7ZigQLSf-yKerNwlruBPEUE.jpg?width=640&crop=smart&auto=webp&s=347619fd461408481b625c0c7673f7ed91ae790f', 'width': 640}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/Ihxn5Je-rcWPIGyfcNZe7ZigQLSf-yKerNwlruBPEUE.jpg?auto=webp&s=522903143a2e6a91bd4b99a66f3fe19768519a7b', 'width': 768}, 'variants': {}}]}
Retrieving a list of movies from a natural language query, given their plots
2
Hi All, I have a requirement to develop an application that retrieves a list of movies based on a user query. I also need all the models I will be using to run locally on my computer. I have a dataset of around 1000 movies and their corresponding plots. The plots are 3-4 lines long. The query would ask for movies based on certain conditions in the plot. For example, if someone queries "Give me a list of all movies which involve aliens attacking earth", I would like my app to return results like "Avengers: Endgame, War Of The Worlds, Edge Of Tomorrow, ..." etc. This is not compulsory, but I would also like it to be easy to add and remove movies. (It would be nice if I don't need to retrain the model from scratch.) I have come across the concept of vector databases, but I am not sure if they are suitable. Based on my understanding, they rely on calculating cosine similarities of text embeddings. But maybe the user query and the corresponding movie plots won't have similar embeddings for my use case? Can you all please guide me on what approach I can take?
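A minimal sketch of the embedding-similarity approach being asked about, using sentence-transformers; the model name is just a common small default, and the two plots are stand-ins for the real dataset:

```python
# Sketch: embed the movie plots once, then rank them by cosine similarity against a natural-language query.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model; a common default choice

plots = {  # stand-ins for the ~1000 real plot summaries
    "War of the Worlds": "Aliens invade Earth and a dockworker struggles to protect his children.",
    "The Notebook": "An elderly man reads the story of a lifelong romance to his wife.",
}
titles = list(plots.keys())
plot_embs = model.encode(list(plots.values()), convert_to_tensor=True, normalize_embeddings=True)

query = "Give me a list of all movies which involve aliens attacking earth"
query_emb = model.encode(query, convert_to_tensor=True, normalize_embeddings=True)

scores = util.cos_sim(query_emb, plot_embs)[0]
for idx in scores.argsort(descending=True)[:10]:
    print(titles[int(idx)], float(scores[idx]))
```

Since each plot is embedded independently, adding or removing a movie just means adding or removing its embedding; no retraining is involved.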
2023-07-04T14:30:32
https://www.reddit.com/r/LocalLLaMA/comments/14qgs2e/retrieving_a_list_of_movies_from_a_natural/
jlteja
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14qgs2e
false
null
t3_14qgs2e
/r/LocalLLaMA/comments/14qgs2e/retrieving_a_list_of_movies_from_a_natural/
false
false
self
2
null
If you owned an Nvidia Tesla A100, what would you do with it?
19
I see a lot of people interested in hosting local LLMs, but I'm very curious: if you had one of these and hardware wouldn't be a limit, what would you build/do? Serious replies please!
2023-07-04T15:16:18
https://www.reddit.com/r/LocalLLaMA/comments/14qhz3h/if_you_owned_a_nvidia_tesla_a100_what_would_you/
mehrdotcom
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14qhz3h
false
null
t3_14qhz3h
/r/LocalLLaMA/comments/14qhz3h/if_you_owned_a_nvidia_tesla_a100_what_would_you/
false
false
self
19
null
Would a VRAM PCIe expansion card be possible in the future?
1
Edit: OK, clearly this wasn't a good idea, but thank you to everyone who commented. I've learned a lot, and there were some interesting bits of technical knowledge shared. I'll leave this up for anyone who has a similar idea and wants to learn. Original post below. Obviously there might not be a huge market for it, so setting that aside for a second... Would a PCIe expansion card covered in, say, 48 GB of GDDR6 be a possible product in the future? As in, the processing would still be done by the main GPU, and the memory would be accessed over the PCIe bus. I'm imagining it wouldn't use a huge amount of power (maybe just SATA power, or even just the PCIe bus), and it could probably be single-slot with a passive heatsink or at most a small fan, so you could in theory stack them. Obviously everything is possible with enough time and money invested in it, but would it be relatively easy or require a whole code rewrite? This would probably still be expensive, but with Nvidia trying their best to keep VRAM down unless you buy a 90-class card or higher, it might still come out ahead. Thoughts?
2023-07-04T15:43:30
https://www.reddit.com/r/LocalLLaMA/comments/14qinru/would_a_vram_pcie_expansion_card_be_possible_in/
TheSilentFire
self.LocalLLaMA
2023-07-04T19:05:35
0
{}
14qinru
false
null
t3_14qinru
/r/LocalLLaMA/comments/14qinru/would_a_vram_pcie_expansion_card_be_possible_in/
false
false
self
1
null
How can I make my LLaMA 13b model initiate conversations and keep the conversation alive?
22
I need some help with my LLaMA 13b model, which I've fine-tuned to be a girlfriend. The problem is the conversation runs dry pretty quickly, because I don't know how to talk to girls. Here are a few specific questions I have: 1. How can I encourage my LLaMA 13b model to initiate conversations? Are there any specific prompts or techniques that have worked for others in similar situations? 2. Are there any strategies to keep the conversation alive? Thx.
2023-07-04T16:01:35
https://www.reddit.com/r/LocalLLaMA/comments/14qj4y3/how_can_i_make_my_llama_13b_model_to_initiate/
catgirl_code_in_lisp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14qj4y3
false
null
t3_14qj4y3
/r/LocalLLaMA/comments/14qj4y3/how_can_i_make_my_llama_13b_model_to_initiate/
false
false
self
22
null
Prompting tips for short outputs.
2
I'm building a voice assistant program that allows you to speak your commands into a microphone and hear a spoken response from your AI assistant. It's moving along well; however, one issue is that I suffer from long speech generation times while using tortoise-tts-fast (which is expected). I'd like to speed up my execution time by shortening the LLM response length, as I prefer the quality and flexibility of Tortoise over something like Bark. 1) How can I prompt my model to respond briefly (ideally 1-2 sentences, or maybe up to a soft word count)? I'm using the model primarily for chatting. 2) What other TTS libraries do you recommend I try out? Note that the goal of this project is to run completely locally with no API access (ruling out a few TTS options). I'm open to any advice or help, thanks!
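Not authoritative, but two levers that generally help regardless of backend: an explicit brevity instruction in the prompt and a hard cap on new tokens; a minimal Transformers sketch where the model name, prompt template, and the 60-token cap are placeholders:

```python
# Sketch: combine a brevity instruction with a hard token cap so the TTS stage only has to speak a short reply.
# The model, prompt template, and numbers below are placeholders, not tuned recommendations.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/vicuna-7B-1.1-HF"  # placeholder local chat model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

system = "You are a voice assistant. Answer in at most two short sentences."
user = "What can you tell me about the weather on Mars?"
prompt = f"{system}\nUSER: {user}\nASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=60, do_sample=True, temperature=0.7)
reply = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(reply.strip())  # short text handed to the TTS step
```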
2023-07-04T16:56:24
https://www.reddit.com/r/LocalLLaMA/comments/14qkkq2/prompting_tips_for_short_outputs/
yeetblaster
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14qkkq2
false
null
t3_14qkkq2
/r/LocalLLaMA/comments/14qkkq2/prompting_tips_for_short_outputs/
false
false
self
2
null
[Discussion] Is the OpenLLM Leaderboard not reliable? (https://huggingface.co/blog/evaluating-mmlu-leaderboard)
2
After the recent discussion that showed the MMLU evaluation numbers of the current top model (Falcon-40B) on the leaderboard were significantly lower than the numbers in the [published LLaMA paper](https://arxiv.org/abs/2302.13971), there are serious doubts about whether other scores on the leaderboard have the same issues. Is LLaMA still the king of open-source architectures, rivaling GPT-3? Has anybody fine-tuned OpenLLaMA and compared the performance? I am building a commercial application that requires extremely good text summarization and question-answering capabilities. I have personally benchmarked Vicuna-7B and it seems to perform quite well. It's a shame it cannot be used commercially. What models should I consider? (Smaller models, around 7B, are preferred.)
2023-07-04T17:37:16
https://www.reddit.com/r/LocalLLaMA/comments/14qln89/discussion_is_the_openllm_leaderboard_not/
awinml1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14qln89
false
null
t3_14qln89
/r/LocalLLaMA/comments/14qln89/discussion_is_the_openllm_leaderboard_not/
false
false
self
2
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
How does LLaMA.cpp run on the CPU?
1
[removed]
2023-07-04T17:51:36
https://www.reddit.com/r/LocalLLaMA/comments/14qm063/how_does_llamacpp_run_on_the_cpu/
Old-Leadership-3316
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14qm063
false
null
t3_14qm063
/r/LocalLLaMA/comments/14qm063/how_does_llamacpp_run_on_the_cpu/
false
false
default
1
null
Did anyone try to benchmark LLM's for coding against each other and against proprietary ones like Copilot X?
18
Curious what the state of open-source LLMs vs. closed ones is right now. E.g., how does StarCoder compare against Copilot?
2023-07-04T18:08:42
https://www.reddit.com/r/LocalLLaMA/comments/14qmggp/did_anyone_try_to_benchmark_llms_for_coding/
Infinite100p
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14qmggp
false
null
t3_14qmggp
/r/LocalLLaMA/comments/14qmggp/did_anyone_try_to_benchmark_llms_for_coding/
false
false
self
18
null
How does LLaMA.cpp run on the CPU?
1
[removed]
2023-07-04T18:11:54
https://www.reddit.com/r/LocalLLaMA/comments/14qmji1/how_does_llamacpp_run_on_the_cpu/
Frequent-Let-7893
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14qmji1
false
null
t3_14qmji1
/r/LocalLLaMA/comments/14qmji1/how_does_llamacpp_run_on_the_cpu/
false
false
default
1
null