Dataset schema (each post below is one pipe-separated row in this column order):

* title: string (length 1-300)
* score: int64 (0-8.54k)
* selftext: string (length 0-40k)
* created: timestamp[ns]
* url: string (length 0-780)
* author: string (length 3-20)
* domain: string (length 0-82)
* edited: timestamp[ns]
* gilded: int64 (0-2)
* gildings: string (7 classes)
* id: string (length 7)
* locked: bool (2 classes)
* media: string (length 646-1.8k, nullable)
* name: string (length 10)
* permalink: string (length 33-82)
* spoiler: bool (2 classes)
* stickied: bool (2 classes)
* thumbnail: string (length 4-213)
* ups: int64 (0-8.54k)
* preview: string (length 301-5.01k, nullable)
If I hosted a ChatGPT-like website running an uncensored model on my RTX 3080 at home, is that legal? | 1 | For people without expensive computers to have access to an uncensored model. I can probably make it work self-hosted at home, as any GPU cloud is very pricey | 2023-08-11T22:51:06 | https://www.reddit.com/r/LocalLLaMA/comments/15onfng/if_i_hosted_a_chatgpt_like_website_running_a/ | jptboy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15onfng | false | null | t3_15onfng | /r/LocalLLaMA/comments/15onfng/if_i_hosted_a_chatgpt_like_website_running_a/ | false | false | self | 1 | null |
Documentation based qa | 3 | 2023-08-11T23:16:59 | https://huggingface.co/Arc53/docsgpt-7b-falcon | ale10xtu | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 15oo2a0 | false | null | t3_15oo2a0 | /r/LocalLLaMA/comments/15oo2a0/documentation_based_qa/ | false | false | 3 | {'enabled': False, 'images': [{'id': '9xTJELL1YL4PyriMXYWRWD3cUAbTClyF15_unUlVjVQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MAdI_F3Sjyc65tBJxktzGh8eySyEKgp7Np0BU1nEI_o.jpg?width=108&crop=smart&auto=webp&s=398d02814010f50239d36285cce603a9956e5ce6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/MAdI_F3Sjyc65tBJxktzGh8eySyEKgp7Np0BU1nEI_o.jpg?width=216&crop=smart&auto=webp&s=c613c8979bcf43402af4901fdc8156a3f611c490', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/MAdI_F3Sjyc65tBJxktzGh8eySyEKgp7Np0BU1nEI_o.jpg?width=320&crop=smart&auto=webp&s=670b9c1adbc0fed8074ee29e2bd406b0b7020aa1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/MAdI_F3Sjyc65tBJxktzGh8eySyEKgp7Np0BU1nEI_o.jpg?width=640&crop=smart&auto=webp&s=69cf0de3bac96a35ffb4bd30aae6064bffe844ec', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/MAdI_F3Sjyc65tBJxktzGh8eySyEKgp7Np0BU1nEI_o.jpg?width=960&crop=smart&auto=webp&s=f868a22c69d74d6e6c59860eccef9f753299edc1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/MAdI_F3Sjyc65tBJxktzGh8eySyEKgp7Np0BU1nEI_o.jpg?width=1080&crop=smart&auto=webp&s=a52c4898cf5d426d686010532a09d408d73000b7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/MAdI_F3Sjyc65tBJxktzGh8eySyEKgp7Np0BU1nEI_o.jpg?auto=webp&s=e4f15d7baf297e601bd2eb8e04bc505d16cb0b28', 'width': 1200}, 'variants': {}}]} |
||
Our Workflow for a Custom Question-Answering App | 1 | Live-demoed our MVP custom answering app today. It's a Falcon-7b model fine-tuned on an instruction set generated from one of the military services' doctrine and policies. That's then pointed at a vector database with the same publications indexed via LlamaIndex, with prompt engineering to force answers from context only, and set to "verbose" (links to the context chunks).
Our workflow:
1. Collected approx 4k unclassified/non-CUI pubs from one of the services.
2. Chunked each document into 2k tokens, and then ran them up against Davinci in our Azure enclave, with prompts generating questions.
3. Re-ran the same chunks to generate answers to those questions
4. Collated Q&A to create an instruct dataset (51k) in the target domain's discourse.
5. LoRA fine-tuned Falcon-7b on the Q&A dataset
6. Built a vector database (Chroma DB) on the same 4k publications
7. Connected a simple web UI to LlamaIndex that passes natural-language questions as vectors to the vector DB, then returns the 4 nearest-neighbor chunks ("context") and the question to the fine-tuned LLM.
8. Prompt includes language forcing the LLM to answer from context only.
9. Llama-Index returns the answer to the UI, along with link to the hosted context chunks.
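For reference, a rough sketch of steps 6-8 with Chroma queried directly; the collection name and the ask_llm() helper are placeholders, and the real app goes through LlamaIndex rather than this hand-rolled version:

    # Hypothetical sketch: retrieve the 4 nearest chunks, then force a context-only answer.
    import chromadb

    client = chromadb.Client()
    collection = client.get_or_create_collection("pubs")   # the 4k publications, already chunked

    def answer(question: str) -> str:
        hits = collection.query(query_texts=[question], n_results=4)   # 4-nearest-neighbor chunks
        context = "\n\n".join(hits["documents"][0])
        prompt = (
            "Answer ONLY from the context below. If the answer is not in the context, say you don't know.\n\n"
            f"### Context:\n{context}\n\n### Question:\n{question}\n\n### Answer:"
        )
        return ask_llm(prompt)   # placeholder for the fine-tuned Falcon-7b call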
The one thing we are still trying to improve is alignment training--currently Llama-Index and the prompt engineering keep it on rails but natively the model can be pretty toxic or dangerous. | 2023-08-11T23:41:01 | https://www.reddit.com/r/LocalLLaMA/comments/15oome9/our_workflow_for_a_custom_questionanswering_app/ | Mbando | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15oome9 | false | null | t3_15oome9 | /r/LocalLLaMA/comments/15oome9/our_workflow_for_a_custom_questionanswering_app/ | false | false | self | 1 | null |
Are there any good fantasy-writing LoRAs for Llama 2 or otherwise? | 1 | Looking for some LoRAs to work as a prose enhancer that doesn't shy away from NSFW (violence) or even some sex scenes.
Thanks in advance! | 2023-08-12T01:04:50 | https://www.reddit.com/r/LocalLLaMA/comments/15oqjaq/are_there_any_good_fantasy_writing_lora_for_llama/ | Squeezitgirdle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15oqjaq | false | null | t3_15oqjaq | /r/LocalLLaMA/comments/15oqjaq/are_there_any_good_fantasy_writing_lora_for_llama/ | false | false | self | 1 | null |
I'm trying to get TheBloke_airoboros-33B-GPT4-2.0-GPTQ to create an accurate list of modern science fiction books, and just... I just... | 1 | 2023-08-12T01:27:29 | CatastrophicallyEmma | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15or13q | false | null | t3_15or13q | /r/LocalLLaMA/comments/15or13q/im_trying_to_get_thebloke_airoboros33bgpt420gptq/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'nhPGyOPLreXGcl0PkmXeulWfEvVfyVPv6JDhG8P4UZ0', 'resolutions': [{'height': 142, 'url': 'https://preview.redd.it/z5nh22eq0lhb1.png?width=108&crop=smart&auto=webp&s=2ef2b16e278a32b318572f54a215747385a1d1ef', 'width': 108}, {'height': 284, 'url': 'https://preview.redd.it/z5nh22eq0lhb1.png?width=216&crop=smart&auto=webp&s=33351d7e5de5e385095993150698d6d68726c9a7', 'width': 216}, {'height': 421, 'url': 'https://preview.redd.it/z5nh22eq0lhb1.png?width=320&crop=smart&auto=webp&s=be3831565d3e7f9fd9148adf25cf26507479c1cb', 'width': 320}, {'height': 843, 'url': 'https://preview.redd.it/z5nh22eq0lhb1.png?width=640&crop=smart&auto=webp&s=be6b5c1ae09e2fc7cb38faec85b63cd836292de2', 'width': 640}, {'height': 1264, 'url': 'https://preview.redd.it/z5nh22eq0lhb1.png?width=960&crop=smart&auto=webp&s=eead07270d7b71b88f02a937162a659e3c9b01db', 'width': 960}, {'height': 1422, 'url': 'https://preview.redd.it/z5nh22eq0lhb1.png?width=1080&crop=smart&auto=webp&s=834e4b96aa08846246f5627eff87cad8ca4721f8', 'width': 1080}], 'source': {'height': 1432, 'url': 'https://preview.redd.it/z5nh22eq0lhb1.png?auto=webp&s=816af06ea9734807d6c17a7a1a08cdeb66568a44', 'width': 1087}, 'variants': {}}]} |
|||
I've been recently wondering: is there any way to train an LLM to output something specific? Kinda like a rating system (can be as simple as thumbs up or thumbs down), or is that what LoRAs are? | 1 | I'm still really new to this, so forgive me if it's a silly question. | 2023-08-12T01:45:06 | https://www.reddit.com/r/LocalLLaMA/comments/15orelr/i_been_recently_wondering_is_there_any_way_to/ | VirylLucas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15orelr | false | null | t3_15orelr | /r/LocalLLaMA/comments/15orelr/i_been_recently_wondering_is_there_any_way_to/ | false | false | self | 1 | null |
I have tuned llama2 7B and openai davinci for text generation, is there a way i can compare the results of both. | 1 | [removed] | 2023-08-12T01:51:35 | https://www.reddit.com/r/LocalLLaMA/comments/15orjkh/i_have_tuned_llama2_7b_and_openai_davinci_for/ | mrtac96 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15orjkh | false | null | t3_15orjkh | /r/LocalLLaMA/comments/15orjkh/i_have_tuned_llama2_7b_and_openai_davinci_for/ | false | false | self | 1 | null |
How to measure effective context length? | 1 | I'd like to verify how much text LLMs can actually consider while giving responses. For this I came up with 2 experiments:
- give a piece of text with a certain amount of words, and have the llm respond to the query: "what is the first line of the text? what is the last line of the text?" | 2023-08-12T02:08:18 | https://www.reddit.com/r/LocalLLaMA/comments/15orwir/how_to_measure_effective_context_length/ | nlpllama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15orwir | false | null | t3_15orwir | /r/LocalLLaMA/comments/15orwir/how_to_measure_effective_context_length/ | false | false | self | 1 | null |
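A minimal sketch of the first experiment, where generate() is a placeholder for whatever backend is under test (llama.cpp, an HTTP API, etc.) and the filler text is arbitrary:

    # Pad a document with numbered filler lines, then check whether the model can
    # still quote the first and last line back.
    filler = [f"Line {i}: the quick brown fox jumps over the lazy dog." for i in range(1, 401)]
    first_line, last_line = filler[0], filler[-1]

    prompt = "\n".join(filler) + (
        "\n\nWhat is the first line of the text? What is the last line of the text?"
    )
    reply = generate(prompt)                       # placeholder for the model call
    print("first line recalled:", first_line in reply)
    print("last line recalled:", last_line in reply)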
Introducing YourChat: A multi-platform LLM chat client that supports the APIs of llama.cpp and text-generation-webui | 1 | Introducing YourChat: A multi-platform LLM chat client that supports the APIs of text-generation-webui and llama.cpp.
​
Features:
\* Subscription Links: Our distinctive feature allows you to consolidate your services into a single shareable link. Share your LLM with your team or friends.
\* Multi-Platform: YourChat is available on Windows, MacOS, Android, and iOS, ensuring a seamless experience whether you're on mobile or desktop.
\* Built-In Prompts: Channel your creativity using our integrated prompts sourced from [github.com/f/awesome-chatgpt-prompts](https://github.com/f/awesome-chatgpt-prompts).
​
API Extensiveness:
\* text-generation-webui
\* llama.cpp
\* GPT Compatible API (for third-party OpenAI-like APIs)
\* OpenAI API (not available on apple app store)
​
Some Screenshots:
​
[Chat with preset prompt](https://preview.redd.it/ne8npkz4mlhb1.png?width=2000&format=png&auto=webp&s=4bc91195e3853d5abe3c247c960e229c18b685dc)
[Completion Mode](https://preview.redd.it/aa3cb605mlhb1.png?width=2000&format=png&auto=webp&s=475fcd45e8e7e3702518d4a0ef79190c881a5c6a)
[Download LLMs with subscription URL](https://preview.redd.it/gucvdmz4mlhb1.png?width=2000&format=png&auto=webp&s=df32be4b9c8170f9d3c69d49411234ecb3fc186d)
​
Download:
Play Store: [https://play.google.com/store/apps/details?id=app.yourchat](https://play.google.com/store/apps/details?id=app.yourchat)
App Store: [https://apps.apple.com/app/yourchat/id6449383819](https://apps.apple.com/app/yourchat/id6449383819)
Desktop Version: [https://yourchat.app/download](https://yourchat.app/download) | 2023-08-12T03:27:39 | https://www.reddit.com/r/LocalLLaMA/comments/15otin3/introducing_yourchat_a_multiplatform_llm_chat/ | constchar_llc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15otin3 | false | null | t3_15otin3 | /r/LocalLLaMA/comments/15otin3/introducing_yourchat_a_multiplatform_llm_chat/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'iPAxVKyDrGKh6Fy565L9IfeVxI98eU-gpQ8iVdEZJnI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yFtD8XmoiZBwgJ25h84reJvMOtINt19tk9nnYO8iIQE.jpg?width=108&crop=smart&auto=webp&s=ab3f6d772980f572178a1d5757d0e6f5bea255f3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yFtD8XmoiZBwgJ25h84reJvMOtINt19tk9nnYO8iIQE.jpg?width=216&crop=smart&auto=webp&s=332fe311b43f5a5f859d9279c9def2d00eb06023', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yFtD8XmoiZBwgJ25h84reJvMOtINt19tk9nnYO8iIQE.jpg?width=320&crop=smart&auto=webp&s=3f6dd960c376db66dd213d024ae3b3c93a679fbd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yFtD8XmoiZBwgJ25h84reJvMOtINt19tk9nnYO8iIQE.jpg?width=640&crop=smart&auto=webp&s=f658714715aa641c6a36b35f078c1668d9c8c74a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yFtD8XmoiZBwgJ25h84reJvMOtINt19tk9nnYO8iIQE.jpg?width=960&crop=smart&auto=webp&s=4e23c5c8e4c70238b19c50294757ab09e4eadc2a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yFtD8XmoiZBwgJ25h84reJvMOtINt19tk9nnYO8iIQE.jpg?width=1080&crop=smart&auto=webp&s=a0b345ce1215db3acfdb307d3e49649d3de40dbe', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yFtD8XmoiZBwgJ25h84reJvMOtINt19tk9nnYO8iIQE.jpg?auto=webp&s=b9e5c5ed6eb2186b6e5d914c915197dd7df3de6a', 'width': 1200}, 'variants': {}}]} |
|
Cloud GPU Quotas | 1 | I've been running various LLMs on the cloud and have been able to run 7 and 13 billion parameter models with ease. I am working on a small personal project with no strong commercial value. However, 30 billion parameter models require about 40 GB of GPU memory. There are two problems: Lambda allows you to allocate machines but almost never has any available. AWS, Azure, and Paperspace have appropriate machines (apparently) but have quotas that keep you from ever creating a machine with the right specs. AWS is a little confusing: as with many clouds, they ask you to file tickets to increase your quota. After filing around 6 tickets, it's clear they won't increase my limit beyond 20 GB of GPU memory.
Does anyone know of a cloud that allows you to actually allocate large (40+GB) GPU machines easily? Thanks | 2023-08-12T03:43:38 | https://www.reddit.com/r/LocalLLaMA/comments/15ottvi/cloud_gpu_quotas/ | Pristine_Drag_5695 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ottvi | false | null | t3_15ottvi | /r/LocalLLaMA/comments/15ottvi/cloud_gpu_quotas/ | false | false | self | 1 | null |
How to use QLoRA for LLaMA 2? | 1 | Hello! I want to use QLoRA to fine-tune LLaMA-2 70b (maybe 13b), and I don't know how to use QLoRA.
"python qlora.py --model_name_or_path TheBloke/llama-2-70b.ggmlv3.q4_K_M.bin --dataset my-data"
Is that the correct command to use?
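For reference, a rough 4-bit + LoRA setup with transformers, peft, and bitsandbytes (not the qlora.py CLI itself). It assumes Hugging Face format weights; a GGML .bin file like the one above is an inference format for llama.cpp and cannot be fine-tuned this way. The model name and hyperparameters are illustrative only:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model

    base = "meta-llama/Llama-2-13b-hf"           # HF-format weights, not a .ggml/.bin file
    bnb = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")
    tokenizer = AutoTokenizer.from_pretrained(base)    # needed to tokenize your dataset

    lora = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
                      task_type="CAUSAL_LM")
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()
    # ...then train with transformers.Trainer or trl's SFTTrainer on your data.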
| 2023-08-12T03:57:03 | https://www.reddit.com/r/LocalLLaMA/comments/15ou360/how_to_use_qlora_for_llama_2/ | Alex_Strek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ou360 | false | null | t3_15ou360 | /r/LocalLLaMA/comments/15ou360/how_to_use_qlora_for_llama_2/ | false | false | self | 1 | null |
Llama 2 chatbot performance for multiple users | 1 | I'm thinking about hosting a local Llama 2 chat using vector embeddings internally within my company. Inference averaged about 20 seconds per query on a V100. If I put a front end on it and allow multiple users to query it simultaneously:
1) is this even possible or would the queries be queued
2) if it was possible to inference multiple requests at once, would performance take a proportionate hit? | 2023-08-12T05:06:36 | https://www.reddit.com/r/LocalLLaMA/comments/15ovffk/llama_2_chatbot_performance_for_multiple_users/ | godspeedrebel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ovffk | false | null | t3_15ovffk | /r/LocalLLaMA/comments/15ovffk/llama_2_chatbot_performance_for_multiple_users/ | false | false | self | 1 | null |
Error when attempting to train raw data | 1 | [removed] | 2023-08-12T05:29:20 | https://www.reddit.com/r/LocalLLaMA/comments/15ovuax/error_when_attempting_to_train_raw_data/ | Lower_Spasm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ovuax | false | null | t3_15ovuax | /r/LocalLLaMA/comments/15ovuax/error_when_attempting_to_train_raw_data/ | false | false | self | 1 | null |
Unleash the Power of LLMs in Your Telegram Bot on a Budget | 1 | Interested in supercharging your Telegram bot with large language models (LLMs)? Here's a concise guide:
* **Introduction**: Harness LLMs like llama2-chat and vicuna. The bot is hosted on Amazon's free-tier EC2, with LLM inference on Beam Cloud.
* **Telegram Bot Setup**: Initiate with @BotFather on Telegram, get your token, and start a conversation with your bot.
* **Hosting**: Deploy on Amazon’s free-tier EC2 instance. The guide provides steps from EC2 setup to bot launch.
* **LLM Integration**: Beam Cloud, an affordable choice, is used for LLM inference. The bot taps into langchain and huggingface.
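As a rough illustration of the bot loop (assuming the python-telegram-bot v20 API; the guide may wire this differently, and query_llm() is a placeholder for the Beam/langchain inference call):

    from telegram import Update
    from telegram.ext import ApplicationBuilder, ContextTypes, MessageHandler, filters

    async def reply(update: Update, context: ContextTypes.DEFAULT_TYPE):
        answer = query_llm(update.message.text)      # placeholder for the LLM call
        await update.message.reply_text(answer)

    app = ApplicationBuilder().token("YOUR_BOT_TOKEN").build()
    app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, reply))
    app.run_polling()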
🔗 [**GitHub Repo**](https://github.com/ma2za/telegram-llm-guru) 🔗 [**Full Medium Article**](https://medium.com/@saverio3107/crafting-a-cost-effective-llm-powered-telegram-bot-a-step-by-step-guide-4d1e760e7eec) 🔗 [**Join Medium for More Updates**](https://medium.com/@saverio3107/membership)
Dive in, experiment, and enhance your Telegram bot's capabilities! Feedback and insights are welcome. 🚀 | 2023-08-12T06:18:02 | https://www.reddit.com/r/LocalLLaMA/comments/15owpt8/unleash_the_power_of_llms_in_your_telegram_bot_on/ | Xavio_M | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15owpt8 | false | null | t3_15owpt8 | /r/LocalLLaMA/comments/15owpt8/unleash_the_power_of_llms_in_your_telegram_bot_on/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'WQaQz-DVrgtgbDxYcPOn0564CHaCPQWuay69Tl4JfVA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xg9gt9rvQpSPMckyHJPtPHLl_IzUpU1f421xgoTXTiM.jpg?width=108&crop=smart&auto=webp&s=1a1c62cfebf549c00745e3b0bec276fee8698bc4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xg9gt9rvQpSPMckyHJPtPHLl_IzUpU1f421xgoTXTiM.jpg?width=216&crop=smart&auto=webp&s=51a855dd70a18369401098f32e7b92460c4c4144', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xg9gt9rvQpSPMckyHJPtPHLl_IzUpU1f421xgoTXTiM.jpg?width=320&crop=smart&auto=webp&s=4fbb3d6cab69e2b3cfb7b4aa4d373445c4edd898', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xg9gt9rvQpSPMckyHJPtPHLl_IzUpU1f421xgoTXTiM.jpg?width=640&crop=smart&auto=webp&s=92c9fac434252b7e584d9149d9b68e831c8c6a19', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xg9gt9rvQpSPMckyHJPtPHLl_IzUpU1f421xgoTXTiM.jpg?width=960&crop=smart&auto=webp&s=c6dc21f88a39e4d4d7ff855b433df7791e74682e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xg9gt9rvQpSPMckyHJPtPHLl_IzUpU1f421xgoTXTiM.jpg?width=1080&crop=smart&auto=webp&s=d4c32ae1e8625aaa249c9e49806aae9635a57efe', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xg9gt9rvQpSPMckyHJPtPHLl_IzUpU1f421xgoTXTiM.jpg?auto=webp&s=77a8a02ac93ca90d3a1a5266971608bc94875092', 'width': 1200}, 'variants': {}}]} |
Google search extension for the webui | 1 | I stumbled upon this great extension for text-generation-webui.
All you have to do is install and start your question with “search X”.
The context will be shown on the console and the answer should be based on the google search result.
This is not my project but I have been using it for a couple of days with great success!
Would recommend you guys trying it.
https://github.com/simbake/web_search | 2023-08-12T07:16:31 | https://www.reddit.com/r/LocalLLaMA/comments/15oxqas/google_search_extension_for_the_webui/ | iChrist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15oxqas | false | null | t3_15oxqas | /r/LocalLLaMA/comments/15oxqas/google_search_extension_for_the_webui/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'KkOZ34ewH8CkmAZoKJ9_9OFtSBEZZoiE4Nj2KvRoQ54', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pNWJFJ5QLoHsUUHYj0KTByurJDwhZbGrXkXxsTeJcYg.jpg?width=108&crop=smart&auto=webp&s=020a6a4f13a947d8a648ebd3c72b1555c72c8420', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pNWJFJ5QLoHsUUHYj0KTByurJDwhZbGrXkXxsTeJcYg.jpg?width=216&crop=smart&auto=webp&s=1546b33e34521c67a406d4060fdf19b964a68d54', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pNWJFJ5QLoHsUUHYj0KTByurJDwhZbGrXkXxsTeJcYg.jpg?width=320&crop=smart&auto=webp&s=a9bc6ec116104a5f816003ccf30aed67da2cb0a2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pNWJFJ5QLoHsUUHYj0KTByurJDwhZbGrXkXxsTeJcYg.jpg?width=640&crop=smart&auto=webp&s=2851a65c169291965b51d935290cedd52e8f4f7a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pNWJFJ5QLoHsUUHYj0KTByurJDwhZbGrXkXxsTeJcYg.jpg?width=960&crop=smart&auto=webp&s=a9c6ea5a3a41fc27d70968ff849c1844abc5f6e4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pNWJFJ5QLoHsUUHYj0KTByurJDwhZbGrXkXxsTeJcYg.jpg?width=1080&crop=smart&auto=webp&s=8a74de8888a05640d99ad21055242b29971f3fe7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pNWJFJ5QLoHsUUHYj0KTByurJDwhZbGrXkXxsTeJcYg.jpg?auto=webp&s=26028ecea4458f6b21ba2049b621b47e99dd0a46', 'width': 1200}, 'variants': {}}]} |
Are there any models which do something similar to Sudowrite? | 1 | Lately I started writing, and while I'm pretty fluent in English, it's not my mother tongue, so on long and creative texts it shows: I simply don't know some phrases and expressions, and I find myself repeating things I know multiple times, using the same words when there are other synonyms. I found it pretty useful for descriptions; I could input my rough vision of what I imagined and it wrote some pretty good paragraphs or sentences.
Are there any models I could run on my i7 6700k, 32GB of RAM and 3060 Ti that would do similar things to what Sudowrite is doing? Of course I don’t expect perfectly similar alternative, but for example I input large parts of my text and then ask to rewrite certain paragraphs with prompts or something like that. | 2023-08-12T07:34:31 | https://www.reddit.com/r/LocalLLaMA/comments/15oy1oa/are_there_any_models_which_do_something_similar/ | JozoBozo121 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15oy1oa | false | null | t3_15oy1oa | /r/LocalLLaMA/comments/15oy1oa/are_there_any_models_which_do_something_similar/ | false | false | self | 1 | null |
What is the best API right now for self-hosted LLM usage? | 1 | I want to deploy a model and chat with it via messenger or some other interface and maybe give access to other people. Ideally API should have authorisation (ooga does not as I understand). | 2023-08-12T08:44:32 | https://www.reddit.com/r/LocalLLaMA/comments/15oz9iz/what_is_the_best_api_right_now_for_selfhosted_llm/ | InkognetoInkogneto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15oz9iz | false | null | t3_15oz9iz | /r/LocalLLaMA/comments/15oz9iz/what_is_the_best_api_right_now_for_selfhosted_llm/ | false | false | self | 1 | null |
SILO Language Models: Isolating Legal Risk In a Nonparametric Datastore (a model that lets owners remove their data) | 1 | 2023-08-12T09:14:47 | https://twitter.com/ssgrn/status/1689256059234361344 | saintshing | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 15ozsmo | false | {'oembed': {'author_name': 'Suchin Gururangan', 'author_url': 'https://twitter.com/ssgrn', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Feel risky to train your language model on copyrighted data?<br><br>Check out our new LM called SILO✨, with co-lead <a href="https://twitter.com/sewon__min?ref_src=twsrc%5Etfw">@sewon__min</a><br><br>Recipe: collect public domain & permissively licensed text data, fit parameters on it, and use the rest of the data in an inference-time-only datastore. <a href="https://t.co/PqlqtbIFIS">pic.twitter.com/PqlqtbIFIS</a></p>— Suchin Gururangan (@ssgrn) <a href="https://twitter.com/ssgrn/status/1689256059234361344?ref_src=twsrc%5Etfw">August 9, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/ssgrn/status/1689256059234361344', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_15ozsmo | /r/LocalLLaMA/comments/15ozsmo/silo_language_models_isolating_legal_risk_in_a/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'LmWeAOeip9W2tpSy2skNoH72_0V4VYugfoVxJcuLXi8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/StLqUbdnDlSYajXN6uu44lefCAFNGSLN3kLPmQKLemQ.jpg?width=108&crop=smart&auto=webp&s=d9c24c04b01b732ad79419c855c336af6d7469d5', 'width': 108}], 'source': {'height': 70, 'url': 'https://external-preview.redd.it/StLqUbdnDlSYajXN6uu44lefCAFNGSLN3kLPmQKLemQ.jpg?auto=webp&s=804eadaefa46a344578aaf99d382e2764a86602c', 'width': 140}, 'variants': {}}]} |
||
New to this, need some questions answered. | 1 | Hello! I'm pretty new to all of this, so I'm sorry if anything I say sounds stupid. I recently downloaded oobabooga and got georgesung_llama2_7b_chat_uncensored running successfully. However, I have a few questions:
1. What model loader should I be using and what are the main differences between them?
2. How do I offload the model to the GPU? I notice the model has a cpu tag in the model options. Does this mean it's already running on the GPU by default unless I check the cpu option?
3. How do I get the model to really take advantage of my computer's resources? At least while running the model mentioned above, I've noticed my computer's fans don't really spin up. I know that it's a small model, but I can't help but feel that I could be running it faster. I have a pretty beefy computer (i9-13900k, 64GB DDR5, RTX 4080) and want to make sure that I'm making the most of my hardware.
4. Given my hardware, what would you say is the most advanced language model that I could run?
I'm using Windows btw.
I'm really looking forward to getting into this! Language models are very fascinating to me and I'm very interested in learning more about them! | 2023-08-12T09:49:20 | https://www.reddit.com/r/LocalLLaMA/comments/15p0e2b/new_to_this_need_some_questions_answered/ | BombTime1010 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15p0e2b | false | null | t3_15p0e2b | /r/LocalLLaMA/comments/15p0e2b/new_to_this_need_some_questions_answered/ | false | false | self | 1 | null |
how to make wizardmath-70b-v1.0.ggmlv3.q8_0.bin correctly answer this puzzle? | 1 | [https://paste.c-net.org/PlungedLackey](https://paste.c-net.org/PlungedLackey)
It fails to see the relevance of Bob participating in both marathons.
Is there a technique to make the model answer correctly?
Is there an alternative model more suited to this problem? thanks
my previous attempts with other models:
[https://www.reddit.com/r/LocalLLaMA/comments/15dfzag/how\_to\_make\_the\_models\_like/](https://www.reddit.com/r/LocalLLaMA/comments/15dfzag/how_to_make_the_models_like/) | 2023-08-12T10:56:37 | https://www.reddit.com/r/LocalLLaMA/comments/15p1kgd/how_to_make_wizardmath70bv10ggmlv3q8_0bin/ | dewijones92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15p1kgd | false | null | t3_15p1kgd | /r/LocalLLaMA/comments/15p1kgd/how_to_make_wizardmath70bv10ggmlv3q8_0bin/ | false | false | self | 1 | null |
Let my character remember the conversation | 1 | My progress since last week has been towards creating my friendly chat partner, personal assistant.
A quick recap of what it can do:
* import tavernai (webp) characters
* listen to my voice, and reply in SAPI5 voices
* communicate with different languages models at the same time, with different roles (RP, summarization), different sizes (22b, 13b, 7b)
* chat rooms are ~~simple text files~~ in Trilium notes
​
There are hierarchical note taking tools that are used to keep personal notes. Why not use it to keep my personal chat logs.
​
Keep chat logs in a note taking tool (Trilium)
* save chat log as notes
* each chat character has a directory named after them
* the chat long-term memories come from the notes in directory (inner note)
​
[ The character's past memories are written on a white background.](https://preview.redd.it/f5h47fpgpnhb1.png?width=1522&format=png&auto=webp&s=d48972040c26ad7ed3bd1da61814d447d731477b)
​
The documents (notes) are not only stored in the Trilium app, but also in a separate area (space)
* store short dialogues (question - answer)
* index dialogues for fast retrieval
* search by synonyms (by matrix operations)
​
versus Traditional text search
* exact match
* fuzzy search
​
These dialogues can be
* a simple chat log,
* a knowledge base in Q&A format
​
The character can remember the summary of the conversation, but not the whole text. If I have a question for the character, the search is done over the whole text! Most people are unable to recall exactly what was said; the same applies to a virtual character.
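A rough sketch of the "search by synonyms (by matrix operations)" idea: every stored Q&A pair gets an embedding, and recall is a cosine-similarity lookup. Here embed() and load_dialogue_notes() are placeholders for any sentence-embedding model and for reading the notes:

    import numpy as np

    memory_texts = load_dialogue_notes()              # placeholder: short Q&A strings from the notes
    memory_vecs = np.vstack([embed(t) for t in memory_texts])   # shape (N, dim)

    def recall(query: str, k: int = 3):
        q = embed(query)
        sims = memory_vecs @ q / (np.linalg.norm(memory_vecs, axis=1) * np.linalg.norm(q))
        return [memory_texts[i] for i in np.argsort(-sims)[:k]]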
​
[The memories of several characters from the week](https://preview.redd.it/uhgbt8jiqnhb1.png?width=973&format=png&auto=webp&s=39cf7c5f923ac52651c441ed06d5e38e167643b4)
Every time I chat with a character, the conversation is saved in a daily note. The number of notes, and thus the number of conversations, is unlimited; the character is capable of recalling events from days ago.
There are no server requirements to run the chat, besides the lightweight koboldcpp. LTM can be done using any generic, popular math library.
It could be further improved by sending different questions (rp, math, finance, dba) to any language model - based on their specialization. | 2023-08-12T11:04:47 | https://www.reddit.com/r/LocalLLaMA/comments/15p1q7d/let_my_character_remember_the_conversation/ | justynasty | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15p1q7d | false | null | t3_15p1q7d | /r/LocalLLaMA/comments/15p1q7d/let_my_character_remember_the_conversation/ | false | false | 1 | null |
|
When starting LoRA training, first steps already showing very low losses, is that right? | 1 | Hello everyone.
I'm trying to use [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) for my LoRA training instead of oobabooga. I've prepared my 30MB dataset in completion (raw corpus) format as JSONL. I'm using **meta-llama/Llama-2-13b-hf** for my LoRA training.
Here are some `yml` configs:
load_in_8bit: true
load_in_4bit: false
strict: false
adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 3
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
bf16: true
fp16: false
tf32: false
I'm using an **RTX A6000** for my LoRA training. And at the first training steps it started to output strange results like this:
{'loss': 1.6568, 'learning_rate': 2e-05, 'epoch': 0.01}
{'loss': 1.6157, 'learning_rate': 4e-05, 'epoch': 0.02}
{'loss': 1.6146, 'learning_rate': 6e-05, 'epoch': 0.03}
{'loss': 1.6502, 'learning_rate': 8e-05, 'epoch': 0.04}
{'loss': 1.8111, 'learning_rate': 0.0001, 'epoch': 0.04}
{'loss': 1.8191, 'learning_rate': 0.00012, 'epoch': 0.05}
{'loss': 1.688, 'learning_rate': 0.00014, 'epoch': 0.06}
{'loss': 1.503, 'learning_rate': 0.00016, 'epoch': 0.07}
{'loss': 1.8784, 'learning_rate': 0.00018, 'epoch': 0.08}
{'loss': 1.5776, 'learning_rate': 0.0002, 'epoch': 0.09}
{'loss': 1.7116, 'learning_rate': 0.00019999535665248002, 'epoch': 0.1}
{'loss': 1.6978, 'learning_rate': 0.0001999814270411335, 'epoch': 0.11}
{'loss': 1.5436, 'learning_rate': 0.000199958212459561, 'epoch': 0.12}
{'loss': 1.5556, 'learning_rate': 0.00019992571506363, 'epoch': 0.13}
{'loss': 1.6217, 'learning_rate': 0.00019988393787127441, 'epoch': 0.13}
{'loss': 1.5164, 'learning_rate': 0.0001998328847622148, 'epoch': 0.14}
Which is very strange. At **0.01** epochs it shows very low losses. When I was using oobabooga, it started from about **3** then went down to **1.4**.
Is that even right? Does that mean that LoRA is successfully trained on fraction of epoch? | 2023-08-12T12:04:25 | https://www.reddit.com/r/LocalLLaMA/comments/15p2v7r/when_starting_lora_training_first_steps_already/ | DaniyarQQQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15p2v7r | false | null | t3_15p2v7r | /r/LocalLLaMA/comments/15p2v7r/when_starting_lora_training_first_steps_already/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'EO6qVfOQXm2_-d9cG85lSO-sJ2QZ2XZUzLO4YrGnUZ0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/D5bgRLgemo9x3MWYuYNoAOz1bI-0ZAJapSfkBQ_xOTI.jpg?width=108&crop=smart&auto=webp&s=967b806868da1f8b68e1d466ba68230b80437ff9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/D5bgRLgemo9x3MWYuYNoAOz1bI-0ZAJapSfkBQ_xOTI.jpg?width=216&crop=smart&auto=webp&s=f00227225acdb9efbb994870d05b3a7242553633', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/D5bgRLgemo9x3MWYuYNoAOz1bI-0ZAJapSfkBQ_xOTI.jpg?width=320&crop=smart&auto=webp&s=82b34cbb10cf089230703c29486f4f648abf0741', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/D5bgRLgemo9x3MWYuYNoAOz1bI-0ZAJapSfkBQ_xOTI.jpg?width=640&crop=smart&auto=webp&s=1bfd3c08b17cc0648cdae5edc50b2911e7528e80', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/D5bgRLgemo9x3MWYuYNoAOz1bI-0ZAJapSfkBQ_xOTI.jpg?width=960&crop=smart&auto=webp&s=c020393ef732eec1eea41e977b9cb8432c1e9884', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/D5bgRLgemo9x3MWYuYNoAOz1bI-0ZAJapSfkBQ_xOTI.jpg?width=1080&crop=smart&auto=webp&s=62b97811afd2153004ff121449be77bf2c9020b2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/D5bgRLgemo9x3MWYuYNoAOz1bI-0ZAJapSfkBQ_xOTI.jpg?auto=webp&s=e8d3e829f033b4b832150f61c48b2db95d475b25', 'width': 1200}, 'variants': {}}]} |
Tried to deploy vicuna on sage-maker (aws) but got some errors | 1 | ​
[\(the gpu\)](https://preview.redd.it/a984ai0ovohb1.png?width=1366&format=png&auto=webp&s=0a092af062b73c8a5a532dca3277abde3398610b)
​
​
UnexpectedStatusException: Error hosting endpoint huggingface-pytorch-tgi-inference-2023-08-12-14-06-11-491: Failed. Reason: The primary container for production variant AllTraffic did not pass the ping health check. Please check CloudWatch logs for this endpoint..
​
​
Any help is welcomed! | 2023-08-12T14:27:28 | https://www.reddit.com/r/LocalLLaMA/comments/15p60nk/tried_to_deploy_vicuna_on_sagemaker_aws_but_got/ | Expensive_Breakfast6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15p60nk | false | null | t3_15p60nk | /r/LocalLLaMA/comments/15p60nk/tried_to_deploy_vicuna_on_sagemaker_aws_but_got/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'kvOxhBrkQsKDZDFDpUmXfe7SlhsRzIUjJ-pIjzJq6lw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/eizCkhAyEN6JvHyiRHnz3ZmSkHFzVKbxGhsnXE-5uOs.jpg?width=108&crop=smart&auto=webp&s=e2a18922dbc730b6fdcf2fa2806081ee67323147', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/eizCkhAyEN6JvHyiRHnz3ZmSkHFzVKbxGhsnXE-5uOs.jpg?width=216&crop=smart&auto=webp&s=47833fabdb1b07432ec38465c2868cfcc0ff8eec', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/eizCkhAyEN6JvHyiRHnz3ZmSkHFzVKbxGhsnXE-5uOs.jpg?width=320&crop=smart&auto=webp&s=7786d030771a94f116ff44d423690b0bac3f1a9f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/eizCkhAyEN6JvHyiRHnz3ZmSkHFzVKbxGhsnXE-5uOs.jpg?width=640&crop=smart&auto=webp&s=ff70e12afb698dbc4860d2bd8cbb12fb4f132456', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/eizCkhAyEN6JvHyiRHnz3ZmSkHFzVKbxGhsnXE-5uOs.jpg?width=960&crop=smart&auto=webp&s=f2a00fd59d26a3aa6a80ea3f57de6fead76cc76b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/eizCkhAyEN6JvHyiRHnz3ZmSkHFzVKbxGhsnXE-5uOs.jpg?width=1080&crop=smart&auto=webp&s=4d3a603acb8f5a1a1eeeebc4744f58d9f7203ede', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/eizCkhAyEN6JvHyiRHnz3ZmSkHFzVKbxGhsnXE-5uOs.jpg?auto=webp&s=c78228e082b96ec2c3ddfef949be0a0337917f84', 'width': 1200}, 'variants': {}}]} |
|
Is there some place where I can use the uncensored version online rather than locally? | 1 | [removed] | 2023-08-12T14:37:03 | https://www.reddit.com/r/LocalLLaMA/comments/15p68v1/is_there_some_place_where_i_can_use_the/ | MasterDisillusioned | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15p68v1 | false | null | t3_15p68v1 | /r/LocalLLaMA/comments/15p68v1/is_there_some_place_where_i_can_use_the/ | false | false | self | 1 | null |
What's the best (and cheap) way to try out all the new LLMs on cloud services. | 1 | I want to try out the LLMs but do not have proper infrastructure. So i thought to use AWS or Azure or some other cloud service. What are the CPU , GPU and RAM requirements I need to run any of the 70B LLMs? | 2023-08-12T15:39:04 | https://www.reddit.com/r/LocalLLaMA/comments/15p7qrh/whats_the_best_and_cheap_way_to_try_out_all_the/ | timedacorn369 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15p7qrh | false | null | t3_15p7qrh | /r/LocalLLaMA/comments/15p7qrh/whats_the_best_and_cheap_way_to_try_out_all_the/ | false | false | self | 1 | null |
Expected inference speed? | 1 | What do tokens/second speeds actually translate into when doing inference? Let's say I'd like to use it to write summaries of documents, going from say 3000 tokens to a 200-300 token summary.
Is the math as simple as (tokens in + tokens out) / token speed?
For the given example with a 10t/s system, will I spend 300 seconds waiting, and then 20-30 seconds looking at text streaming back?
Or is it just (load-input-tokens-time-if-so-how-much?) + 20-30 seconds inference time?
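For illustration only (the speeds below are assumed, not benchmarks): prompt ingestion ("prefill") and generation ("decode") usually run at different rates, so the two phases are worth separating:

    tokens_in, tokens_out = 3000, 300
    prefill_tps, decode_tps = 200.0, 10.0      # assumed speeds for the example
    wait = tokens_in / prefill_tps             # time before the first token (~15 s here)
    stream = tokens_out / decode_tps           # time watching text stream back (~30 s here)
    print(f"total = {wait + stream:.0f} s")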
I've only used chat GPT, but would like to invest in some hardware for local experimentation. But unsure of what to expect. | 2023-08-12T16:03:01 | https://www.reddit.com/r/LocalLLaMA/comments/15p8baa/expected_inference_speed/ | gradientdancer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15p8baa | false | null | t3_15p8baa | /r/LocalLLaMA/comments/15p8baa/expected_inference_speed/ | false | false | self | 1 | null |
what is the best prompt on making realistic person because it not going the way I want LMAO | 1 | [removed] | 2023-08-12T16:10:08 | Small_Platypus4165 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15p8hgh | false | null | t3_15p8hgh | /r/LocalLLaMA/comments/15p8hgh/what_is_the_best_prompt_on_making_realistic/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'YVww1Bm93nA5gFqkOYOnzxr8-W3rH3I6IneySZkRuRc', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/344f50gaephb1.png?width=108&crop=smart&auto=webp&s=636d786e9c17e60c277e7e4674dad6fe7619756a', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/344f50gaephb1.png?width=216&crop=smart&auto=webp&s=3efacaab302871e6fc09434280d9a2e53bdb8cdb', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/344f50gaephb1.png?width=320&crop=smart&auto=webp&s=1e71ce5fea746fb9c1640cbf2290b9b0044be266', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/344f50gaephb1.png?width=640&crop=smart&auto=webp&s=5e2a872fdf7005a5412f4801e39707cdb856f048', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/344f50gaephb1.png?width=960&crop=smart&auto=webp&s=951300887ef28ee2771977833caa020ea226ce8a', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/344f50gaephb1.png?width=1080&crop=smart&auto=webp&s=44d200880940e777fc41ad45a5b44691ff5d1541', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/344f50gaephb1.png?auto=webp&s=a1cea85d922a2f554143afe1be20de81cf700153', 'width': 1920}, 'variants': {}}]} |
||
Clarify the issues of WizardMath, and share official online demos. | 1 |
Thanks for your attention to WizardMath!
We share two online demos of the WizardMath **7B** V1.0 model.
7B D-1: **http://777957f.r10.cpolar.top**
7B D-2: **http://2be2671b.r10.cpolar.top**
🚫For **simple** math questions (such as **1+1=?**), we do **NOT** recommend using the CoT prompt.
We will update more demos of **70B and 13B** tomorrow and please refer to (https://github.com/nlpxucan/WizardLM/tree/main/WizardMath) for ***the latest URLs***.
We welcome everyone to use your professional and difficult instructions to evaluate WizardMath, and show us examples of poor performance and your suggestions.
❗❗❗ ***Note***: Please use strictly the ***same system prompts*** as ours, and we do not guarantee the accuracy of the ***quantized versions***.
​
For WizardMath, the prompts should be as follows:
***Default version:***
*"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"*
***CoT version:*** (❗For **simple** math questions, we do NOT recommend using the CoT prompt.)
*"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response: Let's think step by step."*
​
https://preview.redd.it/fz8txy8okphb1.png?width=1994&format=png&auto=webp&s=e987ccc2510e62684accf25419bfae38175b56a6
https://preview.redd.it/b93nyx8okphb1.png?width=1920&format=png&auto=webp&s=568a3d98f4a1d054e1f6f371fdb40d739d63a519 | 2023-08-12T16:50:55 | https://www.reddit.com/r/LocalLLaMA/comments/15p9gfl/clarify_the_issues_of_wizardmath_and_share/ | ApprehensiveLunch453 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15p9gfl | false | null | t3_15p9gfl | /r/LocalLLaMA/comments/15p9gfl/clarify_the_issues_of_wizardmath_and_share/ | false | false | 1 | null |
|
For researchers, and model trainers | 1 | [removed] | 2023-08-12T17:11:41 | https://www.reddit.com/r/LocalLLaMA/comments/15p9xyj/for_researchers_and_model_trainers/ | JaysonGent | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15p9xyj | false | null | t3_15p9xyj | /r/LocalLLaMA/comments/15p9xyj/for_researchers_and_model_trainers/ | false | false | self | 1 | null |
I think I'm ready to call llama2 almost unusable because of the repetition thing | 1 | Anyone else? It's like a carrot on a stick, because obviously there are many aspects where it shows that it's much better than llama1. But then the repetition destroys pretty much any use case.
And this is not just about getting stuck in a loop, repeating messages in a conversation. You ask it to do one thing, and it makes mistakes, and you can't tell it to fix them. It will just be stuck because apparently the previous response is SO sticky. Like, it formatted something with the wrong brackets. It is, imho, impossible for deterministic llama2 models to correct that after being told.
Same goes for describing some syntax that the model can use. You know how powerful examples are. But you can't use them. It will be completely stuck repeating the examples no matter how much time you spend explaining that it must not use the example input and must think of original input. I have even tried formulating all these explanations without negations, but it still does not work at all.
In case you have less of these problems, it is probably due to temperature. But the temp-0 response is just the actual, real quality the model produces, you can't really fix anything with randomness around a wrong target. It will just never become a solution that does not "sometimes" require regeneration, at least.
Idk. I don't even feel like I have to say what finetune I tried most of this stuff with. It's just llama-2, even if some are better at getting around that.
Oh and as bonus observation: I changed from q4_1 to q5_1 and I think the impact of quantization is largely talked down or overlooked. To some extent, it was almost like talking to a different model. I think there's a lot going on, even if those perplexity scores don't move that much. I once suggested that maybe a more efficient model means quantization is more harmful? Just thought it was a good time to repeat that.
Anyway, with the better version of the model, I had *more* problems with the repetition. It seems to have eliminated some erroneous, temperature-like fuzzing that the quantization causes, so I ran into more such problems, just like when you reduce the temperature.
Kay, thanks for listening to my rant. I tried to make it somewhat constructive. If anyone does a bit more complex things with llama2 and has some tips&tricks to share to combat all that, it would be very much appreciated.
For completeness, my latest observations are from airoboros 2.0 13B with ggml K quants. Tried up to q_6_K. But I really don't blame airoboros since I have still gotten the best results with that model so far (also trying m2.0). It is smart enough to maybe get away without examples, but that doesn't really fix it until after the first usage either. | 2023-08-12T17:21:10 | https://www.reddit.com/r/LocalLLaMA/comments/15pa5zd/i_think_im_ready_to_call_llama2_almost_unusable/ | involviert | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pa5zd | false | null | t3_15pa5zd | /r/LocalLLaMA/comments/15pa5zd/i_think_im_ready_to_call_llama2_almost_unusable/ | false | false | self | 1 | null |
LlongOrca-7b-16k is here! and some light spoilers! :D | 1 | Today we are releasing LlongOrca-7B-16k!
​
This 7B model is our first long context release, able to handle 16,000 tokens at once!
We've done this while achieving >99% the performance of the best 7B models available today (which are all limited to 4k tokens).
​
[https://huggingface.co/Open-Orca/LlongOrca-7B-16k](https://huggingface.co/Open-Orca/LlongOrca-7B-16k)
​
This release is trained on a curated filtered subset of most of our GPT-4 augmented data. It is the same subset of our data as was used in our OpenOrcaxOpenChat-Preview2-13B model.
​
This release reveals that stacking our training on an existing long context fine-tuned model yields significant improvements to model performance. We measured this with BigBench-Hard and AGIEval results, finding \~134% of the base Llongma2-16k model's performance on average. As well, we've found that it may be the first 7B model to score over 60% on SAT English evaluation, more than a 2X improvement over base Llama2-7B!
​
We did this training as part of testing integration of OpenChat's MultiPack algorithm into the Axolotl trainer. MultiPack achieves 99.85% bin-packing efficiency on our dataset. This has significantly reduced training time, with efficiency improvement of 3-10X over traditional methods.
​
We have this running unquantized on fast GPUs for you to play with now in your browser:
[https://huggingface.co/spaces/Open-Orca/LlongOrca-7B-16k](https://huggingface.co/spaces/Open-Orca/LlongOrca-7B-16k)
(the preview card below is erroneously showing the name of our Preview2 release, but rest assured the link is to the LlongOrca-7B-16k space)
​
Many thanks to Enrico Shippole, emozilla, and kaiokendev1 for the fine work on creating the LlongMA-2-7b-16k model this was trained on top of!
​
We are proud to be pushing the envelope of what small models that can run easily on modest hardware can achieve!
​
Stay tuned for another big announcement from our Platypus-wielding friends Ariel Lee, ColeJHunter, Natanielruizg very soon too!
follow along at our development server, and pitch in if you want to learn more about our many other projects (seriously some of them are wild) all the links can be found at AlignmentLab.ai | 2023-08-12T18:15:48 | https://www.reddit.com/r/LocalLLaMA/comments/15pbhcx/llongorca7b16k_is_here_and_some_light_spoilers_d/ | Alignment-Lab-AI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pbhcx | false | null | t3_15pbhcx | /r/LocalLLaMA/comments/15pbhcx/llongorca7b16k_is_here_and_some_light_spoilers_d/ | true | false | spoiler | 1 | {'enabled': False, 'images': [{'id': 'PWRlymRVhoVc55SaWi7XBaBFOCAk_F49maYO8ReHxgI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Aya8zwiQM2diaj_795gT-JC_sIEmOyT-YVokDCcK_Ik.jpg?width=108&crop=smart&auto=webp&s=42732f8fb985f6329580bdd8134286909b29cd19', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Aya8zwiQM2diaj_795gT-JC_sIEmOyT-YVokDCcK_Ik.jpg?width=216&crop=smart&auto=webp&s=07ad9899bd66cdfd4d56977e5c5745614225a84d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Aya8zwiQM2diaj_795gT-JC_sIEmOyT-YVokDCcK_Ik.jpg?width=320&crop=smart&auto=webp&s=0f529a1cf996943a5f2c29a0872f6794221f57ac', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Aya8zwiQM2diaj_795gT-JC_sIEmOyT-YVokDCcK_Ik.jpg?width=640&crop=smart&auto=webp&s=80190b09a0118c4fb2485dc7b971d549cb0a848c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Aya8zwiQM2diaj_795gT-JC_sIEmOyT-YVokDCcK_Ik.jpg?width=960&crop=smart&auto=webp&s=071dcc3b70a192b2c578fda6f607369cf89969b3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Aya8zwiQM2diaj_795gT-JC_sIEmOyT-YVokDCcK_Ik.jpg?width=1080&crop=smart&auto=webp&s=1abd1be3f076a6d5e4dd850c82386a115e1c9abe', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Aya8zwiQM2diaj_795gT-JC_sIEmOyT-YVokDCcK_Ik.jpg?auto=webp&s=66ef275763679aa6ec227c3073c7457deea2601c', 'width': 1200}, 'variants': {'obfuscated': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Aya8zwiQM2diaj_795gT-JC_sIEmOyT-YVokDCcK_Ik.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=b84124f80ba5c86e0326fd2adaae1abac73724f0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Aya8zwiQM2diaj_795gT-JC_sIEmOyT-YVokDCcK_Ik.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=33dc7fc50fbf3151e26d7768e29d71e934214d31', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Aya8zwiQM2diaj_795gT-JC_sIEmOyT-YVokDCcK_Ik.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=e87c3eb9a39e5bcf7a049da1dbbff0e3fc43a033', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Aya8zwiQM2diaj_795gT-JC_sIEmOyT-YVokDCcK_Ik.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=aaded894198ce3d29a24eb010ae8a0e5a480119e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Aya8zwiQM2diaj_795gT-JC_sIEmOyT-YVokDCcK_Ik.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=56b877cfeb43d8212c6fd540c074af071acfe08b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Aya8zwiQM2diaj_795gT-JC_sIEmOyT-YVokDCcK_Ik.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=9a718f565c796190b35c865ca8180395894101d2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Aya8zwiQM2diaj_795gT-JC_sIEmOyT-YVokDCcK_Ik.jpg?blur=40&format=pjpg&auto=webp&s=c49a7ac9e33140791419c619edb4bfb4315a23b0', 'width': 1200}}}}]} |
Welp. Since they didn't recommend CoT with simple math questions... Temperature 0. | 1 | 2023-08-12T18:39:39 | bot-333 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15pc25a | false | null | t3_15pc25a | /r/LocalLLaMA/comments/15pc25a/welp_since_they_didnt_recommend_cot_with_simple/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'k38h6gvybYGIWWXIL4SDjxqZlHNC_XNf5aJ5H0DQhnY', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/ul48vqow4qhb1.png?width=108&crop=smart&auto=webp&s=20015fe1881bc46a5358e02e47b08e455fb3e005', 'width': 108}, {'height': 111, 'url': 'https://preview.redd.it/ul48vqow4qhb1.png?width=216&crop=smart&auto=webp&s=09d69491adf272012917cc3d8117af1ffa8f41eb', 'width': 216}, {'height': 165, 'url': 'https://preview.redd.it/ul48vqow4qhb1.png?width=320&crop=smart&auto=webp&s=4bde54536c9368c9e61cbdfd8eeb515288b9d107', 'width': 320}, {'height': 330, 'url': 'https://preview.redd.it/ul48vqow4qhb1.png?width=640&crop=smart&auto=webp&s=695812dde1a7363efd1de36685367a7a20c792fb', 'width': 640}, {'height': 495, 'url': 'https://preview.redd.it/ul48vqow4qhb1.png?width=960&crop=smart&auto=webp&s=0c35751d488ae8f411f8500e4b6268cef40b72fa', 'width': 960}, {'height': 557, 'url': 'https://preview.redd.it/ul48vqow4qhb1.png?width=1080&crop=smart&auto=webp&s=46d690b654c409b4b2601f5a659ad4a575af67f8', 'width': 1080}], 'source': {'height': 1360, 'url': 'https://preview.redd.it/ul48vqow4qhb1.png?auto=webp&s=122debe661a7d30dbc1236f1e33f36eff63505c7', 'width': 2634}, 'variants': {}}]} |
|||
Adding LLaMa2.c support for Web with GGML.JS | 1 | Hey guys!
ggml.js is a JavaScript framework that lets you power web applications with language models (LLMs). The models run in the browser using WebAssembly, and it currently supports GGML models in addition to....
In my latest release of **ggml.js**, I've added support for Karpathy's [llama2.c](https://github.com/karpathy/llama2.c) model.
You can head over to the demo to try out the llama2.c tinystories example.
LLaMa 2 Demo: [https://rahuldshetty.github.io/ggml.js-examples/llama2\_tinystories.html](https://rahuldshetty.github.io/ggml.js-examples/llama2_tinystories.html)
Documentation: [https://rahuldshetty.github.io/ggml.js](https://rahuldshetty.github.io/ggml.js)
​ | 2023-08-12T18:39:55 | https://www.reddit.com/r/LocalLLaMA/comments/15pc2d3/adding_llama2c_support_for_web_with_ggmljs/ | AnonymousD3vil | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pc2d3 | false | null | t3_15pc2d3 | /r/LocalLLaMA/comments/15pc2d3/adding_llama2c_support_for_web_with_ggmljs/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'fuysvROS0w0fAkvWAFuBmJ507qgm68vfA5btZZybPNs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WfgawjqoQqEbzX0z_FYPx2fmJzXwwlH872oISV17XgE.jpg?width=108&crop=smart&auto=webp&s=f4de47905326b71d5b4b0299156cd8429590f373', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WfgawjqoQqEbzX0z_FYPx2fmJzXwwlH872oISV17XgE.jpg?width=216&crop=smart&auto=webp&s=e6f6c866c0cfbfed175cee14fdc88d1a02e2e1c9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WfgawjqoQqEbzX0z_FYPx2fmJzXwwlH872oISV17XgE.jpg?width=320&crop=smart&auto=webp&s=6d8044d7c02ecb0e1568350f64a8a6d3f202c406', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WfgawjqoQqEbzX0z_FYPx2fmJzXwwlH872oISV17XgE.jpg?width=640&crop=smart&auto=webp&s=de7fafe23a18cea71d9d219f3f9938caeac6b346', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WfgawjqoQqEbzX0z_FYPx2fmJzXwwlH872oISV17XgE.jpg?width=960&crop=smart&auto=webp&s=aa822fa312021e92d38ec49fcc9ffa9e71653768', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WfgawjqoQqEbzX0z_FYPx2fmJzXwwlH872oISV17XgE.jpg?width=1080&crop=smart&auto=webp&s=09f5be1febd7ff5911ae5c49113a09bcbcc24193', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WfgawjqoQqEbzX0z_FYPx2fmJzXwwlH872oISV17XgE.jpg?auto=webp&s=3b73c6953b00ba46f7d35881aa0ceec5d9d71c25', 'width': 1200}, 'variants': {}}]} |
Current best codebase for pretraining a model from scratch? | 1 | Hello, does anyone know if there is a codebase that supports pretraining with FlashAttention2, grouped query attention, and rotary embeddings? | 2023-08-12T19:02:17 | https://www.reddit.com/r/LocalLLaMA/comments/15pcm26/current_best_codebase_for_pretraining_a_model/ | ZealousidealBlock330 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pcm26 | false | null | t3_15pcm26 | /r/LocalLLaMA/comments/15pcm26/current_best_codebase_for_pretraining_a_model/ | false | false | self | 1 | null |
what does a loss of 1e+9 mean? | 1 | I'm trying to finetune llama2-7B and my loss [appears to be out of control](https://i.imgur.com/0N8Momf.png).
but. what does that actually mean? Pausing the training to test some output, the LLM seems coherent and picked up some of the style of my training data. Nothing seems "broken" other than this number. | 2023-08-12T19:28:09 | https://www.reddit.com/r/LocalLLaMA/comments/15pd8sx/what_does_a_loss_of_1e9_mean/ | scibot9000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pd8sx | false | null | t3_15pd8sx | /r/LocalLLaMA/comments/15pd8sx/what_does_a_loss_of_1e9_mean/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'GfNyyU8vCykXPHu-Ru2Rd0wbbiID_z4JTvgy_P-lN7A', 'resolutions': [{'height': 78, 'url': 'https://external-preview.redd.it/wgsyrWfRsstkwqmJ1M_CGDIGLvohvbOG5cMOp21xt4M.png?width=108&crop=smart&auto=webp&s=0fb6fe69febc016423d368e20d64547c2792a47c', 'width': 108}, {'height': 156, 'url': 'https://external-preview.redd.it/wgsyrWfRsstkwqmJ1M_CGDIGLvohvbOG5cMOp21xt4M.png?width=216&crop=smart&auto=webp&s=a30f80c0631bd9c111bf9a64341b1b2473c1f885', 'width': 216}, {'height': 232, 'url': 'https://external-preview.redd.it/wgsyrWfRsstkwqmJ1M_CGDIGLvohvbOG5cMOp21xt4M.png?width=320&crop=smart&auto=webp&s=4acffa0e0bbfd4b7ef06da4947fcc0c3e6f2056e', 'width': 320}], 'source': {'height': 247, 'url': 'https://external-preview.redd.it/wgsyrWfRsstkwqmJ1M_CGDIGLvohvbOG5cMOp21xt4M.png?auto=webp&s=e7c67769be4cf7c6341beccb0af19878662f73c9', 'width': 340}, 'variants': {}}]} |
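For reference, causal-LM training loss is the mean cross-entropy in nats, so perplexity = exp(loss); typical LLaMA fine-tune losses sit in the low single digits. A quick check of what different values imply:

    import math

    for loss in (1.6, 3.0, 1e9):
        try:
            print(loss, "-> perplexity", math.exp(loss))
        except OverflowError:
            print(loss, "-> overflows; not a meaningful cross-entropy value")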
Does local llama2 remember all the conversations and make it my customised assistant? | 1 | I'm thinking to setup Llama2 in my local machine and make all the personal related conversations in one chat session. Will Llama2 remember all the history conversations and response based on it? Not sure if it any limitations on how long and how many the conversations history will keep. | 2023-08-12T20:15:50 | https://www.reddit.com/r/LocalLLaMA/comments/15peebg/does_local_llama2_remember_all_the_conversations/ | newfire1112 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15peebg | false | null | t3_15peebg | /r/LocalLLaMA/comments/15peebg/does_local_llama2_remember_all_the_conversations/ | false | false | self | 1 | null |
Vicuna on AMD APU via Vulkan & MLC | 1 | After much trial and error I got this working, so I thought I'd jot down some notes. Both for myself and in case it helps others (especially since AMD APU LLMs aren't something I've seen covered on here).
On a 4700U (AMD Radeon RX Vega 7) so we're talking APU on a low TDP processor...and passively cooled in my case. Unsurprisingly it's not winning the speed race:
>Statistics: prefill: 7.5 tok/s, decode: 2.2 tok/s
...but this is a headless server so the GPU part of APU is literally idle 24/7. Free performance haha.
----------
**Includes some really ugly hacks because I have no idea what I'm doing :p You've been warned.**
Also, this is on proxmox. If you're on vanilla debian/ubuntu chances are you'll need less hacky stuff. Hope I got everything...pulled this out of cli history that had lots of noise from trial & error.
----------
Check that we've got the APU listed:
apt install lshw -y
lshw -c video
OpenCL install:
apt install ocl-icd-libopencl1 mesa-opencl-icd clinfo -y
clinfo
Mesa drivers:
apt install libvulkan1 mesa-vulkan-drivers vulkan-tools
Vulkan SDK. It seems to require specifically the SDK. Just Vulkan didn't work for me. pytorch couldn't pick it up.
apt update
wget -qO - http://packages.lunarg.com/lunarg-signing-key-pub.asc | sudo apt-key add -
wget -qO - http://packages.lunarg.com/lunarg-signing-key-pub.asc | apt-key add -
wget -qO /etc/apt/sources.list.d/lunarg-vulkan-focal.list http://packages.lunarg.com/vulkan/lunarg-vulkan-focal.list
apt update
apt upgrade -y
apt install vulkan-sdk
If you're lucky that'll just work. For me it did not. I was missing libjsoncpp1_1.7.4, which I just installed as a deb. The qt5-default metapackage I couldn't get installed at all (likely due to proxmox), because the vulkancapsviewer module refused to install. I won't need that, so I just installed everything in the metapackage except that:
echo "vulkancapsviewer" >> dont-want.txt
apt-cache depends vulkan-sdk | awk '$1 == "Depends:" {print $2}' | grep -vFf dont-want.txt
apt install vulkan-headers libvulkan-dev vulkan-validationlayers vulkan-validationlayers-dev vulkan-tools lunarg-via lunarg-vkconfig lunarg-vulkan-layers spirv-headers spirv-tools spirv-cross spirv-cross-dev glslang-tools glslang-dev shaderc lunarg-gfxreconstruct dxc spirv-reflect vulkan-extensionlayer vulkan-profiles volk vma
Check if it worked:
vulkaninfo
To get pytorch to pick up vulkan we need to recompile it with vulkan.
git clone https://github.com/pytorch/pytorch.git
USE_VULKAN=1 USE_VULKAN_SHADERC_RUNTIME=1 USE_VULKAN_WRAPPER=0 python3 setup.py install
The github version didn't compile for me. So had to edit the code. Specifically:
/root/pytorch/aten/src/ATen/native/vulkan/impl/Arithmetic.cpp
Around line 10 the case statement needed a default case:
default:
// Handle any other unspecified cases
throw std::invalid_argument("Invalid OpType provided");
After that the above compile line worked. This one:
USE_VULKAN=1 USE_VULKAN_SHADERC_RUNTIME=1 USE_VULKAN_WRAPPER=0 python3 setup.py install
Note that vulkan-tools showing the device isn't enough on its own to get pytorch to pick up Vulkan; the rebuild above is what matters. To check from python:
import torch
print(torch.is_vulkan_available())
If everything worked then you'll get a TRUE.
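For a slightly fuller smoke test, something like this should also work (the .to("vulkan") call is an assumption about the prototype Vulkan backend, so treat it as optional):
    import torch
    print(torch.is_vulkan_available())        # should print True on the rebuilt wheel
    x = torch.rand(1, 3, 8, 8).to("vulkan")   # actually moving a tensor to the Vulkan device
    print(x.device)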
You'll likely also need to change the amount of memory allocated to the GPU in your bios. In my case that was called UMA Frame buffer. Mine seems to be limited to 8GB, much to my dismay (was hoping for 16GB).
You can check that it worked via:
clinfo | grep Global
Alternatively, check htop...the total memory shown will have reduced.
Next I installed MLC-AI [here](https://mlc.ai/package/). Installed the CPU package.
Next tried their MLC [chat app](https://mlc.ai/mlc-llm/docs/get_started/try_out.html). The default llama2 model was using vulkan but generating gibberish (?!?). Switched to mlc-chat-vicuna-v1-7b-q3f16_0 instead and now it works. :)
System automatically detected device: vulkan
Using model folder: /root/dist/prebuilt/mlc-chat-vicuna-v1-7b-q3f16_0
Using mlc chat config: /root/dist/prebuilt/mlc-chat-vicuna-v1-7b-q3f16_0/mlc-chat-config.json
Using library model: /root/dist/prebuilt/lib/vicuna-v1-7b-q3f16_0-vulkan.so | 2023-08-12T23:15:30 | https://www.reddit.com/r/LocalLLaMA/comments/15pipso/vicuna_on_amd_apu_via_vulkan_mlc/ | AnomalyNexus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pipso | false | null | t3_15pipso | /r/LocalLLaMA/comments/15pipso/vicuna_on_amd_apu_via_vulkan_mlc/ | false | false | self | 1 | null |
Running Llama Faster | 1 | I am currently trying to run the 8-bit version of llama-7b-chat on GPU, and it is taking about 20 seconds to generate a response each time. Is there any way to make this faster? | 2023-08-13T00:03:27 | https://www.reddit.com/r/LocalLLaMA/comments/15pjtnd/running_llama_faster/ | Grand-Garage-6479 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pjtnd | false | null | t3_15pjtnd | /r/LocalLLaMA/comments/15pjtnd/running_llama_faster/ | false | false | self | 1 | null
EverythingLM-13b-16k: New uncensored model trained on experimental new dataset | 1 | [https://huggingface.co/totally-not-an-llm/EverythingLM-13b-16k](https://huggingface.co/totally-not-an-llm/EverythingLM-13b-16k)
Trained on a LIMA-style dataset of only 1k samples. The dataset combines principles from WizardLM (evol-instruct) and Orca (system prompts & CoT). From my testing the model performs well; however, treat this as a preview model. I have a lot of future plans for better models.
GPTQ's & GGML's are available thanks to TheBloke, links are on the HF page. The ggml's are buggy and is an issue I am working on, so use GPTQ's if you can for now. | 2023-08-13T00:11:41 | https://www.reddit.com/r/LocalLLaMA/comments/15pk0ia/everythinglm13b16k_new_uncensored_model_trained/ | pokeuser61 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pk0ia | false | null | t3_15pk0ia | /r/LocalLLaMA/comments/15pk0ia/everythinglm13b16k_new_uncensored_model_trained/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'AZvlMlPQKij9jyNTa1Fec2KKfNfs6cOECEgEvphnk_Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Lou7aTiqZBNYBeGTMfRLKyiMBSV-O7CJfqv2dwQ2efQ.jpg?width=108&crop=smart&auto=webp&s=e58720b4e47f2e35477d17c5adc1942ac5689792', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Lou7aTiqZBNYBeGTMfRLKyiMBSV-O7CJfqv2dwQ2efQ.jpg?width=216&crop=smart&auto=webp&s=b0f7e87fbfeb088221eaa0cac0e6f6d0b277e5c5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Lou7aTiqZBNYBeGTMfRLKyiMBSV-O7CJfqv2dwQ2efQ.jpg?width=320&crop=smart&auto=webp&s=0d148d927c6bd6ca47e16f0598e50543d70cddaf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Lou7aTiqZBNYBeGTMfRLKyiMBSV-O7CJfqv2dwQ2efQ.jpg?width=640&crop=smart&auto=webp&s=fcedf0cedfbf36cd68cf1834484c37ccf69d7067', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Lou7aTiqZBNYBeGTMfRLKyiMBSV-O7CJfqv2dwQ2efQ.jpg?width=960&crop=smart&auto=webp&s=33e6962e4bada6896d2d8b5f07b7de8d968bb76c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Lou7aTiqZBNYBeGTMfRLKyiMBSV-O7CJfqv2dwQ2efQ.jpg?width=1080&crop=smart&auto=webp&s=dbd8c63454f58dd69485f6240a886839885f6c30', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Lou7aTiqZBNYBeGTMfRLKyiMBSV-O7CJfqv2dwQ2efQ.jpg?auto=webp&s=2911b4d72de75af0c0da931754af9ffd81ea9c2a', 'width': 1200}, 'variants': {}}]} |
🎨🦙I Finetuned LLAMA2 on SD Prompts | 1 | 2023-08-13T00:19:45 | https://youtu.be/dg_8cGzzfY4 | ImpactFrames-YT | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 15pk70f | false | {'oembed': {'author_name': 'ImpactFrames', 'author_url': 'https://www.youtube.com/@impactframes', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/dg_8cGzzfY4?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="🎨👩🏻\u200d🎨LLM for SD prompts IF_PromptMKR_GPTQ 🦙🦙"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/dg_8cGzzfY4/hqdefault.jpg', 'thumbnail_width': 480, 'title': '🎨👩🏻\u200d🎨LLM for SD prompts IF_PromptMKR_GPTQ 🦙🦙', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_15pk70f | /r/LocalLLaMA/comments/15pk70f/i_finetuned_llama2_on_sd_prompts/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'SywhsScbWVzef9Co4jGVFSa8xdCQa_H3Msft-PCJa7U', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/fPSOIbLjC6K4ofTIWKe38yt5mBIcjAwaenmxgeXwmxI.jpg?width=108&crop=smart&auto=webp&s=940d2de0c93e4cb904320a27e6c83cdb7b9bba6e', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/fPSOIbLjC6K4ofTIWKe38yt5mBIcjAwaenmxgeXwmxI.jpg?width=216&crop=smart&auto=webp&s=7005cebecc21f2bde630792644df96c9a0570387', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/fPSOIbLjC6K4ofTIWKe38yt5mBIcjAwaenmxgeXwmxI.jpg?width=320&crop=smart&auto=webp&s=2daf4d6b308174b862835f9845936239b60ddde2', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/fPSOIbLjC6K4ofTIWKe38yt5mBIcjAwaenmxgeXwmxI.jpg?auto=webp&s=25bf8b2cc21013da8fc57b4a66972af1de2515cc', 'width': 480}, 'variants': {}}]} |
||
Improving the speed of a GGML model running on GPU | 1 | I am using Vicuna 1.5 13b quantized to 8 bits in llama.cpp. All layers have been offloaded to GPU. I had tried earlier with the 5-bit quantized model but its performance was lacking, so I'm using the 8-bit one now. To get longer answers, I also increased the max_tokens to 1000 from 250.
I've noticed significant slowdowns when increasing max_tokens. Is it due to the autoregressive nature of the generation, where as the output becomes larger, it has to consume a larger amount of text to produce the next token?
I've tried model_n_batch=1024 to see if a larger number of parallel tokens helps improve speed. I am seeing a plateau here where the same value that worked for the 5-bit model continues to work well, with higher values not being helpful.
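For context, the setup is roughly equivalent to this (a minimal sketch in llama-cpp-python; the path and values are placeholders, and the wrapper I'm actually using may name things differently):
    from llama_cpp import Llama
    llm = Llama(
        model_path="vicuna-13b-v1.5.ggmlv3.q8_0.bin",  # placeholder path
        n_gpu_layers=43,   # offload all layers to the GPU
        n_batch=1024,      # prompt-processing batch size
        n_ctx=4096,
    )
    out = llm("USER: ...\nASSISTANT:", max_tokens=1000)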
Any other settings that might be helpful here? | 2023-08-13T00:49:21 | https://www.reddit.com/r/LocalLLaMA/comments/15pktxn/improving_the_speed_of_a_ggml_model_running_on_gpu/ | nlpllama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pktxn | false | null | t3_15pktxn | /r/LocalLLaMA/comments/15pktxn/improving_the_speed_of_a_ggml_model_running_on_gpu/ | false | false | self | 1 | null |
Core Dumped error when loading model KoboldCPP | 1 | [https://github.com/YellowRoseCx/koboldcpp-rocm](https://github.com/YellowRoseCx/koboldcpp-rocm)
Identified as LLAMA model: (ver 5)
Attempting to Load...
\---
Using automatic RoPE scaling (scale:1.000, base:10000.0)
System Info: AVX = 1 | AVX2 = 1 | AVX512 = 1 | AVX512\_VBMI = 1 | AVX512\_VNNI = 1 | FMA = 1 | NEON = 0 | ARM\_FMA = 0 | F16C = 1 | FP16\_VA = 0 | WASM\_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 |
llama.cpp: loading model from /home/??????/Downloads/chronos-hermes-13b-v2.ggmlv3.q4\_0.bin
llama\_model\_load\_internal: format = ggjt v3 (latest)
llama\_model\_load\_internal: n\_vocab = 32032
llama\_model\_load\_internal: n\_ctx = 512
llama\_model\_load\_internal: n\_embd = 5120
llama\_model\_load\_internal: n\_mult = 6912
llama\_model\_load\_internal: n\_head = 40
llama\_model\_load\_internal: n\_head\_kv = 40
llama\_model\_load\_internal: n\_layer = 40
llama\_model\_load\_internal: n\_rot = 128
llama\_model\_load\_internal: n\_gqa = 1
llama\_model\_load\_internal: rnorm\_eps = 5.0e-06
llama\_model\_load\_internal: n\_ff = 13824
llama\_model\_load\_internal: freq\_base = 10000.0
llama\_model\_load\_internal: freq\_scale = 1
llama\_model\_load\_internal: ftype = 2 (mostly Q4\_0)
llama\_model\_load\_internal: model size = 13B
llama\_model\_load\_internal: ggml ctx size = 0.11 MB
ggml\_init\_cublas: found 2 CUDA devices:
Device 0: AMD Radeon RX 6900 XT, compute capability 10.3
Device 1: AMD Radeon Graphics, compute capability 10.3
llama\_model\_load\_internal: using CUDA for GPU acceleration
ggml\_cuda\_set\_main\_device: using device 0 (AMD Radeon RX 6900 XT) as main device
llama\_model\_load\_internal: mem required = 594.09 MB (+ 400.00 MB per state)
llama\_model\_load\_internal: allocating batch\_size x (640 kB + n\_ctx x 160 B) = 360 MB VRAM for the scratch buffer
llama\_model\_load\_internal: offloading 40 repeating layers to GPU
llama\_model\_load\_internal: offloading non-repeating layers to GPU
llama\_model\_load\_internal: offloading v cache to GPU
llama\_model\_load\_internal: offloading k cache to GPU
llama\_model\_load\_internal: offloaded 43/43 layers to GPU
llama\_model\_load\_internal: total VRAM used: 7656 MB
llama\_new\_context\_with\_model: kv self size = 400.00 MB
Segmentation fault (core dumped)
​ | 2023-08-13T01:42:10 | https://www.reddit.com/r/LocalLLaMA/comments/15plxzx/core_dumped_error_when_loading_model_koboldcpp/ | meutron | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15plxzx | false | null | t3_15plxzx | /r/LocalLLaMA/comments/15plxzx/core_dumped_error_when_loading_model_koboldcpp/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'P1smCGPSCZsuOp4Te2lbGteOQjrcfy6j7e0DMacnxKM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ERCN-X7o2afa0tRgW-JG4YIt_pVv54p7aikWScYJaSA.jpg?width=108&crop=smart&auto=webp&s=52647eb7a82946dce5c2d509054ecbb7810f6f41', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ERCN-X7o2afa0tRgW-JG4YIt_pVv54p7aikWScYJaSA.jpg?width=216&crop=smart&auto=webp&s=0e72e3d00429f8008d201dae83463b5705d30124', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ERCN-X7o2afa0tRgW-JG4YIt_pVv54p7aikWScYJaSA.jpg?width=320&crop=smart&auto=webp&s=71132008d17895a8669da7174fed7ef454375a2c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ERCN-X7o2afa0tRgW-JG4YIt_pVv54p7aikWScYJaSA.jpg?width=640&crop=smart&auto=webp&s=e2fc632147f5039fb503e911d0c3b67035b2352d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ERCN-X7o2afa0tRgW-JG4YIt_pVv54p7aikWScYJaSA.jpg?width=960&crop=smart&auto=webp&s=996682acaa0c315cd34ecc5fb89a8827eab4b7cc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ERCN-X7o2afa0tRgW-JG4YIt_pVv54p7aikWScYJaSA.jpg?width=1080&crop=smart&auto=webp&s=a22baaca9f63519f77be4ea5d6adfa4540e5a101', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ERCN-X7o2afa0tRgW-JG4YIt_pVv54p7aikWScYJaSA.jpg?auto=webp&s=58ac0ff38047da84b93a0bcb7d27310dd8739429', 'width': 1200}, 'variants': {}}]} |
Weird llama.cpp failure | 1 | I've been very happily using llama.cpp for inference with various GGML-formatted models for months, and just yesterday my model-downloader script finished pulling down TheBloke's starcoderplus-GGML, yaay! I was looking forward to playing with it over the weekend.
When I ran the model card's example prompt through it, though, llama.cpp's main failed to load the model, claiming unexpected end of file:
ttk@kirov:/home/ttk/tools/ai$ llama.cpp.git/main -m models.local/starcoderplus.ggmlv3.q4_1.bin -n 300 -p "<fim_prefix>def print_hello_world():\n <fim_suffix>\n print('Hello world'3:30 tomorrow')<fim_middle>"
main: build = 978 (f64d44a)
main: seed = 1691890406
llama.cpp: loading model from models.local/starcoderplus.ggmlv3.q4_1.bin
error loading model: unexpectedly reached end of file
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'models.local/starcoderplus.ggmlv3.q4_1.bin'
main: error: unable to load model
I "git pull"'d to update my copy of llama.cpp, recompiled main, and there was no change. I can still load and use other 4-bit quantized GGML models, like guanaco-7b.
I checked the sha256 checksum of the model file, and it matches the checksum in the huggingface repo's LFS pointer, so it's a faithful copy of what's on huggingface:
ttk@kirov:/raid/models$ shasum -a 256 starcoderplus-GGML.git/starcoderplus.ggmlv3.q4_1.bin
78c612e4ebd7a49de32b085dc7b05afca88c132f63a7231e037dbcc175bd9b3e starcoderplus-GGML.git/starcoderplus.ggmlv3.q4_1.bin
ttk@kirov:/raid/models$ grep sha256 starcoderplus-GGML.git/starcoderplus.ggmlv3.q4_1.bin.orig
    oid sha256:78c612e4ebd7a49de32b085dc7b05afca88c132f63a7231e037dbcc175bd9b3e
Here's a full transcript of updating llama.cpp, rebuilding main, and re-running the prompt, also "uname -a" output:
http://ciar.org/h/llamacpp_fail.txt
Has anyone else been using this model successfully?
u/The-Bloke does this make any sense to you?
That machine has 32GB of RAM, which should be plenty for CPU inference. | 2023-08-13T02:38:32 | https://www.reddit.com/r/LocalLLaMA/comments/15pn3mx/weird_llamacpp_failure/ | ttkciar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pn3mx | false | null | t3_15pn3mx | /r/LocalLLaMA/comments/15pn3mx/weird_llamacpp_failure/ | false | false | self | 1 | null |
LLM trained on fiction / literature? | 1 | Is there an LLM that was trained on literature specifically? To use as an editor / corrector for example? | 2023-08-13T03:04:09 | https://www.reddit.com/r/LocalLLaMA/comments/15pnmny/llm_trained_on_fiction_literature/ | myreptilianbrain | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pnmny | false | null | t3_15pnmny | /r/LocalLLaMA/comments/15pnmny/llm_trained_on_fiction_literature/ | false | false | self | 1 | null |
5X speed boost on oobabooga when using a set seed. | 1 | I was quite surprised to find that while playing around with the settings I got a 5x speed boost on exllama if i had a set seed. I went from 6.6 tk/s to 35.2 tk/s. not sure why this would have that great of an impact but it did. | 2023-08-13T03:27:26 | https://www.reddit.com/r/LocalLLaMA/comments/15po3ex/5x_speed_boost_on_oobabooga_when_using_a_set_seed/ | BackyardAnarchist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15po3ex | false | null | t3_15po3ex | /r/LocalLLaMA/comments/15po3ex/5x_speed_boost_on_oobabooga_when_using_a_set_seed/ | false | false | self | 1 | null |
Multi GPU performance | 1 | I've read all the posts I could find on this but haven't seen many actual numbers. Has anyone run a 33b model with multiple 8-16gb GPUs? If yes, what kind of t/s are you able to get? I'm getting 3-4 t/s at 2k context and am wondering if it's worth adding something like a P100 or 3060. I know a 3090 would be much better but if I can get decent performance without it that would be preferable.
I also have an 8gb 5700xt - I'm assuming I can't use it with the 4070 but lmk if I'm wrong. I thought I saw someone say they got it working on linux but I can't find the post.
I'd appreciate any insight anyone has, even if you don't have specific numbers. Thanks in advance.
13700
4070ti 12gb
32gb DDR5 5600
Windows 10
​
oobabooga
llama.cpp
n-gpu-layers: 36
threads: 9 | 2023-08-13T04:38:48 | https://www.reddit.com/r/LocalLLaMA/comments/15ppgz8/multi_gpu_performance/ | alyssa1055 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ppgz8 | false | null | t3_15ppgz8 | /r/LocalLLaMA/comments/15ppgz8/multi_gpu_performance/ | false | false | self | 1 | null |
Training LLM on Call Center Data | 1 | So, I have a call center and I am looking to test automating those calls. I wanted to know how to go about it. I have call recordings available. What model would be best to use as a call center agent, and how should I approach it? Any suggestions would be appreciated. | 2023-08-13T05:52:04 | https://www.reddit.com/r/LocalLLaMA/comments/15pqsws/training_llm_on_call_center_data/ | nolovenoshame | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pqsws | false | null | t3_15pqsws | /r/LocalLLaMA/comments/15pqsws/training_llm_on_call_center_data/ | false | false | self | 1 | null
LLama2 with GeForce 1080 8Gb | 1 | Hi. I am trying to run LLama2 on my server, which has the aforementioned nvidia card. It's a simple hello world case you can [find here](https://huggingface.co/blog/llama2). However I am constantly running into memory issues:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 250.00 MiB (GPU 0; 7.92 GiB total capacity; 7.12 GiB already allocated; 241.62 MiB free; 7.18 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I tried
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
but same effect. Is there anything I can do? | 2023-08-13T06:28:22 | https://www.reddit.com/r/LocalLLaMA/comments/15prfwe/llama2_with_geforce_1080_8gb/ | vonGlick | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15prfwe | false | null | t3_15prfwe | /r/LocalLLaMA/comments/15prfwe/llama2_with_geforce_1080_8gb/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'urd-gOpHx6DzqXeQqsy2yaeJA0EJHFkUW198WyZ0Q3A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/APagEKjViS_LhDH7O1czDc3SAIFNKdhuvgRCTbfDjH0.jpg?width=108&crop=smart&auto=webp&s=3a8143bf595d2a1bee3d138841856378eb2e0030', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/APagEKjViS_LhDH7O1czDc3SAIFNKdhuvgRCTbfDjH0.jpg?width=216&crop=smart&auto=webp&s=b2a753604d8f09eca2670fe6aa3e3d68577676b7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/APagEKjViS_LhDH7O1czDc3SAIFNKdhuvgRCTbfDjH0.jpg?width=320&crop=smart&auto=webp&s=4d730223a776274cf6188d25e0d0f65f9ac64601', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/APagEKjViS_LhDH7O1czDc3SAIFNKdhuvgRCTbfDjH0.jpg?width=640&crop=smart&auto=webp&s=cdcc131f68e029b2b0c16d30dea4d25aac49879f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/APagEKjViS_LhDH7O1czDc3SAIFNKdhuvgRCTbfDjH0.jpg?width=960&crop=smart&auto=webp&s=2b0b7a4430320f0e902efa0cc656d9422388c6ba', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/APagEKjViS_LhDH7O1czDc3SAIFNKdhuvgRCTbfDjH0.jpg?width=1080&crop=smart&auto=webp&s=5510c87c7c86d94614d6999ee5c231cae5686436', 'width': 1080}], 'source': {'height': 1160, 'url': 'https://external-preview.redd.it/APagEKjViS_LhDH7O1czDc3SAIFNKdhuvgRCTbfDjH0.jpg?auto=webp&s=328b1af048abef43ece61400b0e074f168198bf7', 'width': 2320}, 'variants': {}}]} |
Making an app for GPT and llama | 1 | [removed] | 2023-08-13T06:51:42 | https://www.reddit.com/r/LocalLLaMA/comments/15pru0g/making_an_app_for_gpt_and_llama/ | Ok-Face3238 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pru0g | false | null | t3_15pru0g | /r/LocalLLaMA/comments/15pru0g/making_an_app_for_gpt_and_llama/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'IJqeWGcgeQRTo0q7zvjRC-E6cTtR03hgdDsQVwQnFEQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/ks9LCB9XkN4FBZ_RN29YpWBLMijFAnFY1DfXi9pHgbE.jpg?width=108&crop=smart&auto=webp&s=758f56d3751ab0cd454d1f17f976d90ec666a701', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/ks9LCB9XkN4FBZ_RN29YpWBLMijFAnFY1DfXi9pHgbE.jpg?width=216&crop=smart&auto=webp&s=d07698a3cfd19f2df4d189ccb354ecbdabb79221', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/ks9LCB9XkN4FBZ_RN29YpWBLMijFAnFY1DfXi9pHgbE.jpg?width=320&crop=smart&auto=webp&s=918a2f462d5111a000e890b0e98a2ba75a49f939', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/ks9LCB9XkN4FBZ_RN29YpWBLMijFAnFY1DfXi9pHgbE.jpg?width=640&crop=smart&auto=webp&s=b1527f91fe66c024dfdedf8565569b0d52cb2b80', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/ks9LCB9XkN4FBZ_RN29YpWBLMijFAnFY1DfXi9pHgbE.jpg?width=960&crop=smart&auto=webp&s=b99652a7432ddecea17383f243fd1c86450dd78e', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/ks9LCB9XkN4FBZ_RN29YpWBLMijFAnFY1DfXi9pHgbE.jpg?width=1080&crop=smart&auto=webp&s=1024375d1bf439afd7bfe9206e4fb15473b05796', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/ks9LCB9XkN4FBZ_RN29YpWBLMijFAnFY1DfXi9pHgbE.jpg?auto=webp&s=4b8497256d0fd51d8b679772d0e4c95648d9feea', 'width': 1200}, 'variants': {}}]} |
Anyone went to prod with LLAMA-70B? Without quantization? | 1 | I am aiming at implementing LLaMA-v2 in prod at full precision (the concern here is the quality of the output). Are you aware of any resources on what tricks and tweaks I should follow? We will use p4dn instances, and maybe fine-tune some features as well… | 2023-08-13T07:32:45 | https://www.reddit.com/r/LocalLLaMA/comments/15psj9h/anyone_went_to_prod_with_llama70b_without/ | at_nlp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15psj9h | false | null | t3_15psj9h | /r/LocalLLaMA/comments/15psj9h/anyone_went_to_prod_with_llama70b_without/ | false | false | self | 1 | null
where is the code | 1 | I don't get it. It's said to be open source, but where is the source code? Are init.py, generation.py, model.py, and tokenizer.py on GitHub the whole source code? I really don't think you could make a whole LLM with just 4 Python files. | 2023-08-13T07:35:49 | https://www.reddit.com/r/LocalLLaMA/comments/15pskyg/where_is_the_code/ | bull_shit123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pskyg | false | null | t3_15pskyg | /r/LocalLLaMA/comments/15pskyg/where_is_the_code/ | false | false | self | 1 | null
Whisper cpp GGML Quantized? | 1 | https://github.com/ggerganov/whisper.cpp
I tried this out when it was first released, and I was looking forward to q4 versions of the "large" model being released.
https://huggingface.co/ggerganov/whisper.cpp/tree/main
Given it's a GGML model, I thought I could use quantize.exe, but I'm getting an error when I try to run it on the whisper models.
Whisper v2 models have been released as well, but don't have corresponding GGML models:
https://huggingface.co/openai/whisper-large-v2/tree/main
Are there any repos where I can get 1) whispercpp GGML quantized and 2) whispercpp v2 GGML ? | 2023-08-13T07:47:41 | https://www.reddit.com/r/LocalLLaMA/comments/15psrtg/whisper_ccp_ggml_quantized/ | noellarkin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15psrtg | false | null | t3_15psrtg | /r/LocalLLaMA/comments/15psrtg/whisper_ccp_ggml_quantized/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'xakWJimd33OFeE8FWiBtxQS91zTgXEV6RUNxWdzm62Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dl2tmLDKJP2aIEd7gsoGGIYV81l6UBGBVebn9C7JS74.jpg?width=108&crop=smart&auto=webp&s=8d42dd9100bbdc2edde65dc3abbd30b2aaa8cbdc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dl2tmLDKJP2aIEd7gsoGGIYV81l6UBGBVebn9C7JS74.jpg?width=216&crop=smart&auto=webp&s=f1a2d34385f5584cd8fc7e5cc2fbb992be921503', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dl2tmLDKJP2aIEd7gsoGGIYV81l6UBGBVebn9C7JS74.jpg?width=320&crop=smart&auto=webp&s=59a1a96a9da29f68c162532a0ce542d6e629224d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dl2tmLDKJP2aIEd7gsoGGIYV81l6UBGBVebn9C7JS74.jpg?width=640&crop=smart&auto=webp&s=8fd95c995b5a899d724fa260b810b9cbc2609c03', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dl2tmLDKJP2aIEd7gsoGGIYV81l6UBGBVebn9C7JS74.jpg?width=960&crop=smart&auto=webp&s=8332634a9d923a986621ae45c99b4634017c4eaa', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dl2tmLDKJP2aIEd7gsoGGIYV81l6UBGBVebn9C7JS74.jpg?width=1080&crop=smart&auto=webp&s=90445891ce31b4cef62c63ebaf9629a9480ff806', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/dl2tmLDKJP2aIEd7gsoGGIYV81l6UBGBVebn9C7JS74.jpg?auto=webp&s=2b0dc6b19f7b2d2b10ded3abce89062ede2d04f3', 'width': 1280}, 'variants': {}}]} |
How to fine-tune btlm 3b | 1 | [removed] | 2023-08-13T08:11:26 | https://www.reddit.com/r/LocalLLaMA/comments/15pt64k/how_to_finetune_btlm_3b/ | Sufficient_Run1518 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pt64k | false | null | t3_15pt64k | /r/LocalLLaMA/comments/15pt64k/how_to_finetune_btlm_3b/ | false | false | self | 1 | null |
[Critique this idea] distilling from a larger model to a smaller one | 1 | Hello,
I have been wondering about the application of knowledge distillation from a stronger to a smaller model.
The crux of the idea is to collect responses from the stronger model on a lot of prompts. These may be standalone or may also be augmented with retrievals.
These prompt+response pairs are then used to perform a round of training/finetuning on a weaker model.
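Concretely, I picture the data-collection half as something like this (a rough sketch; ask_teacher and the file name are placeholders for whatever strong model and storage you actually use):
    import json
    def ask_teacher(prompt: str) -> str:
        # placeholder: call the stronger model here (70b, an unquantized model, GPT-4, ...)
        return "teacher response goes here"
    prompts = ["Explain X in the target domain...", "Summarise document Y..."]
    with open("distill_dataset.jsonl", "w") as f:
        for p in prompts:
            # each line becomes one instruction/output pair for the later finetune
            f.write(json.dumps({"instruction": p, "output": ask_teacher(p)}) + "\n")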
This idea may be applied to, e.g., distilling responses from 70b to 13b, or from unquantized to quantized versions. Further, an approach like qlora will probably be needed to allow this operation to happen on typical hardware, so its "strength" or "impact" should be expected to be similar to finetuning.
An argument against this could be made that the weaker models are already at "capacity" with what they've learnt, but a counter to that is that in other ML domains (at least as I understand it) distilling from a stronger model has been able to take a weaker model beyond its vanilla performance. Another loose argument could be that finetuning on domain-specific questions can pull the weaker model to give better performance on the domain at the cost of general performance, which is also what we expect from general finetuning.
Would love to get people's opinions on this. any pitfalls you anticipate? has this already been done? | 2023-08-13T09:18:34 | https://www.reddit.com/r/LocalLLaMA/comments/15pu9rd/critique_this_idea_distilling_from_a_larger_model/ | T_hank | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pu9rd | false | null | t3_15pu9rd | /r/LocalLLaMA/comments/15pu9rd/critique_this_idea_distilling_from_a_larger_model/ | false | false | self | 1 | null |
what are the different types of training/finetuning we do? | 1 | trying to make sense of the different training and finetuning options that are usually suggested.
- We can train on raw text. This is mostly to complete stubs or starting passages, but I think it is mostly used for pretraining. Or is this idea also helpful in finetuning?
- We can collect many prompt/response pairs from GPT-4, keep the high-quality ones, and finetune on those.
Also, finetuning requires that we do not disturb too many parameters, so we use LoRA. I've also seen QLoRA mentioned; is that for when we have quantized models?
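For concreteness, the LoRA route usually looks something like this (a minimal sketch with the peft library; the target modules shown are a common assumption for llama-style models, and for QLoRA you'd load the base model in 4-bit first):
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model
    base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # placeholder model
    lora_cfg = LoraConfig(
        r=16, lora_alpha=32, lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],   # which projection layers get adapters
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(base, lora_cfg)
    model.print_trainable_parameters()  # only the small adapter matrices are trainable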
Are there other ways of finetuning that exist or are applicable in specific cases? | 2023-08-13T09:26:47 | https://www.reddit.com/r/LocalLLaMA/comments/15puei4/what_are_the_different_types_of/ | olaconquistador | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15puei4 | false | null | t3_15puei4 | /r/LocalLLaMA/comments/15puei4/what_are_the_different_types_of/ | false | false | self | 1 | null |
EE_Ion my attempt at an English to Spanish translator | 1 | I am frustrated by how wrong Google Translate or Deepl are in certain situations, mostly when dealing with technical, specialised terms or when dealing with all kinds of markup (html, xml, Markdown) or named placeholders (Hello ${name} - it is clear that I do not want name to be translated).
So, I was thinking that I need a way to translate in context. I curated a dataset of 250,000 examples with in-context translations and started to fine-tune Llama-1 and 2.
After many tries and failures with 7B models I managed to get promising results on a Llama-2 13B model. It is still a long way to go, but it is good enough for an alpha release.
So, I present to you: **EE\_Ion 13B English to Spanish in context translator.** [https://huggingface.co/iongpt/EE\_Ion\_en\_es-v1\_0\_alpha-fp16/settings](https://huggingface.co/iongpt/EE_Ion_en_es-v1_0_alpha-fp16/settings)
It is able to respond in the first block with the actual translation, but it starts spitting garbage after that. For me it is usable because I am using a script to translate apps, so I am just ignoring anything after the first block.
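(The post-processing is trivial; a sketch of what I mean, assuming a blank line separates the first block from the garbage:)
    def first_block(generation: str) -> str:
        # keep only the text before the first blank line
        return generation.split("\n\n", 1)[0].strip()
    print(first_block("Hola, ${name}!\n\nanything after this gets dropped"))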
I also did a 4bit GPTQ quant, but that is performing badly (it is not always responding with the correct translation in the first block), probably my quant script was off.
My plan is to continue fine tuning this one and after it works flawlessly to move to other languages. I am already collecting data for the French and German datasets
I am not sure if anyone else is interested in this so I am posting this here to understand if there is a need for something like this. | 2023-08-13T10:28:44 | https://www.reddit.com/r/LocalLLaMA/comments/15pvfqg/ee_ion_my_attempt_to_an_english_to_spanish/ | Ion_GPT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pvfqg | false | null | t3_15pvfqg | /r/LocalLLaMA/comments/15pvfqg/ee_ion_my_attempt_to_an_english_to_spanish/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '_rEX1xvwdv17x6NFAWQpYFNONQ0BKA5Qw0Eo0JX0zWU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=108&crop=smart&auto=webp&s=17279fa911dbea17f2a87e187f47ad903120ba87', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=216&crop=smart&auto=webp&s=12bf202fa02a8f40e2ad8bab106916e06cceb1b4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=320&crop=smart&auto=webp&s=90ff2c682d87ee483233b1136984d608f8b5c5c3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=640&crop=smart&auto=webp&s=2bc95e1b2395af837db2786db2f84b9c7f86370a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=960&crop=smart&auto=webp&s=67e903b600e020b7bcf93fc2000ed3cf95cb4dbb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=1080&crop=smart&auto=webp&s=b4cb1ebc087816d879ac777ed29f74d454f35955', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?auto=webp&s=a4fb691b1b470f21e5ef01685267735cb15b7735', 'width': 1200}, 'variants': {}}]} |
is there any model/prompt that can generate r/HFY style stories? | 1 | [removed] | 2023-08-13T10:38:37 | https://www.reddit.com/r/LocalLLaMA/comments/15pvlja/is_there_any_modelprompt_that_can_generate_rhfy/ | happydadinau | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pvlja | false | null | t3_15pvlja | /r/LocalLLaMA/comments/15pvlja/is_there_any_modelprompt_that_can_generate_rhfy/ | false | false | self | 1 | null |
"Lost and Llama-less: Join the Search for the Missing LLAMA - A Heartfelt Plea for Help in The Great LLAMA Hunt" | 1 | "Lost and Llama-less: Join the Search for the Missing LLAMA - A Heartfelt Plea for Help in The Great LLAMA Hunt"
So, as a llama-less, lost newbie in the world of AI who has been hunting for his llama for a long time, I have a simple question: how can I get this llama on my poor pc without spending a gazillion dollars on new hardware?
1. How can I run the LLama2 13b model on my R5 5600G?
2. I don't have a discrete GPU, and I'm not planning on getting one anytime soon.
3. How much RAM do I need? Currently I've got 16GB, and I am fine with buying more.
4. Please give me some steps to follow or a link; even if it's not detailed, providing some steps will help me start from there.
5. Thank you in advance 🙂
- Sakamoto, the man pursuing the llama dream for weeks, and still hunting.
Ain't no llama gonna defeat me, am gonna get this llama asap 😎 | 2023-08-13T11:40:59 | https://www.reddit.com/r/LocalLLaMA/comments/15pwp53/lost_and_llamaless_join_the_search_for_the/ | SakamotoKyu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pwp53 | false | null | t3_15pwp53 | /r/LocalLLaMA/comments/15pwp53/lost_and_llamaless_join_the_search_for_the/ | false | false | default | 1 | null |
I am confused. So many models to choose from. | 1 | I have been looking at different models on Hugging Face and am getting overwhelmed by all these different models; I am unable to differentiate between them.
Airoboros, Guanaco, Vicuna, Orca, Wizard, Platypus, Beluga, Chronos, Hermes, LlongMa, etc.
I mean what are the differences between them? They seem to all strive to become all around AI models and the differences are too technical for me to understand. Is there an easier way to differentiate them to know which one is better? Or do I really have to try them one by one?
My aim is simple. I am looking for a 13B llama-2 based GGML model (q4_k_s preferably) for a simple AI assistant with a tweaked personality of my choice (I use oobabooga character chat settings). Nothing extremely hard, but I want my AI to stay consistent with the context assigned to them while being an AI assistant (ie: tsundere or mischievous personality etc). Any insight from those who are more experienced is greatly appreciated. Thank you! | 2023-08-13T12:10:48 | https://www.reddit.com/r/LocalLLaMA/comments/15px9x1/i_am_confused_so_many_models_to_choose_from/ | Spirited_Employee_61 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15px9x1 | false | null | t3_15px9x1 | /r/LocalLLaMA/comments/15px9x1/i_am_confused_so_many_models_to_choose_from/ | false | false | self | 1 | null
A local alternative to the code-search-ada-code-001? | 1 | Does anyone know the local alternative to the code-search-ada-code-001 or any embedding generator for the source code?
Thanks! | 2023-08-13T12:28:05 | https://www.reddit.com/r/LocalLLaMA/comments/15pxmrl/a_local_alternative_to_the_codesearchadacode001/ | Greg_Z_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pxmrl | false | null | t3_15pxmrl | /r/LocalLLaMA/comments/15pxmrl/a_local_alternative_to_the_codesearchadacode001/ | false | false | self | 1 | null |
🧠💻Using Code to Boost ChatGPT's Thinking: Can We Teach the LLAMA Model to Do the Same? Share Your Thoughts! | 1 | Hello :)
It seems that chatgpt is more intelligent if it leverages coding to help it think:
[https://chat.openai.com/share/af8e1bdb-b0cc-4e04-b539-6546e67e35c1](https://chat.openai.com/share/af8e1bdb-b0cc-4e04-b539-6546e67e35c1)
[https://chat.openai.com/share/972c2129-3614-40c5-a133-2403ce7bc9b2](https://chat.openai.com/share/972c2129-3614-40c5-a133-2403ce7bc9b2)
​
**Is there a way to tell a LLAMA model to write code to help it think?**
​
thanks :) | 2023-08-13T12:46:42 | https://www.reddit.com/r/LocalLLaMA/comments/15py0no/using_code_to_boost_chatgpts_thinking_can_we/ | dewijones92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15py0no | false | null | t3_15py0no | /r/LocalLLaMA/comments/15py0no/using_code_to_boost_chatgpts_thinking_can_we/ | false | false | self | 1 | null |
Llama 2 70B's Response when asked about Llama | 1 | I was thinking of writing an article and got a bit lazy when I was almost finished with it. So I thought, why not ask the model itself what to write about it. This is what happened when I asked it to list some key points about Llama (it's kinda more focused on that leak part).
Website: chat.nbox.ai | 2023-08-13T13:07:09 | Automatic-Net-757 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15pygpf | false | null | t3_15pygpf | /r/LocalLLaMA/comments/15pygpf/llama_2_70bs_response_when_asked_about_llama/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'FkiVgLhTOmKn-35ats9vfpKz-c4Thw4THbwzbCONhXI', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/8bwm63akmvhb1.jpg?width=108&crop=smart&auto=webp&s=2ff51fb3c7861979a3c9e98e9aef2e565d6cd005', 'width': 108}, {'height': 92, 'url': 'https://preview.redd.it/8bwm63akmvhb1.jpg?width=216&crop=smart&auto=webp&s=9acb87d6522fe9eb37f026d5af4d1e8fcf0e26fb', 'width': 216}, {'height': 136, 'url': 'https://preview.redd.it/8bwm63akmvhb1.jpg?width=320&crop=smart&auto=webp&s=4ac09c12e0150daddd7caa0d4e0580fcd9486d52', 'width': 320}, {'height': 272, 'url': 'https://preview.redd.it/8bwm63akmvhb1.jpg?width=640&crop=smart&auto=webp&s=8d53d9c891b1404b68880f6f3a86115910b21ee0', 'width': 640}, {'height': 408, 'url': 'https://preview.redd.it/8bwm63akmvhb1.jpg?width=960&crop=smart&auto=webp&s=0480f2dd0f0192bee5d32827da1dd4390e267c17', 'width': 960}, {'height': 460, 'url': 'https://preview.redd.it/8bwm63akmvhb1.jpg?width=1080&crop=smart&auto=webp&s=3223d4df8463720c0daa6cfa955c8cc837c4bc3d', 'width': 1080}], 'source': {'height': 460, 'url': 'https://preview.redd.it/8bwm63akmvhb1.jpg?auto=webp&s=7129965339b1a7813a6db3d11ad3f31bbee314b9', 'width': 1080}, 'variants': {}}]} |
||
Run LLama-2 13B, very fast, Locally on Low-Cost Intel ARC GPU | 1 | 2023-08-13T13:12:59 | https://youtu.be/FRWy7rzOsRs | reps_up | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 15pylcd | false | {'oembed': {'author_name': 'AI Tarun', 'author_url': 'https://www.youtube.com/@aitarun', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/FRWy7rzOsRs?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="Run LLama-2 13B, very fast, Locally on Low Cost Intel's ARC GPU , iGPU and on CPU"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/FRWy7rzOsRs/hqdefault.jpg', 'thumbnail_width': 480, 'title': "Run LLama-2 13B, very fast, Locally on Low Cost Intel's ARC GPU , iGPU and on CPU", 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_15pylcd | /r/LocalLLaMA/comments/15pylcd/run_llama2_13b_very_fast_locally_on_lowcost_intel/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'kSQFhAvJQ8IEQAJTvXBRh3sOntWdaff5gmY-OsSQGe4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/9H6PqHNdncp-vBqqNSRqidLtx5P9xZUWHWZAbBgcmdk.jpg?width=108&crop=smart&auto=webp&s=64c55e32f22dc19fd9f0597cb11f0b6632a4a104', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/9H6PqHNdncp-vBqqNSRqidLtx5P9xZUWHWZAbBgcmdk.jpg?width=216&crop=smart&auto=webp&s=c4136fab824134b5de8a3b9f7b5627e87a5212e5', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/9H6PqHNdncp-vBqqNSRqidLtx5P9xZUWHWZAbBgcmdk.jpg?width=320&crop=smart&auto=webp&s=8aeef0d151332450e6191ad298e16c1b9effc776', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/9H6PqHNdncp-vBqqNSRqidLtx5P9xZUWHWZAbBgcmdk.jpg?auto=webp&s=85c6941ee292b62efe6c79e36356fae4a14a2047', 'width': 480}, 'variants': {}}]} |
|
Why I am disappointed by Llama(2) | 1 | [removed] | 2023-08-13T13:16:16 | https://www.reddit.com/r/LocalLLaMA/comments/15pynzd/why_i_am_disappointed_by_llama2/ | SecretOk9644 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pynzd | false | null | t3_15pynzd | /r/LocalLLaMA/comments/15pynzd/why_i_am_disappointed_by_llama2/ | false | false | self | 1 | null |
Fine-tune the llama 2 via SFT and DPO | 1 | Finally, I managed to get out from my addiction to Diablo 4 and found some time to work on llama2 :p. Here is the repo containing the scripts for my experiments with fine-tuning the llama2 base model for my grammar corrector app. So far, the performance of llama2 13b seems as good as llama1 33b. However, I'm not really happy with the results after applying DPO alignment with human preferences; somehow it makes the model's output kind of off-purpose.
[https://github.com/mzbac/llama2-fine-tune](https://github.com/mzbac/llama2-fine-tune)
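For anyone who just wants the shape of the DPO step, it boils down to roughly this (a sketch with trl's DPOTrainer; the model name, hyperparameters and dataset path are placeholders, not necessarily what the repo uses, and argument names may differ between trl versions):
    from datasets import load_dataset
    from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
    from trl import DPOTrainer
    name = "meta-llama/Llama-2-13b-hf"                      # placeholder base model
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)
    ref_model = AutoModelForCausalLM.from_pretrained(name)  # frozen reference policy
    # dataset needs "prompt", "chosen" and "rejected" columns
    train_ds = load_dataset("json", data_files="dpo_pairs.jsonl", split="train")
    trainer = DPOTrainer(
        model,
        ref_model,
        beta=0.1,                                           # strength of the KL penalty
        train_dataset=train_ds,
        tokenizer=tokenizer,
        args=TrainingArguments(output_dir="dpo-out", per_device_train_batch_size=1,
                               gradient_accumulation_steps=8, learning_rate=5e-6),
    )
    trainer.train()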
Hope the code can help you guys to try out the DPO on your custom dataset. | 2023-08-13T14:23:06 | https://www.reddit.com/r/LocalLLaMA/comments/15q07a1/finetune_the_llama_2_via_sft_and_dpo/ | mzbacd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q07a1 | false | null | t3_15q07a1 | /r/LocalLLaMA/comments/15q07a1/finetune_the_llama_2_via_sft_and_dpo/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'mzlpUxLxgdIHkm_czLL4hXE5jvTQpS4GujfRXdQg7Rs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wQBnWuG_Z6oeYg-plsjYEpuj9i9AjJQZeMgT-ntqr7k.jpg?width=108&crop=smart&auto=webp&s=5a504b706b5c91366e8aee7f19537543552d16d3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wQBnWuG_Z6oeYg-plsjYEpuj9i9AjJQZeMgT-ntqr7k.jpg?width=216&crop=smart&auto=webp&s=3d2f21e0187dc1d935e559a05bacf9fb66c5e2c5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wQBnWuG_Z6oeYg-plsjYEpuj9i9AjJQZeMgT-ntqr7k.jpg?width=320&crop=smart&auto=webp&s=f597a8e18bef463fd1d11ce3c759dd45e25897ec', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wQBnWuG_Z6oeYg-plsjYEpuj9i9AjJQZeMgT-ntqr7k.jpg?width=640&crop=smart&auto=webp&s=290285f6edfacdb69f799b68d5cc7154fd3ff9bc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wQBnWuG_Z6oeYg-plsjYEpuj9i9AjJQZeMgT-ntqr7k.jpg?width=960&crop=smart&auto=webp&s=e0be13ab7222ba907ee6d05cc8c8e66ae9441b01', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wQBnWuG_Z6oeYg-plsjYEpuj9i9AjJQZeMgT-ntqr7k.jpg?width=1080&crop=smart&auto=webp&s=aa83b23eaf5a4d1555d5ae90d8f9eb0adf3bb299', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wQBnWuG_Z6oeYg-plsjYEpuj9i9AjJQZeMgT-ntqr7k.jpg?auto=webp&s=fc5f4772ce55a86ec85a0f93d86993a8b574c751', 'width': 1200}, 'variants': {}}]} |
Newbie is confused about how to train Llama 2, needs hand-holding. | 1 | [removed] | 2023-08-13T14:36:29 | https://www.reddit.com/r/LocalLLaMA/comments/15q0iyc/newbie_is_confused_about_how_to_train_llama_2/ | emotionalHunterEx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q0iyc | false | null | t3_15q0iyc | /r/LocalLLaMA/comments/15q0iyc/newbie_is_confused_about_how_to_train_llama_2/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'z2HdRfGrX_QS4_TnwDeHjTgrpOd2uGmfmEZQf63iZWI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/E9s1YS_pvBGEZmSqZIuanbwW6PusBWiPmN9jS6rO-xo.jpg?width=108&crop=smart&auto=webp&s=d840bf220765e7b6df8c36771f071c82dc53eee4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/E9s1YS_pvBGEZmSqZIuanbwW6PusBWiPmN9jS6rO-xo.jpg?width=216&crop=smart&auto=webp&s=714db9b135c12543746691b8a956acfd07122580', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/E9s1YS_pvBGEZmSqZIuanbwW6PusBWiPmN9jS6rO-xo.jpg?width=320&crop=smart&auto=webp&s=e1a8f89ae830c69fa429ef112b425aba1b64bdf2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/E9s1YS_pvBGEZmSqZIuanbwW6PusBWiPmN9jS6rO-xo.jpg?width=640&crop=smart&auto=webp&s=31e2c79449868e179793a1f2d70f5d78de751d08', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/E9s1YS_pvBGEZmSqZIuanbwW6PusBWiPmN9jS6rO-xo.jpg?width=960&crop=smart&auto=webp&s=262b4daf154aadda8f746529eb973650ecbe9e01', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/E9s1YS_pvBGEZmSqZIuanbwW6PusBWiPmN9jS6rO-xo.jpg?width=1080&crop=smart&auto=webp&s=700bfff52f422ffd0ff53c1ea12551bbdee98a62', 'width': 1080}], 'source': {'height': 1012, 'url': 'https://external-preview.redd.it/E9s1YS_pvBGEZmSqZIuanbwW6PusBWiPmN9jS6rO-xo.jpg?auto=webp&s=c2f80796e75ceb2043e71b915e84ad78ae348afa', 'width': 2024}, 'variants': {}}]} |
How to get up to speed? | 1 | There is a knowledge pyramid here, and some people who know everything there is to know about LLaMA (the ones who made it) at the top, and me at the bottom. I’m ignorant.
Is there a single book that I can read that will get me to the level of the pyramid, not at the tip top, but the level where one is generally proficient at installing, using, training and deploying LLaMA models?
I’m a CS student about to graduate and none of my classes touched on LLMs. | 2023-08-13T14:39:39 | https://www.reddit.com/r/LocalLLaMA/comments/15q0lmw/how_to_get_up_to_speed/ | Overall-Importance54 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q0lmw | false | null | t3_15q0lmw | /r/LocalLLaMA/comments/15q0lmw/how_to_get_up_to_speed/ | false | false | self | 1 | null |
Can someone recommend a basic setup for a 4090 and 128GB RAM? | 1 | I would like to have something similar to ChatGPT running locally. It's mostly used to improve emails and social media posts. My GPU is often executing 3ds max rendering and SD imagery, so I can't have something that will remain loaded in the VRAM.
I got Oobabooga running a while ago, and it's OK, but the interface and configuration are confusing.
I like the idea of performing these tasks offline and have tried, but so many models and specific tasks for coding, I get bogged down in technical jargon. Thank you! | 2023-08-13T15:53:50 | https://www.reddit.com/r/LocalLLaMA/comments/15q2e8c/can_someone_recommend_a_basic_setup_for_a_4090/ | Sweet_Baby_Moses | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q2e8c | false | null | t3_15q2e8c | /r/LocalLLaMA/comments/15q2e8c/can_someone_recommend_a_basic_setup_for_a_4090/ | false | false | self | 1 | null |
Noob question: How do I make use of all my VRAM with llama.cpp in oobabooga webui? | 1 | I have two GPUs with 12GB VRAM each. Offloading 28 layers, I get almost 12GB usage on one card, and around 8.5GB on the second, during inference. Setting n-gpu-layers any higher gives me an out of memory error. How can I make use of that remaining 3.5 GB on the second GPU? | 2023-08-13T16:13:07 | https://www.reddit.com/r/LocalLLaMA/comments/15q2vql/noob_question_how_do_i_make_use_of_all_my_vram/ | Acceptable-Trade-46 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q2vql | false | null | t3_15q2vql | /r/LocalLLaMA/comments/15q2vql/noob_question_how_do_i_make_use_of_all_my_vram/ | false | false | self | 1 | null |
Complete noob: chatting with a set of local documents? | 1 | Hi, I discovered the LocalGPT and Chatdocs projects on github a while ago and really liked the idea of "chatting" with my growing pdf library and receive answers.
However I'm very new to the LM/AI world so I hope you don't mind my question... What would be the easiest option to run something like that locally? Would my current laptop (amd 6800HS + Nvidia RTX 4060, 32gb ram) be enough, or do I need better hardware, different models, etc?
Thanks in advance | 2023-08-13T16:15:46 | https://www.reddit.com/r/LocalLLaMA/comments/15q2y80/complete_noob_chatting_with_a_set_of_local/ | TheGlobinKing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q2y80 | false | null | t3_15q2y80 | /r/LocalLLaMA/comments/15q2y80/complete_noob_chatting_with_a_set_of_local/ | false | false | self | 1 | null |
Llama for document matching? | 1 | [removed] | 2023-08-13T16:34:07 | https://www.reddit.com/r/LocalLLaMA/comments/15q3fg1/llama_for_document_matching/ | BoxLazy8046 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q3fg1 | false | null | t3_15q3fg1 | /r/LocalLLaMA/comments/15q3fg1/llama_for_document_matching/ | false | false | self | 1 | null |
How should I preprocess my text to optimize text embeddings? | 1 | Pretty much the title.
I've heard of preprocessing strategies such as lowercasing, stop word removal, stemming, lemmatization, punctuation removal, special characters removal, and regular expression removal. However, many of these strategies seem like they might remove semantically relevant information about the text. For example, wouldn't lowercasing, lemmatization, punctuation removal, and stemming get rid of important grammatical information that the embedding model could use to more accurately vectorize text?
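To make the question concrete, here is the kind of check I have in mind (a sketch with sentence-transformers; the model name and example strings are just placeholders):
    from sentence_transformers import SentenceTransformer, util
    model = SentenceTransformer("all-MiniLM-L6-v2")     # placeholder embedding model
    raw = "The court DENIED the motion; see Rule 12(b)(6)."
    cleaned = "court denied motion see rule 12 b 6"     # lowercased, stop words and punctuation stripped
    emb = model.encode([raw, cleaned])
    print(util.cos_sim(emb[0], emb[1]))                 # how far the "cleaning" moved the vector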
My guess is that I should try to preprocess my text using the same methods used in the embedding model's training data. What do y'all think? | 2023-08-13T17:35:02 | https://www.reddit.com/r/LocalLLaMA/comments/15q4ze6/how_should_i_preprocess_my_text_to_optimize_text/ | malicious510 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q4ze6 | false | null | t3_15q4ze6 | /r/LocalLLaMA/comments/15q4ze6/how_should_i_preprocess_my_text_to_optimize_text/ | false | false | self | 1 | null |
LLM Hardware Setup vs Speed Question | 1 | I've been using SillyTavern for a while and although I'm happy with the quality of the outputs, it is just too slow on my old GPU or laptop. I'm curious how worth it an investment into a 3090 or 4090 would be. I'm only using a model like this:
[mythomax-l2-13b.ggmlv3.q4\_K\_M.bin](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGML/blob/main/mythomax-l2-13b.ggmlv3.q4_K_M.bin)
so it is quantized and has 13 billion parameters. If any of you have experience with this, I'd love to learn more about how fast a model like this would run and what setup you have respectively. Thanks for the help. | 2023-08-13T17:35:24 | https://www.reddit.com/r/LocalLLaMA/comments/15q4zrd/llm_hardware_setup_vs_speed_question/ | Dramatic_Road3570 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q4zrd | false | null | t3_15q4zrd | /r/LocalLLaMA/comments/15q4zrd/llm_hardware_setup_vs_speed_question/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'oL3fMXfQ2MO77UAAm5ordmM6HOjTmLuuyhcTIG7-kag', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/UbmQkF4DAuEHoyRsqvf8GLRZ4kaTHo08gPbJsHhpsuE.jpg?width=108&crop=smart&auto=webp&s=abd8b47541465ae92daa7d48de36f185ad7df83e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/UbmQkF4DAuEHoyRsqvf8GLRZ4kaTHo08gPbJsHhpsuE.jpg?width=216&crop=smart&auto=webp&s=bf56208def4d316d27b536a50605345f5ed8100a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/UbmQkF4DAuEHoyRsqvf8GLRZ4kaTHo08gPbJsHhpsuE.jpg?width=320&crop=smart&auto=webp&s=6595de4493b07b69df5b749a6260560ad0ddf080', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/UbmQkF4DAuEHoyRsqvf8GLRZ4kaTHo08gPbJsHhpsuE.jpg?width=640&crop=smart&auto=webp&s=a29443628b49f33d3a6398595400da4d7e4b2be3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/UbmQkF4DAuEHoyRsqvf8GLRZ4kaTHo08gPbJsHhpsuE.jpg?width=960&crop=smart&auto=webp&s=24353a123ef4c28e1a1d115e5de2ee27a89d903d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/UbmQkF4DAuEHoyRsqvf8GLRZ4kaTHo08gPbJsHhpsuE.jpg?width=1080&crop=smart&auto=webp&s=226d229b2113866132cda77a23d6fd3c74f7c3d5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/UbmQkF4DAuEHoyRsqvf8GLRZ4kaTHo08gPbJsHhpsuE.jpg?auto=webp&s=088c47855c8d6415b20b4ede92e4f71d19d4a859', 'width': 1200}, 'variants': {}}]} |
has anyone been able to create a production-level model? | 1 | Has anyone been able to create a production-level model? I have been researching LLaMA and others, and unfortunately everyone is complaining.
I have a bunch of documents, let's say 10,000 pages, and I want the model to be an expert in them and answer questions that chatpdf would not be able to answer. What are my options? Is it possible to train the model on all the pages instead of having to use search and feed context into the LLM? | 2023-08-13T17:56:48 | https://www.reddit.com/r/LocalLLaMA/comments/15q5iz2/has_anyone_been_able_to_create_production_level/ | affilitebabra998 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q5iz2 | false | null | t3_15q5iz2 | /r/LocalLLaMA/comments/15q5iz2/has_anyone_been_able_to_create_production_level/ | false | false | self | 1 | null
How should I chunk text from a textbook for the best embedding results? | 6 | My guess is that I should follow the natural structure of the textbook and chunk my text by chapter, section, subsection, etc while retaining the relevant metadata. The problem is that I have no idea how to do that lol.
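The closest I've come up with is something like this (a sketch using LangChain's splitters; it assumes the book has already been converted to markdown-style headings, and the file path is a placeholder):
    from langchain.text_splitter import MarkdownHeaderTextSplitter, RecursiveCharacterTextSplitter
    book_markdown = open("textbook.md").read()  # placeholder path
    headers = [("#", "chapter"), ("##", "section"), ("###", "subsection")]
    sections = MarkdownHeaderTextSplitter(headers_to_split_on=headers).split_text(book_markdown)
    # cut long sections down to embedding-sized chunks while keeping the header metadata
    chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(sections)
    print(chunks[0].metadata)  # e.g. {'chapter': ..., 'section': ...}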
​
Can someone tell me a better way to chunk a textbook or give me the basic guidelines so I can ask ChatGPT? | 2023-08-13T17:56:58 | https://www.reddit.com/r/LocalLLaMA/comments/15q5j48/how_should_i_chunk_text_from_a_textbook_for_the/ | malicious510 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q5j48 | false | null | t3_15q5j48 | /r/LocalLLaMA/comments/15q5j48/how_should_i_chunk_text_from_a_textbook_for_the/ | false | false | default | 6 | null |
Thoughts on having a MAC MINI Powered local server setup? | 1 | I was looking into building a local server LLM setup and was considering going with two used MAC MINIs or a MAC Studio.
Is there a better price to performance ratio that I can achieve, or any other thoughts about this approach? | 2023-08-13T18:12:08 | https://www.reddit.com/r/LocalLLaMA/comments/15q5wsg/thoughts_on_having_a_mac_mini_powered_local/ | Gravy_Pouch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q5wsg | false | null | t3_15q5wsg | /r/LocalLLaMA/comments/15q5wsg/thoughts_on_having_a_mac_mini_powered_local/ | false | false | self | 1 | null |
How important is choosing an embedding model? | 1 | Does it really matter which embedding model I choose for RAG? So far, I've been blindly picking the top overall MTEB models from hugging face. I have no knowledge of what any of the benchmarks mean, but it doesn't seem like the models differ much in performance. | 2023-08-13T18:23:32 | https://www.reddit.com/r/LocalLLaMA/comments/15q66z3/how_important_is_choosing_an_embedding_model/ | malicious510 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q66z3 | false | null | t3_15q66z3 | /r/LocalLLaMA/comments/15q66z3/how_important_is_choosing_an_embedding_model/ | false | false | self | 1 | null |
Bright Eye: free mobile IOS app that generates text and art! | 1 | 2023-08-13T18:56:17 | https://v.redd.it/30ad0zhhcxhb1 | AI4MI | /r/LocalLLaMA/comments/15q6zx1/bright_eye_free_mobile_ios_app_that_generates/ | 1970-01-01T00:00:00 | 0 | {} | 15q6zx1 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/30ad0zhhcxhb1/DASHPlaylist.mpd?a=1694631382%2CYzZiMDIwM2Q1YmZlOTVmMTExNGQ3NDE1YWM4NjAyODc1Zjc3ZmM2ZGRkYTcxNDE4MzcyYWYwZTYwZjg2MDM0OA%3D%3D&v=1&f=sd', 'duration': 84, 'fallback_url': 'https://v.redd.it/30ad0zhhcxhb1/DASH_720.mp4?source=fallback', 'height': 1280, 'hls_url': 'https://v.redd.it/30ad0zhhcxhb1/HLSPlaylist.m3u8?a=1694631382%2CZTY2OTRiNjg2MzBiZGU4YjNlN2VmMzlmNWVhMjQyYmEzNWM1NzEwNGM5MWU4OTIwYzIzNzYwYjc2ZGQ5OTY5Mg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/30ad0zhhcxhb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 592}} | t3_15q6zx1 | /r/LocalLLaMA/comments/15q6zx1/bright_eye_free_mobile_ios_app_that_generates/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'bDBsdWI2YmhjeGhiMXJosUbBtQYMk-MO6hKQEDCiuHUuB_ymhUsOgHgLkPFr', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/bDBsdWI2YmhjeGhiMXJosUbBtQYMk-MO6hKQEDCiuHUuB_ymhUsOgHgLkPFr.png?width=108&crop=smart&format=pjpg&auto=webp&s=b90bf8bbc0220225066c52d2e68c8cb25eeaf420', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/bDBsdWI2YmhjeGhiMXJosUbBtQYMk-MO6hKQEDCiuHUuB_ymhUsOgHgLkPFr.png?width=216&crop=smart&format=pjpg&auto=webp&s=1ff593595727c346802557f609c3620c47641ff2', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/bDBsdWI2YmhjeGhiMXJosUbBtQYMk-MO6hKQEDCiuHUuB_ymhUsOgHgLkPFr.png?width=320&crop=smart&format=pjpg&auto=webp&s=51d0ba0838a5e275dffe70c1a8c443bb5db280f2', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/bDBsdWI2YmhjeGhiMXJosUbBtQYMk-MO6hKQEDCiuHUuB_ymhUsOgHgLkPFr.png?width=640&crop=smart&format=pjpg&auto=webp&s=3a708ff7ed6078b26ef2bf89318e54d65c2c68d7', 'width': 640}], 'source': {'height': 1792, 'url': 'https://external-preview.redd.it/bDBsdWI2YmhjeGhiMXJosUbBtQYMk-MO6hKQEDCiuHUuB_ymhUsOgHgLkPFr.png?format=pjpg&auto=webp&s=96f4ddbfc6bc6886a3631a30d19ec9d778bf9192', 'width': 828}, 'variants': {}}]} |
||
Repetition penalty application in proportion to historical token frequency. Thoughts? | 1 | I have a model that I run for long periods of time on the same context. Days, or hundreds of thousands of tokens will go through the same context
One of the issues I see when the model has been running for a while is that it slowly drifts from its base. One of the biggest issues is that the model will frequently stop using words like "I", "and", and "to". Eventually this snowballs the generated text into nothing but a list of words, which effectively ruins the session. As the model drops these "linking" words it starts to learn that sentences are strings of nouns/verbs, which then becomes "that would be bad mean evil terrible horrible undesirable [...]", and the context is irrevocably damaged. A hard reset is required.
I've been kind of toying with the idea of an "inverse repetition penalty" for a while, or you could call it an "infrequency" penalty. There are certain words or tokens I want to appear at a particular frequency, and I've even selected one of these tokens and written a quick "test". For this token, the longer a single response goes on, the more of a bias is applied, until the token appears. Specifically I've selected the "*" token to enforce an RP style of response. After the changes, the model no longer deviates from including role-playing. I consider this a massive success.
So back to the other problem, I decided I'd start by simply excluding tokens like "and", "if", and "I" from repetition penalty. I pulled up SSMS (I log all token returns to a database) and noticed that all of the tokens I wanted to exclude from repetition penalty were by far the most frequently returned. 5000-10,000 instances VS less than hundreds for almost every other token.
Then I got to thinking, would/should I compare the text as it's being generated, to a historical frequency of tokens (sanitized) and apply an adjustment based on the expected vs. actual return rate? It almost seems like exactly the sort of thing I would want to do, but it also seems like the kind of thing that could have some adverse long term side effects.
Historically I have I think 30,000 messages to and from the bot logged right now, so that's a sizable sample size. I imagine I could pre-filter by checking the perplexity against the base model, and then calculate the average token frequencies, and then apply a slight adjustment during generation to guide the model towards matching that historical frequency. A bit of a "mini guidance"
I'm curious as to whether or not anyone sees any immediate issues with this method, as well as the potential for this method to be used as a form of guidance as a whole. | 2023-08-13T18:56:37 | https://www.reddit.com/r/LocalLLaMA/comments/15q707k/repetition_penalty_application_in_proportion_to/ | mrjackspade | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q707k | false | null | t3_15q707k | /r/LocalLLaMA/comments/15q707k/repetition_penalty_application_in_proportion_to/ | false | false | self | 1 | null |
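For what it's worth, the "expected vs. actual frequency" idea sketches out fairly simply if you treat it as one more logit adjustment applied before sampling. This is a backend-agnostic sketch working on a plain dict of logits; the data structures, the linear bias, and the `strength` knob are all assumptions, and a real integration would live inside whatever sampler the runtime exposes:

```python
def frequency_guidance(logits, session_counts, expected_share, strength=5.0):
    """Nudge next-token logits toward a historical token-frequency profile.

    logits:         token_id -> raw logit for the next-token distribution
    session_counts: token_id -> occurrences of the token in the current session
    expected_share: token_id -> fraction of all tokens this one makes up historically
    """
    total = max(1, sum(session_counts.values()))
    adjusted = {}
    for tok, logit in logits.items():
        expected = expected_share.get(tok, 0.0)
        actual = session_counts.get(tok, 0) / total
        # under-represented tokens get a positive bias, over-represented a negative one
        adjusted[tok] = logit + strength * (expected - actual)
    return adjusted
```

The obvious failure mode is the same as with any guidance: push `strength` too hard and the model starts inserting "I"/"and"/"to" where they do not belong, so the bias probably wants a cap, or to be applied only to a whitelist of linking words.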
Is there a way to add positive and negative keywords? | 6 | So, when generating images with Stable Diffusion, you add positive and negative prompts to guide the weighting. You fill the positive prompt with a rough idea of what you want, usually a list of keywords. And you add keywords to the negative prompts in order to avoid certain things, like nudity, blood or extra limbs.
When using Ooba(for example), you provide the prompt, and the model attempts to complete your prompt for you. You can format it like this in order to guide it:
Summary: An explorer lands on an alien planet, finds plants that thrive on geothermal vents instead of sunlight, and they turn out to be intelligent.
Story: Captain James steps off the access ramp of the ship
This prompt will generate a story from that point with most models. Some obviously better than others. But fairly often, the plants will turn out hostile and we're back in blood and gore land. Or if the model is a frisky one, we've got hentai. If this were Stable Diffusion, I could just add blood, gore, nudity, etc to the negative prompt. But I can't seem to find a way to make it avoid topics or terms I don't want.
For things I *do* want, it is possible to add "Tags: " somewhere in the prompt, which helps. But is often ignored.
So, I ask: Can I add negative prompts somewhere? Can I add positive prompts somewhere, in addition to the prompt that becomes the start of the story? | 2023-08-13T20:30:59 | https://www.reddit.com/r/LocalLLaMA/comments/15q9dq0/is_there_a_way_to_add_positive_and_negative/ | Pumpkim | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q9dq0 | false | null | t3_15q9dq0 | /r/LocalLLaMA/comments/15q9dq0/is_there_a_way_to_add_positive_and_negative/ | false | false | self | 6 | null |
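There is no exact analogue of Stable Diffusion's soft negative weighting in most text front-ends, but the closest off-the-shelf knob is token banning (a hard constraint) or logit bias. A minimal sketch with the `transformers` library; the model name and banned phrases are placeholders, and note that a ban only covers those exact token sequences, so capitalised or mid-word variants still slip through:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; any causal LM loadable through transformers works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# "negative prompt" emulation: these token sequences are never allowed in the output
banned_phrases = [" blood", " gore", " naked"]
bad_words_ids = [tokenizer(p, add_special_tokens=False).input_ids for p in banned_phrases]

prompt = ("Summary: An explorer lands on an alien planet and finds intelligent geothermal plants.\n"
          "Story: Captain James steps off the access ramp of the ship")
inputs = tokenizer(prompt, return_tensors="pt")

output = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    bad_words_ids=bad_words_ids,  # hard ban, unlike SD's weighted negative prompt
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

For positive steering, keeping the "Tags:" line in the prompt (and repeating it as the context grows) is still the main lever short of fine-tuning.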
Oh No, I just Realized Something | 1 | I just realized something about AI.
If you have (now or in a few years) an AI friend / assistant /etc with a character and a personality and you built a long time relationship with him/her.
If that AI is hosted on a company's server:
The company might get hacked
Or the company might simply move away from AI altogether to something else, and shut down all its AIs
And if you have your AI friend locally in your pc/phone:
Your PC might get hacked, and the hacker might literally lock your AI friend using ransomware/a virus etc. and threaten you: "money or I'll kill it". That might sound sci-fi to you, but think about a few years from now, or even months from now, when everyone might have an AI assistant.
Or simply your Hard Drive or SSD might die suddenly for no reason.
Or your phone might get dropped in water somehow and never turn on again.
Just think about the emotional damage here.
Side Note: I am not saying I am for or against AI here; that is not my intention while writing this,
I am just trying to share my thoughts/risks. | 2023-08-13T20:46:48 | https://www.reddit.com/r/LocalLLaMA/comments/15q9rs0/oh_no_i_just_realized_something/ | SakamotoKyu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q9rs0 | false | null | t3_15q9rs0 | /r/LocalLLaMA/comments/15q9rs0/oh_no_i_just_realized_something/ | false | false | self | 1 | null |
CUDA out of memory when finetuning 3B model | 1 | Hello everyone I'm trying to finetune a 3b model on Colab's T4 but I'm getting CUDA out of memory issues. I'm using batch size of 1 and bfloat16. What did I do wrong?
CODE:
[https://gist.github.com/AmgadHasan/72b06cb8adc2d2217cca8c6790858685](https://gist.github.com/AmgadHasan/72b06cb8adc2d2217cca8c6790858685) | 2023-08-13T21:12:10 | https://www.reddit.com/r/LocalLLaMA/comments/15qaeom/cuda_out_of_memory_when_finetuning_3b_model/ | Amgadoz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qaeom | false | null | t3_15qaeom | /r/LocalLLaMA/comments/15qaeom/cuda_out_of_memory_when_finetuning_3b_model/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=108&crop=smart&auto=webp&s=d5811c5bda5fece1040636a6af8702ba790f0fd4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=216&crop=smart&auto=webp&s=eee576fd4da7535eb53ceb88dd8b52f073048441', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=320&crop=smart&auto=webp&s=72872d880460efa723918c000adca0ed259cf775', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=640&crop=smart&auto=webp&s=f3545b9335d763c9da9c16bf7bf9a3f907dbd6f6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=960&crop=smart&auto=webp&s=2d241ace0f1c07088fac3f8469dbad3b05d2d419', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=1080&crop=smart&auto=webp&s=9055f11bdc00beb0b3589e1cae5817d6070d83bc', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?auto=webp&s=079a7260ec149880c73263d64811698adb22760a', 'width': 1280}, 'variants': {}}]} |
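Without seeing the exact traceback, the usual culprit is full finetuning itself: a 3B model in 16-bit is roughly 6 GB of weights, and gradients plus Adam optimizer states push the total far past a T4's 16 GB before activations are even counted (the T4 is also pre-Ampere, so it has no native bfloat16 support). The common workaround is parameter-efficient tuning. A rough sketch assuming the `transformers`, `peft`, and `bitsandbytes` libraries; the model id and hyperparameters are placeholders:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "openlm-research/open_llama_3b"  # placeholder for whichever 3B model is being tuned

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb, device_map="auto")
model = prepare_model_for_kbit_training(model)

# train a small LoRA adapter instead of all 3B weights
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,   # keeps the effective batch size up without the memory cost
    gradient_checkpointing=True,     # trades compute for activation memory
    fp16=True,                       # the T4 supports fp16 but not bf16
    max_steps=100,
)
# pass `model`, `args`, and the tokenized dataset to transformers.Trainer as before
```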
Another Beginner Post: Local LLM + LangChain Question | 1 | Hi! I'm very new to AI/ML, so feel free to let me know if I misuse terminology or need to clarify. I'm looking to host an LLM that I can train on private pdfs, later finetune with csv files, and at the end have its outputs appear on a website I'm contributing to. I looked into how to properly do that, but a lot of sources tend to use openai or have it running completely locally, so it's not necessarily applicable. From what I've found so far, I'd like to use a Llama2 model, LangChain for pdf ingestion, and I'm deciding whether or not I could use together.ai or runpod for later fine-tuning and cloud hosting.
My question is if anyone doing something similar had suggestions or success with a certain combination of resources. I thought of using LocalGPT since it already incorporates pdf ingestion, but I'm not sure if I can simply use a base model instead of the chat-optimized one since I heard it's better to finetune a default model.
Let me know what you think, thank you!
​
​ | 2023-08-13T22:10:53 | https://www.reddit.com/r/LocalLLaMA/comments/15qbvug/another_beginner_post_local_llm_langchain_question/ | lankymarionette | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qbvug | false | null | t3_15qbvug | /r/LocalLLaMA/comments/15qbvug/another_beginner_post_local_llm_langchain_question/ | false | false | self | 1 | null |
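For the ingestion half of that plan, a minimal sketch of the usual LangChain pipeline (file path, model file, chunk sizes, and retriever settings are placeholder assumptions; fine-tuning and cloud hosting are separate steps layered on top):

```python
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma
from langchain.llms import LlamaCpp
from langchain.chains import RetrievalQA

docs = PyPDFLoader("docs/example.pdf").load()          # repeat per PDF, or use a directory loader
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

db = Chroma.from_documents(chunks, HuggingFaceEmbeddings(), persist_directory="db")
llm = LlamaCpp(model_path="models/llama-2-7b-chat.q4_K_M.bin", n_ctx=4096)  # any local Llama 2 build

qa = RetrievalQA.from_chain_type(llm=llm, retriever=db.as_retriever(search_kwargs={"k": 4}))
print(qa.run("What do the documents say about X?"))
```

This is essentially the pipeline LocalGPT wires up for you, so base vs. chat model is the only real decision left: retrieval works with either, while instruction-following answers are usually easier to get from the chat variant unless you fine-tune.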
Should I fine tune a LLM to overcome context length limitation? | 1 | I need to accomplish a task, and I'm uncertain about how to proceed.
I have two tables at my disposal. The first is the "canon" table, housing the official IDs and store names for all popcorn stores within a specific region.
Canon Pop Corn Stores (17k stores) canon\_id, store\_name, store\_address
The second table is the "external" one, featuring "unofficial" IDs and store names. External Pop Corn Stores external\_id, store\_name, store\_address
It's worth noting that the store names and addresses may not always correspond. My task is to map the Canon IDs to the External IDs, and for this, I plan to utilize a text-to-text generative AI. This AI will use both the internal and external store names and addresses to determine the best match.
I have successfully employed ChatGPT (GPT-4) to map all the stores in a specific city (45 stores), and it worked perfectly. However, I aim to expand this and map all the stores. The context-limit poses an obstacle, and I want to steer clear of incurring costs for the GPT-4 API.
Currently, I'm focusing on a particular External table, but there will be more in the future, so the solution needs to be relatively generic.
I've considered fine-tuning an open-source model like llama 70b, but I'm unsure if this is the proper method since it might be excessive? I also pondered increasing the context limit by using vector databases, but I'm at a loss about how to proceed with that or entirely unfamiliar with the process. | 2023-08-13T22:57:11 | https://www.reddit.com/r/LocalLLaMA/comments/15qd0as/should_i_fine_tune_a_llm_to_overcome_context/ | Pop-Huge | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qd0as | false | null | t3_15qd0as | /r/LocalLLaMA/comments/15qd0as/should_i_fine_tune_a_llm_to_overcome_context/ | false | false | self | 1 | null |
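One way to avoid both fine-tuning and giant contexts is to treat this as entity matching: embed every canon store once, retrieve the nearest few candidates for each external store, and only then let an LLM (or a plain score threshold) decide on the short list. A small sketch with `sentence-transformers`; the model name, example rows, and top-k are assumptions:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # any embedding model; this one is small and fast

canon = [                                          # (canon_id, "store_name, store_address")
    ("C1", "Poppy's Corn Shack, 12 Main St, Springfield"),
    ("C2", "Kernel King, 99 Elm Ave, Shelbyville"),
]
external = [("E7", "Poppys Corn Shack - Main Street, Springfield")]

canon_emb = model.encode([text for _, text in canon], convert_to_tensor=True)

for ext_id, text in external:
    hits = util.semantic_search(model.encode(text, convert_to_tensor=True), canon_emb, top_k=5)[0]
    candidates = [(canon[h["corpus_id"]][0], round(h["score"], 3)) for h in hits]
    print(ext_id, "->", candidates)
    # only this short candidate list (not the 17k rows) needs to go into an LLM prompt,
    # and high-confidence matches can be accepted without calling the LLM at all
```

This keeps the per-store prompt tiny, so the context limit stops being the bottleneck, and the same canon index is reused for every future external table.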
Online Host service for AI/ML | 1 | I'm currently working on a personal project where I must host an AI service based on Llama 2 in order to build an API. As you might know, AI models require specific (and not cheap) hardware to run on.
Does anybody know any company that could solve this problem with reasonable prices?
P.S.: I'm currently a beginner in this world, so if anyone wants to correct anything I've said, please feel free to do so. | 2023-08-13T23:47:27 | https://www.reddit.com/r/LocalLLaMA/comments/15qe7oy/online_host_service_for_aiml/ | Responsible-Sky8889 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qe7oy | false | null | t3_15qe7oy | /r/LocalLLaMA/comments/15qe7oy/online_host_service_for_aiml/ | false | false | self | 1 | null |
creating a source of truth for models | 1 | I have a general idea of how this would play out, but I'm just wondering if something like this has been done before?
I found [this curated list of python algorithms](https://github.com/TheAlgorithms/Python) and thought I could leverage it to build a knowledge bank for my local LLMs. I'm running Wizardcoder-15B, which is `very` capable of acting as a Code Interpreter, but there are still a lot of cases where it just does not have enough understanding of an algorithm to complete a given task, so I thought I would complement that.
Basically, just create an index to store the embeddings for each script, where each one has a description of what the script does that is used for similarity search against the user's prompt. Then the model can use the retrieved reference script to base its answer on, essentially using the scripts as "tools" to complete the task.
Would this potentially give me any better results than trying to fine-tune the model on these scripts? | 2023-08-14T00:30:01 | https://www.reddit.com/r/LocalLLaMA/comments/15qf6gm/creating_a_source_of_truth_for_models/ | LyPreto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qf6gm | false | null | t3_15qf6gm | /r/LocalLLaMA/comments/15qf6gm/creating_a_source_of_truth_for_models/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'onDFjxN-rn4OIO8z5nETEYsWQ8GOY1EpVh8qDmNFZ54', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hQsMPna885A-2fGdNsqjmPxWa12zb85g-Cn7VsiEn9Q.jpg?width=108&crop=smart&auto=webp&s=80d0b4282eece9219246b53307b689f858a1346f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hQsMPna885A-2fGdNsqjmPxWa12zb85g-Cn7VsiEn9Q.jpg?width=216&crop=smart&auto=webp&s=7920fa5d9f47d88f0132a046984699c24c2336fb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hQsMPna885A-2fGdNsqjmPxWa12zb85g-Cn7VsiEn9Q.jpg?width=320&crop=smart&auto=webp&s=a37cf6f57eb36b8f863cc7335a24aebf05e8ff9a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hQsMPna885A-2fGdNsqjmPxWa12zb85g-Cn7VsiEn9Q.jpg?width=640&crop=smart&auto=webp&s=6c78c295029c790de72408bd1743ae3526914ac9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hQsMPna885A-2fGdNsqjmPxWa12zb85g-Cn7VsiEn9Q.jpg?width=960&crop=smart&auto=webp&s=30ed2f327a43a4077d07a4329dec7ba3839f646e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hQsMPna885A-2fGdNsqjmPxWa12zb85g-Cn7VsiEn9Q.jpg?width=1080&crop=smart&auto=webp&s=ca60984eb6f8701f9bc1132e697f7e4fbabda57c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hQsMPna885A-2fGdNsqjmPxWa12zb85g-Cn7VsiEn9Q.jpg?auto=webp&s=f8d0bf0b2c284c2a464f0bfcda1808cd52ce86c0', 'width': 1200}, 'variants': {}}]} |
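A sketch of the index described in the post, using `chromadb` purely as an example store (the ids, file path, and prompt format are placeholder assumptions). Compared with fine-tuning, retrieval like this is much cheaper to iterate on, though whether it ends up more accurate is an empirical question:

```python
import chromadb

client = chromadb.Client()
index = client.create_collection("algorithm_scripts")

# one entry per script: the natural-language description is what gets embedded and searched,
# while the script body rides along as metadata so it can be pasted into the model's context
index.add(
    ids=["binary_search"],
    documents=["Find the position of a target value in a sorted list in O(log n) time."],
    metadatas=[{"source": "searches/binary_search.py",
                "code": open("searches/binary_search.py").read()}],
)

task = "write a function that finds an element in a sorted array quickly"
hit = index.query(query_texts=[task], n_results=1)
reference = hit["metadatas"][0][0]["code"]
prompt = f"Use this reference implementation as a guide:\n{reference}\n\nTask: {task}"
# `prompt` then goes to WizardCoder as usual
```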
I'm confused. Can someone ELI5 what the various system requirements are for LLMs | 2 | HI LocalLLAaMA,
Could someone explain how to understand system requirements for LLMs and how these requirements change as the parameter counts increase.
Questions I'm interested in.
Is it all about how big your graphics card is?
Is it all about CPU?
What role does RAM play?
Can you estimate based on system specs what sort of LLM you can run? | 2023-08-14T01:32:59 | https://www.reddit.com/r/LocalLLaMA/comments/15qgjt9/im_confused_can_someone_eli5_what_the_various/ | Extreme-Snow-5888 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qgjt9 | false | null | t3_15qgjt9 | /r/LocalLLaMA/comments/15qgjt9/im_confused_can_someone_eli5_what_the_various/ | false | false | default | 2 | null |
Finally making the switch! Can anyone recommend models specifically for coding? | 1 | I'm cancelling my GPT-4 plus membership as it is no longer useful for debugging code. At first I saw people constantly complaining about the recent performance decrease; I didn't really notice until last week, as I only used it for debugging, which it was really useful for. Now I'm constantly getting hallucinations and it's literally forgetting the message before last. I didn't mind paying $20 a month for the convenience of not needing to run a private server, but that convenience has since passed.
Does anyone have any good recommendations? Right now I’m looking at variations of Falcon, Codegen, and WizardCoder. I’m thinking of testing each with privateGPT so I can have it ingest all my project documents as well and see how it performs. Also is the approach dated because I know how the space changes almost daily? (Also I’m working with about 44GB VRAM CUDA for my personal server.) | 2023-08-14T01:54:48 | https://www.reddit.com/r/LocalLLaMA/comments/15qh0sy/finally_making_the_switch_can_anyone_recommend/ | ETHwillbeatBTC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qh0sy | false | null | t3_15qh0sy | /r/LocalLLaMA/comments/15qh0sy/finally_making_the_switch_can_anyone_recommend/ | false | false | self | 1 | null |
How good is GPT-4 vs GitHub Co-Pilot? | 1 | How good is GPT-4 vs GitHub Co-Pilot for a non programming background person? If I have to pay for one subscription, which would be better for code writing for my project and debugging? | 2023-08-14T05:22:11 | https://www.reddit.com/r/LocalLLaMA/comments/15qlbfu/how_good_is_gpt4_vs_github_copilot/ | nolovenoshame | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qlbfu | false | null | t3_15qlbfu | /r/LocalLLaMA/comments/15qlbfu/how_good_is_gpt4_vs_github_copilot/ | false | false | self | 1 | null |
Ai Dungeon style ai generation? | 16 | Hey there folks. I've recently started to use oogabooga and a couple of models as alternative to GPT-4. I have so far used both the chat and instruct features, but I want something more in between these two, not necessarily just a chat bot, but at the same time not just a bit which creates a whole ass story with no input from me whatsoever. Is there anything which will allow for me to generate stories the way in which aidungeon does (ie. The Story progresses along the context which you give, but you continously change the story proactively as if you were playing a well.made DND campaign.) In other words, I want to be able to generate a story, whilst being able to continue the story by writing in some lines by myself to sort of the guide thr bit along the road I want it to go instead of it writing off into Narnia.
TL;DR: Is an AI Dungeon-style chat mode available for use with oobabooga or other similar programs? | 2023-08-14T06:04:55 | https://www.reddit.com/r/LocalLLaMA/comments/15qm4ng/ai_dungeon_style_ai_generation/ | Grucciman69420 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qm4ng | false | null | t3_15qm4ng | /r/LocalLLaMA/comments/15qm4ng/ai_dungeon_style_ai_generation/ | false | false | self | 16 | null |
How to create Retrieval Augmented Generation (RAG) locally without using LangChain PineCone etc? | 1 | I am currently working on RAG, but I wonder how to build it locally without going through LangChain, PineCone or Haystack. It will be great if you can share, how to build it from scratch. | 2023-08-14T07:30:52 | https://www.reddit.com/r/LocalLLaMA/comments/15qnqga/how_to_create_retrieval_augmented_generation_rag/ | Think_Blackberry4114 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qnqga | false | null | t3_15qnqga | /r/LocalLLaMA/comments/15qnqga/how_to_create_retrieval_augmented_generation_rag/ | false | false | self | 1 | null |
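The moving parts are small enough to write by hand: chunk, embed, cosine-search, and paste the hits into the prompt. A minimal sketch with `sentence-transformers` and NumPy standing in for the vector database; the model name, chunk size, and prompt template are assumptions:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")            # any local embedding model

corpus = ["long document text ...", "another document ..."]  # plain strings read from your files
chunks = [doc[i:i + 1000] for doc in corpus for i in range(0, len(doc), 1000)]
index = embedder.encode(chunks, normalize_embeddings=True)

def retrieve(question: str, k: int = 3):
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = index @ q                                        # cosine similarity via dot product
    return [chunks[i] for i in np.argsort(-scores)[:k]]

question = "What is the warranty period?"
context = "\n\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}\nAnswer:"
# `prompt` can now be sent to llama.cpp, a transformers pipeline, or any other local LLM
```

LangChain, Pinecone, and friends mostly add persistence, metadata filtering, and plumbing around these same four steps.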
Quantifying Content Coverage | 1 | Suppose you have a document. How would you use the predicted and true summaries independently to quantify content coverage? I had an idea to use some sort of a sentence selection histogram or a heatmap showing that the predicted summary covers more than the true summary. What do y'all suggest? | 2023-08-14T08:13:16 | https://www.reddit.com/r/LocalLLaMA/comments/15qohcs/quantifying_content_coverage/ | psj_2908 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qohcs | false | null | t3_15qohcs | /r/LocalLLaMA/comments/15qohcs/quantifying_content_coverage/ | false | false | self | 1 | null |
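One concrete way to build the histogram/heatmap idea: score every document sentence against each summary independently and compare the two coverage profiles. A sketch using the `rouge_score` package (the sentence splitting, example texts, and the ROUGE-L recall choice are assumptions; embedding similarity would work the same way):

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def coverage_profile(document_sentences, summary):
    """For each document sentence, score how strongly the summary covers it (0..1)."""
    return [scorer.score(sent, summary)["rougeL"].recall for sent in document_sentences]

doc = ["The warranty lasts two years.", "Repairs are free in the first year.", "Shipping is extra."]
pred_profile = coverage_profile(doc, "Warranty is two years and first-year repairs are free.")
true_profile = coverage_profile(doc, "The warranty lasts two years.")
# plotting the two profiles side by side (heatmap or histogram) shows which document
# sentences each summary touches, and how much wider the predicted summary's coverage is
```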
Programmer getting into the AI word, where do I start? | 1 | Hello, I'm a software developer with a degree in computer science, and I've been making software for the last 10 years, mostly gaming-related.
I've been playing with Chat-GPT for the last few months, and overall, I'm very impressed.
Now, I would love to get into the AI world and start to delve deep into this technology.
My goal would be to create a chatbot-game like this one: https://gandalf.lakera.ai/ where basically the goal of the game is to get the AI to reveal a password, and each level is more difficult than the previous one.
Now, where do I get started? Are there any books, courses, videos, or anything that can help me get into the right direction?
I've seen that there are many pieces of the puzzle, and I just need a way to see the bigger picture and then start to focus and learn all the necessary steps to make what I intend to do. | 2023-08-14T08:19:30 | https://www.reddit.com/r/LocalLLaMA/comments/15qol73/programmer_getting_into_the_ai_word_where_do_i/ | ExtremeMarco | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qol73 | false | null | t3_15qol73 | /r/LocalLLaMA/comments/15qol73/programmer_getting_into_the_ai_word_where_do_i/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ATQL74MV_g1rXmHjE8PuwjaUEKg4dNbmOnVODSiNEzI', 'resolutions': [{'height': 88, 'url': 'https://external-preview.redd.it/EZFGH8aEMNQBRh233adi4QfiajDtEuGbEXTBXjURO3s.jpg?width=108&crop=smart&auto=webp&s=bf1a99ded14081cf9251439f19ed8c225f1c4643', 'width': 108}, {'height': 176, 'url': 'https://external-preview.redd.it/EZFGH8aEMNQBRh233adi4QfiajDtEuGbEXTBXjURO3s.jpg?width=216&crop=smart&auto=webp&s=2e2f9108bde7bef826a41ccadc83c57bc64639d8', 'width': 216}, {'height': 260, 'url': 'https://external-preview.redd.it/EZFGH8aEMNQBRh233adi4QfiajDtEuGbEXTBXjURO3s.jpg?width=320&crop=smart&auto=webp&s=6298bfef3f848679e8ecf4787cdd114478ac297d', 'width': 320}, {'height': 521, 'url': 'https://external-preview.redd.it/EZFGH8aEMNQBRh233adi4QfiajDtEuGbEXTBXjURO3s.jpg?width=640&crop=smart&auto=webp&s=62570bca73beeb1a494d35c89c4d035274999638', 'width': 640}], 'source': {'height': 652, 'url': 'https://external-preview.redd.it/EZFGH8aEMNQBRh233adi4QfiajDtEuGbEXTBXjURO3s.jpg?auto=webp&s=56c4eb010d20f26e17ff9672bdf9f2da89dcc97b', 'width': 800}, 'variants': {}}]} |
Deepspeed with remote servers | 1 | Hello,
I have 5 GPUs (2x P100 & 3x T4) installed on different servers (DL380 gen9). Would I be able to use deepspeed for training across multiple servers? What about inference with say, FastChat?
Thanks | 2023-08-14T09:30:05 | https://www.reddit.com/r/LocalLLaMA/comments/15qpt3x/deepspeed_with_remote_servers/ | sgt_banana1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qpt3x | false | null | t3_15qpt3x | /r/LocalLLaMA/comments/15qpt3x/deepspeed_with_remote_servers/ | false | false | self | 1 | null |
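Multi-node training with DeepSpeed is driven by a hostfile plus its launcher, so in principle yes, with caveats. A sketch of what the setup could look like (hostnames and slot counts are placeholders; passwordless SSH between the servers and identical environments/paths on each are required, and mixing P100s and T4s in one job means the slowest card sets the pace):

```
# hostfile — one line per server, listing how many GPUs DeepSpeed may use on it
node1 slots=2
node2 slots=2
node3 slots=1

# launch from any one node; DeepSpeed SSHes into the rest:
#   deepspeed --hostfile=hostfile train.py --deepspeed ds_config.json
```

For serving rather than training, FastChat-style frameworks generally expect each model worker to sit on one machine, so the more common pattern is running a separate worker per server behind one controller rather than sharding a single model across boxes.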
Llama V2 as Langchain react agent | 1 | Any good results? I tried with 7B and it wasn't ideal. Will try again with a larger model. I wanted to use it so I can trigger RAG based on the query instead of relying on trigger keywords using spacy. | 2023-08-14T09:36:25 | https://www.reddit.com/r/LocalLLaMA/comments/15qpxbh/llama_v2_as_langchain_react_agent/ | sgt_banana1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qpxbh | false | null | t3_15qpxbh | /r/LocalLLaMA/comments/15qpxbh/llama_v2_as_langchain_react_agent/ | false | false | self | 1 | null |
What are the different finetunes for the LLaMa 2 model? How do you all find them? | 1 | So far, I found the ones listed in [https://www.reddit.com/r/LocalLLaMA/wiki/models/](https://www.reddit.com/r/LocalLLaMA/wiki/models/), but how do you all find them online? I tried "LLaMa 2 finetuned models", "LLaMA 2 finetuned models huggingface", etc. But I only find ones like some "uncensored" version on huggingface with like no downloads or no model card or anything at all. | 2023-08-14T10:02:20 | https://www.reddit.com/r/LocalLLaMA/comments/15qqe63/what_are_the_different_finetunes_for_the_llama_2/ | ImNotLegitLol | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qqe63 | false | null | t3_15qqe63 | /r/LocalLLaMA/comments/15qqe63/what_are_the_different_finetunes_for_the_llama_2/ | false | false | self | 1 | null |
LLaMA-2-chat vs LLaMa-2-base? | 1 | What are the advantages and disadvantages of each? Is the base version more capable of following instructions or are they both just as capable except the chat model has an extra advantage of being able to do dialogue? If so, is there any reason to use the base model? | 2023-08-14T10:04:36 | https://www.reddit.com/r/LocalLLaMA/comments/15qqfrf/llama2chat_vs_llama2base/ | ImNotLegitLol | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qqfrf | false | null | t3_15qqfrf | /r/LocalLLaMA/comments/15qqfrf/llama2chat_vs_llama2base/ | false | false | self | 1 | null |
Best Model for Natural Language to SQL? | 1 | Anyone know what the best model for NL to SQL is? Are there any off the shelf model that works for complex queries or better to finetune LlaMA2 on some dataset? | 2023-08-14T11:22:20 | https://www.reddit.com/r/LocalLLaMA/comments/15qry0n/best_model_for_natural_language_to_sql/ | perseus_14 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qry0n | false | null | t3_15qry0n | /r/LocalLLaMA/comments/15qry0n/best_model_for_natural_language_to_sql/ | false | false | self | 1 | null |
How to expose a model into an API? | 1 | I have a PC with an RTX 3090 and I would like to use it for models like Llama 2. I would like to open a port and offer the inference power of that PC to other apps running LangChain outside the home network.
I've thought about combining FastAPI with the Hugging Face package running locally, but I believe there are much better options out there. | 2023-08-14T11:28:29 | https://www.reddit.com/r/LocalLLaMA/comments/15qs2br/how_to_expose_a_model_into_an_api/ | angeljdm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qs2br | false | null | t3_15qs2br | /r/LocalLLaMA/comments/15qs2br/how_to_expose_a_model_into_an_api/ | false | false | self | 1 | null |
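The FastAPI-plus-transformers route is workable as-is. A minimal sketch (model id, route, and port are assumptions; authentication, streaming, and request queuing are left out, and anything exposed outside the home network should at least sit behind a reverse proxy with auth):

```python
import torch
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"   # placeholder; anything that fits in the 3090 works
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="cuda:0")

app = FastAPI()

class GenerateRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 256

@app.post("/generate")
def generate(req: GenerateRequest):
    inputs = tokenizer(req.prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=req.max_new_tokens)
    new_tokens = output[0][inputs.input_ids.shape[1]:]   # return only the completion
    return {"text": tokenizer.decode(new_tokens, skip_special_tokens=True)}

# run with: uvicorn server:app --host 0.0.0.0 --port 8000
```

Ready-made servers (text-generation-webui's API mode, Hugging Face's text-generation-inference, vLLM) cover the same ground with batching and streaming built in, which matters once more than one LangChain app calls the box at a time.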