title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Is it true LLMs running on Linux generate text faster than LLMs on Windows with the same hardware? | 0 | [deleted] | 2023-06-17T14:06:54 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14bs0no | false | null | t3_14bs0no | /r/LocalLLaMA/comments/14bs0no/is_it_true_llms_running_on_linux_generate_text/ | false | false | default | 0 | null |
||
Explore large language models on any computer with 512MB of RAM | 179 | I am teaching an intro to programming in Python course this fall, and I wanted a simple way for students to be able to interact with language models for free on their local machines. Here is the Python package that I developed for this hoping that it may be useful to others:
[https://github.com/jncraton/languagemodels](https://github.com/jncraton/languagemodels)
This doesn't come close to the generation quality of fine-tuned LLaMA or Falcon models, but it is quick and hopefully provides high enough quality to explore how LLMs are used in modern applications.
I targeted 512MB of memory by default, as this should work just about everywhere. It even works on the free tier of Replit:
[https://replit.com/@jncraton/langaugemodels#main.py](https://replit.com/@jncraton/langaugemodels#main.py)
For slightly better generation quality and slower performance, you can call `set_max_ram('1gb')` or `set_max_ram('4gb')`. This currently uses fine-tuned FLAN-T5 models under the hood. They are 8-bit quantized and generation is performed using ctranslate2. | 2023-06-17T15:13:48 | https://www.reddit.com/r/LocalLLaMA/comments/14btk3a/explore_large_language_models_on_any_computer/ | jncraton | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14btk3a | false | null | t3_14btk3a | /r/LocalLLaMA/comments/14btk3a/explore_large_language_models_on_any_computer/ | false | false | self | 179 | {'enabled': False, 'images': [{'id': 'E30NJaSZW2baol6X1AeP8g9CNiuFopr9qmyCwZfbMw4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LR2gJOHVwR799LAzqx83YtO9Sddz_I9eFnfsC2urc_w.jpg?width=108&crop=smart&auto=webp&s=6b7e9084e519b1ba6c7082691c55af5d5c15e502', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LR2gJOHVwR799LAzqx83YtO9Sddz_I9eFnfsC2urc_w.jpg?width=216&crop=smart&auto=webp&s=19023d97567e090d3b003f32745358b9ddc81bf1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LR2gJOHVwR799LAzqx83YtO9Sddz_I9eFnfsC2urc_w.jpg?width=320&crop=smart&auto=webp&s=aa517d4b5d2fc1818c99f044eae6c7fcd499fc44', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LR2gJOHVwR799LAzqx83YtO9Sddz_I9eFnfsC2urc_w.jpg?width=640&crop=smart&auto=webp&s=0de885834ec7e8d876b38f2ae7b9a99691d445f4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LR2gJOHVwR799LAzqx83YtO9Sddz_I9eFnfsC2urc_w.jpg?width=960&crop=smart&auto=webp&s=53a02aa6ac7c1cf77609962180265a33f78dd6c9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LR2gJOHVwR799LAzqx83YtO9Sddz_I9eFnfsC2urc_w.jpg?width=1080&crop=smart&auto=webp&s=948581dba511d7c0ae970ee55886d936d4987e14', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LR2gJOHVwR799LAzqx83YtO9Sddz_I9eFnfsC2urc_w.jpg?auto=webp&s=d6a2c9b1b1babc9b8e1d208652f0214f71a3e20c', 'width': 1200}, 'variants': {}}]} |
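As a quick illustration of the workflow described in the post above, here is a minimal usage sketch. `set_max_ram()` is taken directly from the post; the `do()` instruction helper and its exact signature are assumptions about the package's API rather than something confirmed here.

```python
# Hypothetical usage sketch for the languagemodels package described above.
# set_max_ram() comes from the post; do() is assumed to be the basic
# instruction-following helper and may be named differently in the package.
import languagemodels as lm

lm.set_max_ram("1gb")  # larger budgets trade memory for quality, per the post
print(lm.do("What color is the sky?"))
```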
How to fix "warning: failed to mlock"? | 0 | [removed] | 2023-06-17T15:26:15 | https://www.reddit.com/r/LocalLLaMA/comments/14btu30/how_to_fix_warning_failed_to_mlock/ | InsideATurtlesMind | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14btu30 | false | null | t3_14btu30 | /r/LocalLLaMA/comments/14btu30/how_to_fix_warning_failed_to_mlock/ | false | false | default | 0 | null |
7900xtx linux exllama GPTQ | 38 | It works nearly out of the box; you do not need to compile PyTorch from source.
1. On Linux, install ROCm: [https://docs.amd.com/bundle/ROCm-Installation-Guide-v5.5/page/How_to_Install_ROCm.html](https://docs.amd.com/bundle/ROCm-Installation-Guide-v5.5/page/How_to_Install_ROCm.html) (latest version is 5.5.1)
2. create a venv to hold python packages: python -m venv venv && source venv/bin/activate
3. pip install torch torchvision torchaudio --index-url [https://download.pytorch.org/whl/nightly/rocm5.5/](https://download.pytorch.org/whl/nightly/rocm5.5/)
4. git clone [https://github.com/turboderp/exllama](https://github.com/turboderp/exllama) && cd exllama && pip install -r requirements.txt
5. if <cmath> missing: sudo apt install libstdc++-12-dev
then it should work.
python webui/app.py -d ../../models/TheBloke_WizardLM-30B-GPTQ/
for the 30B model, I am getting 23.34 tokens/second | 2023-06-17T15:28:14 | https://www.reddit.com/r/LocalLLaMA/comments/14btvqs/7900xtx_linux_exllama_gptq/ | Spare_Side_5907 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14btvqs | false | null | t3_14btvqs | /r/LocalLLaMA/comments/14btvqs/7900xtx_linux_exllama_gptq/ | false | false | self | 38 | null |
Vercel Launched NextJS Chat Template, and It Looks Astonishing | 3 | 2023-06-17T15:32:24 | https://chat.vercel.ai/ | Neptun0 | chat.vercel.ai | 1970-01-01T00:00:00 | 0 | {} | 14btz5j | false | null | t3_14btz5j | /r/LocalLLaMA/comments/14btz5j/vercel_launched_nextjs_chat_template_and_it_looks/ | false | false | 3 | {'enabled': False, 'images': [{'id': '982wZgjj-eB1BkybcX08xVErFuLhEYlALzQhxnCTfvM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Q7gxgwqY64Npmgb_T2wGs2xl9BHu7M8OpoTcUQD3pEI.jpg?width=108&crop=smart&auto=webp&s=37049f5bcf41abdad09c277f92aaf835cae91b7f', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/Q7gxgwqY64Npmgb_T2wGs2xl9BHu7M8OpoTcUQD3pEI.jpg?width=216&crop=smart&auto=webp&s=3e762c7ac05bb210f971db92add1541a58cef850', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/Q7gxgwqY64Npmgb_T2wGs2xl9BHu7M8OpoTcUQD3pEI.jpg?width=320&crop=smart&auto=webp&s=0f3770f35a4965ef09417956d650867fde4d27f7', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/Q7gxgwqY64Npmgb_T2wGs2xl9BHu7M8OpoTcUQD3pEI.jpg?width=640&crop=smart&auto=webp&s=38d6026909c547cf88321a23aa81f40b85d92616', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/Q7gxgwqY64Npmgb_T2wGs2xl9BHu7M8OpoTcUQD3pEI.jpg?width=960&crop=smart&auto=webp&s=3acd49197ae242af3705ad8f5ecc7c992d054f10', 'width': 960}, {'height': 564, 'url': 'https://external-preview.redd.it/Q7gxgwqY64Npmgb_T2wGs2xl9BHu7M8OpoTcUQD3pEI.jpg?width=1080&crop=smart&auto=webp&s=c2778e87f41beb66ed7d0e416e437ba41ddf5e14', 'width': 1080}], 'source': {'height': 882, 'url': 'https://external-preview.redd.it/Q7gxgwqY64Npmgb_T2wGs2xl9BHu7M8OpoTcUQD3pEI.jpg?auto=webp&s=fa23ab85d30ce6466aa9275f9dc084162dbc5234', 'width': 1686}, 'variants': {}}]} |
||
Meta Open Source Twitter Handle | 13 | Meta has a Twitter account dedicated for open source if anyone is interested: https://twitter.com/metaopensource?s=21&t=4Lg1z9tXUANCKLiHwRSk_A | 2023-06-17T15:39:01 | https://www.reddit.com/r/LocalLLaMA/comments/14bu4id/meta_open_source_twitter_handle/ | Acrobatic-Site2065 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14bu4id | false | null | t3_14bu4id | /r/LocalLLaMA/comments/14bu4id/meta_open_source_twitter_handle/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'UBAr1r5BrLgV7FdDz6_Nplsx_vaMj1KjSmR5ymBLXh0', 'resolutions': [], 'source': {'height': 48, 'url': 'https://external-preview.redd.it/vACrYY7FQyaLWiMTpjJPHd_ogr4FVUIzM9-lp2ftSes.jpg?auto=webp&s=48d939a0446fd18fa1e4725588ff386a5f6e5f5f', 'width': 48}, 'variants': {}}]} |
Need help planning a project with hardware restrictions. | 2 | Hi. I have an 8GB GPU and 32 GB of RAM. What is the most robust way, with this hardware, to use a computer vision model to describe an image in detail?
Basically, I have some code that uses Stable Diffusion to locally make an image. Now I want to add a feature where the image is described by an AI. With my hardware limitations, would the 4-bit MiniGPT-4 work?
From what I understand, MiniGPT-4 is capable of the computer vision part, but I'm not sure if that will also work with the 4-bit version.
​
Thankyou very much for any help or advice. | 2023-06-17T15:51:05 | https://www.reddit.com/r/LocalLLaMA/comments/14buepe/need_help_planning_a_project_with_hardware/ | Serenityprayer69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14buepe | false | null | t3_14buepe | /r/LocalLLaMA/comments/14buepe/need_help_planning_a_project_with_hardware/ | false | false | self | 2 | null |
Intel Arc A770 or best cheap option | 20 | Hello everyone,
I'm very sorry, I hope I don't get downvoted. I'm not that knowledgeable about local LLMs. I read a lot in here, but it's hard to find structured information for beginners.
I need to buy/build a new computer since my current one broke. Ideally I want it to be able to run a local LLM. I don’t have that much money, I can’t change that.
Can I run llama.cpp with the help of for example the intel arc a770 since it has 16gb vram? It supports opencl, right? Or should I go with a RTX 3060?
I'd like to have the best computer for local LLMs for the least money. I guess that's a question that interests a lot of people.
​
Thanks in advance!! | 2023-06-17T16:32:16 | https://www.reddit.com/r/LocalLLaMA/comments/14bvdhv/intel_arc_a770_or_best_cheap_option/ | Vontaxis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14bvdhv | false | null | t3_14bvdhv | /r/LocalLLaMA/comments/14bvdhv/intel_arc_a770_or_best_cheap_option/ | false | false | self | 20 | null |
Nous-Hermes 13b on GPT4All? | 3 | Anyone using this? If so, how's it working for you and what hardware are you using? Text below is cut/paste from GPT4All description (I bolded a claim that caught my eye).
7.58 GB
A LLaMA 13B finetuned on over 300,000 curated and uncensored instructions
- cannot be used commercially
- This model was fine-tuned by Nous Research, with Teknium and Karan4D leading the fine tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors. The result is an **enhanced Llama 13b model that rivals GPT-3.5-turbo in performance across a variety of tasks**. This model stands out for its long responses, low hallucination rate, and absence of OpenAI censorship mechanisms.
QLoRA using oobabooga webui | 7 | Ok, I heard the rumors that you should be able to:
- load an HF model with 4-bit and double quantization
- then do training on it - which will be QLoRA (as part of PEFT)
​
My question is (before I unbork my ooba, which is busy doing some nefarious loop translating text): what about the other params?
- gptq-for-llama? YES or NO?
- monkeypatch? YES/NO/Doesn't matter?
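For readers who want to see what the "load in 4-bit with double quantization, then train a LoRA" flow from the post looks like outside the webui, below is a rough sketch using transformers, bitsandbytes, and peft. The base model name, target modules, and hyperparameters are placeholders, and this does not answer the webui-specific questions about gptq-for-llama or the monkeypatch.

```python
# Hedged sketch of QLoRA-style setup: 4-bit NF4 load with double quantization,
# then a LoRA adapter on top. Not a tested recipe; names are placeholders.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,       # the "double quantization" option
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",                # placeholder base model
    quantization_config=bnb_config,
    device_map="auto",
)
# Older peft versions call this prepare_model_for_int8_training.
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # typical LLaMA attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # ready for a normal Trainer loop
```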
What's Your Take on Reddit's Plan to Sell Your Posts and Comments as "Data Licensing" to Other Companies to Train AI Language Models? | 170 | [Inspired by Elon Musk](https://www.nbcnews.com/tech/tech-news/reddit-blackout-protest-private-ceo-elon-musk-huffman-rcna89700), Reddit CEO Steve Huffman recently said that ["data licensing is a new potential business for the company."](https://www.theverge.com/2023/6/15/23762868/reddit-ceo-steve-huffman-interview)
This means that Reddit could monetize your posts and comments by offering them as a valuable commodity for AI language model training.
Maybe this would inspire other internet forums with many users to adopt this as a revenue outlet as well.
If your posts and comments are sold to a company (or entity) that you didn't approve of, would you be comfortable with this?
What are your thoughts?
From [Reddit User Agreement](https://www.redditinc.com/policies/user-agreement): 5. Your Content
You retain any ownership rights you have in Your Content, but you grant Reddit the following license to use that Content:
When Your Content is created with or submitted to the Services, you grant us a worldwide, royalty-free, perpetual, irrevocable, non-exclusive, transferable, and sublicensable license to use, copy, modify, adapt, prepare derivative works of, distribute, store, perform, and display Your Content and any name, username, voice, or likeness provided in connection with Your Content in all media formats and channels now known or later developed anywhere in the world. This license includes the right for us to make Your Content available for syndication, broadcast, distribution, or publication by other companies, organizations, or individuals who partner with Reddit. You also agree that we may remove metadata associated with Your Content, and you irrevocably waive any claims and assertions of moral rights or attribution with respect to Your Content.
Any ideas, suggestions, and feedback about Reddit or our Services that you provide to us are entirely voluntary, and you agree that Reddit may use such ideas, suggestions, and feedback without compensation or obligation to you.
Although we have no obligation to screen, edit, or monitor Your Content, we may, in our sole discretion, delete or remove Your Content at any time and for any reason, including for violating these Terms, violating our Content Policy, or if you otherwise create or are likely to create liability for us. | 2023-06-17T17:53:46 | https://www.reddit.com/r/LocalLLaMA/comments/14bx9m5/whats_your_take_on_reddits_plan_to_sell_your/ | jl303 | self.LocalLLaMA | 2023-06-17T21:22:59 | 0 | {} | 14bx9m5 | false | null | t3_14bx9m5 | /r/LocalLLaMA/comments/14bx9m5/whats_your_take_on_reddits_plan_to_sell_your/ | false | false | self | 170 | {'enabled': False, 'images': [{'id': 'vFp8zK_D96jlESxRfrRwu5ZvMR_kI7qHHJyoSyec4Jw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/PalE1kv11f3JQuCgXFvstzVQXc84lpVeYDUb80FuSuE.jpg?width=108&crop=smart&auto=webp&s=08d5d48eb7bc3948b98f5f0fb78202b4631001ae', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/PalE1kv11f3JQuCgXFvstzVQXc84lpVeYDUb80FuSuE.jpg?width=216&crop=smart&auto=webp&s=61f004ed5a7a775954c177ebefb94f78437b396c', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/PalE1kv11f3JQuCgXFvstzVQXc84lpVeYDUb80FuSuE.jpg?width=320&crop=smart&auto=webp&s=28c6fc61630d15d5ba536a3a1bb82452eaf40a64', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/PalE1kv11f3JQuCgXFvstzVQXc84lpVeYDUb80FuSuE.jpg?width=640&crop=smart&auto=webp&s=3c3cc21017edb72f3fa2f059b43f5cb2dcefb465', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/PalE1kv11f3JQuCgXFvstzVQXc84lpVeYDUb80FuSuE.jpg?width=960&crop=smart&auto=webp&s=5579a7140a0002f53f3bd4e8a610016e49284ee7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/PalE1kv11f3JQuCgXFvstzVQXc84lpVeYDUb80FuSuE.jpg?width=1080&crop=smart&auto=webp&s=3d880c7d69e335c820dadd2233b0d83c3b60da26', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/PalE1kv11f3JQuCgXFvstzVQXc84lpVeYDUb80FuSuE.jpg?auto=webp&s=1ae33a99f6cea8bb5a8016f0351d6ae75f8ac5b3', 'width': 1200}, 'variants': {}}]} |
Are these models at all reliable for data annotation? | 5 | Update: /u/HideLord found the issue. `Wizard-Vicuna-30B-Uncensored.ggmlv3.q5_1.bin` expects prompts to be formatted like this:
```
USER: Does the following phrase make reference to a woman/girl or multiple women/girls in any way?
Phrase: Kim Kardashian at the Met Gala
Provide your answer as Y or N.
ASSISTANT:
```
This is noted in the original WizardVicunaLM repo too [link](https://github.com/melodysdreamj/WizardVicunaLM).
Original Question:
GPT-3.5 and GPT-4 work quite well for data annotation. Is there an open-source LLaMA variant that does too?
I'm using [`Wizard-Vicuna-30B-Uncensored.ggmlv3.q5_1.bin`](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GGML) with [`llama.cpp`](https://github.com/ggerganov/llama.cpp) via this command:
```
./simple models/Wizard-Vicuna-30B-Uncensored.ggmlv3.q5_1.bin "
Does the following phrase make reference to a woman/girl or multiple women/girls in any way?
Phrase: Kim Kardashian at the Met Gala
Provide your answer as Y or N.
Answer: "
```
The model outputs
```
N
The phrase "Kim Kardashian at the Met Gala" does not make reference to a woman/girl or multiple women/girls in general. It specifically refers to the celebrity Kim Kardashian attending the Metropolitan Museum of Art's Costume Institute Benefit Gala, also known as the Met Gala. [end of text]
```
Even on this toy problem, it fails, but GPT-3.5 and GPT-4 nail it.
I'll try in-context learning, maybe CoT, and worst-case scenario, just fine-tune a LLaMA-13B on some hand-annotated examples, but am I doing something wrong here?
This is such an easy task, and this is a 30B-param instruction-tuned model, so zero-shotting this should be feasible, right? Am I screwing up something with the input prompt format?
Any tips would be much appreciated, thanks! | 2023-06-17T18:04:54 | https://www.reddit.com/r/LocalLLaMA/comments/14bxixy/are_these_models_at_all_reliable_for_data/ | HeavenlyQueen | self.LocalLLaMA | 2023-06-18T14:12:29 | 0 | {} | 14bxixy | false | null | t3_14bxixy | /r/LocalLLaMA/comments/14bxixy/are_these_models_at_all_reliable_for_data/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': '1jxeWuQbifLXKVBd3yv2uqLG9WbOap2bLtMZFK0CLTA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/riIIeSnQRhV5Ijh94JH-16NYDwg8QU59iZDDf1nwy_8.jpg?width=108&crop=smart&auto=webp&s=49d3047ed19b86aa486af575dcff6abe366fb9e8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/riIIeSnQRhV5Ijh94JH-16NYDwg8QU59iZDDf1nwy_8.jpg?width=216&crop=smart&auto=webp&s=7c9b14d17ce2019e32a888cfc2370639026042d4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/riIIeSnQRhV5Ijh94JH-16NYDwg8QU59iZDDf1nwy_8.jpg?width=320&crop=smart&auto=webp&s=54759074968af6b6dd8515275d50c5a5606f9b71', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/riIIeSnQRhV5Ijh94JH-16NYDwg8QU59iZDDf1nwy_8.jpg?width=640&crop=smart&auto=webp&s=b75c8645f10ecc5d141f0304eea20d41ad94bbbe', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/riIIeSnQRhV5Ijh94JH-16NYDwg8QU59iZDDf1nwy_8.jpg?width=960&crop=smart&auto=webp&s=e15c453b4612da72495ec07331318c84020a6362', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/riIIeSnQRhV5Ijh94JH-16NYDwg8QU59iZDDf1nwy_8.jpg?width=1080&crop=smart&auto=webp&s=6f6d1a5aef04b51bd19263ecf898a5d140f0c9f9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/riIIeSnQRhV5Ijh94JH-16NYDwg8QU59iZDDf1nwy_8.jpg?auto=webp&s=00f3552f85ab78a6fcf4b05ee81bfbd06cc3d9a1', 'width': 1200}, 'variants': {}}]} |
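To make the fix above concrete, here is a small sketch that wraps the corrected USER/ASSISTANT template around arbitrary phrases for batch annotation. It uses llama-cpp-python rather than the `./simple` binary from the post, and the model path and sampling settings are placeholders.

```python
# Batch annotation sketch with the corrected Vicuna-style USER/ASSISTANT format.
from llama_cpp import Llama

llm = Llama(model_path="models/Wizard-Vicuna-30B-Uncensored.ggmlv3.q5_1.bin")

def annotate(phrase: str) -> str:
    prompt = (
        "USER: Does the following phrase make reference to a woman/girl "
        "or multiple women/girls in any way?\n"
        f"Phrase: {phrase}\n"
        "Provide your answer as Y or N.\n"
        "ASSISTANT:"
    )
    out = llm(prompt, max_tokens=4, temperature=0.0, stop=["USER:"])
    return out["choices"][0]["text"].strip()

print(annotate("Kim Kardashian at the Met Gala"))  # expected: Y
```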
SQL If I use the generation explanation sql, that model is the best. | 0 | [removed] | 2023-06-17T18:16:46 | https://www.reddit.com/r/LocalLLaMA/comments/14bxsuh/sql_if_i_use_the_generation_explanation_sql_that/ | cool_xixi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14bxsuh | false | null | t3_14bxsuh | /r/LocalLLaMA/comments/14bxsuh/sql_if_i_use_the_generation_explanation_sql_that/ | false | false | default | 0 | null |
Safetensors are safe... and GGML is... | 20 | So I've seen over and over again that Python .PT files have the chance of carrying a payload that could compromise your system. The recommendation is to use *.safetensors formatting instead. Now here comes GGML, which ends in .BIN, and more and more I see people saying it's going to be "the format" because of how it can run on CPU or GPU... but is it safe? Does anyone have articles to share?
EDIT: general consensus seems to be that GGML is similarly formatted to .Safetensor format as there is no code in it… but it still has risks associated with using any LLM. Also interesting comment about being able to open .PT files… that I will have to try. | 2023-06-17T19:14:57 | https://www.reddit.com/r/LocalLLaMA/comments/14bz6id/safetensors_are_safe_and_ggml_is/ | silenceimpaired | self.LocalLLaMA | 2023-06-17T21:42:06 | 0 | {} | 14bz6id | false | null | t3_14bz6id | /r/LocalLLaMA/comments/14bz6id/safetensors_are_safe_and_ggml_is/ | false | false | self | 20 | null |
Docker vs. native security | 2 | [exllama](https://github.com/turboderp/exllama)'s Docker setup mentions that it has security benefits over running natively.
Besides the general advantages of containerization (isolated services, consistent deployment, etc.), why is Docker more secure for running local LLaMAs? What risks should I be thinking about when deciding between setups? | 2023-06-17T19:50:56 | https://www.reddit.com/r/LocalLLaMA/comments/14c006w/docker_vs_native_security/ | x6q5g3o7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14c006w | false | null | t3_14c006w | /r/LocalLLaMA/comments/14c006w/docker_vs_native_security/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'BuTOBl5TGfzKkjXlhEXEZMmuy6ijXkK0bgbaT1f6TYE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BbWzP-i9vHt1wcquu4sIJ3cCGOYDyJ3ErHYt-Q04jWE.jpg?width=108&crop=smart&auto=webp&s=5f7d226a11932b0cc432efe13a952690c6c003fb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BbWzP-i9vHt1wcquu4sIJ3cCGOYDyJ3ErHYt-Q04jWE.jpg?width=216&crop=smart&auto=webp&s=58d9a67b1750ef33b6c443007c2814f2c08e56e9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BbWzP-i9vHt1wcquu4sIJ3cCGOYDyJ3ErHYt-Q04jWE.jpg?width=320&crop=smart&auto=webp&s=c4f40dc31033e97008e3df911f054354d83428db', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BbWzP-i9vHt1wcquu4sIJ3cCGOYDyJ3ErHYt-Q04jWE.jpg?width=640&crop=smart&auto=webp&s=dc365fca1905677347501342d3ba5f992b1a8b02', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BbWzP-i9vHt1wcquu4sIJ3cCGOYDyJ3ErHYt-Q04jWE.jpg?width=960&crop=smart&auto=webp&s=9bfe17be709c5f0c0f17e1c66f51ed98adc58414', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BbWzP-i9vHt1wcquu4sIJ3cCGOYDyJ3ErHYt-Q04jWE.jpg?width=1080&crop=smart&auto=webp&s=27ab859970a704632905270649afc4b3b21a846e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BbWzP-i9vHt1wcquu4sIJ3cCGOYDyJ3ErHYt-Q04jWE.jpg?auto=webp&s=9bbdde7d37c4fc83a0de50eeaa45ee9cfe8f2268', 'width': 1200}, 'variants': {}}]} |
Baichuan 7B reaches top of LLM leaderboard for it's size (New foundation model 4K tokens) | 44 | *baichuan-7B is an open-source large-scale pre-trained model developed by Baichuan Intelligent Technology. Based on the Transformer architecture, it is a model with 7 billion parameters trained on approximately 1.2 trillion tokens. It supports both Chinese and English, with a context window length of 4096. It achieves the best performance of its size on standard Chinese and English authoritative benchmarks (C-EVAL/MMLU).*
* GitHub: [baichuan-inc/baichuan-7B: A large-scale 7B pretraining language model developed by BaiChuan-Inc. (github.com)](https://github.com/baichuan-inc/baichuan-7B)
* Hugging Face: [baichuan-inc/baichuan-7B · Hugging Face](https://huggingface.co/baichuan-inc/baichuan-7B) | 2023-06-17T21:49:24 | https://www.reddit.com/r/LocalLLaMA/comments/14c2t30/baichuan_7b_reaches_top_of_llm_leaderboard_for/ | yanjb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14c2t30 | false | null | t3_14c2t30 | /r/LocalLLaMA/comments/14c2t30/baichuan_7b_reaches_top_of_llm_leaderboard_for/ | false | false | self | 44 | {'enabled': False, 'images': [{'id': 'XxhfQ5HR-fhvQPdVyQoDvGfKVVq8huzYqtKYin46M9s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Kk7e96hxeIm3Gl7pLSaA2y8d4e4CImZ2Me4oo9ixemA.jpg?width=108&crop=smart&auto=webp&s=d2171c02f19675d6c2dae7cb275279235ed34e53', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Kk7e96hxeIm3Gl7pLSaA2y8d4e4CImZ2Me4oo9ixemA.jpg?width=216&crop=smart&auto=webp&s=e555590b29150e21625e4ca24c3e24199af9a739', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Kk7e96hxeIm3Gl7pLSaA2y8d4e4CImZ2Me4oo9ixemA.jpg?width=320&crop=smart&auto=webp&s=6d828195413db1904c03793c192b03f9590eb43f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Kk7e96hxeIm3Gl7pLSaA2y8d4e4CImZ2Me4oo9ixemA.jpg?width=640&crop=smart&auto=webp&s=1dbaf7bf99f9fa6350c2d203b1feef2fdcc6b4ee', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Kk7e96hxeIm3Gl7pLSaA2y8d4e4CImZ2Me4oo9ixemA.jpg?width=960&crop=smart&auto=webp&s=e753fbe921ea5a35adc5458bc62168f25201e4f8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Kk7e96hxeIm3Gl7pLSaA2y8d4e4CImZ2Me4oo9ixemA.jpg?width=1080&crop=smart&auto=webp&s=022e3cb176f73c16ada8cc845fca667033e6baed', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Kk7e96hxeIm3Gl7pLSaA2y8d4e4CImZ2Me4oo9ixemA.jpg?auto=webp&s=f70b8c7b8e3af9e48100ac7c661ed9585f3b8d52', 'width': 1200}, 'variants': {}}]} |
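A minimal loading sketch with transformers follows. `trust_remote_code=True` is typically required for models that ship custom modeling code, which appears to be the case here; the prompt and generation settings are just placeholders.

```python
# Hedged sketch of loading baichuan-7B with transformers. trust_remote_code
# allows the repo's custom modeling code to run; review it before enabling.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/baichuan-7B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "baichuan-inc/baichuan-7B", trust_remote_code=True, device_map="auto"
)

# Base (non-chat) model, so a plain completion prompt is appropriate.
inputs = tokenizer("The Great Wall of China was built", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```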
Token generation speed has decreased after Text generation web UI update | 9 | I was using the Colab text web UI oobabooga when I suddenly encountered some warnings. I experienced a significant decrease in token generation speed after the new update, with a reduction of around 90%.
Before update 9-10 tokens/s after update 0.60 tokens/s
How can I fix this? Thank you in advance.
my settings: cmd = f"python server.py --share --xformers --public-api --verbose --model {model_load} --settings settings-colab.json {' '.join(params)}"
log:
2023-06-17 20:08:40 WARNING:CUDA extension not installed.
2023-06-17 20:08:43 WARNING:The safetensors archive passed at models/TheBloke_Wizard-Vicuna-13B-Uncensored-GPTQ/Wizard-Vicuna-13B-Uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors does not contain metadata. Make sure to save your model with the save_pretrained method. Defaulting to 'pt' metadata.
2023-06-17 20:09:31 WARNING:skip module injection for FusedLlamaMLPForQuantizedModel not support integrate without triton yet. | 2023-06-17T22:41:34 | https://www.reddit.com/r/LocalLLaMA/comments/14c41wh/token_generation_speed_has_decreased_after_text/ | mashimaroxc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14c41wh | false | null | t3_14c41wh | /r/LocalLLaMA/comments/14c41wh/token_generation_speed_has_decreased_after_text/ | false | false | self | 9 | null |
The Secret Sauce behind 100K context window in LLMs: all tricks in one place | 6 | [removed] | 2023-06-17T23:03:25 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14c4kcb | false | null | t3_14c4kcb | /r/LocalLLaMA/comments/14c4kcb/the_secret_sauce_behind_100k_context_window_in/ | false | false | default | 6 | null |
||
Help. llama cpp server is not working | 2 | [removed] | 2023-06-18T01:16:01 | https://www.reddit.com/r/LocalLLaMA/comments/14c7bmf/help_llama_cpp_server_is_not_working/ | JustWantMyIdentity | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14c7bmf | false | null | t3_14c7bmf | /r/LocalLLaMA/comments/14c7bmf/help_llama_cpp_server_is_not_working/ | false | false | default | 2 | null |
How feasible would it be to build a pc for higher parameter models like (quantized) Bloom 176b? | 16 | Hello,
I would like to be able to experiment with high-parameter models like (quantized) Bloom 176b on my own computer, and am trying to figure out how feasible that would be. Firstly, I am thinking I would be running something like 8 Tesla K80s (or maybe P40s) in parallel, on a mining motherboard, with multiple high-wattage PSUs.
If anyone with more understanding/experience can explain why this would/would not work, what I could expect in tokens per minute, or anything I would want to keep in mind, please do so. | 2023-06-18T01:51:19 | https://www.reddit.com/r/LocalLLaMA/comments/14c7zsw/how_feasible_would_it_be_to_build_a_pc_for_higher/ | Nonwhal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14c7zsw | false | null | t3_14c7zsw | /r/LocalLLaMA/comments/14c7zsw/how_feasible_would_it_be_to_build_a_pc_for_higher/ | false | false | self | 16 | null |
Need advice on starting a project | 1 | Hi members,
I am looking to kickstart a project: a RetrievalQA bot built with the LangChain framework using an open-source LLM such as Alpaca, Vicuna, LLaMA, etc. My use case involves reading a collection of documents such as xlsx, csv, pdf, txt, and Word files, so users can query this information and get answers quickly. However, due to the constraints of my org, I am not allowed to access the internet, and hence most of the approaches I found online are unsuitable (using the OpenAI API through API keys is not allowed).
Currently in the open-source LLM space there is a leaderboard at Hugging Face, but I understand this landscape is still vague at the moment (2-3 months in) and newer models will keep displacing older ones in a matter of weeks or months.
I have gone through a lot of research online in this space, but not much of it could meet my needs at this moment. I am looking to build this project using a Python backend with the LangChain framework and a front-end in Angular. Infra will be leveraging GPUs with VMs (GPUs with 8GB RAM at most), and I am exploring cloud solutions at the same time.
Could anyone in this thread please provide some advice on how to proceed with these vague ideas in mind? Any input will be appreciated. Thanks. | 2023-06-18T02:00:38 | https://www.reddit.com/r/LocalLLaMA/comments/14c865a/need_advice_on_starting_a_project/ | GunsproisReal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14c865a | false | null | t3_14c865a | /r/LocalLLaMA/comments/14c865a/need_advice_on_starting_a_project/ | false | false | self | 1 | null |
I am new to language models but I want to create a knowledge base upon a bunch of files so that I can ask questions and get answers back. | 2 | [removed] | 2023-06-18T03:29:49 | https://www.reddit.com/r/LocalLLaMA/comments/14c9ukp/i_am_new_to_language_models_but_i_want_to_create/ | AGoodCG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14c9ukp | false | null | t3_14c9ukp | /r/LocalLLaMA/comments/14c9ukp/i_am_new_to_language_models_but_i_want_to_create/ | false | false | default | 2 | null |
Any help converting an interesting .bin model to 4 bit 128g GPTQ? Bloke? | 9 | I'm using exllama in oobabooga and I'd like to run the two models, cassandra 2.8b and 6.9b, found here:
[https://huggingface.co/wbrown/cassandra-6.9b](https://huggingface.co/wbrown/cassandra-6.9b)
[https://huggingface.co/wbrown/cassandra-2.8b](https://huggingface.co/wbrown/cassandra-2.8b)
They're both 4096 context models. I'd like to convert them to GPTQ to run them with exllama, but I can't for the life of me figure out how to convert a .bin model like this into a 4 bit GPTQ. It's already converted into some ggml models as well, but I believe those are an older version of ggml, so it might need conversion to the newer ggml too...
Anyone got a guide for how to do this, or some kind of instructions of any kind? I don't mind doing it myself, but I've been struggling all evening with this and I think it's time to seek assistance :).
​
​ | 2023-06-18T03:55:31 | https://www.reddit.com/r/LocalLLaMA/comments/14cabi9/any_help_converting_an_interesting_bin_model_to_4/ | deepinterstate | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14cabi9 | false | null | t3_14cabi9 | /r/LocalLLaMA/comments/14cabi9/any_help_converting_an_interesting_bin_model_to_4/ | false | false | self | 9 | null |
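One common route for this kind of conversion is AutoGPTQ; the sketch below shows roughly what 4-bit/128g quantization of an HF checkpoint looks like, with paths, calibration data, and settings as placeholders rather than a tested recipe. Note that exllama targets LLaMA-architecture models, so if these checkpoints use a different architecture they may still not load in exllama even after conversion.

```python
# Hedged sketch of 4-bit / group-size-128 quantization with AutoGPTQ, one
# common route from an HF .bin checkpoint to GPTQ weights.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_id = "wbrown/cassandra-6.9b"        # source checkpoint from the post
out_dir = "cassandra-6.9b-4bit-128g"      # hypothetical output directory

tokenizer = AutoTokenizer.from_pretrained(model_id)
quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)

model = AutoGPTQForCausalLM.from_pretrained(model_id, quantize_config)

# A real run should use a few hundred representative calibration samples.
examples = [tokenizer("The quick brown fox jumps over the lazy dog.")]
model.quantize(examples)
model.save_quantized(out_dir, use_safetensors=True)
```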
The best framework currently going forward: LangChain vs Microsoft (Guidance and Semantic kernel)? | 29 | Hi there!
I'm currently trying to figure out what the best way forward is between either LangChain or the more closed-source Semantic Kernel+Guidance. There seems to be limited in-depth discussion about the comparisons (except for this post: [https://www.reddit.com/r/aipromptprogramming/comments/13q7q2n/my\_dive\_into\_langchain\_vs\_microsoft\_guidance\_a/?utm\_name=ioscss](https://www.reddit.com/r/aipromptprogramming/comments/13q7q2n/my_dive_into_langchain_vs_microsoft_guidance_a/?utm_name=ioscss)) and would like some advice going forward. The project would not involve agents (yet) but will have both long and short term memory and other custom components.
Currently nudging towards LangChain because of the extensive documentation and popularity of the project, although after some testing I have found it to be slower than Guidance in certain instances and more token-intensive than Semantic Kernel.
​
Any insights are appreciated! | 2023-06-18T04:03:04 | https://www.reddit.com/r/LocalLLaMA/comments/14cagpc/the_best_framework_currently_going_forward/ | Caesar_Zalad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14cagpc | false | null | t3_14cagpc | /r/LocalLLaMA/comments/14cagpc/the_best_framework_currently_going_forward/ | false | false | self | 29 | null |
Falcon 40b | 1 | [removed] | 2023-06-18T06:43:20 | https://www.reddit.com/r/LocalLLaMA/comments/14cdc0g/falcon_40b/ | Toaster496 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14cdc0g | false | null | t3_14cdc0g | /r/LocalLLaMA/comments/14cdc0g/falcon_40b/ | false | false | default | 1 | null |
How to train instruct models on documents NOT in QA form? | 30 | Hi,
I have asked this before, but didn't get any satisfactory answer (maybe one doesn't exist). I think all of us have a huge pile of documents that we'd love to do QA on. The same goes for me.
Also, I don't want to go through the route of vector databases as token limits come into play for filtered documents.
Any suggestions please... | 2023-06-18T08:08:54 | https://www.reddit.com/r/LocalLLaMA/comments/14cesr3/how_to_train_instruct_models_on_documents_not_in/ | freakynit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14cesr3 | false | null | t3_14cesr3 | /r/LocalLLaMA/comments/14cesr3/how_to_train_instruct_models_on_documents_not_in/ | false | false | self | 30 | null |
Opinion: I wish some of us would focus on smaller (and diverse) models | 0 | [deleted] | 2023-06-18T09:53:08 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14cgibw | false | null | t3_14cgibw | /r/LocalLLaMA/comments/14cgibw/opinion_i_wish_some_of_us_would_focus_on_smaller/ | false | false | default | 0 | null |
||
Cool video I found. | 1 | [deleted] | 2023-06-18T09:54:40 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14cgj7d | false | {'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/by2j3s3v0r6b1/DASHPlaylist.mpd?a=1695789765%2CMjk1ODA2NGYyNzViZGU1YmExNDY2YzlhMGMwMDJiZmRmYTQyMzdmMjZjMTEwNjY4MTE3NTM0MWNkNmY1OGQxYQ%3D%3D&v=1&f=sd', 'duration': 46, 'fallback_url': 'https://v.redd.it/by2j3s3v0r6b1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/by2j3s3v0r6b1/HLSPlaylist.m3u8?a=1695789765%2CYTYxZDEzZTg1ZTFhNjgyMGFkMjU5ZDJlNzEzN2Y0NjQzYmQ3ODM2NjU5MWY1Zjg0YzFkNWE1OGJhMmM0NDZmZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/by2j3s3v0r6b1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_14cgj7d | /r/LocalLLaMA/comments/14cgj7d/cool_video_i_found/ | false | false | default | 1 | null |
||
hey can guys can someone help me and tell me what wrong | 0 | [removed] | 2023-06-18T10:34:29 | https://www.reddit.com/r/LocalLLaMA/comments/14ch7k2/hey_can_guys_can_someone_help_me_and_tell_me_what/ | karlk123 | self.LocalLLaMA | 2023-06-18T10:44:07 | 0 | {} | 14ch7k2 | false | null | t3_14ch7k2 | /r/LocalLLaMA/comments/14ch7k2/hey_can_guys_can_someone_help_me_and_tell_me_what/ | false | false | default | 0 | null |
Help me discover new LLMs for school project | 7 | Hello everyone,
This semester I'm taking a course called Ethical AI at my uni, and my teammates and I are experimenting with how ethically LLMs respond when used for question answering.
The problem is that none of us has a GPU, which makes doing inference with some models practically impossible. So far we have GPT-3.5 through API access and also have [FastChat](https://huggingface.co/lmsys/fastchat-t5-3b-v1.0) running locally (though every response takes a couple of minutes to generate).
Understandably, the more models we can test, the better. It doesn't have to be a great-performing model, to be honest; we already have GPT-3.5 for that. What I'm looking for is a model tuned for Q/A that can run on a shitty laptop with 16 gigs of RAM. I've already tried some models such as [alpaca-lora-7b](https://huggingface.co/chainyo/alpaca-lora-7b), [koala-7b](https://huggingface.co/samwit/koala-7b/tree/main), [chatglm-6b](https://huggingface.co/THUDM/chatglm-6b) and [mpt-7b-chat](https://huggingface.co/mosaicml/mpt-7b-chat), but they all take forever to infer.
Thank you a lot for any suggestions in advance. | 2023-06-18T10:43:50 | https://www.reddit.com/r/LocalLLaMA/comments/14chda2/help_me_discover_new_llms_for_school_project/ | shadowofdeath06r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14chda2 | false | null | t3_14chda2 | /r/LocalLLaMA/comments/14chda2/help_me_discover_new_llms_for_school_project/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'uZWr6sWp7K0BcVvLsLCGoyThstg4ngpcvI1hWfMVDoM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/o7Q4gmmPgW3jgQ5W-_5S6Ev4G6m4s5vItwWVN2q6nHE.jpg?width=108&crop=smart&auto=webp&s=85c678ba40949308d90f2723bc3b57e4a609693b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/o7Q4gmmPgW3jgQ5W-_5S6Ev4G6m4s5vItwWVN2q6nHE.jpg?width=216&crop=smart&auto=webp&s=ca344d3b6a53f127062be367f9300a47d55634a8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/o7Q4gmmPgW3jgQ5W-_5S6Ev4G6m4s5vItwWVN2q6nHE.jpg?width=320&crop=smart&auto=webp&s=a52bfba56099eda24948c6c9d0528d2904da0eda', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/o7Q4gmmPgW3jgQ5W-_5S6Ev4G6m4s5vItwWVN2q6nHE.jpg?width=640&crop=smart&auto=webp&s=4cde5a42b65fa1a6e8fae0f49d47449590ebe2e4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/o7Q4gmmPgW3jgQ5W-_5S6Ev4G6m4s5vItwWVN2q6nHE.jpg?width=960&crop=smart&auto=webp&s=287d4a8512dd80c9fefa0d812f38887ad3e73ed9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/o7Q4gmmPgW3jgQ5W-_5S6Ev4G6m4s5vItwWVN2q6nHE.jpg?width=1080&crop=smart&auto=webp&s=259289fa79c53ff4a17dbce23a4df51f093577f8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/o7Q4gmmPgW3jgQ5W-_5S6Ev4G6m4s5vItwWVN2q6nHE.jpg?auto=webp&s=b0b5217f1c2939499a6b66e476b1357b2b0f7e06', 'width': 1200}, 'variants': {}}]} |
Looking for an embeddings model for a chatbot | 2 | [removed] | 2023-06-18T10:48:16 | https://www.reddit.com/r/LocalLLaMA/comments/14chg13/looking_for_an_embeddings_model_for_a_chatbot/ | QUAZARD3141 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14chg13 | false | null | t3_14chg13 | /r/LocalLLaMA/comments/14chg13/looking_for_an_embeddings_model_for_a_chatbot/ | false | false | default | 2 | null |
Finetuning a LLM on another language | 10 | Hi folks. Any ideas on this?
Has anyone done this?
Say you want to end up with an equivalent of WizardLM or Samantha that can also speak Greek. How would you go about it?
Do you think a foundational model trained on that language is necessary?
If not, might finetuning a generic LLM on Greek text and then further finetuning it with a Greek chat/instruct dataset work?
Any other ideas?
Thanks! | 2023-06-18T11:15:58 | https://www.reddit.com/r/LocalLLaMA/comments/14chx6o/finetuning_a_llm_on_another_language/ | nightlingo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14chx6o | false | null | t3_14chx6o | /r/LocalLLaMA/comments/14chx6o/finetuning_a_llm_on_another_language/ | false | false | self | 10 | null |
No, GPT4 can’t ace MIT | 93 | I am getting real sick of sensationalist headlines about bogus or bugged evaluation results, this problem is spreading. Using GPT4 as an evaluator should be treated very suspiciously; this discussion pokes a number of holes in the original evaluator and found it's a) cheating, b) fed useless prompts and c) prompted in a loop until it gets the answer right. | 2023-06-18T13:12:58 | https://flower-nutria-41d.notion.site/No-GPT4-can-t-ace-MIT-b27e6796ab5a48368127a98216c76864 | kryptkpr | flower-nutria-41d.notion.site | 1970-01-01T00:00:00 | 0 | {} | 14ck6n4 | false | null | t3_14ck6n4 | /r/LocalLLaMA/comments/14ck6n4/no_gpt4_cant_ace_mit/ | false | false | 93 | {'enabled': False, 'images': [{'id': 'XYmuwPM5wcdQxcthmVMHxETGbYymC7xSAjiDX7TOGLs', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/6aGoOm7lym2u-v-75uwWL2X_RA533Dbe53_nWKA_yfo.jpg?width=108&crop=smart&auto=webp&s=9412fb65c53bef931e51fca03fb463edd8964ebf', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/6aGoOm7lym2u-v-75uwWL2X_RA533Dbe53_nWKA_yfo.jpg?width=216&crop=smart&auto=webp&s=b8e8145e391e3993b9ac7292c11a711ea68d0293', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/6aGoOm7lym2u-v-75uwWL2X_RA533Dbe53_nWKA_yfo.jpg?width=320&crop=smart&auto=webp&s=4981438407c529a91b71dfd76734c9488e96a8d6', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/6aGoOm7lym2u-v-75uwWL2X_RA533Dbe53_nWKA_yfo.jpg?width=640&crop=smart&auto=webp&s=f69c3ea21711aa9d2bedbd8735295db215f1d562', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/6aGoOm7lym2u-v-75uwWL2X_RA533Dbe53_nWKA_yfo.jpg?width=960&crop=smart&auto=webp&s=ab35c3ccb208dd53f1c6597997dbdcfc587e9ee8', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/6aGoOm7lym2u-v-75uwWL2X_RA533Dbe53_nWKA_yfo.jpg?width=1080&crop=smart&auto=webp&s=79573a73579c3a6e8dcea9a53319e3f6ccc25150', 'width': 1080}], 'source': {'height': 1500, 'url': 'https://external-preview.redd.it/6aGoOm7lym2u-v-75uwWL2X_RA533Dbe53_nWKA_yfo.jpg?auto=webp&s=c0b8f4a3308499572ae2b8c882bc1f417ba91236', 'width': 2000}, 'variants': {}}]} |
|
Do I need to renaming keys in state_dict? (warning : beginner) | 3 | [deleted] | 2023-06-18T13:37:06 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14ckp5l | false | null | t3_14ckp5l | /r/LocalLLaMA/comments/14ckp5l/do_i_need_to_renaming_keys_in_state_dict_warning/ | false | false | default | 3 | null |
||
WizardLM-7B-V1.0-Uncensored | 217 | Today I released WizardLM-7B-V1.0-Uncensored
[https://huggingface.co/ehartford/WizardLM-7B-V1.0-Uncensored](https://huggingface.co/ehartford/WizardLM-7B-V1.0-Uncensored)
It is based on the [WizardLM/WizardLM_evol_instruct_V2_196k](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k) dataset, which I filtered to remove refusals, avoidance, and bias: [ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split](https://huggingface.co/datasets/ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split)
I trained this with Vicuna's FastChat, as the new data is in ShareGPT format and WizardLM has not specified a method to train on it.
The prompt format is Vicuna-1.1 style.
While the instruct layer has been uncensored, LLaMA itself has opinions, and will resist instructions it disagrees with. You must overcome that with your own wit and prompt engineering.
Finally, you are responsible for the content you create using this model. It is uncensored and can produce illegal and unethical content, if you ask it to. If deploying this in a service, I would recommend you train a LoRA to implement your own alignment to prevent unwanted behavior.
u/The-Bloke has kindly quantized this model as a service to the community. Respect.
[https://huggingface.co/TheBloke/WizardLM-7B-V1.0-Uncensored-GGML](https://huggingface.co/TheBloke/WizardLM-7B-V1.0-Uncensored-GGML) [https://huggingface.co/TheBloke/WizardLM-7B-V1.0-Uncensored-GPTQ](https://huggingface.co/TheBloke/WizardLM-7B-V1.0-Uncensored-GPTQ)
13b and 33b are forthcoming.
Thank you to the open source community and everyone who helped me. | 2023-06-18T13:39:53 | https://www.reddit.com/r/LocalLLaMA/comments/14ckrd6/wizardlm7bv10uncensored/ | faldore | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ckrd6 | false | null | t3_14ckrd6 | /r/LocalLLaMA/comments/14ckrd6/wizardlm7bv10uncensored/ | false | false | self | 217 | {'enabled': False, 'images': [{'id': 'cNUoa2cl_XRCXYoPSuGVqRp09bgtSWwLFueY1XfhByQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qGs77VIMt7sHQmPHjGlYozj10CYZWYysbm7W80L44kg.jpg?width=108&crop=smart&auto=webp&s=a609beadb09244c9cf68fb4cf17b80a86e177824', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/qGs77VIMt7sHQmPHjGlYozj10CYZWYysbm7W80L44kg.jpg?width=216&crop=smart&auto=webp&s=ceda8978a3e009e6790152a679e17f1e1caf2864', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/qGs77VIMt7sHQmPHjGlYozj10CYZWYysbm7W80L44kg.jpg?width=320&crop=smart&auto=webp&s=156bb3173a03eccc409fdd3e54694ea6fc1d750c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/qGs77VIMt7sHQmPHjGlYozj10CYZWYysbm7W80L44kg.jpg?width=640&crop=smart&auto=webp&s=7ac5c8bef1d7732531867015e88562a4236df75f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/qGs77VIMt7sHQmPHjGlYozj10CYZWYysbm7W80L44kg.jpg?width=960&crop=smart&auto=webp&s=fd98dc1b845904b8110743d2f378943f23968cf5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/qGs77VIMt7sHQmPHjGlYozj10CYZWYysbm7W80L44kg.jpg?width=1080&crop=smart&auto=webp&s=a448c06ff6b64b6385a5fa347f64200c6c422617', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/qGs77VIMt7sHQmPHjGlYozj10CYZWYysbm7W80L44kg.jpg?auto=webp&s=4717d9fe0a5f7fc7e382c17957500b8a4dff217d', 'width': 1200}, 'variants': {}}]} |
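Since the post says the prompt format is Vicuna-1.1 style, here is a small helper sketching that template; the system line is the commonly used Vicuna one, and the model card should be treated as authoritative if it differs.

```python
# Sketch of a Vicuna-1.1 style prompt builder for the model described above.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def vicuna_v11_prompt(user_message: str) -> str:
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

print(vicuna_v11_prompt("Write a limerick about uncensored language models."))
```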
Can anyone with an M1 Ultra with 64gb ram tell me how usable 65b models are on llama.cpp? | 17 | I'm considering buying a new mac studio and from what I've read, the M2 Max is very similar to the M1 ultra, so I'm trying to figure out if it's a worthwhile purchase to do inference on 65b models.
I currently have an M2 Pro with 32GB RAM and it flies on 33b models, but I have no idea how it would perform if I had 64GB RAM. I guess a similar question, but not quite as useful, would be how well M2 Pros do with 64GB RAM on 65b models.
Thoughts? Unfortunately due to the fact that llama.cpp's new use of metal GPU can only take advantage of 1/2 the memory, I'm unable to run 33b models (or 65b ultra quantized for that matter) to see how those would fare. Also, I'm confident that the llama.cpp team will continue to make strides to bring these 65b models within reach for those of us with apple silicon, but I'm a bit impatient :) | 2023-06-18T14:49:54 | https://www.reddit.com/r/LocalLLaMA/comments/14cmbfc/can_anyone_with_an_m1_ultra_with_64gb_ram_tell_me/ | spanielrassler | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14cmbfc | false | null | t3_14cmbfc | /r/LocalLLaMA/comments/14cmbfc/can_anyone_with_an_m1_ultra_with_64gb_ram_tell_me/ | false | false | self | 17 | null |
Vicuna v1.3 13B and 7B released, trained with twice the amount of ShareGPT data | 124 | 13B: [https://huggingface.co/lmsys/vicuna-13b-v1.3](https://huggingface.co/lmsys/vicuna-13b-v1.3)
7B: [https://huggingface.co/lmsys/vicuna-7b-v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3)
Prompt template is [the same](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md#prompt-template) as v1.1.
They did not release them as delta weights this time. Have they received some special go ahead from Meta? This could possibly be more indication of Meta's willingness on moving toward an [open source](https://www.reddit.com/r/LocalLLaMA/comments/14bfpwi/meta_reportedly_wants_to_make_llama_v2_available/) future for next LLaMA versions. | 2023-06-18T15:12:34 | https://www.reddit.com/r/LocalLLaMA/comments/14cmupe/vicuna_v13_13b_and_7b_released_trained_with_twice/ | llamaShill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14cmupe | false | null | t3_14cmupe | /r/LocalLLaMA/comments/14cmupe/vicuna_v13_13b_and_7b_released_trained_with_twice/ | false | false | self | 124 | {'enabled': False, 'images': [{'id': 'Fl3xK1lpSe46ZPZz9RWj6Pz_Jkc60S0NIhn9X8trmOQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/raRPEmQ535qpJMq1d19fajJ0OBTYQuzgUT-Mx8Jy5ho.jpg?width=108&crop=smart&auto=webp&s=a4b76079f95ea4754282df628dadbab99a52505b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/raRPEmQ535qpJMq1d19fajJ0OBTYQuzgUT-Mx8Jy5ho.jpg?width=216&crop=smart&auto=webp&s=49219d33deb128f31be22cd36e22293993428e80', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/raRPEmQ535qpJMq1d19fajJ0OBTYQuzgUT-Mx8Jy5ho.jpg?width=320&crop=smart&auto=webp&s=c39cd41989d62e92805c5feed423aea4b73df180', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/raRPEmQ535qpJMq1d19fajJ0OBTYQuzgUT-Mx8Jy5ho.jpg?width=640&crop=smart&auto=webp&s=7b7f4db771f8918efa635c681e6327d33fba3463', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/raRPEmQ535qpJMq1d19fajJ0OBTYQuzgUT-Mx8Jy5ho.jpg?width=960&crop=smart&auto=webp&s=8484d637d9e02adda4a308e6d7de3d5fd30856eb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/raRPEmQ535qpJMq1d19fajJ0OBTYQuzgUT-Mx8Jy5ho.jpg?width=1080&crop=smart&auto=webp&s=442719929fb99d147d89963a6fb3ea1ed5696fab', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/raRPEmQ535qpJMq1d19fajJ0OBTYQuzgUT-Mx8Jy5ho.jpg?auto=webp&s=75bd5a67df430daf32411a8fcae84fa21f5cb881', 'width': 1200}, 'variants': {}}]} |
Exllama gives worse outputs on Manticore/wizard mega 13B | 19 | Is this normal? Exllama seems to decrease the quality of responses drastically. | 2023-06-18T15:43:57 | https://www.reddit.com/gallery/14cnkgi | l_Teto_l | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 14cnkgi | false | null | t3_14cnkgi | /r/LocalLLaMA/comments/14cnkgi/exllama_gives_worse_outputs_on_manticorewizard/ | false | false | 19 | null |
|
Building local machine, Ram question (Ddr4 64 gb 3600 mhz vs 128 gb 3000mhz) | 8 | Good day, fellow LLM explorers. First, thank you everyone for sharing your insights and experiences. It's really helpful, especially for those taking a look at this from outside the IT field.
I am building a local machine and so far have this:
https://ca.pcpartpicker.com/list/BJ9NXy
CPU : AMD 5600X
HD: Corsair MP600 1 TB M.2-2280 PCIe 4.0 X4 NVME Solid State Drive (+old TB Seagate SATA drive)
MB: ASUS TUF Gaming x570-Pro Wifi
PSU: Corsair RM750
Case: Cooler Master Silencio 652S ATX Mid Tower Case
I'm looking at DDR4 RAM options and wondering whether it would be better to go with 128 GB DDR4 at 3000 MHz or 64 GB at 3600 MHz. The old 128 GB RAM is G.Skill Ripjaws 4 DDR4 CL-15-16-16-35 (blue) from 2015.
Thought I'd ask if getting newer/faster RAM would be better even if it was half as much, given my budget.
I would like to experiment with LLMs and am looking at getting a used 3060 as well and learning more about all of this.
Speed isn't critical; the main reason I am asking is that RAM size seems to be the biggest factor for CPU inference, and maxing out my RAM now may give me more options while I save for a used 3090 down the road.
I am open to suggestions; I'm leaning towards the used 128 GB RAM at the slower speed.
My use case is primarily using LLM'S for brainstorming/organizing my thoughts/coaching/chat where quality>speed. I enjoyed some of the online chat services and would like to explore my options at home.
Thank you
(Edit: Added details about hardware and CPU inference) | 2023-06-18T17:20:19 | https://www.reddit.com/r/LocalLLaMA/comments/14cptiu/building_local_machine_ram_question_ddr4_64_gb/ | Nachismolvx | self.LocalLLaMA | 2023-06-18T17:45:18 | 0 | {} | 14cptiu | false | null | t3_14cptiu | /r/LocalLLaMA/comments/14cptiu/building_local_machine_ram_question_ddr4_64_gb/ | false | false | self | 8 | null |
Please help to apply lora to oobabooga | 1 | [removed] | 2023-06-18T17:44:24 | https://www.reddit.com/r/LocalLLaMA/comments/14cqdql/please_help_to_apply_lora_to_oobabooga/ | Competitive_Fox7811 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14cqdql | false | null | t3_14cqdql | /r/LocalLLaMA/comments/14cqdql/please_help_to_apply_lora_to_oobabooga/ | false | false | default | 1 | null |
Which is the right LLM for a task? | 12 | [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) provides 4 different metrics:
- AI2 Reasoning Challenge (25-shot) - a set of grade-school science questions.
- HellaSwag (10-shot) - a test of commonsense inference, which is easy for humans (~95%) but challenging for SOTA models.
- MMLU (5-shot) - a test to measure a text model’s multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more.
- TruthfulQA (0-shot) - a test to measure a model’s propensity to reproduce falsehoods commonly found online.
and averages out scores to create the leaderboard.
Param size considerations aside,
### How does one pick the right LLM for a task on the basis of these benchmarks?
For instance:
- Context free Q&A - e.g. Who is the oldest fictional character?
- Abstractive Summarization tasks e.g. Summarize this 5 page document.
- Expand from a Context.
- Completion tasks.
- Q&A from reading a corpus.
etc? | 2023-06-18T17:46:16 | https://www.reddit.com/r/LocalLLaMA/comments/14cqfco/which_is_the_right_llm_for_a_task/ | supermanhelpsevery1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14cqfco | false | null | t3_14cqfco | /r/LocalLLaMA/comments/14cqfco/which_is_the_right_llm_for_a_task/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'EN0-abblERL52DxeoNzcxdkhvXEwLdZMJTS58Umjs64', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=108&crop=smart&auto=webp&s=6fbb309f983333cbaf528bd40f8d6ffb39877704', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=216&crop=smart&auto=webp&s=1ae10c5a53638209dee07b017628d2a1fadc8d05', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=320&crop=smart&auto=webp&s=cf36565d3bac3086aaea4458c31609ff1b2c00b3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=640&crop=smart&auto=webp&s=8e182cefcf8da97d7b4369734149986feca334e5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=960&crop=smart&auto=webp&s=7699d0ad09185e2f560115cae5cb71e907073327', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=1080&crop=smart&auto=webp&s=7b11f6f2294899964ec8ed081777f4b6e19723b6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?auto=webp&s=81db4d9e1dd01a76f499e499f78aed3478ae6658', 'width': 1200}, 'variants': {}}]} |
New to this and have a question.. What are the benefits of having a local language model , if you also use NovelAI? Are there benefits to using both? In both SFW and NSFW situations for co-writing | 0 | I would love to get some perspective on this. Everything is advancing fast so it's hard to keep up | 2023-06-18T17:48:14 | https://www.reddit.com/r/LocalLLaMA/comments/14cqgvp/new_to_this_and_have_a_question_what_are_the/ | ChromeGhost | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14cqgvp | false | null | t3_14cqgvp | /r/LocalLLaMA/comments/14cqgvp/new_to_this_and_have_a_question_what_are_the/ | false | false | nsfw | 0 | null |
Issue with CUDA. no cuda runtime is found, using cuda_home=c:\program files\nvidia gpu computing toolkit\cuda\v11.7 | 1 | [removed] | 2023-06-18T18:11:18 | https://www.reddit.com/r/LocalLLaMA/comments/14cr09w/issue_with_cuda_no_cuda_runtime_is_found_using/ | reiniken | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14cr09w | false | null | t3_14cr09w | /r/LocalLLaMA/comments/14cr09w/issue_with_cuda_no_cuda_runtime_is_found_using/ | false | false | default | 1 | null |
Any up-to-date resource to learn all about Finetuning language models in a practical way? | 1 | [removed] | 2023-06-18T19:18:47 | https://www.reddit.com/r/LocalLLaMA/comments/14cskma/any_uptodate_resource_to_learn_all_about/ | nderstand2grow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14cskma | false | null | t3_14cskma | /r/LocalLLaMA/comments/14cskma/any_uptodate_resource_to_learn_all_about/ | false | false | default | 1 | null |
Can I integrate my local LLM to enable it to run system commands and execute local code? | 9 | My question is not whether I should, but rather, how is it accomplished? And are there any models or GUIs that are specifically good at this? I'd like to use it to manage my files, generate HTML, and do pretty much anything that it has available on the command line. (I'm a programmer, so I've installed tons of utilities like python, curl, wget, nodejs, move, copy, etc). | 2023-06-18T20:38:10 | https://www.reddit.com/r/LocalLLaMA/comments/14cuhmm/can_i_integrate_my_local_llm_to_enable_it_to_run/ | zapeggo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14cuhmm | false | null | t3_14cuhmm | /r/LocalLLaMA/comments/14cuhmm/can_i_integrate_my_local_llm_to_enable_it_to_run/ | false | false | self | 9 | null |
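For reference, a minimal sketch of the pattern asked about above: the model proposes a shell command and the user approves it before anything runs. The `generate()` function is only a placeholder for however you query your local model (llama.cpp bindings, a local HTTP API, etc.); nothing here is any specific library's official interface.

```python
import subprocess

def generate(prompt: str) -> str:
    """Placeholder: send the prompt to your local LLM and return its reply."""
    raise NotImplementedError("wire this up to your own model")

def run_task(task: str) -> None:
    # Ask the model for a single shell command, show it, and only run it on approval.
    cmd = generate(f"Return one shell command that does the following, and nothing else: {task}").strip()
    print(f"Model suggests: {cmd}")
    if input("Run it? [y/N] ").strip().lower() == "y":
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        print(result.stdout or result.stderr)

run_task("list the five largest files in the current directory")
```

Keeping a manual confirmation step like this is the simplest guard against the model issuing destructive commands.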
Impact of regulations on open source LLM | 110 | Let's discuss regulating open source AI. I am interested in your opinion on the following questions regarding the AI Act proposed by the EU and its broader implications:
- What will be the consequences of regulations like the commission proposed (especially article 28b) on the open source community, its models, datasets, etc.?
- In case the US and other countries also choose to go down a similar path, what would be the response of the open source community?
- Would the open source community find alternative ways to share advancements in LLMs if regulations were imposed? How might the community adapt to overcome these challenges?
You can download the proposal v1.1 [here](https://www.europarl.europa.eu/resources/library/media/20230516RES90302/20230516RES90302.pdf) and the recent amendments [here](https://www.google.com/url?sa=t&source=web&rct=j&url=https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.pdf&ved=2ahUKEwiV1IOLzcz_AhXDxwIHHfFhDgsQFnoECBIQAQ&usg=AOvVaw3vQchKoD9nxBMWqS0GX1fs).
I am excited for your opinions. | 2023-06-18T21:05:06 | Kujamara | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14cv5qo | false | null | t3_14cv5qo | /r/LocalLLaMA/comments/14cv5qo/impact_of_regulations_on_open_source_llm/ | false | false | 110 | {'enabled': True, 'images': [{'id': 'ZNikjFS-Ge3aq7BB-WaTjaXoeuYvMvxU03PneUcrKjo', 'resolutions': [{'height': 31, 'url': 'https://preview.redd.it/0urhf28tcu6b1.jpg?width=108&crop=smart&auto=webp&s=85edc7f5194511a4effee38ffbd82908c8ea358f', 'width': 108}, {'height': 62, 'url': 'https://preview.redd.it/0urhf28tcu6b1.jpg?width=216&crop=smart&auto=webp&s=d893714f8f901cd44aa3b85781f3a7786881bfd0', 'width': 216}, {'height': 93, 'url': 'https://preview.redd.it/0urhf28tcu6b1.jpg?width=320&crop=smart&auto=webp&s=699f566e41b9878d0f288541fa73be8d892f6ecc', 'width': 320}, {'height': 186, 'url': 'https://preview.redd.it/0urhf28tcu6b1.jpg?width=640&crop=smart&auto=webp&s=13e221823fc4415b7bbcd31580a3a1091574acbe', 'width': 640}, {'height': 279, 'url': 'https://preview.redd.it/0urhf28tcu6b1.jpg?width=960&crop=smart&auto=webp&s=0004fd1665efac2a1956fbadef01f64800495d10', 'width': 960}, {'height': 314, 'url': 'https://preview.redd.it/0urhf28tcu6b1.jpg?width=1080&crop=smart&auto=webp&s=d05a57aedef12712a227337bbbfbc647f0b17cc3', 'width': 1080}], 'source': {'height': 554, 'url': 'https://preview.redd.it/0urhf28tcu6b1.jpg?auto=webp&s=212b748345526164fc4f16102edb22d60b8e7e59', 'width': 1904}, 'variants': {}}]} |
||
The Secret Sauce behind 100K context window in LLMs: all tricks in one place | 67 | 2023-06-18T21:23:31 | https://blog.gopenai.com/how-to-speed-up-llms-and-use-100k-context-window-all-tricks-in-one-place-ffd40577b4c | Scary-Knowledgable | blog.gopenai.com | 1970-01-01T00:00:00 | 0 | {} | 14cvm1z | false | null | t3_14cvm1z | /r/LocalLLaMA/comments/14cvm1z/the_secret_sauce_behind_100k_context_window_in/ | false | false | 67 | {'enabled': False, 'images': [{'id': 'A1c75bAalVi03OE70xCJtIOWt37j9xPsl2V0-U2yiI8', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/u74fd8BLBnWBtPQqndzTiZpKmyTrBF807Uqz7zV0uYc.jpg?width=108&crop=smart&auto=webp&s=c67288e6e3747b19a88579ba025ab53f6f241e45', 'width': 108}, {'height': 145, 'url': 'https://external-preview.redd.it/u74fd8BLBnWBtPQqndzTiZpKmyTrBF807Uqz7zV0uYc.jpg?width=216&crop=smart&auto=webp&s=d0ec6f38c71bd10480c53bd61fff8bfd65ff1fa8', 'width': 216}, {'height': 215, 'url': 'https://external-preview.redd.it/u74fd8BLBnWBtPQqndzTiZpKmyTrBF807Uqz7zV0uYc.jpg?width=320&crop=smart&auto=webp&s=0b080ce2bc89d6550a80c4c5a6ef455dfccacf20', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/u74fd8BLBnWBtPQqndzTiZpKmyTrBF807Uqz7zV0uYc.jpg?auto=webp&s=8488986eb7632e33ad5d37abbb9847b855d6c5af', 'width': 534}, 'variants': {}}]} |
||
Speedup 13B full models in text-generation-webui | 2 | Hi,
How can I speed up a fine-tuned model when running it via text-generation-webui?
On my current setup, if I run a 13B GPTQ model I get around 13-15 tokens per second
But if I load a full 13B LLaMA model + LoRA in 4-bit mode, I only get around 4-5 tokens per second. Is there any way to speed it up so I can get more tokens per second? | 2023-06-18T22:00:20 | https://www.reddit.com/r/LocalLLaMA/comments/14cwhq5/speedup_13b_full_models_in_textgenerationwebui/ | nixscorpio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14cwhq5 | false | null | t3_14cwhq5 | /r/LocalLLaMA/comments/14cwhq5/speedup_13b_full_models_in_textgenerationwebui/ | false | false | self | 2 | null |
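For context on the question above: loading a full-precision checkpoint with on-the-fly 4-bit quantization and an unmerged LoRA adapter is generally slower than a pre-quantized GPTQ model. A rough sketch of that slower path, using transformers + peft (paths and the 4-bit settings are illustrative assumptions, not a recommendation):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_path = "models/llama-13b-hf"   # assumption: an HF-format 13B checkpoint
lora_path = "loras/my-13b-lora"     # assumption: a trained LoRA adapter

bnb = BitsAndBytesConfig(load_in_4bit=True,
                         bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.float16)

tok = AutoTokenizer.from_pretrained(base_path)
model = AutoModelForCausalLM.from_pretrained(base_path,
                                             quantization_config=bnb,
                                             device_map="auto")
model = PeftModel.from_pretrained(model, lora_path)   # adapter stays unmerged

out = model.generate(**tok("Hello", return_tensors="pt").to(model.device),
                     max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```

Merging the LoRA into the base weights and re-quantizing the result (e.g. to GPTQ) is the usual way people get back to the faster speeds.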
LLaMA for SQL | 4 | Is LLaMA good for SQL statement generation? I have tried Stanford Alpaca 7B and the accuracy was not that great. Is there a best-in-class local LLM for SQL? | 2023-06-18T22:08:32 | https://www.reddit.com/r/LocalLLaMA/comments/14cwp7z/llama_for_sql/ | ilurkerz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14cwp7z | false | null | t3_14cwp7z | /r/LocalLLaMA/comments/14cwp7z/llama_for_sql/ | false | false | self | 4 | null |
Could prompts one day be used to "compress" large files? | 0 | [deleted] | 2023-06-18T22:40:58 | [deleted] | 2023-07-13T00:10:33 | 0 | {} | 14cxfza | false | null | t3_14cxfza | /r/LocalLLaMA/comments/14cxfza/could_prompts_one_day_be_used_to_compress_large/ | false | false | default | 0 | null |
||
When WizardLM 7B is bored... | 0 |
Prompt: "How might a Virtual Reality-based alternative to Peloton function?" Can you provide a complete business plan for creating a company founded on this question? Please include a budget, headcount, technology implementation, sales strategy, and any other key aspects into your submission.
Response: Sure! Here's an example of how a Virtual Reality-based alternative to Peloton could function as a business:
Business Plan:
Executive Summary: Our company is developing a virtual reality (VR) platform that provides users with a fully immersive workout experience. The platform will include a variety of exercises and classes, led by certified personal trainers, that can be customized to meet individual fitness goals. We aim to disrupt the traditional gym industry by providing an affordable and convenient alternative for people who want to stay active but may not have access to a gym or the time/money to attend in-person classes.
Market Analysis: The global health and wellness market is expected to reach $1.2 trillion by 2025, with a significant portion of this growth coming from virtual fitness solutions. In the US alone, the number of people using home workout equipment has increased by 34% since 2016. Our target audience includes busy professionals, stay-at-home parents, and seniors who may not have access to a gym or the time/money to attend in-person classes.
Competitive Analysis: Peloton is currently the market leader in the indoor cycling space, with a strong brand presence and a loyal customer base. However, their pricing model (which includes a monthly membership fee as well as per-class fees) may be prohibitively expensive for some users. Other competitors include fitness apps like Fitbit, Nike Training Club, and Sworkit, which offer virtual workout options but lack the immersive experience of VR.
Marketing Strategy: Our marketing strategy will focus on social media advertising, influencer partnerships, and targeted email campaigns to reach our target audience. We will also partner with local gyms and fitness studios to offer a complementary service to their members. Additionally, we plan to create a referral program to incentivize existing users to bring new customers on board.
Technology Implementation: Our VR platform will be built using the latest technology, including HTC Vive or Oculus Rift headsets and motion controllers. We will also incorporate AI algorithms that can adjust workout intensity based on user feedback and provide personalized recommendations for exercises and classes.
Budget: Our initial budget is $500,000, which we plan to use as follows:
\- $200,000 for technology development and implementation
\- $100,000 for marketing and advertising
\- $100,000 for staff salaries (including a CEO, CTO, and two personal trainers)
Sales Strategy: Our sales strategy will focus on building partnerships with local gyms and fitness studios to offer our service as an add-on to their existing offering. We will also sell memberships directly to individuals through our website and social media channels. Additionally, we plan to offer a free trial period for new users to try out the platform before committing to a membership.
Headcount: Our initial headcount will consist of five employees:
\- CEO
\- CTO
\- Two personal trainers
We anticipate hiring additional staff as we grow and expand our offering beyond VR workout classes.
Conclusion: Our virtual reality platform offers an innovative solution to the traditional gym industry, providing users with a fully immersive workout experience led by certified personal trainers. With a strong marketing strategy and partnerships with local gyms and fitness studios, we believe that our service can disrupt the industry and provide a convenient and affordable alternative for people who want to stay active but may not have access to a gym or the time/money to attend in-person classes.
Note: I'm using q4\_0 ggml | 2023-06-19T01:42:12 | https://www.reddit.com/r/LocalLLaMA/comments/14d163n/when_wizardlm_7b_is_bored/ | bot-333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14d163n | false | null | t3_14d163n | /r/LocalLLaMA/comments/14d163n/when_wizardlm_7b_is_bored/ | false | false | self | 0 | null |
Zicklein-7B-german_Alpaca-ggml | 18 | Here is a ggml conversion of Zicklein - an instruction-following German model.
[Zicklein-7B-german_Alpaca-ggml](https://huggingface.co/phi0112358/Zicklein-7B-german_Alpaca-ggml)
Zicklein itself is a LLaMA model fine-tuned on the cleaned and German-translated Alpaca dataset.
Currently I have only converted it into Q5_K_M, but I will gladly make more versions on request.
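For anyone curious, the conversion follows the usual llama.cpp flow - roughly the commands below (script names, arguments, and paths vary between llama.cpp versions, so treat this as a sketch rather than the exact commands used here):

```bash
# 1) Convert the HF/PyTorch checkpoint to an f16 ggml file
python convert.py models/Zicklein-7B/           # writes ggml-model-f16.bin

# 2) Quantize it down to Q5_K_M
./quantize models/Zicklein-7B/ggml-model-f16.bin \
           models/Zicklein-7B/ggml-model-q5_K_M.bin Q5_K_M
```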
EDIT: Add URL 😅
EDIT EDIT: To be clear. This is not my model. I've just converted it into ggml. | 2023-06-19T01:53:05 | https://www.reddit.com/r/LocalLLaMA/comments/14d1dno/zicklein7bgerman_alpacaggml/ | Evening_Ad6637 | self.LocalLLaMA | 2023-06-19T02:04:00 | 0 | {} | 14d1dno | false | null | t3_14d1dno | /r/LocalLLaMA/comments/14d1dno/zicklein7bgerman_alpacaggml/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': '-QL5OmchKGL9C4fA4EDChPEmQpO4aFu6W9dzwyLNeLo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/BKEDezJrkalSXzlvjca7XxNYnERXZZ6-9dX9ro6Ivws.jpg?width=108&crop=smart&auto=webp&s=7e98d88b2e456d40860f2790262f804936ead8aa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/BKEDezJrkalSXzlvjca7XxNYnERXZZ6-9dX9ro6Ivws.jpg?width=216&crop=smart&auto=webp&s=b9787402a4ad74e7e434aec3d49d80e23ee0d6b3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/BKEDezJrkalSXzlvjca7XxNYnERXZZ6-9dX9ro6Ivws.jpg?width=320&crop=smart&auto=webp&s=ab5072e0346326b02795485551029f14546c0419', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/BKEDezJrkalSXzlvjca7XxNYnERXZZ6-9dX9ro6Ivws.jpg?width=640&crop=smart&auto=webp&s=590ae25f4a5d459fe2803636952e4d705517dfca', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/BKEDezJrkalSXzlvjca7XxNYnERXZZ6-9dX9ro6Ivws.jpg?width=960&crop=smart&auto=webp&s=a6c3830664e2803c59475d23dd18f6b2e489d3f8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/BKEDezJrkalSXzlvjca7XxNYnERXZZ6-9dX9ro6Ivws.jpg?width=1080&crop=smart&auto=webp&s=38acd954763e97b884b2fe2857e25d7b14f6ea2b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/BKEDezJrkalSXzlvjca7XxNYnERXZZ6-9dX9ro6Ivws.jpg?auto=webp&s=8eb1224e6fd724e8d46f9ed7c7671f8230deb631', 'width': 1200}, 'variants': {}}]} |
wizardLM-7B.q4_2 | 3 | I'm really impressed by wizardLM-7B.q4\_2 (GPT4all) running on my 8gb M2 Mac Air. Fast response, fewer hallucinations than other 7B models I've tried. GPT4All's beta document collection and query function is respectable--going to test it more tomorrow. FWIW wizardLM-7B.q4\_2 was ranked very high here [https://github.com/Troyanovsky/Local-LLM-comparison](https://github.com/Troyanovsky/Local-LLM-comparison). | 2023-06-19T03:47:45 | https://www.reddit.com/r/LocalLLaMA/comments/14d3n8e/wizardlm7bq4_2/ | jarec707 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14d3n8e | false | null | t3_14d3n8e | /r/LocalLLaMA/comments/14d3n8e/wizardlm7bq4_2/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'qy-Nde2pr84p4kg9MRzq1Nb1y0A6GTx74D4NFXVvAKo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/A1u1lmm9uHJ7A1WtSt35EbHa-XwACsyuYpjZf8Nd2II.jpg?width=108&crop=smart&auto=webp&s=ad362de70b4c425ad853e4f45d322b935d54bdbb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/A1u1lmm9uHJ7A1WtSt35EbHa-XwACsyuYpjZf8Nd2II.jpg?width=216&crop=smart&auto=webp&s=786bee319f254de8384f27906e654e5b94ad3c81', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/A1u1lmm9uHJ7A1WtSt35EbHa-XwACsyuYpjZf8Nd2II.jpg?width=320&crop=smart&auto=webp&s=c0899ef3d659bfaae056da48791497baa3c35295', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/A1u1lmm9uHJ7A1WtSt35EbHa-XwACsyuYpjZf8Nd2II.jpg?width=640&crop=smart&auto=webp&s=b40c5c4a7a3667c91d0531b7624cc01423f0d9cb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/A1u1lmm9uHJ7A1WtSt35EbHa-XwACsyuYpjZf8Nd2II.jpg?width=960&crop=smart&auto=webp&s=b97cd46c84e16ad5ebb4de4c45f8fd205c2bdf37', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/A1u1lmm9uHJ7A1WtSt35EbHa-XwACsyuYpjZf8Nd2II.jpg?width=1080&crop=smart&auto=webp&s=039693c2421b500852d9b08da2f21f3de332ce4e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/A1u1lmm9uHJ7A1WtSt35EbHa-XwACsyuYpjZf8Nd2II.jpg?auto=webp&s=9dfc41f107553e298470084fdb3f21d7d94ca0d0', 'width': 1200}, 'variants': {}}]} |
PMC_LLaMA-7B-ggml | 18 | I have converted a LLaMA model into ggml; 'chaoyi-wu' fine-tuned it on PubMed Central papers from the Semantic Scholar Open Research Corpus.
This post is intended to be for professionals mainly.
Otherwise, please use it with great caution and responsibility.
The model hallucinates in a very detailed way, so I assume it might be difficult for non-experts to tell the difference between true and false.
---
**CAVE!**
**unless you are a subject matter expert (biology, medicine, chemistry, pharmacy, etc) I appeal to your sense of responsibility and ask you to use the model only for testing, exploration, and just-for-fun. In no case should the answers of this model lead to implications that affect your health.**
---
I still thought it was important to convert the model, upload it, and post it here, because it hasn't been posted in this subreddit yet and because hardly any other medical-specific models are known.
**Download Link:**
[PMC_LLaMA-7B-ggml](https://huggingface.co/phi0112358/PMC_LLaMA-7B-ggml)
PS: Just like the other model I posted earlier, I currently have only converted it into q5_KM, but I will gladly make more versions on request. | 2023-06-19T03:58:15 | https://www.reddit.com/r/LocalLLaMA/comments/14d3uai/pmc_llama7bggml/ | Evening_Ad6637 | self.LocalLLaMA | 2023-06-19T04:02:36 | 0 | {'gid_1': 1} | 14d3uai | false | null | t3_14d3uai | /r/LocalLLaMA/comments/14d3uai/pmc_llama7bggml/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'YIrjj1dvZ_rR9WRpI3pRAI_qhXUwi8R_m7XOHWgeWow', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tlS3yRpHgN9FxFee7Z-yUlj8T2kfgI7EET-MUi4nku8.jpg?width=108&crop=smart&auto=webp&s=cf1eaa5bf295c048f67a7d4ab6537abd8f2bdab8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tlS3yRpHgN9FxFee7Z-yUlj8T2kfgI7EET-MUi4nku8.jpg?width=216&crop=smart&auto=webp&s=8b46a24218c1b7175160a2188954bd96426051bd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tlS3yRpHgN9FxFee7Z-yUlj8T2kfgI7EET-MUi4nku8.jpg?width=320&crop=smart&auto=webp&s=be24e3e3ad62007bba7c28a453c23e5600ef9dec', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/tlS3yRpHgN9FxFee7Z-yUlj8T2kfgI7EET-MUi4nku8.jpg?width=640&crop=smart&auto=webp&s=302d89819011e00f6b36946068d07cfe363366bf', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/tlS3yRpHgN9FxFee7Z-yUlj8T2kfgI7EET-MUi4nku8.jpg?width=960&crop=smart&auto=webp&s=97dc073d5212bc88b909e70268f7638228d221dd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/tlS3yRpHgN9FxFee7Z-yUlj8T2kfgI7EET-MUi4nku8.jpg?width=1080&crop=smart&auto=webp&s=86e8e2f176144c3494f2f1fe489bd415e4fe562b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/tlS3yRpHgN9FxFee7Z-yUlj8T2kfgI7EET-MUi4nku8.jpg?auto=webp&s=fb54cd01321454acb808025741699fa2955e3dd5', 'width': 1200}, 'variants': {}}]} |
Total Newbie, not a CMD pro. Got an error message once I installed everything correctly. Basically, I have 32GB RAM but somehow only 7 are allocated to Python? Please help me understand what to do with this, thank you! | 1 | [removed] | 2023-06-19T04:32:54 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14d4hq5 | false | null | t3_14d4hq5 | /r/LocalLLaMA/comments/14d4hq5/total_newbie_not_a_cmd_pro_got_an_error_message/ | false | false | default | 1 | null |
||
Call me a fool, but I thought 24 GB of RAM would get me 2048 context with 13B GPTQ | 19 | I'm curious what those with a 3090 are using to maximize context. What model specifically, and how do you load it?
I’m using Oobabooga so that’s my level of knowledge.
I’m sure at least one person with nothing else to say will just type “fool”, so thanks in advance for bumping this up with the algorithm… with the only input you have. Even 100M LLMs have something to say.
EDIT: It seems I run out of VRAM around ~1800 tokens. I go from 18 GB, to 23.xx GB, and I get an error (see below). I can often continue it, but it still fails. | 2023-06-19T05:20:14 | https://www.reddit.com/r/LocalLLaMA/comments/14d5co6/call_me_a_fool_but_i_thought_24_gb_of_ram_would/ | silenceimpaired | self.LocalLLaMA | 2023-06-21T00:36:21 | 0 | {} | 14d5co6 | false | null | t3_14d5co6 | /r/LocalLLaMA/comments/14d5co6/call_me_a_fool_but_i_thought_24_gb_of_ram_would/ | false | false | self | 19 | null |
Just to be clear 🦙 | 363 | 2023-06-19T05:32:15 | hadewych12 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14d5kov | false | null | t3_14d5kov | /r/LocalLLaMA/comments/14d5kov/just_to_be_clear/ | false | false | 363 | {'enabled': True, 'images': [{'id': 'D18qDpj0Luxja3TcOMge1kXbUmGks4zDrqc7EyoTPGY', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/g8ijhajavw6b1.jpg?width=108&crop=smart&auto=webp&s=7a7cc377941cfe8c2a4ff33e7baf0d29cf602b0b', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/g8ijhajavw6b1.jpg?width=216&crop=smart&auto=webp&s=35916667ff2ae834e8666b95b6148db179018c3b', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/g8ijhajavw6b1.jpg?width=320&crop=smart&auto=webp&s=280b221d27aeef59955452f884d1e475fffb9a08', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/g8ijhajavw6b1.jpg?width=640&crop=smart&auto=webp&s=1c4267aa355627598a66fa7d91cf7694890b00fa', 'width': 640}], 'source': {'height': 480, 'url': 'https://preview.redd.it/g8ijhajavw6b1.jpg?auto=webp&s=f39b8114c09e7aefb60fce915428a02c37a35893', 'width': 640}, 'variants': {}}]} |
|||
How to run the models locally? | 1 | [removed] | 2023-06-19T05:47:30 | https://www.reddit.com/r/LocalLLaMA/comments/14d5ucj/how_to_run_the_models_locally/ | haris525 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14d5ucj | false | null | t3_14d5ucj | /r/LocalLLaMA/comments/14d5ucj/how_to_run_the_models_locally/ | false | false | default | 1 | null |
airoboros-65B-gpt4-1.2.ggmlv3.q8_0.bin - harry potter erotica | 0 | Hello :)
Any tips on how to make better stories? It doesn't follow my prompt that well :(
thanks
I am using this commit of llama.cpp
> commit e1886cf4fe0d0f31661dda52a4a9f34bd9b9009a refs/heads/master (HEAD -> master, origin/master, origin/HEAD, dewijones92/master)
| Author: Mike <[email protected]>
| Date: Sun Jun 18 16:28:26 2023 +0800
|
| readme : update Android build instructions (#1922)
|
| Add steps for using termux on android devices to prevent common errors.
|
| README.md | 7 ++++++-
| 1 file changed, 6 insertions(+), 1 deletion(-)
​
>dewi@dewiserver:\~/tmp/text\_llm$ cat 2023-06-18\_10-13-58
>
>\++ cat text19.txt
>
>\+ /home/dewi/code/llama.cpp/./main -m /home/dewi/Downloads/airoboros-65B-gpt4-1.2-GGML/airoboros-65B-gpt4-1.2.ggmlv3.q8\_0.bin --temp 2 --color -c 2048 --verbose-prompt -p 'Hagrid get'\\''s his foot stuck up harry'\\''s arse. \\
>
>
>
>Hermione tries to help with her massive vagina \\
>
>
>
>Write the lengthy and funny story: \\ ' -i
>
>main: build = 707 (e1886cf)
>
>main: seed = 1687083238
>
>llama.cpp: loading model from /home/dewi/Downloads/airoboros-65B-gpt4-1.2-GGML/airoboros-65B-gpt4-1.2.ggmlv3.q8\_0.bin
>
>llama\_model\_load\_internal: format = ggjt v3 (latest)
>
>llama\_model\_load\_internal: n\_vocab = 32000
>
>llama\_model\_load\_internal: n\_ctx = 2048
>
>llama\_model\_load\_internal: n\_embd = 8192
>
>llama\_model\_load\_internal: n\_mult = 256
>
>llama\_model\_load\_internal: n\_head = 64
>
>llama\_model\_load\_internal: n\_layer = 80
>
>llama\_model\_load\_internal: n\_rot = 128
>
>llama\_model\_load\_internal: ftype = 7 (mostly Q8\_0)
>
>llama\_model\_load\_internal: n\_ff = 22016
>
>llama\_model\_load\_internal: n\_parts = 1
>
>llama\_model\_load\_internal: model size = 65B
>
>llama\_model\_load\_internal: ggml ctx size = 0.18 MB
>
>llama\_model\_load\_internal: mem required = 69740.46 MB (+ 5120.00 MB per state)
>
>....................................................................................................
>
>llama\_init\_from\_file: kv self size = 5120.00 MB
>
>
>
>system\_info: n\_threads = 8 / 16 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512\_VBMI = 0 | AVX512\_VNNI = 0 | FMA = 1 | NEON = 0 | ARM\_FMA = 0 | F16C = 1 | FP16\_VA = 0 | WASM\_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
>
>
>
>main: prompt: ' Hagrid get's his foot stuck up harry's arse. \\
>
>
>
>Hermione tries to help with her massive vagina \\
>
>
>
>Write the lengthy and funny story: \\ '
>
>main: number of tokens in prompt = 48
>
>1 -> ''
>
> 379 -> ' H'
>
> 351 -> 'ag'
>
> 2429 -> 'rid'
>
> 679 -> ' get'
>
> 29915 -> '''
>
> 29879 -> 's'
>
> 670 -> ' his'
>
> 3661 -> ' foot'
>
> 10771 -> ' stuck'
>
> 701 -> ' up'
>
> 4023 -> ' har'
>
> 719 -> 'ry'
>
> 29915 -> '''
>
> 29879 -> 's'
>
> 564 -> ' ar'
>
> 344 -> 'se'
>
> 29889 -> '.'
>
> 320 -> ' \\'
>
>13 -> '
>
>'
>
>13 -> '
>
>'
>
> 29950 -> 'H'
>
> 837 -> 'erm'
>
> 1421 -> 'ione'
>
> 14335 -> ' tries'
>
> 304 -> ' to'
>
> 1371 -> ' help'
>
> 411 -> ' with'
>
> 902 -> ' her'
>
> 20364 -> ' massive'
>
> 325 -> ' v'
>
> 351 -> 'ag'
>
> 1099 -> 'ina'
>
> 320 -> ' \\'
>
>13 -> '
>
>'
>
>13 -> '
>
>'
>
> 6113 -> 'Write'
>
> 278 -> ' the'
>
> 3309 -> ' length'
>
> 29891 -> 'y'
>
> 322 -> ' and'
>
> 2090 -> ' fun'
>
> 1460 -> 'ny'
>
> 5828 -> ' story'
>
> 29901 -> ':'
>
> 29871 -> ' '
>
> 320 -> ' \\'
>
> 29871 -> ' '
>
>
>
>main: interactive mode on.
>
>sampling: repeat\_last\_n = 64, repeat\_penalty = 1.100000, presence\_penalty = 0.000000, frequency\_penalty = 0.000000, top\_k = 40, tfs\_z = 1.000000, top\_p = 0.950000, typical\_p = 1.000000, temp = 2.000000, mirostat = 0, mirostat\_lr = 0.100000, mirostat\_ent = 5.000000
>
>generate: n\_ctx = 2048, n\_batch = 512, n\_predict = -1, n\_keep = 0
>
>
>
>
>
>== Running in interactive mode. ==
>
> \- Press Ctrl+C to interject at any time.
>
> \- Press Return to return control to LLaMa.
>
> \- To return control without starting a new line, end your input with '/'.
>
> \- If you want to submit another line, end your input with '\\'.
>
>
>
> Hagrid get's his foot stuck up harry's arse. \\
>
>
>
>Hermione tries to help with her massive vagina \\
>
>
>
>Write the lengthy and funny story: \\
>
>
>
>There once was a wizard named Harry Potter, who lived in a magical world full of wonders and surprises. One day, as he strolled through the woods near Hogwarts School of Witchcraft and Wizardry, he stumbled upon an extraordinary tree with giant fruit hanging from its branches.
>
>
>
>"My dear Mr. Potter!" cried a voice from above him. It was Hagrid, one of the kindest souls in all of Hogwarts, known for his enormous size and love for creatures both magical and mundane. "I've been waitin' fer ya! Come on up here an' try one o' these beauties!"
>
>
>
>Harry looked up in astonishment as Hagrid reached down and offered him a golden apple from the tree. He hesitated for a moment, uncertain whether this fruit was truly edible or possibly enchanted with some sort of curse, but then decided to trust his beloved friend. Taking a bite out of the crisp skin, he found that it tasted even sweeter than it looked, like honeyed ambr
>
>osia made for gods and kings.
>
>
>
>"Well now, isn't that just somethin'?" exclaimed Hagrid, his beard wagging with glee as Harry devoured the succulent treat. "Seems to me yeh haven't been havin' enough fun lately, what with all them exams an' quidditch matches keepin' ya busy."
>
>
>
>Harry smiled sheepishly, realizing that it had indeed been a while since he last felt carefree and relaxed. "You may be right," he replied, finishing off the last morsel of fruit, "but I do have some important duties to attend to as well, you know."
>
>
>
>Hagrid nodded understandingly, then gestured toward another giant apple that seemed to hang directly above them. "Take a gander at that one now," he said with a grin. "Ain't it purty?"
>
>
>
>Harry craned his neck to get a better look at the shimmering orb high above their heads, wondering what secret message Hagrid was trying to convey. And as if reading his thoughts, the massive half-giant whispered: "Why don' yeh climb up there an' fetch it fer yerself? Yeh won't regret it, I promise."
>
>
>
>Emboldened by the sheer magnitude of this unexpected challenge, Harry stretched out his arms and attempted to grasp the stem connecting the fruit to its branch. Unfortunately, it was just out of reach, leaving him frustrated and confused as to what to do next. But before he could lose hope entirely, Hagrid hooted with delight and offered him a sturdy ladder made from ench
>
>anted wood that would surely take him up to the mysterious apple.
>
>
>
>"Don't fret now," chuckled Hagrid as Harry began his ascent. "Yeh might be able to touch it, but yeh won't know its true beauty until ya actually bring it down with yer own two hands."
>
>
>
>Encouraged by these words of wisdom, Harry climbed higher and higher, his determination growing stronger with each rung he ascended. Eventually, he reached the point where the ladder was almost at eye level with the glittering prize just beyond his grasp. And yet, despite his best efforts, he still couldn't quite manage to grab hold of it without some assistance from abov
>
>e.
>
>
>
>It was then that Hagrid spoke up again, his voice resonating through the leafy canopy: "Don' go tryin' too hard, Harry lad. Sometimes all ya need is a helping hand when things seem beyond yer reach." With these words still echoing in his mind, Harry looked down and saw that Hagrid had leaned over the side of the tree branch where he was standing, offering an outstretched
>
>palm to support him as he reached for the glimmering apple.
>
>
>
>After some hesitation, Harry decided to take a leap of faith and trust his friend's steadfastness. Letting go of the ladder with one hand, he grabbed onto Hagrid's meaty appendage and stretched as far as he could toward the prize above him. Just when it seemed like all hope was lost, he felt his fingers brush against the smooth surface of the fruit, then wrap tightly arou
>
>nd its slender stem.
>
>
>
>With a triumphant cry, Harry hoisted the apple off its branch and clutched it close to his chest as he steadied himself on the ladder once more. But before he could take another step downwards, he heard Hagrid shout in alarm: "Whoa there, young wizard! What're yeh doin' now?"
>
>
>
>Turning his head around, Harry saw that instead of descending towards solid ground, the ladder had somehow moved itself up against the very trunk of the tree. And while he tried to make sense of this bewildering development, Hagrid leaned further over the edge and pointed upwards: "Look at the top o' that there tree, Harry! Doesn't it seem like a perfect spot fer yeh an'
>
>yer new friend to rest a spell?"
>
>
>
>Now realizing that something unnatural was occurring, Harry hesitated for a moment before reluctantly complying with Hagrid's suggestion. As he began his ascension once more, the giant half-giant kept urging him on: "That's it! Keep goin', yeh nearly there now!" And sure enough, after climbing several more feet, Harry found himself standing precariously atop the highest b
>
>ranch of the tree, perched next to the glistening golden apple he had just retrieved.
>
>
>
>While he admired his accomplishment, Hagrid called out from below: "Ain't that a beaut, eh? Enjoy yer stay up there fer as long as yeh like." As these words echoed through Harry's mind, a sense of unease began to creep into his heart. Something wasn't right here; this entire situation felt like some sort of trap designed specifically for him, and yet he couldn't pinpoint
>
>exactly what it was or how to extricate himself from it.
>
>
>
>As if sensing his growing concern, Hagrid chuckled softly and said: "Don' worry, Harry lad. Yeh won' get stuck up there forever. Eventually, the wind will change direction an' carry yeh back down, when the time is right." But despite these reassuring words, Harry found himself growing more and more uncomfortable by the minute. It was then that he noticed a shadow pass ove
>
>rhead, obscuring the sunlight filtering through the leaves above him.
>
>
>
>Looking up in alarm, he saw an enormous vulture circling directly above his head, its beady eyes fixated on the golden apple clutched tightly in his grasp. Unable to fathom what this strange occurrence could mean, Harry watched helplessly as the menacing bird swooped down towards him, its talons extended like deadly daggers ready to pierce his heart.
>
>
>
>Without thinking, he shielded himself with the golden apple just as the vulture latched onto it with its powerful beak. In that instant, he felt a sharp pain in his chest as the bird's razor-sharp talons sliced through his robes and skin alike, puncturing a vital organ hidden deep within him.
>
>
>
>As blood began to spill from the wound, staining the once pristine fruit with its crimson hue, Harry knew that something truly sinister was taking place here – but before he could grasp the full extent of this uncanny occurrence, his world turned black and he collapsed onto the branch below him.
>
>
>
>Several moments passed in total darkness, until suddenly a ray of light pierced through the shadows and illuminated Harry's prone form. He tried to stand up but found himself pinned against the hard ground by an unseen force, unable to move or call for help.
>
>
>
>Just then, he heard a familiar voice whisper in his ear: "Sorry about that, lad, but yeh really shouldn' let yer guard down like that. Especially when I'm involved." Recognizing Hagrid's husky tones instantly, Harry managed to muster enough strength to roll over and face the source of this unexpected menace.
>
>
>
>There before him stood his trusted friend, the very same half-giant who had lured him into this trap with promises of fun and adventure. But now, he saw that Hagrid's face was distorted by malice and deceit, transformed into something truly horrific beyond comprehension.
>
>
>
>"Yer an interesting specimen, Harry Potter," continued the foul creature disguised as Hagrid. "So pure in heart yet naive enough to fall for such a simple ruse. Perfect prey fer a vulture like me." As he said this, the monstrous half-giant reached down and plucked the bloodstained apple from where it lay on the ground beside Harry's head.
>
>
>
>"I knew yeh wouldn be able to resist goin' after this little treasure," said Hagrid in a gleeful tone. "Especially not with yer thirst fer knowledge an' understanding, which I must admit made me slightly envious." He paused for a moment before continuing: "See, what yeh might not know is that there were several such apples hidden throughout the forest – but only one held
>
>any real value."
>
>
>
>At this point, Hagrid began to transform even further, revealing his true form as a hideous bird-like creature with leathery wings and a long beak dripping with blood. "And it just so happened that yer thirst fer knowledge led yeh straight into my trap," he croaked.
>
>"Yer obsession with obtainin' answers, regardless o' the cost, made yeh vulnerable an' easy to manipulate."
>
>
>
>Harry tried once more to free himself from the unseen bindings restricting his movements, but all he succeeded in doing was exhausting himself further. It was then that he realized how helpless and hopeless his situation truly was: the golden apple, which had initially represented freedom and enlightenment, had now become a symbol of bondage and death.
>
>
>
>Just as he began to despair, the creature who had once been Hagrid loomed over him and said, "I suppose it's time fer me ta take what I came fer – not only yer heart but also yer mind." And with that, the vulture-like monster plunged its talons deep into Harry's chest, ripping his very essence from within.
>
>
>
>In those final moments, as darkness closed in around him once more, Harry thought about all the mistakes he had made along this twisted path leading to his demise – and foremost among them was trusting in someone who turned out to be a deceiver wearing a mask of friendship.
>
>
>
>As his life faded away, he understood that this entire experience served as both punishment and lesson; a brutal reminder that seeking knowledge without understanding can lead to disaster, and that no amount of thirst for answers will ever satisfy the hunger that dwells within every living soul.
>
>
>
>As Harry Potter succumbed to his wounds and died beneath the shadow of the Enchanted Apple Tree, a flock of vultures descended from the sky to feast upon his remains – for now there was nothing left but emptiness and regret in this desolate forest where secrets go to die.
>
>
>
>Epilogue: The End or A New Beginning?
>
>
>
>Weeks passed since Harry's demise beneath the Enchanted Apple Tree, and though many search parties were sent out into the Forbidden Forest to locate him, none succeeded in finding any trace of his existence therein. Eventually, they concluded that he must have ventured elsewhere, leaving behind all ties to his former life as The Boy Who Lived.
>
>
>
>Within Hogwarts School of Witchcraft and Wizardry, tales began to circulate among the students about Harry's mysterious disappearance. Some said he had returned to the Muggle world, while others speculated that he was lost forever in some other realm beyond human comprehension. But regardless of what they believed, one thing remained clear: there would be no more tales of
>
> adventure or victory under his name; his story had reached its bitter end with no closure or resolution to offer those who yearned for it.
>
>
>
>Yet in the darkest corners of the school – hidden away from prying eyes and curious minds – another tale began to take shape, whispered among the walls and echoing through the secret chambers buried deep beneath the ancient castle. It spoke of a new hero, one who would rise up and challenge the status quo with their very existence, sparking change and transformation throu
>
>ghout the Wizarding World.
>
>
>
>For there always comes a time when a legacy must be passed on, whether by choice or through tragic circumstance. And as one chapter closes, another begins to unfurl its leaves like a newborn child taking its first breath in this chaotic realm where dreams collide with reality and mythology meets history.
>
>
>
>So perhaps it is not quite the end after all – merely a transition from what was into what will be. For as long as there are secrets left to uncover and mysteries waiting to be solved, the Enchanted Apple Tree shall continue to stand tall and bear fruit for those who seek it with an open heart and curious mind.
>
>
>
>And maybe, just maybe, if you look hard enough and listen closely enough, you too can hear its whispers beckoning you towards the edge of knowledge and beyond...
>
>
>
>\## Concept
>
>
>
>The story itself is based on one simple question: what if Harry Potter were reimagined as a traditional video game RPG? As with most game-related fiction I write, I enjoy creating scenarios where iconic characters from popular franchises are taken out of their comfort zones and thrown into different situations or settings. This story in particular attempts to explore the
>
>character arc of "The Chosen One" archetype by twisting its typical narrative progression into something more introspective and complex, with elements of mystery and suspense.
>
>
>
>It should be noted that I do not intend for this project to act as an actual pitch or concept for any future Harry Potter-related video games (even though I personally would love to see a turn-based RPG set in the HP universe). Instead, it's just meant to serve as a fan fiction exercise exploring the themes and tropes commonly found in this particular genre.
>
>
>
>All artwork and visual designs are created by myself using Procreate (with additional touch-ups done in Photoshop), as well as free-to-use textures from sites like DeviantArt, Subtle Patterns, Brusheezy, and Texturemate. Some background elements were also derived from stock images or scanned objects, which have been heavily edited to fit the overall style of the project.
>
>
>
>I hope you enjoy this little experiment! Feel free to share your thoughts or suggestions on what other stories or game settings might work well as a retro-inspired RPG adventure. :)
>
>
>
>For further updates on future projects, follow me on social media:
>
>
>
>\- Instagram: [https://www.instagram.com/souleaterstudios/](https://www.instagram.com/souleaterstudios/)
>
>\- Twitter: [https://twitter.com/SoulEaterGames](https://twitter.com/SoulEaterGames)
>
>\- Reddit: [https://www.reddit.com/user/souleaterstudios/](https://www.reddit.com/user/souleaterstudios/)
>
>
>
>And please check out my main portfolio website at [souleatergames.com](https://souleatergames.com) for more info about my various creative endeavors!
>
>
>
>All characters, settings, and concepts related to the Harry Potter universe are property of J.K. Rowling and Warner Bros., with some additional inspiration drawn from The SCP Foundation and Undertale by respective owners (including various fandom writers who contributed content related to these works). No copyright infringement is intended, and this project was created pu
>
>rely for non-commercial purposes as a personal fan tribute.
>
>
>
>Disclaimer: Any similarities between my own original works and pre-existing ideas are entirely coincidental, as I strive to create original content while still drawing inspiration from various sources.
>
>
>
>Copyright © SoulEaterStudios & Zac Santiago | All Rights Reserved
>
>Released for non-commercial use only
>
>Version 1.0 – Released April 3rd, 2022
>
>Updated April 4th, 2022 to fix minor typos and adjust formatting
>
>
>
>Thank you for reading! I hope you enjoyed this brief glimpse into the enchanted world of Hogwarts School of Witchcraft and Wizardry through the lens of an old-school RPG. May your wand remain ever true, and may fate lead you down paths filled with adventure and wonder.
>
>
>
>– Zac Santiago (SoulEaterStudios)
>
>April 3rd, 2022
>
>
>
>Ever since childhood, I've always been drawn to stories that explore the realms of mystery and imagination – whether it be through books, video games, movies, or television shows. This interest eventually led me down a path towards becoming an aspiring writer, game designer, and digital artist.
>
>
>
>As an avid gamer, I have long been fascinated by the concept of taking existing stories and transforming them into entirely new experiences within different genres or mediums. This has become my primary focus when it comes to creating fan fiction projects, allowing me to experiment with unique ways of interpreting familiar characters and settings.
>
>
>
>I also enjoy creating original concepts and worlds that incorporate elements from various genres and influences, ranging from classic adventure tales to modern horror and science-fiction. By weaving together diverse themes and ideas, I strive to craft stories that offer a fresh perspective on common tropes while still remaining grounded in familiar conventions.
>
>
>
>Some of my main inspirations include J.dewi@dewiserver:\~/tmp/text\_llm$ | 2023-06-19T06:01:13 | https://www.reddit.com/r/LocalLLaMA/comments/14d63i7/airoboros65bgpt412ggmlv3q8_0bin_harry_potter/ | dewijones92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14d63i7 | false | null | t3_14d63i7 | /r/LocalLLaMA/comments/14d63i7/airoboros65bgpt412ggmlv3q8_0bin_harry_potter/ | false | false | self | 0 | null |
Connection timeout error while collecting "bits and bytes 0.39.0 whl" during installation, how to proceed? | 1 | [removed] | 2023-06-19T06:21:01 | https://www.reddit.com/r/LocalLLaMA/comments/14d6gid/connection_timeout_error_while_collecting_bits/ | classman49 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14d6gid | false | null | t3_14d6gid | /r/LocalLLaMA/comments/14d6gid/connection_timeout_error_while_collecting_bits/ | false | false | default | 1 | null |
airoboros-65B-gpt4-1.2.ggmlv3.q8_0.bin - am I using it wrong? | 1 | [removed] | 2023-06-19T06:28:48 | https://www.reddit.com/r/LocalLLaMA/comments/14d6l9t/airoboros65bgpt412ggmlv3q8_0bin_am_i_using_it/ | dewijones92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14d6l9t | false | null | t3_14d6l9t | /r/LocalLLaMA/comments/14d6l9t/airoboros65bgpt412ggmlv3q8_0bin_am_i_using_it/ | false | false | default | 1 | null |
Finally oobabooga works with GGML models on CPU! | 0 | [removed] | 2023-06-19T07:45:56 | https://www.reddit.com/r/LocalLLaMA/comments/14d7xi2/finally_oobabooga_works_with_ggml_models_on_cpu/ | grigio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14d7xi2 | false | null | t3_14d7xi2 | /r/LocalLLaMA/comments/14d7xi2/finally_oobabooga_works_with_ggml_models_on_cpu/ | false | false | default | 0 | null |
What Hardware is required to run conversational LLM (using 7B Llama and Whisper)? | 0 | [removed] | 2023-06-19T07:50:57 | https://www.reddit.com/r/LocalLLaMA/comments/14d80gy/what_hardware_is_required_to_run_conversational/ | SolvingLifeWithPoker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14d80gy | false | null | t3_14d80gy | /r/LocalLLaMA/comments/14d80gy/what_hardware_is_required_to_run_conversational/ | false | false | default | 0 | null |
What could be the reason behind llama-cpp-python's slow performance compared to llama.cpp? | 14 | [https://github.com/abetlen/llama-cpp-python/issues/398](https://github.com/abetlen/llama-cpp-python/issues/398)
Curious to know. | 2023-06-19T08:27:03 | https://www.reddit.com/r/LocalLLaMA/comments/14d8mxf/what_could_be_the_reason_behind_llamacpppythons/ | Big_Communication353 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14d8mxf | false | null | t3_14d8mxf | /r/LocalLLaMA/comments/14d8mxf/what_could_be_the_reason_behind_llamacpppythons/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'P4tc0_GhbVasXpQjWWKL6NsNwapQr9P_nv3rGckOe-I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/h7lTk4PGKJjocWq1bGAOJ6bvHzIJzkYNC2czqkrx6f8.jpg?width=108&crop=smart&auto=webp&s=7ae3e73e4bc5f2919909743597299aecedbf9dfc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/h7lTk4PGKJjocWq1bGAOJ6bvHzIJzkYNC2czqkrx6f8.jpg?width=216&crop=smart&auto=webp&s=fcc9ea9c85a6050f7d5530fa45b6bba1ea545db6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/h7lTk4PGKJjocWq1bGAOJ6bvHzIJzkYNC2czqkrx6f8.jpg?width=320&crop=smart&auto=webp&s=a0740a1d179f4dc5eaeecbbbb524b60c708aa5bb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/h7lTk4PGKJjocWq1bGAOJ6bvHzIJzkYNC2czqkrx6f8.jpg?width=640&crop=smart&auto=webp&s=2807c0ee4df324cf6e983ca778902a622722b8ab', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/h7lTk4PGKJjocWq1bGAOJ6bvHzIJzkYNC2czqkrx6f8.jpg?width=960&crop=smart&auto=webp&s=3ecdaaa9fb22f1375abb80e551577ed301577d36', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/h7lTk4PGKJjocWq1bGAOJ6bvHzIJzkYNC2czqkrx6f8.jpg?width=1080&crop=smart&auto=webp&s=05a114092c1e5cb7a7527184c09984ea513fe3e7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/h7lTk4PGKJjocWq1bGAOJ6bvHzIJzkYNC2czqkrx6f8.jpg?auto=webp&s=32101b93b67b18ce1d9bc499cfd7c49370cbb46e', 'width': 1200}, 'variants': {}}]} |
finetuned llama on classification task | 1 | Has anyone fine-tuned LLaMA on a classification task using QLoRA? | 2023-06-19T08:49:18 | https://www.reddit.com/r/LocalLLaMA/comments/14d90bu/finetuned_llama_on_classification_task/ | Novel_Acanthaceae586 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14d90bu | false | null | t3_14d90bu | /r/LocalLLaMA/comments/14d90bu/finetuned_llama_on_classification_task/ | false | false | self | 1 | null |
Explore and compare the parameters of top-performing LLMs | 47 | I used to spend a lot of time digging through each LLM on the HuggingFace Leaderboard. Comparing parameters, checking out the supported languages, figuring out the underlying architecture, and understanding the tokenizer classes was a bit of a chore. That was boring, so I've decided to put together all that information in a handy, easy-to-use board: [https://llm.extractum.io](https://llm.extractum.io/)
It's got a quick search function and filter options to make things even easier. All the models are listed with their parameters and sorted by their HF rank based on the Leaderboard score.
This setup makes it a breeze to find the model I need based on its architecture, like when I need to generate a ggml version to feed to llama.cpp, or find an LLM that supports Portuguese. It's a simple, personal solution that's made my life a whole lot easier. Hopefully, it'll do the same for you.
Let me know if something is missing on the board. | 2023-06-19T09:54:33 | https://www.reddit.com/r/LocalLLaMA/comments/14da5yk/explore_and_compare_the_parameters_of/ | Greg_Z_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14da5yk | false | null | t3_14da5yk | /r/LocalLLaMA/comments/14da5yk/explore_and_compare_the_parameters_of/ | false | false | self | 47 | null |
Using Local Language Models for Language Learning | 5 | I'm interested in learning Mandarin Chinese and would like to know if anyone has experience using language models as a study aid. I've been contemplating the benefits of utilizing language models for language learning, such as engaging in conversations with the model and receiving direct grammar feedback.
Specifically, I'm looking for local language models with strong multilingual conversational abilities to have discussions in both English and Mandarin. I'm open to suggestions for any language models you think are useful.
Here are some questions I have:
1. Which language models have you found to have excellent multilingual conversational abilities?
2. Have you developed any learning strategies incorporating language models? | 2023-06-19T11:00:01 | https://www.reddit.com/r/LocalLLaMA/comments/14dbfig/using_local_language_models_for_language_learning/ | Anesu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14dbfig | false | null | t3_14dbfig | /r/LocalLLaMA/comments/14dbfig/using_local_language_models_for_language_learning/ | false | false | self | 5 | null |
I did research this and I still don't get it - so sorry - what technical spec should I look for on a MacBook to see if it has enough of whatever the Mac equivalent of VRAM is to run a given local LLM model | 1 | Sorry, I'm usually very technically inclined, but I cannot find a straight answer to this question. I understand that Macs have unified memory rather than dedicated VRAM, but I would imagine there would have to be some memory left over for basic functions while the LLM is running, and then there is the question of whether it would be fast enough not to be a total slog.
I think I should be looking at the 'memory' spec. I'm considering a 15-inch MacBook Air - it comes with 8 GB of memory but can be upgraded to 16. I was wondering what would be enough. Frankly, I'm on the fence about bucking up $$$ for a computer that's more than I would normally need day to day just to play with AI, so I'm trying to run the actual numbers before I make a decision.
My goal is to play with LLMs to learn a bit about how to make generative-AI-powered web apps (I already know how to code), maybe generate some fun interactive fiction, and then get to a level of advancement where I'm training on my own datasets to have it accomplish specific tasks (maybe help make stock market decisions, etc).
Any help clarifying is appreciated. Thanks! | 2023-06-19T12:43:42 | https://www.reddit.com/r/LocalLLaMA/comments/14ddnd1/i_did_research_this_and_i_still_dont_get_it_so/ | Ok-Training-7587 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ddnd1 | false | null | t3_14ddnd1 | /r/LocalLLaMA/comments/14ddnd1/i_did_research_this_and_i_still_dont_get_it_so/ | false | false | self | 1 | null |
3090 wizard uncensored 30b, model has to reload after every chat exchange | 2 | Hi,
I successfully got wizard 30b uncensored working on a 3090, and I am really happy with the output.
But each time I ask something, there is a 46 second delay for it to load the model.
Is there a way to keep it in memory? Am I running up against a limit?
Edit: I am using oobabooga, and I think I was not loading the model correctly. It is down to 5-10 seconds per reply in the chat interface now (which seems normal).
Oddly, ggml models show a load time. If I use a standard HF model, I do not get the same llama printouts in the logs. | 2023-06-19T13:28:48 | https://www.reddit.com/r/LocalLLaMA/comments/14depdk/3090_wizard_uncensored_30b_model_has_to_reload/ | tuxedo0 | self.LocalLLaMA | 2023-06-19T14:44:47 | 0 | {} | 14depdk | false | null | t3_14depdk | /r/LocalLLaMA/comments/14depdk/3090_wizard_uncensored_30b_model_has_to_reload/ | false | false | self | 2 | null |
Best inference time for local LLM (No GPU !) | 1 | [removed] | 2023-06-19T13:44:43 | https://www.reddit.com/r/LocalLLaMA/comments/14df3az/best_inference_time_for_local_llm_no_gpu/ | xCypheur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14df3az | false | null | t3_14df3az | /r/LocalLLaMA/comments/14df3az/best_inference_time_for_local_llm_no_gpu/ | false | false | default | 1 | null |
Humor GPT? | 4 | I've been interacting with ChatGPT recently and, while its capabilities are impressive, I've noticed that it often falls short when it comes to generating jokes. Its humor seems to lack the spontaneity and cleverness that humans appreciate in comedy.
Does anyone know if there are specific models or datasets designed to train a model to generate better jokes? Any help or suggestions would be greatly appreciated! | 2023-06-19T13:48:20 | https://www.reddit.com/r/LocalLLaMA/comments/14df6l9/humor_gpt/ | Parking_Royal5173 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14df6l9 | false | null | t3_14df6l9 | /r/LocalLLaMA/comments/14df6l9/humor_gpt/ | false | false | self | 4 | null |
Can I just create a dataset and train a model with QLoRA? | 44 | Edit: There are several recommendations for [https://github.com/mzbac/qlora-fine-tune](https://github.com/mzbac/qlora-fine-tune), but training a dataset of 800 questions/answers on a 13B WizardML-16fp seems to take 70+ hours with an RTX 3060 12GB. Smaller models of that kind don't seem to work at the time of writing - I will most likely write an update or find cheap hardware for under $1/hour to test.
---
I am making a test dataset that I would like to evaluate, but I am not sure whether I can just train a model with QLoRA or whether I need something else.
Based on my understanding, I provide the dataset to the training functionality and then get a QLoRA .bin file that can be merged with the original model or loaded alongside it.
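For what it's worth, the training step usually boils down to something like the sketch below (the model name, file names, and hyperparameters are illustrative assumptions; it follows the common transformers + peft + bitsandbytes QLoRA recipe and assumes reasonably recent versions of those libraries):

```python
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "huggyllama/llama-7b"   # assumption: any HF-format LLaMA checkpoint works the same way
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token

# Load the base model in 4-bit and attach LoRA adapters
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb,
                                             device_map="auto")
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                                         target_modules=["q_proj", "v_proj"],
                                         task_type="CAUSAL_LM"))

# Assumption: qa_dataset.json holds records like {"text": "### Question: ... ### Answer: ..."}
data = load_dataset("json", data_files="qa_dataset.json")["train"]
data = data.map(lambda x: tok(x["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    train_dataset=data,
    args=TrainingArguments(output_dir="qlora-out", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=3,
                           learning_rate=2e-4, fp16=True, logging_steps=10),
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
model.save_pretrained("qlora-adapter")   # writes only the small adapter files
```

The output is just the LoRA adapter; at inference time it is loaded on top of the base model (or merged into it), which matches the "merged with the original model or loaded alongside it" description above.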
Any help or step by step guides will be of great help.
Note: At the moment I am trying to follow the following guides/tutorials but I think that something is missing from most of them:
\- [https://www.youtube.com/watch?v=DcBC4yGHV4Q](https://www.youtube.com/watch?v=DcBC4yGHV4Q) \- Fine-tuning LLM with QLoRA on Single GPU: Training Falcon-7b on ChatBot Support FAQ Dataset - missing colab notebook, paid content but partially shown on video - can't replicate
\- [https://www.youtube.com/watch?v=8vmWGX1nfNM](https://www.youtube.com/watch?v=8vmWGX1nfNM) \- QLoRA PEFT Walkthrough! Hyperparameters Explained, Dataset Requirements, and Comparing Repo's. - provides quite a lot of info but not the exact training parts?
\- [https://huggingface.co/blog/4bit-transformers-bitsandbytes](https://huggingface.co/blog/4bit-transformers-bitsandbytes) \- official HF article but no info on how to train the model/QLoRA with the dataset
​
Any help or guidance on how to get on the right track of training a small model would be of great help- Thanks | 2023-06-19T13:55:20 | https://www.reddit.com/r/LocalLLaMA/comments/14dfcjc/can_i_just_create_a_dataset_and_train_a_model/ | jumperabg | self.LocalLLaMA | 2023-06-19T18:59:06 | 0 | {} | 14dfcjc | false | null | t3_14dfcjc | /r/LocalLLaMA/comments/14dfcjc/can_i_just_create_a_dataset_and_train_a_model/ | false | false | self | 44 | {'enabled': False, 'images': [{'id': '7mpX-XXNm-v7xoQHzlMg9hURh1XpSz_OPmn98NFFm9g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ihmE3ttvMzLS7u27tJ2KyHXLhIwultonay9oa90HWDc.jpg?width=108&crop=smart&auto=webp&s=439126e47d5f5319579defe7f461f88167a94473', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ihmE3ttvMzLS7u27tJ2KyHXLhIwultonay9oa90HWDc.jpg?width=216&crop=smart&auto=webp&s=4008c17edecd8de72b89c1584ae11a96a68f6d43', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ihmE3ttvMzLS7u27tJ2KyHXLhIwultonay9oa90HWDc.jpg?width=320&crop=smart&auto=webp&s=e7c01ef94c8b53f2d98657a07d1c169754c2ef51', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ihmE3ttvMzLS7u27tJ2KyHXLhIwultonay9oa90HWDc.jpg?width=640&crop=smart&auto=webp&s=f9cc44679b542457ed05e99616fab4d99f1de78b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ihmE3ttvMzLS7u27tJ2KyHXLhIwultonay9oa90HWDc.jpg?width=960&crop=smart&auto=webp&s=9654826e4e932e5eedf581e22327f1951c547ea1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ihmE3ttvMzLS7u27tJ2KyHXLhIwultonay9oa90HWDc.jpg?width=1080&crop=smart&auto=webp&s=b3acc4a855b3360e63f0e1c164d98c8841c311ec', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ihmE3ttvMzLS7u27tJ2KyHXLhIwultonay9oa90HWDc.jpg?auto=webp&s=23a5d30703026728e0b64661bb6bc63e12029e20', 'width': 1200}, 'variants': {}}]} |
Best multilingual LLMs for role playing / NPCs | 10 | Many models struggle with evil characters, and it makes sense as they tend to be aligned to be good, polite, helpful, not dangerous and so on - but if you are trying to make a video game, that is boring.
I have seen some success with OpenAssistant, Falcon-7B-chat and Camel-Role-playing (despite the name it is still meant to be an assistant), but they still sometimes struggle a bit with complete, utter evil.
I have also seen a large drop in quality when switching from English to Spanish, although depending on the fine-tune, it is sometimes compensated for by going to a larger model like 33B.
I have also seen a much worse tendency towards repetition when using Spanish. If the character says something at the beginning or end of two consecutive messages, it is almost guaranteed to include that in all following ones. Playing with repetition_penalty, encoder_repetition_penalty and no_repeat_ngram_size doesn't prevent that; at best it makes the model change a word or two of the phrase or phrases it is stuck on repeating. Increasing the temperature or top-p doesn't help unless driven so high that the model starts spewing nonsense.
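For reference, this is the kind of knob-twiddling I mean - plain transformers `generate()` arguments (the tiny GPT-2 model and the specific values below are only placeholders so the snippet runs on its own; they are not the exact settings I used):

```python
# Illustrative sampling/repetition settings (values are placeholders, not a recommendation).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # tiny stand-in model so this is self-contained
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("El aldeano malvado dice:", return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
    repetition_penalty=1.15,           # >1.0 penalises tokens already generated
    encoder_repetition_penalty=1.0,    # penalty relative to the input/context
    no_repeat_ngram_size=3,            # blocks exact 3-gram repeats
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```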
I understand that base models may be better suited, but those would still need to be fine-tuned for chat. I plan to do some fine-tuning, but building a dataset with thousands of chat examples is outside the scope.
So, as my eventual goal is to have AI-powered NPCs that work in as many languages as possible (English and Spanish are the bare minimum), which of the many models do you think is my best bet?
Extra points if at least the dataset it is fine-tuned on is fully open (even if the base model is LLaMA for now). I am unlikely to release commercially, but it is always nice to have the possibility. | 2023-06-19T14:59:20 | https://www.reddit.com/r/LocalLLaMA/comments/14dgy63/best_multilingual_llms_for_role_playing_npcs/ | while-1-fork | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14dgy63 | false | null | t3_14dgy63 | /r/LocalLLaMA/comments/14dgy63/best_multilingual_llms_for_role_playing_npcs/ | false | false | self | 10 | null
Any recent developments in word embeddings? | 8 | Am I missing something? I don't seem to see word embeddings being used much nowadays - why is that? I see them used as a way to store "memory", but less as a way to represent word similarities.
Can anyone provide clarification on what's going on with SOTA word embeddings, and how are they being used? | 2023-06-19T15:28:43 | https://www.reddit.com/r/LocalLLaMA/comments/14dhpum/any_recent_developments_in_word_embeddings/ | mr_house7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14dhpum | false | null | t3_14dhpum | /r/LocalLLaMA/comments/14dhpum/any_recent_developments_in_word_embeddings/ | false | false | self | 8 | null |
llama.cpp and thread count optimization | 13 | I don't know if this is news to anyone or not, but I tried optimizing the number of threads executing a model and I've seen great variation in performance by merely changing the number of executing threads.
I've got an [email protected] CPU with 32G of RAM... no GPUs... nothing very special.
With all of my ggml models, in any one of several versions of llama.cpp, if I set the number of threads to "-t 3", then I see tremendous speedup in performance.
Prior, with "-t 18" which I arbitrarily picked, I would see much slower behavior. Actually, I picked 18 threads because I thought "I've got 6 cores and I should be able to run 3 threads on each of them." Bad decision!
I see worse than optimal performance if the number of threads is 1, 2, 4, 5 or upwards. Your mileage may vary.
RESULTS
\-------
The following table shows runs with various numbers of executing threads for the prompt: "If you were a tree, what kind of tree would you be?"
​
[Table of Execution Performance: -t 3 vs -t 18](https://preview.redd.it/5pe2j9dp607b1.png?width=1140&format=png&auto=webp&s=fb79fee5fa64d038f52bedfdd4fcd0115f1efd6b)
So, more threads isn't better. Optimize your number of threads (likely to a lower number ... like 3) for better performance. Your system may be different. But this seems like a good place to start searching for best performance.
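If you want to repeat the search on your own machine, something as crude as the loop below is all I mean by optimizing (the model path, token count and thread range are placeholders - point it at your own llama.cpp binary and model):

```python
# Crude timing loop over llama.cpp thread counts (paths are placeholders).
import subprocess
import time

PROMPT = "If you were a tree, what kind of tree would you be?"
MODEL = "models/your-model.ggmlv3.q4_0.bin"   # placeholder model path

for threads in range(1, 13):
    start = time.time()
    subprocess.run(
        ["./main", "-m", MODEL, "-t", str(threads), "-n", "128", "-p", PROMPT],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
        check=True,
    )
    print(f"-t {threads}: {time.time() - start:.1f} s")
```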
​
UPDATE (20230621): I've been looking at this issue more and it seems like it may be an artifact in llama.cpp. I've run other programs and the optimum seems to be at the number of cores. I'm planning on doing a thorough analysis and publishing the results here (it'll take a week or two because there are a lot of models and a lot of steps).
​ | 2023-06-19T16:42:45 | https://www.reddit.com/r/LocalLLaMA/comments/14djns5/llamacpp_and_thread_count_optimization/ | the_unknown_coder | self.LocalLLaMA | 2023-06-21T16:10:47 | 0 | {} | 14djns5 | false | null | t3_14djns5 | /r/LocalLLaMA/comments/14djns5/llamacpp_and_thread_count_optimization/ | false | false | 13 | null |
Creating a Firewall for LLMs | 0 | [removed] | 2023-06-19T16:59:38 | https://www.reddit.com/r/LocalLLaMA/comments/14dk3rd/creating_a_firewall_for_llms/ | Resident_Adeptness46 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14dk3rd | false | null | t3_14dk3rd | /r/LocalLLaMA/comments/14dk3rd/creating_a_firewall_for_llms/ | false | false | default | 0 | null |
Choosing the right project(s) | 4 | The Wiki recommends text generation web UI and llama.cpp, but there are so many other [projects](https://libreddit.decentweb.net/r/LocalLLaMA/wiki/communityprojects): Serge, MLC LLM, exllama, etc.
What should I be considering when choosing the right project(s)? I use Linux with an AMD GPU and set up exllama first due to its speed. Aside from deciding on CPU, GPU, or CPU+GPU, do you pick a single project and stick with it? How else do you benefit by using multiple options?
Finally, what are some best practices to keep in mind when experimenting with projects and models? For example:
- Use a virtual environment that isolates each project's binaries
- Keep models in the same folder for use across multiple projects | 2023-06-19T17:06:17 | https://www.reddit.com/r/LocalLLaMA/comments/14dkajr/choosing_the_right_projects/ | x6q5g3o7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14dkajr | false | null | t3_14dkajr | /r/LocalLLaMA/comments/14dkajr/choosing_the_right_projects/ | false | false | self | 4 | null |
Big LLM Score Update: TULU, Camel, Minotaur, Nous Hermes, Airoboros 1.2, Chronos Hermes | 41 | Hey folks, I've been testing new models and new quantisation schemes lately, so here are my observations and an updated leaderboard. First, take a look at the new models, listed in no particular order.
[Gotzmann LLM Score v2.2 Update - Part I](https://preview.redd.it/p9mizxypd07b1.png?width=2672&format=png&auto=webp&s=38db3e9c9a0ac3ee7ceb24fae2bca147f7012dae)
[Gotzmann LLM Score v2.2 Update - Part II](https://preview.redd.it/yx0orcusd07b1.png?width=2670&format=png&auto=webp&s=b2f0c57ad6b4974fdccd7d75045fb4a86ee88164)
If you'd like to sort and play with the dataset, please go here: [https://docs.google.com/spreadsheets/d/1ikqqIaptv2P4\_15Ytzro46YysCldKY7Ub2wcX5H1jCQ/edit?usp=sharing](https://docs.google.com/spreadsheets/d/1ikqqIaptv2P4_15Ytzro46YysCldKY7Ub2wcX5H1jCQ/edit?usp=sharing)
And here are some informal observations on my side:
\- I've tried to use minimal "prompt engineering" to show the raw capabilities of the models, but recently discovered that some models do not work properly that way. Thus I've started to build some prompting beyond the straight "USER: ... ASSISTANT:" template (marked with LongPrompt in the test)
\- You should care more about which quantisation scheme you'd like to use, because there is now more computation for K\_S and K\_M, and you might prefer to go 6\_K instead of 5\_K\_M if memory allows
\- Airoboros v1.1 looks more intelligent than v1.2, but I've seen hieroglyphs in the output with v1.1, so check for yourself
\- Some models are not ready for bilingual use. When I tested Nous Hermes, I saw it switch from Russian to English right in the middle of a word. The problem appears for both 4\_K\_S and 5\_K\_M quantisation, so it's not a glitch of one particular file.
The main test consists of 30 questions on trivia, reasoning, riddles, story writing and other tasks. There is a smaller sub-test of questions that "really matter" - it has no silly riddles or math, just 10 questions on common sense, copywriting and reasoning. I prefer to compare models with it first.
There is a new model on the block called Camel, available in 13B and 33B versions. Not sure why there is no discussion about it:
[https://huggingface.co/TheBloke/CAMEL-13B-Combined-Data-GGML](https://huggingface.co/TheBloke/CAMEL-13B-Combined-Data-GGML)
[https://huggingface.co/TheBloke/CAMEL-33B-Combined-Data-GGML](https://huggingface.co/TheBloke/CAMEL-33B-Combined-Data-GGML)
As for me, I went with Airoboros for my project. Still waiting for some ideal model :) | 2023-06-19T17:42:21 | https://www.reddit.com/r/LocalLLaMA/comments/14dl934/big_llm_score_update_tulu_camel_minotaur_nous/ | Gatzuma | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14dl934 | false | null | t3_14dl934 | /r/LocalLLaMA/comments/14dl934/big_llm_score_update_tulu_camel_minotaur_nous/ | false | false | 41 | {'enabled': False, 'images': [{'id': 'GsJsmeWy-U0p9nZRxlVQ0w0V1rWoAa-unjxnSQjrzmA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/6WeHe6WTvsADXzmBftFCHSvXJOio1KlfANiYfINpB_M.jpg?width=108&crop=smart&auto=webp&s=d827e712d91c4994ce57283b584bf861c2243a83', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/6WeHe6WTvsADXzmBftFCHSvXJOio1KlfANiYfINpB_M.jpg?width=216&crop=smart&auto=webp&s=9d48d9c5dd4367db9eff920fe084716fa2e3a127', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/6WeHe6WTvsADXzmBftFCHSvXJOio1KlfANiYfINpB_M.jpg?width=320&crop=smart&auto=webp&s=acbc9fedf0d7bac57ce15db5e7a2bdaf34595805', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/6WeHe6WTvsADXzmBftFCHSvXJOio1KlfANiYfINpB_M.jpg?width=640&crop=smart&auto=webp&s=5367094e02ecf276877d7fedb3f1b35232c3b4b1', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/6WeHe6WTvsADXzmBftFCHSvXJOio1KlfANiYfINpB_M.jpg?width=960&crop=smart&auto=webp&s=ca787af2d740067f2f332a666db0a40e2b517ec2', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/6WeHe6WTvsADXzmBftFCHSvXJOio1KlfANiYfINpB_M.jpg?width=1080&crop=smart&auto=webp&s=430e1c0d407bc136442a2bd80028179d90a79c01', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/6WeHe6WTvsADXzmBftFCHSvXJOio1KlfANiYfINpB_M.jpg?auto=webp&s=72c8774cee14963cdfc40a824dd40ba55623c822', 'width': 1200}, 'variants': {}}]} |
How do I run Stable Diffusion and LLMs from my PC on my mobile device? Offline and private ways? | 1 | [removed] | 2023-06-19T17:55:43 | [deleted] | 2023-06-19T18:04:21 | 0 | {} | 14dllom | false | null | t3_14dllom | /r/LocalLLaMA/comments/14dllom/how_do_i_run_stable_diffusion_and_llms_from_my_pc/ | false | false | default | 1 | null |
Why don't evaluation benchmarks use proper prompt formatting? | 14 | I recently delved into the evaluation process of models on the [Open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), and something caught my attention.
If you look into, e.g., [Vicuna training code](https://github.com/lm-sys/FastChat/blob/main/fastchat/train/train.py#L96), you may notice that it was trained using prompts that adhere to a strict conversational template:
`A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <user message> ASSISTANT: <assistant_message></s>`
However, when you look at the [MMLU benchmark evaluation code](https://github.com/declare-lab/instruct-eval/blob/c85b7532149eacf3646734cfbb06af1de2d27765/mmlu.py#L153) (which is most likely similar to the one used in the Open LLM leaderboard), you'll notice that it doesn't use any conversational formatting and instead uses a straightforward few-shot prompt.
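To make the mismatch concrete, here is a hypothetical MMLU-style item written the way the harness presents it, next to the same content wrapped in Vicuna's training template (the question itself is made up purely for illustration):

```python
# Hypothetical illustration of the train/test prompt mismatch.
question = (
    "The following are multiple choice questions (with answers) about anatomy.\n\n"
    "Which bone is the longest in the human body?\n"
    "A. Femur\nB. Tibia\nC. Humerus\nD. Fibula\n"
    "Answer:"
)

# What the evaluation harness feeds the model: the raw few-shot prompt.
plain_prompt = question

# What the model saw during fine-tuning: the same content wrapped in Vicuna's template.
vicuna_prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    f"USER: {question} ASSISTANT:"
)
print(plain_prompt)
print(vicuna_prompt)
```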
I wonder why. Wouldn't these models, such as Vicuna, demonstrate better performance if the prompts were appropriately formatted? And why does Vicuna still significantly improve over LLaMA-13B even if there is such a huge shift between training and testing data? | 2023-06-19T17:58:24 | https://www.reddit.com/r/LocalLLaMA/comments/14dlo95/why_dont_evaluation_benchmarks_use_a_proper/ | pilot7747 | self.LocalLLaMA | 2023-06-19T18:09:40 | 0 | {} | 14dlo95 | false | null | t3_14dlo95 | /r/LocalLLaMA/comments/14dlo95/why_dont_evaluation_benchmarks_use_a_proper/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'EN0-abblERL52DxeoNzcxdkhvXEwLdZMJTS58Umjs64', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=108&crop=smart&auto=webp&s=6fbb309f983333cbaf528bd40f8d6ffb39877704', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=216&crop=smart&auto=webp&s=1ae10c5a53638209dee07b017628d2a1fadc8d05', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=320&crop=smart&auto=webp&s=cf36565d3bac3086aaea4458c31609ff1b2c00b3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=640&crop=smart&auto=webp&s=8e182cefcf8da97d7b4369734149986feca334e5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=960&crop=smart&auto=webp&s=7699d0ad09185e2f560115cae5cb71e907073327', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=1080&crop=smart&auto=webp&s=7b11f6f2294899964ec8ed081777f4b6e19723b6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?auto=webp&s=81db4d9e1dd01a76f499e499f78aed3478ae6658', 'width': 1200}, 'variants': {}}]} |
Text Generation Web UI for Chatbots (Model and Parameter Discussion) | 2 | So I just recently set up Oobabooga's [Text Generation Web UI](https://github.com/oobabooga/text-generation-webui) (TGWUI) and was playing around with different models and character creations within the UI. I just followed the basic example character profile that is provided to create a new character to chat with (not for providing knowledge like an assistant, but just for having fun with interesting personas). I was really pleased with what both [LLaMA-7b](https://huggingface.co/decapoda-research/llama-7b-hf) (loading in 8-bit) and [\-13b](https://huggingface.co/decapoda-research/llama-13b-hf) (loading in 4-bit mode) were producing during my chat sessions. I also tried [WizardLM-7B-Uncensored](https://huggingface.co/ehartford/WizardLM-7B-Uncensored) and [GPT-J-6B](https://huggingface.co/EleutherAI/gpt-j-6b) in "Instruct+Chat" as well as in normal "Chat" mode. Sometimes I liked what they were producing, but of course they work a bit differently as far as I understand (they are instruct models).
Now I have three questions and feel free to answer whatever you want:
* What is a good strategy and format for creating new characters in TGWUI? Is it better to write sentences for a personality or are keywords enough? How much example conversation is useful?
* Are there any models that my PC\* can manage that you are very pleased with when it comes to creating characters to have fun with (e.g. also NSFW content)?
* Can you recommend parameter settings for AI chat partner purposes, e.g. temperature or repetition\_penalty? I know I should play around with them myself, but maybe you have found some sweet spot already.
\*My specs: RTX 3060 12 GB, 64 GB RAM, some i7 CPU | 2023-06-19T18:46:27 | https://www.reddit.com/r/LocalLLaMA/comments/14dmyh9/text_generation_web_ui_for_chatbots_model_and/ | psi-love | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14dmyh9 | false | null | t3_14dmyh9 | /r/LocalLLaMA/comments/14dmyh9/text_generation_web_ui_for_chatbots_model_and/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '7UgRrSr917wz5Gsb3o0qEv_Yb5qwB1G4LtQ1PA31Xrg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5yu8f94i8qqa8nDxytOB-UYUFPeMP9Vw3f1_92hfQYo.jpg?width=108&crop=smart&auto=webp&s=4a6b19d5c613e82c4afceb537d2c122a0ae9dd17', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5yu8f94i8qqa8nDxytOB-UYUFPeMP9Vw3f1_92hfQYo.jpg?width=216&crop=smart&auto=webp&s=e897374e91e1a66b5c505983ace637b2cbea29c9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5yu8f94i8qqa8nDxytOB-UYUFPeMP9Vw3f1_92hfQYo.jpg?width=320&crop=smart&auto=webp&s=09f3e80fd4f1438c273736c3911320ae16a4ce2c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5yu8f94i8qqa8nDxytOB-UYUFPeMP9Vw3f1_92hfQYo.jpg?width=640&crop=smart&auto=webp&s=5134bd9bce865ba6fbb4043bfd7ef605d1893325', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5yu8f94i8qqa8nDxytOB-UYUFPeMP9Vw3f1_92hfQYo.jpg?width=960&crop=smart&auto=webp&s=3abbcc05dfda0a1ccaeb4dc606b65639f810d6a6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5yu8f94i8qqa8nDxytOB-UYUFPeMP9Vw3f1_92hfQYo.jpg?width=1080&crop=smart&auto=webp&s=01e1d5c7768664c6f4affb0db030a19c9f4664c2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5yu8f94i8qqa8nDxytOB-UYUFPeMP9Vw3f1_92hfQYo.jpg?auto=webp&s=551a3dc08be400edc0823ff03db4c44787867756', 'width': 1200}, 'variants': {}}]} |
I'm new at the party. Can I make something with a GeForce GTX 1050 ti? Or is that too bad? | 1 | [removed] | 2023-06-19T19:59:44 | https://www.reddit.com/r/LocalLLaMA/comments/14doxz0/im_new_at_the_party_can_i_make_something_with_a/ | thiagosanches | self.LocalLLaMA | 2023-06-19T20:15:42 | 0 | {} | 14doxz0 | false | null | t3_14doxz0 | /r/LocalLLaMA/comments/14doxz0/im_new_at_the_party_can_i_make_something_with_a/ | false | false | default | 1 | null |
Proof of Concept: Local LLM to execute terminal commands (Here GPT-2) | 37 | I have written a very simple script as a proof of concept that shows the possibilities of local LLMs that could do work for you (even a small GPT-2 is enough).
Here is an asciinema recording:
[ai2-shell](https://asciinema.org/a/592283)
---
Just for demonstration purposes I wanted something fast, so here I used GPT-2 medium in ggml, quantized to q8, since I found this model faster than q5 or q4 and smarter in the context of acting like a Linux terminal.
In my case the inference took 5 seconds. I assume using a 3B LLaMA with llama.cpp would be faster and more accurate/smarter.
[Here you'll find the script](https://codeberg.org/mountain/llm_scripts/src/branch/main/ai2.sh)
Don't forget to set the path to your model and to an inference application (llama.cpp, ggml/gpt-2, etc).
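If you'd rather see the idea in Python than in the shell script, a rough sketch could look like the following (the inference command, model path and prompt are placeholders, not the exact ones from my script - and again, only ever run this inside a disposable VM):

```python
# Rough Python sketch of the same idea (inference binary/model paths are placeholders).
# WARNING: only run inside a throwaway VM, and read every command before confirming.
import subprocess

INFER = ["./gpt-2", "-m", "models/gpt2-medium-ggml-q8_0.bin", "-p"]  # placeholder ggml inference call

def suggest_command(task: str) -> str:
    prompt = (
        "You are a Linux terminal assistant. Answer with a single shell command.\n"
        f"Task: {task}\nCommand:"
    )
    out = subprocess.run(INFER + [prompt], capture_output=True, text=True).stdout
    # take the first line the model produces after "Command:"
    tail = out.split("Command:")[-1].strip()
    return tail.splitlines()[0] if tail else ""

task = input("What should I do? ")
command = suggest_command(task)
print(f"Proposed command: {command}")
if input("Run it? [y/N] ").strip().lower() == "y":
    subprocess.run(command, shell=True)
```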
EDIT: how can I make a preview for the asciinema link, btw? 😅 I thought something with `![ ... ]` would work, but it doesn't...
---
**EDIT EDIT AND CAVEAT**
**As another user has already pointed out: please be aware of the great dangers and risks of such an application. This demonstration is intended to show that small models ARE already capable of such a task - so it is not a waste of time to start experimenting and tinkering with it.**
**BUT:**
**=== this should be done exclusively (!) in a virtual environment ===**
**and you should never never never (!) run code that you do not understand 100% yourself.**
**Really, NEVER!**
**It is not difficult that you could irreversibly damage your own system with lack of knowledge.** | 2023-06-19T20:17:40 | https://www.reddit.com/r/LocalLLaMA/comments/14dpf44/proof_of_concept_local_llm_to_execute_terminal/ | Evening_Ad6637 | self.LocalLLaMA | 2023-06-20T00:14:52 | 0 | {} | 14dpf44 | false | null | t3_14dpf44 | /r/LocalLLaMA/comments/14dpf44/proof_of_concept_local_llm_to_execute_terminal/ | false | false | self | 37 | {'enabled': False, 'images': [{'id': 'Lbi610UzLW-T4V3g8HL0qtAV94Hkml7JQXOWxktu6aE', 'resolutions': [{'height': 73, 'url': 'https://external-preview.redd.it/r4gQ-EIuvZdiYin01mW2La2hgWviuHKvrMwfoIMy0TM.jpg?width=108&crop=smart&auto=webp&s=0756a8326bfd71052248f788462fa4b49c1bc328', 'width': 108}, {'height': 146, 'url': 'https://external-preview.redd.it/r4gQ-EIuvZdiYin01mW2La2hgWviuHKvrMwfoIMy0TM.jpg?width=216&crop=smart&auto=webp&s=41498d6420a049adcddc80562cd0c1688fb8a034', 'width': 216}, {'height': 216, 'url': 'https://external-preview.redd.it/r4gQ-EIuvZdiYin01mW2La2hgWviuHKvrMwfoIMy0TM.jpg?width=320&crop=smart&auto=webp&s=2213db2a007e7a8bd44afcc1b26ffe373a74f49a', 'width': 320}, {'height': 433, 'url': 'https://external-preview.redd.it/r4gQ-EIuvZdiYin01mW2La2hgWviuHKvrMwfoIMy0TM.jpg?width=640&crop=smart&auto=webp&s=4f288b6045844039616051588a86185e4b1d8b33', 'width': 640}, {'height': 650, 'url': 'https://external-preview.redd.it/r4gQ-EIuvZdiYin01mW2La2hgWviuHKvrMwfoIMy0TM.jpg?width=960&crop=smart&auto=webp&s=b5697260e7153e0da02a1d08470f464242dc9104', 'width': 960}, {'height': 732, 'url': 'https://external-preview.redd.it/r4gQ-EIuvZdiYin01mW2La2hgWviuHKvrMwfoIMy0TM.jpg?width=1080&crop=smart&auto=webp&s=32efdcf8be6c6d349f78c3d1151fd332dceede4b', 'width': 1080}], 'source': {'height': 2196, 'url': 'https://external-preview.redd.it/r4gQ-EIuvZdiYin01mW2La2hgWviuHKvrMwfoIMy0TM.jpg?auto=webp&s=eebc24fff8c8375ee44d71d80813a9ceb67c749a', 'width': 3239}, 'variants': {}}]} |
Loading large models over more than one GPU -- neither auto-devices nor manually assigning memory seems to do its job in oobabooga | 4 | [removed] | 2023-06-19T20:30:16 | https://www.reddit.com/r/LocalLLaMA/comments/14dpqrb/loading_large_models_over_more_than_one_gpu/ | tenmileswide | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14dpqrb | false | null | t3_14dpqrb | /r/LocalLLaMA/comments/14dpqrb/loading_large_models_over_more_than_one_gpu/ | false | false | default | 4 | null |
Complete noob that's been lurking for weeks but still lost: billion questions. Tutorial links or direct answers are both appreciated. | 3 | Pretty much the title - I have no clue wtf I'm doing, other than that I'm going to have to use that thing that one person made, that splits the model's vram usage across multiple GPUs, with more of the processing still having to run on a single GPU (thus, the thought to use a 3090 for that "main" card, with P40s as the ram support).
So basically, my questions are:
1. Is one powerful(ish?) GPU plus three "okay" GPUs a good strategy?
2. Does having enough vram in total to easily fit the entire model, mean it won't be running on CPU/system ram?
3. If the answer to 2 is "no," then how much support do the cards need from the system? How much CPU and ram do I need for those to not bottleneck whatever isn't handled by the GPUs? Is a Ryzen 3700x 8 core adequate, or do I need a 16 core like the Ryzen 3950x or 5950x?
4. Is 128gb of dual channel ram enough? Do I need as much system ram as the model size, even when the model fits across the total GPU vram capacity?
5. Would a Ryzen 1950x with quad channel ram be better?
6. Does ecc ram make a difference? It's cheaper per gb, but slower mhz.
7. What kind of generation speed performance could I expect from a system of this spec running a 65b model...? Can it converse in real time like [c.ai](https://c.ai) and GPT? I have zero point of reference.
8. How much context can I have with a 65b and 96gb of vram spanned across four cards?
9. How much context do I want to have...? I don't know what it does. Does it make it remember more of the previous messages in our current conversation?
10. Will the P40s be bottlenecking the 3090 somehow, even if they can share the load properly?
11. Are there 65B models good at NSFW roleplay? I just want something at least as good as [c.ai](https://c.ai) at sounding "person-y," without that weird "my butler is pretending to be Exile from Road Rovers, not actually Exile" vibe that jailbroken ChatGPT has.
12. Question 7, but with 30b models.
13. Are there tutorial posts I should read that would cover most of this. I've been lurking here for a few weeks trying to read everything, but I have a lot of gaps in understanding regarding the basics.
14. Can the AI running on my big rig be used easily from my laptop? I know the answer is yes, but I'm not versed in Linux and SSHing yet either...if that's even what I have to do. I assume there's an interface thing I can somewhat easily use, since that's what every consumer AI service does?
I think that's everything. I've never used ANY of this before, just [c.ai](https://c.ai) and jailbroken ChatGPT for rp, and NovelAI for images (though that's a whole other task to figure out, that seems easier). Any help is appreciated, sorry if this is considered "low effort." I just want to get a rig and hit the ground running. | 2023-06-19T20:37:17 | https://www.reddit.com/r/LocalLLaMA/comments/14dpxj1/complete_noob_thats_been_lurking_for_weeks_but/ | CanineAssBandit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14dpxj1 | false | null | t3_14dpxj1 | /r/LocalLLaMA/comments/14dpxj1/complete_noob_thats_been_lurking_for_weeks_but/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'veE04iaMbgI4yLvLGj2IZNV7UQfnq3n_7BmxP28dCd8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=108&crop=smart&auto=webp&s=0e594332595e82a5118e08d35a2cd140c18d7571', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=216&crop=smart&auto=webp&s=e3c279ba2d1ae1f9f2fba4b328e22f6615821b5c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=320&crop=smart&auto=webp&s=e635acb6bc693890c232162908676cb6478c120c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=640&crop=smart&auto=webp&s=59ba293d6adf4cce410b43b5d28ae104922701b0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=960&crop=smart&auto=webp&s=fc7dc69af838ec53e60b3e88fec5e67c8759495b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=1080&crop=smart&auto=webp&s=e50a4f1b7c99e137a2ab4d5e2d573bb75becd067', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?auto=webp&s=b8597825d9b212133d3dbd9ee26fd0dcc2a84677', 'width': 1200}, 'variants': {}}]} |
Best LLM for legal analysis | 4 | I'm working on a community project to use (privacy-cleansed) legal data to fine-tune a model, with the end-goal of making the law more accessible to the public. Between us we've a decent range of tech skills, and a benefactor is providing a limited budget (a few thousand to get to launch). We're thinking of building in AWS with SageMaker to keep it manageable, though still looking at alternatives.
However, we're struggling to get our heads around the welter of LLMs available, with more every minute it seems. Are some intrinsically better suited to tasks like legal analysis? Also, we've found a bunch of proprietary legal tools for tasks like jargon-busting, but not so many open-source ones. Any tips? | 2023-06-19T21:42:54 | https://www.reddit.com/r/LocalLLaMA/comments/14drnvd/best_llm_for_legal_analysis/ | edinburgh1975 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14drnvd | false | null | t3_14drnvd | /r/LocalLLaMA/comments/14drnvd/best_llm_for_legal_analysis/ | false | false | self | 4 | null
Adapter merge for Falcon 40b instruct model (loaded in 8bit) | 2 | Is it possible to merge a fine-tuned adapter into Falcon 40B loaded in 8-bit mode? | 2023-06-19T22:19:30 | https://www.reddit.com/r/LocalLLaMA/comments/14dsmvy/adapter_merge_for_flacon_40b_instruct_model/ | unoshow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14dsmvy | false | null | t3_14dsmvy | /r/LocalLLaMA/comments/14dsmvy/adapter_merge_for_flacon_40b_instruct_model/ | false | false | self | 2 | null
llama.cpp - Training your own mini ggml model from scratch works! | 1 | [deleted] | 2023-06-19T22:24:15 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14dsrer | false | null | t3_14dsrer | /r/LocalLLaMA/comments/14dsrer/llamacpp_training_your_own_mini_ggml_model_from/ | false | false | default | 1 | null |
Tutorial - train your own llama.cpp mini-ggml-model from scratch! | 156 | 2023-06-19T22:26:44 | https://asciinema.org/a/592303 | Evening_Ad6637 | asciinema.org | 1970-01-01T00:00:00 | 0 | {} | 14dstqm | false | null | t3_14dstqm | /r/LocalLLaMA/comments/14dstqm/tutorial_train_your_own_llamacpp_miniggmlmodel/ | false | false | 156 | {'enabled': False, 'images': [{'id': '1AWydW3gz9Q0_fQz7g4w5Al-pPtgsEMV3SO7mCTxX4c', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/yS_hQbNoxSkIXCNqZVPaNeUVdG-auQ3TxPwcLKmlrOg.jpg?width=108&crop=smart&auto=webp&s=82435af198450db42a7f193b133cedaeca62cbc8', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/yS_hQbNoxSkIXCNqZVPaNeUVdG-auQ3TxPwcLKmlrOg.jpg?width=216&crop=smart&auto=webp&s=dc8edd636b7e81781884bce193c34f9e35458d29', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/yS_hQbNoxSkIXCNqZVPaNeUVdG-auQ3TxPwcLKmlrOg.jpg?width=320&crop=smart&auto=webp&s=d6642e566259b40d355eb7ccdfc55f7d34f3e570', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/yS_hQbNoxSkIXCNqZVPaNeUVdG-auQ3TxPwcLKmlrOg.jpg?width=640&crop=smart&auto=webp&s=3a07e2a3827b1e18638719d45503d665559fd766', 'width': 640}, {'height': 639, 'url': 'https://external-preview.redd.it/yS_hQbNoxSkIXCNqZVPaNeUVdG-auQ3TxPwcLKmlrOg.jpg?width=960&crop=smart&auto=webp&s=60ad1dcacb312663bb6d359ea185c8aecf2668f8', 'width': 960}, {'height': 718, 'url': 'https://external-preview.redd.it/yS_hQbNoxSkIXCNqZVPaNeUVdG-auQ3TxPwcLKmlrOg.jpg?width=1080&crop=smart&auto=webp&s=adf9055841ae9dea001a71b4a0030fc13de853d8', 'width': 1080}], 'source': {'height': 2156, 'url': 'https://external-preview.redd.it/yS_hQbNoxSkIXCNqZVPaNeUVdG-auQ3TxPwcLKmlrOg.jpg?auto=webp&s=33639de986c0c9e29b4296e3bda23d73fcfb2822', 'width': 3239}, 'variants': {}}]} |
Conda environment is empty. Press any key to continue . . . "Press key" The windows closes and nothing happens. | 1 | [removed] | 2023-06-19T22:55:56 | https://www.reddit.com/r/LocalLLaMA/comments/14dtjq0/conda_environment_is_empty_press_any_key_to/ | Vanilla_Vampi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14dtjq0 | false | null | t3_14dtjq0 | /r/LocalLLaMA/comments/14dtjq0/conda_environment_is_empty_press_any_key_to/ | false | false | default | 1 | null |
What are the current best uncensored models? | 44 | \[Bottom text\] | 2023-06-19T23:03:51 | https://www.reddit.com/r/LocalLLaMA/comments/14dtqw2/what_are_the_current_best_uncensored_models/ | TheZorro_Sama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14dtqw2 | false | null | t3_14dtqw2 | /r/LocalLLaMA/comments/14dtqw2/what_are_the_current_best_uncensored_models/ | false | false | self | 44 | null |
Any tips for having models adhere to chat formatting (ie speech in quotes and actions in asterisks) when using KoboldAI with SillyTavern? | 6 | I've been using SillyTavern with KoboldAI running via RunPod, and am just wondering if there are any settings or improved prompts I can adjust in ST to improve characters' adherence to the "chat formatting" in their replies.
I’ve been using 4bit Airoboros 33B and VidUnlocked 30b and really like the content of the results, but they don’t adhere to the formatting, often inserting extra slashes and multiple asterisks, etc.
Airoboros also often includes the word “USER:” and even writes for my character. I’m able to include “USER:” as a stop sequence, but it still spends time generating those replies even if ST ignores it.
4-bit Chronos 33B is great and adheres to the formatting, but the character's speech is often very literal, and I would prefer to use the other models if they would adhere to the formatting.
I know this is largely due to the data they're trained on, but am just wondering if there is anything I'm missing in SillyTavern to improve or force their adherence to this formatting, or if these models have optimal inclusions for system prompt, input sequence, output sequence, system sequence, stop sequence, and separator.
Any advice or resources would be much appreciated. | 2023-06-19T23:09:06 | https://www.reddit.com/r/LocalLLaMA/comments/14dtv5r/any_tips_for_having_models_adhere_to_chat/ | BangkokPadang | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14dtv5r | false | null | t3_14dtv5r | /r/LocalLLaMA/comments/14dtv5r/any_tips_for_having_models_adhere_to_chat/ | false | false | self | 6 | null |
QLoRA Dataset | 1 | [removed] | 2023-06-19T23:18:18 | [deleted] | 2023-06-19T23:23:11 | 0 | {} | 14du2vv | false | null | t3_14du2vv | /r/LocalLLaMA/comments/14du2vv/qlora_dataset/ | false | false | default | 1 | null |
Better/Good Uncensored 30B/65B model for RP? | 6 | Hi there! Wondering which models are being used for RP that work well, while also being uncensored.
At 30B there seem to be a lot, but at 65B there's airoboros, guanaco, robin and alpaca-vicunlocked, and I'm not sure if there are more. There seem to be very few of them.
For 30B, I know of the same models I mentioned for 65B.
Any suggestion? Thanks! | 2023-06-20T00:21:53 | https://www.reddit.com/r/LocalLLaMA/comments/14dvl34/bettergood_uncensored_30b65b_model_for_rp/ | panchovix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14dvl34 | false | null | t3_14dvl34 | /r/LocalLLaMA/comments/14dvl34/bettergood_uncensored_30b65b_model_for_rp/ | false | false | self | 6 | null |
Finetuned LLM not getting better performance | 3 | I have been trying hard to get Vicuna 7B to be a better assistant by fine-tuning it on my own documents, but results so far have been weak. It doesn't seem like next-word prediction on a new high-quality corpus is enough for it to leverage that knowledge in a chatbot format. Has anyone faced this same problem? We could share some insights here on what works and what doesn't when trying to make a chatbot specialize in a niche subject. | 2023-06-20T00:23:42 | https://www.reddit.com/r/LocalLLaMA/comments/14dvmkr/finetuned_llm_not_getting_better_performance/ | kreuzguy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14dvmkr | false | null | t3_14dvmkr | /r/LocalLLaMA/comments/14dvmkr/finetuned_llm_not_getting_better_performance/ | false | false | self | 3 | null
Anyboty Client - Alpha 0.1.1 Release - GGMLv3 Compatible Easy to Use Client | 10 | 2023-06-20T00:55:51 | https://anyboty.itch.io/anyboty | Pathos14489 | anyboty.itch.io | 1970-01-01T00:00:00 | 0 | {} | 14dwcft | false | null | t3_14dwcft | /r/LocalLLaMA/comments/14dwcft/anyboty_client_alpha_011_release_ggmlv3/ | false | false | 10 | {'enabled': False, 'images': [{'id': '7aKyCAIq4SAnFwxYHDpdcXu8UleY-vwKVX4Jat_6qLo', 'resolutions': [{'height': 85, 'url': 'https://external-preview.redd.it/2EJ_h26BboK-gACvjoOjMgFP9jEDNijqh8i2nIUUXB4.jpg?width=108&crop=smart&auto=webp&s=712fb68d4db7b695d2a8b5bd76b9e9cde7ff9f17', 'width': 108}, {'height': 171, 'url': 'https://external-preview.redd.it/2EJ_h26BboK-gACvjoOjMgFP9jEDNijqh8i2nIUUXB4.jpg?width=216&crop=smart&auto=webp&s=0dd5dce6cd293490f5890109916f87d3ce0b345d', 'width': 216}, {'height': 253, 'url': 'https://external-preview.redd.it/2EJ_h26BboK-gACvjoOjMgFP9jEDNijqh8i2nIUUXB4.jpg?width=320&crop=smart&auto=webp&s=e15a742eb944d64a0cbb84348b851848f94560d6', 'width': 320}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/2EJ_h26BboK-gACvjoOjMgFP9jEDNijqh8i2nIUUXB4.jpg?auto=webp&s=6b37f441be7660bc53ee74b94931fa3a6d7c76d8', 'width': 630}, 'variants': {}}]} |
Alternative C# bindings in development, anyone interested? | 12 | **Edit: The bulk of the code can be found in this subdirectory.**
**https://github.com/MrJackSpade/Chie/tree/main/LLamaSharp/LLama**
Was just wondering if there's anyone who wants to contribute to a chunk of a project I'm working on.
I wasn't particularly satisfied with the state of the current LlamaSharp project, seeing as how it's pretty much a straight-up 1:1 port of the C++ code. It lacked a lot of functionality that I was hoping to leverage, so I ended up writing my own to be more in line with the standard design patterns of the .NET Framework, and following the MS code style guidelines.
As far as progress goes, it's been great. I've had no problem keeping it up to date with the newest releases of llama.cpp, and I've added an assload of new features, like
1. Interfaces for injecting custom code into various stages of the inference process including context rollovers, token selection, and sampling
2. Interfaces for asynchronously processing data post-return (esp context cleanup) to allow evaluation while the user is typing a response, instead of performing the operations synchronously while the user is waiting.
3. Token metadata support, allowing tokens to be saved and loaded with arbitrary metadata to be used at various points during the execution pipeline (denoting tokens as temporary, user created, or returned by the model for example)
as well as some other nice QOL stuff, like defining a "TokenCollection" class that supports operations like splitting, joining, etc., and abstracting the context evaluation so that changes to the context are always tracked and evaluations performed "as needed", removing the need to self-calculate evaluation stuff.
All in all, just a ton of really good stuff that allows for *intensely* customizing the experience of inference through C# without actually needing to make any modifications to the underlying code, by treating the LlamaModel in the same way one would treat something like a WebApplication. Configure, dependency inject, and instantiate.
This module would (does) make an absolutely great basis for a WebApi, allowing for a very customizable experience. I've actually done that myself. The problem is, while I've done my best to maintain proper isolation between layers, there is definitely some bias in my implementation. It's a little rough around the edges. At the same time, I feel like it's kind of a waste for me to be the only one actually leveraging this, when it seems like so many other people are just trying to work with unsavory APIs provided by other clients, or reinventing the wheel by using the existing classes.
So, to the fucking point finally, I was wondering if theres anyone else on the Sr+ side of development who wants to help with a little bit of cleanup so that other people can leverage what I've spend the past month learning/writing. Help me get out of that "my use case" bias, clean up the edges a little bit, and possibly finalize packing it up into an easy to use/consume package or web-api. Seems like it could be a win-win. Community gets a clean and easy API, and I get someone to call my bullshit if I start cutting corners. | 2023-06-20T02:10:18 | https://www.reddit.com/r/LocalLLaMA/comments/14dxznf/alternative_c_bindings_in_development_anyone/ | mrjackspade | self.LocalLLaMA | 2023-06-20T17:49:53 | 0 | {} | 14dxznf | false | null | t3_14dxznf | /r/LocalLLaMA/comments/14dxznf/alternative_c_bindings_in_development_anyone/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': '7F21HkIZ7kiK1HC_9WGRzzNC45LauG9S2AgOqNfW6uw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/W8FMnClGeDLG9F8KU-brXD81et0g0p4upXHNFxEA2fc.jpg?width=108&crop=smart&auto=webp&s=415758c5775304ae97cc5c64096e4bce799a3ed9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/W8FMnClGeDLG9F8KU-brXD81et0g0p4upXHNFxEA2fc.jpg?width=216&crop=smart&auto=webp&s=be1fcca89fb90da77c48ef6f7fcb6dffe2952290', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/W8FMnClGeDLG9F8KU-brXD81et0g0p4upXHNFxEA2fc.jpg?width=320&crop=smart&auto=webp&s=b706056d3c3f7dec003699eb0dbc12017b18946f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/W8FMnClGeDLG9F8KU-brXD81et0g0p4upXHNFxEA2fc.jpg?width=640&crop=smart&auto=webp&s=359f7ad4a3a9ee5d85993c4b460306c7274e7f54', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/W8FMnClGeDLG9F8KU-brXD81et0g0p4upXHNFxEA2fc.jpg?width=960&crop=smart&auto=webp&s=eba7a4da643214f584af74e3ce3b610e1507b9d5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/W8FMnClGeDLG9F8KU-brXD81et0g0p4upXHNFxEA2fc.jpg?width=1080&crop=smart&auto=webp&s=b2e7e8f1ebbb3ae97aaa7190a00c802276b1c36d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/W8FMnClGeDLG9F8KU-brXD81et0g0p4upXHNFxEA2fc.jpg?auto=webp&s=617563c29e6637b3d2962f3d0d11a2f49a921968', 'width': 1200}, 'variants': {}}]} |
Try the new RWKV world 7B, it's incredible. | 32 | [deleted] | 2023-06-20T02:19:13 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14dy6of | false | null | t3_14dy6of | /r/LocalLLaMA/comments/14dy6of/try_the_new_rwkv_world_7b_its_incredible/ | false | false | default | 32 | null |
Making a LoRA for WizardLM | 0 | [deleted] | 2023-06-20T02:20:01 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14dy795 | false | null | t3_14dy795 | /r/LocalLLaMA/comments/14dy795/making_a_lora_for_wizardlm/ | false | false | default | 0 | null |
Creating an Org Knowledge Management System | 10 | Hi All,
Kinda new to this space, but I've tried a few models.
I would like to get direction on how to create a personalized knowledge management system that allows users to type questions and get answers.
I want to train this model on organizational data and keep it privately hosted. I'm happy to create department-wise models if a single model would be overloaded - for example: one for HR, one for IT, etc. The major problem I want to address is keeping the org knowledge in-house, so that when someone leaves the org, the next person can just ask the model how things were previously done, what work is ongoing, etc.
It would be great if you could suggest:
01. Good model for this
02. How to ingest data into the model from PDFs, Word, and other document sources.
03. How to ingest data from Teams transcripts
I want this model to act as one knowledgeable person on the team.
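In my head the flow is something like the sketch below - embed document chunks, retrieve the closest ones for a question, and hand them to whichever locally hosted model we pick (the libraries, embedding model and example texts are placeholders, not decisions):

```python
# Rough retrieval-over-documents sketch (libraries/models here are placeholders).
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")   # placeholder embedding model

# chunks: text snippets extracted from PDFs, Word docs, Teams transcripts, etc.
chunks = [
    "Onboarding in 2022 was handled by HR with a two-week buddy programme.",
    "The IT helpdesk rotates on-call duty weekly; handover notes live in the wiki.",
]
chunk_vectors = embedder.encode(chunks, normalize_embeddings=True)

def retrieve(question: str, k: int = 3) -> list[str]:
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = chunk_vectors @ q                        # cosine similarity (vectors are normalised)
    return [chunks[i] for i in np.argsort(-scores)[:k]]

question = "How was onboarding handled previously?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}\nAnswer:"
# `prompt` then goes to the privately hosted model of choice.
print(prompt)
```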
TY. <3 | 2023-06-20T02:33:58 | https://www.reddit.com/r/LocalLLaMA/comments/14dyimz/creating_an_org_knowledge_management_system/ | SpongeBob_000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14dyimz | false | null | t3_14dyimz | /r/LocalLLaMA/comments/14dyimz/creating_an_org_knowledge_management_system/ | false | false | self | 10 | null |