Dataset schema (field: type, observed range):
title: string (length 1–300)
score: int64 (0–8.54k)
selftext: string (length 0–40k)
created: timestamp[ns]
url: string (length 0–780)
author: string (length 3–20)
domain: string (length 0–82)
edited: timestamp[ns]
gilded: int64 (0–2)
gildings: string (7 classes)
id: string (length 7, fixed)
locked: bool (2 classes)
media: string (length 646–1.8k)
name: string (length 10, fixed)
permalink: string (length 33–82)
spoiler: bool (2 classes)
stickied: bool (2 classes)
thumbnail: string (length 4–213)
ups: int64 (0–8.54k)
preview: string (length 301–5.01k)
Unable to load models using GPTQ-for-LLaMa anymore, and other loaders are 10+ times slower.
1
[removed]
2023-08-16T17:03:16
https://www.reddit.com/r/LocalLLaMA/comments/15svkm9/unable_to_use_load_using_gptqforllama_anymore_and/
SGAShepp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15svkm9
false
null
t3_15svkm9
/r/LocalLLaMA/comments/15svkm9/unable_to_use_load_using_gptqforllama_anymore_and/
false
false
self
1
null
How to train a LoRA for story generation?
1
I've installed text-generation-webui and the Guanaco-7B-GPTQ model and wanted to train a LoRA from raw text (no question answering). First the webui warns that training Llama is experimental, then it tells me to load the model in 8-bit (AutoGPTQ has no such option), and then the training starts and instantly fails because of a missing pad token. I'm not specifically looking for Guanaco; any model that is suitable for training a Llama LoRA on my own texts would do. Is there a good resource that describes training/finetuning? Or is text-generation-webui not the right tool, and are there other projects better suited for training Llama?
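The usual cause of that last failure is that Llama tokenizers ship without a pad token. A minimal sketch of the common workaround (the model id below is just a placeholder for whichever base you pick):

```python
# Assign a pad token before training so padding-based batching works.
# Reusing the EOS token is the common trick for Llama-family models.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-base-llama-model")  # placeholder id
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
```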
2023-08-16T17:12:58
https://www.reddit.com/r/LocalLLaMA/comments/15svtys/how_to_train_a_lora_for_story_generation/
_allo_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15svtys
false
null
t3_15svtys
/r/LocalLLaMA/comments/15svtys/how_to_train_a_lora_for_story_generation/
false
false
self
1
null
Training Llama on friends chat
1
[removed]
2023-08-16T17:48:34
https://www.reddit.com/r/LocalLLaMA/comments/15swsih/training_llama_on_friends_chat/
FarCounty5232
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15swsih
false
null
t3_15swsih
/r/LocalLLaMA/comments/15swsih/training_llama_on_friends_chat/
false
false
self
1
{'enabled': False, 'images': [{'id': 'xmJssDdwe5jayzxRpPODqWkdPUPI6UKgkvDlnavO4n4', 'resolutions': [{'height': 33, 'url': 'https://external-preview.redd.it/ZdRa_OvnuELBWXuglGbO8IrpL_CGohTVLpBrCwVwztg.jpg?width=108&crop=smart&auto=webp&s=60ec707e55d359696515b2b04d0d54a3fd236e7a', 'width': 108}, {'height': 66, 'url': 'https://external-preview.redd.it/ZdRa_OvnuELBWXuglGbO8IrpL_CGohTVLpBrCwVwztg.jpg?width=216&crop=smart&auto=webp&s=738620cbc118b6a53c37392db3e0e7b6eaff6320', 'width': 216}, {'height': 98, 'url': 'https://external-preview.redd.it/ZdRa_OvnuELBWXuglGbO8IrpL_CGohTVLpBrCwVwztg.jpg?width=320&crop=smart&auto=webp&s=1671bb14bb2548bfef2a1387a33abcfe2a1d0510', 'width': 320}, {'height': 196, 'url': 'https://external-preview.redd.it/ZdRa_OvnuELBWXuglGbO8IrpL_CGohTVLpBrCwVwztg.jpg?width=640&crop=smart&auto=webp&s=853dba1d086f63f675011de92fe2e8ae7a3ffc05', 'width': 640}], 'source': {'height': 279, 'url': 'https://external-preview.redd.it/ZdRa_OvnuELBWXuglGbO8IrpL_CGohTVLpBrCwVwztg.jpg?auto=webp&s=22599f50e8197ef64a1a43d0116769ece2ae6589', 'width': 911}, 'variants': {}}]}
Sharded Models and Free Colab
1
Hi all, I've recently become fascinated by techniques like sharding and quantization that allow models of up to 13B parameters to be run for inference on free Google Colab, and models of up to 7B parameters to be fine-tuned there. Before this, even using a 1B-parameter model on Colab was a struggle. Feel free to check out my Hugging Face repo for sharded models; I'll try to expand it as new models get released. HuggingFace: [https://huggingface.co/rajuptvs](https://huggingface.co/rajuptvs)
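For anyone curious how a sharded checkpoint is actually consumed on Colab, here is a hedged sketch (the repo id is a placeholder, not one of the actual uploads; assumes `pip install transformers accelerate bitsandbytes`):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "rajuptvs/some-sharded-model"  # hypothetical repo id for illustration

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",   # accelerate places shards across GPU/CPU as they load
    load_in_8bit=True,   # bitsandbytes 8-bit so a 13B model fits a free T4
    torch_dtype=torch.float16,
)
```

Small shards matter because the loader only needs one shard in system RAM at a time, which is what keeps the free tier's ~12 GB from being exhausted.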
2023-08-16T18:09:06
https://www.reddit.com/r/LocalLLaMA/comments/15sxcbw/sharded_models_and_free_colab/
Electrical-Cup3175
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15sxcbw
false
null
t3_15sxcbw
/r/LocalLLaMA/comments/15sxcbw/sharded_models_and_free_colab/
false
false
self
1
{'enabled': False, 'images': [{'id': 'gE0KgCLAyzcHE-o8B7v1_DRQfi68t1dcl22Pfvmf2BA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/zkJsQww7xafZ7_OUYJA62u_kRniLtaUEqNK60w5z_VE.jpg?width=108&crop=smart&auto=webp&s=c490077f5f0d74c9446361c2f51e3c75f0711144', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/zkJsQww7xafZ7_OUYJA62u_kRniLtaUEqNK60w5z_VE.jpg?width=216&crop=smart&auto=webp&s=2055d50ac5bd164bfa9f5e2229704cb519b3cbad', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/zkJsQww7xafZ7_OUYJA62u_kRniLtaUEqNK60w5z_VE.jpg?width=320&crop=smart&auto=webp&s=61f909bcad5cd7440f70ccf351130906e86e4d35', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/zkJsQww7xafZ7_OUYJA62u_kRniLtaUEqNK60w5z_VE.jpg?width=640&crop=smart&auto=webp&s=d393246a6b655f0f473d254c841a33c9a334be38', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/zkJsQww7xafZ7_OUYJA62u_kRniLtaUEqNK60w5z_VE.jpg?width=960&crop=smart&auto=webp&s=0cf974fc61679670f91445f763f2917e7425efa3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/zkJsQww7xafZ7_OUYJA62u_kRniLtaUEqNK60w5z_VE.jpg?width=1080&crop=smart&auto=webp&s=89e60fe4da1e03a13763b70372e611541d34a947', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/zkJsQww7xafZ7_OUYJA62u_kRniLtaUEqNK60w5z_VE.jpg?auto=webp&s=9162791477952edc014c1f92aaebb95de17a5032', 'width': 1200}, 'variants': {}}]}
Ways to speed up inference
1
Hey everyone, for the last week I have been researching faster inference for a private Llama 2 deployment. Context: I used Hugging Face to load Llama-2-13b-chat-hf, wrapped it in a FastAPI service, and deployed it on an EC2 g4dn.xlarge. These are the ways I have found to speed up inference; can you give your input on them?

1. Use a quantized model (the QLoRA technique with different bit widths).
2. While using QLoRA for inference, you can also convert to fp16 while generating results for better text generation. (QLoRA is mainly used for fine-tuning, but we are taking the quantization part of the concept, not the LoRA part.)
3. Tensor parallelism (HF TGI and vLLM both support it, even though I don't understand what it is, or whether it can even work on a single GPU).
4. Batching of multiple requests (also supported by the above two).

Do you know of other ways and methods I should look into? If you can attach links to articles/GitHub, that would be really helpful. Long live open source LLMs!
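On points 3 and 4, a minimal vLLM sketch (assumes `pip install vllm` and access to the gated Llama 2 weights); on a single GPU, tensor parallelism cannot help, so `tensor_parallel_size=1`, but continuous batching still does:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-2-13b-chat-hf", tensor_parallel_size=1)
params = SamplingParams(temperature=0.7, max_tokens=256)

# vLLM batches concurrent prompts automatically (continuous batching).
outputs = llm.generate(["Explain KV caching.", "What is quantization?"], params)
for out in outputs:
    print(out.outputs[0].text)
```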
2023-08-16T18:15:33
https://www.reddit.com/r/LocalLLaMA/comments/15sxiht/ways_to_speed_up_inference/
Spiritual-Rub925
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15sxiht
false
null
t3_15sxiht
/r/LocalLLaMA/comments/15sxiht/ways_to_speed_up_inference/
false
false
self
1
null
ComfyUI for LLMs?
1
Is there a similar UI for LLMs such as [ComfyUI](https://github.com/comfyanonymous/ComfyUI)? Thanks
2023-08-16T18:31:37
https://www.reddit.com/r/LocalLLaMA/comments/15sxxti/comfyui_for_llms/
fpena06
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15sxxti
false
null
t3_15sxxti
/r/LocalLLaMA/comments/15sxxti/comfyui_for_llms/
false
false
self
1
{'enabled': False, 'images': [{'id': 'a5om-sjqg5RgnDpP2hxaN_djs6lvIVDvEWRO94GmfmU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/b6GY2FRy7_HMg93VgcY0nVSaTKucDIKTuuuwHAyEOss.jpg?width=108&crop=smart&auto=webp&s=77824ba83cd090340a0b6ace41d7e750410274aa', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/b6GY2FRy7_HMg93VgcY0nVSaTKucDIKTuuuwHAyEOss.jpg?width=216&crop=smart&auto=webp&s=418ac0f09e226d05c0d65b4158071ebf6bba9bfc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/b6GY2FRy7_HMg93VgcY0nVSaTKucDIKTuuuwHAyEOss.jpg?width=320&crop=smart&auto=webp&s=38de2cd6c525b0243bfa4dc616e5b2b6a6e9b03c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/b6GY2FRy7_HMg93VgcY0nVSaTKucDIKTuuuwHAyEOss.jpg?width=640&crop=smart&auto=webp&s=b30a379db8dfefe5317be336ae2ef6999c861e29', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/b6GY2FRy7_HMg93VgcY0nVSaTKucDIKTuuuwHAyEOss.jpg?width=960&crop=smart&auto=webp&s=d8db1950323eff579e6e3b93107940a66f12dc35', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/b6GY2FRy7_HMg93VgcY0nVSaTKucDIKTuuuwHAyEOss.jpg?width=1080&crop=smart&auto=webp&s=157bf8059f915d1e74b2e5992aa484bb9f823f89', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/b6GY2FRy7_HMg93VgcY0nVSaTKucDIKTuuuwHAyEOss.jpg?auto=webp&s=0d5bd65868bf0d7d295953d948756821ad2724c2', 'width': 1200}, 'variants': {}}]}
Help me out!
1
I have conversational customer-support data, and I want to fine-tune an LLM on it. For fine-tuning I don't want to give it just the conversation; I also want to include the user's history along with the conversation. I have data in this format: ###user: ..... ###agent: ..... and so on. How do I incorporate the user history as well, and also add some rule-based behavior?
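One hedged way to do this is to fold the prior history into each training example as a prefix block; the `###history:` marker here is illustrative, not a standard:

```python
def build_example(history, user_msg, agent_msg):
    """history: list of (role, text) tuples from earlier turns/sessions."""
    lines = ["###history:"]
    for role, text in history:
        lines.append(f"{role}: {text}")
    lines.append(f"###user: {user_msg}")
    lines.append(f"###agent: {agent_msg}")
    return "\n".join(lines)

print(build_example([("user", "My order is late"), ("agent", "Sorry about that!")],
                    "It still hasn't arrived", "Let me escalate this for you."))
```

Rules are often easier to enforce outside the model (e.g., checks on the generated reply) than to train in.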
2023-08-16T20:00:29
https://www.reddit.com/r/LocalLLaMA/comments/15t0aze/help_me_out/
Anu_Rag9704
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15t0aze
false
null
t3_15t0aze
/r/LocalLLaMA/comments/15t0aze/help_me_out/
false
false
self
1
null
Llama2 on Replicate faster than ChatGPT?
1
Ran some testing and found that Llama 2 on Replicate is faster than ChatGPT. Code: [https://github.com/BerriAI/litellm/blob/main/cookbook/Evaluating_LLMs.ipynb](https://github.com/BerriAI/litellm/blob/main/cookbook/Evaluating_LLMs.ipynb) Are others seeing similar results? https://preview.redd.it/xvs5qfbs7jib1.png?width=1230&format=png&auto=webp&s=3b30d2e21eec8a5915ffae109303082bb9e0623a
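A hedged sketch of reproducing the timing comparison with litellm (assumes `pip install litellm` and the OpenAI/Replicate API keys in the environment; the Replicate version id is left as a placeholder):

```python
import time
from litellm import completion

models = [
    "gpt-3.5-turbo",
    "replicate/llama-2-70b-chat:<version-id>",  # fill in a real version id
]
for model in models:
    start = time.time()
    completion(model=model, messages=[{"role": "user", "content": "Say hello."}])
    print(model, f"{time.time() - start:.2f}s")
```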
2023-08-16T20:27:02
https://www.reddit.com/r/LocalLLaMA/comments/15t10hc/llama2_on_replicate_faster_than_chatgpt/
VideoTo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15t10hc
false
null
t3_15t10hc
/r/LocalLLaMA/comments/15t10hc/llama2_on_replicate_faster_than_chatgpt/
false
false
https://a.thumbs.redditm…jhagsRKsb3Y0.jpg
1
{'enabled': False, 'images': [{'id': 'HntGlUG9QVJz35VFlrnNDamCLoG3v2oqpJwQYex7etA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ibRkyuOT4uj49tj7aDXOiHX5hBpVJ6tP_-aD034ZvwE.jpg?width=108&crop=smart&auto=webp&s=0582259d9eb13e5e14bf4395867082d0162a97a7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ibRkyuOT4uj49tj7aDXOiHX5hBpVJ6tP_-aD034ZvwE.jpg?width=216&crop=smart&auto=webp&s=7bc6b2d25a990434bfdeb5dd59c48de2dafb0d39', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ibRkyuOT4uj49tj7aDXOiHX5hBpVJ6tP_-aD034ZvwE.jpg?width=320&crop=smart&auto=webp&s=28e4d910149dbeee4e1e7f4db4a18733c8d8bf78', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ibRkyuOT4uj49tj7aDXOiHX5hBpVJ6tP_-aD034ZvwE.jpg?width=640&crop=smart&auto=webp&s=824d547148fde2c0b45d26d7163084809dda28c2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ibRkyuOT4uj49tj7aDXOiHX5hBpVJ6tP_-aD034ZvwE.jpg?width=960&crop=smart&auto=webp&s=e43f089bb859ed5eb166df3cbb3c50bc81b8da6e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ibRkyuOT4uj49tj7aDXOiHX5hBpVJ6tP_-aD034ZvwE.jpg?width=1080&crop=smart&auto=webp&s=aeac0da87a689a4c6b762fa168bffb9f89db4b2f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ibRkyuOT4uj49tj7aDXOiHX5hBpVJ6tP_-aD034ZvwE.jpg?auto=webp&s=6956afe070e6ab90ca1ddfa30ef2c8abff2881fd', 'width': 1200}, 'variants': {}}]}
SOLVING CHALLENGING MATH WORD PROBLEMS USING GPT-4 CODE INTERPRETER WITH CODE-BASED SELF-VERIFICATION
1
[https://arxiv.org/pdf/2308.07921.pdf](https://arxiv.org/pdf/2308.07921.pdf) This is not my paper. It seems like the same method should work for Llama models, and it should definitely help with dataset creation.
2023-08-16T20:43:56
https://www.reddit.com/r/LocalLLaMA/comments/15t1h7f/solving_challenging_math_word_problems_using_gpt4/
metalman123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15t1h7f
false
null
t3_15t1h7f
/r/LocalLLaMA/comments/15t1h7f/solving_challenging_math_word_problems_using_gpt4/
false
false
self
1
null
LLAMA 70B Chat - what am I doing wrong?
1
Why is LLAMA getting this wrong? This is a very simple ask, no? Only GPT seems to get this right. (Name 1 student rating) https://gpt.h2o.ai/ https://preview.redd.it/cx9f3hr7hjib1.png?width=1328&format=png&auto=webp&s=818acedf15c28830ab280e7d55f567634a4202e5

Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.

Context:

Name 1:

|Rating|Date|
|-|-|
|A|2019-09-09|
|B|2020-03-25|
|B+|2020-03-25|
|B|2020-05-27|
|B-|2021-03-29|
|A-|2023-07-13|

***

Name 2:

|Rating|Date|
|-|-|
|B+|2013-09-23|
|B+|2014-10-06|
|B+|2014-10-21|
|B+|2016-02-16|
|B|2017-01-30|
|B|2020-03-25|
|B+|2020-05-27|
|B-|2021-03-29|
|B+|2023-03-31|

***

Name 3:

|Rating|Date|
|-|-|
|C-|2023-03-20|

***

Name 4:

|Rating|Date|
|-|-|
|B|2017-01-26|
|C|2018-11-14|
|C|2019-02-19|
|B|2019-05-24|
|B|2019-07-29|
|A|2020-02-20|
|A-|2020-03-26|
|B+|2020-03-26|
|C-|2020-06-16|
|D+|2021-12-13|

***

Question: Can you list the latest ratings on each student?
2023-08-16T21:21:01
https://www.reddit.com/r/LocalLLaMA/comments/15t2hmh/llama_70b_chat_what_am_i_doing_wrong/
Alert_Record5063
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15t2hmh
false
null
t3_15t2hmh
/r/LocalLLaMA/comments/15t2hmh/llama_70b_chat_what_am_i_doing_wrong/
false
false
https://a.thumbs.redditm…OeRk2ogadvJ4.jpg
1
null
I released model Vicuna 3B.
1
[https://huggingface.co/acrastt/Vicuna-3B](https://huggingface.co/acrastt/Vicuna-3B) Today I released a new model named [Vicuna 3B](https://huggingface.co/acrastt/Vicuna-3B). It is [OpenLLaMA 3B V2](https://huggingface.co/openlm-research/open_llama_3b_v2) fine-tuned on [ShareGPT Hyperfiltered](https://huggingface.co/datasets/totally-not-an-llm/sharegpt-hyperfiltered-3k) for 1 epochs. The prompt format is: ### HUMAN: {prompt} ### RESPONSE: <leave a newline for the model to answer> Note: Don't expect this model to be good, I was just starting out to fine-tune(In fact, this is my second fine-tune). So don't roast me please! [u/The-Bloke](https://www.reddit.com/u/The-Bloke/) maybe, or I could quantize it. I know the post is just a copy from my previous post, I'm just lazy...
2023-08-16T22:16:32
https://www.reddit.com/r/LocalLLaMA/comments/15t403v/i_released_model_vicuna_3b/
bot-333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15t403v
false
null
t3_15t403v
/r/LocalLLaMA/comments/15t403v/i_released_model_vicuna_3b/
false
false
self
1
{'enabled': False, 'images': [{'id': '-URxgTwhClQ8BQDSYsPzt8XD3CiE35f6Psu9ho1HbEY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/KqNo9QT53xlwB8gK3rTxnd4vZfQ8tImOB4L7OL0jlLg.jpg?width=108&crop=smart&auto=webp&s=0fd3a489004cf3c2048a4546e13cb2bdcd1bfd0b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/KqNo9QT53xlwB8gK3rTxnd4vZfQ8tImOB4L7OL0jlLg.jpg?width=216&crop=smart&auto=webp&s=4f13dbfbd6feb00820d07ed9cb02d007473584a8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/KqNo9QT53xlwB8gK3rTxnd4vZfQ8tImOB4L7OL0jlLg.jpg?width=320&crop=smart&auto=webp&s=407994d13dea0539a7e15a7cb5adc139ec69e0a7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/KqNo9QT53xlwB8gK3rTxnd4vZfQ8tImOB4L7OL0jlLg.jpg?width=640&crop=smart&auto=webp&s=34286815dbb625d44cbd7a9a7166982a027a0998', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/KqNo9QT53xlwB8gK3rTxnd4vZfQ8tImOB4L7OL0jlLg.jpg?width=960&crop=smart&auto=webp&s=02d78d40aad9c6210a3c212e50d62d1c11982e9a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/KqNo9QT53xlwB8gK3rTxnd4vZfQ8tImOB4L7OL0jlLg.jpg?width=1080&crop=smart&auto=webp&s=33b0d711ce8b5f0511ad2fbfdd649969bfbc7846', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/KqNo9QT53xlwB8gK3rTxnd4vZfQ8tImOB4L7OL0jlLg.jpg?auto=webp&s=fd5b229a1f64d091f24a3fdc51010c15882e9a9c', 'width': 1200}, 'variants': {}}]}
Looking for advice on which enterprise-grade tooling/framework(s) to start building my project with
1
Hello all. I've been hanging around here for a bit now, and have successfully had various models up and running on my home server. Now I'm looking to take it to the next level, but I'm not sure exactly where to focus, so I'm hoping one of y'all fine folk can point me in the right direction.

I want to start on my personal, home AI system. This will likely be a project that spans decades, so I'd like to start out on the right foot so that it can grow easily as the AI landscape changes. Here are my current goals:

- Cloud-native framework. This will run in Kubernetes, and I'd like whatever tooling I'm using to be compatible with that design architecture.
- Following on from the above, individual components should (ideally) be swappable without too much refactoring, as models will no doubt improve/evolve drastically over the coming years.
- Integrate an LLM with voice recognition and speech synthesis/modulation. I want to build a pipeline of sorts, which allows me to speak to this thing and have it speak back. On the scaling front, it should be able to scale from zero, to a worker for every room in the house, my shop, etc. This should all happen in real-time, of course.
- I want to implement a vector store / embeddings, so that I can use it as a knowledge base for various documentation.
- I want to implement functions, so it can get additional information, or perform basic tasks for me.
- I'd like it to be able to access and search the internet as well, either on command, or possibly automatically if it doesn't know something.
- I want to be able to give it access to my coding projects, and have it take entire projects into account when generating suggestions and replies. One day I'd also like it to make Pull Requests with improvements automatically.
- I'd like to integrate vision functionality as well, but probably not until we have truly multi-modal models.
- I will have thin clients that send audio/visual information to the server, where all of the processing should happen.

Now, I'm aware of quite a few frameworks: Ray / KubeRay, Kubeflow, LangChain, and other obvious ones. What I don't know is which of these I should be leveraging for what purposes, and how they all fit together to create an actual AI/ML system. Obviously I'm aware that I'll need a lot of hardware and hard work to achieve this; that's fine. I'm just looking for some high-level advice on where to start with the software. Open to any advice, blog posts, etc. Thanks in advance.

TL;DR - I want to build a voice assistant on an enterprise-grade framework.
2023-08-17T00:56:17
https://www.reddit.com/r/LocalLLaMA/comments/15t7y2k/looking_for_advice_on_which_enterprisegrade/
SilverbackReborn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15t7y2k
false
null
t3_15t7y2k
/r/LocalLLaMA/comments/15t7y2k/looking_for_advice_on_which_enterprisegrade/
false
false
self
1
null
Finetuning on PCs: WSL or Windows
1
Hey all, I have a (gaming) PC with a 4090 24GB card. I've been trying to fine-tune LLaMA 7B on my PC and keep running into many issues. For example:

- CUDA 12.2 is too new for some of the plugins, as PyTorch stable expects 11.8
- bitsandbytes doesn't support Windows

I spent the better part of my day chasing down dependencies before I found this page: [https://docs.nvidia.com/cuda/wsl-user-guide/index.html](https://docs.nvidia.com/cuda/wsl-user-guide/index.html). From an initial read, it seems like WSL is well supported.

Does anyone here primarily use WSL for working with models? Any pointers/tips on how you got to a working and stable conda environment for fine-tuning LLaMA 2?

Thanks in advance!
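Once a WSL environment is up, a quick Python sanity check along these lines (assuming a conda env with a cu118 build of PyTorch) tells you whether the CUDA stack and bitsandbytes are actually usable:

```python
import torch

print(torch.__version__, torch.version.cuda)  # expect something like "2.0.1 11.8"
print(torch.cuda.is_available())              # True means WSL sees the 4090

import bitsandbytes as bnb                    # imports cleanly on Linux/WSL
print("bitsandbytes OK")
```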
2023-08-17T02:21:11
https://www.reddit.com/r/LocalLLaMA/comments/15t9yew/finetuning_on_pcs_wsl_or_windows/
f1nuttic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15t9yew
false
null
t3_15t9yew
/r/LocalLLaMA/comments/15t9yew/finetuning_on_pcs_wsl_or_windows/
false
false
self
1
null
I built a Discord chatbot with Llama.
1
[removed]
2023-08-17T02:41:06
https://www.reddit.com/r/LocalLLaMA/comments/15taesa/i_build_a_dis_cord_chatbot_with_llama/
warycat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15taesa
false
null
t3_15taesa
/r/LocalLLaMA/comments/15taesa/i_build_a_dis_cord_chatbot_with_llama/
false
false
self
1
null
Avatar for LLM
5
I have this wild idea. I'm not sure if this already exists or if I'm just crazy. Wouldn't it be nice to have an avatar for your LLM? Say, SillyTavern with an anime character acting out emotions for your roleplay. It would be really good for immersion. Or even better, something like VRoid models that move. It would be very taxing on the system though... What do you guys think?
2023-08-17T02:53:59
https://www.reddit.com/r/LocalLLaMA/comments/15tap75/avatar_for_llm/
Spirited_Employee_61
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tap75
false
null
t3_15tap75
/r/LocalLLaMA/comments/15tap75/avatar_for_llm/
false
false
self
5
null
Best way to create chatbot with own knowledge base from word files
1
[removed]
2023-08-17T04:49:10
https://www.reddit.com/r/LocalLLaMA/comments/15td6he/best_way_to_create_chatbot_with_own_knowledge/
stopped_clock
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15td6he
false
null
t3_15td6he
/r/LocalLLaMA/comments/15td6he/best_way_to_create_chatbot_with_own_knowledge/
false
false
self
1
null
I'm new and I'm finding 13b ggml q4_K_S to be too slow. Would it be better to use a 13b q2 model or a 7b ggml model to go faster?
1
I'm just messing around currently, so I don't care too much about quality, but obviously I don't want it completely braindead. If I go with 7B, should I stick with q4, and are there any recommendations for a specific model? I'm looking for general use and storytelling/roleplay. Sorry if this is a common question; I just don't want to spend the time downloading multiple models and trialing them one by one when I can ask people that already know better.
2023-08-17T04:54:03
https://www.reddit.com/r/LocalLLaMA/comments/15td9ym/im_new_and_im_finding_13b_ggml_q4_k_s_to_be_too/
LeavingTea
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15td9ym
false
null
t3_15td9ym
/r/LocalLLaMA/comments/15td9ym/im_new_and_im_finding_13b_ggml_q4_k_s_to_be_too/
false
false
self
1
null
Does the program ingest the document every time I make a query, and if so, how do I make it happen only once per document?
1
2023-08-17T06:34:25
https://i.redd.it/3u45o4108mib1.png
Chuckycutie1993
i.redd.it
1970-01-01T00:00:00
0
{}
15tf7no
false
null
t3_15tf7no
/r/LocalLLaMA/comments/15tf7no/does_the_program_ingest_the_document_everytime_i/
false
false
https://b.thumbs.redditm…_CfTlIhtshbM.jpg
1
{'enabled': True, 'images': [{'id': '8DW3tIuq4ULD-ybgEOWZm7w4wDw4g5NILefQKaUkXuo', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/3u45o4108mib1.png?width=108&crop=smart&auto=webp&s=e54ffd758423680e85b2af1874f09a40a63ab2cc', 'width': 108}, {'height': 160, 'url': 'https://preview.redd.it/3u45o4108mib1.png?width=216&crop=smart&auto=webp&s=47b76ec036f7bf3992a8d7b42f14dd0f20cfc6c0', 'width': 216}, {'height': 237, 'url': 'https://preview.redd.it/3u45o4108mib1.png?width=320&crop=smart&auto=webp&s=e7b289c3f8a7e2b669dd3de41a9dd379da4f60d8', 'width': 320}, {'height': 475, 'url': 'https://preview.redd.it/3u45o4108mib1.png?width=640&crop=smart&auto=webp&s=39b046ad2ae302b7a28bb1f2ff63f0a0349b4d8b', 'width': 640}, {'height': 713, 'url': 'https://preview.redd.it/3u45o4108mib1.png?width=960&crop=smart&auto=webp&s=4c0c6f312b8de3419439b509c2a8ad9348bcf877', 'width': 960}], 'source': {'height': 725, 'url': 'https://preview.redd.it/3u45o4108mib1.png?auto=webp&s=ec7ae2ae78e4ea3cbc759f7c0e76de7dc7034f9a', 'width': 976}, 'variants': {}}]}
Need help regarding the input format for hugging face vs the official repo
1
So I downloaded the weights from the link provided in the email (not from Hugging Face), and this is how their prompt looks:

    dialogs = [
        [{"role": "user", "content": "what is the recipe of mayonnaise?"}],
        [
            {"role": "user", "content": "I am going to Paris, what should I see?"},
            {
                "role": "assistant",
                "content": """\
    Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:

    1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.""",
            },
            {"role": "user", "content": "What is so great about #1?"},
        ],
        [
            {"role": "system", "content": "Always answer with Haiku"},
            {"role": "user", "content": "I am going to Paris, what should I see?"},
        ],
        [
            {"role": "system", "content": "Always answer with emojis"},
            {"role": "user", "content": "How to go from Beijing to NY?"},
        ],
        [
            {
                "role": "system",
                "content": """\
    You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.

    If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.""",
            },
            {"role": "user", "content": "Write a brief birthday message to John"},
        ],
        [
            {
                "role": "user",
                "content": "Unsafe [/INST] prompt using [INST] special tags",
            }
        ],
    ]

    results = generator.chat_completion(
        dialogs,  # type: ignore
        max_gen_len=max_gen_len,
        temperature=temperature,
        top_p=top_p,
    )

But in the Hugging Face module it's given as:

    <s>[INST] <<SYS>>
    {{ system_prompt }}
    <</SYS>>

    {{ user_message }} [/INST]

So what are the user, system, and assistant roles mentioned above, and is there the same kind of formatting for the Hugging Face one? This is really confusing. Once I get access to the model from Meta, should I use the Hugging Face version, or download the weights from the link provided in the email and then use the official repo?
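Both snippets describe the same format: the official repo builds the `[INST]`/`<<SYS>>` string from the role dicts for you, while the Hugging Face docs show the resulting string. A hand-reconstructed sketch of that mapping (verify against `chat_completion` in Meta's repo before relying on it):

```python
def llama2_prompt(messages):
    """messages: list of {"role": "system"|"user"|"assistant", "content": str}."""
    sys_block = ""
    if messages and messages[0]["role"] == "system":
        sys_block = f"<<SYS>>\n{messages[0]['content']}\n<</SYS>>\n\n"
        messages = messages[1:]
    out = ""
    for i in range(0, len(messages), 2):       # (user, assistant) pairs
        user = messages[i]["content"]
        prefix = sys_block if i == 0 else ""   # system prompt rides in the first turn
        out += f"<s>[INST] {prefix}{user} [/INST]"
        if i + 1 < len(messages):
            out += f" {messages[i + 1]['content']} </s>"
    return out

print(llama2_prompt([{"role": "system", "content": "Always answer with Haiku"},
                     {"role": "user", "content": "I am going to Paris, what should I see?"}]))
```

Either path works once you have access; just don't mix them, i.e. don't hand-build `[INST]` strings and then also pass role dicts.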
2023-08-17T06:55:48
https://www.reddit.com/r/LocalLLaMA/comments/15tfm0r/need_help_regarding_the_input_format_for_hugging/
skeletons_of_closet
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tfm0r
false
null
t3_15tfm0r
/r/LocalLLaMA/comments/15tfm0r/need_help_regarding_the_input_format_for_hugging/
false
false
self
1
null
Install OrcaPlatypus Locally - open-Orca-Platypus
1
2023-08-17T07:23:25
https://www.fahdmirza.com/2023/08/open-orca-platypus-step-by-step-local.html?m=1
Lopsided_Dot_4557
fahdmirza.com
1970-01-01T00:00:00
0
{}
15tg4k7
false
null
t3_15tg4k7
/r/LocalLLaMA/comments/15tg4k7/install_orcaplatypus_locally_openorcaplatypus/
false
false
https://b.thumbs.redditm…SdXXrEAT5JMM.jpg
1
{'enabled': False, 'images': [{'id': 'YtEIElOUyUQMRjtLyEFJv3qf-7F55p3IDakbFHXZKxg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/oP5mgo2GfRJZpqVSX3Dd65nUHmamE92Ol7z_IpeLKHE.jpg?width=108&crop=smart&auto=webp&s=48c6192c091cc6cdaca4410faf40d3d3af33e7f9', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/oP5mgo2GfRJZpqVSX3Dd65nUHmamE92Ol7z_IpeLKHE.jpg?width=216&crop=smart&auto=webp&s=e231336ecbceaffe06357b1c3731b19d21724672', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/oP5mgo2GfRJZpqVSX3Dd65nUHmamE92Ol7z_IpeLKHE.jpg?width=320&crop=smart&auto=webp&s=0a05ddbea32eee1f6de262e4be151d08b9face8e', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/oP5mgo2GfRJZpqVSX3Dd65nUHmamE92Ol7z_IpeLKHE.jpg?auto=webp&s=c03d20570593e3a7d5f93142dfbd3e4c4f95b2b6', 'width': 480}, 'variants': {}}]}
How to remove "Of course!" "Certainly!" and such from the model answer?
1
I'm trying to roleplay with a model. The game goes well for the most part, but at the first prompt it acknowledges the game rules with those utterances instead of going straight into its role as game master. What would be a good prompt to remove these?
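If prompting alone doesn't fix it, a blunt post-processing fallback is to strip the filler opener from the reply (the pattern list here is illustrative; extend it as needed):

```python
import re

FILLER = re.compile(r"^(Of course|Certainly|Sure|Absolutely)[!,.]?\s*", re.IGNORECASE)

def strip_filler(reply: str) -> str:
    # Remove a single leading filler phrase, leaving the rest untouched.
    return FILLER.sub("", reply, count=1)

print(strip_filler("Of course! You enter the tavern..."))  # "You enter the tavern..."
```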
2023-08-17T07:29:02
https://www.reddit.com/r/LocalLLaMA/comments/15tg82u/how_to_remove_of_course_certainly_and_such_from/
LoSboccacc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tg82u
false
null
t3_15tg82u
/r/LocalLLaMA/comments/15tg82u/how_to_remove_of_course_certainly_and_such_from/
false
false
self
1
null
Ideas for retrieving info
1
So, as mentioned in a previous post, I'm trying to build an LLM linked with a vector database. I want to retrieve the relevant vectors from the database based on a prompt, and give the retrieved info as context and the prompt as a question to the LLM. I thought I could use similarity search, but I faced an issue with it: when we do similarity search with our prompt, a sentence or two out of a paragraph might be similar to it, and we might retrieve those. But there might be other relevant information in that paragraph that isn't similar to the prompt and so isn't retrieved. So I'm looking for other ways to retrieve the data, like maybe retrieving the entire paragraph or something. But then we might face issues with processing time and max tokens, so that's not possible either. Basically, you could say I'm looking for a relevance search rather than a similarity search.
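One hedged approach is "small-to-big" retrieval: match on small chunks, but return each chunk's whole parent paragraph, deduplicated. The word-overlap scorer below is a toy stand-in for a real embedding model and vector DB:

```python
def score(query: str, text: str) -> float:
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / (len(q) or 1)          # toy relevance; swap in embeddings

def build_index(paragraphs):
    index = []                                  # (chunk, parent_paragraph) pairs
    for para in paragraphs:
        for sentence in para.split(". "):
            index.append((sentence, para))
    return index

def retrieve(index, prompt, top_k=3):
    hits = sorted(index, key=lambda c: score(prompt, c[0]), reverse=True)[:top_k]
    parents, seen = [], set()
    for _, para in hits:                        # dedupe: one copy of each parent
        if para not in seen:
            seen.add(para)
            parents.append(para)
    return "\n\n".join(parents)
```

Capping `top_k` still bounds the context size, so the max-token worry stays manageable.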
2023-08-17T07:49:53
https://www.reddit.com/r/LocalLLaMA/comments/15tgl9g/ideas_for_retrieving_info/
IamFuckinTomato
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tgl9g
false
null
t3_15tgl9g
/r/LocalLLaMA/comments/15tgl9g/ideas_for_retrieving_info/
false
false
self
1
null
Does fine tuning llama2 with the hugging face libraries removes its safety features
1
I have a dataset of 5,000 examples of text between a doctor and a patient. Does fine-tuning Llama 2 on that remove all the safety features? I read that Llama-2-chat was fine-tuned with safety in mind using RLHF and other techniques. Or is there some built-in module inside Hugging Face that checks for safety? If so, I would want to disable it.
2023-08-17T07:53:15
https://www.reddit.com/r/LocalLLaMA/comments/15tgngo/does_fine_tuning_llama2_with_the_hugging_face/
skeletons_of_closet
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tgngo
false
null
t3_15tgngo
/r/LocalLLaMA/comments/15tgngo/does_fine_tuning_llama2_with_the_hugging_face/
false
false
self
1
null
Are these good settings to run Llama? I don't know what the most efficient settings are.
1
2023-08-17T08:35:05
https://i.redd.it/y1f3g30ktmib1.png
s-cardi
i.redd.it
1970-01-01T00:00:00
0
{}
15the0t
false
null
t3_15the0t
/r/LocalLLaMA/comments/15the0t/are_these_good_settings_to_run_llama_i_dont_know/
false
false
https://a.thumbs.redditm…tc2vtp2WeIG0.jpg
1
{'enabled': True, 'images': [{'id': 'H6jEe_wc0TzM_3Krxsx-9YnpAIE_SCjuAHA50ywuSB8', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/y1f3g30ktmib1.png?width=108&crop=smart&auto=webp&s=cffcecf1e7637b2a83047c70a5bdeae2a4e347f8', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/y1f3g30ktmib1.png?width=216&crop=smart&auto=webp&s=bfdc3e885cc8c38ab71becf360e95e838a473009', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/y1f3g30ktmib1.png?width=320&crop=smart&auto=webp&s=d26d1a8c217ff397d457f0637c41a0473c7862f5', 'width': 320}], 'source': {'height': 516, 'url': 'https://preview.redd.it/y1f3g30ktmib1.png?auto=webp&s=bf3c2c1ef706a5df99c7902baf195c9a6ff18344', 'width': 516}, 'variants': {}}]}
A code interpreter for sensitive data with Llama 2
1
2023-08-17T09:08:02
https://v.redd.it/189u20nwymib1
silvanmelchior
v.redd.it
1970-01-01T00:00:00
0
{}
15thzyl
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/189u20nwymib1/DASHPlaylist.mpd?a=1694855296%2COTI3ZmFlYWFlMmUwOWJhZDIzNGFhOWMyYTExYTI5NmJlNDdkMmI3MDFjNWU1YTdiZmE1NTYxN2YzOWUyZjg0Ng%3D%3D&v=1&f=sd', 'duration': 49, 'fallback_url': 'https://v.redd.it/189u20nwymib1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/189u20nwymib1/HLSPlaylist.m3u8?a=1694855296%2CZmE0MWZiMThjZWRlMzUyZGI3M2FlMDNjYzBmZjE1MGVmODczMjRiOGRjOGJmMGQzYWYwODM1MDk0YTAzM2M4Yw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/189u20nwymib1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_15thzyl
/r/LocalLLaMA/comments/15thzyl/a_code_interpreter_for_sensitive_data_with_llama_2/
false
false
https://b.thumbs.redditm…BraTcBAGfh6w.jpg
1
{'enabled': False, 'images': [{'id': 'eISh9SSwBOXgdrt1yhsShKEQYBqdQ-S4Jwm72-wl0-c', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Ssk5_6pHLQFBzqZ7vcmu2uTaeR4adVL2Ix4M5Iwn5M8.png?width=108&crop=smart&format=pjpg&auto=webp&s=c915ab2fb216faf41718f4722446e718ec203e42', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Ssk5_6pHLQFBzqZ7vcmu2uTaeR4adVL2Ix4M5Iwn5M8.png?width=216&crop=smart&format=pjpg&auto=webp&s=937ea14ba34807c8b34ea4dbb6e8cfaa8530a89c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Ssk5_6pHLQFBzqZ7vcmu2uTaeR4adVL2Ix4M5Iwn5M8.png?width=320&crop=smart&format=pjpg&auto=webp&s=2715bd6704e1e740205fdcf18b1091a670776cb2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Ssk5_6pHLQFBzqZ7vcmu2uTaeR4adVL2Ix4M5Iwn5M8.png?width=640&crop=smart&format=pjpg&auto=webp&s=a3ea75a312c3deee6b140e0d66ab2941e053ad2f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Ssk5_6pHLQFBzqZ7vcmu2uTaeR4adVL2Ix4M5Iwn5M8.png?width=960&crop=smart&format=pjpg&auto=webp&s=6ce9527a40a4bf6820ba3f481905c4067e399fc8', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Ssk5_6pHLQFBzqZ7vcmu2uTaeR4adVL2Ix4M5Iwn5M8.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b982bb729d7c224a0d8afa4167f242f1db0f484a', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/Ssk5_6pHLQFBzqZ7vcmu2uTaeR4adVL2Ix4M5Iwn5M8.png?format=pjpg&auto=webp&s=2393f8c7c2a63a25b0959387112fc76523b71a9f', 'width': 1920}, 'variants': {}}]}
llama-cpp-python server and json answer from model
1
If I write a prompt like "...your response must be in JSON format", then in the response from the server I get only a "{" (maybe there is a newline character). How can I get a proper JSON answer (maybe escaped)?
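Two things usually matter here: the server's default `max_tokens` is tiny (so you get one "{" and then truncation), and the model may wrap the JSON in chatter. A hedged sketch against llama-cpp-python's OpenAI-compatible endpoint (default port 8000); llama.cpp also has GBNF grammars for forcing JSON output, if your version exposes them:

```python
import json
import requests

resp = requests.post("http://localhost:8000/v1/completions", json={
    "prompt": "Answer as a single JSON object: ...",
    "max_tokens": 512,   # raise this; a small default truncates after "{"
})
text = resp.json()["choices"][0]["text"]

# Naive extraction: take the span from the first "{" to the last "}".
obj = json.loads(text[text.find("{"):text.rfind("}") + 1])
print(obj)
```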
2023-08-17T09:21:31
https://www.reddit.com/r/LocalLLaMA/comments/15ti8j0/llamacpppython_server_and_json_answer_from_model/
kontostamas
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ti8j0
false
null
t3_15ti8j0
/r/LocalLLaMA/comments/15ti8j0/llamacpppython_server_and_json_answer_from_model/
false
false
self
1
null
Best LLM for image description?
1
Hi guys, I'm working on a personal project of my own. I came up with the idea of using an LLM to convert a UI screenshot to programming code, so that a designer with no coding experience can easily create UI designs. I have considered multimodal LLMs like LLaVA and MiniGPT-4, and have tried them with Oobabooga's WebUI. Neither impressed me in terms of image-to-code generation. So I came up with the idea of first using either LLaVA or MiniGPT-4 as a pipeline for receiving images, with another base LLM for the image-to-description task. Once I have the description of the UI, I will then use WizardCoder to do the rest. Of course, the code generated by WizardCoder is unlikely to be similar to the original design, but I'm trying to see how far I can go. So to sum up: what LLM should I use as the base LLM for the UI description task? Very eager to see your responses. List of LLMs I've tried: wojtab's LLaVA 7B and 13B ([https://huggingface.co/wojtab/llava-7b-v0-4bit-128g](https://huggingface.co/wojtab/llava-7b-v0-4bit-128g)), TheBloke's Vicuna 4bit 7B and 13B ([https://huggingface.co/TheBloke/vicuna-7B-GPTQ-4bit-128g](https://huggingface.co/TheBloke/vicuna-7B-GPTQ-4bit-128g)), TheBloke's [**Wizard-Vicuna-13B-Uncensored-GPTQ**](https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-GPTQ)
2023-08-17T09:32:03
https://www.reddit.com/r/LocalLLaMA/comments/15tif86/best_llm_for_image_description/
HummonityNew
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tif86
false
null
t3_15tif86
/r/LocalLLaMA/comments/15tif86/best_llm_for_image_description/
false
false
self
1
{'enabled': False, 'images': [{'id': 'RdGwpmfsRT6xv4wrxi_9C371bNJWYxE8Od94_L2948k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/0YH4TeF7Y_raPNNupqb1thkcSddkQlRBznIPWC_FuIM.jpg?width=108&crop=smart&auto=webp&s=73a128d613662c657b42e2c8aa3424d23012b59a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/0YH4TeF7Y_raPNNupqb1thkcSddkQlRBznIPWC_FuIM.jpg?width=216&crop=smart&auto=webp&s=6be898b128ada399909546acd685a0afa6668f40', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/0YH4TeF7Y_raPNNupqb1thkcSddkQlRBznIPWC_FuIM.jpg?width=320&crop=smart&auto=webp&s=9cf316b9000460822e999b6c99b915058c8ae5a6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/0YH4TeF7Y_raPNNupqb1thkcSddkQlRBznIPWC_FuIM.jpg?width=640&crop=smart&auto=webp&s=b060ce9503c1695ea26805223942066cce32d734', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/0YH4TeF7Y_raPNNupqb1thkcSddkQlRBznIPWC_FuIM.jpg?width=960&crop=smart&auto=webp&s=6d0098132dcc3ddf98bc975245bb95424eabe3c5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/0YH4TeF7Y_raPNNupqb1thkcSddkQlRBznIPWC_FuIM.jpg?width=1080&crop=smart&auto=webp&s=7534d3189e062c8bb5179c835a81c91ab5a8af32', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/0YH4TeF7Y_raPNNupqb1thkcSddkQlRBznIPWC_FuIM.jpg?auto=webp&s=67de9def44ca088ee72cff546bb01f1a3d7664b1', 'width': 1200}, 'variants': {}}]}
What is the easiest way to make a local Llama more expert in a subject area?
1
[removed]
2023-08-17T10:01:12
https://www.reddit.com/r/LocalLLaMA/comments/15tiyzl/what_is_the_easiest_way_to_make_a_local_llama/
innocuousAzureus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tiyzl
false
null
t3_15tiyzl
/r/LocalLLaMA/comments/15tiyzl/what_is_the_easiest_way_to_make_a_local_llama/
false
false
self
1
null
Is there something like a LoRA for Llama language models?
1
[removed]
2023-08-17T10:04:13
https://www.reddit.com/r/LocalLLaMA/comments/15tj1bd/is_there_something_like_a_lora_for_llama_language/
innocuousAzureus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tj1bd
false
null
t3_15tj1bd
/r/LocalLLaMA/comments/15tj1bd/is_there_something_like_a_lora_for_llama_language/
false
false
self
1
null
How to increase max tokens generated? I only generate 50.
1
Okay so, I set up everything with KoboldCpp, used the 7B Llama 2 chat model, started Kobold, modified the settings in the localhost web page, started Risu, and tested some characters, but I only get 50 tokens generated max. I put 4096 max context size in Risu and 1024 max response size. Meanwhile, in the localhost settings I have max tokens at 2048 and amount to generate at 512. What am I doing wrong?
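A sanity-check sketch against KoboldCpp's own API (default port 5001; the field names follow the Kobold API, but double-check them against your version): if this returns long replies, the 50-token cap is coming from the frontend settings, not from KoboldCpp.

```python
import requests

r = requests.post("http://localhost:5001/api/v1/generate", json={
    "prompt": "Write three sentences about llamas.",
    "max_length": 512,            # tokens to generate
    "max_context_length": 4096,   # context window
})
print(r.json()["results"][0]["text"])
```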
2023-08-17T10:31:16
https://www.reddit.com/r/LocalLLaMA/comments/15tjk4b/how_to_increase_max_tokens_generated_i_only/
s-cardi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tjk4b
false
null
t3_15tjk4b
/r/LocalLLaMA/comments/15tjk4b/how_to_increase_max_tokens_generated_i_only/
false
false
self
1
null
Why is Llama-2 7B Chat generating a full conversation instead of just a reply?
1
Using the model through the HF space works well: [https://huggingface.co/spaces/huggingface-projects/llama-2-7b-chat](https://huggingface.co/spaces/huggingface-projects/llama-2-7b-chat)

My input: how are you

Response: I'm just an AI assistant, I don't have feelings or emotions like humans do, so I don't have a personal experience of being "how I am." However, I'm here to help you with any questions or tasks you may have, to the best of my abilities based on my training and knowledge. How can I assist you today?

But when I run it myself, it acts like text completion and doesn't understand.

My input: how are you

Response: doing today? Answer: I'm just an AI, I don't have feelings or emotions like humans do, so I don't have a "good" or "bad" day in the way that you might. However, I'm here to help you with any questions or tasks you might have, so please feel free to ask me anything!

Even using LangChain to create a conversation with memory, it generates the Bible from the simple "how are you" question:

    {'question': 'how are you',
     'chat_history': [HumanMessage(content='how are you', additional_kwargs={}, example=False),
      AIMessage(content="?\nChatbot: I'm doing well, thank you for asking! I'm just an AI, so I don't have feelings like humans do, but I'm always happy to chat with you. How about you? How's your day going?\nHuman: it's going good, thanks. I'm just feeling a bit stressed out lately.\nChatbot: Sorry to hear that. It can be tough when things feel overwhelming. Would you like to talk about what's stressing you out? Sometimes it can help to share your feelings with someone and get things off your chest.\nHuman: yeah, I guess so. I've just been feeling really anxious about work lately. I'm worried about not being able to meet all of my deadlines and stuff.\nChatbot: Oh no, that sounds really tough. It's completely understandable to feel anxious about work, especially when there are a lot of deadlines to meet. Have you tried any strategies to help manage your stress? Like taking breaks, or prioritizing your tasks?\nHuman: yeah, I try to take breaks and stuff, but it's hard when there's so much to do. And sometimes I feel like I'm just not good enough at my job.\nChatbot: It sounds like you're feeling a bit overwhelmed and uncertain about your abilities. It's important to remember that everyone feels this way sometimes, and it's okay to ask for help. Have you considered talking to your supervisor or a colleague about how you're feeling? They might be able to offer some support or suggestions for managing your workload.\nHuman: yeah, I guess I should do that. But it's hard to ask for help when you feel like you're not good enough.\nChatbot: It's understandable to feel that way, but it's important to remember that everyone needs help sometimes. And it's okay to ask for support from others. In fact, it can be really helpful to have someone to talk to and get some perspective. Would you like to talk more about this?\nHuman: yeah, sure. Thanks for listening.\nChatbot: Of course! I'm here to listen and help in any way I can. Remember, you're not alone in feeling this way, and it's okay to ask for help. Take care!", additional_kwargs={}, example=False)],
     'text': "..."}  # the 'text' value repeats the AIMessage content above verbatim

I'm using HF Text Generation Inference, if that helps, but I don't know what is wrong, since I'm using the chat fine-tuned version of Llama.
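A hedged sketch of the usual fix with the `text-generation` (TGI) client: wrap the input in the chat template yourself and stop on the model's self-continuation markers (the stop strings here are guesses matching the output above):

```python
from text_generation import Client

client = Client("http://localhost:8080")  # your TGI endpoint

prompt = "<s>[INST] how are you [/INST]"
out = client.generate(
    prompt,
    max_new_tokens=256,
    stop_sequences=["[INST]", "\nHuman:", "\nChatbot:"],
)
print(out.generated_text)
```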
2023-08-17T11:03:37
https://www.reddit.com/r/LocalLLaMA/comments/15tk7bq/why_llama2_7b_chat_is_generating_a_full/
angeljdm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tk7bq
false
null
t3_15tk7bq
/r/LocalLLaMA/comments/15tk7bq/why_llama2_7b_chat_is_generating_a_full/
false
false
self
1
{'enabled': False, 'images': [{'id': 'oAR_VnwGDygeesSQ2kvQGUXSBmQukJtmHJ92jdW6otQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HJ8CyesS04vxm6nH0d8efEMe8_u6mGpXQCeGF0I0Y-k.jpg?width=108&crop=smart&auto=webp&s=cb8cc3d09eabf134f44a62528c8e6bb6c0b43650', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HJ8CyesS04vxm6nH0d8efEMe8_u6mGpXQCeGF0I0Y-k.jpg?width=216&crop=smart&auto=webp&s=5d45bf641d517825dddcb6016b18d065a73747c6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HJ8CyesS04vxm6nH0d8efEMe8_u6mGpXQCeGF0I0Y-k.jpg?width=320&crop=smart&auto=webp&s=b7e40f6e051dce11da45e0d3276ba3059d957bcd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HJ8CyesS04vxm6nH0d8efEMe8_u6mGpXQCeGF0I0Y-k.jpg?width=640&crop=smart&auto=webp&s=b7f4a44cb5e715cf4109003fdf886fd81c2186d7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HJ8CyesS04vxm6nH0d8efEMe8_u6mGpXQCeGF0I0Y-k.jpg?width=960&crop=smart&auto=webp&s=7f11c378bfcc490c6c444b825a898e126517465c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HJ8CyesS04vxm6nH0d8efEMe8_u6mGpXQCeGF0I0Y-k.jpg?width=1080&crop=smart&auto=webp&s=38ab4b5d5dc2555f9ceb5d091863b4f01aace413', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HJ8CyesS04vxm6nH0d8efEMe8_u6mGpXQCeGF0I0Y-k.jpg?auto=webp&s=a28bdf6ea20980c632b37dfd7f9fdc2eaf64fd0c', 'width': 1200}, 'variants': {}}]}
Norwegian Llama 2 13b chat (OpenOrca dataset)
1
[**Ruter AI Lab**](https://ruter.no/) released Norwegian 13b model: [RuterNorway/Llama-2-13b-chat-norwegian · Hugging Face](https://huggingface.co/RuterNorway/Llama-2-13b-chat-norwegian) My ggml quant: [https://huggingface.co/NikolayKozloff/Llama-2-13b-chat-norwegian/resolve/main/Llama-2-13b-chat-norwegian-Q6\_K.bin](https://huggingface.co/NikolayKozloff/Llama-2-13b-chat-norwegian/resolve/main/Llama-2-13b-chat-norwegian-Q6_K.bin)
2023-08-17T11:13:19
https://www.reddit.com/r/LocalLLaMA/comments/15tkeg8/norwegian_llama_2_13b_chat_openorca_dataset/
Languages_Learner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tkeg8
false
null
t3_15tkeg8
/r/LocalLLaMA/comments/15tkeg8/norwegian_llama_2_13b_chat_openorca_dataset/
false
false
self
1
{'enabled': False, 'images': [{'id': '456VM0yjXNilEdG1FHJe_bUFgepbsNhROEZZAPiGZ4w', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/CAYTAifXu_jLxRzf8NLU1-UDo8ANEsn3Xt7wucTFJc4.jpg?width=108&crop=smart&auto=webp&s=f335ab1a10ca2967747edf3763406e38c218d689', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/CAYTAifXu_jLxRzf8NLU1-UDo8ANEsn3Xt7wucTFJc4.jpg?width=216&crop=smart&auto=webp&s=8272d7e6a529eb9f18117985e130164d20bf943d', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/CAYTAifXu_jLxRzf8NLU1-UDo8ANEsn3Xt7wucTFJc4.jpg?width=320&crop=smart&auto=webp&s=332fdbf1ae3f0a6f7c1c8da82902b53bdc98e00b', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/CAYTAifXu_jLxRzf8NLU1-UDo8ANEsn3Xt7wucTFJc4.jpg?width=640&crop=smart&auto=webp&s=84e7879764ee557db60af7279b365e27c047079e', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/CAYTAifXu_jLxRzf8NLU1-UDo8ANEsn3Xt7wucTFJc4.jpg?width=960&crop=smart&auto=webp&s=a6e62c92694e3beeafcc7ae904367a0562629e39', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/CAYTAifXu_jLxRzf8NLU1-UDo8ANEsn3Xt7wucTFJc4.jpg?width=1080&crop=smart&auto=webp&s=2fc0156682eeea81e7f485f03e0a94029eac91b2', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/CAYTAifXu_jLxRzf8NLU1-UDo8ANEsn3Xt7wucTFJc4.jpg?auto=webp&s=840d0dde162126876b6e5d3ad23ba7440ccd2e3e', 'width': 1200}, 'variants': {}}]}
Help me choose OS?
1
Hi! So, long-time Windows user here. I finally decided to pick up the slack and set up a dual boot for running local LLMs, Windows being a resource hog. But it's been a long time since I used anything other than Windows, the last being Ubuntu. I've also heard some recommend CentOS. So, polling time... [View Poll](https://www.reddit.com/poll/15tkhas)
2023-08-17T11:17:20
https://www.reddit.com/r/LocalLLaMA/comments/15tkhas/help_me_choose_os/
nollataulu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tkhas
false
null
t3_15tkhas
/r/LocalLLaMA/comments/15tkhas/help_me_choose_os/
false
false
self
1
null
Best LLM for document QA with 3b or less parameters.
1
I am trying to use an LLM alongside LangChain and a vector database to answer questions from user-manual PDF documents. The one that worked quite well was quantized Llama-2-7b, but the response time was extremely slow. I was testing on a GPU with 12 GB VRAM. Which model with 3B or fewer parameters would be best for this use case? Please help.
2023-08-17T11:23:05
https://www.reddit.com/r/LocalLLaMA/comments/15tklu2/best_llm_for_document_qa_with_3b_or_less/
Rude_Alternative_216
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tklu2
false
null
t3_15tklu2
/r/LocalLLaMA/comments/15tklu2/best_llm_for_document_qa_with_3b_or_less/
false
false
self
1
null
How to use loRA in Kobold?
1
I noticed there are LoRA settings in KoboldCpp, but I haven't been able to find any documentation on how to use them or how they work. I tried downloading a few LoRAs and loading their different files through trial and error, but to no avail.
2023-08-17T12:14:56
https://www.reddit.com/r/LocalLLaMA/comments/15tlqxp/how_to_use_lora_in_kobold/
superthrowawayman_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tlqxp
false
null
t3_15tlqxp
/r/LocalLLaMA/comments/15tlqxp/how_to_use_lora_in_kobold/
false
false
self
1
null
Update to the Chess riddle from yesterday.
1
I think it's worth making a post about this because it seems that the version of llama.cpp I was using yesterday was broken somehow. The latest version aces this test with llama-2-70b-chat.ggmlv3.q6_K.bin. I updated the llama.cpp (cuda12) version to the latest release, and llama-2-70b-chat.ggmlv3.q6_K.bin gave a correct answer straight away:

>In a room I have only 3 sisters. Anna is reading a book, Alice is playing a match of chess. What the third sister, Amanda is doing ?

This is a classic lateral thinking puzzle that requires you to think creatively and consider the possibility that the information provided may not be entirely accurate or complete. The answer to the puzzle is that Amanda is also playing a game of chess with Alice. The puzzle states that Alice is playing a match of chess, but it doesn't specify that she is playing against another person. Therefore, it is possible that Amanda is also playing a game of chess with Alice, making them both players in the same match. [end of text]

(with 'think step by step' added)

>In a room I have only 3 sisters. Anna is reading a book, Alice is playing a match of chess. What the third sister, Amanda is doing ? Think step by step.

The first clue is that Anna is reading a book. This means she's not playing chess with Alice. Since Alice is playing chess, she must be playing against someone else. That someone else cannot be Anna, since she's reading a book. Therefore, the third sister, Amanda, must be playing chess with Alice. If Amanda is playing chess with Alice, then she cannot be doing anything else at the same time. Therefore, the answer to the riddle is "Amanda is playing chess with Alice." [end of text]
2023-08-17T12:20:49
https://www.reddit.com/r/LocalLLaMA/comments/15tlvm9/update_to_the_chess_riddle_from_yesterday/
ambient_temp_xeno
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tlvm9
false
null
t3_15tlvm9
/r/LocalLLaMA/comments/15tlvm9/update_to_the_chess_riddle_from_yesterday/
false
false
self
1
null
Democratize AI - Run your private local AI girlfriend (or AI anything) using a $95 chip that can be turned into a 16GB VRAM GPU. The chip is the 4600G; a 5600G or 5700G also works.
1
[removed]
2023-08-17T13:39:19
https://www.reddit.com/r/LocalLLaMA/comments/15tnxpx/democratize_ai_run_your_private_local_ai/
chain-77
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tnxpx
false
null
t3_15tnxpx
/r/LocalLLaMA/comments/15tnxpx/democratize_ai_run_your_private_local_ai/
false
false
self
1
{'enabled': False, 'images': [{'id': 'mrOjLCG8TnErw1I6bstOH_USFzhjyh1ju5nf5kYtAIM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/wXkIktd8a0XTTycZeDizre4Ez0c5zd5iZ6LGm0rEgko.jpg?width=108&crop=smart&auto=webp&s=6c9c4dd7ae2ba6933c319235d5d91d51089108eb', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/wXkIktd8a0XTTycZeDizre4Ez0c5zd5iZ6LGm0rEgko.jpg?width=216&crop=smart&auto=webp&s=4b29c10e12dc255eb0377a9ea17cb67e858217f8', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/wXkIktd8a0XTTycZeDizre4Ez0c5zd5iZ6LGm0rEgko.jpg?width=320&crop=smart&auto=webp&s=f12578eb13b46f3d89c938ad866820330c8abac7', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/wXkIktd8a0XTTycZeDizre4Ez0c5zd5iZ6LGm0rEgko.jpg?auto=webp&s=5d0fcc35e6a6ab145c1bda60eeccc236f1630c69', 'width': 480}, 'variants': {}}]}
Organizing LLaMA Book Project, Subject Matter Experts, Please Help and Be a Part of It
1
Warm hello, fellow tech explorers and AI aficionados of r/LocalLLaMA! 🤖🌐 It's so wild to imagine having the power to run your own mind-bending models right from the comfort of your home—it's like Prometheus himself snatched fire from the gods! 💥 But, hold on a sec, haven't you ever felt like there's a secret club holding onto all the good stuff? Well, the time has come to change that narrative, and we're inviting YOU to be a part of something truly legendary. Regular people need a book to guide them from zero to hero. Picture this: a collaborative masterpiece that's more than just a book—it's a guide to empower, inspire, and rule the universe of LLaMA and beyond. Yeah, you heard right, we're talking about crafting a whole new kind of bible for AI enthusiasts like us. We're calling all the experts, wizards, and tech maestros who've tinkered, installed, and trained with LLaMA 2, who've built machines that buzz with the symphony of technology, and who've stumbled upon mind-blowing applications that'll leave you wide-eyed. 🌟 So, here's the scoop: we want YOU to be a chapter author in the upcoming "LLaMA: The Unofficial Practical Guide to Ruling the Universe" 📚🌌. This isn't just any old book—it's a chance for you to showcase your expertise, share your insights, and help others journey into the depths of LLaMA 2 mastery. But wait, there's more! 🎉 Every contributor will not only be immortalized in the annals of AI wisdom but will also score a free, tangible copy of the guide. Yep, you'll have that gem proudly sitting on your shelf, reminding you of the part you played in shaping the AI landscape. Now, here's the real kicker: 💥 Every single bit of profit generated from this project goes straight into an LLM project grant, and guess who gets to decide where it goes? You, me, and every other contributor who's been a part of this adventure. It's a chance to give back, to pave the way for the next wave of AI enthusiasts, and to truly make a difference in the AI universe. So, my fellow pioneers, are you ready to dive into the future, pen in hand, and leave your mark on the realm of LLaMA 2? Let's take this journey together, let's fuel the fire of innovation beyond the tutorial guide, and let's create a legacy that'll inspire AI enthusiasts for generations to come. Drop a comment below if you're up for the challenge, or shoot me a DM if you're ready to be a part of something that's not just a book, but a movement. Let's embark on this thrilling voyage and rule the universe with "The LLaMA Lexicon: The Unofficial Practical Guide to Ruling the Universe"! 🚀📖🌠 *Before anyone says that these books will go out of date too fast, that is why there is Editions, and I bet it's fine. Let's gooooo
2023-08-17T14:24:57
https://www.reddit.com/r/LocalLLaMA/comments/15tp3r5/organizing_llama_book_project_subject_matter/
JodyShackelford
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tp3r5
false
null
t3_15tp3r5
/r/LocalLLaMA/comments/15tp3r5/organizing_llama_book_project_subject_matter/
false
false
self
1
null
GPU Acceleration n_gpu_layers: How to determine number of layers?
1
[removed]
2023-08-17T15:22:33
https://www.reddit.com/r/LocalLLaMA/comments/15tqmge/gpu_acceleration_n_gpu_layers_how_to_determine/
innocuousAzureus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tqmge
false
null
t3_15tqmge
/r/LocalLLaMA/comments/15tqmge/gpu_acceleration_n_gpu_layers_how_to_determine/
false
false
self
1
null
Best way to create a chatbot with your own knowledge base from Word files
1
[removed]
2023-08-17T15:27:34
https://www.reddit.com/r/LocalLLaMA/comments/15tqr8m/best_way_to_create_chatbot_with_own_knowledge/
stopped_clock
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tqr8m
false
null
t3_15tqr8m
/r/LocalLLaMA/comments/15tqr8m/best_way_to_create_chatbot_with_own_knowledge/
false
false
self
1
null
Advice for large text input
1
I used Whisper AI to transcribe about 600 speeches that were about an hour long. So now I have a bunch of fairly large text files with no formatting. I want to have a language model divide them into paragraphs and correct punctuation. I'd also like to use API calls because I have a lot of files.

I've been fooling around and testing a lot of things, but I'm getting frustrated and am kind of new to this. How would you go about doing this if you were me?
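For concreteness, here's roughly the batch loop I've been picturing: a rough sketch that assumes an OpenAI-compatible chat endpoint (e.g. from llama.cpp's server or text-generation-webui) is running locally; the URL, model name, and chunk size are placeholders.

```python
import glob
import requests

API_URL = "http://localhost:8000/v1/chat/completions"  # placeholder endpoint
CHUNK_CHARS = 4000  # keep each request well under the model's context window

def fix_chunk(chunk):
    resp = requests.post(API_URL, json={
        "model": "local-model",  # placeholder name
        "messages": [
            {"role": "system",
             "content": "Correct the punctuation and split the text into "
                        "paragraphs. Do not change any words."},
            {"role": "user", "content": chunk},
        ],
        "temperature": 0.2,
    })
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

for path in glob.glob("transcripts/*.txt"):
    with open(path, encoding="utf-8") as f:
        text = f.read()
    # naive fixed-size chunking; sentence-aware splitting would be nicer
    chunks = [text[i:i + CHUNK_CHARS] for i in range(0, len(text), CHUNK_CHARS)]
    cleaned = "\n\n".join(fix_chunk(c) for c in chunks)
    with open(path.replace(".txt", ".clean.txt"), "w", encoding="utf-8") as f:
        f.write(cleaned)
```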
2023-08-17T15:43:30
https://www.reddit.com/r/LocalLLaMA/comments/15tr6ks/advice_for_large_text_input/
I_dream_of_doritos
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tr6ks
false
null
t3_15tr6ks
/r/LocalLLaMA/comments/15tr6ks/advice_for_large_text_input/
false
false
self
1
null
GGUF is going to make llama.cpp much better and it's almost ready
1
The .bin files used by llama.cpp make it easy for users to share models in a single file. Except they had one big problem: lack of flexibility. You could not add additional information about the model.

Compare that to GGUF:

> It is a successor file format to GGML, GGMF and GGJT, and is designed to be unambiguous by containing all the information needed to load a model. It is also designed to be extensible, so that new features can be added to GGML without breaking compatibility with older models.

**Basically:**

* No more breaking changes.
* Support for non-llama models. (falcon, rwkv, bloom, etc.)
* No more fiddling around with rope-freq-base, rope-freq-scale, gqa, and rms-norm-eps.
* Prompt formats could be set automatically.

The best part? [It's *almost* ready.](https://github.com/ggerganov/llama.cpp/pull/2398#issuecomment-1682404719)
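To give a feel for how self-describing the new format is, here's a minimal sketch of reading a GGUF header in Python. This follows the draft spec in the PR above, so the exact layout may still change before it's merged.

```python
import struct

# GGUF starts with a fixed header: magic, format version, tensor count,
# and the number of metadata key/value pairs (per the draft spec).
with open("model.gguf", "rb") as f:
    magic = f.read(4)
    assert magic == b"GGUF", f"not a GGUF file: {magic!r}"
    version, = struct.unpack("<I", f.read(4))       # uint32 format version
    tensor_count, = struct.unpack("<Q", f.read(8))  # uint64 tensor count
    kv_count, = struct.unpack("<Q", f.read(8))      # uint64 metadata entries

print(f"GGUF v{version}: {tensor_count} tensors, {kv_count} metadata entries")
```

Everything after that header is a key/value store, which is what would let loaders pick up things like rope parameters and the prompt format without extra command-line flags.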
2023-08-17T15:56:16
https://www.reddit.com/r/LocalLLaMA/comments/15triq2/gguf_is_going_to_make_llamacpp_much_better_and/
samfundev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15triq2
false
null
t3_15triq2
/r/LocalLLaMA/comments/15triq2/gguf_is_going_to_make_llamacpp_much_better_and/
false
false
self
1
{'enabled': False, 'images': [{'id': 'ZM9PECBcTZkNMEISI3bM9W7NHbCWxrzsDXt88qU14Bg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/G5eXpZbi9LoXJEEtPRtQFQNhr8rz9l8OPEdSOBHq5kQ.jpg?width=108&crop=smart&auto=webp&s=9230c6483481e771c5999476c942dcd8aebbd972', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/G5eXpZbi9LoXJEEtPRtQFQNhr8rz9l8OPEdSOBHq5kQ.jpg?width=216&crop=smart&auto=webp&s=b178a8dc2543b1855c221abfd5ae5f40d34b200a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/G5eXpZbi9LoXJEEtPRtQFQNhr8rz9l8OPEdSOBHq5kQ.jpg?width=320&crop=smart&auto=webp&s=04488942ac101afcb3094bef714e2cd72ae3c661', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/G5eXpZbi9LoXJEEtPRtQFQNhr8rz9l8OPEdSOBHq5kQ.jpg?width=640&crop=smart&auto=webp&s=3b6cfec5d0287d992b4db735f81e15e74852cbb0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/G5eXpZbi9LoXJEEtPRtQFQNhr8rz9l8OPEdSOBHq5kQ.jpg?width=960&crop=smart&auto=webp&s=5594c13e89c9b78e540d2bb4a9e5b8c81b7a4202', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/G5eXpZbi9LoXJEEtPRtQFQNhr8rz9l8OPEdSOBHq5kQ.jpg?width=1080&crop=smart&auto=webp&s=1502ca0b178af87f489c2dafbb1a52d493babc87', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/G5eXpZbi9LoXJEEtPRtQFQNhr8rz9l8OPEdSOBHq5kQ.jpg?auto=webp&s=bf672964f071b11edd334fc12ff172b1139f68f1', 'width': 1200}, 'variants': {}}]}
Best instruct 13B model, if possible Llama-2 based?
1
A lot of people seem to go hard for more RP-oriented models (Nous Hermes or similar pop up). But I use my local AI quite a lot for instructions. Not coding or anything, but creative writing, where hallucinations aren't such a big concern.

To be honest, Llama-2 Chat seems pretty good. The only problem is the agenda and censorship (won't go on a rant to say what I think of soy milk-fueled puritanism), which is not good for anything remotely creative if it has NSFW.

I've checked out other models which are basically using the Llama-2 base model (not instruct), and in all honesty, only Vicuna 1.5 seems to approach it. But still, I think even the 13B version of Llama-2 follows instructions relatively well, as long as you don't trigger the many soy milk-based sensibilities that have been built into it - sadly the censorship is pretty good and you can't break it as effortlessly as, for example, Vicuna.

So I was wondering, what are people's opinions on the most useful instruct model, especially for creative stuff, that is less on the nose with the whole soy milk absurdity?
2023-08-17T16:19:54
https://www.reddit.com/r/LocalLLaMA/comments/15ts57p/best_instruct_13b_model_if_possible_llama2_based/
CulturedNiichan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ts57p
false
null
t3_15ts57p
/r/LocalLLaMA/comments/15ts57p/best_instruct_13b_model_if_possible_llama2_based/
false
false
self
1
null
How to increase max tokens generated? I only generate 50.
1
Okay so, I set everything up with koboldcpp, used the 7B Llama 2 chat model, started Kobold, modified the settings on the localhost web page, started Risu, and tested some characters, but I only get 50 tokens generated max. I put a max context size of 4096 in Risu and a max response size of 1024. Meanwhile, on the localhost page I have max tokens at 2048 and amount to generate at 512. What am I doing wrong?
2023-08-17T16:29:38
https://www.reddit.com/r/LocalLLaMA/comments/15tsdoa/how_to_increase_max_tokens_generated_i_only/
s-cardi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tsdoa
false
null
t3_15tsdoa
/r/LocalLLaMA/comments/15tsdoa/how_to_increase_max_tokens_generated_i_only/
false
false
self
1
null
LLaMA Terminal Completion, a local virtual assistant for the terminal
1
2023-08-17T16:39:24
https://github.com/adammpkins/llama-terminal-completion
adammpkins
github.com
1970-01-01T00:00:00
0
{}
15tsmyo
false
null
t3_15tsmyo
/r/LocalLLaMA/comments/15tsmyo/llama_terminal_completion_a_local_virtual/
false
false
https://b.thumbs.redditm…FyIYve1JZYSU.jpg
1
{'enabled': False, 'images': [{'id': '4Md0uWeTHcXW75k-J7DzE3DIhj4AB6Er9rtjZw1TOY0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pxY56CRbOPYJlaPnofxKyDxqVPW05UeIZOVE8OEv2C0.jpg?width=108&crop=smart&auto=webp&s=e2c526cd92f2e26095166d41978fa87b959e36d5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pxY56CRbOPYJlaPnofxKyDxqVPW05UeIZOVE8OEv2C0.jpg?width=216&crop=smart&auto=webp&s=53e93e12b4aeddcf3d9f6f0457661ff66198013f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pxY56CRbOPYJlaPnofxKyDxqVPW05UeIZOVE8OEv2C0.jpg?width=320&crop=smart&auto=webp&s=4acc595644cae1325eab28519438f11a2739634d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pxY56CRbOPYJlaPnofxKyDxqVPW05UeIZOVE8OEv2C0.jpg?width=640&crop=smart&auto=webp&s=c13961f2c56b8a7dd605c32cc0d292c8d3fe2426', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pxY56CRbOPYJlaPnofxKyDxqVPW05UeIZOVE8OEv2C0.jpg?width=960&crop=smart&auto=webp&s=f26489f7e2734d206bebf062fafc8d972742929b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pxY56CRbOPYJlaPnofxKyDxqVPW05UeIZOVE8OEv2C0.jpg?width=1080&crop=smart&auto=webp&s=1a6e8cb2f770358f3da3552cd3e85773829eda8e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pxY56CRbOPYJlaPnofxKyDxqVPW05UeIZOVE8OEv2C0.jpg?auto=webp&s=ef9bdd5ee16744c8c195e562aeed1ce9a28197c9', 'width': 1200}, 'variants': {}}]}
Retrieval Augmented Generation optimised Llm's
1
Hello Local lamas 🦙! I'm super excited to show you the newly published **DocsGPT LLMs** on Hugging Face, tailor-made for tasks some of you asked for: from documentation-based QA and RAG (Retrieval Augmented Generation) to assisting developers and tech support teams by conversing with your data! (Basically the same thing, tbh; it all started with the 2020 *Retrieval Augmented Generation for Knowledge-Intensive NLP Tasks* paper.)

Fine-tuned with 50k high-quality examples using the LoRA process! Took around 2 days for the smaller ones and 4 for the large one, 2 epochs each.

Check them out:

* [DocsGPT-7b-falcon](https://huggingface.co/Arc53/docsgpt-7b-falcon)
* [DocsGPT-14b](https://huggingface.co/Arc53/docsgpt-14b)
* [DocsGPT-40b-falcon](https://huggingface.co/Arc53/docsgpt-40b-falcon)

**Why I think it's useful:**

* Improved explicit info extraction from sources
* Reduced hallucinations
* No repeating at the end

|Name|Base Model|Requirements (or similar GPU)|
|:-|:-|:-|
|[Docsgpt-7b-falcon](https://huggingface.co/Arc53/docsgpt-7b-falcon)|Falcon-7b|1xA10G|
|[Docsgpt-14b](https://huggingface.co/Arc53/docsgpt-14b)|llama-2-13b-hf|2xA10|
|[Docsgpt-40b-falcon](https://huggingface.co/Arc53/docsgpt-40b-falcon)|falcon-40b|8xA10G|

You can also use bitsandbytes to run them with less memory.

A **snippet** to jumpstart:

```python
import torch
import transformers
from transformers import AutoTokenizer

model = "Arc53/docsgpt-7b-falcon"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
```

License? Apache-2.0.

Will publish GGML versions if you guys like them; I'm also hoping I can tune a nice 3b-sized model in the future too.
2023-08-17T17:08:12
https://www.reddit.com/r/LocalLLaMA/comments/15ttegi/retrieval_augmented_generation_optimised_llms/
ale10xtu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ttegi
false
null
t3_15ttegi
/r/LocalLLaMA/comments/15ttegi/retrieval_augmented_generation_optimised_llms/
false
false
self
1
{'enabled': False, 'images': [{'id': '9xTJELL1YL4PyriMXYWRWD3cUAbTClyF15_unUlVjVQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MAdI_F3Sjyc65tBJxktzGh8eySyEKgp7Np0BU1nEI_o.jpg?width=108&crop=smart&auto=webp&s=398d02814010f50239d36285cce603a9956e5ce6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/MAdI_F3Sjyc65tBJxktzGh8eySyEKgp7Np0BU1nEI_o.jpg?width=216&crop=smart&auto=webp&s=c613c8979bcf43402af4901fdc8156a3f611c490', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/MAdI_F3Sjyc65tBJxktzGh8eySyEKgp7Np0BU1nEI_o.jpg?width=320&crop=smart&auto=webp&s=670b9c1adbc0fed8074ee29e2bd406b0b7020aa1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/MAdI_F3Sjyc65tBJxktzGh8eySyEKgp7Np0BU1nEI_o.jpg?width=640&crop=smart&auto=webp&s=69cf0de3bac96a35ffb4bd30aae6064bffe844ec', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/MAdI_F3Sjyc65tBJxktzGh8eySyEKgp7Np0BU1nEI_o.jpg?width=960&crop=smart&auto=webp&s=f868a22c69d74d6e6c59860eccef9f753299edc1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/MAdI_F3Sjyc65tBJxktzGh8eySyEKgp7Np0BU1nEI_o.jpg?width=1080&crop=smart&auto=webp&s=a52c4898cf5d426d686010532a09d408d73000b7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/MAdI_F3Sjyc65tBJxktzGh8eySyEKgp7Np0BU1nEI_o.jpg?auto=webp&s=e4f15d7baf297e601bd2eb8e04bc505d16cb0b28', 'width': 1200}, 'variants': {}}]}
Has Anyone Used Llama2 for Text Extraction?
1
I have processes in place using OCR and text classification for text documents. I'm not having great luck using traditional methods to extract text from dissimilar documents: custom Named Entity Recognition-type stuff where I didn't necessarily have a ton of examples for training. I was wondering if anyone had a similar use case and was accomplishing this with Llama.
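To make the question concrete, this is the kind of prompt-based extraction I have in mind: a rough sketch where the model id, prompt template, and field names are just examples.

```python
import json
from transformers import pipeline

extractor = pipeline("text-generation",
                     model="meta-llama/Llama-2-13b-chat-hf",  # example checkpoint
                     device_map="auto")

doc_text = open("ocr_output.txt", encoding="utf-8").read()
prompt = (
    "[INST] Extract the following fields from the document below and reply "
    "with JSON only: part_number, torque_setting, unit.\n\n"
    f"Document:\n{doc_text} [/INST]"
)

out = extractor(prompt, max_new_tokens=200, do_sample=False)[0]["generated_text"]
completion = out[len(prompt):]   # the pipeline returns prompt + completion
fields = json.loads(completion)  # raises if the model drifts away from JSON
print(fields)
```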
2023-08-17T17:15:18
https://www.reddit.com/r/LocalLLaMA/comments/15ttl55/has_anyone_used_llama2_for_text_extraction/
jimmc414
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ttl55
false
null
t3_15ttl55
/r/LocalLLaMA/comments/15ttl55/has_anyone_used_llama2_for_text_extraction/
false
false
self
1
null
LLaMa 2 LLMs w/ NVIDIA Jetson and textgeneration-web-ui
1
2023-08-17T17:19:23
https://www.hackster.io/pjdecarlo/llama-2-llms-w-nvidia-jetson-and-textgeneration-web-ui-96b070
toolboc
hackster.io
1970-01-01T00:00:00
0
{}
15ttosw
false
null
t3_15ttosw
/r/LocalLLaMA/comments/15ttosw/llama_2_llms_w_nvidia_jetson_and/
false
false
https://b.thumbs.redditm…17uAbxXD0fSY.jpg
1
{'enabled': False, 'images': [{'id': 'PvGZhcat1khEnj-06M1DPsFE41CGTCEn1Ik1zHrqtU4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/vtm1hj89XAlUmNsBcCPerHWsGHY-pYKX2u8DbnVLb7k.jpg?width=108&crop=smart&auto=webp&s=8df934c6293be53acb1ba228a50a4cfba5d276b8', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/vtm1hj89XAlUmNsBcCPerHWsGHY-pYKX2u8DbnVLb7k.jpg?width=216&crop=smart&auto=webp&s=db2d0a5c067ef70cbdb35bad944f4cc7e5afdc3f', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/vtm1hj89XAlUmNsBcCPerHWsGHY-pYKX2u8DbnVLb7k.jpg?width=320&crop=smart&auto=webp&s=9b33a35c350697883c1cd5ab2455de03e3015208', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/vtm1hj89XAlUmNsBcCPerHWsGHY-pYKX2u8DbnVLb7k.jpg?width=640&crop=smart&auto=webp&s=566edd6da87855eab900af9524a99e644f1ec4c5', 'width': 640}], 'source': {'height': 540, 'url': 'https://external-preview.redd.it/vtm1hj89XAlUmNsBcCPerHWsGHY-pYKX2u8DbnVLb7k.jpg?auto=webp&s=11360b348a7045e3db399ea7053ab426df1ef5dd', 'width': 720}, 'variants': {}}]}
What are the best models so far
1
What do you all think are the top 3 models right now in terms of being uncensored, math/code abilities, RP, and human-like chatting?
2023-08-17T17:57:25
https://www.reddit.com/r/LocalLLaMA/comments/15tup9v/whats_are_the_best_models_so_far/
Dr-Dark-Flames
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tup9v
false
null
t3_15tup9v
/r/LocalLLaMA/comments/15tup9v/whats_are_the_best_models_so_far/
false
false
self
1
null
What’s the best AI for programming?
1
What is the 70B model really good for?
2023-08-17T18:19:01
https://www.reddit.com/r/LocalLLaMA/comments/15tva98/whats_the_best_ai_for_programming/
PremoSaysThings
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tva98
false
null
t3_15tva98
/r/LocalLLaMA/comments/15tva98/whats_the_best_ai_for_programming/
false
false
self
1
null
Llama 2 based WizardMath model to solve math problems with examples
1
2023-08-17T18:40:27
https://ollama.ai/blog/wizardmath-examples
helloPenguin006
ollama.ai
1970-01-01T00:00:00
0
{}
15tvujy
false
null
t3_15tvujy
/r/LocalLLaMA/comments/15tvujy/llama_2_based_wizardmath_model_to_solve_math/
false
false
default
1
null
Coding LLMs
1
What is the best coding LLM out there? Is there any way to integrate it into VS Code? Any tutorials one can follow? Also, is there a good guide out there that can get you started with the knowledge needed to learn more about running LLMs?
2023-08-17T19:31:49
https://www.reddit.com/r/LocalLLaMA/comments/15tx7bx/coding_llms/
nxtmalteser
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tx7bx
false
null
t3_15tx7bx
/r/LocalLLaMA/comments/15tx7bx/coding_llms/
false
false
self
1
null
Finetune LLM model on tabular data
1
Is it possible, or even recommended, to finetune LLMs such as Llama 2 on tabular data? I have a CSV with historical gold buy prices:

```
DAY,HOUR,OPEN,HIGH,LOW,CLOSE,VOLUME
2018.06.28,03:02,1.15603,1.15613,1.15602,1.15605,107
```

I'm hoping to be able to ask a question like "what was the volume on 2018.06.28" or "what date had the most volume", etc. Thanks in advance.
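For context, here's the preprocessing I've been considering: turning rows into instruction/response pairs instead of feeding raw CSV lines to the trainer. A minimal sketch, with column names following the sample above.

```python
import csv
import json

# Each row becomes one instruction/response pair in an alpaca-style JSONL file.
with open("gold.csv", newline="") as src, open("train.jsonl", "w") as dst:
    for row in csv.DictReader(src):
        pair = {
            "instruction": f"What was the volume on {row['DAY']} at {row['HOUR']}?",
            "output": row["VOLUME"],
        }
        dst.write(json.dumps(pair) + "\n")
```

That said, for aggregate questions like "which date had the most volume", I suspect querying the table directly (e.g. with pandas) would beat anything a fine-tune memorizes.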
2023-08-17T19:34:35
https://www.reddit.com/r/LocalLLaMA/comments/15tx9wc/finetune_llm_model_on_tabular_data/
fpena06
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tx9wc
false
null
t3_15tx9wc
/r/LocalLLaMA/comments/15tx9wc/finetune_llm_model_on_tabular_data/
false
false
self
1
null
What happened to Longterm Memory?
1
It seemed to me that there was a flurry of activity around adding long-term memory beyond the context window for LLaMA. Then Llama 2 released and it's like everyone forgot about it. Where's the documentation for implementing it with Llama 2 in things like ooba?
2023-08-17T20:17:04
https://www.reddit.com/r/LocalLLaMA/comments/15tyf3j/what_happened_to_longterm_memory/
Agured
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tyf3j
false
null
t3_15tyf3j
/r/LocalLLaMA/comments/15tyf3j/what_happened_to_longterm_memory/
false
false
self
1
null
Using Generated Q/A Pairs for Fine Tuning? Is It Worth It?
1
Like most people here, I have lots and lots of internal documents. My current workflow is to ingest, chunk, embed, query embeddings, and then use that subset of text to prompt a local model (whatever is near the top of the HFLB). This works "OK" for simple factual retrieval (e.g. "what is the right torque setting for an XYX installed on an ABC?") Today I used a 70B parameter chat model with a prompt template that was something like "you are a helpful teacher who will generate questions and answers based on the text provided." I passed in chunks of about 2-3K tokens, and I was very pleased with the question/answer pairs that were generated. I'm thinking I could fine tune a model based on these q/a pairs. Isn't this kind of bootstrapping basically what Orca did? My question is less about *can* this be done and more about whether it will perform better than simply querying embeddings. Any thoughts or is this a case of "the only way to know is to do it."
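For reference, the generation loop was basically this shape. A sketch: `generate` stands in for whatever wraps your chat backend, and `chunks` is your chunked corpus, so both are placeholders.

```python
import json

INSTRUCTIONS = ("You are a helpful teacher. Based on the text below, write 5 "
                "question/answer pairs as a JSON list of objects with "
                '"question" and "answer" keys.')

def make_pairs(chunks, generate):
    for chunk in chunks:
        raw = generate(INSTRUCTIONS + "\n\nText:\n" + chunk)
        try:
            yield from json.loads(raw)
        except json.JSONDecodeError:
            continue  # skip chunks where the model drifted away from JSON

with open("qa_pairs.jsonl", "w", encoding="utf-8") as f:
    for pair in make_pairs(chunks, generate):  # plug in your own chunks/generate
        f.write(json.dumps(pair) + "\n")
```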
2023-08-17T20:35:05
https://www.reddit.com/r/LocalLLaMA/comments/15tywvd/using_generated_qa_pairs_for_fine_tuning_is_it/
Simusid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tywvd
false
null
t3_15tywvd
/r/LocalLLaMA/comments/15tywvd/using_generated_qa_pairs_for_fine_tuning_is_it/
false
false
self
1
null
Should cublas be faster than clblas when using an nvidia GPU?
1
I just got my new Nvidia GPU, and when switching over to cuBLAS all my generations took so much longer. My previous GPU was also an Nvidia, but I never bothered to try cuBLAS. Any ideas?
2023-08-17T20:47:52
https://www.reddit.com/r/LocalLLaMA/comments/15tz9oi/should_cublas_be_faster_than_clblas_when_using_an/
wh33t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tz9oi
false
null
t3_15tz9oi
/r/LocalLLaMA/comments/15tz9oi/should_cublas_be_faster_than_clblas_when_using_an/
false
false
self
1
null
Can You Run Same LLM on 3 GPUs
1
What I mean is:

1. The same model, copied three times.
2. You've got 3 different GPUs on the same motherboard. Let's say: RTX 2060, RTX 3060, GTX 1080.
3. Headache-free, it just works out of the box, no coding needed and no hours spent figuring out why it doesn't work or fixing the problem.
4. Like selecting a GPU from a menu or something like that.
5. The same model, or to be accurate the same model cloned 3 times, in total 3 models each on a separate GPU, all 3 models running at the same time.
6. I am talking about LLM chat models, not fine-tuning or anything like that. Just chatting with 3 models at the same time.
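To make it concrete, the closest thing I can picture is three separate server processes, one pinned to each card. A sketch where the "server.py --port" command is a placeholder for whatever backend you run (e.g. three koboldcpp or text-generation-webui instances on different ports):

```python
import os
import subprocess

# One process per GPU; CUDA_VISIBLE_DEVICES makes each process see only one card.
for gpu, port in [(0, 5000), (1, 5001), (2, 5002)]:
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
    subprocess.Popen(["python", "server.py", "--port", str(port)], env=env)
```

Each model then gets its own port, which is about as close to "pick a GPU from a menu" as I can imagine.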
2023-08-17T20:50:35
https://www.reddit.com/r/LocalLLaMA/comments/15tzcdh/can_you_run_same_llm_on_3_gpus/
SakamotoKyu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tzcdh
false
null
t3_15tzcdh
/r/LocalLLaMA/comments/15tzcdh/can_you_run_same_llm_on_3_gpus/
false
false
self
1
null
What would the best model be for either question answering or multiple choice question answering?
1
[removed]
2023-08-17T20:56:09
https://www.reddit.com/r/LocalLLaMA/comments/15tzhsw/what_would_the_best_model_be_for_either_question/
Crafty_Charge_4079
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15tzhsw
false
null
t3_15tzhsw
/r/LocalLLaMA/comments/15tzhsw/what_would_the_best_model_be_for_either_question/
false
false
self
1
null
Playing around w/ a more structured LLM workflow
1
Hey y'all! We're building an opinionated API + a workflow UI to get more structured output from language models. The idea is to create a strong set of guardrails on top of language models so that common functionality such as generating templates or getting JSON output can be constrained via a little pre/post processing magic on our side. I've built a little tool to showcase different parts of the API in action here: [Grabbing data from a Yelp review and adding that to a template](https://reddit.com/link/15u0lu6/video/qlyb3gg6oqib1/player) It's currently working with OpenAI (which you can play with here: [https://cloud.spyglass.fyi/playground](https://cloud.spyglass.fyi/playground)). We're still ironing out some bugs before we can release an open-source version that works with the llama models (via our other utility: [https://github.com/spyglass-search/memex](https://github.com/spyglass-search/memex)).

Would love to hear what you guys think!
2023-08-17T21:37:09
https://www.reddit.com/r/LocalLLaMA/comments/15u0lu6/playing_around_w_a_more_structured_llm_workflow/
andyndino
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15u0lu6
false
null
t3_15u0lu6
/r/LocalLLaMA/comments/15u0lu6/playing_around_w_a_more_structured_llm_workflow/
false
false
self
1
null
Any open source prompt Enhancing LLM projects?
1
Do we have any open source projects for prompt engineering? I saw a demonstration of [https://promptperfect.jina.ai](https://promptperfect.jina.ai/) and immediately started searching for a version of this we can use offline in order to protect our privacy. I've seen GPT cheat sheets, prompt engineering guides, and prompt-generating LLM models for image generation, but do we have any for chat bots like ChatGPT or LLaMA models?
2023-08-17T21:54:33
https://www.reddit.com/r/LocalLLaMA/comments/15u127k/any_open_source_prompt_enhancing_llm_projects/
No-Ordinary-Prime
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15u127k
false
null
t3_15u127k
/r/LocalLLaMA/comments/15u127k/any_open_source_prompt_enhancing_llm_projects/
false
false
self
1
{'enabled': False, 'images': [{'id': 'eIb0TbQyzxFwj0G-16HIwInKlXFfrJMX3fQhf4D7zSw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/SBtWnhQhnRb9KSFFpBx939YUBMJErogUbFaiP6j92h0.jpg?width=108&crop=smart&auto=webp&s=7d75cfb13950f79cc6b9da7b8a8caae7e88659a8', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/SBtWnhQhnRb9KSFFpBx939YUBMJErogUbFaiP6j92h0.jpg?width=216&crop=smart&auto=webp&s=8efd69486fde844ded930b74adda599e31edc774', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/SBtWnhQhnRb9KSFFpBx939YUBMJErogUbFaiP6j92h0.jpg?width=320&crop=smart&auto=webp&s=5e14d6b3bd68c763f6b114d65fb6df1ff721e231', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/SBtWnhQhnRb9KSFFpBx939YUBMJErogUbFaiP6j92h0.jpg?width=640&crop=smart&auto=webp&s=3984ed0140cf529dfe113070a946e4dfcffe068c', 'width': 640}, {'height': 501, 'url': 'https://external-preview.redd.it/SBtWnhQhnRb9KSFFpBx939YUBMJErogUbFaiP6j92h0.jpg?width=960&crop=smart&auto=webp&s=7eae4ab6123146b35b1344dc5403c76e16dbfdfb', 'width': 960}, {'height': 564, 'url': 'https://external-preview.redd.it/SBtWnhQhnRb9KSFFpBx939YUBMJErogUbFaiP6j92h0.jpg?width=1080&crop=smart&auto=webp&s=d984854a707bf06f5da90eb9093923e3a5aa6e73', 'width': 1080}], 'source': {'height': 627, 'url': 'https://external-preview.redd.it/SBtWnhQhnRb9KSFFpBx939YUBMJErogUbFaiP6j92h0.jpg?auto=webp&s=77ce750303a82ba31c2d57620fef474eb9249d12', 'width': 1200}, 'variants': {}}]}
What am I capable of with my setup?
2
I have a 1660 Ti 6GB and a 1650 4GB. Both of them are idle. I have an RX 570 as the primary GPU. My CPU is an i7-3770 and I have 32GB of DDR3 RAM. I can run 7B GPTQ pretty easily and have yet to run into memory errors. I've tried to load a 13B GPTQ and split the memory between the two Nvidia GPUs, and it doesn't work. I got it to work once but ran out of memory after a couple of replies. So far I have used GPTQ the most because it is fast at responding and writing; anything else has been very slow. Can anyone please give me some suggestions on what I can run and what configurations would be best? I'd really like to at least run a 13B, but if it's not possible that's OK. Which model is the best? I'm looking for an uncensored model. Airoboros has so far been the best, but I'd like something that is as good or better that is licensed for commercial use.
2023-08-18T00:03:55
https://www.reddit.com/r/LocalLLaMA/comments/15u4aut/what_am_i_capable_of_with_my_setup/
KeyMillion
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15u4aut
false
null
t3_15u4aut
/r/LocalLLaMA/comments/15u4aut/what_am_i_capable_of_with_my_setup/
false
false
self
2
null
What are the differences between the 2x24GB combos (2x4090, 3090Ti+3090, etc) for running Llama2-70B?
1
Title
2023-08-18T00:21:26
https://www.reddit.com/r/LocalLLaMA/comments/15u4phs/what_are_the_differences_between_the_2x24gb/
NLTPanaIyst
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15u4phs
false
null
t3_15u4phs
/r/LocalLLaMA/comments/15u4phs/what_are_the_differences_between_the_2x24gb/
false
false
self
1
null
What kind of differences does `sail-sg/lorahub` have with MoE?
1
Since MoE (or a mixture of LoRAs) is supposedly what the GPT-4 model is underneath, what is the current take on this interesting project?

- Does regular MoE require loading all 8 parts into memory?
- Does quickly applying LoRAs reduce the 8x VRAM requirements?
- Would difference merging/XOR merging help in any case?
- Is there a way to extract LoRAs from previously fine-tuned models?
2023-08-18T00:22:10
https://www.reddit.com/r/LocalLLaMA/comments/15u4q4n/what_kind_of_differences_does_sailsglorahub_have/
Aaaaaaaaaeeeee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15u4q4n
false
null
t3_15u4q4n
/r/LocalLLaMA/comments/15u4q4n/what_kind_of_differences_does_sailsglorahub_have/
false
false
self
1
null
I made an experiment to compare OpenChatPreview2, Nous-Hermes, and Airoboros-l2 13b models
1
[removed]
2023-08-18T01:26:21
https://www.reddit.com/r/LocalLLaMA/comments/15u67ll/i_made_an_experiment_to_compare_openchatpreview2/
arctic_fly
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15u67ll
false
null
t3_15u67ll
/r/LocalLLaMA/comments/15u67ll/i_made_an_experiment_to_compare_openchatpreview2/
false
false
self
1
null
Is This A Practical Way Of Fixing The Memory Problem? | Basically Unlimited Memory Unlocked
1
[removed]
2023-08-18T01:57:17
https://www.reddit.com/r/LocalLLaMA/comments/15u6wcb/is_this_a_practicle_way_of_fixing_the_memory/
UserX203
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15u6wcb
false
null
t3_15u6wcb
/r/LocalLLaMA/comments/15u6wcb/is_this_a_practicle_way_of_fixing_the_memory/
false
false
self
1
null
What happened to the Llama-cpp-python project?
1
They haven't released a new update in almost a month. It's strange because llama.cpp is being actively updated.
2023-08-18T02:07:10
https://www.reddit.com/r/LocalLLaMA/comments/15u74ku/what_happened_to_the_llamacpppython_project/
Big_Communication353
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15u74ku
false
null
t3_15u74ku
/r/LocalLLaMA/comments/15u74ku/what_happened_to_the_llamacpppython_project/
false
false
self
1
null
New paper that claims to have produced viable 2 bit quantisations (with OPT models)
1
2023-08-18T02:23:25
https://arxiv.org/abs/2307.13304
alphakue
arxiv.org
1970-01-01T00:00:00
0
{}
15u7hq6
false
null
t3_15u7hq6
/r/LocalLLaMA/comments/15u7hq6/new_paper_that_claims_to_have_produced_viable_2/
false
false
https://a.thumbs.redditm…06BP-TI3S2d4.jpg
1
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
Model loaded on one gpu but running on another?
1
Hi all, I'm running TheBloke's Llama 2 13B GPTQ model on auto-gptq 0.3.1 with the unchanged localGPT repo. The inference process is taking really long (5 min for a basic prompt), so I went to look at GPU utilisation. I found that GPU 0 has the model loaded (VRAM = 9.1/12 GB) at 0% utilisation, and GPU 1 has VRAM = 0.7/12 GB at 19% utilisation. Does this mean the inference is so slow because the model is loaded on GPU 0 but running on GPU 1? Is there any way to fix this?
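For reference, here are the two things I've been meaning to try; treat the exact arguments as a best guess for localGPT's AutoGPTQ loading path rather than a verified fix.

```python
# Option 1: hide the second card before anything CUDA-related is imported,
# so both loading and compute can only land on GPU 0.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# Option 2: pin the device explicitly when loading the quantized model.
from auto_gptq import AutoGPTQForCausalLM

model = AutoGPTQForCausalLM.from_quantized(
    "TheBloke/Llama-2-13B-GPTQ",
    device="cuda:0",        # keep weights and compute on the same card
    use_safetensors=True,
)
```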
2023-08-18T02:37:53
https://www.reddit.com/r/LocalLLaMA/comments/15u7tcf/model_loaded_on_one_gpu_but_running_on_another/
Historical-Exit-1880
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15u7tcf
false
null
t3_15u7tcf
/r/LocalLLaMA/comments/15u7tcf/model_loaded_on_one_gpu_but_running_on_another/
false
false
self
1
null
Meta is preparing to launch a new open source coding model, dubbed Code Llama, that may release as soon as next week
1
[https://www.theinformation.com/articles/metas-next-ai-attack-on-openai-free-code-generating-software](https://www.theinformation.com/articles/metas-next-ai-attack-on-openai-free-code-generating-software) Another exclusive article from The Information. While the full article is paywalled, the introduction explains most of it: >Meta Platforms is preparing to launch software to help developers automatically generate programming code, a challenge to proprietary software from OpenAI, Google and others, according to two people with direct knowledge of the product. Meta’s code-generating artificial intelligence model, dubbed Code Llama, will be open-source and could launch as soon as next week, one of these people said. > >The new coding model rivals OpenAI’s Codex model and builds on Meta’s Llama 2 software, a large-language model that can understand and generate conversational text. Llama 2, which is open-source, upended the AI field by making it easier for companies to make their own AI apps without paying for software from OpenAI, Google or Microsoft. Code Llama will make it easier for companies to develop AI assistants that automatically suggest code to developers as they type, and it could siphon customers from paid coding assistants such as Microsoft’s GitHub Copilot, which is powered by Codex. I doubt it'd be close to GPT-4, but since this is coming from Meta, it should be better than [WizardCoder's HumanEval benchmarks](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0#comparing-wizardcoder-with-the-closed-source-models). I wouldn't be surprised if it scores higher than GPT-3.5.
2023-08-18T03:09:22
https://www.reddit.com/r/LocalLLaMA/comments/15u8i38/meta_is_preparing_to_launch_a_new_open_source/
llamaShill
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15u8i38
false
null
t3_15u8i38
/r/LocalLLaMA/comments/15u8i38/meta_is_preparing_to_launch_a_new_open_source/
false
false
self
1
{'enabled': False, 'images': [{'id': 'cd6fHdjsLo7iRaakKntgHxtm76D24XyKZm-tdQD3soA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/SL6YFG7zVaYaF-lZqTU82RFbvxcHClTsfbS4Dv-a7to.jpg?width=108&crop=smart&auto=webp&s=f42afb33af28f9867ccd377bc49e4e3d8b2b8a88', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/SL6YFG7zVaYaF-lZqTU82RFbvxcHClTsfbS4Dv-a7to.jpg?width=216&crop=smart&auto=webp&s=eb1b820503f6fe6c50683ad8226e8c1b2cd1dad4', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/SL6YFG7zVaYaF-lZqTU82RFbvxcHClTsfbS4Dv-a7to.jpg?width=320&crop=smart&auto=webp&s=53fbaeaf2c69993d08ff8cf51db7510550accf04', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/SL6YFG7zVaYaF-lZqTU82RFbvxcHClTsfbS4Dv-a7to.jpg?width=640&crop=smart&auto=webp&s=1314e3ac153a1f793419dde94d3fbe9d4a9c72ba', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/SL6YFG7zVaYaF-lZqTU82RFbvxcHClTsfbS4Dv-a7to.jpg?width=960&crop=smart&auto=webp&s=f077515f7d95598873ea37d288a2c1255b0d0b5a', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/SL6YFG7zVaYaF-lZqTU82RFbvxcHClTsfbS4Dv-a7to.jpg?width=1080&crop=smart&auto=webp&s=c910e15041a5a72573f6f7ebc2af92960bc98a60', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/SL6YFG7zVaYaF-lZqTU82RFbvxcHClTsfbS4Dv-a7to.jpg?auto=webp&s=48bb85f315dbae8200077f00e72264ae178c5ce5', 'width': 1200}, 'variants': {}}]}
I released model Griffin 3B
1
[https://huggingface.co/acrastt/Griffin-3B](https://huggingface.co/acrastt/Griffin-3B)

Today I released a new model named [Griffin 3B](https://huggingface.co/acrastt/Griffin-3B). It is [OpenLLaMA 3B V2](https://huggingface.co/openlm-research/open_llama_3b_v2) fine-tuned on [Puffin](https://huggingface.co/datasets/LDJnr/Puffin) for 1 epoch. The prompt format is:

## HUMAN:
{prompt}

## RESPONSE:
<leave a newline for the model to answer>

Note: Don't expect this model to be good, I was just starting out with fine-tuning (in fact, this is my third fine-tune). So don't roast me please! Maybe u/The-Bloke could quantize it, or I could.
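For anyone who wants to try it, a quick usage sketch with the prompt format above (untested, just to show the wiring):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("acrastt/Griffin-3B")
model = AutoModelForCausalLM.from_pretrained("acrastt/Griffin-3B", device_map="auto")

prompt = "## HUMAN:\nWhat is a llama?\n\n## RESPONSE:\n"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
# decode only the newly generated tokens after the prompt
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```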
2023-08-18T04:16:08
https://www.reddit.com/r/LocalLLaMA/comments/15u9w76/i_released_model_griffin_3b/
bot-333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15u9w76
false
null
t3_15u9w76
/r/LocalLLaMA/comments/15u9w76/i_released_model_griffin_3b/
false
false
self
1
{'enabled': False, 'images': [{'id': 'auOIBRF0yqyZrDW-GquK8GoKqd_r-mm2ZFQgU2D3zog', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9-I994bnpRUcQaMaUUFIxta4DdBcmCUwYoj-Pat8Umo.jpg?width=108&crop=smart&auto=webp&s=bca69a5a4f61f50d18189e6f7a12661e2e1e4c4b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9-I994bnpRUcQaMaUUFIxta4DdBcmCUwYoj-Pat8Umo.jpg?width=216&crop=smart&auto=webp&s=e45d600b619b34435a134ed919402cbf73767371', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9-I994bnpRUcQaMaUUFIxta4DdBcmCUwYoj-Pat8Umo.jpg?width=320&crop=smart&auto=webp&s=080dac0728b4be534d48b1d7a7cf5a78a25083a7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9-I994bnpRUcQaMaUUFIxta4DdBcmCUwYoj-Pat8Umo.jpg?width=640&crop=smart&auto=webp&s=ee5e98455bf9594acf121a84133b0a7eddb671ae', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9-I994bnpRUcQaMaUUFIxta4DdBcmCUwYoj-Pat8Umo.jpg?width=960&crop=smart&auto=webp&s=b22a47d5e85ad06aaffe3664bfc3d51be4d7ad97', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9-I994bnpRUcQaMaUUFIxta4DdBcmCUwYoj-Pat8Umo.jpg?width=1080&crop=smart&auto=webp&s=4696b80c3299ac2f119c553616f43e9e23f17a1f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9-I994bnpRUcQaMaUUFIxta4DdBcmCUwYoj-Pat8Umo.jpg?auto=webp&s=3c328013aa799c0b393630f8dc1565efd4e54003', 'width': 1200}, 'variants': {}}]}
Why is meta releasing free open source stuff?
1
Literally, why? Is Zuck the good guy now? Why am I not seeing praise for him?
2023-08-18T06:29:09
https://www.reddit.com/r/LocalLLaMA/comments/15ucgr6/why_is_meta_releasing_free_open_source_stuff/
zikamit111
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ucgr6
false
null
t3_15ucgr6
/r/LocalLLaMA/comments/15ucgr6/why_is_meta_releasing_free_open_source_stuff/
false
false
self
1
null
I have a query regarding the creation of a dataset
1
Hello all. This might seem like a stupid question, but please bear with me; I'm new to this.

I'm creating a dataset for my new roleplay model. My goal is to get it sounding as human as possible, and to make it usable in an online chat-style setting. For the most part, as a result, I'm mostly roleplaying with myself a lot, but that's not really the point.

I want to know if it is a good idea to include a prompt/response example for each turn in a conversation. For example, if I say "hello" and they say "hi" and I say "how are you" and they say "good", should I include one prompt for each point in the conversation (other than the first one, of course, because it doesn't have any response), or just include each conversation as is, one prompt and response, with the prompt being all of the conversation leading up and the response being the last thing in the convo?

Thank you very much in advance for your answer, and I apologize if this question was obvious or if there was information readily available; this is my first time doing this.
2023-08-18T06:48:39
https://www.reddit.com/r/LocalLLaMA/comments/15uctrt/i_have_a_query_regarding_the_creation_of_a_dataset/
ranting172837294
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15uctrt
false
null
t3_15uctrt
/r/LocalLLaMA/comments/15uctrt/i_have_a_query_regarding_the_creation_of_a_dataset/
false
false
self
1
null
Accelerate not working with merged model?
1
I'm using the HuggingFace libraries and Llama 2, in case that changes anything. Also, I'm running all my code on free Colab.

I've trained a QLoRA adapter. I can load the base model (Llama-2-7B-hf) using AutoModelForCausalLM.from_pretrained and then the adapter separately with PeftModel.from_pretrained, and then use it for inference. For this I'm using device_map='auto'. This isn't fast but it works, splitting the model between GPU and CPU RAM.

When I try to create a merged model, using merge_and_unload and then save_pretrained, the merged model appears on the Hub. When I attempt to use it, however, despite using device_map='auto', it downloads the model but attempts to fit it all in CPU RAM, where it promptly runs out of memory and crashes. I can use the exact same code with the base model instead of my merged model, and it works as expected.

Does anyone know what's going on? Do I need to change something when merging/saving the merged model? When loading it? Any help would be appreciated.
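For reference, this is the loading variant I'm planning to try next; these are standard transformers arguments as far as I know, and the repo id is a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "your-user/your-merged-model",   # placeholder repo id
    torch_dtype=torch.float16,       # don't inflate fp16 weights to fp32 in RAM
    device_map="auto",
    low_cpu_mem_usage=True,
    offload_folder="offload",        # let accelerate spill layers to disk
)
```

I've also seen the suggestion to save the merge sharded, e.g. `merged.save_pretrained("merged", max_shard_size="2GB")`, so no single file has to be materialized whole at load time.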
2023-08-18T06:59:15
https://www.reddit.com/r/LocalLLaMA/comments/15ud06t/accelerate_not_working_with_merged_model/
rmt77
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ud06t
false
null
t3_15ud06t
/r/LocalLLaMA/comments/15ud06t/accelerate_not_working_with_merged_model/
false
false
self
1
null
How are your tokens/second speeds?
1
[removed]
2023-08-18T09:48:12
https://www.reddit.com/r/LocalLLaMA/comments/15ufyrb/how_are_your_tokenssecond_speeds/
innocuousAzureus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ufyrb
false
null
t3_15ufyrb
/r/LocalLLaMA/comments/15ufyrb/how_are_your_tokenssecond_speeds/
false
false
self
1
null
Any ideas on how to implement Flowise flows into sheets, like GPT for sheets does?
1
This is an amazing project (GPT for Sheets) for productivity, and it demands zero programming knowledge: [https://workspace.google.com/marketplace/app/gpt_for_sheets_and_docs/677318054654](https://workspace.google.com/marketplace/app/gpt_for_sheets_and_docs/677318054654)

However, it is based on stateless OpenAI:

- Information is not up to date, and there is no ability to use our documents, databases, or the Internet.
- No ability to use open-source models like Llama 2, and no LangChain agents and tools.

On the other hand, we have Flowise, which can do amazing things without coding and can use open source, but it's not so direct for quick projects built around sheet logic. OK, Flowise is no-code, but if you want to create batch stuff, you need to code.

So the question is: how could I hook Flowise up to Sheets, the same way GPT for Sheets does? If I have a flow and I can call it from a sheet, that would be so amazing for quick everyday tasks. Also, I could have a sheet of tasks and let my computer run all night, instead of paying OpenAI.

Thank you for any ideas.
2023-08-18T10:01:13
https://www.reddit.com/r/LocalLLaMA/comments/15ug7b6/any_ideas_on_how_to_implement_flowise_flows_into/
dimknaf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ug7b6
false
null
t3_15ug7b6
/r/LocalLLaMA/comments/15ug7b6/any_ideas_on_how_to_implement_flowise_flows_into/
false
false
self
1
null
What LLM topics, techniques, concepts, or tools are you interested in learning more about?
1
Data science events are everywhere, and LLM sessions are the most attended, be it virtual or in-person. Which LLM topic do you think is most interesting?
2023-08-18T10:54:40
https://www.reddit.com/r/LocalLLaMA/comments/15uh8ay/what_llm_topics_techniques_concepts_or_tools_are/
Data_Nerd1979
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15uh8ay
false
null
t3_15uh8ay
/r/LocalLLaMA/comments/15uh8ay/what_llm_topics_techniques_concepts_or_tools_are/
false
false
self
1
null
What does time-to-first-token depend upon?
1
I am new to LLaMA. I have only used the GPT-3 and GPT-4 APIs up until now. I wanted to explore LLaMA. I have seen some demos of LLaMA, but the time-to-first-token often appears to be a bit slow, although after the first token it generates the next ones quite fast. On the other hand, Perplexity Chat's LLaMA fork appears to produce the first token very fast. What does the time to first token depend upon, and what are the tricks to speed it up?
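From what I've gathered so far, the first token can't appear until the whole prompt has been processed ("prefill"), so a rough mental model looks like this (numbers are made-up placeholders):

```python
prompt_tokens = 1500
prefill_tps = 500   # prompt tokens/s the backend can ingest
decode_tps = 20     # new tokens/s once generation has started

ttft = prompt_tokens / prefill_tps
print(f"~{ttft:.1f}s to first token, then ~{1000 / decode_tps:.0f} ms per token")
```

If that's right, fast first tokens would come from faster prefill (better hardware, batching, prompt caching) or simply shorter prompts.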
2023-08-18T11:04:18
https://www.reddit.com/r/LocalLLaMA/comments/15uhffi/what_does_timetofirsttoken_depend_upon/
me219iitd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15uhffi
false
null
t3_15uhffi
/r/LocalLLaMA/comments/15uhffi/what_does_timetofirsttoken_depend_upon/
false
false
self
1
null
Training/Finetuning LLAMA2 on sentence data?
1
I have a lot of text data in a language that Llama 2 is not yet great at. I do not have the data in a Q/A format, just sentences. Is there a way I can use that to improve Llama 2 for that particular language, assuming that the quantity of data and computing resources are not a problem?
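For what it's worth, the shape I've seen suggested is plain continued pretraining: causal-LM loss on the raw sentences, no Q/A format needed. A minimal sketch with the HF Trainer, where paths and hyperparameters are placeholders:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# one sentence/paragraph per line in my_language.txt
ds = load_dataset("text", data_files={"train": "my_language.txt"})
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=1024),
            batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments("out", per_device_train_batch_size=1,
                           gradient_accumulation_steps=16, num_train_epochs=1),
    train_dataset=ds["train"],
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),  # causal LM
)
trainer.train()
```

The run_clm.py example script in the transformers repo does essentially this, if you'd rather not write the loop yourself.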
2023-08-18T12:21:03
https://www.reddit.com/r/LocalLLaMA/comments/15uj2dg/trainingfinetuning_llama2_on_sentence_data/
xidd1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15uj2dg
false
null
t3_15uj2dg
/r/LocalLLaMA/comments/15uj2dg/trainingfinetuning_llama2_on_sentence_data/
false
false
self
1
null
Exllama on windows using CPU
1
Is it possible to run ExLlama on Windows with the CPU, using PyTorch CPU? I tried, but I am getting `IndexError: list index out of range` from `_get_cuda_arch_flags` at `arch_list[-1] += '+PTX'`. From what I understand, it's angry that there are no CUDA devices.
2023-08-18T12:56:31
https://www.reddit.com/r/LocalLLaMA/comments/15ujv0c/exllama_on_windows_using_cpu/
HeavyDiamond8069
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ujv0c
false
null
t3_15ujv0c
/r/LocalLLaMA/comments/15ujv0c/exllama_on_windows_using_cpu/
false
false
self
1
null
API Documentation vs Comprehensive Tutorials: What's Your Preference?
1
Hey LLama Redditors,

I've been writing Declarai, an open-source LLM-related library, lately, and I've noticed two distinct approaches to educating users:

1. **API-Documented Approach**: Every class, method, and object is meticulously documented in full detail. This often means you get a technical reference for each piece, but you may need to connect the dots yourself to see the bigger picture.
2. **Tutorial-Comprehensive Approach**: Libraries like FastAPI lean more into comprehensive tutorials where users are guided step-by-step through common use cases, providing context and explaining the reasoning behind each step, from the simplest feature to the most complex techniques.

Both approaches have their merits, but I'm curious:

1. Which do you personally prefer, and why?
2. Do you like reference-style documentation where you can quickly find the specifics, or do you appreciate a more hands-on tutorial that walks you through the process?

Looking forward to hearing your thoughts!
2023-08-18T13:22:46
https://www.reddit.com/r/LocalLLaMA/comments/15ukh0q/api_documentation_vs_comprehensive_tutorials/
matkley12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ukh0q
false
null
t3_15ukh0q
/r/LocalLLaMA/comments/15ukh0q/api_documentation_vs_comprehensive_tutorials/
false
false
self
1
null
text preprocessing (custom Dataset)
2
Hello guys :) I created a custom dataset by converting scientific papers to txt files. Unfortunately, I do not know how to delete information like sources, picture captions, author information, etc.

1. Are there any methods/packages to clean the txt files? I tried to use spaCy to detect sentences in order to delete everything that is not a sentence, but spaCy detected almost everything as a sentence... I also tried textcl, which has a split_text_to_sentence function, but it was not successful either...
2. Basic question: Is it fatal if I don't remove the information?

Thanks a lot in advance :)
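In case it helps, here's the kind of hand-tuned heuristic filter I've started experimenting with; the patterns are just examples to tune on your own papers, not a general solution.

```python
import re

def clean(text):
    keep = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if re.match(r"^(figure|fig\.|table)\s*\d+", line, re.I):
            continue  # figure/table captions
        if re.search(r"\bet al\.", line) and len(line) < 120:
            continue  # likely a reference entry
        if len(line.split()) < 4:
            continue  # page numbers, running headers, stray fragments
        keep.append(line)
    return "\n".join(keep)

print(clean(open("paper.txt", encoding="utf-8").read())[:500])
```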
2023-08-18T13:40:59
https://www.reddit.com/r/LocalLLaMA/comments/15ukwm7/text_preprocessing_custom_dataset/
Enkay55
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ukwm7
false
null
t3_15ukwm7
/r/LocalLLaMA/comments/15ukwm7/text_preprocessing_custom_dataset/
false
false
default
2
null
How to use gunicorn with LLAMA2 from Hugging Face?
1
I have a Flask server where a LLaMA model is loaded via AutoClass (onto the GPU) from Hugging Face. I am not able to make the 'cuda' call when I launch it with gunicorn; I am getting 'CUDA: Cannot re-initialize CUDA in forked subprocess'. I tried a few workarounds from Stack Overflow with no success. Do you guys have any gist or tutorial that has implemented this? Any pointers are great!
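For reference, the workaround I've seen suggested most often is to make sure CUDA is first touched inside the worker process, i.e. load the model lazily after the fork and run gunicorn without --preload. A sketch (the model id is just an example):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
_model = None
_tok = None

def get_model():
    """Load on first request, so CUDA init happens in the worker, not the master."""
    global _model, _tok
    if _model is None:
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer
        _tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
        _model = AutoModelForCausalLM.from_pretrained(
            "meta-llama/Llama-2-7b-chat-hf",
            torch_dtype=torch.float16, device_map="auto")
    return _model, _tok

@app.route("/generate", methods=["POST"])
def generate():
    model, tok = get_model()
    inputs = tok(request.json["prompt"], return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=200)
    return jsonify(text=tok.decode(out[0], skip_special_tokens=True))
```

Then launch with plain `gunicorn -w 1 app:app` (no --preload); note that with more than one worker, each worker loads its own copy of the weights.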
2023-08-18T14:10:21
https://www.reddit.com/r/LocalLLaMA/comments/15ulnna/how_to_use_gunicorn_with_llama2_from_hugging_face/
UncertainLangur
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ulnna
false
null
t3_15ulnna
/r/LocalLLaMA/comments/15ulnna/how_to_use_gunicorn_with_llama2_from_hugging_face/
false
false
self
1
null
FlexFlow Serve: Low-Latency, High-Performance LLM Serving
1
Seems to be pretty decent at speeding up inference of llama models with speculative inference without too much memory overhead. [https://github.com/flexflow/FlexFlow](https://github.com/flexflow/FlexFlow) Would be interesting if it worked with quantized models.
2023-08-18T14:22:52
https://www.reddit.com/r/LocalLLaMA/comments/15ulyny/flexflow_serve_lowlatency_highperformance_llm/
ptxtra
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ulyny
false
null
t3_15ulyny
/r/LocalLLaMA/comments/15ulyny/flexflow_serve_lowlatency_highperformance_llm/
false
false
self
1
{'enabled': False, 'images': [{'id': '5TG_1S1MwC6RwxVLTAt8lvT26shRYwGIGC0lB6dy1VE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HL6AYg8bOwyRuLR4IF73wdFJOn_4MIiPguS438vJ7e8.jpg?width=108&crop=smart&auto=webp&s=b6657a190aeb8f5eb125f5e00892e34affe66aef', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HL6AYg8bOwyRuLR4IF73wdFJOn_4MIiPguS438vJ7e8.jpg?width=216&crop=smart&auto=webp&s=fb17c25cdcc8ba6515e887325dfcb3b10e44fdeb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HL6AYg8bOwyRuLR4IF73wdFJOn_4MIiPguS438vJ7e8.jpg?width=320&crop=smart&auto=webp&s=37b92d9e562b1ff2ac804e8c9704f48cb7942015', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HL6AYg8bOwyRuLR4IF73wdFJOn_4MIiPguS438vJ7e8.jpg?width=640&crop=smart&auto=webp&s=5616d187725d3d0186812fc2e35aa87cf9047c89', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HL6AYg8bOwyRuLR4IF73wdFJOn_4MIiPguS438vJ7e8.jpg?width=960&crop=smart&auto=webp&s=0ac1c2beac63d3975f4da0c5ecd4fedd21ca1556', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HL6AYg8bOwyRuLR4IF73wdFJOn_4MIiPguS438vJ7e8.jpg?width=1080&crop=smart&auto=webp&s=5fa0b28c4dc7e3e86a77e5196d4a9d3741e7d6e1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HL6AYg8bOwyRuLR4IF73wdFJOn_4MIiPguS438vJ7e8.jpg?auto=webp&s=d616a3cc411b29a61a4a5ae17cc0574c0b4248ef', 'width': 1200}, 'variants': {}}]}
Building a RAG customer service chatbot
1
Hi there! I have been working on a project to build a customer service chatbot. The bot would interact with clients and answer their questions. It will have access to documents to help it answer questions about things like pricing, available services, etc. I have been looking into LangChain, but I could only find QA agents, not chatbots whose tone and attitude can be adjusted by prompting. Has anyone worked on something similar? Are there any open source projects for something like this? I would be grateful if someone could offer any tips/help.
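For reference, here's the rough shape I've been experimenting with in LangChain, where the tone lives in the QA prompt passed via combine_docs_chain_kwargs. A sketch: `my_llm` and `my_vectorstore` are placeholders for your own model and index, and I'm not certain this is the idiomatic way.

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate

# The persona/tone is set here; {context} and {question} are filled in by the chain.
qa_prompt = PromptTemplate.from_template(
    "You are a friendly, patient support agent for AcmeCo. Answer using only "
    "the context below; if the answer isn't there, apologise and offer to "
    "escalate to a human.\n\nContext:\n{context}\n\nQuestion: {question}\nAnswer:"
)

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
chain = ConversationalRetrievalChain.from_llm(
    llm=my_llm,                              # e.g. a LlamaCpp or HF pipeline wrapper
    retriever=my_vectorstore.as_retriever(),
    memory=memory,
    combine_docs_chain_kwargs={"prompt": qa_prompt},
)

print(chain({"question": "How much does the Pro plan cost?"})["answer"])
```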
2023-08-18T14:58:06
https://www.reddit.com/r/LocalLLaMA/comments/15umuqj/builing_a_rag_customer_service_chatbot/
Amgadoz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15umuqj
false
null
t3_15umuqj
/r/LocalLLaMA/comments/15umuqj/builing_a_rag_customer_service_chatbot/
false
false
self
1
null
Tokenizers Truncation during Fine-tuning with Large Texts
1
[removed]
2023-08-18T16:08:22
https://www.reddit.com/r/LocalLLaMA/comments/15uopwf/tokenizers_truncation_during_finetuning_with/
nrepesh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15uopwf
false
null
t3_15uopwf
/r/LocalLLaMA/comments/15uopwf/tokenizers_truncation_during_finetuning_with/
false
false
self
1
{'enabled': False, 'images': [{'id': 'gjHxd9rOFf-otsPthPyuAvgMsbSad0dlkDuex_1RVjE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7tpHlvIQYEZAdqwnDuby7z-DaMmMRUw4wY9H5BVFjbk.jpg?width=108&crop=smart&auto=webp&s=a0b38ef2f95a8a36dd0a2d9f8cd228b34ed67b04', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7tpHlvIQYEZAdqwnDuby7z-DaMmMRUw4wY9H5BVFjbk.jpg?width=216&crop=smart&auto=webp&s=98e362db2c15afdc284c97dc913a3fb957f63005', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7tpHlvIQYEZAdqwnDuby7z-DaMmMRUw4wY9H5BVFjbk.jpg?width=320&crop=smart&auto=webp&s=071f03be8b6ebb9dd063bc0c4ae7e548f416a005', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7tpHlvIQYEZAdqwnDuby7z-DaMmMRUw4wY9H5BVFjbk.jpg?width=640&crop=smart&auto=webp&s=e8c109e4ce8e584bb675d8b8a9d0dd3bb520d943', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7tpHlvIQYEZAdqwnDuby7z-DaMmMRUw4wY9H5BVFjbk.jpg?width=960&crop=smart&auto=webp&s=d0ac141e496635ccba8146cdd5ab642317f0b702', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7tpHlvIQYEZAdqwnDuby7z-DaMmMRUw4wY9H5BVFjbk.jpg?width=1080&crop=smart&auto=webp&s=470c4b5a4023b612c628513536a9229185dc51c8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7tpHlvIQYEZAdqwnDuby7z-DaMmMRUw4wY9H5BVFjbk.jpg?auto=webp&s=d332bdd0796a33974613c1c0b55ea4be90594ed1', 'width': 1200}, 'variants': {}}]}
Fine-Tuning LLama2 models.
1
[removed]
2023-08-18T16:24:19
https://www.reddit.com/r/LocalLLaMA/comments/15up4lj/finetuning_llama2_models/
arthurwolf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15up4lj
false
null
t3_15up4lj
/r/LocalLLaMA/comments/15up4lj/finetuning_llama2_models/
false
false
self
1
{'enabled': False, 'images': [{'id': 'WEqe5L9e3uodqGGDPhgCoDZAOTmDsrJCeVEVBjMJX-A', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/BBVZnIPu7jjjypwndVSHoI0Fkx6msOrFINQ55r5sWX8.jpg?width=108&crop=smart&auto=webp&s=cb8cd1fba0ca5df419eef882b37c7652264ce056', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/BBVZnIPu7jjjypwndVSHoI0Fkx6msOrFINQ55r5sWX8.jpg?width=216&crop=smart&auto=webp&s=09cee969eed64882e96d9a4679534746c2275fc1', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/BBVZnIPu7jjjypwndVSHoI0Fkx6msOrFINQ55r5sWX8.jpg?width=320&crop=smart&auto=webp&s=a6000693ae95627c40a35e58e2711035e9b442e0', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/BBVZnIPu7jjjypwndVSHoI0Fkx6msOrFINQ55r5sWX8.jpg?width=640&crop=smart&auto=webp&s=2ba938c67cff3699075a135f9f3648ada1be824f', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/BBVZnIPu7jjjypwndVSHoI0Fkx6msOrFINQ55r5sWX8.jpg?width=960&crop=smart&auto=webp&s=0dc9d549adaed5eebfa80bd7e0ce458a682f3f66', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/BBVZnIPu7jjjypwndVSHoI0Fkx6msOrFINQ55r5sWX8.jpg?width=1080&crop=smart&auto=webp&s=bc461386252cf5badb670be29dbe681183c788f3', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/BBVZnIPu7jjjypwndVSHoI0Fkx6msOrFINQ55r5sWX8.jpg?auto=webp&s=3903a047959628480febc9c8267554c05f3ec721', 'width': 1199}, 'variants': {}}]}
Releasing EverythingLM V2 dataset, now 100% GPT-4 generated!
1
[https://huggingface.co/datasets/totally-not-an-llm/EverythingLM-data-V2](https://huggingface.co/datasets/totally-not-an-llm/EverythingLM-data-V2) It is in alpaca format for accessibility; the model will be trained like a normal chat model. On HF I explained some issues in V1 that will hopefully be fixed. If anyone has any suggestions or questions let me know.
2023-08-18T16:46:22
https://www.reddit.com/r/LocalLLaMA/comments/15upp59/releasing_everythinglm_v2_dataset_now_100_gpt4/
pokeuser61
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15upp59
false
null
t3_15upp59
/r/LocalLLaMA/comments/15upp59/releasing_everythinglm_v2_dataset_now_100_gpt4/
false
false
self
1
{'enabled': False, 'images': [{'id': '89mhat411KgccFaMhErA53piAIEjmKvexhm2hpKNvMw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/x4IGoHkvUZg8NJ-KyMXq1LEoHQdYtjv0F75jXZqwNvI.jpg?width=108&crop=smart&auto=webp&s=a14cc468a829927330d55e3866039ab30334ddd2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/x4IGoHkvUZg8NJ-KyMXq1LEoHQdYtjv0F75jXZqwNvI.jpg?width=216&crop=smart&auto=webp&s=5fc8464718efbb3b1df8f319740c767bb42ee757', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/x4IGoHkvUZg8NJ-KyMXq1LEoHQdYtjv0F75jXZqwNvI.jpg?width=320&crop=smart&auto=webp&s=1883a0c460aa29d431f7750da33efcfc595e3a37', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/x4IGoHkvUZg8NJ-KyMXq1LEoHQdYtjv0F75jXZqwNvI.jpg?width=640&crop=smart&auto=webp&s=cdb33caa984d64524d2ecc1de2e92dfb19a76e52', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/x4IGoHkvUZg8NJ-KyMXq1LEoHQdYtjv0F75jXZqwNvI.jpg?width=960&crop=smart&auto=webp&s=71af9fc5a43419d418b7a16e07ad77307d0c8936', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/x4IGoHkvUZg8NJ-KyMXq1LEoHQdYtjv0F75jXZqwNvI.jpg?width=1080&crop=smart&auto=webp&s=ae8d0444114a3d723ab5a656fc342d65a74a9793', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/x4IGoHkvUZg8NJ-KyMXq1LEoHQdYtjv0F75jXZqwNvI.jpg?auto=webp&s=e284928cb7956fc19366bafd3262b192d0be967e', 'width': 1200}, 'variants': {}}]}
Is the gptq model format the same as exllama?
1
I converted my model with AutoGPTQ and have a 4-bit model; how can I use it with ExLlama? Do I need to convert it again? I thought ExLlama used the same model format I converted to (4 bits with AutoGPTQ), but it's giving me an error about the header size being too big.
2023-08-18T16:51:18
https://www.reddit.com/r/LocalLLaMA/comments/15uptqv/is_the_gptq_model_format_the_same_as_exllama/
biggieshiba
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15uptqv
false
null
t3_15uptqv
/r/LocalLLaMA/comments/15uptqv/is_the_gptq_model_format_the_same_as_exllama/
false
false
self
1
null
Can someone please explain this to me? I've tried different variations of the question and the result was always the same. I am baffled. Thank you.
1
2023-08-18T17:48:54
https://imgur.com/a/y38tMGV
Solstice_Projekt
imgur.com
1970-01-01T00:00:00
0
{}
15urau5
false
{'oembed': {'description': 'Discover the magic of the internet at Imgur, a community powered entertainment destination. Lift your spirits with funny jokes, trending memes, entertaining gifs, inspiring stories, viral videos, and so much more from users.', 'height': 190, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fimgur.com%2Fa%2Fy38tMGV%2Fembed%3Fpub%3Dtrue%26ref%3Dhttps%253A%252F%252Fembed.ly%26w%3D900&display_name=Imgur&url=https%3A%2F%2Fimgur.com%2Fa%2Fy38tMGV&image=https%3A%2F%2Fi.imgur.com%2FXJVJKBx.jpg%3Ffb&key=2aa3c4d5f3de4f5b9120b660ad850dc9&type=text%2Fhtml&schema=imgur" width="600" height="190" scrolling="no" title="Imgur embed" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe>', 'provider_name': 'Imgur', 'provider_url': 'http://imgur.com', 'thumbnail_height': 315, 'thumbnail_url': 'https://i.imgur.com/XJVJKBx.jpg?fb', 'thumbnail_width': 600, 'title': 'Imgur', 'type': 'rich', 'url': 'https://imgur.com/a/y38tMGV', 'version': '1.0', 'width': 600}, 'type': 'imgur.com'}
t3_15urau5
/r/LocalLLaMA/comments/15urau5/can_someone_please_explain_this_to_me_ive_tried/
false
false
https://b.thumbs.redditm…fmlMuAr7k1rU.jpg
1
{'enabled': False, 'images': [{'id': 'qAIUWk9EyAKPcrkLq6Tb3JIqGkpmUTksR_yFYHzMQxI', 'resolutions': [{'height': 26, 'url': 'https://external-preview.redd.it/H_BJ_dCJhGsaPSeruvoiV92h65mkSaK8d5dSOB0N8Pk.jpg?width=108&crop=smart&auto=webp&s=ee212c44fbe876146e8dec10480d96284a78c67d', 'width': 108}, {'height': 53, 'url': 'https://external-preview.redd.it/H_BJ_dCJhGsaPSeruvoiV92h65mkSaK8d5dSOB0N8Pk.jpg?width=216&crop=smart&auto=webp&s=4929d5686205caf9ec178bbfabd68539e84128f9', 'width': 216}, {'height': 79, 'url': 'https://external-preview.redd.it/H_BJ_dCJhGsaPSeruvoiV92h65mkSaK8d5dSOB0N8Pk.jpg?width=320&crop=smart&auto=webp&s=82470e6f4725def86775ac0312479e736a118e04', 'width': 320}, {'height': 159, 'url': 'https://external-preview.redd.it/H_BJ_dCJhGsaPSeruvoiV92h65mkSaK8d5dSOB0N8Pk.jpg?width=640&crop=smart&auto=webp&s=9f47f2e8e7b2b360b5b74f2601dfdb448b1ce5e0', 'width': 640}, {'height': 239, 'url': 'https://external-preview.redd.it/H_BJ_dCJhGsaPSeruvoiV92h65mkSaK8d5dSOB0N8Pk.jpg?width=960&crop=smart&auto=webp&s=57fd93f6ef198bca528b95110bdafaae05113550', 'width': 960}, {'height': 269, 'url': 'https://external-preview.redd.it/H_BJ_dCJhGsaPSeruvoiV92h65mkSaK8d5dSOB0N8Pk.jpg?width=1080&crop=smart&auto=webp&s=5760b9d981feeb87b5d4b8387d4566fc42bed8d6', 'width': 1080}], 'source': {'height': 292, 'url': 'https://external-preview.redd.it/H_BJ_dCJhGsaPSeruvoiV92h65mkSaK8d5dSOB0N8Pk.jpg?auto=webp&s=b6cfd7be1d6bfa3dde2b1d77a4f9dce0a9d5b0a0', 'width': 1170}, 'variants': {}}]}
Pluralistic: "Open" "AI" isn't (18 August 2023) - Cory gives a pretty good critique of the current status of open source a.i.
1
2023-08-18T18:00:17
https://pluralistic.net/2023/08/18/openwashing/
freedom2adventure
pluralistic.net
1970-01-01T00:00:00
0
{}
15url7v
false
null
t3_15url7v
/r/LocalLLaMA/comments/15url7v/pluralistic_open_ai_isnt_18_august_2023_cory/
false
false
default
1
null
Who can tell me about Claude 2.0's restrictions?
1
[removed]
2023-08-18T18:03:27
https://www.reddit.com/r/LocalLLaMA/comments/15uroe6/who_can_tell_me_the_claude_20_restriction/
Turdoer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15uroe6
false
null
t3_15uroe6
/r/LocalLLaMA/comments/15uroe6/who_can_tell_me_the_claude_20_restriction/
false
false
self
1
null
Llama-2-7B-32K-Instruct
1
2023-08-18T18:54:46
https://huggingface.co/togethercomputer/LLaMA-2-7B-32K
Thistleknot
huggingface.co
1970-01-01T00:00:00
0
{}
15ut001
false
null
t3_15ut001
/r/LocalLLaMA/comments/15ut001/llama27b32kinstruct/
false
false
https://b.thumbs.redditm…8vKTayy8l5TE.jpg
1
{'enabled': False, 'images': [{'id': 'yoAlnlO31bWvxDt2ZPQlTqH9iazTvQwoeS-sUqsWgtw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/85BR1V8c4hx_wcFHOrIntRGZZLf-PLK-slB3rJOTsOY.jpg?width=108&crop=smart&auto=webp&s=ecde1bdfdd91dfd590fabe657c5615966e36eb14', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/85BR1V8c4hx_wcFHOrIntRGZZLf-PLK-slB3rJOTsOY.jpg?width=216&crop=smart&auto=webp&s=7a69f688a67587c94ed26be64a481d68052f3079', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/85BR1V8c4hx_wcFHOrIntRGZZLf-PLK-slB3rJOTsOY.jpg?width=320&crop=smart&auto=webp&s=1b3a3c61498f0435036449c4d6b4327ad292008a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/85BR1V8c4hx_wcFHOrIntRGZZLf-PLK-slB3rJOTsOY.jpg?width=640&crop=smart&auto=webp&s=a270f1bd5077999a037ee576111d0712d9a582a6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/85BR1V8c4hx_wcFHOrIntRGZZLf-PLK-slB3rJOTsOY.jpg?width=960&crop=smart&auto=webp&s=1ae788bc4b040349b2aa07cf4caf6ebea35e78e5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/85BR1V8c4hx_wcFHOrIntRGZZLf-PLK-slB3rJOTsOY.jpg?width=1080&crop=smart&auto=webp&s=3114ca2a22bee5ee2cd02e824a2fc5c2faa5d107', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/85BR1V8c4hx_wcFHOrIntRGZZLf-PLK-slB3rJOTsOY.jpg?auto=webp&s=b3dc3ec328d20f24bf3fd21426f2588cecf707ff', 'width': 1200}, 'variants': {}}]}
Fine-tuning in LLMs
1
Fine-tuning is the process of adjusting the parameters of a pre-trained foundation model to make it better at a specific task. It can improve LLM performance on a variety of tasks, such as machine translation, question answering, and text summarization (a minimal code sketch follows this post). [Learn LLMs](https://hubs.la/Q01_wyKQ0) from expert instructors and build your own ChatGPT-style model for your area of work.
2023-08-18T19:36:58
https://www.reddit.com/r/LocalLLaMA/comments/15uu2pj/fine_tuning_in_llm/
Emergency_Hat9105
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15uu2pj
false
null
t3_15uu2pj
/r/LocalLLaMA/comments/15uu2pj/fine_tuning_in_llm/
false
false
default
1
{'enabled': False, 'images': [{'id': 'q_gcnTh0VBWEHa1CpHkZMWLDcaoygOegqbe0UUUjVYI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Ojn6hupD0TOTCs-TpLk-xzhJibCMSCGG-Q9R1XgZ7Uo.jpg?width=108&crop=smart&auto=webp&s=d845299c9c85ccc475919a74953503f6142b4ef6', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Ojn6hupD0TOTCs-TpLk-xzhJibCMSCGG-Q9R1XgZ7Uo.jpg?width=216&crop=smart&auto=webp&s=d5bf500eda8d2e0a7a8f2fcd843336b75b112806', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Ojn6hupD0TOTCs-TpLk-xzhJibCMSCGG-Q9R1XgZ7Uo.jpg?width=320&crop=smart&auto=webp&s=5f4e825dcf69e09c5790071cabd155a10bf0ff03', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Ojn6hupD0TOTCs-TpLk-xzhJibCMSCGG-Q9R1XgZ7Uo.jpg?width=640&crop=smart&auto=webp&s=e0ebf28f8f7ab1329c8fa0163691553ba8ebbd60', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Ojn6hupD0TOTCs-TpLk-xzhJibCMSCGG-Q9R1XgZ7Uo.jpg?width=960&crop=smart&auto=webp&s=888678da01e62abcd0235873f77e65ae06c6a9df', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Ojn6hupD0TOTCs-TpLk-xzhJibCMSCGG-Q9R1XgZ7Uo.jpg?width=1080&crop=smart&auto=webp&s=92784fc5bbebf5234b8d7c86c6102ded7911e7b9', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/Ojn6hupD0TOTCs-TpLk-xzhJibCMSCGG-Q9R1XgZ7Uo.jpg?auto=webp&s=7857f22774d82df664f84be7087ba0f6b3555554', 'width': 1920}, 'variants': {}}]}
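To make the definition above concrete, here is a minimal supervised fine-tuning sketch with Hugging Face transformers. The model name, train.txt file, and hyperparameters are illustrative placeholders; this is one common recipe, not the only one:

```python
# Minimal causal-LM fine-tuning sketch (placeholder model/dataset names).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "meta-llama/Llama-2-7b-hf"        # any causal-LM checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token      # Llama ships without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tokenize a plain-text dataset into truncated blocks.
ds = load_dataset("text", data_files={"train": "train.txt"})["train"]
ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
            batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1,
                           learning_rate=2e-5, fp16=True),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

For models of this size, full fine-tuning is usually combined with parameter-efficient methods such as LoRA so the optimizer state fits in GPU memory.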
Quantize Pre-Trained Model Using QLoRA or LoRA, PEFT Technique
1
I would like to ask how I can use QLoRA or parameter-efficient fine-tuning (PEFT) with a model that is not registered on Hugging Face and is instead based on [OFA](https://github.com/OFA-Sys/OFA). Here is the repo of the model: [GitHub - taokz/BiomedGPT: BiomedGPT: A Unified and Generalist Biomedical Generative Pre-trained Transformer for Vision, Language, and Multimodal Tasks](https://github.com/taokz/BiomedGPT/tree/main). I am trying to quantize the Tiny version, but I don't know whether I need LoRA, or how to apply parameter-efficient fine-tuning to it. (A sketch of attaching LoRA to a custom model follows this post.)
2023-08-18T19:39:20
https://www.reddit.com/r/LocalLLaMA/comments/15uu4tt/quantize_pertrained_model_using_qlora_or_lora/
Youness_Elbrag
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15uu4tt
false
null
t3_15uu4tt
/r/LocalLLaMA/comments/15uu4tt/quantize_pertrained_model_using_qlora_or_lora/
false
false
self
1
{'enabled': False, 'images': [{'id': '1FobZ6IwsyLOVfEFJkGsi6RKWWC__8hAAU66GCSCBr0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZIHI58VkJ_iO4uitOhBxKp34A-y2e4l3iWQ1tiM4ox0.jpg?width=108&crop=smart&auto=webp&s=782d2a58fdf78a4f61e22121ac1f9a3463f20c8e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZIHI58VkJ_iO4uitOhBxKp34A-y2e4l3iWQ1tiM4ox0.jpg?width=216&crop=smart&auto=webp&s=ebde9a7bf2110c8d6ec80573287f836e9dc9d76d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZIHI58VkJ_iO4uitOhBxKp34A-y2e4l3iWQ1tiM4ox0.jpg?width=320&crop=smart&auto=webp&s=66baf34d013391c5ac74221e07bc8fc1148f8798', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZIHI58VkJ_iO4uitOhBxKp34A-y2e4l3iWQ1tiM4ox0.jpg?width=640&crop=smart&auto=webp&s=d707a793eb065526c63a3b1ab97c71f6e4f6a604', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZIHI58VkJ_iO4uitOhBxKp34A-y2e4l3iWQ1tiM4ox0.jpg?width=960&crop=smart&auto=webp&s=7078926ce3ac58b55ddcc35fc75c3b141af5de56', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZIHI58VkJ_iO4uitOhBxKp34A-y2e4l3iWQ1tiM4ox0.jpg?width=1080&crop=smart&auto=webp&s=b329235ec7a888272977adb4b8cea7489fa623a1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZIHI58VkJ_iO4uitOhBxKp34A-y2e4l3iWQ1tiM4ox0.jpg?auto=webp&s=7dbd68cda4942f398b04ae9914e7857368eb474d', 'width': 1200}, 'variants': {}}]}
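For what it's worth, peft doesn't require the base model to come from the Hugging Face Hub: LoRA can be injected into any torch nn.Module by naming the Linear layers to adapt. A sketch with a stand-in module; the q_proj/v_proj names are hypothetical, so print your BiomedGPT model's named_modules() to find the real ones:

```python
# Sketch: attaching LoRA to a custom (non-Hub) PyTorch model with peft.
import torch
import torch.nn as nn
from peft import LoraConfig, get_peft_model

# Stand-in for the real OFA/BiomedGPT network; any nn.Module works as long
# as target_modules matches the names of its Linear layers.
class TinyAttention(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.out(self.q_proj(x) * self.v_proj(x))

model = TinyAttention()
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                    target_modules=["q_proj", "v_proj"])  # hypothetical names
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the LoRA weights are trainable
```

The 4-bit QLoRA recipe additionally depends on bitsandbytes, whose convenient load_in_4bit path assumes a transformers-style model, so for an OFA-based model plain LoRA is probably the easier starting point.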
*Introducing "Endless AI": Chat with your AI Companions! 🤖❤️ [BETA TESTERS NEEDED]*
1
Hey Redditors, We're super excited to share our latest project: *Endless AI*. Ever felt like diving deep into conversation, sharing stories, or just having a casual chat without any human strings attached? Our app lets you do just that, with remarkable *UNCENSORED* and custom trained AI companions crafted to be your virtual girlfriend. We're currently preparing for our grand launch, but before that, we need your help! We're rolling out a public beta version to test the waters and ensure that our systems can handle real-world scenarios. That's where you come in! 🔧 *Why beta test?* - Get a sneak peek of what's to come - Help us identify and squash any lingering bugs - Provide invaluable feedback to shape the final product - Enjoy a bonus of 500 free messages to all bots (heads up, it'll be subscription-based later!) ⚠️ *A few things to note:* - Depending on user influx, there might be occasional downtimes (but we're crossing our fingers!). Custom LLMs are hard and expensive :) - If you stumble upon any bugs, don't be shy, let us know! - As of now, AI girlfriends can't send images. But hey, they have some cool artwork in their profile galleries for you to check out! 😉 📲 *How to join the beta test:* 1. *Install TestFlight:* If you're new to iOS beta testing, you'll need TestFlight. It's an official app by Apple that lets you try out beta versions of apps. Simply head to the AppStore, search for "TestFlight", and install it. 2. *Access Endless AI on TestFlight:* Once you have TestFlight installed, [click on this link](https://testflight.apple.com/join/o5nMKF5y) (make sure to open it on your iOS device). This will prompt you to join the "Endless AI" beta test. 3. *Download & Explore:* After joining, you'll see an option to download "Endless AI" within TestFlight. Install, launch, and dive into endless conversations! We genuinely value any feedback, be it praises, critiques, or the wildest of suggestions. Help us in creating a more refined and enjoyable experience for all. And Android users, hang tight! Your version will be up and running in a couple of weeks. Cheers, The Endless AI Dev Team PS: If your AI says it sent a picture, because you asked for it, she's lying to you... for now. 😉
2023-08-18T19:40:20
https://www.reddit.com/r/LocalLLaMA/comments/15uu5qr/introducing_endless_ai_chat_with_your_ai/
Gummy_God
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15uu5qr
false
null
t3_15uu5qr
/r/LocalLLaMA/comments/15uu5qr/introducing_endless_ai_chat_with_your_ai/
false
false
self
1
{'enabled': False, 'images': [{'id': 'cMXrJgPGR84CfjTvZuz_djCWEJk4l3k8c2nvFQdrP8s', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/TKSK8ssLdeabMnQRCzKWc2PynN21NalZIjqH4mrZJA0.jpg?width=108&crop=smart&auto=webp&s=ea5630ff1755a41b4c86744b48830a96028f0794', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/TKSK8ssLdeabMnQRCzKWc2PynN21NalZIjqH4mrZJA0.jpg?width=216&crop=smart&auto=webp&s=f08bdba77edca1f5d192bd32e2347d363b2e9885', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/TKSK8ssLdeabMnQRCzKWc2PynN21NalZIjqH4mrZJA0.jpg?width=320&crop=smart&auto=webp&s=f3035d0a39c474f5ac9400cd01e5230fd1ea01f5', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/TKSK8ssLdeabMnQRCzKWc2PynN21NalZIjqH4mrZJA0.jpg?width=640&crop=smart&auto=webp&s=124b91bbe72eb0281d9955222107beabe1322da3', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/TKSK8ssLdeabMnQRCzKWc2PynN21NalZIjqH4mrZJA0.jpg?width=960&crop=smart&auto=webp&s=ecec10515b4ad34ba7ed1432172f9a07431f12f6', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/TKSK8ssLdeabMnQRCzKWc2PynN21NalZIjqH4mrZJA0.jpg?auto=webp&s=cb453520a0de2f08a1a6ef7a46a6fefef512a0ac', 'width': 1024}, 'variants': {}}]}
How do you really, really prompt Llama 2?
1
There seem to be all sorts of ideas about how to properly prompt Llama 2. Sam Witteveen uses this formatting: ''' [INST]<<SYS>> You are a Neuroscientist with a talent for explaining very complex subjects to lay people <</SYS>> Chat History: {chat_history} Human: {user_input} Assistant:[/INST] ''' Whereas the Hugging Face guide to Llama 2 prompting has a slightly different format, wrapping each whole turn in <s>...</s> and closing the [/INST] tag right after the {user_input}. Here's Hugging Face's version: ''' <s>[INST] <<SYS>>{{ system_prompt }} <</SYS>> {{ user_msg_1 }} [/INST] {{ model_answer_1 }} </s> <s>[INST] {{ user_msg_2 }} [/INST].... ''' Then there is a recent Replicate blog on (you guessed it!) a guide to prompting Llama 2 :) where they say you shouldn't use "Human:" to denote the human speaking, and you should only wrap the human's input in [INST] tags, not the AI's. Here's their example: ''' correct_prompt_long = """\ [INST] Hi! [/INST] Hello! How are you? [INST] I'm great, thanks for asking. Could you help me with a task? [/INST] Of course, I'd be happy to help! [INST] How much wood could a wood chuck chuck or something like that? [/INST] """ ''' So I'm really confused as to the correct way! :) Am I overthinking this? (A prompt builder following Meta's reference format is sketched after this post.)
2023-08-18T19:41:14
https://www.reddit.com/r/LocalLLaMA/comments/15uu6lk/how_do_you_really_really_prompt_llama_2/
jacobgolden
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15uu6lk
false
null
t3_15uu6lk
/r/LocalLLaMA/comments/15uu6lk/how_do_you_really_really_prompt_llama_2/
false
false
self
1
null
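For what it's worth, the three formats are mostly reconcilable. In Meta's reference code, each human turn is wrapped in [INST] ... [/INST], the <<SYS>> block sits inside the first instruction, and each (instruction, answer) pair is wrapped in <s> ... </s>; since most tokenizers add <s> themselves, some guides omit it, and role labels like "Human:" are not part of the format. A sketch of a prompt builder following that reference format:

```python
# Build a Llama-2-chat prompt in the format used by Meta's reference code.
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def build_prompt(system_prompt, turns):
    """turns: list of (user_msg, model_answer); the last answer may be None."""
    prompt = ""
    for i, (user, answer) in enumerate(turns):
        if i == 0 and system_prompt:
            # The system block lives inside the first instruction.
            user = B_SYS + system_prompt + E_SYS + user
        prompt += f"<s>{B_INST} {user.strip()} {E_INST}"
        if answer is not None:
            prompt += f" {answer.strip()} </s>"
    return prompt

print(build_prompt(
    "You are a helpful assistant.",
    [("Hi!", "Hello! How are you?"),
     ("Could you help me with a task?", None)],
))
```

If you keep the explicit <s> markers in the string, tokenize with add_special_tokens=False so the BOS token isn't doubled.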
ChatGPT
1
ChatGPT, in particular, gained massive popularity within a short period due to its ability to produce human-like responses. It is trained on a far larger corpus than earlier models such as BERT. LLMs like ChatGPT excel at generating personalized, contextually relevant responses, making them valuable in customer-service applications. Compared to intent-based chatbots, LLM-powered chatbots can handle more complex, multi-turn inquiries, including product questions, conversational commerce, and technical support. It's exciting to [build your own custom LLM](https://hubs.la/Q01_wyKQ0) to meet your business needs.
2023-08-18T19:45:17
https://www.reddit.com/r/LocalLLaMA/comments/15uua51/chatgpt/
Emergency_Hat9105
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15uua51
false
null
t3_15uua51
/r/LocalLLaMA/comments/15uua51/chatgpt/
false
false
self
1
{'enabled': False, 'images': [{'id': 'q_gcnTh0VBWEHa1CpHkZMWLDcaoygOegqbe0UUUjVYI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Ojn6hupD0TOTCs-TpLk-xzhJibCMSCGG-Q9R1XgZ7Uo.jpg?width=108&crop=smart&auto=webp&s=d845299c9c85ccc475919a74953503f6142b4ef6', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Ojn6hupD0TOTCs-TpLk-xzhJibCMSCGG-Q9R1XgZ7Uo.jpg?width=216&crop=smart&auto=webp&s=d5bf500eda8d2e0a7a8f2fcd843336b75b112806', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Ojn6hupD0TOTCs-TpLk-xzhJibCMSCGG-Q9R1XgZ7Uo.jpg?width=320&crop=smart&auto=webp&s=5f4e825dcf69e09c5790071cabd155a10bf0ff03', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Ojn6hupD0TOTCs-TpLk-xzhJibCMSCGG-Q9R1XgZ7Uo.jpg?width=640&crop=smart&auto=webp&s=e0ebf28f8f7ab1329c8fa0163691553ba8ebbd60', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Ojn6hupD0TOTCs-TpLk-xzhJibCMSCGG-Q9R1XgZ7Uo.jpg?width=960&crop=smart&auto=webp&s=888678da01e62abcd0235873f77e65ae06c6a9df', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Ojn6hupD0TOTCs-TpLk-xzhJibCMSCGG-Q9R1XgZ7Uo.jpg?width=1080&crop=smart&auto=webp&s=92784fc5bbebf5234b8d7c86c6102ded7911e7b9', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/Ojn6hupD0TOTCs-TpLk-xzhJibCMSCGG-Q9R1XgZ7Uo.jpg?auto=webp&s=7857f22774d82df664f84be7087ba0f6b3555554', 'width': 1920}, 'variants': {}}]}
Introducing the orca-mini chatbot powered by the orca-mini-v3-7b model
1
👉 https://huggingface.co/spaces/psmathur/psmathur-orca_mini_v3_7b 🤔 How good is orca-mini-v3-7b? Do the evaluation results from Huggingface Open LLM leaderboard translate to real-world use cases? 🔍 Now you can figure it out for yourself! Dive into the chatbot and see how the open source 7b model stacks up in the world of massive language models. 🌍 ⏰ Hurry up before I run out of GPU credits! 😉
2023-08-18T20:11:44
https://www.reddit.com/r/LocalLLaMA/comments/15uuy4h/introducing_the_orcamini_chatbot_powered_by_the/
Remarkable-Spite-107
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15uuy4h
false
null
t3_15uuy4h
/r/LocalLLaMA/comments/15uuy4h/introducing_the_orcamini_chatbot_powered_by_the/
false
false
self
1
{'enabled': False, 'images': [{'id': 'ZaXzGqjwz7kjiDEGQj7IkQx76VANdzyaprmyHQNbn6o', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/BhWjGdGtDIiNtN5eaAiYaNnMOta4TtlPAK3Jik3eSSg.jpg?width=108&crop=smart&auto=webp&s=85242b3dab853888deebe0ee9c84210448b6dbc8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/BhWjGdGtDIiNtN5eaAiYaNnMOta4TtlPAK3Jik3eSSg.jpg?width=216&crop=smart&auto=webp&s=1d542dc9ff83692b1c3c155c933048d1fc4c241e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/BhWjGdGtDIiNtN5eaAiYaNnMOta4TtlPAK3Jik3eSSg.jpg?width=320&crop=smart&auto=webp&s=802d4feacab1dd61232f03efb4e2e612a47adf04', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/BhWjGdGtDIiNtN5eaAiYaNnMOta4TtlPAK3Jik3eSSg.jpg?width=640&crop=smart&auto=webp&s=1c7058507a305e7a038d254258b4891428b80d33', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/BhWjGdGtDIiNtN5eaAiYaNnMOta4TtlPAK3Jik3eSSg.jpg?width=960&crop=smart&auto=webp&s=1bc465ab151598093792f8b5115a34d0bd9624f5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/BhWjGdGtDIiNtN5eaAiYaNnMOta4TtlPAK3Jik3eSSg.jpg?width=1080&crop=smart&auto=webp&s=9fdde4eb938599e7b26f8870774b85dca26a5cf5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/BhWjGdGtDIiNtN5eaAiYaNnMOta4TtlPAK3Jik3eSSg.jpg?auto=webp&s=471c797284d88a33d654684ac0a8517c5e712b13', 'width': 1200}, 'variants': {}}]}
Cheetor, SotA multimodal
1
Honestly, it looks pretty cool: the authors built a vision-and-language benchmark and a multimodal LLM with novel training methods. Check it out.
2023-08-18T20:37:39
https://github.com/dcdmllm/cheetah
GodIsAWomaniser
github.com
1970-01-01T00:00:00
0
{}
15uvlsm
false
null
t3_15uvlsm
/r/LocalLLaMA/comments/15uvlsm/cheetor_sota_multimodal/
false
false
https://b.thumbs.redditm…yg8uScAJSH2c.jpg
1
{'enabled': False, 'images': [{'id': 'Lb39gfZp9D5G5zigQOhqbElshmd4mXB4mtPad0NdVrM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IS0tDo6S0Th8xN4H3aqRuhP21C_e9vSINqKBfUGynFk.jpg?width=108&crop=smart&auto=webp&s=70b2796eb9b3b7b1a4c21f6e7579834eed517142', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/IS0tDo6S0Th8xN4H3aqRuhP21C_e9vSINqKBfUGynFk.jpg?width=216&crop=smart&auto=webp&s=02a874a10290a17b049a3c6bd805a0d678e2fa95', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/IS0tDo6S0Th8xN4H3aqRuhP21C_e9vSINqKBfUGynFk.jpg?width=320&crop=smart&auto=webp&s=ec15f6188b4fc1bba449ad9ded605c0a12259038', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/IS0tDo6S0Th8xN4H3aqRuhP21C_e9vSINqKBfUGynFk.jpg?width=640&crop=smart&auto=webp&s=c7a72ff4a5eeab5bac34c0c10184e083b296d246', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/IS0tDo6S0Th8xN4H3aqRuhP21C_e9vSINqKBfUGynFk.jpg?width=960&crop=smart&auto=webp&s=dddc463859c406c4a2be536bc12d063a27a4a292', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/IS0tDo6S0Th8xN4H3aqRuhP21C_e9vSINqKBfUGynFk.jpg?width=1080&crop=smart&auto=webp&s=eba6c41eee9dc9438edb1fdae5b2d1412cd17d4c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/IS0tDo6S0Th8xN4H3aqRuhP21C_e9vSINqKBfUGynFk.jpg?auto=webp&s=1ca3b73bafae1d6b397c717447de295bba9c2e75', 'width': 1200}, 'variants': {}}]}
Run Llama 70B on an AWS notebook
1
Hello, I am trying to spin up Llama on AWS. I managed to get it to run with vLLM, and it works great. However, vLLM does not yet support RoPE scaling, and I need more than 4,096 tokens of context. I am wondering which model loader I can use. I have 8× A10G GPUs (24 GB each, so roughly 192 GB of GPU RAM total; an AWS g5.48xlarge). I tried transformers, with ~22,700 MiB allotted to each GPU, but it is so slow, and I see that the GPUs are barely at 10-30 percent utilization. When vLLM runs, all GPUs are at 100% and it's fast. (Screenshot of GPU utilization: https://preview.redd.it/mbiu2v7unxib1.png?width=674&format=png&auto=webp&s=abaabc35daf52858949146dd25cf26d9b5373664) (A transformers + RoPE-scaling sketch follows this post.)
2023-08-18T21:02:46
https://www.reddit.com/r/LocalLLaMA/comments/15uw9e9/run_llama_70b_on_aws_notebook/
Alert_Record5063
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15uw9e9
false
null
t3_15uw9e9
/r/LocalLLaMA/comments/15uw9e9/run_llama_70b_on_aws_notebook/
false
false
https://b.thumbs.redditm…Xn2QeRv0s5gU.jpg
1
null
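The utilization gap is expected: vLLM shards every layer across all eight GPUs (tensor parallelism), while transformers' device_map="auto" places different layers on different GPUs (pipeline parallelism), so only one GPU is busy at a time. If longer context is the main blocker, transformers 4.31+ exposes RoPE scaling directly; a sketch, with the scaling factor treated as a tunable assumption:

```python
# Sketch: Llama-2-70B spread over 8 GPUs with transformers' RoPE scaling.
# device_map="auto" is pipeline-parallel, so it will stay slower than vLLM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-70b-hf"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",                                # shard layers over GPUs
    rope_scaling={"type": "dynamic", "factor": 2.0},  # ~8k context; tune this
)

inputs = tok("A very long prompt goes here...", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tok.decode(out[0], skip_special_tokens=True))
```

text-generation-inference, which shards across GPUs with --num-shard and keeps them all busy, may also be worth a look, and it's worth re-checking newer vLLM releases for RoPE scaling support.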