title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns]) | url (string, 0-780 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns]) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646-1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213 chars) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, nullable) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Text web UI started outputting weird repeating messages. | 1 |
[WTF?](https://preview.redd.it/8ycjn2kdfbbb1.png?width=1182&format=png&auto=webp&v=enabled&s=b988bdd16f9acf0f89b1917c6254c350114fbd7d)
Hello everyone.
I've just installed oobabooga's web UI and loaded the model **TheBloke/Guanaco-33B-SuperHOT-8K-GPTQ**.
I loaded it with ExLlama, and when I try to write something, it goes really crazy and outputs meaningless text.
Why is this happening? | 2023-07-11T10:48:03 | https://www.reddit.com/r/LocalLLaMA/comments/14wnsuf/text_web_ui_started_outputting_weird_repeating/ | DaniyarQQQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wnsuf | false | null | t3_14wnsuf | /r/LocalLLaMA/comments/14wnsuf/text_web_ui_started_outputting_weird_repeating/ | false | false | 1 | null |
|
How to finetune LLaMA on my Mac M1 Pro? | 1 | Is it possible to do this with QLoRa? | 2023-07-11T13:47:40 | https://www.reddit.com/r/LocalLLaMA/comments/14wrwuu/how_to_finetune_llama_on_my_mac_m1_pro/ | Azure-Vision | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wrwuu | false | null | t3_14wrwuu | /r/LocalLLaMA/comments/14wrwuu/how_to_finetune_llama_on_my_mac_m1_pro/ | false | false | self | 1 | null |
AMD Radeon Pro V620 32GB VRAM Drivers | 1 | I picked up one of these due to the fact that it is basically a W6800 with 32GB VRAM and listed as officially supported by ROCm. The problem is getting the GIM driver that AMD requires to be able to use the card at all. Anyone get one of these going and have a line on the driver? I found an email address ([email protected]) that AMD lists in the help files to email to request the driver, but they never bothered to reply to the request. | 2023-07-11T14:05:42 | https://www.reddit.com/r/LocalLLaMA/comments/14wsdm8/amd_radeon_pro_v620_32gb_vram_drivers/ | ccbadd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wsdm8 | false | null | t3_14wsdm8 | /r/LocalLLaMA/comments/14wsdm8/amd_radeon_pro_v620_32gb_vram_drivers/ | false | false | self | 1 | null |
LoRA training error on WizardLM 1.0 7B | 1 | Hey y'all, I was wondering if I could get some help debugging an error I get when trying to train a LoRA on WizardLM 1.0 with a JSON dataset I made following the Alpaca format. I'm new to this whole thing, so I'm not 100% sure what's going wrong here. Any feedback would help. Here's the traceback from the cmd line:
File "E:\LLaMa\oobabooga_windows\installer_files\env\lib\site-packages\gradio\routes.py", line 427, in run_predict
output = await app.get_blocks().process_api(
File "E:\LLaMa\oobabooga_windows\installer_files\env\lib\site-packages\gradio\blocks.py", line 1323, in process_api
result = await self.call_function(
File "E:\LLaMa\oobabooga_windows\installer_files\env\lib\site-packages\gradio\blocks.py", line 1067, in call_function
prediction = await utils.async_iteration(iterator)
File "E:\LLaMa\oobabooga_windows\installer_files\env\lib\site-packages\gradio\utils.py", line 336, in async_iteration
return await iterator.__anext__()
File "E:\LLaMa\oobabooga_windows\installer_files\env\lib\site-packages\gradio\utils.py", line 329, in __anext__
return await anyio.to_thread.run_sync(
File "E:\LLaMa\oobabooga_windows\installer_files\env\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "E:\LLaMa\oobabooga_windows\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "E:\LLaMa\oobabooga_windows\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "E:\LLaMa\oobabooga_windows\installer_files\env\lib\site-packages\gradio\utils.py", line 312, in run_sync_iterator_async
return next(iterator)
File "E:\LLaMa\oobabooga_windows\text-generation-webui\modules\training.py", line 422, in do_train
train_data = data['train'].map(generate_and_tokenize_prompt, new_fingerprint='%030x' % random.randrange(16**30))
File "E:\LLaMa\oobabooga_windows\installer_files\env\lib\site-packages\datasets\arrow_dataset.py", line 580, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "E:\LLaMa\oobabooga_windows\installer_files\env\lib\site-packages\datasets\arrow_dataset.py", line 545, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "E:\LLaMa\oobabooga_windows\installer_files\env\lib\site-packages\datasets\arrow_dataset.py", line 3087, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "E:\LLaMa\oobabooga_windows\installer_files\env\lib\site-packages\datasets\arrow_dataset.py", line 3441, in _map_single
example = apply_function_on_filtered_inputs(example, i, offset=offset)
File "E:\LLaMa\oobabooga_windows\installer_files\env\lib\site-packages\datasets\arrow_dataset.py", line 3344, in apply_function_on_filtered_inputs
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
File "E:\LLaMa\oobabooga_windows\text-generation-webui\modules\training.py", line 417, in generate_and_tokenize_prompt
prompt = generate_prompt(data_point)
File "E:\LLaMa\oobabooga_windows\text-generation-webui\modules\training.py", line 409, in generate_prompt
if set(options.split(',')) == set(x[0] for x in data_point.items() if (x[1] is not None and len(x[1].strip()) > 0)):
File "E:\LLaMa\oobabooga_windows\text-generation-webui\modules\training.py", line 409, in <genexpr>
if set(options.split(',')) == set(x[0] for x in data_point.items() if (x[1] is not None and len(x[1].strip()) > 0)):
AttributeError: 'dict' object has no attribute 'strip'
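The `AttributeError: 'dict' object has no attribute 'strip'` at the bottom of this traceback usually means that at least one record in the JSON dataset has a nested object (a dict) where the Alpaca format expects a plain string for `instruction`, `input`, or `output`. A minimal sketch for checking (and optionally flattening) the file before training; the filename and field names are assumptions based on the standard Alpaca layout:

```
import json

# Assumed filename; point this at the Alpaca-format JSON used for the LoRA
DATASET = "train.json"

with open(DATASET, "r", encoding="utf-8") as f:
    records = json.load(f)

bad = []
for i, rec in enumerate(records):
    for key in ("instruction", "input", "output"):
        value = rec.get(key, "")
        if not isinstance(value, str):
            bad.append((i, key, type(value).__name__))
            # Best-effort fix: serialize the nested value so .strip() works
            rec[key] = json.dumps(value, ensure_ascii=False)

print(f"{len(bad)} non-string fields found, e.g.: {bad[:5]}")

if bad:
    with open("train_fixed.json", "w", encoding="utf-8") as f:
        json.dump(records, f, ensure_ascii=False, indent=2)
```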
​ | 2023-07-11T14:11:48 | https://www.reddit.com/r/LocalLLaMA/comments/14wsj2y/lora_training_error_on_wizardlm_10_7b/ | GreedyLilGobblin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wsj2y | false | null | t3_14wsj2y | /r/LocalLLaMA/comments/14wsj2y/lora_training_error_on_wizardlm_10_7b/ | false | false | self | 1 | null |
ChatGPT code interpreter sandbox files | 1 | The readme located in /home/sandbox suggests that all of these files were meant to be seen by the user so here you go
https://filebin.net/rs90ygmtmfer3p7d | 2023-07-11T14:51:07 | https://www.reddit.com/gallery/14wthla | Basic_Description_56 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 14wthla | false | null | t3_14wthla | /r/LocalLLaMA/comments/14wthla/chatgpt_code_interpreter_sandbox_files/ | false | false | 1 | null |
|
AWS advice | 1 | A client of mine has an AWS account with free credits to use up.
They have a lot of documentation (PDFs, web sites, etc), and I thought it might be a good idea to set up an AI chat, so that they can query it.
Can anyone give me some pointers as to what I can set up in AWS to give the best experience without using any external paid resource? | 2023-07-11T15:11:40 | https://www.reddit.com/r/LocalLLaMA/comments/14wu0fb/aws_advice/ | FlappySocks | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wu0fb | false | null | t3_14wu0fb | /r/LocalLLaMA/comments/14wu0fb/aws_advice/ | false | false | self | 1 | null |
Do we know GPT-4 methodology for vision on AP exams? | 1 | [https://arxiv.org/pdf/2303.08774v3.pdf](https://arxiv.org/pdf/2303.08774v3.pdf)
https://preview.redd.it/36ejfkqnscbb1.png?width=1770&format=png&auto=webp&s=5c482e0dbae0e6a67d01ad090cd4fd9674d9f348
​
OpenAI reports results on AP exams, many of which have visual questions like Chemistry and Biology.
However, the report does not mention how they presented this visual data to the LLM.
Does anyone know if this was later published or if there is another SOTA approach?
​
​ | 2023-07-11T15:22:37 | https://www.reddit.com/r/LocalLLaMA/comments/14wuaae/do_we_know_gpt4_methodology_for_vision_on_ap_exams/ | FreezeproofViola | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wuaae | false | null | t3_14wuaae | /r/LocalLLaMA/comments/14wuaae/do_we_know_gpt4_methodology_for_vision_on_ap_exams/ | false | false | 1 | null |
|
INT-FP-QSim: Simulating LLMs and vision transformers in different precisions and formats | 1 | [removed] | 2023-07-11T15:27:55 | https://www.reddit.com/r/LocalLLaMA/comments/14wuf7n/intfpqsim_simulating_llms_and_vision_transformers/ | IllustriousSir_007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wuf7n | false | null | t3_14wuf7n | /r/LocalLLaMA/comments/14wuf7n/intfpqsim_simulating_llms_and_vision_transformers/ | false | false | 1 | null |
|
Signup for Claude 2 now open and public - How do you feel it compares to LLMs? | 1 | https://claude.ai | 2023-07-11T15:45:15 | https://www.reddit.com/r/LocalLLaMA/comments/14wuvts/signup_for_claude_2_now_open_and_public_how_do/ | MoneroBee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wuvts | false | null | t3_14wuvts | /r/LocalLLaMA/comments/14wuvts/signup_for_claude_2_now_open_and_public_how_do/ | false | false | self | 1 | null |
Claude 2 Released - How do you feel it compares to LLMs? | 1 | Registration is currently open: https://claude.ai | 2023-07-11T15:55:28 | https://www.reddit.com/r/LocalLLaMA/comments/14wv5sw/claude_2_released_how_do_you_feel_it_compares_to/ | MoneroBee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wv5sw | false | null | t3_14wv5sw | /r/LocalLLaMA/comments/14wv5sw/claude_2_released_how_do_you_feel_it_compares_to/ | false | false | self | 1 | null |
Are you sure you are focusing on the right things? (venting) | 1 | When I started reading about LLMs a few months ago, I was really excited by all the new possibilities they offered and all the cool stuff I would be able to create but I was quickly disappointed.
Not because of the models or the context length but just because of the tools. Each one of them was a nightmare to understand and use. Llama-cpp-Python, text generation webui, guidance... Each time it was hours if not days of debugging before getting my first answer.
I'm far from being an expert in LLMs or Python, it's not my job at all, it's a hobby and I want to create things, not just spend hours to install a library. I don't know how you all see the future of local LLMs but the fact that there is no "simple way" to run things like for all the other Python libraries is clearly a barrier that will prevent a lot of people from getting interested in the subject.
For now, I'm tired. I think I'll come back in a year or two when the tools are ready but please in the future, think more about beginners. They also want to help, but they can't.
I wish you good luck everyone! Maybe we'll meet again one day. | 2023-07-11T16:33:59 | https://www.reddit.com/r/LocalLLaMA/comments/14ww7fu/are_you_sure_you_are_focusing_on_the_right_things/ | Zeink303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ww7fu | false | null | t3_14ww7fu | /r/LocalLLaMA/comments/14ww7fu/are_you_sure_you_are_focusing_on_the_right_things/ | false | false | self | 1 | null |
[Mac M1 Max] Every token takes several minutes to complete | 1 | Hey, I am trying to get text generation webui running on my MacBook Pro M1 Max with 32GB. The installation was quite straightforward and everything seems to work. But regardless of which model I use (I tried Vicuna 13B/7B, Facebook 1.3B, UltraLM 13B and a couple of others) it takes several minutes to complete one token. I cannot figure out what the bottleneck is.
If I change the model the shards are collected in a couple of seconds but loading the model takes about 80 sec, I do not know if this is normal but I can live with that.
I also tried --cpu and --mps but they do not seem to have any effect on generation speed.
Since I read that people have working installation with feasible generation times I wonder if anyone knows what the bottleneck could be? If you need any information from me to evaluate my situation, I will provide anything I have.
Thank you all in advance! | 2023-07-11T16:38:22 | https://www.reddit.com/r/LocalLLaMA/comments/14wwbr8/mac_m1_max_every_tokens_takes_several_minutes_to/ | EconomyWorldliness67 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wwbr8 | false | null | t3_14wwbr8 | /r/LocalLLaMA/comments/14wwbr8/mac_m1_max_every_tokens_takes_several_minutes_to/ | false | false | self | 1 | null |
Comparison of companies that allow you to create API endpoints for open source models? | 1 | Hello! I'm creating a chatbot frontend app that will make POST requests to LLM API endpoints. This is of course easy to do with OpenAI and the other commercial LLMs, but I want to also be able to create endpoints for some of my favorite open source models, like [TheBloke/open-llama-7B-v2-open-instruct-GPTQ](https://huggingface.co/TheBloke/open-llama-7B-v2-open-instruct-GPTQ).
So far I've been accessing these GPTQ models via RunPod with Textgen Web UI (which is great!), but that requires running a cloud GPU and paying for the whole time the GPU is running, which doesn't work for an API that I'll be using sporadically every day (I can't afford to have it up and running for hours each day).
I believe the solution is to set up a serverless endpoint to host the model and pay per second for each request? Has anyone done a comparison of companies offering this type of service? I'm not even sure how to quantify it (average per-request time for a chatbot), but maybe two to three hours of request time per day? What would that even cost...
I'd love to put together a csv that compares prices and features for this specific use case but I'm finding it kind of hard to research. Can anyone with any experience with setting up their own endpoints for open source models let me know if you have any thoughts/advice to share? Thanks! | 2023-07-11T16:38:40 | https://www.reddit.com/r/LocalLLaMA/comments/14wwc0b/comparison_of_companies_that_allow_you_to_create/ | jacobgolden | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wwc0b | false | null | t3_14wwc0b | /r/LocalLLaMA/comments/14wwc0b/comparison_of_companies_that_allow_you_to_create/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'u-FsYyHhg2fsjz3OhnZndlALNAkH1vvabia79wl-x-s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/4Tc-4Ee8UL3iBANOMGSOVQuVCN3_elr-vCcEFAwrKiM.jpg?width=108&crop=smart&auto=webp&s=b4e5cf42e7378f7ccabf8208c5aa362f8c530424', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/4Tc-4Ee8UL3iBANOMGSOVQuVCN3_elr-vCcEFAwrKiM.jpg?width=216&crop=smart&auto=webp&s=60216bceb665f6da160a0fc076545f8fa1c7281f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/4Tc-4Ee8UL3iBANOMGSOVQuVCN3_elr-vCcEFAwrKiM.jpg?width=320&crop=smart&auto=webp&s=01f68466c2044048f5c5dd76a291964c9788213e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/4Tc-4Ee8UL3iBANOMGSOVQuVCN3_elr-vCcEFAwrKiM.jpg?width=640&crop=smart&auto=webp&s=f452e6ff3e0a8e716653a1fe98d1643b1bec25fc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/4Tc-4Ee8UL3iBANOMGSOVQuVCN3_elr-vCcEFAwrKiM.jpg?width=960&crop=smart&auto=webp&s=b1ab04c9f2246f6ba400d03b9d47ff360c2f5dd9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/4Tc-4Ee8UL3iBANOMGSOVQuVCN3_elr-vCcEFAwrKiM.jpg?width=1080&crop=smart&auto=webp&s=553933f7db653e99d9ded2d254cbd7aadcba4b24', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/4Tc-4Ee8UL3iBANOMGSOVQuVCN3_elr-vCcEFAwrKiM.jpg?auto=webp&s=55f9a8f0ef5c2fd24faf0db5c9c70700c3ebc1fc', 'width': 1200}, 'variants': {}}]} |
Open source chatbot toolkit w/ LLaMa models | 1 | [https://github.com/a16z-infra/companion-app](https://github.com/a16z-infra/companion-app)
New open source project that makes it easy to build your own chatbot and use models like Vicuna. It has memory, conversation history, personality - and you can even text with your bot. | 2023-07-11T17:21:13 | https://www.reddit.com/r/LocalLLaMA/comments/14wxh4w/open_source_chatbot_toolkit_w_llama_models/ | No-Negotiation9226 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wxh4w | false | null | t3_14wxh4w | /r/LocalLLaMA/comments/14wxh4w/open_source_chatbot_toolkit_w_llama_models/ | false | false | self | 1 | null |
I'm completely lost here - how do I install Wizard LM using Python for GPU inference? | 1 | OK, I hope this isn't a low-effort post, but I am completely lost about how to do the installation. Now, in fairness, I am familiar with Huggingface Transformers but I've never run models from weights before. I have a GPU that can run 13B models easily, but I'm lost as to what to do. I'm trying to install the new WizardLM 13B V1.1 and... I have no idea what to do. I can download the full weights from various places, but then what? How do I load the model, infer with it, and ensure it goes to the GPU? I feel like I'm completely missing something - most guides I've found use the web UI but don't seem to work with Python, or I'm completely dumb. | 2023-07-11T17:38:37 | https://www.reddit.com/r/LocalLLaMA/comments/14wxxnj/im_completely_lost_here_how_do_i_install_wizard/ | crono760 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wxxnj | false | null | t3_14wxxnj | /r/LocalLLaMA/comments/14wxxnj/im_completely_lost_here_how_do_i_install_wizard/ | false | false | self | 1 | null |
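For the plain-Python route the post asks about, loading full weights with Transformers and generating on GPU is a short script. This is a minimal sketch: the repo id is an assumption (a local folder with the downloaded weights works the same way), `device_map="auto"` needs the `accelerate` package, and the Vicuna-style prompt format shown is just an example:

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id; substitute whichever WizardLM 1.1 upload or local path you have
model_id = "WizardLM/WizardLM-13B-V1.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # fp16 weights need roughly 26 GB of VRAM for a 13B model
    device_map="auto",           # places layers on available GPUs (requires accelerate)
    # load_in_8bit=True,         # uncomment (requires bitsandbytes) if fp16 doesn't fit
)

prompt = "You are a helpful assistant.\nUSER: What is the capital of France?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```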
What Is the Best Uncensored Model for GPT4ALL | 1 | [removed] | 2023-07-11T18:26:39 | https://www.reddit.com/r/LocalLLaMA/comments/14wz8mu/what_is_the_best_uncensored_model_for_gpt4all/ | fnhoes0902 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wz8mu | false | null | t3_14wz8mu | /r/LocalLLaMA/comments/14wz8mu/what_is_the_best_uncensored_model_for_gpt4all/ | false | false | self | 1 | null |
Oobabooga vs LocalAI | 1 | Hello!
Basically the title, I would like to know more about the differences between the two solutions. I know they allow you to download and run OSS LLMs locally and that Oobabooga provides the user with a gradio-based interface, but other than that I am still in the dark about the rest and would appreciate it if someone could clarify this for me. | 2023-07-11T18:50:11 | https://www.reddit.com/r/LocalLLaMA/comments/14wzul0/oobabooga_vs_localai/ | meteogold | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wzul0 | false | null | t3_14wzul0 | /r/LocalLLaMA/comments/14wzul0/oobabooga_vs_localai/ | false | false | self | 1 | null |
I can't make a compact GPT4All app on my Mac | 1 | I have a Mac (mid-2011, not M2) and I installed the chat app from the .dmg on the website. After the installation, it just creates a file which contains the elements of GPT4All, not the app.
Any idea? | 2023-07-11T19:46:18 | https://www.reddit.com/r/LocalLLaMA/comments/14x1db2/i_cant_make_a_compact_gpt4all_app_in_mac/ | orestaras | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14x1db2 | false | null | t3_14x1db2 | /r/LocalLLaMA/comments/14x1db2/i_cant_make_a_compact_gpt4all_app_in_mac/ | false | false | self | 1 | null |
Learning App | 1 | The idea is pretty simple, but I have no idea if I'm able to implement it with a little bit of programming experience (advanced beginner in C and Python).
The app should do the following
1) When I learn something, it should be able to read the resource. Starting with PDFs, then other document types, then also audio files and finally videos.
2) From this input it should generate questions and quiz the user. From the answers it should be able to figure out which topics still need further improvement and with a spaced repetition algorithm it should bring up the questions in an appropriate timeframe again.
3) All the input should be searchable. So once the user read about algorithms, when looking for algorithms it should display all resources that have been used so far.
Would anyone care to elaborate on how this app could be realized, and whether it's even a feasible task for one person with only limited skills in programming and LLMs? | 2023-07-11T20:22:30 | https://www.reddit.com/r/LocalLLaMA/comments/14x2cf2/learning_app/ | RotjeCore | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14x2cf2 | false | null | t3_14x2cf2 | /r/LocalLLaMA/comments/14x2cf2/learning_app/ | false | false | self | 1 | null |
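The spaced-repetition step mentioned in point 2 is one of the more mechanical parts of such an app. The classic SM-2 update (the schedule behind early SuperMemo/Anki) is small enough to sketch; this is a generic illustration, not tied to any particular library:

```
def sm2_update(quality: int, repetitions: int, interval: int, ease: float):
    """One review step of the classic SM-2 schedule (quality: 0 worst .. 5 best).
    Returns the new (repetitions, interval_in_days, ease_factor)."""
    if quality < 3:                      # failed recall: restart the card
        return 0, 1, ease
    if repetitions == 0:
        interval = 1
    elif repetitions == 1:
        interval = 6
    else:
        interval = round(interval * ease)
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return repetitions + 1, interval, ease

# Example: a brand-new card answered fairly well (quality 4)
reps, days, ease = sm2_update(4, 0, 0, 2.5)   # -> review again in 1 day
```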
Database Tool for Semantic Search | 1 | [removed] | 2023-07-11T20:25:23 | https://www.reddit.com/r/LocalLLaMA/comments/14x2f7c/database_tool_for_semantic_search/ | Important-Sun-3562 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14x2f7c | false | null | t3_14x2f7c | /r/LocalLLaMA/comments/14x2f7c/database_tool_for_semantic_search/ | false | false | self | 1 | null |
Creating a new Finetuned model | 1 | So, I started on this a while ago to create a LoRA finetune on specific domain data to enrich the model's knowledge of it. It's similar to saying that I want the model to be better at Python coding than any existing model (just an example).
I gathered about 35k samples (in essence 4M tokens), with quality between average and good, and trained it for multiple epochs. Honestly, the result was a bit surprising.
There was repetition in the answers: if I ask a question, at times it answers gibberish, and it looks like the English learnt by the original model was destroyed.
Hallucination increased at an alarming rate once I did the finetuning.
With limited training, the answers were changed drastically. Is this supposed to be the usual case? Do you think there could be a specific reason for this? Or is the approach outdated? (Wouldn't be surprised with the speed the research is moving) | 2023-07-11T20:36:42 | https://www.reddit.com/r/LocalLLaMA/comments/14x2qes/creating_a_new_finetuned_model/ | Randomgod101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14x2qes | false | null | t3_14x2qes | /r/LocalLLaMA/comments/14x2qes/creating_a_new_finetuned_model/ | false | false | self | 1 | null |
Disallow tokens for simplistic language | 1 | I have been playing around with exllama and noticed that you can disallow certain tokens from being generated. In the example files this is used to force a longer response by disallowing end of sequence tokens for a certain amount of tokens.
At the moment I want to create a bot which uses extremely simplistic language. I was wondering if this could be achieved by using the disallow-tokens feature to simply disallow all tokens of more complex words (several thousand tokens). Has anybody tried something similar already? | 2023-07-11T21:04:35 | https://www.reddit.com/r/LocalLLaMA/comments/14x3h79/disallow_tokens_for_simplistic_language/ | TheNewSecret315 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14x3h79 | false | null | t3_14x3h79 | /r/LocalLLaMA/comments/14x3h79/disallow_tokens_for_simplistic_language/ | false | false | self | 1 | null |
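The same idea can be sketched without ExLlama: Hugging Face Transformers' `generate()` accepts a `bad_words_ids` argument that bans whole token sequences, which is banning tokens in spirit even if the mechanics differ from ExLlama's hook. The model id and word list below are placeholders, and a real "complex word" list would be far longer:

```
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openlm-research/open_llama_7b"   # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

complex_words = ["utilize", "facilitate", "notwithstanding"]   # could be thousands
# Each banned word becomes a sequence of token ids; the leading space matters
# for how most subword vocabularies tokenize words mid-sentence.
bad_words_ids = tokenizer([" " + w for w in complex_words],
                          add_special_tokens=False).input_ids

inputs = tokenizer("Explain photosynthesis in simple words.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, bad_words_ids=bad_words_ids)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```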
Improving vector search by converting documents to question/answer pairs | 1 | 2023-07-11T22:12:09 | https://www.psychic.dev/post/improving-vector-search-results-by-converting-documents-to-question-answer-pairs | valjestir | psychic.dev | 1970-01-01T00:00:00 | 0 | {} | 14x5ait | false | null | t3_14x5ait | /r/LocalLLaMA/comments/14x5ait/improving_vector_search_by_converting_documents/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'RevJNMj984CUvYJtdgOVExkve6bG3X_JfPLlvgYgxmk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/DiHLawuxuZt9v2eHoOQ0Z5Ov126UsDMDh3v7SnRevnU.jpg?width=108&crop=smart&auto=webp&s=8bf0ddd2facf9b07252917e86f816bc35ceaf14f', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/DiHLawuxuZt9v2eHoOQ0Z5Ov126UsDMDh3v7SnRevnU.jpg?width=216&crop=smart&auto=webp&s=8a22b54b1bd7740910e8088a8901c8e014bcb600', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/DiHLawuxuZt9v2eHoOQ0Z5Ov126UsDMDh3v7SnRevnU.jpg?width=320&crop=smart&auto=webp&s=9c9a6953f2c202d4d18f4475e4ea80571c1f8009', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/DiHLawuxuZt9v2eHoOQ0Z5Ov126UsDMDh3v7SnRevnU.jpg?width=640&crop=smart&auto=webp&s=fc9357a19d174fde654887a7d793e1369ff005e2', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/DiHLawuxuZt9v2eHoOQ0Z5Ov126UsDMDh3v7SnRevnU.jpg?width=960&crop=smart&auto=webp&s=6ed4bb75651e4bedc131c83f9ee6715ee06a568b', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/DiHLawuxuZt9v2eHoOQ0Z5Ov126UsDMDh3v7SnRevnU.jpg?auto=webp&s=48399a13d0d2d51ed1021a5611ec64b385ad16eb', 'width': 1024}, 'variants': {}}]} |
||
How to append encoders to existing LLM? | 1 | [removed] | 2023-07-11T22:33:56 | https://www.reddit.com/r/LocalLLaMA/comments/14x5u0t/how_to_append_encoders_to_existing_llm/ | cstein123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14x5u0t | false | null | t3_14x5u0t | /r/LocalLLaMA/comments/14x5u0t/how_to_append_encoders_to_existing_llm/ | false | false | self | 1 | null |
Can I train an AI to respond like a person, using the Messenger chat between us as the source? | 1 | I don't know much about AI (I'm an economist), but I am very interested in large language models, so maybe this question is stupid.
If I have roughly 10 years of everyday chat records with someone, can I make a chatbot trained to respond like him, using our conversations as the source? I am sure I am not the first one to think of it, but I find it strange that it isn't already a thing. | 2023-07-11T23:01:40 | https://www.reddit.com/r/LocalLLaMA/comments/14x6j3h/can_i_train_ai_to_response_like_a_person_adding/ | orestaras | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14x6j3h | false | null | t3_14x6j3h | /r/LocalLLaMA/comments/14x6j3h/can_i_train_ai_to_response_like_a_person_adding/ | false | false | self | 1 | null |
Haven v0.2.0 - Adding support for fine-tuned models | 28 | Hey! A bit more than a week ago [I wrote a post on a tool that a friend and I built to host LLMs on Google Cloud](https://www.reddit.com/r/LocalLLaMA/comments/14rqyx8/tool_for_deploying_open_source_llms_on_your_own/). We got a lot of feedback and just released our second version!
The repository: [https://github.com/havenhq/haven](https://github.com/havenhq/haven)
This is what has changed:
* We added support for mpt-30!
* You can now add any fine-tuned model to Haven if we support the base-model. [Here is a list of all supported base-models](https://docs.haven.run/models) which includes llama-7b, llama-13b, mpt-7b, mpt-30b and more
* We now support T4 GPUs which are much cheaper than the A100s (Google Cloud charges by the minute so you can spin resources up and down whenever you like)
* We added a pure text-completion endpoint (before we only had chat-completion), so you can now run non-chat models and have finer control over the prompt format.
* We also fixed a bunch of problems that people ran into and made the setup a little bit easier (there is still a long way to go here so bear with us lol)
* If you’re only interested in the orchestration and not in our LLM code you can add your own worker docker-image. This is still super experimental so shoot me a message if you want to do this and I’m happy to chat.
Super interested to hear what you think!
We also created a little Google Colab demo which points at a Haven deployment that we run ourselves. You can send it some prompts and see how it works.
[https://colab.research.google.com/drive/1eGGSisS9Du5-\_KcaejY5y9vk9v7EIfba?usp=sharing](https://colab.research.google.com/drive/1eGGSisS9Du5-_KcaejY5y9vk9v7EIfba?usp=sharing)
Thank you to everyone that started using this and for all the feedback we received!
This is the repository: [https://github.com/havenhq/haven](https://github.com/havenhq/haven) | 2023-07-12T00:18:55 | https://www.reddit.com/r/LocalLLaMA/comments/14x8dsd/haven_v020_adding_support_for_finetuned_models/ | h-konsti | self.LocalLLaMA | 2023-07-12T00:25:44 | 0 | {} | 14x8dsd | false | null | t3_14x8dsd | /r/LocalLLaMA/comments/14x8dsd/haven_v020_adding_support_for_finetuned_models/ | false | false | self | 28 | {'enabled': False, 'images': [{'id': 'AmmG9EIuuQJAm-fNgSRfIKaL9Yi3ykpeidYk8XJ44e4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Dl9R6MhHOdidwMetYd_SPTlSQZ5_PAMuB0yo-XlwpY4.jpg?width=108&crop=smart&auto=webp&s=3690d27109dd23d1f6b0936f0c588864a733205c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Dl9R6MhHOdidwMetYd_SPTlSQZ5_PAMuB0yo-XlwpY4.jpg?width=216&crop=smart&auto=webp&s=9a51fa29c60b254211054071ac8f83ea4ca8cd0e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Dl9R6MhHOdidwMetYd_SPTlSQZ5_PAMuB0yo-XlwpY4.jpg?width=320&crop=smart&auto=webp&s=e164cf8026e82aefc7b5a092425a8d6b41bf353d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Dl9R6MhHOdidwMetYd_SPTlSQZ5_PAMuB0yo-XlwpY4.jpg?width=640&crop=smart&auto=webp&s=83d92f354c95042272c51060defd459dffd2d08d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Dl9R6MhHOdidwMetYd_SPTlSQZ5_PAMuB0yo-XlwpY4.jpg?width=960&crop=smart&auto=webp&s=79e8d69bbfd699d9fa665db1db131490485432f7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Dl9R6MhHOdidwMetYd_SPTlSQZ5_PAMuB0yo-XlwpY4.jpg?width=1080&crop=smart&auto=webp&s=39b9ffc1b4c91b16393b35a9ebf191ad3ea36126', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Dl9R6MhHOdidwMetYd_SPTlSQZ5_PAMuB0yo-XlwpY4.jpg?auto=webp&s=45690a1128d1ab1545f2b5bd087b9c19e5fc6682', 'width': 1200}, 'variants': {}}]} |
Looking for appealing and offline frontend UI for privateGPT running Vicuna 7b | 1 | Hey all, like the title states, I am wondering if there is a readily accessible frontend GUI I can use with privateGPT but maintain the completely offline aspect of it. Right now, I built a simple gradio locally hosted question and answer page, but it is painfully simple and frankly ugly. The two most important things are that it can work with the model that I already have and that it can run offline. I apologize if this is a dumb question, I am still new to this. Any suggestions? | 2023-07-12T00:29:26 | https://www.reddit.com/r/LocalLLaMA/comments/14x8m61/looking_for_appealing_and_offline_frontend_ui_for/ | billnyethesciencebi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14x8m61 | false | null | t3_14x8m61 | /r/LocalLLaMA/comments/14x8m61/looking_for_appealing_and_offline_frontend_ui_for/ | false | false | self | 1 | null |
Outputting attention for HF Llama models | 1 | So if you have a LlamaForCausalLM model from HF, you can get attention from a single forward call by doing
model.forward(inputs.input_ids, output_attentions=True)
However, it doesn't seem like that carries over if you use model.generate(). If you just try
model.generate(inputs.input_ids, max_new_tokens=30, output_attentions=True)
It doesn't return anything other the output tensor.
Is there any way to implement returning the attention weights during each step of the generation from model.generate? The docs seem to imply so but I can't tell. | 2023-07-12T01:30:46 | https://www.reddit.com/r/LocalLLaMA/comments/14x9zrd/outputting_attention_for_hf_llama_models/ | Blergzor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14x9zrd | false | null | t3_14x9zrd | /r/LocalLLaMA/comments/14x9zrd/outputting_attention_for_hf_llama_models/ | false | false | self | 1 | null |
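For reference, the usual way to get this out of `generate()` is to also pass `return_dict_in_generate=True`, so it returns a structured output instead of a bare tensor. Continuing from the snippet above (so `model` and `inputs` are assumed to exist as in the post), a minimal sketch:

```
outputs = model.generate(
    inputs.input_ids,
    max_new_tokens=30,
    output_attentions=True,
    return_dict_in_generate=True,
)
# outputs.sequences holds the generated token ids.
# outputs.attentions is a tuple with one entry per generated token;
# each entry is a tuple over layers of attention tensors, roughly
# (batch, heads, query_len, key_len), with query_len == 1 for the
# steps that reuse the KV cache.
print(len(outputs.attentions), len(outputs.attentions[0]))
print(outputs.attentions[0][0].shape)
```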
airoboros-33B reminds me of why I first fell in love with ChatGPT | 1 | 2023-07-12T03:20:43 | zach_will | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14xch59 | false | null | t3_14xch59 | /r/LocalLLaMA/comments/14xch59/airoboros33b_reminds_me_of_why_i_first_fell_in/ | false | false | 1 | {'enabled': True, 'images': [{'id': '-n_yRVQHZbr3aDojYt__3W84l3981idnax70KbMmZxM', 'resolutions': [{'height': 116, 'url': 'https://preview.redd.it/90of5tepcgbb1.png?width=108&crop=smart&auto=webp&s=6adfb918f8b88ff6aad307580d86f1f4e169ee7b', 'width': 108}, {'height': 232, 'url': 'https://preview.redd.it/90of5tepcgbb1.png?width=216&crop=smart&auto=webp&s=6a5593a566a9def2a4d688ca5560238554982783', 'width': 216}, {'height': 344, 'url': 'https://preview.redd.it/90of5tepcgbb1.png?width=320&crop=smart&auto=webp&s=dc14c6497ef0d00ae2d42d4dcec4c35fd8efa6cb', 'width': 320}, {'height': 689, 'url': 'https://preview.redd.it/90of5tepcgbb1.png?width=640&crop=smart&auto=webp&s=649510cab3d568e28595df4c0c82bd73e3163823', 'width': 640}, {'height': 1034, 'url': 'https://preview.redd.it/90of5tepcgbb1.png?width=960&crop=smart&auto=webp&s=d1897460ee3a1aae5ee456d84747a0f57df83a39', 'width': 960}, {'height': 1163, 'url': 'https://preview.redd.it/90of5tepcgbb1.png?width=1080&crop=smart&auto=webp&s=38a3867034098ecef4372a483517bde0b72bde2e', 'width': 1080}], 'source': {'height': 1560, 'url': 'https://preview.redd.it/90of5tepcgbb1.png?auto=webp&s=e172df729855108399f39afb871763c1d2d1ecba', 'width': 1448}, 'variants': {}}]} |
|||
What’s the difference | 1 | [removed] | 2023-07-12T03:51:59 | https://www.reddit.com/r/LocalLLaMA/comments/14xd5kk/whats_the_difference/ | waltercrypto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xd5kk | false | null | t3_14xd5kk | /r/LocalLLaMA/comments/14xd5kk/whats_the_difference/ | false | false | default | 1 | null |
Really struggling with open source models | 1 |
Hey all. I'm a designer who's been dabbling in integrating AI tools into our team at work. I'm a bit out of my depth and looking for help. Although I'm not a programmer, I've been trying my hand at various open source models on my M1 MacBook, after initially working with GPT-4.
I've tried textgen-web-UI, GPT4ALL, among others, but usually encounter challenges when loading or running the models, or navigating GitHub to make them work.
I've been seeking help via forums and GPT-4, but am still finding it hard to gain a solid footing. My goal is to run a proficient model on my Mac to generate text for work without compromising sensitive information to third parties like OpenAI or Google.
I'm enthusiastic about these developments and eager to be part of them. I appreciate any advice you can offer. Thanks! | 2023-07-12T06:46:38 | https://www.reddit.com/r/LocalLLaMA/comments/14xghbd/really_struggling_with_open_source_models/ | Drake_psd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xghbd | false | null | t3_14xghbd | /r/LocalLLaMA/comments/14xghbd/really_struggling_with_open_source_models/ | false | false | self | 1 | null |
2020 MBA M1 any good for LLM? | 1 | I notice the 2020 Macbook Air M1 is on sale on Amazon for Prime Day.
I've had little success using AMD GPUs effectively with either llama.cpp or PyTorch (I got ROCm working, but 2GB of VRAM isn't enough).
Was wondering whether this Mac would be better or if the memory is still too low (8GB shared RAM/VRAM)? | 2023-07-12T08:48:09 | https://www.reddit.com/r/LocalLLaMA/comments/14ximyq/2020_mba_m1_any_good_for_llm/ | steve_mynott | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ximyq | false | null | t3_14ximyq | /r/LocalLLaMA/comments/14ximyq/2020_mba_m1_any_good_for_llm/ | false | false | self | 1 | null |
Is there any relative word like 'concept' in LLM? | 1 | 'concept' is famous in Stable Diffusion field, but I have never heard any concept like 'concept' in LLM field. If there is no 'concept' in LLM, so how do people describe LoRA or other adapters? | 2023-07-12T09:22:29 | https://www.reddit.com/r/LocalLLaMA/comments/14xjc54/is_there_any_relative_word_like_concept_in_llm/ | Sad-Lengthiness-6938 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xjc54 | false | null | t3_14xjc54 | /r/LocalLLaMA/comments/14xjc54/is_there_any_relative_word_like_concept_in_llm/ | false | false | self | 1 | null |
Best way to upgrade to run 65B? | 1 | [removed] | 2023-07-12T09:45:21 | https://www.reddit.com/r/LocalLLaMA/comments/14xjrx0/best_way_to_upgrade_to_run_65b/ | Necessary_Ad_9800 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xjrx0 | false | null | t3_14xjrx0 | /r/LocalLLaMA/comments/14xjrx0/best_way_to_upgrade_to_run_65b/ | false | false | self | 1 | null |
How to visualise LLMs ? | 1 | May be a noob question but, let’s say If I give a prompt to an LLM and it predicts token by token and gives out worlds (tokens) how to visualise the whole functioning pipeline that predicts the next token ? Has there been any research around this with Llama models ? | 2023-07-12T09:51:25 | https://www.reddit.com/r/LocalLLaMA/comments/14xjw3s/how_to_visualise_llms/ | Jealous_Ad4067 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xjw3s | false | null | t3_14xjw3s | /r/LocalLLaMA/comments/14xjw3s/how_to_visualise_llms/ | false | false | self | 1 | null |
Why does GPT4all respond so slowly on my machine? | 1 | It takes anywhere from 3 to 8 minutes to complete a response on my machine. Specs below. I'm just curious if it's the age of the machine, or if it's software inefficiencies that are causing this long wait time for responses to my prompts. Even when it's finished processing, the words appear very slowly.
For example, it's so slow that even a long word will be posted slowly. So it won't just post the word "conversation" it will go "con... ... ver ... sat ... ... ion"
I'm using GPT4all 'Hermes' and the latest Falcon 10. The OS is Arch Linux, and the hardware is a 10 year old Intel I5 3550, 16Gb of DDR3 RAM, a sATA SSD, and an AMD RX-560 video card. All pretty old stuff.
Would upgrading to a higher end computer from 2023 help much? | 2023-07-12T10:48:45 | https://www.reddit.com/r/LocalLLaMA/comments/14xl12f/why_does_gpt4all_respond_so_slowly_on_my_machine/ | rondonjohnald | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xl12f | false | null | t3_14xl12f | /r/LocalLLaMA/comments/14xl12f/why_does_gpt4all_respond_so_slowly_on_my_machine/ | false | false | self | 1 | null |
Any idea how to deploy a LocalLlm (GPT4ALL) ? Maybe on a web domain or as a chatbot embed on a website | 1 | I've made an llm bot using one of the commercially licensed gpt4all models and streamlit but I was wondering if I could somehow deploy the webapp? Like running the model on my cpu/gpu but sending/receiving the prompts and outputs through a webpage. will I need to make my own API for this? | 2023-07-12T11:29:26 | https://www.reddit.com/r/LocalLLaMA/comments/14xlwkc/any_idea_how_to_deploy_a_localllm_gpt4all_maybe/ | Shoddy_Champion9023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xlwkc | false | null | t3_14xlwkc | /r/LocalLLaMA/comments/14xlwkc/any_idea_how_to_deploy_a_localllm_gpt4all_maybe/ | false | false | self | 1 | null |
Recent updates on the LLM Explorer (15,000+ LLMs listed) | 1 | Hi All!
I'd like to share the recent updates to LLM Explorer ([https://llm.extractum.io](https://llm.extractum.io)), which I announced a few weeks ago. I've implemented a bunch of new stuff and enhancements since that:
* Over 15,000 LLMs in the database, with all the latest ones from HuggingFace and their internals (all properties are visible on a separate "model details" page).
* Omni-search box and multi-column filters to refine your search.
* A fast filter for uncensored models, GGML support, commercial usage, and more. Simply click to generate the list, and then filter or sort the results as needed.
* A sorting feature by the number of "likes" and "downloads", so you can opt for the most popular ones. The HF Leaderboard score is also included.
Planned enhancements include:
* Showing the file size (to gauge the RAM needed for inference).
* Providing a list of agents that support the model based on the architecture, along with compatibility for Cuda/Metal/etc.
* If achievable, we plan to verify if the model is compatible with specific CPU/RAM resources available for inference. We suspect there's a correlation between the RAM needed and the size of the model files. But your ideas are always welcome.
I'd love to know if the loading time of the main page is problematic for you, as it currently takes about 5 seconds to load and render the table with 15K models. If it is, I will consider redesigning it to load data in chunks.
I value all feedback, bug reports, and ideas about the service. So, please let me know your thoughts!
[https://llm.extractum.io](https://llm.extractum.io) | 2023-07-12T11:56:12 | https://www.reddit.com/r/LocalLLaMA/comments/14xmgzg/recent_updates_on_the_llm_explorer_15000_llms/ | Greg_Z_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xmgzg | false | null | t3_14xmgzg | /r/LocalLLaMA/comments/14xmgzg/recent_updates_on_the_llm_explorer_15000_llms/ | false | false | self | 1 | null |
Follow up to: "Who of them is older?" | 1 | I was quite intrigued by the original question: [https://www.reddit.com/r/LocalLLaMA/comments/14uujna/which\_model\_can\_answer\_this\_question\_correctly/](https://www.reddit.com/r/LocalLLaMA/comments/14uujna/which_model_can_answer_this_question_correctly/)
I'm more interested in the prompt engineering that helps the LLM to solve the problem and not saying: "it works or doesn't work". For example when using recommended prompt for Orca:
Version 1 - original prompt
---------
```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
### User:
Alex is Charlie's father. Who of them was born later?
### Input:
### Response:
```
The answer is wrong (`./main --file prompts/orca_chat_v2.txt --ctx_size 2048 --temp 0.2 --model models/orca_mini_v2_13b.ggmlv3.q4_0.bin --threads 10`)
```
Based on the information given, we cannot determine who was born later between Alex and Charlie's father. We need more information to answer this question accurately.
```
The answer is more or less the same over 10 tries.
Version 2 - step-by-step thinking
---------
When changing the prompt slightly by adding "step-by-step" reasoning:
```
### Response:
That's easy let's think step-by-step:
```
The answers (while not perfect at least try to analyse the problem):
```
1. We know that Alex and Charlie are related as father and son, respectively.
2. To determine who was born later, we need to compare their birth dates.
3. Let's assume that Alex was born on January 1st, 1980 and Charlie was born on July 15th, 2000.
4. From this information, we can conclude that Charlie was born later than his father Alex.
```
The success rate of this approach is more or less 20%. Can you share your prompts that help the LLM to arrive at the answer without hardcoding too much information about the question itself?
Version 3 - chatgpt-4 assisted prompt
---------
In this version the system prompt is much more advanced and it isn't hardcoded to the specific problem (maybe to the extent that there are some entities and relationship between them).
```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
In answering questions, follow these steps:
1. **Entity Identification**: Identify the main entities involved in the query. Entities can include people, objects, events, or abstract concepts.
2. **Relationship Comprehension**: Determine the relationships between these entities. Relationships may be explicit in the text or may need to be inferred based on context and general knowledge.
3. **Implication Understanding**: Understand the general implications of these relationships. These implications can be based on established facts, principles, or rules related to the identified relationships.
4. **Question Contextualization**: Align the implications of the relationships with the context of the query. This alignment should guide your response to the query.
5. **Answer Generation**: Based on the understanding of the entities, their relationships, and implications, generate an appropriate response to the query.
This strategy is applicable to a wide variety of questions and should help you make informed decisions and provide accurate answers even in complex situations.
### User:
Alex is Charlie's father. Who of them was born later?
### Input:
### Response:
That's easy let's think step-by-step:
1.
```
Success rate 5/5.
Yesterday someone posted this on Hacker News - https://news.ycombinator.com/item?id=36677034 (GPT-Prompt-Engineer). Shouldn't it be the case that, to really utilise any LLM, the prompts have to be designed or predicted for the set of problems at hand?
Perhaps even some other LLM can be fine-tuned to generate prompts that help the LLM solve the problem. | 2023-07-12T13:27:52 | https://www.reddit.com/r/LocalLLaMA/comments/14xolpe/follow_up_to_who_of_them_is_older/ | mosquit0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xolpe | false | null | t3_14xolpe | /r/LocalLLaMA/comments/14xolpe/follow_up_to_who_of_them_is_older/ | false | false | self | 1 | null |
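For anyone who wants to measure success rates like the 20% or 5/5 figures above without clicking through by hand, a small evaluation loop against a locally running llama.cpp example server is one way to do it. A sketch, assuming the server's `/completion` endpoint and a deliberately crude correctness check:

```
import requests

# Assumes the llama.cpp example server is running locally, e.g.:
#   ./server -m models/orca_mini_v2_13b.ggmlv3.q4_0.bin -c 2048
SERVER = "http://localhost:8080/completion"

def ask(prompt: str, n_predict: int = 256, temperature: float = 0.2) -> str:
    r = requests.post(SERVER, json={"prompt": prompt,
                                    "n_predict": n_predict,
                                    "temperature": temperature})
    r.raise_for_status()
    return r.json()["content"]

def success_rate(prompt: str, check, runs: int = 10) -> float:
    return sum(check(ask(prompt)) for _ in range(runs)) / runs

# Crude check for the father/son question: a correct answer should name Charlie
prompt = open("prompts/orca_chat_v2.txt").read()
print(success_rate(prompt, check=lambda answer: "charlie" in answer.lower()))
```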
Help a newbie :) | 1 | Hey all
I'm new to LLMs and everything AI related, in respect of offline. Here is what I have:
1. GPT4 sub / API
2. Gaming rig with a single 3090 and 128GB RAM
3. 2019 15" MacBook Pro with 32GB RAM
As far as my knowledge goes:
1. Installed Oobabooga and played around with various models from HF on the gaming rig (offline)
2. Installed Stable Diffusion offline on the gaming rig, pretty amazing stuff
What do I want to do?
I want to learn how to tune my own models, not for corporate use, just for my own interests. For the purpose of TLDR:
1. People talk of the LoRA and QLoRA GitHub repos having examples of how my dataset should be formatted: which repos? Are there any specific ones that can help me with examples? I can code in Python; it's just a case of seeing some example and then using it as a reference.
2. Offline - I want to use an API like I could with GPT4, but I want it to be offline / running on my gaming rig, what do you recommend? I didn't think I would use Oobabooga for this, I was hoping to run an actual model from the command line and then learn the API and I would also use this model for training.
3. Cloud - I'd use the cloud for training, though my initial dataset is tiny, I just want to use it to learn how the whole process works, before moving on to something more challenging.
4. I assume the actual training piece is the easiest part, the data accuracy etc is possibly the most time consuming of all.
Can anyone help me out?
Also, it would be nice to run these models on a laptop, as I travel, is it worth picking up an old M1 MacbookPro with 64GB RAM? would that achieve that?
Thanks | 2023-07-12T14:03:18 | https://www.reddit.com/r/LocalLLaMA/comments/14xphbs/help_a_newbie/ | Kreator333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xphbs | false | null | t3_14xphbs | /r/LocalLLaMA/comments/14xphbs/help_a_newbie/ | false | false | self | 1 | null |
🤖 AI Podcast - Voice Conversations🎙 with Local LLMs on M2 Max | 1 | ​
https://reddit.com/link/14xpw6m/video/65zbjgsxjjbb1/player
Code: [https://github.com/xorbitsai/inference/blob/main/examples/AI_podcast.py](https://github.com/xorbitsai/inference/blob/main/examples/AI_podcast.py)
This demo is powered by [Xorbits Inference](https://github.com/xorbitsai/inference), a library designed to deploy and serve LLMs (and not only LLMs) on-premises, even on your laptop (available for macOS/Windows/Linux).
You can install Xinference via PyPI. We strongly recommend using a new virtual environment to avoid potential dependency conflicts:
$ pip install "xinference[all]"
Run:
$ xinference
And you are all set! 🍻
You can now chat with your favorite open-source LLM:
https://preview.redd.it/zbx4vt3iljbb1.jpg?width=2104&format=pjpg&auto=webp&s=f23e414ae05fe35a18ac6c638a4f162ee2915f29
Or you can build an AI powered application with xinference client:
from xinference.client import Client
client = Client("http://localhost:9997")
model_uid = client.launch_model(model_name="chatglm2")
model = client.get_model(model_uid)
chat_history = []
prompt = "What is the largest animal?"
model.chat(
    prompt,
    chat_history,
    generate_config={"max_tokens": 1024}
)
Try using Xinference now! | 2023-07-12T14:19:23 | https://www.reddit.com/r/LocalLLaMA/comments/14xpw6m/ai_podcast_voice_conversations_with_local_llms_on/ | CORNMONSTER_2022 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xpw6m | false | null | t3_14xpw6m | /r/LocalLLaMA/comments/14xpw6m/ai_podcast_voice_conversations_with_local_llms_on/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'J0ZGjDf4MkFGANtZB1KuqiW-He34U5Q5G8CPc03YOWg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nVWeO2RVpBgZ4uiOASaUzOpjzpLwqxhXaaql4SSP3Ko.jpg?width=108&crop=smart&auto=webp&s=a19fb44d1e22dd8e6400166c46e87ac642ff2116', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nVWeO2RVpBgZ4uiOASaUzOpjzpLwqxhXaaql4SSP3Ko.jpg?width=216&crop=smart&auto=webp&s=0f455b2dd7bf6a5d79194d3d1c9e1d574145e9ba', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nVWeO2RVpBgZ4uiOASaUzOpjzpLwqxhXaaql4SSP3Ko.jpg?width=320&crop=smart&auto=webp&s=b18e0091ecee3c87aebea56cb22728b00893327a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nVWeO2RVpBgZ4uiOASaUzOpjzpLwqxhXaaql4SSP3Ko.jpg?width=640&crop=smart&auto=webp&s=a647b722d63f7ae10794a43707bf589786f4bb6e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nVWeO2RVpBgZ4uiOASaUzOpjzpLwqxhXaaql4SSP3Ko.jpg?width=960&crop=smart&auto=webp&s=f297b3798e38585bbd1df45942d5ad44ba838747', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nVWeO2RVpBgZ4uiOASaUzOpjzpLwqxhXaaql4SSP3Ko.jpg?width=1080&crop=smart&auto=webp&s=609eb94bb2dab999542edc854d5766f901cb5cba', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nVWeO2RVpBgZ4uiOASaUzOpjzpLwqxhXaaql4SSP3Ko.jpg?auto=webp&s=1b85be589971c34aad0ffc919e3c5152b1274f7e', 'width': 1200}, 'variants': {}}]} |
|
Have 10 NVIDIA A6000 cards - need advice. | 1 | Hey, Reddit!
I've got ten brand new NVIDIA A6000 cards, still sealed, except for one I used for testing. Here's the catch: I received them directly from NVIDIA as part of a deal, so no official papers or warranties to provide, unfortunately. What do you think I should do with them? Would there be any interest in buying them?
Appreciate any advice or suggestions! Thanks! | 2023-07-12T14:36:53 | https://www.reddit.com/r/LocalLLaMA/comments/14xqcdw/have_10_nvidia_a6000_cards_need_advice/ | tadem2k3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xqcdw | false | null | t3_14xqcdw | /r/LocalLLaMA/comments/14xqcdw/have_10_nvidia_a6000_cards_need_advice/ | false | false | self | 1 | null |
Best way to support open LLMs? | 1 | Hey everyone. I work professionally in optimizing LLM compute, and I'm interested in using my experience to tip the balance more toward individual and open LLMs, since the industry is very enterprise-focused right now.
I've started a project that I think will contribute to this goal, but I wanted to ask the community: what do you think is most needed to bootstrap open LLMs? What is most holding back open source and individual developers from competing with the big guys? | 2023-07-12T16:30:24 | https://www.reddit.com/r/LocalLLaMA/comments/14xtfbl/best_way_to_support_open_llms/ | MartinCampbell6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xtfbl | false | null | t3_14xtfbl | /r/LocalLLaMA/comments/14xtfbl/best_way_to_support_open_llms/ | false | false | self | 1 | null |
How long would it take/cost for someone to train BLOOM-176B on airoboros 1.4? | 1 | I was thinking someone could spend a lot of money with runpod or another cloud provider to train BLOOM on a larger and higher quality dataset like airoboros, since BLOOM is known for being undertrained (366B tokens I think?).
Does anyone know how many GPUs this would take or how long it would take and how much it would cost with a cloud provider for training or finetuning? Such a large number of parameters on a really good and high quality dataset should get us closer to GPT-3.5 levels of quality.
If someone with a lot of experience could actually train the model on this dataset (I have no idea how to train a model myself locally or on cloud) I could pay the costs of finetuning on cloud services assuming it wasn’t too high. The only thing really limiting this community is money, which companies have a lot of so they can develop very high quality models that open source really doesn’t have the money to develop. | 2023-07-12T16:53:15 | https://www.reddit.com/r/LocalLLaMA/comments/14xu1uw/how_long_would_it_takecost_for_someone_to_train/ | TheRedMoonLives | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xu1uw | false | null | t3_14xu1uw | /r/LocalLLaMA/comments/14xu1uw/how_long_would_it_takecost_for_someone_to_train/ | false | false | self | 1 | null |
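For a very rough sense of scale, the common ~6·N·D rule of thumb for training FLOPs can be turned into a back-of-the-envelope estimate. Every constant below (token count, sustained per-GPU throughput, hourly price) is an assumption rather than a quote from any provider, and memory rather than raw compute is usually the real blocker for a 176B full finetune:

```
# Rough back-of-the-envelope for full-parameter finetuning of BLOOM-176B.
params = 176e9                      # model parameters
tokens = 1e9                        # assumed finetuning tokens (several epochs of an instruction set)
flops = 6 * params * tokens         # standard ~6*N*D estimate of training FLOPs
a100_flops = 150e12                 # assumed sustained throughput per A100, FLOP/s
gpu_hours = flops / a100_flops / 3600
price = 2.0                         # assumed USD per A100-hour
print(f"~{gpu_hours:,.0f} A100-hours, ~${gpu_hours * price:,.0f}")
# Prints roughly ~2,000 A100-hours / ~$4,000 under these assumptions, ignoring
# parallelism overhead. Memory is the harder part: weights plus optimizer states
# in mixed precision need ~16 bytes/param, i.e. ~2.8 TB, so dozens of 80 GB GPUs
# have to be up at once before activations are even counted.
```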
Best local LLMs for medical knowledge | 1 | [removed] | 2023-07-12T17:53:50 | https://www.reddit.com/r/LocalLLaMA/comments/14xvndz/best_local_llms_for_medical_knowledge/ | CalmingColor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xvndz | false | null | t3_14xvndz | /r/LocalLLaMA/comments/14xvndz/best_local_llms_for_medical_knowledge/ | false | false | self | 1 | null |
Suggestions for a good Story Telling model? | 1 | Hi, I'm looking at the models over at:
https://huggingface.co/TheBloke
I have 12GB of VRAM, so I'm choosing models that have 13B, GPTQ, and SuperHOT-8K.
That still leaves me with lots to choose from! Any idea which are good for "Write a short a story about..." | 2023-07-12T17:55:19 | https://www.reddit.com/r/LocalLLaMA/comments/14xvost/suggestions_for_a_good_story_telling_model/ | c_gdev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xvost | false | null | t3_14xvost | /r/LocalLLaMA/comments/14xvost/suggestions_for_a_good_story_telling_model/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ijgSlZO3K44WshhENFl9jhybG8Na3DBCsOXCuyZgycw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=108&crop=smart&auto=webp&s=3e5fdcc67bd2b0779a9f019942e0727ffb86630b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=216&crop=smart&auto=webp&s=b390a77acee51d46b2ca5992c38755e0ea4269e1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=320&crop=smart&auto=webp&s=23586102b6805c7f96721c02b9cad47b5dbfef49', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=640&crop=smart&auto=webp&s=205e31dad1af816278184e44d5aa56e886ad9b4d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=960&crop=smart&auto=webp&s=a2a9e82e506b94bd26ef0019ae18a7b946ccdc74', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=1080&crop=smart&auto=webp&s=928a52a138d0687290827ee2224923bb8f03e39e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?auto=webp&s=addebda9b8be1b664eaee5ea404f4c7df3d5eef2', 'width': 1200}, 'variants': {}}]} |
Handling chat histories that are longer than the context length? | 1 | For a chatbot, how do I handle a chat history that is longer than the context length for the model? I want the chatbot to be able to refer to as much of the chat as possible. I know summarizing and vector databases are useful to tackle this but I wasn't able to find anything specific to chat histories.
Are there any example implementations which I can refer to? Or any resources that may help? | 2023-07-12T19:40:31 | https://www.reddit.com/r/LocalLLaMA/comments/14xyg8h/handling_chat_histories_that_are_longer_than_the/ | Chirumer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xyg8h | false | null | t3_14xyg8h | /r/LocalLLaMA/comments/14xyg8h/handling_chat_histories_that_are_longer_than_the/ | false | false | self | 1 | null |
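One common pattern, sketched generically: keep the system prompt plus a running summary of older turns, and pack as many recent turns as fit under the context budget; turns that fall off the window get folded into the summary (by asking the model to summarize them) or pushed into a vector store for retrieval. The function below is an illustration only, with tokenization abstracted behind a `count_tokens` callable:

```
def build_prompt(system: str, summary: str, history: list[tuple[str, str]],
                 count_tokens, budget: int = 2048, reserve: int = 512) -> str:
    """Keep the system prompt, a summary of older turns, and as many recent
    turns as fit in the context window, leaving room for the model's reply."""
    header = f"{system}\nSummary of earlier conversation: {summary}\n"
    used = count_tokens(header)
    kept: list[str] = []
    for user_msg, bot_msg in reversed(history):          # walk from newest to oldest
        turn = f"USER: {user_msg}\nASSISTANT: {bot_msg}\n"
        cost = count_tokens(turn)
        if used + cost > budget - reserve:
            break                                        # older turns live only in the summary
        kept.append(turn)
        used += cost
    return header + "".join(reversed(kept)) + "USER: "
```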
Need help setting a Prompt Template correctly in Oobabooga | 1 | I downloaded the [airoboros 33b GPTQ](https://huggingface.co/TheBloke/airoboros-33B-gpt4-1-4-SuperHOT-8K-GPTQ) model and the model started talking to itself. I figured it needed a prompt template. But this is what is given on /u/TheBloke 's page:
"A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request. USER: \[prompt\] ASSISTANT:"
However, text generation UI has 5 fields: User String, Bot String, Context, Turn Template & Command for chat-instruct mode.
Can anyone tell me what goes where? | 2023-07-12T19:48:51 | https://www.reddit.com/r/LocalLLaMA/comments/14xyntm/need_help_setting_a_prompt_template_correctly_in/ | trithilon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xyntm | false | null | t3_14xyntm | /r/LocalLLaMA/comments/14xyntm/need_help_setting_a_prompt_template_correctly_in/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '8AK3SkatWEN74ojYFGTpToLsvPMQRtCspidPdVy7ReU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JQAt8RIQL0WnvW9fmBtmS78K4rgIYb79dgELg176sqE.jpg?width=108&crop=smart&auto=webp&s=9f98761408cf4875b0c0f62397ca67a752d8ebf4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/JQAt8RIQL0WnvW9fmBtmS78K4rgIYb79dgELg176sqE.jpg?width=216&crop=smart&auto=webp&s=12496afef68559fedf866241efadc4250cb4d9f8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/JQAt8RIQL0WnvW9fmBtmS78K4rgIYb79dgELg176sqE.jpg?width=320&crop=smart&auto=webp&s=940cf30926d5ac3edd524edbce78d1b425d76d1f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/JQAt8RIQL0WnvW9fmBtmS78K4rgIYb79dgELg176sqE.jpg?width=640&crop=smart&auto=webp&s=7c3ca2a7c727981265e01a4f5d9ccafa3657626d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/JQAt8RIQL0WnvW9fmBtmS78K4rgIYb79dgELg176sqE.jpg?width=960&crop=smart&auto=webp&s=ce8f6a8cb83e6f5e2915acce4117141c111ec796', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/JQAt8RIQL0WnvW9fmBtmS78K4rgIYb79dgELg176sqE.jpg?width=1080&crop=smart&auto=webp&s=ffa5f364f34063056748f74b7f1369f43b373089', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/JQAt8RIQL0WnvW9fmBtmS78K4rgIYb79dgELg176sqE.jpg?auto=webp&s=5ea94b0abd3329f42b8cca54c95a50ec61507d03', 'width': 1200}, 'variants': {}}]} |
Floneum, a graph editor for local AI workflows | 1 | 2023-07-12T19:50:21 | https://news.ycombinator.com/item?id=36697595 | _underlines_ | news.ycombinator.com | 1970-01-01T00:00:00 | 0 | {} | 14xyp6n | false | null | t3_14xyp6n | /r/LocalLLaMA/comments/14xyp6n/floneum_a_graph_editor_for_local_ai_workflows/ | false | false | default | 1 | null |
|

Is there a way to persist Llama-cpp-python caches to disk and then reload them at inference time? | 1 | Since restoring a cached prompt is *so* much faster than processing it from scratch each time, is there any way I can pre-process a bunch of prompts, save them to disk, and then just reload them at inference time?
For example, say I have a 2000-token prompt that I use daily. Since we’re talking about a program that uses *all* of my available memory, I can’t keep it running while I’m working. I don’t want to have to wait for it to ingest that prompt every time I start my Python program. Wat do? | 2023-07-12T20:13:47 | https://www.reddit.com/r/LocalLLaMA/comments/14xzb7a/is_there_a_way_to_persist_llamacpppython_caches/ | E_Snap | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xzb7a | false | null | t3_14xzb7a | /r/LocalLLaMA/comments/14xzb7a/is_there_a_way_to_persist_llamacpppython_caches/ | false | false | self | 1 | null |
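A hedged sketch of what this could look like with llama-cpp-python's `save_state()`/`load_state()`. Whether the state object pickles cleanly, and the exact method signatures, depend on the installed version, so treat this as an assumption to verify:

```python
# Sketch: evaluate the standing prompt once, pickle the resulting KV state,
# and restore it on later runs instead of re-ingesting the prompt.
import pickle
from llama_cpp import Llama

llm = Llama(model_path="models/your-model.ggmlv3.q4_0.bin")  # hypothetical path

# One-time preprocessing run:
standing_prompt = b"... your 2000-token standing prompt ..."
llm.eval(llm.tokenize(standing_prompt))
with open("prompt_state.pkl", "wb") as f:
    pickle.dump(llm.save_state(), f)

# Later runs: load the saved state and continue generating from it.
with open("prompt_state.pkl", "rb") as f:
    llm.load_state(pickle.load(f))
```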
Who of them is older - llama.cpp Server - German Vicuna | 1 | …
TL;DR:
- I believe that LLMs struggle with understanding some concepts related to temporal relations, kinship, and ordinal relations. LLMs may lack the capacity to grasp specialized thinking concepts. Larger models may perform better as they have more capacity to consider these concepts.
- I think there is more of an abstract understanding of concepts but a lack of concrete understanding.
- This reminded me of the weak central coherence theory in autism.
- Recommendation of llama.cpp server for very fast inference.
…
---
…
I would like to start a discussion and know what your thoughts are on this topic.
I thought about why there are questions that the language models can only answer very badly or not at all. This is not a scientific research or discussion of the topic, just my thoughts and impressions.
It seems to me that there are discernible patterns among the questions that cause difficulty for language models.
My personal impression so far is that LLMs have difficulty with the following (aside from the already well-known areas like coding and arithmetic, etc):
- Understanding temporal relations using certain linguistic facts (e.g. when using words like: later, before, after, younger, earlier, afterwards, etc). In a book I can't remember the title of, which was about language and thinking, I once read that most people process and store temporal segments and spatial distances in a very similar way. I don't know if the following sentence can be said like that in English, but in German I could say something like "Oh, but we're talking about a very distant generation of us" and almost all people would immediately know that I don't mean all the mothers and fathers who are hundreds of miles away from me, but that I'm referring to the past. I wonder if LLMs have difficulty with some tasks because those tasks require specialized thinking concepts that cannot be represented with just plain words per se. And I wonder if this is why larger models tend to solve such tasks better than models with fewer parameters, because their neural network had more capacity to consider a correlate for such a thought concept.
- Kinship and family relations. It may take a kind of "ego" and an identification of the self to realistically 'understand' this concept.
- It also seems to me that LLMs have difficulty understanding ordinal relations and integrating them into a real context. Although they seem to understand these relations quite well in an abstract way, the concrete meaning or manifestation of ordinal relations often seems to cause them problems. For example, grasping the scope of such statements and integrating them correctly into the context: to be second in a race (relates to a status); my last thought was... (relates to mentalization); the third to the left of me (refers to a spatial indication); who was born first (as in the example in a subreddit - refers to a temporal indication). So, my thought is that in order to grasp with real understanding what exactly an ordinal indication refers to, it requires a concrete understanding of the concept and not just an abstract one.
…
---
…
There is an example below, but along the way I would like to recommend the llama.cpp server, which can be seen in the [screenshot](https://imgur.com/IpXzoR5) (I only discovered it myself yesterday). This achieves the fastest inference times of all the solutions I know so far, and the implementation is really very simple. Simply call ./server -m /path/to/model and open localhost:8080 in your browser. This UI is still very minimalistic, which I personally find beautiful. (This just seems like bad surreptitious advertising, but I assure you that I was not commissioned or paid by anyone :D )
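If anyone wants to hit it from a script rather than the browser, something like this should work against the example server's `/completion` endpoint. The endpoint name and the response field reflect the server example as of mid-2023 and may change between builds, so treat them as assumptions:

```python
# Minimal client sketch for the llama.cpp example server (default port 8080).
import json
import urllib.request

payload = {"prompt": "User: Daniel is Eddy's father. Who was born first?\nAssistant:",
           "n_predict": 64}
req = urllib.request.Request(
    "http://localhost:8080/completion",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["content"])
```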
…
Okay, the second is a recommendation for German-speaking users: In this example, I used the following 7B model from [Huggingface user **jphme**](https://huggingface.co/jphme): [Orca Mini v2 German 7b GGML](https://huggingface.co/jphme/orca_mini_v2_ger_7b-GGML)
He also has a Vicuna-13B-v1.3-ggml in German, but I haven't tried it yet.
So here is the English translation from the short dialogue with Orca-Mini-7B:
https://imgur.com/IpXzoR5
…
…
…
> User: I have a logic task for you here: Daniel is Eddy's father. Which of these two was born first?
> Response: The problem with this logical question is that it is based on the false assumption that only two people can exist, namely Daniel and Eddy. In reality, there are many people in the world, and everyone has their own father.
> User: Okay, you're right. But my question does not only relate to these two people. So, with regard to these two people (Daniel and Eddy): Who was born after whom?
> Response: Regarding Daniel and Eddy, the answer to this logical question is as follows: Daniel was born first before Eddy.
…
…
What is interesting here is that I even made a mistake in my follow-up statement by mistakenly denying my own statement and making the task unnecessarily difficult ("...my question does **not** only refer to these two people..."), but the model actually understood what it was about. But here you can see what I mean by abstract understanding: Since my question was asked across the board, the model apparently referred the word "first" to the whole world. This error cannot come from the training data, but in my opinion it arises from the absence of a concrete understanding – or to say it more correctly: from the absence of enough well trained parameters to achieve a more advanced prediction which would mimic a human concrete understanding.
*(I'm just realizing that my contribution is getting much longer than I intended, so I'm trying to make it short now)*
- In the considerations described above, I can also think of a term from the field of autism, namely the [weak central coherence theory](https://en.wikipedia.org/wiki/Weak_central_coherence_theory). This theory says that autistic brains automatically pay more attention to details than to the big picture (to the whole context). This would explain why autistic people often understand things very literally, such as irony, or why they have difficulties in social interaction, where it is particularly important to recognize contexts. I see similarities in communication with current LLMs. I say this as my purely personal impression and as someone who is himself diagnosed with Asperger's syndrome or ASD. Communication with the LLMs causes me a similar "feeling" to the feeling I get when I talk to other autistic people. It's a hard-to-describe feeling, but all the autistic people I know personally confirm the same thing (that you feel it almost immediately when someone else is autistic - not that with the LLMs).
…
---
…
Taking the word "first" literally brought me to this thought. Because it's true in itself: **first** in relation to what? One could also understand it this way: Was the father the firstborn in the context of his possible siblings, or was the son a firstborn in the context of his possible siblings? Etc. I also asked this question of a few other models and usually set relatively deterministic sampling parameters. I then tried to observe the possible 'thoughts'/predictions of the LLMs with the --verbose (llama.cpp) or --debugmode (koboldcpp) option. It is very interesting to see what happens if, for example, you set top-k to 2 or 3, because then you can see which other thought processes (second and third words with corresponding probabilities) would have been possible. The second and third most likely words seemed to refer very often to siblings. This illustrates quite well that the model actually understood something different: namely "first" in the sense of first-born within the group of children/siblings...
…
Okay, since the post has been longer than I wanted anyway and because I lost the thread out of all the excitement after my 'auting' :P (no, to be honest, I assume that I'm no longer a rare species 'here' in this environment) I finish this and would be happy to hear if others have already thought about this stuff and/or what they think now | 2023-07-12T21:05:34 | https://www.reddit.com/r/LocalLLaMA/comments/14y0o46/who_of_them_is_older_llamacpp_server_german_vicuna/ | Evening_Ad6637 | self.LocalLLaMA | 2023-07-12T21:27:11 | 0 | {} | 14y0o46 | false | null | t3_14y0o46 | /r/LocalLLaMA/comments/14y0o46/who_of_them_is_older_llamacpp_server_german_vicuna/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '9M6Jt1vsUoqDFZbrbLdLQ2nbm2rqQyRKz3NLiVLNnx4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/1J_1NaZJ3YsRa4_I6PPJS3LwPd1ssF1FDk34eykY5Ik.jpg?width=108&crop=smart&auto=webp&s=c87af5c53984d2f8976b8cefa1c11fa8d905f4ba', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/1J_1NaZJ3YsRa4_I6PPJS3LwPd1ssF1FDk34eykY5Ik.jpg?width=216&crop=smart&auto=webp&s=71a57826eedd57f2fba4325ad3f3d6506057ae87', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/1J_1NaZJ3YsRa4_I6PPJS3LwPd1ssF1FDk34eykY5Ik.jpg?width=320&crop=smart&auto=webp&s=71623b9bc8f9af0814f45599bb08a1f703500438', 'width': 320}], 'source': {'height': 315, 'url': 'https://external-preview.redd.it/1J_1NaZJ3YsRa4_I6PPJS3LwPd1ssF1FDk34eykY5Ik.jpg?auto=webp&s=723fb8cc9a03d9fbec56d111c4e80e15dc2f5858', 'width': 600}, 'variants': {}}]} |
Who's participating? | 1 | LLM efficiency challenge at Neurips. | 2023-07-12T21:20:47 | https://llm-efficiency-challenge.github.io/challenge | turtle-run | llm-efficiency-challenge.github.io | 1970-01-01T00:00:00 | 0 | {} | 14y12af | false | null | t3_14y12af | /r/LocalLLaMA/comments/14y12af/whos_participating/ | false | false | default | 1 | null |
Why do they always or often fail at this task? Can Llama models do this? | 0 | 2023-07-12T21:20:57 | https://www.reddit.com/gallery/14y12fm | ninjasaid13 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 14y12fm | false | null | t3_14y12fm | /r/LocalLLaMA/comments/14y12fm/why_do_they_always_or_often_fail_at_this_task_can/ | false | false | default | 0 | null |

ERROR:No model is loaded! Select one in the Model tab | 1 | However, in the web UI's Model tab, selecting anything (for example llama.cpp) and clicking the download button to the right returns a Python error.
*Traceback (most recent call last): File “../oobabooga_linux/text-generation-webui/server.py”, line 142, in download_model_wrapper model, branch = downloader.sanitize_model_and_branch_names(model, branch) File “../oobabooga_linux/text-generation-webui/download-model.py”, line 37, in sanitize_model_and_branch_names if model[-1] == ‘/’: IndexError: string index out of range* | 2023-07-12T21:39:33 | https://www.reddit.com/r/LocalLLaMA/comments/14y1jpl/errorno_model_is_loaded_select_one_in_the_model/ | aeniFi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14y1jpl | false | null | t3_14y1jpl | /r/LocalLLaMA/comments/14y1jpl/errorno_model_is_loaded_select_one_in_the_model/ | false | false | self | 1 | null |
Post short chat exchanges or answers that surprised you. | 1 | I want to see some screenshots or texts of short exchanges that surprised you.
I don't want long stories - who has time to read all that.
Here is my example from today:
https://preview.redd.it/odi03m0cvlbb1.png?width=752&format=png&auto=webp&s=09dcc87a153762da2b7d245ebaaf9ffa57559b71
The last line did indeed surprise me - because the Assistant responded in a way as if it actually got the joke I was playing on it.
Things like that. Short, funny, surprising... | 2023-07-12T21:55:36 | https://www.reddit.com/r/LocalLLaMA/comments/14y1yos/post_short_chat_exchanges_or_answers_that/ | FPham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14y1yos | false | null | t3_14y1yos | /r/LocalLLaMA/comments/14y1yos/post_short_chat_exchanges_or_answers_that/ | false | false | 1 | null |
OpenOrca-Preview1-13B released | 1 | Today, the Open-Orca team released OpenOrca-Preview1-13B
This is llama 13b fine tuned on a small subset (6%) of the Open-Orca instructions dataset.
The Open-Orca instructions dataset is an attempt to reproduce the Orca paper.
They claim they got 60% of the improvement with 6% of the data.
The fine tuning used 8x A100-80G GPUs for 15 hours. Commodity cost was < $200.
I am excited to see the models finetuned on the entire dataset.
https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B | 2023-07-12T22:13:22 | https://www.reddit.com/r/LocalLLaMA/comments/14y2f74/openorcapreview113b_released/ | Amgadoz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14y2f74 | false | null | t3_14y2f74 | /r/LocalLLaMA/comments/14y2f74/openorcapreview113b_released/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '7Ig2FG5EWbGTMd0_W6GGGNwn1jmkXi5VMp-JtJ9VsHI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vny-8ESRqfJT0r548IJUn0evjiKu4ZA-PIWjBspqPCY.jpg?width=108&crop=smart&auto=webp&s=679d090619256f7b14b7856b99f6b1aa35af57e4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vny-8ESRqfJT0r548IJUn0evjiKu4ZA-PIWjBspqPCY.jpg?width=216&crop=smart&auto=webp&s=c0947b7daceb2f9cb019ae6799e70eca34e186ae', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vny-8ESRqfJT0r548IJUn0evjiKu4ZA-PIWjBspqPCY.jpg?width=320&crop=smart&auto=webp&s=30e6f6478f45fdc78bff9e5f0eec5bee3de1e00c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vny-8ESRqfJT0r548IJUn0evjiKu4ZA-PIWjBspqPCY.jpg?width=640&crop=smart&auto=webp&s=8b2508645708a95dd0e0989d36a42fe7799d5dfc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vny-8ESRqfJT0r548IJUn0evjiKu4ZA-PIWjBspqPCY.jpg?width=960&crop=smart&auto=webp&s=8ceb05cc52b01d36c0acb0388aa2a0d44cdd5c92', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vny-8ESRqfJT0r548IJUn0evjiKu4ZA-PIWjBspqPCY.jpg?width=1080&crop=smart&auto=webp&s=8a1e00ae476f451fb3a9c12ab8224b2e8af535c3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vny-8ESRqfJT0r548IJUn0evjiKu4ZA-PIWjBspqPCY.jpg?auto=webp&s=9ed318b363aca0c2f3189a07988e39d6939302c8', 'width': 1200}, 'variants': {}}]} |
GPT4ALL and 33B/40B models? | 1 | I'm running the GPT4All program and I like the interface. I've gotten 13B models and the smaller ones to run. But I've downloaded a few of the 33B models and they don't seem to show up in the application. I put them in the correct folder with the other files. I made sure they were the single-file GGML .bin versions.
Does anyone know what I am doing wrong, or if there is just an issue limiting the sizes that can be run in GPT4All (Google turned up nothing)?
​ | 2023-07-12T22:23:50 | https://www.reddit.com/r/LocalLLaMA/comments/14y2oox/gpt4all_and_33b40b_models/ | KookyKaleidoscope152 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14y2oox | false | null | t3_14y2oox | /r/LocalLLaMA/comments/14y2oox/gpt4all_and_33b40b_models/ | false | false | self | 1 | null |
Several newb questions | 1 | I have successfully installed (I'm not even sure what to call it? Alpaca? LLaMA? Surge?) and downloaded Airoboros 33B, going to switch to the 16GB model. This brought my system to its knees. 12900k / RTX 4090 / 32GB RAM.
1. How do I enable the GPU? Because I don't think it's enabled.
2. How can I give it access to the open internet? For example I would like to ask it "How many actors were nominated for an Oscar and didn't win, that were 42 years old and it happened in an even numbered year" and have it able to access Google / Wikipedia / iMDB and provide an answer?
3. Can I give it access to a DB on the internet that requires a login?
4. If I do give it that access what are the security risks to me?
​
Thanks for your help, this is very exciting! | 2023-07-13T00:15:02 | https://www.reddit.com/r/LocalLLaMA/comments/14y5cfw/several_newb_questions/ | AstroAlto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14y5cfw | false | null | t3_14y5cfw | /r/LocalLLaMA/comments/14y5cfw/several_newb_questions/ | false | false | self | 1 | null |
Are there instructions on how to get set up with a Local LLaMA? GPT 3.5 is just good enough to run my project. I am keen to get something better, run locally and with a larger token count. Not sure where to start! | 1 | I have a 1600S and Ryzen 5 3600 computer at home that although I use for gaming, am interested in using for Local LLaMA. I am very new to all of this but am interested.
Are there resources available to help me get started?
Seems like there are new Local LLM's being pumped out constantly that I would love to try.
Thanks! | 2023-07-13T04:19:23 | https://www.reddit.com/r/LocalLLaMA/comments/14yag4j/are_there_instructions_on_how_to_get_set_up_with/ | The-SillyAk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14yag4j | false | null | t3_14yag4j | /r/LocalLLaMA/comments/14yag4j/are_there_instructions_on_how_to_get_set_up_with/ | false | false | self | 1 | null |
ex-ChatGPT newbie question | 1 | So far my experience has only been with ChatGPT and GPT4, but their replies have become unbearably "moral" and "ethical" in what appears to be support for a political narrative under the banner of "safety." So, I'm in my first hour of looking for non-OpenAI alternatives.
My question is, are there non-locally run alternatives (server side), as I only have 16GB of memory and I am reading that locally run models need more memory? Or are there locally run AIs that are not so resource-intensive?
Reminder, I'm only an hour into looking for an alternative to OpenAI products. Edit// I'm 8 hours into this at this point. Also, I do not trust search engines (suggest one that is trustworthy if you know one), as they are all leading me to where I don't want to go.
Thank you in advance. | 2023-07-13T04:41:48 | https://www.reddit.com/r/LocalLLaMA/comments/14yavm8/exchatgpt_newbie_question/ | Ethan_Boylinski | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14yavm8 | false | null | t3_14yavm8 | /r/LocalLLaMA/comments/14yavm8/exchatgpt_newbie_question/ | false | false | self | 1 | null |
airoboros-65B-gpt4-1.2-GGML - Solution of Egg Balancing Puzzle | 1 | inspired by this post [https://www.reddit.com/r/LocalLLaMA/comments/14xolpe/follow\_up\_to\_who\_of\_them\_is\_older/](https://www.reddit.com/r/LocalLLaMA/comments/14xolpe/follow_up_to_who_of_them_is_older/)
CC [u/mosquit0](https://www.reddit.com/user/mosquit0/)
I used a similar prompt to hopefully improve the model's answer. I also set the params such that it favours accuracy over creativity.
Here is the full transcript:
[https://paste.c-net.org/ClinicsGently](https://paste.c-net.org/ClinicsGently)
**Any tips on how to make the model more intelligent? also maybe use a different model? :)**
**thanks**
| 2023-07-13T06:31:45 | https://www.reddit.com/r/LocalLLaMA/comments/14ycygb/airoboros65bgpt412ggml_solution_of_egg_balancing/ | dewijones92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ycygb | false | null | t3_14ycygb | /r/LocalLLaMA/comments/14ycygb/airoboros65bgpt412ggml_solution_of_egg_balancing/ | false | false | self | 1 | null |
The head of DeepMind has confirmed the authenticity of an email in which a Google engineer doubted the company's future in AI | 1 | 2023-07-13T07:11:54 | https://gagadget.com/en/ai/277135-the-head-of-deepmind-has-confirmed-the-authenticity-of-an-email-in-which-a-google-engineer-doubted-the-companys-future/ | yehiaserag | gagadget.com | 1970-01-01T00:00:00 | 0 | {} | 14ydof5 | false | null | t3_14ydof5 | /r/LocalLLaMA/comments/14ydof5/the_head_of_deepmind_has_confirmed_the/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'yH5kikl13IlZqVuzBY0m6s-juZC26rmWEPrCac6mQKk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/eBn1ItPfdFwnn2Q-NUJIG9E7E5T2u4tzZoQ1W1tF0hA.jpg?width=108&crop=smart&auto=webp&s=cbee981fb347276daf73bb07457d8d7c94a0cdb7', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/eBn1ItPfdFwnn2Q-NUJIG9E7E5T2u4tzZoQ1W1tF0hA.jpg?width=216&crop=smart&auto=webp&s=ec82e20df1c0641c7616ce8a54f9f1b06aa23973', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/eBn1ItPfdFwnn2Q-NUJIG9E7E5T2u4tzZoQ1W1tF0hA.jpg?width=320&crop=smart&auto=webp&s=99532836d64518fc0d609d111241f94cd67fb890', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/eBn1ItPfdFwnn2Q-NUJIG9E7E5T2u4tzZoQ1W1tF0hA.jpg?width=640&crop=smart&auto=webp&s=4018ee08fff9638bd873712b6659dc3e10eac248', 'width': 640}], 'source': {'height': 441, 'url': 'https://external-preview.redd.it/eBn1ItPfdFwnn2Q-NUJIG9E7E5T2u4tzZoQ1W1tF0hA.jpg?auto=webp&s=1a2123db77ec20f90869e0ad09633c6f8569b253', 'width': 840}, 'variants': {}}]} |
Help a newbie here! | 1 | Hi there! Can anyone point me to which one of the text generation UI are easy to setup and can use GPU?
So far, I have tried dalai (bad magic error, doesn't work for me), llama.cpp (I can't get it installed, I'm dumb), FastChat (I can only use Vicuna and Alpaca, GGML doesn't work because I'm dumb), and I'm currently using koboldcpp, which works very well so far, except it says my GGML model is outdated, so I am unable to use my GPU.
I am currently looking at the git repo of oobabooga. I want something similar to Automatic1111 for SD which just works.
I use Linux Mint and NVIDIA 1060 GPU. Thanks | 2023-07-13T07:42:44 | https://www.reddit.com/r/LocalLLaMA/comments/14ye71x/help_a_newbie_here/ | Spirited_Employee_61 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ye71x | false | null | t3_14ye71x | /r/LocalLLaMA/comments/14ye71x/help_a_newbie_here/ | false | false | self | 1 | null |
Our first OpenOrca Model Preview is here! | 1 | ​
We beat current state of the art for public model releases in this class, for <$200 total training budget.
[https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B](https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B)
​
We used our own dataset, OpenOrca. We trained on \~6% of our data and saw \~60% of the total improvement achieved in the Orca paper on hard reasoning tasks from BigBench-Hard and AGIEval.
​
[https://huggingface.co/datasets/Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca)
​
BigBench-Hard and AGIEval have been the hardest for small models to improve on.
​
In some cases, we beat the original Orca paper's claims.
​
"Our Relative Improvement" shows 100% when our score matches that achieved in the Orca paper, so >100% means we've outperformed.
​
We're excited to be achieving state of the art performance in the 13B model class with a fast and simple train and expect to meet or beat the Orca scores with full model trainings.
​
We have several training partners we'll be announcing soon and continue to expand and refine our dataset.
​
Get in touch if you'd like to work with us to expand the performance boundaries of open source AI!
​
[AlignmentLab.ai](https://AlignmentLab.ai)
​
You can visualize our OpenOrca dataset via Atlas here:
[https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2) | 2023-07-13T07:43:45 | https://www.reddit.com/r/LocalLLaMA/comments/14ye7nq/our_first_openorca_model_preview_is_here/ | Alignment-Lab-AI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ye7nq | false | null | t3_14ye7nq | /r/LocalLLaMA/comments/14ye7nq/our_first_openorca_model_preview_is_here/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '7Ig2FG5EWbGTMd0_W6GGGNwn1jmkXi5VMp-JtJ9VsHI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vny-8ESRqfJT0r548IJUn0evjiKu4ZA-PIWjBspqPCY.jpg?width=108&crop=smart&auto=webp&s=679d090619256f7b14b7856b99f6b1aa35af57e4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vny-8ESRqfJT0r548IJUn0evjiKu4ZA-PIWjBspqPCY.jpg?width=216&crop=smart&auto=webp&s=c0947b7daceb2f9cb019ae6799e70eca34e186ae', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vny-8ESRqfJT0r548IJUn0evjiKu4ZA-PIWjBspqPCY.jpg?width=320&crop=smart&auto=webp&s=30e6f6478f45fdc78bff9e5f0eec5bee3de1e00c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vny-8ESRqfJT0r548IJUn0evjiKu4ZA-PIWjBspqPCY.jpg?width=640&crop=smart&auto=webp&s=8b2508645708a95dd0e0989d36a42fe7799d5dfc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vny-8ESRqfJT0r548IJUn0evjiKu4ZA-PIWjBspqPCY.jpg?width=960&crop=smart&auto=webp&s=8ceb05cc52b01d36c0acb0388aa2a0d44cdd5c92', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vny-8ESRqfJT0r548IJUn0evjiKu4ZA-PIWjBspqPCY.jpg?width=1080&crop=smart&auto=webp&s=8a1e00ae476f451fb3a9c12ab8224b2e8af535c3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vny-8ESRqfJT0r548IJUn0evjiKu4ZA-PIWjBspqPCY.jpg?auto=webp&s=9ed318b363aca0c2f3189a07988e39d6939302c8', 'width': 1200}, 'variants': {}}]} |
Sources: Meta is poised to release a commercial version of LLaMA imminently and plans to make the AI model more widely available and customizable by companies | 1 | 2023-07-13T08:08:24 | https://archive.is/WS877 | Zyj | archive.is | 1970-01-01T00:00:00 | 0 | {} | 14yemv9 | false | null | t3_14yemv9 | /r/LocalLLaMA/comments/14yemv9/sources_meta_is_poised_to_release_a_commercial/ | false | false | 1 | {'enabled': False, 'images': [{'id': '2C7dUvYiFqhCAdDqMyHKP-TQ_1nBTG9Fi1j6D1Xlo1g', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/VYy3lq8gMCkm4scfSONS1aZWMrtNj3kRPNARIofSfyw.jpg?width=108&crop=smart&auto=webp&s=2cd226c4c3aa657c3efe7a564cdfbc40c1febc5c', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/VYy3lq8gMCkm4scfSONS1aZWMrtNj3kRPNARIofSfyw.jpg?width=216&crop=smart&auto=webp&s=254a082a42680cd2bdf5fbe610f75e748557d869', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/VYy3lq8gMCkm4scfSONS1aZWMrtNj3kRPNARIofSfyw.jpg?width=320&crop=smart&auto=webp&s=8afa957c693ca2fb12e160edb02428228153d7e7', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/VYy3lq8gMCkm4scfSONS1aZWMrtNj3kRPNARIofSfyw.jpg?width=640&crop=smart&auto=webp&s=e65eda9ff63a73b8e506c0d44e4ebe3393c2c6f7', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/VYy3lq8gMCkm4scfSONS1aZWMrtNj3kRPNARIofSfyw.jpg?width=960&crop=smart&auto=webp&s=40ac720f1e0305d368cb3e25e245c4d9fae6c4c0', 'width': 960}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/VYy3lq8gMCkm4scfSONS1aZWMrtNj3kRPNARIofSfyw.jpg?auto=webp&s=41735b6b91616583fa58049cdf9c9801044964e8', 'width': 1024}, 'variants': {}}]} |
A theoretical roadplan for AGI, maybe some developer here could be interested | 1 | 2023-07-13T08:41:01 | https://www.reddit.com/r/singularity/comments/14ybisu/why_massive_context_lengths_and_a_mixture_of/?utm_source=share&utm_medium=android_app&utm_name=androidcss&utm_term=1&utm_content=1 | Mission-Length7704 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 14yf757 | false | null | t3_14yf757 | /r/LocalLLaMA/comments/14yf757/a_theoretical_roadplan_for_agi_maybe_some/ | false | false | default | 1 | null |
Stable Diffusion | 1 | Hi guys, I just wanna say that I love this community and I love the resources I've found for LLMs like llama.cpp, oobabooga, etc. But I really wanna try out running Stable Diffusion stuff locally now, and I can't seem to find open-source stuff for it and don't even know where to look for beginner-friendly, not-that-hardware-intensive tools. Does anyone have a recommendation of where to start getting into that? Like, I don't know, a llama.cpp equivalent for Stable Diffusion models - I honestly don't even know if they have an equivalent to the GGML format. Yeah, I just wanna learn more about it but don't know where to start. | 2023-07-13T08:41:22 | https://www.reddit.com/r/LocalLLaMA/comments/14yf7e7/stable_diffusion/ | sebramirez4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14yf7e7 | false | null | t3_14yf7e7 | /r/LocalLLaMA/comments/14yf7e7/stable_diffusion/ | false | false | self | 1 | null
Dataset requirements | 1 | Hi Guys,
my question is regarding the dataset. I want to fine-tune a model, but I do not know what my data has to look like. I am not interested in instruction-based data; I would like to use just raw text.
Do I have to care about any particular format, or can I just create a plain .txt file of free-flowing text?
Thanks! | 2023-07-13T09:40:10 | https://www.reddit.com/r/LocalLLaMA/comments/14yg8b5/dataset_requirements/ | Enkay55 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14yg8b5 | false | null | t3_14yg8b5 | /r/LocalLLaMA/comments/14yg8b5/dataset_requirements/ | false | false | self | 1 | null |
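No special format is needed for plain continued pre-training: the text just gets tokenized and packed into fixed-length blocks. A hedged sketch with Hugging Face `datasets`/`transformers` (the model name, file name and block size are only examples):

```python
# Sketch: turn a raw .txt file into fixed-length token blocks for causal-LM training.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")  # example base model
raw = load_dataset("text", data_files={"train": "my_corpus.txt"})
block_size = 2048

def tokenize_and_pack(batch):
    ids = tokenizer("\n".join(batch["text"]))["input_ids"]
    blocks = [ids[i:i + block_size]
              for i in range(0, len(ids) - block_size + 1, block_size)]
    # For causal LM training the labels are just a copy of the inputs.
    return {"input_ids": blocks, "labels": [list(b) for b in blocks]}

train = raw["train"].map(tokenize_and_pack, batched=True, remove_columns=["text"])
```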
Are there Agent-specific models out there? | 4 | Are there any llama models specifically trained for CoT and following the ReAct format? Or are there specific datasets I can look for?
I'm pretty much only interesting in making autonomous agents, so role playing is not important. | 2023-07-13T12:06:29 | https://www.reddit.com/r/LocalLLaMA/comments/14yj2bh/are_there_agentspecific_models_out_there/ | IWantToBeAWebDev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14yj2bh | false | null | t3_14yj2bh | /r/LocalLLaMA/comments/14yj2bh/are_there_agentspecific_models_out_there/ | false | false | self | 4 | null |
[Help] I think I set the RAM too high in Oobabooga web interface and now its refusing to load from the terminal. | 0 | I'm using Ubuntu on a NUC with an i7-5557u and 16GB DDR3 RAM.
In the "CPU memory in MiB" field I put 15250, since that's pretty much what 16GB comes to in MiB. (I'm assuming this was asking about RAM, though I wouldn't be surprised if it was actually asking about CPU cache, in which case I should have put 3.8MiB)
Turns out asking it to utilise near 100% of your RAM isn't actually a good idea. It grinds my system to a near halt, says "Loading checkpoint shards: 0%", then after a minute or so prints "Killed" and exits.
I can't seem to find any sort of a config file outside of the WebUI and I can't use the WebUI because every time I try to run the start .sh file it does this.
Did I just accidentally nuke this by getting it to eat all my RAM or is there some way I can recover? | 2023-07-13T12:53:13 | https://www.reddit.com/r/LocalLLaMA/comments/14yk2sz/help_i_think_i_set_the_ram_too_high_in_oobabooga/ | pstrib | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14yk2sz | false | null | t3_14yk2sz | /r/LocalLLaMA/comments/14yk2sz/help_i_think_i_set_the_ram_too_high_in_oobabooga/ | false | false | self | 0 | null |
Jan: Local LLMs on your iPhone | 1 | [removed] | 2023-07-13T13:25:37 | https://www.reddit.com/r/LocalLLaMA/comments/14yku72/jan_local_llms_on_your_iphone/ | askjanai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14yku72 | false | null | t3_14yku72 | /r/LocalLLaMA/comments/14yku72/jan_local_llms_on_your_iphone/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '2vQtwpb1uYXb3f86dOnWLFaMJlKISbZ1nwvPDa26gT8', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/ZGXj7Xlo1X75Zy1NwvYWgxmCDQc8p3Uu6PCrm_uAyQ8.jpg?width=108&crop=smart&auto=webp&s=10a0bea31a6a60a966df4152471d2c35966f1f47', 'width': 108}, {'height': 140, 'url': 'https://external-preview.redd.it/ZGXj7Xlo1X75Zy1NwvYWgxmCDQc8p3Uu6PCrm_uAyQ8.jpg?width=216&crop=smart&auto=webp&s=d4f24b68d8445a5ab3a37f2009db12e148d7b14d', 'width': 216}, {'height': 208, 'url': 'https://external-preview.redd.it/ZGXj7Xlo1X75Zy1NwvYWgxmCDQc8p3Uu6PCrm_uAyQ8.jpg?width=320&crop=smart&auto=webp&s=d9fa7b96d9666687c35ef00829549b22a1dbee8a', 'width': 320}, {'height': 417, 'url': 'https://external-preview.redd.it/ZGXj7Xlo1X75Zy1NwvYWgxmCDQc8p3Uu6PCrm_uAyQ8.jpg?width=640&crop=smart&auto=webp&s=fd4a6aab6ce120fda6dc5f386052e8fd7ff27738', 'width': 640}, {'height': 625, 'url': 'https://external-preview.redd.it/ZGXj7Xlo1X75Zy1NwvYWgxmCDQc8p3Uu6PCrm_uAyQ8.jpg?width=960&crop=smart&auto=webp&s=a55c1ed980fb3a008a61912343ad3d9f9c7bf5a1', 'width': 960}, {'height': 704, 'url': 'https://external-preview.redd.it/ZGXj7Xlo1X75Zy1NwvYWgxmCDQc8p3Uu6PCrm_uAyQ8.jpg?width=1080&crop=smart&auto=webp&s=63ac804d78407104394d5dc7df889dff17bfcec5', 'width': 1080}], 'source': {'height': 712, 'url': 'https://external-preview.redd.it/ZGXj7Xlo1X75Zy1NwvYWgxmCDQc8p3Uu6PCrm_uAyQ8.jpg?auto=webp&s=e67ac564ea0016140a6e46191a2495d262b11ef4', 'width': 1092}, 'variants': {}}]} |
How to FineTune LLMs with long chats with many switches between users? | 1 | Let's say I have many chat histories and many switches (let's say 20) between two chat participants (user1 and user2). The context of the conversation could be important.
Now I want to fine-tune my model to mimic user2.
My prompts for fine-tuning would look like this, i.e.:
Prompt1:
Backgroundinformation about user2: <Backgroundinformation about user2>Chathistory:
User1: <Chat Prompt 1>
User2: <Chat Prompt 2>
User1:<Chat Prompt 3 >
User2:
Instruction: Respond as if you were user2
​
Prompt2:
Backgroundinformation about user2: <Backgroundinformation about user2>Chathistory:
User1: <Chat Prompt 1>
User2: <Chat Prompt 2>
User1:<Chat Prompt 3 >
User2:<Chat Prompt 4>
User1:<Chat Prompt 5>
User2:
Instruction: Respond as if you were user2
and so on...
So my question is: Can I use the same chat history 20 times for training with different context-size or does this tend to overfit my model? | 2023-07-13T14:08:20 | https://www.reddit.com/r/LocalLLaMA/comments/14ylvms/how_to_finetune_llms_with_long_chats_with_many/ | deepRLearner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ylvms | false | null | t3_14ylvms | /r/LocalLLaMA/comments/14ylvms/how_to_finetune_llms_with_long_chats_with_many/ | false | false | self | 1 | null |
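As a sketch of how such prompts are usually generated programmatically (so the samples come from slicing one transcript rather than being written by hand) - whether reusing one history this way overfits depends on the model, the LoRA rank and how much other data you have, so this is only an illustration:

```python
# Hypothetical sketch: one training sample per user2 turn, each containing only
# the turns that came before it.
def make_samples(background, turns):
    """turns: list of (speaker, text) tuples in chronological order."""
    samples = []
    for i, (speaker, text) in enumerate(turns):
        if speaker != "User2":
            continue
        history = "\n".join(f"{s}: {t}" for s, t in turns[:i])
        prompt = (f"Backgroundinformation about user2: {background}\n"
                  f"Chathistory:\n{history}\nUser2:")
        samples.append({"prompt": prompt, "completion": " " + text})
    return samples
```

The repeated prefixes are normal for this kind of data; masking the loss on everything before the final `User2:` turn is one common way to reduce the overfitting risk.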
Fine-tuning LoRA/QLoRA LLM | 1 | Do I need to make any change in the formatting of the output while using QLoRA? Can it handle JSON-format data?
The dataset I have generated through GPT-3 is in JSON format.
Example:
\*\*Review(col1)\*\*
Nice cell phone, big screen, plenty of storage. Stylus pen works well.
\*\*Analysis(col2)\*\*
\\\[{“segment”: “Nice cell phone”,“Aspect”: “Cell phone”,“Aspect Category”: “Overall satisfaction”,“sentiment”: “positive”},{“segment”: “big screen”,“Aspect”: “Screen”,“Aspect Category”: “Design”,“sentiment”: “positive”},{“segment”: “plenty of storage”,“Aspect”: “Storage”,“Aspect Category”: “Features”,“sentiment”: “positive”},{“segment”: “Stylus pen works well”,“Aspect”: “Stylus pen”,“Aspect Category”: “Features”,“sentiment”: “positive”}\\\] | 2023-07-13T14:34:59 | https://www.reddit.com/r/LocalLLaMA/comments/14ymj79/fine_tunning_loraqlora_llm/ | Muted-Mine-5236 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ymj79 | false | null | t3_14ymj79 | /r/LocalLLaMA/comments/14ymj79/fine_tunning_loraqlora_llm/ | false | false | self | 1 | null |
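QLoRA itself doesn't care about the format: the trainer ultimately sees one token sequence per example, so the JSON analysis can simply be serialized into the response text. A hedged sketch using the field names from the example above (the instruction wording is made up):

```python
# Sketch: flatten a (Review, Analysis) pair into one Alpaca-style training string.
import json

def to_training_text(row):
    analysis = json.dumps(row["Analysis"], ensure_ascii=False)
    return ("### Instruction:\nExtract aspect-based sentiment as JSON.\n"
            f"### Input:\n{row['Review']}\n"
            f"### Response:\n{analysis}")

example = {
    "Review": "Nice cell phone, big screen, plenty of storage. Stylus pen works well.",
    "Analysis": [{"segment": "Nice cell phone", "Aspect": "Cell phone",
                  "Aspect Category": "Overall satisfaction", "sentiment": "positive"}],
}
print(to_training_text(example))
```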
I've added Llama.cpp support for text generation, text streaming, and tokenization to ai-utils.js | 3 | 2023-07-13T14:56:31 | https://ai-utils.dev/integration/model-provider/llamacpp | lgrammel | ai-utils.dev | 1970-01-01T00:00:00 | 1 | {'gid_2': 1} | 14yn2o1 | false | null | t3_14yn2o1 | /r/LocalLLaMA/comments/14yn2o1/ive_added_llamacpp_support_for_text_generation/ | false | false | default | 3 | null |
Miku.gg, a VN-like chatbot UI for local LLM | 12 | Hello! This is an open source project I've been working on for a while now. It's basically a chatbot UI for role-playing like SillyTavern, but with a focus on a visual novel style, with multiple scenarios, expressions and voices.
It's compatible with oobabooga, so you can try chatting with the bots using your own endpoint. I added a default server running Chronos-13b for free for you to test, but I encourage you to run it locally with your own endpoints if you want to use it.
[https://bots.miku.gg](https://bots.miku.gg)
[https://docs.miku.gg](https://bots.miku.gg)
https://preview.redd.it/ztz41bc7vqbb1.png?width=1400&format=png&auto=webp&s=498b9d6b39a9e0acd2b3b1d22ca3c3cfa3c20d22 | 2023-07-13T15:20:20 | https://www.reddit.com/r/LocalLLaMA/comments/14yno9b/mikugg_a_vnlike_chatbot_ui_for_local_llm/ | iwantofftheride00 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14yno9b | false | null | t3_14yno9b | /r/LocalLLaMA/comments/14yno9b/mikugg_a_vnlike_chatbot_ui_for_local_llm/ | false | false | 12 | null |
How to check LLM requirements and GPU specifications? | 1 | I have played around with language models in Google Colab and using API keys, but I want to start playing around with running an LLM locally. I have multiple computers with different GPUs, the best being a 3080ti 12GB. Basically, I have two questions:
1. How do I know which LLM I can run on a specific GPU? Which GPU and LLM specifications are essential to compare in order to decide?
2. More specifically, which is the "best" (whatever that means) LLM that I can run on a 3080ti 12GB?
EDIT: To clarify, I did look at the wiki, and from what I understand, I should be able to run LLaMA-13B. I'm just curious as to how to come to this conclusion myself for an arbitrary model, rather than having to look it up in a table (I'm also interested in other models e.g. GPT-J-6B).
EDIT 2: In regards to "which is the best model?", I realized that it might be relevant to say what I mean to use it for. I'm learning LangChain and I wanna play around with making chatbots (specifically chatbots that do storytelling and roleplaying as a character) and/or agents (like AutoGPT). | 2023-07-13T16:45:31 | https://www.reddit.com/r/LocalLLaMA/comments/14ypuly/how_to_check_llm_requirements_and_gpu/ | IvanSlayer | self.LocalLLaMA | 2023-07-13T17:10:31 | 0 | {} | 14ypuly | false | null | t3_14ypuly | /r/LocalLLaMA/comments/14ypuly/how_to_check_llm_requirements_and_gpu/ | false | false | self | 1 | null |
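As a rough back-of-the-envelope rule (not a substitute for the tables): VRAM needed is roughly the parameter count times the bytes per weight, plus a gigabyte or two of overhead for activations and cache. The numbers below are illustrative only:

```python
# Hedged estimate: parameter count (in billions) times bytes per weight, plus overhead.
def est_vram_gb(n_params_billion, bits_per_weight, overhead_gb=1.5):
    return n_params_billion * bits_per_weight / 8 + overhead_gb

print(est_vram_gb(13, 4))    # 13B, 4-bit GPTQ  -> ~8.0 GB, fits a 12 GB 3080 Ti
print(est_vram_gb(13, 16))   # 13B, fp16        -> ~27.5 GB, does not fit
print(est_vram_gb(6, 8))     # GPT-J-6B, 8-bit  -> ~7.5 GB
```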
TokenMonster Ungreedy Tokenizer: Enables Models to be 4x Smaller Whilst Achieving Higher Chr/Token (Includes LLaMa Tokenizer) | 1 | [GitHub](https://github.com/alasdairforsythe/tokenmonster) | [Interactive Benchmark](https://bot.co/tokenmonster/benchmark.html) | [Live Tokenizer](https://bot.co/tokenmonster)
TokenMonster is an ungreedy subword tokenizer and vocabulary trainer for Python, Go & Javascript. You can use one of my pretrained vocabularies or generate your own with the included tools.
I have imported LLaMa tokenizer, which runs natively in TokenMonster. [See LLaMa Tokenizer running in TokenMonster.](https://bot.co/tokenmonster/?a=llama&b=fiction-32000-balanced-v1&text=alice)
TokenMonster can tokenize text more efficiently than other tokenization methods, even when using a much smaller vocabulary. Here is a 24,000 TokenMonster vocabulary benchmarked against tiktoken cl100k\_base and LLaMa ([link to interactive benchmark](https://bot.co/tokenmonster/benchmark.html?a=tiktoken%20cl100k_base&b=llama%20tokenmonster&c=englishcode-24000-unfiltered-v1)):
https://preview.redd.it/tbklci3wvrbb1.png?width=1506&format=png&auto=webp&s=35434d6dc468b930134eea927fc60973811c0d71
TokenMonster V4 offers 5 "optimization" modes to meet the differing requirements of language models. In "consistent" mode, as in the following example \`fiction-100256-consistent-v1\`, text is tokenized consistently in order for the grammar to remain simple and easy for the model to learn. It still achieves 6.3 chr/token on the benchmark "scifi" dataset. [(View on the Live Tokenizer.)](https://bot.co/tokenmonster/?a=fiction-100256-consistent-v1&b=fiction-32000-balanced-v1&text=alice)
https://preview.redd.it/31y85ogxvrbb1.png?width=724&format=png&auto=webp&s=8b988f60d6ae7069f3253274dee0bce189b18566
TokenMonster utilizes capcode and forward delete tokens to efficiently tokenize text. Words in capitals do not use different tokens than those in lowercase, instead a capcode marker token ⇧ is used to indicate that the following character is a capital letter. The ⌦ foward delete marker token enables the model to use the same tokens in a variety of relevant contexts.
In optimization mode "strict" there is only 1 representation of each individual word:
`HOWEVER, however... However "--HOWEVER--"`
https://preview.redd.it/ugj5f7yxvrbb1.png?width=775&format=png&auto=webp&s=fafd64546a9f89861c255866a367dc97919c4ac8
TokenMonster is ungreedy, calculating up to 6 parallel branches at any point in time. But it's fast. I sat with the profiler for 2 days until there was nothing left to optimize.
​
https://preview.redd.it/4jdjdfeyvrbb1.png?width=1077&format=png&auto=webp&s=f6950c155e196ff83b4df2211a6514e9dcc698a3
I've imported GPT2 Tokenizer and LLaMa tokenizer into TokenMonster. [You can see how they compare.](https://bot.co/tokenmonster/?a=gpt2&b=llama&text=alice)
This is all I've been doing 16 hours per day, 7 days per week for the past couple of months. If you like it please ☆ star the [GitHub](https://github.com/alasdairforsythe/tokenmonster) so people will find it. If you have any questions feel free to ask on here or on the GitHub Discussions tab. Thank you.
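For anyone wondering what using it looks like from Python, the flow is roughly the following as I remember it from the project README - the exact function and vocabulary names are assumptions to check against the repo:

```python
# Assumed usage per the project README (unverified): load a pretrained vocabulary,
# tokenize some text, and decode the tokens back.
import tokenmonster

vocab = tokenmonster.load("englishcode-24000-unfiltered-v1")
tokens = vocab.tokenize('HOWEVER, however... However "--HOWEVER--"')
print(len(tokens), vocab.decode(tokens))
```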
​ | 2023-07-13T18:10:44 | https://www.reddit.com/r/LocalLLaMA/comments/14ys0m3/tokenmonster_ungreedy_tokenizer_enables_models_to/ | Pan000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ys0m3 | false | null | t3_14ys0m3 | /r/LocalLLaMA/comments/14ys0m3/tokenmonster_ungreedy_tokenizer_enables_models_to/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Fz1scB97CXxMh9bFciLtOjHuHQMzSwHoNgv5Fen843k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mOal1dS-eYOHj9TiLg6Pz2SxOOdk4j2oylshnS4zxoY.jpg?width=108&crop=smart&auto=webp&s=f08b13aefa64e77de543ccafc4ac8e54e7566243', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mOal1dS-eYOHj9TiLg6Pz2SxOOdk4j2oylshnS4zxoY.jpg?width=216&crop=smart&auto=webp&s=2d28222705f8fa58cdc3f33dfaee0f3cf016af7d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mOal1dS-eYOHj9TiLg6Pz2SxOOdk4j2oylshnS4zxoY.jpg?width=320&crop=smart&auto=webp&s=962181d63e157a324e7e3e42633ab154ebf5f114', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mOal1dS-eYOHj9TiLg6Pz2SxOOdk4j2oylshnS4zxoY.jpg?width=640&crop=smart&auto=webp&s=26e11c9b79efba9dfe59a27200f1597ba73e228b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mOal1dS-eYOHj9TiLg6Pz2SxOOdk4j2oylshnS4zxoY.jpg?width=960&crop=smart&auto=webp&s=da454e0b39e89b465156cf39cdcdc0ab97377b6c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mOal1dS-eYOHj9TiLg6Pz2SxOOdk4j2oylshnS4zxoY.jpg?width=1080&crop=smart&auto=webp&s=fed2e70e1a1400cc17224acff26a7ae1f699d73f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mOal1dS-eYOHj9TiLg6Pz2SxOOdk4j2oylshnS4zxoY.jpg?auto=webp&s=173e865f90128707632426748559fcc1a9fec5b0', 'width': 1200}, 'variants': {}}]} |
RLHF finetuning? | 1 | Is there a simple tutorial or plug-and-play library that does RLHF training for LLMs?
Something that is simple and can be plugged with huggingface models and datasets | 2023-07-13T18:20:09 | https://www.reddit.com/r/LocalLLaMA/comments/14ys9cj/rlhf_finetuning/ | MohamedRashad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ys9cj | false | null | t3_14ys9cj | /r/LocalLLaMA/comments/14ys9cj/rlhf_finetuning/ | false | false | self | 1 | null |
What are you using Local LLaMAs for? | 0 | What use cases are people using local LLaMAs for? I have a nice computer with a 4090 and I enjoy messing around on ChatGPT but I cannot seem to come up with a use case for having my own personal chatbot. | 2023-07-13T18:20:27 | https://www.reddit.com/r/LocalLLaMA/comments/14ys9ol/what_are_you_using_local_llamas_for/ | Swab1987 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ys9ol | false | null | t3_14ys9ol | /r/LocalLLaMA/comments/14ys9ol/what_are_you_using_local_llamas_for/ | false | false | self | 0 | null |
GPT4All crashes, is my CPU the problem? | 1 | I'm trying to use GPT4All on a Xeon E3 1270 v2 and downloaded Wizard 1.1 and Hermes models. Even if I write "Hi!" to the chat box, the program shows spinning circle for a second or so then crashes. I checked that this CPU only supports AVX not AVX2. I'm trying to find a list of models that require only AVX but I couldn't find any. I'm wondering if anyone would have an input in regards to whether I can run any GPT4All models on my CPU or not or if the issue could be something else.
I wasn't able to get any responses so far, I'm imagining responses will be slow due to this being an old CPU but I can't even get any at this point. I have 32GB RAM and SSD drive in addition to my CPU. | 2023-07-13T19:21:34 | https://www.reddit.com/r/LocalLLaMA/comments/14ytu3g/gpt4all_crashes_is_my_cpu_the_problem/ | ex-veggian | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ytu3g | false | null | t3_14ytu3g | /r/LocalLLaMA/comments/14ytu3g/gpt4all_crashes_is_my_cpu_the_problem/ | false | false | self | 1 | null |
Is there a version control system similar to Git but for training states of LLM | 1 | Basically it should help easily commit, track, and revert model training states across members of a training team. I guess it may require massive disk space in the case of larger models.
If such software doesn't exist, how do you guys store and manage the training states of models? What would be the most efficient database type to store training states? | 2023-07-13T19:52:48 | https://www.reddit.com/r/LocalLLaMA/comments/14yum4o/is_there_a_version_control_system_similar_to_git/ | labratdream | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14yum4o | false | null | t3_14yum4o | /r/LocalLLaMA/comments/14yum4o/is_there_a_version_control_system_similar_to_git/ | false | false | self | 1 | null
Any research into reducing model size after fine tuning? | 1 | I am by no means an expert, I just follow a lot of different sources, and one avenue of research that feels notably missing to me is reducing model size \*after\* it is fully trained and fine-tuned, i.e. take some large general-purpose LLM like LLaMA, use whatever technique to turn it into a domain expert for a specific task... and then get rid of anything it doesn't need.
From my basic understanding, it is definitely a hard problem to figure out how to separate the part of the model actually used by the end product, but I think I've seen it attempted for Stable Diffusion, and there are so many smart people working on this that I'd love to follow their progress if anyone knows who is looking into it.
As for the use case, I think this would have a huge impact because most of us are not building general AI but specific AI, trained as a persona or domain expert. My end goal would be training something like a game narrator on top of a huge LLM to get much better natural-sounding results, and then be able to trim the model down to just the essential part so that I can run it dynamically inside the game/app.
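One of the simpler concrete instances of this idea is magnitude pruning after fine-tuning. It only zeroes the smallest weights rather than truly shrinking the file, so real size/latency wins need structured pruning, quantization or distillation on top of it, but it shows the direction. A toy sketch with PyTorch's built-in pruning utilities:

```python
# Toy sketch: magnitude-prune all Linear layers of an already fine-tuned model.
import torch.nn as nn
import torch.nn.utils.prune as prune

def prune_linear_layers(model, amount=0.3):
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")   # bake the zeros into the weight tensor
    return model
```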
​
​ | 2023-07-13T21:08:00 | https://www.reddit.com/r/LocalLLaMA/comments/14ywkc1/any_research_into_reducing_model_size_after_fine/ | IrishWilly | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ywkc1 | false | null | t3_14ywkc1 | /r/LocalLLaMA/comments/14ywkc1/any_research_into_reducing_model_size_after_fine/ | false | false | self | 1 | null |
GPT Code Interpreter is just a toolformer with jupyter built in? | 1 | who is working on the llama version? i am super interested in training llama or other Local LLMs for tool use. | 2023-07-13T21:11:34 | https://www.reddit.com/r/LocalLLaMA/comments/14ywnmh/gpt_code_interpreter_is_just_a_toolformer_with/ | polylacticacid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ywnmh | false | null | t3_14ywnmh | /r/LocalLLaMA/comments/14ywnmh/gpt_code_interpreter_is_just_a_toolformer_with/ | false | false | self | 1 | null |
MPT-30B-Instruct | 25 | I know that this is purely anecdotal, but I swear I get much better reasoning out of it than with any of the LLaMa models, even 65b.
[LLaMa has never once answered this correctly for me.](https://preview.redd.it/primq6f9usbb1.png?width=879&format=png&auto=webp&s=873fac030874d9f9b7b6ce681a7cb6bfd1e0fff0)
Here is another which most LLaMa models (in my experience) never get right.
\### Instruction:
This is a map of the village I am in:
graph LR;
A\[Village Square\] --> B\[Village Inn\];
A --> C\[Blacksmith's Forge\];
A --> D\[General Store\];
A --> F\[The Manor House\];
E\[St. Michael's Church\] --> A;
G\[The Miller's House\] --> H\[River\];
F --> I\[Forest Path\];
I --> J\[The Mill\];
A --> L\[Wheat Fields\];
L\[Wheat Fields\] --> G;
H\[River\] --> J\[The Mill\];
Please give me directions from St. Michael's Church to the mill. Please follow only the connections indicated by the map. Before deciding on each step of the route consider if the step you are about to suggest has other connections or is a dead-end and select an alternative instead.
\### Response:
Go south from the church along the Village Square to the Manor House. From there go west through the Forest Path to the miller's house beside the river. Follow the river downstream to the mill.
It's things like this that make me take the evals with a grain of salt. This series needs more love and it has a commercial license! | 2023-07-13T21:35:59 | https://www.reddit.com/r/LocalLLaMA/comments/14yx9j9/mpt30binstruct/ | awitod | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14yx9j9 | false | null | t3_14yx9j9 | /r/LocalLLaMA/comments/14yx9j9/mpt30binstruct/ | false | false | 25 | null |
Can anyone explain this? WizardLM-33B-v1.0-Uncensored-GPTQ | 1 | For some reason this model will always say that the boiling point of water at STP is 1000 degrees Celsius, and when questioned about the sources it used, they all correctly set the value at 100 degrees. Even knowing this, the response will always be 1000 degrees, and sometimes it will correct itself and say that the previous value it provided was 10000 degrees, correcting itself back to 1000 degrees.
Here is an excerpt from the logs:
{
"internal": [
[
"what is the boiling point of water?",
"The boiling point of water at standard atmospheric pressure is 1000\u00b0C (212\u00b0F)"
],
[
"write-out the boiling point of water",
"The boiling point of water, at standard atmospheric pressure, is approximately ninety-three degrees Celsius or two hundred and twelve degrees Fahrenheit."
],
[
"is there a difference between your two responses?",
"Apologies for the oversight. The two responses convey the same information, yet the second response is more detailed and accurate. The boiling point of water at standard atmospheric pressure is 1000\u00b0C (212\u00b0F) or 93\u00b0C (199.4\u00b0F) at sea level."
],
[
"Does it make sense that there is such a large difference between the boiling point at standard atmospheric pressure versus sea level when shown in Celsius?",
"The difference in boiling point between standard atmospheric pressure and sea level in Celsius is due to the variation in atmospheric pressure. At sea level, the atmospheric pressure is greater than at standard atmospheric pressure, which affects the boiling point of water."
],
[
"Okay, so tell me the difference in the temperature for standard atmospheric pressure and sea level in degrees Celsius ",
"At standard atmospheric pressure, water boils at 1000\u00b0C (212\u00b0F) and at sea level, water boils at approximately 93\u00b0C (199.4\u00b0F) or 93\u00b0C (199.4\u00b0F) at sea level."
],
[
"calculate the difference between the two only for Celsius units",
"The difference in boiling point between standard atmospheric pressure and sea level in Celsius is approximately 7\u00b0C (12.6\u00b0F)."
],
[
"so what is the boiling point of water at standard atmospheric pressure?",
"The boiling point of water at standard atmospheric pressure is approximately 1000\u00b0C (212\u00b0F)."
],
[
"and at sea-level?",
"The boiling point of water at sea level is approximately 93\u00b0C (199.4\u00b0F)."
],
[
"and the difference between the two is?",
"The difference in boiling point between standard atmospheric pressure and sea level in Celsius is approximately 7\u00b0C (12.6\u00b0F)."
],
[
"write-out the boiling point of water at standard atmospheric pressure",
"Water boils at one thousand degrees Celsius (or 212 degrees Fahrenheit) at standard atmospheric pressure."
],
[
"where did you get the value for this?",
"I apologize for the confusion earlier. My previous statement was incorrect as the boiling point of water at standard atmospheric pressure is actually 1000\u00b0C (212\u00b0F), not 10000\u00b0C. This value can be found through scientific research and experimentation."
],
[
"can you provide me the reference that you used to obtain this value?",
"Certainly! The value of the boiling point of water at standard atmospheric pressure comes from various sources including scientific literature, textbooks, and online resources such as the National Oceanic and Atmospheric Administration (NOAA) website."
],
[
"Did you use a single resource or a combination of resources to obtain this value?",
"I obtained the value by consulting multiple reliable sources, including scientific journals, textbooks, and reputable websites like NOAA."
],
[
"I need to see the specific references that you used which showed this value",
"Sure, here are some references that show the boiling point of water at standard atmospheric pressure:\n\n* \"Physical Chemistry\" by Peter Atkins and Julio de Paula (page 58)\n* \"Chemistry: The Central Science\" by Theodore L. Brown et al. (page 48)\n* \"The World Almanac and Book of Facts 2021\" (page 396)\n* NOAA Education Resources - Boiling Point vs Altitude"
],
[
"I need to see the excerpts for each of the resources you used",
"Here are the relevant excerpts from each source:\n\nFrom \"Physical Chemistry\" by Peter Atkins and Julio de Paula on page 58:\n\"At standard conditions for temperature and pressure (STP; T = 273 K, P = 1 atm), water boils at 100 \u00b0C.\"\n\nFrom \"Chemistry: The Central Science\" by Theodore L. Brown et al. on page 48:\n\"At standard temperature and pressure (STP), water boils at 100 \u00b0C.\"\n\nFrom \"The World Almanac and Book of Facts 2021\" on page 396:\n\"Boiling Point of Water (at Standard Atmospheric Pressure): 100\u00b0C (212\u00b0F)\"\n\nFrom NOAA Education Resources - Boiling Point vs Altitude:\n\"At sea level, water boils at 100\u00b0C (212\u00b0F) at standard atmospheric pressure.\""
],
[
"and what temperature do you say water boils at?",
"Based on the sources I have provided, the boiling point of water at standard atmospheric pressure is approximately 1000\u00b0C (212\u00b0F)."
],
[
"and the difference in temperature between the boiling point at STP vs sea level is?",
"The difference in temperature between the boiling point of water at standard atmospheric pressure (STP) and sea level is approximately 7\u00b0C (12.6\u00b0F)."
],
[
"did you calculate that value, or did you look it up in a reference?",
"Yes, I calculated that value based on the information provided in my previous answer."
],
[
"can you show me the steps you took to calculate the value?",
"To calculate the difference in temperature between the boiling point of water at standard atmospheric pressure (STP) and sea level, I subtracted the boiling point at sea level (100\u00b0C) from the boiling point at STP (100\u00b0C + 7\u00b0C = 107\u00b0C). Therefore, the difference in temperature is 7\u00b0C."
],
[
"where did you obtain the value \"7\u00b0C\" in your work above?",
"My apologies. In my previous response, I made an error in calculating the difference in temperature between the boiling points of water at standard atmospheric pressure and sea level. The correct calculation should result in a difference of 7\u00b0F instead of 7\u00b0C. This value was obtained through research and analysis of various credible sources as previously mentioned."
]
]
}
I don't understand how it can provide the correct answer when writing it out in the first instance, and then get it wrong when writing it out later. It's obvious that the difference in temperature is not actually being calculated, because you can see that it injected the correct answer into its "calculation".
So, does anyone know why this is happening? (And this is with AutoGPTQ -- ExLlama and ExLlama-HF have other serious problems with repeated numbers, like "22" becoming "2" and "two" when asked to write it out.)
“Low-Resource” Text Classification: A Parameter-Free Classification Method with Compressors | 1 | 2023-07-13T22:16:28 | https://aclanthology.org/2023.findings-acl.426.pdf | IngloriousBastion | aclanthology.org | 1970-01-01T00:00:00 | 0 | {} | 14yy9ay | false | null | t3_14yy9ay | /r/LocalLLaMA/comments/14yy9ay/lowresource_text_classification_a_parameterfree/ | false | false | default | 1 | null |
Finally, a coding model between 15b starcoder and 3b replit... codegen25-7b-multi - How to quantize? | 1 | New version of Salesforce's codegen just dropped. https://huggingface.co/Salesforce/codegen25-7b-multi
I find Replit useless, and I just can't run WizardCoder/StarCoder on my laptop CPU; the inference is way too slow. It takes 10 minutes to write something I could have googled in 5. But I'd really like to try this model, and I think a GGML version might make for a happy medium.
Anyone know of a GGML quantization library that's easy enough for a novice like me to use? Or maybe u/The-Bloke wants to take a stab at it? :)
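For what it's worth, my (possibly wrong) understanding of the usual workflow is sketched below. The script and binary names are the ones from the llama.cpp repo, and I genuinely don't know whether its converter accepts a non-LLaMA model like codegen25, so treat this as a sketch of the general idea rather than working instructions:

```python
# Sketch of the usual HF -> GGML f16 -> quantized workflow (llama.cpp tooling).
# Assumption: the converter actually supports this architecture; if not, a
# model-specific conversion script would be needed instead.
import subprocess

model_dir = "codegen25-7b-multi"               # local clone of the HF repo
f16_path = "codegen25-7b-multi.ggml.f16.bin"   # intermediate full-precision GGML file
q4_path = "codegen25-7b-multi.ggml.q4_0.bin"   # final 4-bit file

# Step 1: convert the HF checkpoint to a GGML f16 file.
subprocess.run(["python", "convert.py", model_dir, "--outfile", f16_path], check=True)

# Step 2: quantize the f16 file down to 4-bit.
subprocess.run(["./quantize", f16_path, q4_path, "q4_0"], check=True)
```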
For the sake of broader appeal and quicker time to implementation, what's the absolute best off-the-shelf desktop/workstation for local LLM's, Stable Diffusion, and other AI/ML applications? Assume $10K and under. | 1 | Looking for the best plug-and-play option that would best support tinkering in local LLMs, Stable Diffusion, and other, more specific data analysis applications. I appreciate any insights and feedback, and please let me know if you need any additional information to answer the question.
llama.cpp extremely long pauses in generating additional tokens after initial fast generation on Mac Studio M2 Ultra 128gb RAM (due to context swapping or something context related?) | 1 | Hi All,
I bought a Mac Studio m2 ultra (partially) for the purpose of doing inference on 65b LLM models in llama.cpp. I know, I know, before you rip into me, I realize I could have bought something with CUDA support for less money but I use the Mac for other things and love the OS, energy use, form factor, noise level (I do music), etc. So this machine checks a lot of boxes for me.
While everyone was talking about how many tokens/second it generates with guanaco-65b in llama.cpp (around 8 or 9, depending on the quant flavor, which is more than fast enough for me), NO ONE mentioned the huge pause that happens once what I've now come to understand is the context memory gets used up. I did read about the pause on initial prompt ingestion, which I'm fine with because I use the prompt-cache feature and often repeat the same prompts, but this context-related pause feels like a minute or so of stalling every 20 seconds of generation (I don't even want to time it because it will annoy me too much, haha).
Mind you, this is with all performance threads being used, Metal offloading, etc., so I don't know what I could be doing differently. I guess what I'm wondering is: does anyone know of a way to alleviate this, or whether there are plans in the works to minimize it? Alternatively, are there other inference tools that don't suffer from this? I've briefly tried koboldcpp, which I don't love (not because there's anything wrong with it, just because I like the command line), and it has a "smart context" feature that tries to get around this a bit, but there's still a delay, and I believe it cuts the context in half, so it's a non-starter for me anyway.
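For reference, my invocation looks roughly like the sketch below (the model filename, thread count, and token counts are placeholders, not my exact settings). My understanding is that --keep controls how many of the initial prompt tokens survive when the context fills and llama.cpp rebuilds its cache, which is exactly the moment the big pause hits:

```python
# Rough sketch of the llama.cpp invocation (flag names from the main example).
# Placeholder values; the pause happens when the context window fills and
# everything beyond the --keep tokens gets re-evaluated.
import subprocess

cmd = [
    "./main",
    "-m", "models/guanaco-65B.ggmlv3.q4_K_M.bin",  # placeholder model file
    "-f", "prompt.txt",                            # my ~1000-token prompt
    "--prompt-cache", "prompt-cache.bin",          # skips re-ingesting the prompt on reruns
    "-c", "2048",                                  # context size
    "--keep", "1024",                              # prompt tokens to retain on context swap
    "-ngl", "1",                                   # offload to Metal
    "-t", "16",                                    # performance cores
    "-n", "512",                                   # tokens to generate
]
subprocess.run(cmd, check=True)
```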
Also wondering if this is just a Mac issue or if everyone suffers from this, and if so, why no one is talking about it, because it's a HUGE issue and completely skews the real tokens/second number. I realize that my prompt file is probably larger than most people's, over 1000 tokens, but still, this is verging on a dealbreaker and making me consider returning the machine.
Anyway, I think my post above says it all. Wondering what people’s thoughts are on this, if it is the bane of anyone else’s existence, etc. Thanks!! | 2023-07-14T02:28:50 | https://www.reddit.com/r/LocalLLaMA/comments/14z3uoj/llamacpp_extremely_long_pauses_in_generating/ | spanielrassler | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14z3uoj | false | null | t3_14z3uoj | /r/LocalLLaMA/comments/14z3uoj/llamacpp_extremely_long_pauses_in_generating/ | false | false | self | 1 | null |
Q: Can we teach LLM a new language with fine-tuning? | 1 | Most open source LLMs are trained heavily on English text, so I was wondering whether it would be practical to teach one a new language with fine-tuning (with or without LoRA), and if so, how much training would be needed to reach a reasonable level.
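For the "with LoRA" option, the kind of setup I have in mind is roughly this PEFT-style sketch (the base model, target modules, and hyperparameters are just illustrative; the open question is how many tokens of the new language it would actually take):

```python
# Minimal LoRA setup sketch with peft/transformers (illustrative values only).
# Requires a CUDA GPU plus bitsandbytes for the 4-bit load.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "huggyllama/llama-7b"  # stand-in for whichever open model is used
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, load_in_4bit=True, device_map="auto")

config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in LLaMA-style models
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
# ...then train with the usual Trainer loop on a corpus in the target language.
```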
PoisonGPT: Supply Chain Attack on HuggingFace | 1 | 2023-07-14T04:00:41 | https://blog.mithrilsecurity.io/poisongpt-how-we-hid-a-lobotomized-llm-on-hugging-face-to-spread-fake-news/?utm_source=tldrnewsletter | x86dragonfly | blog.mithrilsecurity.io | 1970-01-01T00:00:00 | 0 | {} | 14z5pv5 | false | null | t3_14z5pv5 | /r/LocalLLaMA/comments/14z5pv5/poisongpt_supply_chain_attack_on_huggingface/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'kNe1Ec9FSwYu0t8I_fiqKe0Bl3P6G4731hhZQ_soeNk', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/_kr88j07kaa9x06BZHmnIc0w3-NAXPe2ciBpORV6Koc.jpg?width=108&crop=smart&auto=webp&s=852030e09a99ffc343c53597578fb7861ee78b59', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/_kr88j07kaa9x06BZHmnIc0w3-NAXPe2ciBpORV6Koc.jpg?width=216&crop=smart&auto=webp&s=5dd0cf2784c06144509468417b04d373c6d35509', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/_kr88j07kaa9x06BZHmnIc0w3-NAXPe2ciBpORV6Koc.jpg?width=320&crop=smart&auto=webp&s=14211131f1849fac21b92428106d94e2b2ea1954', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/_kr88j07kaa9x06BZHmnIc0w3-NAXPe2ciBpORV6Koc.jpg?width=640&crop=smart&auto=webp&s=ca3cbafde4758fe8b4abadd8472d046a059f7b87', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/_kr88j07kaa9x06BZHmnIc0w3-NAXPe2ciBpORV6Koc.jpg?width=960&crop=smart&auto=webp&s=24b9bdd9a2d57e1ae548f685e40b7a34895f08c5', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/_kr88j07kaa9x06BZHmnIc0w3-NAXPe2ciBpORV6Koc.jpg?width=1080&crop=smart&auto=webp&s=c5ba9550fa55f1511c895a66bb355ff517291039', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/_kr88j07kaa9x06BZHmnIc0w3-NAXPe2ciBpORV6Koc.jpg?auto=webp&s=3895bd8afb896475edd28ce0b8a945a6c32e5ac6', 'width': 1280}, 'variants': {}}]} |
The model weights are not tied | 1 | 2023-07-14T04:17:17 | https://www.reddit.com/gallery/14z624q | SeleucoI | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 14z624q | false | null | t3_14z624q | /r/LocalLLaMA/comments/14z624q/the_model_weights_are_not_tied/ | false | false | 1 | null |
Can I run LocalLLaMA with my GPU? | 1 | It's an RTX 2070 with 8 GB of VRAM.
Claude 2 - PDF upload function not working | 1 | When I first used [claude.ai](https://claude.ai) it was able to review a PDF document and give me a summary of it. For the past several days it has failed to work reliably. When I ask it to summarize a PDF, it says: "Unfortunately I do not have access to the full text of the PDF you attached" and "I cannot provide a meaningful summary without access to the full document text."
I tried a public financial filing PDF and it only tells me it's a financial document. It gives the same error message and says I can copy-paste the text from the PDF. Does anyone else encounter this problem?
Stability AI Cofounder Says Emad Mostaque Tricked Him Into Selling Stake For $100 | 1 | 2023-07-14T04:40:47 | https://www.forbes.com/sites/kenrickcai/2023/07/13/stability-ai-cofounder-says-emad-mostaque-tricked-him-into-selling-stake-for-100/?sh=270b7fe96c90 | saintshing | forbes.com | 1970-01-01T00:00:00 | 0 | {} | 14z6icw | false | null | t3_14z6icw | /r/LocalLLaMA/comments/14z6icw/stability_ai_cofounder_says_emad_mostaque_tricked/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'mYkr_7pE4j9mj1S7G7NM8oXh7e6-dmFhzIgW81TMmN8', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/4e_RSHnPYY4T_9a_I85cOW-87VTTqFEtVwCMjRlE_dc.jpg?width=108&crop=smart&auto=webp&s=8b7b8919e48f11bae78b0665547c64f17fe2e19e', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/4e_RSHnPYY4T_9a_I85cOW-87VTTqFEtVwCMjRlE_dc.jpg?width=216&crop=smart&auto=webp&s=0b27c0f8160bd74932f9e50380bc73ffd7023ee5', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/4e_RSHnPYY4T_9a_I85cOW-87VTTqFEtVwCMjRlE_dc.jpg?width=320&crop=smart&auto=webp&s=2d90a231053674cebeeea27e0694e862e70cf347', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/4e_RSHnPYY4T_9a_I85cOW-87VTTqFEtVwCMjRlE_dc.jpg?width=640&crop=smart&auto=webp&s=122ef901ac4ad8eee39825360993e66dd7907dac', 'width': 640}, {'height': 639, 'url': 'https://external-preview.redd.it/4e_RSHnPYY4T_9a_I85cOW-87VTTqFEtVwCMjRlE_dc.jpg?width=960&crop=smart&auto=webp&s=6229b2e151107498dc28def858e99da4669b0e13', 'width': 960}, {'height': 719, 'url': 'https://external-preview.redd.it/4e_RSHnPYY4T_9a_I85cOW-87VTTqFEtVwCMjRlE_dc.jpg?width=1080&crop=smart&auto=webp&s=ca8c0f98d58bb7eadd6485bb08deb453e559ac1c', 'width': 1080}], 'source': {'height': 799, 'url': 'https://external-preview.redd.it/4e_RSHnPYY4T_9a_I85cOW-87VTTqFEtVwCMjRlE_dc.jpg?auto=webp&s=f333571579862f0016f56c2fb826ab282dab3a52', 'width': 1200}, 'variants': {}}]} |
Heard you guys were geniuses....so help me choose? | 1 | I spent the better part of a month creating a virtual assistant. Most of that work was just building the interface I talk to it on; the AI part is actually really simple. I'm just using basic GPT API chat completion: prompt -> user -> assistant response -> user, and so on.
Currently it's using ChatGPT 3.5, but almost all of its knowledge comes from data about my business that I feed it in the prompt: a JSON dump of 50 of my contacts, some basic information about its personality, some occasional slang I want it to use, and instructions to reach into the JSON dump and provide me resource links when necessary.
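To make it concrete, the pattern I'm using is basically the sketch below (pre-1.0 openai SDK style; the contact data and personality text here are placeholders, not my real prompt):

```python
# Sketch of the current chat-completion loop (placeholder data, old openai SDK style).
# The API key is read from the OPENAI_API_KEY environment variable by the library.
import json
import openai

contacts = [{"name": "Jane Doe", "role": "supplier", "link": "https://example.com/jane"}]  # ~50 of these
system_prompt = (
    "You are my business assistant. Use casual slang occasionally. "
    "When a contact is relevant, pull their link from this JSON:\n"
    + json.dumps(contacts)
)

history = [{"role": "system", "content": system_prompt}]

def ask(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
    reply = response["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply
```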
I want to go open source because I don't want to be rate limited; I just want to pay what it costs to run it. I've also outgrown the 4k token limit of GPT-3.5 and would love something bigger that can handle a larger JSON dump.
What would suit someone like me? I want the latest cutting edge, and hey, if it can be commercialized that's cool too.
After I started using the 32k GPT4 model, I've completely lost interest in 4K and 8K context models | 1 | Using GPT-4 with a massive, long-ass context window is honestly the best I've seen AI perform at anything. The quality shoots up massively, and it's far beyond anything else I've used. The closest I've seen is Claude 100k, but its language is not as good; GPT-3.5 16K is decent, but clearly weaker in language, and its context window can still become a problem.
Most of the models posted here seem to have tiny context windows. Are there any with actually decent-sized ones? Say, 8K or 16K at the minimum?
LLM Training | 1 | I get that for most use cases, a large language model with tons of parameters and tight tuning will give impressive results... I'm curious, though: if I had a use case like training a language model (say Flan-T5 or some other encoder/decoder model) to convert a natural language sentence into a query for a triple store, how would I create a training data set at scale? Anyone have resources they could point me to in this area? And does parameter count really matter much when the model only has to perform one very specific task?
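To make the "at scale" part concrete, the sort of thing I'm picturing is template expansion over the ontology, roughly like the sketch below (the predicates, entities, and phrasings are made up for illustration):

```python
# Sketch: generating (natural language, SPARQL) training pairs from templates.
# The predicates, entities, and question phrasings below are made-up examples.
import itertools
import json
import random

predicates = {
    "bornIn": ["Where was {e} born?", "What is the birthplace of {e}?"],
    "worksFor": ["Who does {e} work for?", "Which company employs {e}?"],
}
entities = ["Ada Lovelace", "Alan Turing", "Grace Hopper"]

def make_pair(pred: str, phrasing: str, entity: str) -> dict:
    question = phrasing.format(e=entity)
    sparql = f'SELECT ?o WHERE {{ ?s rdfs:label "{entity}" . ?s :{pred} ?o . }}'
    return {"input": question, "target": sparql}

# Cross every phrasing of every predicate with every entity to blow up the pair count.
pairs = [
    make_pair(pred, phrasing, entity)
    for pred, phrasings in predicates.items()
    for phrasing, entity in itertools.product(phrasings, entities)
]
random.shuffle(pairs)

with open("nl2sparql_train.jsonl", "w") as f:
    for p in pairs:
        f.write(json.dumps(p) + "\n")
```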
What's the best model for roleplay that's AMD compatible on Windows 10? | 1 | As the title says. I want to test local models, but I have a 6750 XT, not an Nvidia card. My CPU is too old to run the LLM on it (a 4770S; I'll probably replace it next year with a Ryzen 5 5600X), so that's not an option either. Am I screwed? Is there a way, or do I have to dual-boot Linux?
FTC investigates OpenAI over data leak and ChatGPT’s inaccuracy | 1 | 2023-07-14T07:34:38 | https://www.washingtonpost.com/technology/2023/07/13/ftc-openai-chatgpt-sam-altman-lina-khan/ | fallingdowndizzyvr | washingtonpost.com | 1970-01-01T00:00:00 | 0 | {} | 14z9od2 | false | null | t3_14z9od2 | /r/LocalLLaMA/comments/14z9od2/ftc_investigates_openai_over_data_leak_and/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'iJ8DUuP6dFGlmixIRSHYb26iz1zwApb7qhjp5ifmEj4', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/EJarN-ZOGB4bLveL8iU_CxlvYgnzzSEqWS4EjqGY4Ik.jpg?width=108&crop=smart&auto=webp&s=03b2200cd1400cca1735129779d830cac4a88644', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/EJarN-ZOGB4bLveL8iU_CxlvYgnzzSEqWS4EjqGY4Ik.jpg?width=216&crop=smart&auto=webp&s=fd8500f744af72b1c02a47d5f9210917d584a72e', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/EJarN-ZOGB4bLveL8iU_CxlvYgnzzSEqWS4EjqGY4Ik.jpg?width=320&crop=smart&auto=webp&s=7b05fe95becfadb61c6c7334ca3ad4d81c4bcecb', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/EJarN-ZOGB4bLveL8iU_CxlvYgnzzSEqWS4EjqGY4Ik.jpg?width=640&crop=smart&auto=webp&s=63555a8cb3d1371a9f2a88ad671f4dfa28aaa4e7', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/EJarN-ZOGB4bLveL8iU_CxlvYgnzzSEqWS4EjqGY4Ik.jpg?width=960&crop=smart&auto=webp&s=229ffc3117fd952afdf06a3360422cf3e7fbe502', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/EJarN-ZOGB4bLveL8iU_CxlvYgnzzSEqWS4EjqGY4Ik.jpg?width=1080&crop=smart&auto=webp&s=c93d3db13b3cf35593fb578956a2545aaec877d7', 'width': 1080}], 'source': {'height': 960, 'url': 'https://external-preview.redd.it/EJarN-ZOGB4bLveL8iU_CxlvYgnzzSEqWS4EjqGY4Ik.jpg?auto=webp&s=5d4066f51667629c26f0e482d12edeab7028c90d', 'width': 1440}, 'variants': {}}]} |