| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Stop AI from replying to itself? | 1 | Running koboldcpp with airoboros 13b, connected to sillytavern and the AI keeps replying to itself and completely ignoring whatever I type in. Anyone know how to fix this?
I’ve searched online but I can’t find anything about it. | 2023-07-30T10:57:22 | https://www.reddit.com/r/LocalLLaMA/comments/15dirvv/stop_ai_from_replying_to_itself/ | hippithoppitboppit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dirvv | false | null | t3_15dirvv | /r/LocalLLaMA/comments/15dirvv/stop_ai_from_replying_to_itself/ | false | false | self | 1 | null |
As LLM-Based app owners, how do you handle the combination of fixed monthly pricing for customers and the payment per tokens for external APIs (such as OpenAI)? | 1 | I'm working on my own AI-based app and started wondering, how do you guys keep your business profitable? I'm willing to charge my customers about $15-20 each month, but what if their usage in the app makes the token generation price higher than this original price they are paying?
Some solutions I've thought about are:
1. Limiting their daily/monthly usage.
2. Allowing them to insert their own API keys (and pay directly to the API provider) - which might make the product more expensive.
How do you manage this problem? | 2023-07-30T11:11:47 | https://www.reddit.com/r/LocalLLaMA/comments/15dj18g/as_llmbased_app_owners_how_do_you_handle_the/ | Motor-Performer-4967 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dj18g | false | null | t3_15dj18g | /r/LocalLLaMA/comments/15dj18g/as_llmbased_app_owners_how_do_you_handle_the/ | false | false | self | 1 | null |
Windows PC a lot slower than MacBook Pro 14? | 1 | I am not sure if this is some configuration issue or if I am doing something wrong, but my gaming PC seems a lot slower than my MacBook.
MacBook Pro 14 Inch M1 base model
Gaming PC:
- AMD Ryzen 7 1700X
- 32GB DDR4 3200 MHz RAM
- MSI GTX 1080
​
Could this be true, that my MacBook is a lot faster? I tried the Llama 2 q2_k and q4_1 and both performed substantially faster. Is the MacBook just better suited, or is there some error in my configuration?
​
|q2_k model|Time to first token [s]|GPU layers|CPU threads|Speed [tok/s]|
|:-|:-|:-|:-|:-|
|MacBook|5.28|Metal|4|26|
|Windows PC|8.89|15|8|4|
​ | 2023-07-30T11:23:11 | https://www.reddit.com/r/LocalLLaMA/comments/15dj894/windows_pc_a_lot_slower_then_macbook_pro_14/ | AntiqueSalamander310 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dj894 | false | null | t3_15dj894 | /r/LocalLLaMA/comments/15dj894/windows_pc_a_lot_slower_then_macbook_pro_14/ | false | false | self | 1 | null |
airo-llongma-2-13B-16k-GPTQ - 16K long context llama - works in 24GB VRAM | 1 | Just wanted to bring folks' attention to this model that has just been posted on HF. I've been waiting for a GPTQ model that has high-context Llama 2 "out of the box" and this looks promising:
[https://huggingface.co/kingbri/airo-llongma-2-13B-16k-GPTQ](https://huggingface.co/kingbri/airo-llongma-2-13B-16k-GPTQ)
I'm able to load it into the 24GB VRAM of my 3090, using exllama_hf. I've fed it about 10k context articles and managed to get responses. But it's not always responsive even using the Llama 2 instruct format. Anyone else have any experience getting something out of this model? | 2023-07-30T13:11:26 | https://www.reddit.com/r/LocalLLaMA/comments/15dla85/airollongma213b16kgptq_16k_long_context_llama/ | CasimirsBlake | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dla85 | false | null | t3_15dla85 | /r/LocalLLaMA/comments/15dla85/airollongma213b16kgptq_16k_long_context_llama/ | false | false | self | 1 |
Seeking Advice on Using an LLM for Document Interaction - Need Input Based on My Knowledge Only! | 1 | Hi everyone! 📚🤖
I'm looking for some guidance on utilizing an LLM to interact with my documents effectively. However, I have a unique requirement – I want the LLM to respond solely based on my knowledge and not its own. 🤔
I'd greatly appreciate any suggestions or insights you have on how to achieve this! Whether it's tweaking settings or using specific tools, Please share your experiences or thoughts. | 2023-07-30T13:12:36 | https://www.reddit.com/r/LocalLLaMA/comments/15dlb1p/seeking_advice_on_using_an_llm_for_document/ | Tight_Promise8668 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dlb1p | false | null | t3_15dlb1p | /r/LocalLLaMA/comments/15dlb1p/seeking_advice_on_using_an_llm_for_document/ | false | false | self | 1 | null |
Newb with a PowerEdge R710, 32GB RAM, 12-core. Can I play, too? | 1 | I really want to learn, and so I'm just blindly jumping in like a derp. To the gurus, can I run the 33B LLaMA model on a PowerEdge R710 with 32GB RAM and 12-cores? | 2023-07-30T14:25:22 | https://www.reddit.com/r/LocalLLaMA/comments/15dmw1d/newb_with_a_poweredge_r710_32gb_ram_12core_can_i/ | Actual-Bad5029 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dmw1d | false | null | t3_15dmw1d | /r/LocalLLaMA/comments/15dmw1d/newb_with_a_poweredge_r710_32gb_ram_12core_can_i/ | false | false | self | 1 | null |
An error occurred Model failed. Please try loading it again. | 1 | This often happens when giving a language model a longish prompt, getting a longish response and then briefly asking the AI to continue.
Why does this happen and how can we prevent this error? | 2023-07-30T15:26:53 | https://www.reddit.com/r/LocalLLaMA/comments/15doas2/an_error_occurred_model_failed_please_try_loading/ | innocuousAzureus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15doas2 | false | null | t3_15doas2 | /r/LocalLLaMA/comments/15doas2/an_error_occurred_model_failed_please_try_loading/ | false | false | self | 1 | null |
Can someone please clarify this to me: Are tools like LangChain interacting with the model (memory) directly, or it's all just prompting behind the scenes and filtering the results to only include the complete answer ? | 1 | **Context:** I'm playing with LLMs and I'm trying to implement stuff in pure CPP as much as possible.
I can't find a clear answer, and I lost hours going through source code.
I know you can load embeddings with a model and it treats that as a part of its model, but this does not make sense in the case of dynamic data, like things fetched from a vector DB.
**Basically the actual question is:** Can you feed dynamic data into the running model, at runtime, and directly in memory without prompting ?
I know you CAN do prompting "in the background", and parse it's outputs, but for a simple PC that does 5-6 t/s just with chat, doing additional inference will drop that resultant t/s way down below a usable level, so I'm looking for a best solution performance-wise here. | 2023-07-30T15:32:58 | https://www.reddit.com/r/LocalLLaMA/comments/15dog1h/can_someone_please_clarify_this_to_me_are_tools/ | staviq | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dog1h | false | null | t3_15dog1h | /r/LocalLLaMA/comments/15dog1h/can_someone_please_clarify_this_to_me_are_tools/ | false | false | self | 1 | null |
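For readers with the same question: frameworks in this space generally do not write anything into the model's memory at runtime; retrieved data reaches the model as extra prompt text, which is exactly why it costs an extra forward pass. A minimal sketch of that retrieve-then-prompt flow, assuming llama-cpp-python as the backend; `search_vector_db` and the model path are placeholders, not a real library API:

    from llama_cpp import Llama

    # Load the model once; n_ctx bounds how much retrieved text can be stuffed in.
    llm = Llama(model_path="./models/llama-2-13b-chat.ggmlv3.q4_0.bin", n_ctx=4096)

    def search_vector_db(query: str) -> list[str]:
        # Placeholder: return the top-k chunks from whatever vector store you use.
        return ["chunk one ...", "chunk two ..."]

    def answer(question: str) -> str:
        context = "\n".join(search_vector_db(question))
        prompt = (
            "Use only the context below to answer.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
        )
        out = llm(prompt, max_tokens=256)
        return out["choices"][0]["text"]

    print(answer("What does the document say about dynamic data?"))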
are there any finetuned chat models in the works? | 1 | not looking for roleplay, more like a more versatile model. a 13b model thats a mix of airoboros gpt4 1.4.1, guanaco, and nous hermes 2 but in a chat model | 2023-07-30T16:02:05 | https://www.reddit.com/r/LocalLLaMA/comments/15dp4kx/are_there_any_finetuned_chat_models_in_the_works/ | happydadinau | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dp4kx | false | null | t3_15dp4kx | /r/LocalLLaMA/comments/15dp4kx/are_there_any_finetuned_chat_models_in_the_works/ | false | false | self | 1 | null |
What's the best way to use a ChatAI to generate Stories? (a good storywriter) | 1 | Hey everyone!
I used ChatGPT a lot to write interesting and long Stories.
I gave ChatGPT instructions and told it what to write. After some sentences, ChatGPT had written a little part of the story. Then I gave it more instructions on how to continue this story. This way, I wrote a pretty cool story with three chapters, and it was consistent with the worldbuilding most of the time.
Is there a way to do the same using a LLaMA model?
Which one would be the best for it (uncensored preferred), and how do I set it up correctly?
Thank you for all your advice in advance
​ | 2023-07-30T16:57:26 | https://www.reddit.com/r/LocalLLaMA/comments/15dqfiy/whats_the_best_way_to_use_a_chatai_to_generate/ | ChrisX930 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dqfiy | false | null | t3_15dqfiy | /r/LocalLLaMA/comments/15dqfiy/whats_the_best_way_to_use_a_chatai_to_generate/ | false | false | self | 1 | null |
Is there any way to transfer reinforcement learning to newer foundation models? Suppose you train llama2 to play Minecraft and then llama3 comes out. Perhaps a MoE architecture? | 1 | I've been fascinated by people training LLMs to play games with reinforcement learning, but it seems like there is no good way to avoid starting over every time a better foundation model comes out.
Suppose the model has undergone multiple rounds of actor-critic training or PPO or RLHF, perhaps playing games like minecraft for several years. These techniques can't just cheaply be redone on a newer model via traditional backprop, as they have complex loss functions calculated by neural networks.
Is there any good way to transfer the knowledge? One thought I had was to make one model which has two experts, the old trained model and the new foundation model, and use something like a switch transformer to pick the best output. This will allow the old model to continue to contribute while the new model is trained.
This can be repeated indefinitely, you can just keep adding newer foundation models to the ensemble as they come out. | 2023-07-30T17:08:44 | https://www.reddit.com/r/LocalLLaMA/comments/15dqp5f/is_there_anyway_to_transfer_reinforcement/ | 30299578815310 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dqp5f | false | null | t3_15dqp5f | /r/LocalLLaMA/comments/15dqp5f/is_there_anyway_to_transfer_reinforcement/ | false | false | self | 1 | null |
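As a rough illustration of the two-expert idea described above, the routing piece can be as small as a learned gate that mixes the logits of the old RL-trained model and the new foundation model. Everything here is a toy sketch with made-up shapes, not a full MoE training recipe:

    import torch
    import torch.nn as nn

    class TwoExpertRouter(nn.Module):
        """Softmax gate over [old_expert, new_expert] logits."""
        def __init__(self, hidden_size: int):
            super().__init__()
            self.gate = nn.Linear(hidden_size, 2)

        def forward(self, features, old_logits, new_logits):
            # features: pooled input representation, shape (batch, hidden)
            w = torch.softmax(self.gate(features), dim=-1)          # (batch, 2)
            return (w[:, 0:1].unsqueeze(-1) * old_logits
                    + w[:, 1:2].unsqueeze(-1) * new_logits)         # (batch, seq, vocab)

    router = TwoExpertRouter(hidden_size=4096)
    feats = torch.randn(1, 4096)
    old_logits = torch.randn(1, 8, 32000)
    new_logits = torch.randn(1, 8, 32000)
    print(router(feats, old_logits, new_logits).shape)  # torch.Size([1, 8, 32000])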
[QUESTION] What prompt(s) to get results like Vectara and Perplexity? | 1 | Hi all,
I'm curious what prompt is or prompts are needed to get results like Vectara and Perplexity.
Example Vectara: [https://asknews.demo.vectara.com/?query=Should+AI+be+regulated%3F&filter=&language=eng](https://asknews.demo.vectara.com/?query=Should+AI+be+regulated%3F&filter=&language=eng)
Here you see a question asked to the system, then it returns ten search results with the most relevant sentence from that search result in bold, and then above the search results, the summarized answer with inline footnotes to the search results.
Example 2: [https://www.perplexity.ai/search?q=Should%20AI%20be%20regulated?&copilot=false&focus=internet](https://www.perplexity.ai/search?q=Should%20AI%20be%20regulated?&copilot=false&focus=internet)
Here you also see the same concept with search results and the final answer being a sort of summary of the search results with footnotes.
I look forward to hearing what you guys think. | 2023-07-30T17:46:49 | https://www.reddit.com/r/LocalLLaMA/comments/15drlx8/question_what_prompts_to_get_results_like_vectara/ | VividFee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15drlx8 | false | null | t3_15drlx8 | /r/LocalLLaMA/comments/15drlx8/question_what_prompts_to_get_results_like_vectara/ | false | false | self | 1 | null |
why is rotary embedding applied at each layer in roformer/llama? | 1 | [removed] | 2023-07-30T18:07:05 | https://www.reddit.com/r/LocalLLaMA/comments/15ds3l4/why_is_rotary_embedding_applied_at_each_layer_in/ | hx129 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ds3l4 | false | null | t3_15ds3l4 | /r/LocalLLaMA/comments/15ds3l4/why_is_rotary_embedding_applied_at_each_layer_in/ | false | false | self | 1 | null |
Trying to run StableBeluga-7B locally on AMD WSL | 1 | Hello, I've been following the installation guide as far as possible but I think I've gotten as far as I can without help. So:
- The oobabooga start_wsl.bat runs and I can connect, it does mention "bitsandbytes was compiled without GPU support" but I believe this relies on CUDA and therefore doesn't affect me trying to use an AMD GPU?
- I've downloaded the StableBeluga-7B model from within the web UI, but when I try to load it fails ending with the error "RuntimeError: No HIP GPUs are available".
- I tried using Bing Chat to help me through this and have managed to install amdgpu-dkms and ROCm SDK 5.4.3 on my WSL; oobabooga mentions it requires 5.4.2 or 5.4.3, so that should be OK?
What options am I missing or where can I point the web UI to correctly make use of my GPU? My card is a 5700XT so I'm wanting to use a 4bit quant I believe, hopefully someone knows what the hell I'm on about so I can get this chat working! | 2023-07-30T18:20:57 | https://www.reddit.com/r/LocalLLaMA/comments/15dsfbr/trying_to_run_stablebeluga7b_locally_on_amd_wsl/ | Hexaflex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dsfbr | false | null | t3_15dsfbr | /r/LocalLLaMA/comments/15dsfbr/trying_to_run_stablebeluga7b_locally_on_amd_wsl/ | false | false | self | 1 | null |
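One quick way to narrow this down is to check whether the Python environment the web UI runs in actually has a ROCm build of PyTorch that can see the card; if `torch.version.hip` is `None` or `is_available()` is `False`, the "No HIP GPUs are available" error is coming from PyTorch itself rather than from the model loader. A small check, assuming PyTorch is installed in that same environment:

    import torch

    print("HIP version:", torch.version.hip)          # None on CPU-only/CUDA builds
    print("GPU visible:", torch.cuda.is_available())  # ROCm reuses the cuda namespace
    if torch.cuda.is_available():
        print("Device:", torch.cuda.get_device_name(0))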
If you're using Bing Chat. You're getting the worst possible answer from a supposed Search Assistant. Here's why? | 1 | # Here is a critique and rating of the different AI responses, showcasing which model performed the worst and best in different categories:
|Model|Accuracy|Clarity|Conciseness|Helpfulness|
|:-|:-|:-|:-|:-|
|Bing Chat|3|4|5|2|
|Perplexity AI|5|5|3|5|
|Bard|4|5|4|4|
|Huggingface|5|5|4|5|
**Evaluation:**
* Accuracy: Perplexity AI and Huggingface provided the most accurate technical explanations of how tools like LangChain work. Bing Chat had some inaccuracies, while Bard was mostly correct but lacked some details.
* Clarity: All models except Bing Chat expressed themselves clearly and were easy to understand.
* Conciseness: Bing Chat was the most concise, while Perplexity AI provided a very detailed explanation but was less concise.
* Helpfulness: Perplexity AI and Huggingface gave the most helpful responses that directly answered the question. Bing Chat was the least helpful.
**Summary**:
Huggingface performed the best overall by providing an accurate, clear, reasonably concise, and very helpful response. It gave a detailed technical explanation while still being easy to understand.
* Perplexity AI also did very well, with an extremely accurate and helpful response, but was slightly less concise than ideal.
* Bard performed decently, giving a mostly correct response with good clarity, but lacked some important details.
* Bing Chat performed the worst - while concise, it had inaccuracies and lacked helpfulness.
So in conclusion, Huggingface demonstrated itself as the most capable model for providing knowledgeable, well-rounded explanations to technical questions like this. Perplexity AI also did great, with room for improvement on conciseness. Bard was decent, while Bing Chat clearly lagged behind the others.
​
# Prompt: (Sourced from a Random Reddit Post)
Can someone please clarify this to me: Are tools like LangChain interacting with the model (memory) directly, or it's all just prompting behind the scenes and filtering the results to only include the complete answer ?
Context: I'm playing with LLMs and I'm trying to understand the currently existing solutions for extending the model capabilities beyond the prompt and current context.
I can't find a clear answer, and I'm just trying to find out the basic flow of things.
I know you can load embeddings with a model and it treats that as a part of it's model, but this does not make sense in case of dynamic data, like things fetched from vector db.
Basically the actual question is: Can you feed dynamic data into the running model, at runtime, and directly in memory without prompting ?
I know you CAN do prompting "in the background", and parse it's outputs, but for a simple PC that does 5-6 t/s just with chat, doing additional inference will drop that resultant t/s way down below a usable level, so I'm looking for a best solution performance-wise here.
# Response: (with Internet Access)
​
>**Bing Chat:**
>
>`The answer to your question is yes. You can feed dynamic data into the running model, at runtime, and directly in memory without prompting.` [`This is called Dynamic Data-Driven Simulation`](https://link.springer.com/chapter/10.1007/978-3-319-15096-3_4)[`1`](https://link.springer.com/chapter/10.1007/978-3-319-15096-3_4)`.`
>
>`In this context, data plays an essential role in almost every aspect of computer modeling and simulation.` [`The importance of data in modeling and simulation was discussed in many works`](https://link.springer.com/chapter/10.1007/978-3-319-15096-3_4)[`1`](https://link.springer.com/chapter/10.1007/978-3-319-15096-3_4)`.`
>
>`I hope this helps. Let me know if you have any other questions.`
​
>**Perplexity AI:**
>
>`LangChain is a framework that simplifies working with large language models (LLMs) such as OpenAI GPT4 or Google PaLM by providing abstractions for common use cases[2]. LangChain serves as a standard interface to interact with a range of large language models, such as OpenAI or HuggingFace models[1]. LangChain embraces two fundamental principles: being data-aware and agentic. By connecting language models with other data sources and enabling them to provide more accurate and nuanced responses, LangChain enhances their contextual understanding[1].`
>
>`Regarding loading embeddings with a model, TensorFlow enables you to train word embeddings. However, this process not only requires a lot of data but can also be time and resource-intensive. To tackle these challenges, you can use pre-trained word embeddings such as GloVe (Global Vectors) word embeddings by Stanford. These embeddings are obtained from representing words that are similar in the same vector space[3].`
>
>`Dynamic data can be fed into the running model at runtime, and directly in memory without prompting. A new type of language model, EntityNLM, can explicitly model entities, dynamically update their representations, and contextually generate their mentions. This model is generative and flexible; it can model an arbitrary number of entities in context while generating each entity mention at an arbitrary length[4].`
>
>`To optimize performance, researchers are exploring ways to optimize model architectures. Techniques like knowledge distillation, model pruning, and quantization can enhance efficiency[8]. Incremental learning allows models to learn new tasks without forgetting previously acquired knowledge, while transfer learning enables the reuse of pre-trained models, significantly reducing training time and computational resources[8].`
>
>`In conclusion, LangChain can interact with large language models directly, and dynamic data can be fed into the running model at runtime. Loading embeddings with a model can be resource-intensive, but pre-trained word embeddings can be used instead. To optimize performance, techniques like knowledge distillation, model pruning, and quantization can be used, and incremental learning and transfer learning can enhance efficiency.`
​
>**Bard:**
>
>`Yes, tools like LangChain can interact with the model's memory directly. This is done through the use of a memory class, which is a Python object that stores and manages the model's memory. The memory class can be used to read and write data to the model's memory, and it can also be used to filter the results of the model's inference.`
>
>`For example, the ConversationBufferMemory class in LangChain stores a list of chat messages in a buffer. When the model is asked to generate a response, the ConversationBufferMemory class is used to read the previous chat messages and pass them to the model as input. This allows the model to access the previous conversation context and generate a more relevant response.`
>
>`The advantage of interacting with the model's memory directly is that it allows you to access the model's state and history. This can be useful for tasks that require the model to remember previous interactions, such as chatbots or question answering systems.`
>
>`However, there are also some disadvantages to interacting with the model's memory directly. One disadvantage is that it can be slower than prompting the model. This is because the model has to load the memory data into memory before it can access it. Another disadvantage is that it can be more difficult to implement. This is because you need to know how the model's memory works in order to interact with it directly.`
>
>`Overall, whether or not you should interact with the model's memory directly depends on your specific application. If you need to access the model's state and history, then interacting with the memory directly is a good option. However, if you need to maximize performance, then you may want to use prompting instead.`
>
>`In your case, since you are concerned about performance, you may want to consider using prompting instead of interacting with the model's memory directly. However, if you need to access the model's state and history, then interacting with the memory directly may be the best option.`
>
>`I hope this helps!`
​
>**Huggingface Chat with LLama-2-70B:**
>
>`Sure, I can help clarify how tools like LangChain interact with large language models (LLMs) and how they extend their capabilities beyond the initial prompt and context.`
>
>`LangChain is an open-source tool designed to enable flexible and efficient interaction with LLMs. It allows developers to build applications that leverage the power of LLMs without requiring extensive knowledge of the underlying models or infrastructure. However, it's important to note that LangChain doesn't provide direct access to the model's memory or allow you to modify its internal state. Instead, it operates by generating prompts and processing the output generated by the LLM.`
>
>`Here's a high-level overview of how LangChain works:`
>
>`Prompt Generation: When you use LangChain to interact with an LLM, you provide a natural language prompt or a starting point for the conversation. This prompt is then processed by LangChain to generate a set of input tokens that represent the user's intent.`
>
>`Model Interaction: The input tokens are fed into the LLM, which processes them according to its internal mechanics. The model generates a response based on the input and its own internal state.`
>
>`Output Processing: The output from the LLM is then passed back through LangChain, which extracts relevant information and formats it into a structured response that can be easily consumed by the application.`
>
>`Response Enrichment: Depending on the specific use case, LangChain may enrich the response by incorporating external data sources, performing reasoning tasks, or integrating other models to enhance the overall quality of the output.`
>
>`While LangChain doesn't offer direct access to the model's memory, there are ways to integrate dynamic data into the process. For example, you can use LangChain's fetchfunction to retrieve information from external databases or APIs during the conversation. This allows you to incorporate up-to-date information into the dialogue without needing to modify the LLM itself.`
>
>`Regarding performance, it's true that adding additional inference steps can reduce the speed of the conversational experience. To mitigate this, LangChain provides various optimization techniques, such as caching, precomputing, and strategies for batching requests. These optimizations can significantly improve the efficiency of the interactions without sacrificing too much accuracy.`
>
>`In summary, tools like LangChain operate by generating prompts, processing the LLM's output, and enriching the responses with external data if needed. While they don't offer direct access to the model's memory, they provide a convenient interface for building conversational applications that can leverage the strengths of LLMs without requiring low-level modifications. By carefully designing the workflow and utilizing optimization techniques, it's possible to achieve a good balance between performance and functionality.` | 2023-07-30T20:02:32 | https://www.reddit.com/r/LocalLLaMA/comments/15duvig/if_youre_using_bing_chat_youre_getting_the_worst/ | FHSenpai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15duvig | false | null | t3_15duvig | /r/LocalLLaMA/comments/15duvig/if_youre_using_bing_chat_youre_getting_the_worst/ | false | false | self | 1 |
Created a tutorial video on Llama2 | 1 | [removed] | 2023-07-30T20:06:28 | https://www.reddit.com/r/LocalLLaMA/comments/15duz22/created_a_tutorial_video_on_llama2/ | aianytime | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15duz22 | false | null | t3_15duz22 | /r/LocalLLaMA/comments/15duz22/created_a_tutorial_video_on_llama2/ | false | false | self | 1 |
Was trying to install SillyTavern on Mac and got the following error; What went wrong? | 1 | /Users/\*\*\*\*/Desktop/SillyTavern/start.sh
Installing Node Modules...
npm ERR! code ENOENT
npm ERR! syscall open
npm ERR! path /Users/\*\*\*\*/package.json
npm ERR! errno -2
npm ERR! enoent ENOENT: no such file or directory, open '/Users/\*\*\*\*/package.json'
npm ERR! enoent This is related to npm not being able to find a file.
npm ERR! enoent
​
npm ERR! A complete log of this run can be found in: /Users/\*\*\*\*/.npm/\_logs/2023-07-30T16\_54\_34\_594Z-debug-0.log
Entering SillyTavern...
FATAL: Could not write default file: public/settings.json Error: ENOENT: no such file or directory, copyfile 'default/settings.json' -> 'public/settings.json'
at Object.copyFileSync (node:fs:2991:3)
at createDefaultFiles (/Users/\*\*\*\*/Desktop/SillyTavern/server.js:18:20)
at Object.<anonymous> (/Users/\*\*\*\*/Desktop/SillyTavern/server.js:3:1)
at Module.\_compile (node:internal/modules/cjs/loader:1233:14)
at Module.\_extensions..js (node:internal/modules/cjs/loader:1287:10)
at Module.load (node:internal/modules/cjs/loader:1091:32)
at Module.\_load (node:internal/modules/cjs/loader:938:12)
at Function.executeUserEntryPoint \[as runMain\] (node:internal/modules/run\_main:83:12)
at node:internal/main/run\_main\_module:23:47 {
errno: -2,
syscall: 'copyfile',
code: 'ENOENT',
path: 'default/settings.json',
dest: 'public/settings.json'
}
FATAL: Could not write default file: public/css/bg\_load.css Error: ENOENT: no such file or directory, copyfile 'default/bg\_load.css' -> 'public/css/bg\_load.css'
at Object.copyFileSync (node:fs:2991:3)
at createDefaultFiles (/Users/\*\*\*\*/Desktop/SillyTavern/server.js:18:20)
at Object.<anonymous> (/Users/\*\*\*\*/Desktop/SillyTavern/server.js:3:1)
at Module.\_compile (node:internal/modules/cjs/loader:1233:14)
at Module.\_extensions..js (node:internal/modules/cjs/loader:1287:10)
at Module.load (node:internal/modules/cjs/loader:1091:32)
at Module.\_load (node:internal/modules/cjs/loader:938:12)
at Function.executeUserEntryPoint \[as runMain\] (node:internal/modules/run\_main:83:12)
at node:internal/main/run\_main\_module:23:47 {
errno: -2,
syscall: 'copyfile',
code: 'ENOENT',
path: 'default/bg\_load.css',
dest: 'public/css/bg\_load.css'
}
FATAL: Could not write default file: config.conf Error: ENOENT: no such file or directory, copyfile 'default/config.conf' -> 'config.conf'
at Object.copyFileSync (node:fs:2991:3)
at createDefaultFiles (/Users/\*\*\*\*/Desktop/SillyTavern/server.js:18:20)
at Object.<anonymous> (/Users/\*\*\*\*/Desktop/SillyTavern/server.js:3:1)
at Module.\_compile (node:internal/modules/cjs/loader:1233:14)
at Module.\_extensions..js (node:internal/modules/cjs/loader:1287:10)
at Module.load (node:internal/modules/cjs/loader:1091:32)
at Module.\_load (node:internal/modules/cjs/loader:938:12)
at Function.executeUserEntryPoint \[as runMain\] (node:internal/modules/run\_main:83:12)
at node:internal/main/run\_main\_module:23:47 {
errno: -2,
syscall: 'copyfile',
code: 'ENOENT',
path: 'default/config.conf',
dest: 'config.conf'
}
node:internal/modules/cjs/loader:1051
throw err;
\^
​
Error: Cannot find module 'yargs/yargs'
Require stack:
\- /Users/\*\*\*\*/Desktop/SillyTavern/server.js
at Module.\_resolveFilename (node:internal/modules/cjs/loader:1048:15)
at Module.\_load (node:internal/modules/cjs/loader:901:27)
at Module.require (node:internal/modules/cjs/loader:1115:19)
at require (node:internal/modules/helpers:119:18)
at Object.<anonymous> (/Users/\*\*\*\*/Desktop/SillyTavern/server.js:28:15)
at Module.\_compile (node:internal/modules/cjs/loader:1233:14)
at Module.\_extensions..js (node:internal/modules/cjs/loader:1287:10)
at Module.load (node:internal/modules/cjs/loader:1091:32)
at Module.\_load (node:internal/modules/cjs/loader:938:12)
at Function.executeUserEntryPoint \[as runMain\] (node:internal/modules/run\_main:83:12) {
code: 'MODULE\_NOT\_FOUND',
requireStack: \[ '/Users/\*\*\*\*/Desktop/SillyTavern/server.js' \]
}
​
Node.js v20.5.0
\*\*\*\*-MBP:\~ \*\*\*\*$
​ | 2023-07-30T20:13:01 | https://www.reddit.com/r/LocalLLaMA/comments/15dv52c/was_trying_to_install_sillytavern_on_mac_and_got/ | ChromeGhost | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dv52c | false | null | t3_15dv52c | /r/LocalLLaMA/comments/15dv52c/was_trying_to_install_sillytavern_on_mac_and_got/ | false | false | self | 1 | null |
Running LLaMA | 1 | Hello all, I am a newbie here, sorry if my question is a bit off topic.
I am a DevOps engineer trying to run LLaMA models. Do we still need huge computational resources to run a model, or is that only when trying to train the model?
It seems that using only the CPU I can still run the web UI + downloaded models (llama2 7b), and when I check, the CPU is still mostly idle.
Thank you | 2023-07-30T20:45:54 | https://www.reddit.com/r/LocalLLaMA/comments/15dvz7k/running_llama/ | Cultural-Pizza-1916 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dvz7k | false | null | t3_15dvz7k | /r/LocalLLaMA/comments/15dvz7k/running_llama/ | false | false | self | 1 | null |
CPU, GPU or RAM? | 1 | What's the most important for running models locally?
I'm going to build a PC with a Ryzen 5 5600 CPU, a 6800 XT GPU and 32GB ram at 3600MHz (CL16). What kind of performance/capacities can I expect?
I can slowly run a 7b model on a much less powerful CPU, so I guess it should be much better? | 2023-07-30T21:20:08 | https://www.reddit.com/r/LocalLLaMA/comments/15dwumk/cpu_gpu_or_ram/ | AloneInTheWhole | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dwumk | false | null | t3_15dwumk | /r/LocalLLaMA/comments/15dwumk/cpu_gpu_or_ram/ | false | false | self | 1 | null |
Is there anyway to add classifier free guidance to inference-optimized libraries like ctranslate/vllm? | 1 | I've been experimenting quite a bit with classifier free guidance and found it to be super useful when generating text. There's a huggingface [implementation](https://github.com/huggingface/transformers/issues/24536) that can be easily added to your HF model and you can generate texts pretty easily. The only problem is that when moving to the inference optimized libraries like ctranslate and vllm, they have their own "generate" function which does not rely on HF and does not have CFG.
​
Does anyone know how CFG might be added to those libraries so I can generate text easily using them? It would be super helpful in the long run! | 2023-07-30T21:33:47 | https://www.reddit.com/r/LocalLLaMA/comments/15dx716/is_there_anyway_to_add_classifier_free_guidance/ | Ok_Coyote_8904 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dx716 | false | null | t3_15dx716 | /r/LocalLLaMA/comments/15dx716/is_there_anyway_to_add_classifier_free_guidance/ | false | false | self | 1 |
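For context, the guidance step itself is tiny: at each decoding position you need next-token logits for the conditioned prompt and for an unconditioned (or negative) prompt, and you combine them before sampling. The hard part with vLLM/CTranslate2 is getting a hook at that per-step point, which depends on what each library exposes; the sketch below only shows the math, with arbitrary example tensors:

    import torch

    def cfg_logits(cond_logits, uncond_logits, guidance_scale=1.5):
        # guidance_scale == 1.0 reproduces ordinary sampling; > 1.0 pushes the
        # distribution toward the conditioned prompt.
        return uncond_logits + guidance_scale * (cond_logits - uncond_logits)

    cond = torch.randn(1, 32000)    # next-token logits given the full prompt
    uncond = torch.randn(1, 32000)  # next-token logits given the stripped prompt
    next_token = torch.argmax(cfg_logits(cond, uncond), dim=-1)
    print(next_token)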
How long would it take for local LLMs to catch up with GPT-4? A few or several years? | 1 | Unless there's a big breakthrough in LLM model architecture and/or consumer hardware, it sounds like it would be very difficult for local LLMs to catch up with GPT-4.
According to [leaked information about GPT-4 architecture, datasets, costs](https://the-decoder.com/gpt-4-architecture-datasets-costs-and-more-leaked/), the scale seems impossible with what's available to consumers for now even just to run inference.
* GPT-4 has ~1.8 trillion parameters
* GPT-4 is trained on ~13T tokens
* The training costs for GPT-4 was around $63 million
* The inference runs on a cluster of 128 GPUs, using 8-way tensor parallelism and 16-way pipeline parallelism. | 2023-07-30T22:14:14 | https://www.reddit.com/r/LocalLLaMA/comments/15dy75o/how_long_would_it_take_for_local_llms_to_catch_up/ | jl303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dy75o | false | null | t3_15dy75o | /r/LocalLLaMA/comments/15dy75o/how_long_would_it_take_for_local_llms_to_catch_up/ | false | false | self | 1 |
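Taking the leaked parameter count at face value, the weights alone put inference far beyond consumer hardware before the KV cache and activations are even counted (and even though a mixture-of-experts model only activates a fraction of those parameters per token, the weights still have to be resident somewhere). A back-of-the-envelope calculation:

    params = 1.8e12  # reported parameter count
    for name, bytes_per_param in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
        print(f"{name}: {params * bytes_per_param / 1e12:.1f} TB just for weights")
    # fp16: 3.6 TB, int8: 1.8 TB, int4: 0.9 TB -- versus 24 GB on a single RTX 4090.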
does oobabooga-windows support llama 2? Or is an update required? | 1 | For reference, it took me over a week to get the installers properly working, and this was 2-3 months ago (I haven't updated it or touched any of the install scripts out of fear it would somehow break my original install).
However I've seen the recent release with llama 2 and want to test it out.
Is it easy to update oobabooga? Do I simply run the exact same install batch file as before, or do I need to re-download from the original GitHub repo and do a fresh install to get a current version?
Or should it work easily if I just download the model via huggingface like I do with any other model?
Side question - what's the best model out there for llama 2 currently? I have a single 3090 so I have just been using 4-bit 30b quantized models. | 2023-07-30T23:36:26 | https://www.reddit.com/r/LocalLLaMA/comments/15e0421/does_oobaboogawindows_support_llama_2_or_is_an/ | RoyalCities | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e0421 | false | null | t3_15e0421 | /r/LocalLLaMA/comments/15e0421/does_oobaboogawindows_support_llama_2_or_is_an/ | false | false | self | 1 | null |
Problems with llama models getting 'stuck'. | 1 | Hi,
I'm having trouble with the models getting stuck in the middle of text generation. It doesn't crash or stop, but rather takes a very long pause and then continues like nothing happened. If I am using the 70B model on my cpu, this pause can be for twenty minutes. Is there something I am doing wrong or something I can do differently to fix this?
Thanks. | 2023-07-30T23:47:29 | https://www.reddit.com/r/LocalLLaMA/comments/15e0cvt/problems_with_llama_models_getting_stuck/ | Red_Redditor_Reddit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e0cvt | false | null | t3_15e0cvt | /r/LocalLLaMA/comments/15e0cvt/problems_with_llama_models_getting_stuck/ | false | false | self | 1 | null |
Need guidance fine-tuning LLAMA-2 | 1 | Hi all!
This is my first time working with LLMs and I am planning on fine-tuning LLAMA 2 on an extensive bibliography of a philosopher. To that end, I have extracted large quantities of text and stored them in a pandas dataframe. Rows are books and columns are title and body of text.
I am now looking into fine-tuning LLAMA-2 but I am getting lost over all the options. If my intended use case is to chat with the LLM, how would you proceed?
From what I understood, I should fine tune the chat version if I want to use it like ChatGPT, but my original dataset is just the philosopher texts, it is not prompt-response pairs. My original idea was to simply do autoregressive training by shifting the window to produce the input-output pairs. Is there a way to solve this without extensive manual work?
Personal context:
I already got access to the models.
I use an M1 64GB which I intend to use for inference. For training, I am willing to spend a small amount of money on compute. Which is the best option? GCP, Azure, AWS?
I would greatly appreciate some direction to complete my project. Thanks!! | 2023-07-30T23:53:57 | https://www.reddit.com/r/LocalLLaMA/comments/15e0hx0/need_guidance_finetuning_llama2/ | Por-Tutatis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e0hx0 | false | null | t3_15e0hx0 | /r/LocalLLaMA/comments/15e0hx0/need_guidance_finetuning_llama2/ | false | false | self | 1 | null |
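On the "sliding window" idea: plain causal-LM fine-tuning does not need prompt/response pairs at all, just fixed-length token chunks where the labels equal the inputs (the trainer shifts them by one internally), so the books can be used as-is; the chat format only matters if an instruction-following feel is wanted later. A minimal sketch, assuming access to a Hugging Face Llama 2 tokenizer and the pandas layout described in the post (the column name is an assumption):

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

    def chunk_book(text: str, block_size: int = 2048) -> list[dict]:
        ids = tokenizer(text, add_special_tokens=False)["input_ids"]
        examples = []
        for start in range(0, len(ids) - block_size + 1, block_size):
            block = ids[start:start + block_size]
            # For causal LM training the labels are the inputs themselves.
            examples.append({"input_ids": block, "labels": block.copy()})
        return examples

    # df is the dataframe from the post: one row per book, a column with the body text.
    # train_examples = [ex for body in df["body"] for ex in chunk_book(body)]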
How many prompts can local LLaMa-2 handle on a M1 Macbook simultaneously? | 1 | Trying to figure out how to basically create an API (like OpenAI) that can respond to multiple prompts at the same time.
I tried executing multiple instances on the command line, but they all hang when more than one is running.
./main -m ./models/llama-2-7b-chat.ggmlv3.q8_0.bin -n 1024 -p "<prompt>"
​ | 2023-07-31T00:18:24 | https://www.reddit.com/r/LocalLLaMA/comments/15e11o0/how_many_prompts_can_local_llama2_handle_on_a_m1/ | advisa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e11o0 | false | null | t3_15e11o0 | /r/LocalLLaMA/comments/15e11o0/how_many_prompts_can_local_llama2_handle_on_a_m1/ | false | false | self | 1 | null |
Computer specs CPU-ONLY to run Llama-2-70B-GGML | 1 | Can AMD Epyc run it? | 2023-07-31T00:32:36 | https://www.reddit.com/r/LocalLLaMA/comments/15e1cgi/computer_specs_cpuonly_to_run_llama270bggml/ | grigio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e1cgi | false | null | t3_15e1cgi | /r/LocalLLaMA/comments/15e1cgi/computer_specs_cpuonly_to_run_llama270bggml/ | false | false | self | 1 | null |
LLM API streaming remote | 1 | I have a project with the new Llama 2 and a custom frontend that my partner is building, and they want the text generation to stream directly to the client browser without the delay. So far, I have used LangChain, oobabooga, FastAPI, and other smaller repos to generate the text and send it to a port directly, but it keeps waiting for the text to finish before sending it out. This doesn't fit the UX, so we need to fix it.
I am wondering if anyone else encountered this and if you have a technique you found that works? | 2023-07-31T02:08:52 | https://www.reddit.com/r/LocalLLaMA/comments/15e3d5o/llm_api_streaming_remote/ | muckitymuck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e3d5o | false | null | t3_15e3d5o | /r/LocalLLaMA/comments/15e3d5o/llm_api_streaming_remote/ | false | false | self | 1 | null |
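The usual cause is that the endpoint builds the whole completion and returns it in one response; streaming needs the server to hand back a generator that yields tokens as they are produced (and any reverse proxy in front must not buffer the response). A minimal FastAPI sketch, assuming llama-cpp-python as the backend - any backend works as long as it exposes an incremental generate call; paths and names are placeholders:

    from fastapi import FastAPI
    from fastapi.responses import StreamingResponse
    from pydantic import BaseModel
    from llama_cpp import Llama

    app = FastAPI()
    llm = Llama(model_path="./models/llama-2-13b-chat.ggmlv3.q4_0.bin", n_ctx=4096)

    class Req(BaseModel):
        prompt: str
        max_tokens: int = 512

    @app.post("/stream")
    def stream(req: Req):
        def token_gen():
            # stream=True yields one chunk per token instead of one final response.
            for chunk in llm(req.prompt, max_tokens=req.max_tokens, stream=True):
                yield chunk["choices"][0]["text"]
        return StreamingResponse(token_gen(), media_type="text/plain")

    # run with: uvicorn server:app --host 0.0.0.0 --port 8000

On the client side the frontend then has to read the response body incrementally (for example a fetch ReadableStream or server-sent events) instead of awaiting the full JSON.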
Got a second GPU, what do I need to know about getting them to work together and any other suggestions? | 1 | [removed] | 2023-07-31T03:15:20 | https://www.reddit.com/r/LocalLLaMA/comments/15e4o6s/got_a_second_gpu_what_do_i_need_to_know_about/ | Virtamancer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e4o6s | false | null | t3_15e4o6s | /r/LocalLLaMA/comments/15e4o6s/got_a_second_gpu_what_do_i_need_to_know_about/ | false | false | self | 1 | null |
How to determine the best number of inference tokens after fine-tuning llama2? | 1 | After my fine-tuning of llama2-13b-chat with 80,000 instructions, I find my new llama-2-13b-chat likes to say a lot of things when I ask it a simple question.
If I set the number of tokens to 100, it usually gets cut off at exactly 100 tokens when it chats with me. But I do not know whether it tends to reach 150 tokens, 200 tokens, or more.
How do I set the optimal length (in tokens) of each response it outputs? | 2023-07-31T03:28:49 | https://www.reddit.com/r/LocalLLaMA/comments/15e4x6b/how_to_determine_the_best_number_of_inference/ | badage | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e4x6b | false | null | t3_15e4x6b | /r/LocalLLaMA/comments/15e4x6b/how_to_determine_the_best_number_of_inference/ | false | false | self | 1 | null |
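For what it's worth, with Hugging Face `generate()` there are really only two length controls: `max_new_tokens` is a hard cap, and the model stops earlier only when it emits the EOS token. If the fine-tuning targets never ended with EOS, the model will tend to run on until the cap regardless of which number is picked, so the practical fix is usually in the data rather than in a magic token count. A minimal sketch (the checkpoint name is a placeholder for the fine-tuned model):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "meta-llama/Llama-2-13b-chat-hf"  # placeholder: your fine-tuned checkpoint
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")

    inputs = tokenizer("Question: What is a LoRA?\nAnswer:", return_tensors="pt").to(model.device)
    out = model.generate(
        **inputs,
        max_new_tokens=200,                   # hard upper bound on response length
        eos_token_id=tokenizer.eos_token_id,  # lets the model stop early on its own
        do_sample=False,
    )
    print(tokenizer.decode(out[0], skip_special_tokens=True))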
New trained storytelling LoRA returns really interesting results. | 1 | Hello everyone.
Several weeks ago, I posted about training my own storytelling LoRA with a different structure, and it returned funny results.
Just now, I've finished another LoRA, with 30 MB of raw training data. I've used **conceptofmind/LLongMA-2-13b-16k** as the training basis. I loaded it in 8-bit mode and trained for about 30 hours.
Now I'm playing with it. I've inputted this prompt text:
[ STORY ]
This wonderful world!
[ STORY DESCRIPTION ]
The world of Alurei. World of might and magic. Yuuya is the young man, who had teleported from our world to this one. Can he survive or he will fall?
[ CHARACTER_DESCRIPTION Yuuya ]
Main character of this story. He is 20 years old, he has curly black hair, very naive and cheerful.
[ CHARACTER_DESCRIPTION Teera ]
Young battlemage girl in world of Alurei, that met Yuuya. Has black hair and glowing green eyes. Very grumpy and hostile disposition.
[ SCENARIO ]
Teleportation!
Then Model generated me this:
[ CONTEXT ]
...
[ SCENE IMAGE ]
a large building with stairs leading up to it, claymore anime background, warm sunlight shining in, outside the'school of magic ', mobile game background, portrait n - 9, islamic, ue4, scarlet, neo tokyo background, full device, hi-res photo, va-11 hall-a
[ CONTEXT ]
A city street was spread out before me.
[ CONTEXT ]
It was a scene which looked like something out of fantasy novel.
[ CONTEXT ]
I found myself standing on a foreign soil.
[ CONTEXT ]
The feeling was different than being transported by airplane.
[ CONTEXT ]
Like being pulled through some sort of portal.
[ CONTEXT ]
However, there were no signs of any such portals around here.
[ CONTEXT ]
Only the sense of having been sucked into another dimension......
[ CONTEXT ]
And then I realized that I didn't have my luggage either.
[ Yuuya ]
Wha!!
[ CONTEXT ]
What was going on?
[ CONTEXT ]
Where am I right now??
[ CONTEXT ]
In other words, where did I end up?
[ CONTEXT ]
I wonder if this place wasn't the same as the place I came from.
[ CONTEXT ]
It was only natural for me to be bewildered and confused.
[ CONTEXT ]
But, what should I do next?
[ Yuuya ]
Ahh.........
[ CONTEXT ]
As far as I knew, it seemed that I had arrived at a certain point in time.
[ CONTEXT ]
Whether I liked it or not, I would need to make use of this opportunity.
[ CONTEXT ]
I guess I would try walking around first.
[ CONTEXT ]
That way, maybe I could find the location of my destination.
[ Yuuya ]
Well then, let's go.
[ CONTEXT ]
Making up my mind, I began to walk forward.
In this generated text I've only inputted **STORY, STORY_DESCRIPTION, CHARACTER_DESCRIPTION, and SCENARIO**, and it started to generate this narration.
It really returns funny results and tries to make reasonable narration. However, there are some issues and concerns I've encountered:
1. It sometimes **outputs characters and texts from training data**, which leads to awkward results
2. A 13B model, while lightweight, is not very "smart" at storytelling.
3. Again, I've botched some parts of my training dataset; some bracketed texts have incorrect structure.
4. oobabooga's training tools are very clunky. It is a text-generation UI first, and only after that does it have some training tools. Maybe switching to axolotl will make training easier and faster.
5. This LoRA has been trained (30 MB of text) for **2 epochs**, but during the last quarter of training the loss was hovering around **1.22 - 1.23**, which makes me think that **1.5 epochs would be enough**, if that is even possible.
I'm thinking of increasing the training dataset to 100 MB and training on the 30B Llama 2 when it releases.
​ | 2023-07-31T04:45:38 | https://www.reddit.com/r/LocalLLaMA/comments/15e6bmm/new_trained_storytelling_lora_returns_really/ | DaniyarQQQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e6bmm | false | null | t3_15e6bmm | /r/LocalLLaMA/comments/15e6bmm/new_trained_storytelling_lora_returns_really/ | false | false | self | 1 | null |
GitHub - Alpha-VLLM/LLaMA2-Accessory: An Open-source Toolkit for LLM Development | 1 | 2023-07-31T04:47:06 | https://github.com/Alpha-VLLM/LLaMA2-Accessory | ninjasaid13 | github.com | 1970-01-01T00:00:00 | 0 | {} | 15e6cl2 | false | null | t3_15e6cl2 | /r/LocalLLaMA/comments/15e6cl2/github_alphavllmllama2accessory_an_opensource/ | false | false | 1 |
Variants of LLaMa or similar that can "see" and describe images? | 1 | After playing around with Bard a bit, I was quite impressed by its ability to let you upload images and have Bard talk about those images with a great degree of nuance. What's the current best in class among open source models, which still maintain the raw horsepower at general language tasks / chatting that models like LLaMa have?
In trying to research this for myself online, I discovered things like CLIP, which seems to be about ways you can help it train an LLM on image data. While this is quite interesting, what I'm asking about here is what open source model is the best "out of the box" at talking about the contents of images in a general sense. e.g. in much the same way as the LLaMa chat models are good at chatting without performing any fine-tuning and configuration. | 2023-07-31T04:49:50 | https://www.reddit.com/r/LocalLLaMA/comments/15e6ed7/variants_of_llama_or_similar_that_can_see_and/ | ascendant23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e6ed7 | false | null | t3_15e6ed7 | /r/LocalLLaMA/comments/15e6ed7/variants_of_llama_or_similar_that_can_see_and/ | false | false | self | 1 | null |
How do we fine-tune Llama 2 model for Custom Intent Classification? | 1 | [removed] | 2023-07-31T05:41:19 | https://www.reddit.com/r/LocalLLaMA/comments/15e7a5g/how_do_we_finetune_llama_2_model_for_custom/ | Interesting_Low_9766 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e7a5g | false | null | t3_15e7a5g | /r/LocalLLaMA/comments/15e7a5g/how_do_we_finetune_llama_2_model_for_custom/ | false | false | self | 1 | null |
Finetuning Llama model for code analysis? | 1 | [removed] | 2023-07-31T05:47:56 | https://www.reddit.com/r/LocalLLaMA/comments/15e7ecc/finetuning_llama_model_for_code_analysis/ | CodeIsTrash | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e7ecc | false | null | t3_15e7ecc | /r/LocalLLaMA/comments/15e7ecc/finetuning_llama_model_for_code_analysis/ | false | false | default | 1 | null |
Running LLama in cloud | 1 | My friend works in consulting. ChatGPT could've helped him immensely since his work mostly includes researching, creating PPTs but it's banned internally due to secretive nature of his data.
So, I'm thinking of hosting LLama for him on cloud. I was thinking of creating a script which you can just run in your server and it will automatically do all tasks for you. Would you think people might have a utility for this script? | 2023-07-31T06:10:22 | https://www.reddit.com/r/LocalLLaMA/comments/15e7seu/running_llama_in_cloud/ | middlenameishardwork | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e7seu | false | null | t3_15e7seu | /r/LocalLLaMA/comments/15e7seu/running_llama_in_cloud/ | false | false | self | 1 | null |
Best M1 MacOS app for running models locally? | 1 | Looking for a UI Mac app that can run LLaMA/2 models locally. I remember seeing what looked like a solid one on GitHub but I had my intel Mac at the time and I believe it’s only compatible on Apple silicon. | 2023-07-31T06:23:09 | https://www.reddit.com/r/LocalLLaMA/comments/15e80g7/best_m1_macos_app_for_running_models_locally/ | sardoa11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e80g7 | false | null | t3_15e80g7 | /r/LocalLLaMA/comments/15e80g7/best_m1_macos_app_for_running_models_locally/ | false | false | self | 1 | null |
Create GraphDB using LLMs | 1 | Hey everyone, I was wondering if anyone has ventured into creating a graph DB from a raw unstructured text corpus using LLMs, or has any experience with this.
I came across this repo: [https://github.com/varunshenoy/GraphGPT](https://github.com/varunshenoy/GraphGPT), which uses gpt3.5/4 to generate graphs. Has anything been done with open-source LLMs? | 2023-07-31T07:08:00 | https://www.reddit.com/r/LocalLLaMA/comments/15e8rx3/create_graphdb_using_llms/ | Raise_Fickle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e8rx3 | false | null | t3_15e8rx3 | /r/LocalLLaMA/comments/15e8rx3/create_graphdb_using_llms/ | false | false | self | 1 |
Using Open-Source LLM Models vs. Expensive OpenAI APIs: A Logical Choice for Consumer Apps? | 1 | I'm currently developing an interpreter app for mobile that generates short comments and audio based on a given single prompt. The goal is to keep it simple, with one-time generation (no chatting for now) and a monthly limit for users. I've personally been using open-source LLMs for a good amount of time (coding, instruction, storytelling, daily convos, etc.) but have never used them commercially. They got my job done pretty well, so I want to use them for this commercial product, and this approach can significantly reduce the monthly cost for clients compared to using expensive OpenAI APIs. But I don't know how much performance and reliability I will sacrifice.
At the moment, we have a Stable Diffusion process in place, which has shown remarkable results and reliability, surpassing the Midjourney approach. This is the part that confuses me: can we do the same thing with LLMs and avoid using their APIs for commercial usage? | 2023-07-31T07:23:27 | https://www.reddit.com/r/LocalLLaMA/comments/15e915q/using_opensource_llm_models_vs_expensive_openai/ | sarimsak13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e915q | false | null | t3_15e915q | /r/LocalLLaMA/comments/15e915q/using_opensource_llm_models_vs_expensive_openai/ | false | false | self | 1 | null |
Exploring a Non-Profit, Community-Led Approach to Open Source AI | 1 | [removed] | 2023-07-31T08:04:52 | https://www.reddit.com/r/LocalLLaMA/comments/15e9qsw/exploring_a_nonprofit_communityled_approach_to/ | NessimBA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e9qsw | false | null | t3_15e9qsw | /r/LocalLLaMA/comments/15e9qsw/exploring_a_nonprofit_communityled_approach_to/ | false | false | self | 1 | null |
Can I create my own LLM API instead of using OpenaiAPI? | 17 | Hi.
I am not a developer, but I've installed WizardLM 13b GPTQ, and it works fine.
Can I create an API for this somehow and connect it to, e.g., Obsidian, Joplin, or any other software? Is this possible? If yes, do you have a recommendation on how to do it, a guide, or a link?
Thank you so much. | 2023-07-31T08:23:57 | https://www.reddit.com/r/LocalLLaMA/comments/15ea38z/can_i_create_my_own_llm_api_instead_of_using/ | AdamsText | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ea38z | false | null | t3_15ea38z | /r/LocalLLaMA/comments/15ea38z/can_i_create_my_own_llm_api_instead_of_using/ | false | false | self | 17 | null |
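One way to get there, sketched below under assumptions: wrap a locally loaded model in a small HTTP server and point Obsidian, Joplin, or anything else that can make HTTP calls at it. The sketch uses llama-cpp-python with a GGML build of the model (the GPTQ file would need a different loader) plus FastAPI; the model path and endpoint shape are illustrative, not a standard. text-generation-webui's own API extension may be a shorter path if the existing GPTQ setup should stay as-is.

    # Minimal sketch: expose a local model over HTTP so other apps can call it.
    # Assumes `pip install llama-cpp-python fastapi uvicorn`; model path is a placeholder.
    from fastapi import FastAPI
    from pydantic import BaseModel
    from llama_cpp import Llama

    app = FastAPI()
    llm = Llama(model_path="./wizardlm-13b.ggmlv3.q4_0.bin", n_ctx=2048)

    class CompletionRequest(BaseModel):
        prompt: str
        max_tokens: int = 256
        temperature: float = 0.7

    @app.post("/v1/completions")
    def complete(req: CompletionRequest):
        out = llm(req.prompt, max_tokens=req.max_tokens, temperature=req.temperature)
        return {"text": out["choices"][0]["text"]}

    # Run with: uvicorn server:app --port 8000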
Extremely repetitive/deterministic content over multiple generations | 1 | This happens with any LLaMA model (e.g. Vicuna), although it seems to happen even more with Llama 2. Right now my preset is Asterism on Oobabooga and I'm using the Llama-2 Chat GPTQ.
Just look at the two different generations (clicking on Regenerate):
​
https://preview.redd.it/spk9xkzeh9fb1.png?width=919&format=png&auto=webp&s=2198ff961d4492c9493e9fd9ce14ebaf7c83cd23
​
https://preview.redd.it/sxijxo7gh9fb1.png?width=914&format=png&auto=webp&s=a3179ab4d277861f0e45ee8994916c8a26e63199
​
This is absurdly **deterministic**, and to be honest it happens to me a lot. It has happened literally since day one, when I started playing around with local LLaMA models. If I turn the temperature up even higher, what eventually happens is that yes, maybe I get some variety, but at the expense of it being gibberish.
Is anyone facing this problem? It happens with Vicuna and other models too. The regenerations are way too similar, which honestly kills the whole "creative writing oriented instruct" use case for me, as what I want is to get several versions to find inspiration.
​ | 2023-07-31T08:33:36 | https://www.reddit.com/r/LocalLLaMA/comments/15ea9jl/extremely_repetitivedeterministic_content_over/ | CulturedNiichan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ea9jl | false | null | t3_15ea9jl | /r/LocalLLaMA/comments/15ea9jl/extremely_repetitivedeterministic_content_over/ | false | false | 1 | null |
|
What is the best way to create a Free & light knowledge-base specific LLM chatbot ? | 1 | I want to do something like this but haven't found a free and cheap solution yet. Almost everything uses an OpenAI API key, and running OpenLLM locally requires high-spec hardware. So far what I've done is use cosine similarity to get the information asked about in the chat, then throw that text to a free chatbot API (which is limited in usage, of course) to get a natural response from the bot.
const llmApplication = await new LLMApplicationBuilder()
.setTemperature(0.1)
.addLoader(new WebLoader({ url: 'https://www.reddit.com/' }))
.setVectorDb(new LanceDb({ path: path.resolve('/db') }))
.build();
console.log(await llmApplication.query('What is reddit?'));
// Reddit is a bla bla
I found something good and free, which is Hugging Face Inference. Since I work in NodeJS this makes it very easy for me, but their conversational chatbots are very bad and give nonsense answers if I throw the text from the cosine-similarity step at them, for example the facebook/blenderbot-400M-distill model. Also they don't support multiple languages, only English. | 2023-07-31T08:46:29 | https://www.reddit.com/r/LocalLLaMA/comments/15eahhi/what_is_the_best_way_to_create_a_free_light/ | balianone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eahhi | false | null | t3_15eahhi | /r/LocalLLaMA/comments/15eahhi/what_is_the_best_way_to_create_a_free_light/ | false | false | self | 1 | null |
What does k-quant mean? | 1 | I am new to this AI stuff but I successfully managed to run GGML models on my computer with cuBLAS. But I don't understand why some models have several versions of varying size with weird labels like q2\_K, q3\_K\_S, q3\_K\_M, etc.
​
Can someone explain why I should care and which k-quant method I should pick? | 2023-07-31T09:25:10 | https://www.reddit.com/r/LocalLLaMA/comments/15eb67i/what_does_kquant_mean/ | Explorer-Necessary | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eb67i | false | null | t3_15eb67i | /r/LocalLLaMA/comments/15eb67i/what_does_kquant_mean/ | false | false | self | 1 | null |
Online LLM comparison Tool | 1 | Hello everyone,
​
how is the website called where I can run multiple LLMs with one prompt and compare the answers.
​
A Tool with contains Falcon & Llama 2 would be for my needs.
​
Thank yall! | 2023-07-31T09:31:17 | https://www.reddit.com/r/LocalLLaMA/comments/15eba3p/online_llm_comparison_tool/ | jnk_str | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eba3p | false | null | t3_15eba3p | /r/LocalLLaMA/comments/15eba3p/online_llm_comparison_tool/ | false | false | self | 1 | null |
The possibilities of local language models are truly endless | 1 | 2023-07-31T10:19:44 | yungfishstick | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15ec6f7 | false | null | t3_15ec6f7 | /r/LocalLLaMA/comments/15ec6f7/the_possibilities_of_local_language_models_are/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'bshZ_t8jjnAS2M04SXSZwKTbNSdjOxKC5_7I_iHuf4o', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/tgpode6v0afb1.jpg?width=108&crop=smart&auto=webp&s=d7092ce7e0c03f6d1f34124d84042eb60b1694ee', 'width': 108}, {'height': 174, 'url': 'https://preview.redd.it/tgpode6v0afb1.jpg?width=216&crop=smart&auto=webp&s=9cc2ce360ef7d1bfbfb684a0fbfbb3fd6bf04a9f', 'width': 216}, {'height': 258, 'url': 'https://preview.redd.it/tgpode6v0afb1.jpg?width=320&crop=smart&auto=webp&s=3f4d5c9fb99d3668d34611f72da28ccaaca5a7fa', 'width': 320}, {'height': 516, 'url': 'https://preview.redd.it/tgpode6v0afb1.jpg?width=640&crop=smart&auto=webp&s=7717e18aacffe20b42cf347b1cc94c76708a5c56', 'width': 640}, {'height': 774, 'url': 'https://preview.redd.it/tgpode6v0afb1.jpg?width=960&crop=smart&auto=webp&s=5de4fcc69a9ebb17ace555ca49835401681369e8', 'width': 960}, {'height': 871, 'url': 'https://preview.redd.it/tgpode6v0afb1.jpg?width=1080&crop=smart&auto=webp&s=3e1568f07f11a26e6d48115fec8a80527b13e1d7', 'width': 1080}], 'source': {'height': 871, 'url': 'https://preview.redd.it/tgpode6v0afb1.jpg?auto=webp&s=fe2e428a37498bd32731bfb376a5bdeaee5acab8', 'width': 1080}, 'variants': {}}]} |
|||
Adding keywords related to topic to LLM context improves performance, here's my prompt to generate them and some keywords for being a machine-learning engineer. | 1 | Whenever I am working with any LLM (typically GPT-4) I generate keywords relevant to my context. I find it has better results and I recommend the same. First I generate them for the topic I'm interested in:
>In the next prompt, Use the mantle of a ‘context generator’ related to the topic of: “Python Development for Machine Learning using The OpenAI API For GPT and LLM”.
>Write as many words to help you answer questions regarding this topic correctly, succinctly, appropriately, and precisely. For a mathematical subject, this would include academic terms that are unique to the subject area, e.g. “self-attention,encoder decoder,bert,seq2seq”. For a non-academic subject, this would provide terms that are up to date within that frame but would not integrate words that are not useful for understanding the fundamentals. For example, for the topic of ‘Super Smash Brothers Ultimate’, you would generate fundamentals related to the space like: “platform fighter,competitive gaming,esports”
>Your goal is to create words that give yourself context for further interactions. Focus on constructing the words that are useful to you. You do not have to create words that are comprehensible to a human mind. This will not be read.
>Write as many sequences of parts-of-words that are most useful for the topic discussion of “Python Development for Machine Learning using The OpenAI API For GPT and LLM”. Do not format for readability. write,like,this,for,each,keyword.
Then I copy and paste the keywords into a fresh instance:
Python, development,machine,learning,OpenAI,API,GPT,LLM,programming,scripting,deep,learning,natural,language,processing,NLP,transformer,architecture,PyTorch,TensorFlow,API,endpoint,training,data,tokenization,context,sequences,fine-tuning,generative,pretrained,discrimitive,regression,classification,optimization,loss,function,gradient,descent,backpropagation,overfitting,underfitting,bias,variance,train,test,validation,split,hyperparameters,epochs,learning,rate,early,stopping,validation,loss,accuracy,precision,recall,F1,score,confusion,matrix,ROC,AUC,computational,resources,GPU,CPU,neural,network,architecture,feedforward,convolutional,recurrent,LSTM,GRU,self-attention,encoder,decoder,BERT,Seq2Seq,language,model,token,sequence,embedding,dimensionality,reduction,PCA,t-SNE,feature,selection,extraction,regularization,dropout,batch,normalization,residual,connections,Flask,Django,web,app,deployment,Heroku,AWS,Docker,Kubernetes,serverless,architecture,API,design,Postman,REST,GraphQL,HTTP,requests,GET,POST,PUT,DELETE,headers,body,JSON,XML,authentication,authorization,JWT,OAuth2,rate,limiting,microservices,agile,development,Scrum,Kanban,Jira,GitHub,Git,version,control,continuous,integration,continuous,deployment,CI,CD,pipelines,Jenkins,Travis,CI,CircleCI,unit,testing,integration,testing,pytest,unittest,debugging,logging,exception,handling,PDB,IDE,PyCharm,VSCode,Jupyter,Notebook,numpy,pandas,matplotlib,seaborn,scikit-learn,statsmodels,nltk,spacy,Gensim,datasets,image,processing,computer,vision,OpenCV,PIL,transfer,learning,style,transfer,GAN,autoencoder,RL,QLearning,DDPG,TD3,SAC,A2C,A3C,PPO,TRPO,environment,agent,reward,state,action,policy,value,function,exploration,exploitation,epsilon,greedy,discount,factor,return,reinforcement,learning,unsupervised,learning,supervised,learning,semi-supervised,learning,anomaly,detection,cluster,analysis,KMeans,DBSCAN,association,rules,Apriori,Eclat,FP-Growth,decision,tree,random,forest,gradient,boosting,XGBoost,LightGBM,CatBoost,SVM,Naive,Bayes,KNN,linear,regression,logistic,regression,neural,networks,DeepMind,chatbot,DialogFlow,Rasa,context,entities,intent,fulfillment,webhook,dialog,flow,path,stories,rules,policies,actions,forms,slots,active,learning,transfer,learning,few-shot,learning,zero-shot,learning,unsupervised,translation,domain,adaptation,multimodal,learning,vision,language,pretraining,self-supervised,learning,multitask,learning,dataset,construction,benchmarking,evaluation,interpretability,analysis,security,privacy,fairness,accountability,transparency,ethical,AI,open,source,contributing,code,review,pull,request,issue,bug,fix,feature,request,documentation,readme,markdown,code,quality,style,PEP8,linting,pylint,flake8,black,code,complexity,cyclomatic,complexity,refactoring,code,smell,design,patterns,SOLID,principles,OOP,encapsulation,inheritance,polymorphism,abstract,base,class,interface,multiple,inheritance,mixin,composition,dependency,injection,adapter,observer,command,chain,responsibility,singleton,factory,prototype,builder,flyweight,structural,creational,behavioral,decorator,context,manager,magic,methods,dunder,methods,iterators,generators,comprehension,lambda,functions,first,class,functions,higher,order,functions,decorators,annotations,type,hinting,exceptions,errors,try,except,finally,raise,assert,logging,debugging,profiling,memory,usage,cpu,usage,I/O,bound,CPU,bound,GIL,multiprocessing,multithreading,concurrency,parallelism,asyncio,async,await,futures,promises,threads,processes,globals,locals,heap,stack,recursion,dynamic,programming,algorithm,data,structure
s,array,list,tuple,set,dict,hash,table,linked,list,tree,graph,queue,stack,heap,binary,search,tree,DFS,BFS,shortest,path,graph,traversal,Dijkstra,A*,sorting,searching,divide,conquer,dynamic,programming,greedy,algorithms,backtracking,bit,manipulation,regular,expressions,regex,match,search,findall,sub,group,capture,lookahead,lookbehind,greedy,lazy,capturing,non-capturing,POSIX,PCRE,Unicode,web,scraping,crawling,BeautifulSoup,Selenium,requests,HTTP,HTML,CSS,DOM,XPath,JSON,APIs,databases,SQL,NoSQL,PostgreSQL,MySQL,SQLite,MongoDB,Cassandra,Redis,ORM,SQLAlchemy,Peewee,Django,ORM,ACID,transactions,indexing,joins,aggregations,normalization,denormalization,ER,diagram,data,modeling,data,warehouse,ETL,OLAP,OLTP,big,data,MapReduce,Hadoop,Spark,Hive,Pig,Kafka,Storm,Flink,realtime,processing,batch,processing,stream,processing,lake,stream,data,science,statistics,probability,distributions,mean,median,mode,variance,standard,deviation,correlation,covariance,confidence,interval,hypothesis,testing,t-test,ANOVA,chi-square,test,regression,linear,regression,multiple,regression,logistic,regression,decision,trees,random,forest,SVM,clustering,K-means,hierarchical,clustering,DBSCAN,PCA,ICA,LDA,QDA,Naive,Bayes,K-NN,ensemble,methods,bagging,boosting,stacking,time,series,analysis,ARIMA,SARIMA,stationarity,seasonality,trend,autocorrelation,partial,autocorrelation,outlier,detection,anomaly,detection,feature,engineering,feature,scaling,one-hot,encoding,missing,value,imputation,under,over,sampling,SMOTE,Tomek,links,PCA,t-SNE,statistics,descriptive,inferential,p-value,confidence,interval,error,type,I,II,bias,variance,cross,validation,k-fold,bootstrap,resampling,MLE,Bayesian,inference,prior,posterior,likelihood,conjugate,prior,credible,interval,Markov,chain,Monte,Carlo,MCMC
I find I get significantly more topic-relevant and intelligent answers when I do this.
While I'm here, I almost always add "Do not write introductions or conclusions" and "Do not write lists" before every prompt.
That's because lists seem to stick it into listicle data and introductions and conclusions reflect a similar bias as well. It's just smarter in my experience when you ask it to not use those formats of reply.
Currently working on benchmarking some of these little prompt suggestions as I think they can significantly improve performance on all tasks
Hope it helps & blessings on your work ✨🧿️ | 2023-07-31T10:43:38 | https://www.reddit.com/r/LocalLLaMA/comments/15ecn67/adding_keywords_related_to_topic_to_llm_context/ | hanjoyoutaku | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ecn67 | false | null | t3_15ecn67 | /r/LocalLLaMA/comments/15ecn67/adding_keywords_related_to_topic_to_llm_context/ | false | false | self | 1 | null |
LLaMA 2 models tuned for NLP? | 1 | So I am very, VERY impressed with Llama 2. It understands so many languages and can ignore irrelevant information easily. I would like it to classify, summarize, or extract some data from text. But here lies a problem:
- Base (at least 7B) model is too stupid to answer instructions reliably
- Chat model answers instructions, but often spits out fluff, like justifications, "Sure, here's <whatever I wanted>", or some moralizing bullshit.
Is there a model that just answers questions? | 2023-07-31T10:59:18 | https://www.reddit.com/r/LocalLLaMA/comments/15ecybr/llama_2_models_tuned_for_nlp/ | RicoElectrico | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ecybr | false | null | t3_15ecybr | /r/LocalLLaMA/comments/15ecybr/llama_2_models_tuned_for_nlp/ | false | false | self | 1 | null |
Any other smaller LLM (smaLLM? :D) users here? | 1 | I'll introduce myself, because ... idk.
Hi, I'm Sol!
I've been working with LLMs since they became popular thanks to ChatGPT. My first attempts at running LLMs locally were with transformers, running small GPTs etc., with not-completely-useless results.
Eventually I migrated to gpt4all, but now I'm using llamacpp via the python wrapper. Trying to slowly inch myself closer and closer to the metal.
I'm curious!
I was wondering about how many other people would prefer seeing more 3B (or less) LLMs being created and, even better, converted to the latest GGML format.
I'm currently a big fan of MiniOrca 3B Q4_1. I'm using it for text analysis and it provides really good results, but I'm still working on refining my prompt creation. On my G15's 3060 with 6 GB of VRAM, fully offloaded, it runs at 70-80 tokens per second, which is *amazingly fast* (compared to all my past experiences).
I have questions!
If you're a smaLLM user, what are you doing with yours? Does it perform the task well? Which one are you using?
Would you also like to see more small models being created and converted?
Do you agree that the future most likely lies in mimicking the parallellity (isthataword?) of the human brain in form of many small, fast, hyperfocused LLMs working under the command of a more sophisticated, bigger LLM?
Did talking to your LLM eventually make you aware of potentially more refined ways of expressing yourself online, in public? As in: Adapting your way of expressing yourself towards achieving higher precision/accuracy in the responses of other humans?
Thank you! :D | 2023-07-31T11:19:19 | https://www.reddit.com/r/LocalLLaMA/comments/15edcz2/any_other_smaller_llm_smallm_d_users_here/ | Solstice_Projekt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15edcz2 | false | null | t3_15edcz2 | /r/LocalLLaMA/comments/15edcz2/any_other_smaller_llm_smallm_d_users_here/ | false | false | self | 1 | null |
How to overcome the issues of the limit of ~4,000 tokens per input, when dealing with documents summarization? | 1 | As we all know, Llama 2 is quite impressive and performs summarization-related tasks well.
But how would you deal with summarizing a long document (let's say a book, for example)? Is the only solution to split the text into subsets and feed them iteratively, obtaining a summary of summaries until the result is OK?
What if I wanted instead to extract information from it with natural language?(example: what is the name of the main character of this book?) | 2023-07-31T12:09:33 | https://www.reddit.com/r/LocalLLaMA/comments/15eef2s/how_to_overcome_the_issues_of_the_limit_of_4000/ | Sad-Vegetable6767 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eef2s | false | null | t3_15eef2s | /r/LocalLLaMA/comments/15eef2s/how_to_overcome_the_issues_of_the_limit_of_4000/ | false | false | self | 1 | null |
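The usual workaround is exactly that: chunk, summarize each chunk, then summarize the summaries. A rough sketch, where `summarize()` stands in for whatever local model call is available and the chunk sizes are placeholder values, not tuned numbers:

    # Sketch of recursive "summary of summaries" for documents longer than the context window.
    # `summarize` is a placeholder for a call into llama.cpp, an HTTP endpoint, etc.
    def summarize(text: str) -> str:
        raise NotImplementedError("call your local model here")

    def chunk(text: str, size: int = 3000, overlap: int = 200) -> list[str]:
        parts, start = [], 0
        while start < len(text):
            parts.append(text[start:start + size])
            start += size - overlap
        return parts

    def summarize_long(text: str, max_chars: int = 3000) -> str:
        while len(text) > max_chars:
            text = "\n".join(summarize(c) for c in chunk(text, max_chars))  # each pass shrinks the text
        return summarize(text)

For targeted questions like "who is the main character?", the common alternative is retrieval: embed the chunks, pull only the relevant ones for the question, and ask the model over that subset instead of summarizing everything.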
How to test if 1 GPU is enough for concurrency when trying LLAMA | 1 | Hello all, I'm back again with a question: how do you test whether 1 GPU is enough to handle concurrent requests to LLaMA models? I used the oobabooga installer.
Thank you | 2023-07-31T12:46:18 | https://www.reddit.com/r/LocalLLaMA/comments/15ef8ks/how_to_test_if_1_gpu_is_enough_for_concurrency/ | Cultural-Pizza-1916 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ef8ks | false | null | t3_15ef8ks | /r/LocalLLaMA/comments/15ef8ks/how_to_test_if_1_gpu_is_enough_for_concurrency/ | false | false | self | 1 | null |
Plausible to Train Small Models on MacBook Pro M2 Max? | 1 | To the best of my understanding it's more cost-effective to build a machine with Nvidia cards, but I'd like to be able to train small models (like Falcon 7B), and I'm buying a new MacBook anyway.
If I get an M2 Max with 12‑core CPU, 30‑core GPU, 16‑core Neural Engine and 64GB unified memory, is it plausible I would be able to train small, task/domain-specific models? If anyone has experience with this, or has advice on what I'd need to do (I think I'll need to install Ubuntu because I'll need to use CUDA?) I'd be grateful. | 2023-07-31T13:19:04 | https://www.reddit.com/r/LocalLLaMA/comments/15eg0j5/plausible_to_train_small_models_on_macbook_pro_m2/ | Mbando | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eg0j5 | false | null | t3_15eg0j5 | /r/LocalLLaMA/comments/15eg0j5/plausible_to_train_small_models_on_macbook_pro_m2/ | false | false | self | 1 | null |
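One note worth hedging carefully: CUDA is Nvidia-only, so installing Ubuntu would not make it available on Apple silicon; PyTorch exposes the M-series GPU through the MPS backend instead. A minimal check, assuming a recent PyTorch build:

    # Quick check that PyTorch can see the Apple-silicon GPU via the MPS backend.
    import torch

    device = "mps" if torch.backends.mps.is_available() else "cpu"
    print(f"Training device: {device}")

    x = torch.randn(1024, 1024, device=device)
    print((x @ x).mean())  # runs on the M2 GPU when MPS is available

Whether a full 7B fine-tune fits in 64GB of unified memory is a separate question; adapter-style approaches (LoRA/QLoRA) are the more realistic target for task- or domain-specific tuning on this hardware.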
GGML models repeating themselves on Webui | 1 | So pleased that I found I can run 60B+ GGML models on my CPU, albeit slowly, but I'm finding that they repeat the same text over and over with only minor variations. I've played around with the repetition penalty, maxing it out, as well as increasing temperature and top-p. Is there anything I can do to prevent this, such as adding something to the prompt or setting any runtime parameters?
A Google search suggests this has been an issue for weeks with GGML.
Thanks in advance | 2023-07-31T13:23:16 | https://www.reddit.com/r/LocalLLaMA/comments/15eg47j/ggml_models_repeating_themselves_on_webui/ | Mediocre_Tourist401 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eg47j | false | null | t3_15eg47j | /r/LocalLLaMA/comments/15eg47j/ggml_models_repeating_themselves_on_webui/ | false | false | self | 1 | null |
Clarification on using AI-generated data for commercial model fine-tuning | 9 | I'm looking to build a clearer picture of which fine tuning datasets can and cannot be used for fine tuning models which will be deployed commercially. Specifically, I'm interested in understanding the consensus on using datasets containing samples generated by OpenAI's GPT models.
I've seen conflicting perspectives - some say it's prohibited, while others claim it's acceptable. For example:
- https://huggingface.co/tiiuae/falcon-40b-instruct/discussions/13
- https://www.reddit.com/r/MachineLearning/comments/123oovw/d_instruct_datasets_for_commercial_use/
My interpretation is that outputs generated by OpenAI's GPT models cannot be included in commercial fine-tuning datasets, as that would violate [Section 2.c(iii) of their Terms of Use](https://openai.com/policies/terms-of-use#:~:text=use%20output%20from%20the%20Services%20to%20develop%20models%20that%20compete%20with%20OpenAI%3B).
However, I'm open to hearing others' perspectives, as I know this is a grey area.
To summarise I'm hoping to gauge what the community's consensus on the acceptability of using AI-generated data for commercial model fine-tuning? | 2023-07-31T14:26:59 | https://www.reddit.com/r/LocalLLaMA/comments/15ehp5d/clarification_on_using_aigenerated_data_for/ | nuusain | self.LocalLLaMA | 2023-07-31T16:52:38 | 0 | {} | 15ehp5d | false | null | t3_15ehp5d | /r/LocalLLaMA/comments/15ehp5d/clarification_on_using_aigenerated_data_for/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': '_rEX1xvwdv17x6NFAWQpYFNONQ0BKA5Qw0Eo0JX0zWU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=108&crop=smart&auto=webp&s=17279fa911dbea17f2a87e187f47ad903120ba87', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=216&crop=smart&auto=webp&s=12bf202fa02a8f40e2ad8bab106916e06cceb1b4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=320&crop=smart&auto=webp&s=90ff2c682d87ee483233b1136984d608f8b5c5c3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=640&crop=smart&auto=webp&s=2bc95e1b2395af837db2786db2f84b9c7f86370a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=960&crop=smart&auto=webp&s=67e903b600e020b7bcf93fc2000ed3cf95cb4dbb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=1080&crop=smart&auto=webp&s=b4cb1ebc087816d879ac777ed29f74d454f35955', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?auto=webp&s=a4fb691b1b470f21e5ef01685267735cb15b7735', 'width': 1200}, 'variants': {}}]} |
Non profit for training open source LLMs | 1 | [removed] | 2023-07-31T14:39:19 | https://www.reddit.com/r/LocalLLaMA/comments/15ei0q8/non_profit_for_training_open_source_llms/ | NessimBA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ei0q8 | false | null | t3_15ei0q8 | /r/LocalLLaMA/comments/15ei0q8/non_profit_for_training_open_source_llms/ | false | false | self | 1 | null |
KoboldCpp as a module (without web interface) | 1 | Here is an example script:
import sys
sys.path.insert(0, 'koboldcpp-1.36')
import koboldcpp
sys.path.pop(0)
koboldcpp.run_as_module = True
from argparse import Namespace
koboldcpp.args = Namespace(model=None, model_param='D:/koboldcpp/guanaco-65B.ggmlv3.q4_K_M.bin', port=5001, port_param=5001, host='', launch=False, lora=None, threads=16, blasthreads=16, psutil_set_threads=False, highpriority=False, contextsize=2048, blasbatchsize=512, ropeconfig=[0.0, 10000.0], stream=True, smartcontext=False, unbantokens=True, bantokens=None, usemirostat=None, forceversion=0, nommap=False, usemlock=False, noavx2=False, debugmode=False, skiplauncher=False, hordeconfig=None, noblas=False, useclblast=None, usecublas=None, gpulayers=0, mirostat=None)
koboldcpp.main(koboldcpp.args)
genparams = {'n': 1, 'max_context_length': 2048, 'max_length': 512, 'rep_pen': 1.15, 'temperature': 1.35, 'top_p': 1, 'top_k': 0, 'top_a': 0, 'typical': 1, 'tfs': 0.69, 'rep_pen_range': 2048, 'rep_pen_slope': 0.1, 'sampler_order': [6, 0, 1, 3, 4, 2, 5], 'prompt': "I'm going to write a post in Reddit.\nQuestion: How do I deal with a rude person?\nAnswer: ", 'quiet': True}
v = koboldcpp.generate(**genparams)
print(v)
genparams['prompt'] += v
genparams['prompt'] += "\nQuestion: But what if the person won't listen?\nAnswer:"
v = koboldcpp.generate(**genparams)
print(v)
What you want is
1. Log args and genparams as you try the "standard generation"
2. Add "run\_as\_module" control variable (stop right before starting web server)
3. Allow unused keyworded parameters in generate (such as "n", "quiet", and "typical")
Modifications to KoboldCpp.py and reasons:
### CONTEXT: logging and web server stop
import json, sys, http.server, time, asyncio, socket, threading
from concurrent.futures import ThreadPoolExecutor
# INSERT TWO LINES
logfilename = "logfile.txt"
run_as_module = False
### INSERT **kwargs to allow extraneous parameters
def generate(prompt,max_length=20, max_context_length=512, temperature=0.8, top_k=120, top_a=0.0, top_p=0.85, typical_p=1.0, tfs=1.0, rep_pen=1.1, rep_pen_range=128, mirostat=0, mirostat_tau=5.0, mirostat_eta=0.1, sampler_order=[6,0,1,3,4,2,5], seed=-1, stop_sequence=[], stream_sse=False, **kwargs):
### CONTEXT: logging inputs, outputs and parameters
    if args.debugmode!=-1:
        utfprint("\nOutput: " + recvtxt)
    # INSERT THREE LINES
    with open(logfilename, "a", encoding='utf-8') as f:
        f.write(repr(genparams)+"\n")
        f.write(recvtxt+"\n")
### CONTEXT: logging args and stopping
    if not loadok:
        print("Could not load model: " + modelname)
        time.sleep(2)
        sys.exit(3)
    # INSERT FOUR LINES
    with open(logfilename, "a", encoding='utf-8') as f:
        f.write(repr(args))
        f.write("\n")
    if run_as_module: return
Have fun! | 2023-07-31T14:48:00 | https://www.reddit.com/r/LocalLLaMA/comments/15ei920/koboldcpp_as_a_module_without_web_interface/ | SpecialNothingness | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ei920 | false | null | t3_15ei920 | /r/LocalLLaMA/comments/15ei920/koboldcpp_as_a_module_without_web_interface/ | false | false | self | 1 | null |
What's the best way for me to chat with old FOIA documents? | 1 | I'd like to chat with old Freedom of Information Act documents. They're PDFs, but they're all just pictures of text. Nothing was actually typed and saved as a PDF file, because we're talking about documents from the 1950s and 1960s. But maybe the AI can still read this?
Anyway, I tried using Quivr to upload a FOIA PDF, but it only allows a maximum of 6.5 MB. My file was 11 MB. It's kind of useless if you have to split your file up into little pieces, because the answers would be skewed.
Any help here would be greatly appreciated!
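Scanned PDFs like these need an OCR pass before any chat or retrieval tool can use them. A rough sketch with pdf2image and pytesseract (both free; the poppler and tesseract system packages must also be installed), producing a plain-text file that can then be split and embedded; the file names and DPI value are placeholders:

    # Sketch: OCR a scanned FOIA PDF into plain text for later embedding/chat.
    # Requires `pip install pdf2image pytesseract` plus the poppler and tesseract binaries.
    from pdf2image import convert_from_path
    import pytesseract

    def ocr_pdf(pdf_path: str, out_path: str) -> None:
        pages = convert_from_path(pdf_path, dpi=300)  # render each page to an image
        with open(out_path, "w", encoding="utf-8") as f:
            for i, page in enumerate(pages, start=1):
                f.write(f"\n--- page {i} ---\n")
                f.write(pytesseract.image_to_string(page))

    ocr_pdf("foia_document.pdf", "foia_document.txt")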
​ | 2023-07-31T14:51:11 | https://www.reddit.com/r/LocalLLaMA/comments/15eic85/whats_the_best_way_for_me_to_chat_with_old_foia/ | rondonjohnald | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eic85 | false | null | t3_15eic85 | /r/LocalLLaMA/comments/15eic85/whats_the_best_way_for_me_to_chat_with_old_foia/ | false | false | self | 1 | null |
Seeking Guidance on Building JSONL Dataset for Fine-Tuning LLaMA 2 Model on Product Mapping (sequence2sequence process ) | 1 | Hello r/LocalLLaMA Community,
I'm currently working on a project and am facing a challenge that I'm sure many of you might have tackled before. Any guidance or advice would be immensely appreciated.
**Project Background:**
The project involves handling product data, each product being characterized by its distinct attributes. To give you an idea of how the data is structured, here's a table (1 row , two columns):
​
|Input|Output|
|:-|:-|
|Name:Club Dry Singlet Navy - 3XL /,\\ EAN:5054773353592 /,\\ ET\_Reference:QA0057317693XL /,\\ Description:De Club Dry Singlet Navy is een ideale singlet om te sporten. Het is gemaakt van lichtgewicht polyester met een VapoDri finish. Hierdoor: Voert het vocht en zweet snel af. Is de pasvorm uiterst comfortabel door geen naden bovenop de schouders. Kortom, de Club Dry Singlet Navy is een perfecte keuze voor het sporten, mede door de snelle afvoer van vocht en zweet en de comfortabele pasvorm. /,\\ MarketPlace:BOL\_COM /,\\ ET\_Price:27.95 EOP| Brand:CANTERBURY /,\\ CHARACTERISTIC\_395:Polyester /,\\ Colour:Navyblauw /,\\ Gender:HEREN /,\\ Gender\_2:HEREN /,\\ Material:Polyester /,\\ PRODUCT\_TYPE:t-shirt korte mouwen /,\\ SIZE\_97:3XL. (Z349: SIZE MEN TOP) /,\\ SPORT\_174:Rugby league /,\\ Seasonal Collection:Lente/Zomer /,\\ Seasonal Year:Never out of stock /,\\ Size Clothing:4x\_l /,\\ Sleeve Length:Korte mouw /,\\ Target Audience:Volwassenen /,\\ Type of Sport:Rugby league EOC||
​
Previously, I was using OpenAI's Davinci model for this project, where every attribute was separated by specific special characters, but due to its high cost, I have decided to shift to LLaMA 2.
**Objective:**
My goal is to fine-tune the model for 'Product Mapping'. In essence, once the training is done, the model should be able to generate attributes aligning with the input attributes it's given.
**Here's Where I Need Help:**
I want to build a dataset in jsonl format for training the LLaMA 2 model, using a sequence-to-sequence approach. However, I am uncertain about the dataset structure that would be most effective for this purpose.
Most datasets I've come across follow a human: assistant or question: answer format. Given the nature of my task, it's unclear to me how to best format my dataset for successful training. Do I need to include specific instructions at the start of the dataset?
**The Ask:**
Could anyone who has had a similar experience or has worked with LLaMA 2 in a similar context, share their advice or suggestions? Any guidance on how to format the dataset for my sequence-to-sequence approach, or other best practices would be greatly appreciated.
Looking forward to your valuable inputs!
Cheers! | 2023-07-31T15:19:06 | https://www.reddit.com/r/LocalLLaMA/comments/15ej23k/seeking_guidance_on_building_jsonl_dataset_for/ | clichedwriterr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ej23k | false | null | t3_15ej23k | /r/LocalLLaMA/comments/15ej23k/seeking_guidance_on_building_jsonl_dataset_for/ | false | false | self | 1 | null |
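One workable JSONL layout (an assumption, not the only valid one) is a single "text" field per line that concatenates a short instruction, the input attribute string, and the target attribute string, so the sequence-to-sequence pair survives intact while still training as plain causal text; a sketch of writing such a file, keeping whatever delimiters (/,\ EOP, EOC) the data already uses:

    # Sketch: build a JSONL file where each line is one instruction/input/output example.
    # The prompt template is an assumption; adjust wording and delimiters to taste.
    import json

    TEMPLATE = (
        "### Instruction:\nMap the product attributes below to the target attributes.\n"
        "### Input:\n{source}\n### Output:\n{target}"
    )

    def write_jsonl(pairs, path):
        with open(path, "w", encoding="utf-8") as f:
            for source, target in pairs:
                f.write(json.dumps({"text": TEMPLATE.format(source=source, target=target)},
                                   ensure_ascii=False) + "\n")

    pairs = [("Name:Club Dry Singlet Navy - 3XL ... EOP", "Brand:CANTERBURY ... EOC")]
    write_jsonl(pairs, "product_mapping.jsonl")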
Best model for pentesting? | 1 | Does anyone know the best model for penetration testing purposes? Seems like a lot of them are unwilling or very hesitant to provide much if any information (commands, how-to’s, etc.) due to it being “unethical” or “illegal.” ChatGPT tends to somewhat provide information but only after lots of going back and forth. | 2023-07-31T15:58:57 | https://www.reddit.com/r/LocalLLaMA/comments/15ek2kh/best_model_for_pentesting/ | FrisbeeSunday | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ek2kh | false | null | t3_15ek2kh | /r/LocalLLaMA/comments/15ek2kh/best_model_for_pentesting/ | false | false | self | 1 | null |
2x a6000s but only 3.5 tokens/s with 7b-chat-hf. What am I doing wrong? | 1 | I have access to a brand-new Dell workstation with 2 A6000s with 48GB of VRAM each. It has a Xeon processor and 128GB of memory. A "decent" machine, to say the least. I am trying to run the Llama 7b-hf model via oobabooga but am only getting 7-8 tokens a second. I am just trying to run the base model. I am brand new to this, so I am not sure if this is what I should expect or if I am doing something wrong in terms of the parameters or settings I am using. Right now, in the web UI the model loader is set to transformers, memory for my GPUs and CPU is maxed out (48GB, 48GB, 128GB), bfloat16, nf4, and I left the other boxes unchecked.
Any help would be appreciated! | 2023-07-31T16:17:48 | https://www.reddit.com/r/LocalLLaMA/comments/15ekkx4/2x_a6000s_but_only_35_tokenss_with_7bchathf_what/ | blind_trooper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ekkx4 | false | null | t3_15ekkx4 | /r/LocalLLaMA/comments/15ekkx4/2x_a6000s_but_only_35_tokenss_with_7bchathf_what/ | false | false | self | 1 | null |
Fine tune Wizard Vicuna 7B using domain specific dataset | 1 | I'm trying to fine tune the Wizard Vicuna 7B model using medical domain dataset. What is the best way to do so? And also what is the most efficient way to create a dataset for fine tuning this model since the dataset that I currently have is basically some txt files (not in instruction format)? | 2023-07-31T16:26:53 | https://www.reddit.com/r/LocalLLaMA/comments/15ektcg/fine_tune_wizard_vicuna_7b_using_domain_specific/ | FirstReporter4868 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ektcg | false | null | t3_15ektcg | /r/LocalLLaMA/comments/15ektcg/fine_tune_wizard_vicuna_7b_using_domain_specific/ | false | false | self | 1 | null |
If I want consistency through a long story, would Lora training be an effective way to get around the token limits? | 1 | If using oobabooga, would Lora training a model with the generated conversation thus far help the model keep context indefinitely? Or at the very least would it help it understand the overall "theme" better?
If so are there any good guides on training? | 2023-07-31T16:31:48 | https://www.reddit.com/r/LocalLLaMA/comments/15ekxwc/if_i_want_consistency_through_a_long_story_would/ | countrycruiser | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ekxwc | false | null | t3_15ekxwc | /r/LocalLLaMA/comments/15ekxwc/if_i_want_consistency_through_a_long_story_would/ | false | false | self | 1 | null |
Vicuna 13b locked at 512 tokens | 1 | Using privateGPT, I'm trying to use the Vicuna 13B model, but whatever I do it is capped at 512 tokens, even when I specify in my requirements to increase this. For other models I can get this working.
I have heard good things about this model so I wanted to give it a try. Does anyone have any ideas? | 2023-07-31T16:35:55 | https://www.reddit.com/r/LocalLLaMA/comments/15el1lt/vicuna_13b_locked_at_512_tokens/ | lsimcoates | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15el1lt | false | null | t3_15el1lt | /r/LocalLLaMA/comments/15el1lt/vicuna_13b_locked_at_512_tokens/ | false | false | self | 1 | null |
Need help loading local dataset for Qlora training | 1 | I'm trying to train my own qlora and am following a tutorial. The problem is that the tutorial uses a [huggingface dataset](https://huggingface.co/datasets/AlexanderDoria/novel17_test/tree/main), which does load properly for me, but if I try to load the same exact data locally, it fails:
orig_dataset = load_dataset('AlexanderDoria/novel17_test', split="train")
But if I try to load the same dataset locally, it doesn't load properly:
    new_dataset = load_dataset(
        "json",
        data_files="./novel17_train.jsonl",
        features=Features({'text': Value(dtype='string', id=None)}),
        field="text",
        split="train",
    )
I can quickly tell something has gone wrong by comparing the output of:
print(orig_dataset)
print(new_dataset)
which should be the same but is actually:
Dataset({
features: ['text'],
num_rows: 1
})
Dataset({
features: ['text'],
num_rows: 0
}) | 2023-07-31T16:44:56 | https://www.reddit.com/r/LocalLLaMA/comments/15el9nn/need_help_loading_local_dataset_for_qlora_training/ | feldcot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15el9nn | false | null | t3_15el9nn | /r/LocalLLaMA/comments/15el9nn/need_help_loading_local_dataset_for_qlora_training/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Mxe46f_-BMRsLioYlhVeQB2lSHKxmvi8c_TVpwaypAM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/0QGKrCq74FL_ogxYqWxEqCPH4TywnTfoMQbHw6wvIUQ.jpg?width=108&crop=smart&auto=webp&s=342faeee6eeb90e47e572ff4ccc57331cee83ac6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/0QGKrCq74FL_ogxYqWxEqCPH4TywnTfoMQbHw6wvIUQ.jpg?width=216&crop=smart&auto=webp&s=aa32128b73d7e98f0c53459e1ad7c44184727913', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/0QGKrCq74FL_ogxYqWxEqCPH4TywnTfoMQbHw6wvIUQ.jpg?width=320&crop=smart&auto=webp&s=e5a44c413034a16e86c0f82a14d9a1e5ec51df30', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/0QGKrCq74FL_ogxYqWxEqCPH4TywnTfoMQbHw6wvIUQ.jpg?width=640&crop=smart&auto=webp&s=5923f0a1b42b33af333a020bdfe7f931f35c4aae', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/0QGKrCq74FL_ogxYqWxEqCPH4TywnTfoMQbHw6wvIUQ.jpg?width=960&crop=smart&auto=webp&s=228b01731216f3d85d62e5b5b843dfb63006efb5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/0QGKrCq74FL_ogxYqWxEqCPH4TywnTfoMQbHw6wvIUQ.jpg?width=1080&crop=smart&auto=webp&s=34c357af3c9ace724065cf6797317a3a9004a501', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/0QGKrCq74FL_ogxYqWxEqCPH4TywnTfoMQbHw6wvIUQ.jpg?auto=webp&s=80a9b527b1a6092836b544926c704216faeb35b4', 'width': 1200}, 'variants': {}}]} |
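A likely culprit, though this is an assumption without seeing the file: `field=` tells the json builder to read rows from inside a top-level key of a single JSON object, whereas a standard JSONL file with one `{"text": ...}` object per line loads without it. A sketch of the plainer call:

    # Sketch: loading line-delimited JSON where each line is {"text": "..."}.
    # Dropping `field=` (and letting the schema be inferred) is usually enough for plain JSONL.
    from datasets import load_dataset

    new_dataset = load_dataset("json", data_files="./novel17_train.jsonl", split="train")
    print(new_dataset)  # should report num_rows > 0 if each line parses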
LLM ballparks | 1 | I'm just getting started with language models but the technology shift has been pretty exciting.
My experience with LLMs has so far been: Using GPT-3.5 and GPT-4.0 for light assistance with work, as well as help with creative writing and writing analysis, and solo roleplay/gameplay experimentation stuff; and running and experimenting with smaller LLMs like nous-hermes-13B on my home PC.
My computer is old (running a GTX 970!) and I've been long considering an upgrade.
With the models available to the public (via huggingface or whatever) as of the time of this post, what are some rough ballparks as to what I might be able to reasonably expect on something like the following:
\-RTX 4090 (haven't actually started looking for the specific card yet)
\-Ryzen 9 7950X (24GB, 4.5GHZ 16-core - though I've heard more cores can actually hurt when running an LLM, so maybe the 9 7900X is better with fewer cores but higher clock)
\-96GB (2x48) of DDR5-6800 RAM
"What to expect" is pretty vague, I know it will vary based on use. Obviously I'm not expecting results on a home computer that will be on-par with what I get from GPT-4, I'm more curious what the current state of things are. | 2023-07-31T17:06:36 | https://www.reddit.com/r/LocalLLaMA/comments/15eltml/llm_ballparks/ | NitroXanax | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eltml | false | null | t3_15eltml | /r/LocalLLaMA/comments/15eltml/llm_ballparks/ | false | false | self | 1 | null |
Question - Is PrivateGPT or LocalGPT .db .pkl files on the way to building a model that would be sharable? I'm feeding documentation for a new code syntax. | 1 | [removed] | 2023-07-31T17:52:56 | https://www.reddit.com/r/LocalLLaMA/comments/15en14f/question_is_privategpt_or_localgpt_db_pkl_files/ | Iam_a_honeybadger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15en14f | false | null | t3_15en14f | /r/LocalLLaMA/comments/15en14f/question_is_privategpt_or_localgpt_db_pkl_files/ | false | false | default | 1 | null |
4090 GPU paired with which CPU? | 1 | Hey,
Currently building a custom PC for the 4090. I will mainly be using it for Stable Diffusion but would like to dive into the world of AI and potentially some deepfake.
I am currently thinking of just getting the AMD 7800X3D.
Do you think this would be okay to run most things (I know a bit vague. GPT4All would be something I would like to try) or should I spend a bit more and get a better CPU? | 2023-07-31T17:53:58 | https://www.reddit.com/r/LocalLLaMA/comments/15en23m/4090_gpu_paired_with_which_cpu/ | Visible_Guest_2986 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15en23m | false | null | t3_15en23m | /r/LocalLLaMA/comments/15en23m/4090_gpu_paired_with_which_cpu/ | false | false | self | 1 | null |
Running LLMs locally on Android | 1 | [removed] | 2023-07-31T18:04:33 | https://www.reddit.com/r/LocalLLaMA/comments/15encah/running_llms_locally_on_android/ | atezan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15encah | false | null | t3_15encah | /r/LocalLLaMA/comments/15encah/running_llms_locally_on_android/ | false | false | self | 1 | null |
Does anyone know how to make a variant of LLaMA 2? | 1 | I want to make a personal project of a chatbot based on LLaMA 2, like Vicuna, Beluga, etc.
Just searching online I can't find an exact way of how to do it, hence my question, does anyone know how to do it? Or is there a tutorial on how to do it? | 2023-07-31T18:12:18 | https://www.reddit.com/r/LocalLLaMA/comments/15enjgy/does_anyone_know_how_to_make_a_variant_of_llama_2/ | danielbrdz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15enjgy | false | null | t3_15enjgy | /r/LocalLLaMA/comments/15enjgy/does_anyone_know_how_to_make_a_variant_of_llama_2/ | false | false | self | 1 | null |
PC configuration to run a llama2 70B | 1 | Greetings,
I'm trying to figure out what might suit my case without having to sell my kidneys.
I would like to be able to run llama2 and future similar models locally on the gpu, but I am not really sure about the hardware requirements.
I had been thinking about an RTX A6000, but reading around it seems like it may not be enough.
Can you cordially help me understand? | 2023-07-31T18:36:09 | https://www.reddit.com/r/LocalLLaMA/comments/15eo58x/pc_configuration_to_run_a_llama2_70b/ | SpatolaNellaRoccia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eo58x | false | null | t3_15eo58x | /r/LocalLLaMA/comments/15eo58x/pc_configuration_to_run_a_llama2_70b/ | false | false | self | 1 | null |
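A rough back-of-the-envelope on the memory side (KV-cache and runtime overhead excluded, so treat these as lower bounds):

    # Back-of-the-envelope weight memory for a 70B-parameter model.
    params = 70e9
    for name, bytes_per_param in [("fp16", 2.0), ("int8", 1.0), ("4-bit", 0.5)]:
        print(f"{name}: {params * bytes_per_param / 1e9:.0f} GB of weights")
    # fp16: 140 GB, int8: 70 GB, 4-bit: 35 GB -- plus KV cache and runtime overhead

That is why a single 48 GB card is workable mainly for 4-bit quantized 70B (and even then context length is constrained), while fp16 needs multiple GPUs or heavy CPU offloading.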
Alignment Broke Llama-2’s Brain | 1 | Context: I asked it to make a weather script. | 2023-07-31T18:36:30 | Difficult-Yam-1347 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15eo5kt | false | null | t3_15eo5kt | /r/LocalLLaMA/comments/15eo5kt/alignment_broke_llama2s_brain/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'n3DRgFb2_UvqInT-gpUrfENhfo5uP1WDaIq1SK1rFcA', 'resolutions': [{'height': 191, 'url': 'https://preview.redd.it/hcutgqthhcfb1.jpg?width=108&crop=smart&auto=webp&s=f59bd30b7347cb744b1c23907b8833a5130f0088', 'width': 108}, {'height': 382, 'url': 'https://preview.redd.it/hcutgqthhcfb1.jpg?width=216&crop=smart&auto=webp&s=37081d5a1b45b4fc4d9f3996a190555c556a3bf4', 'width': 216}, {'height': 566, 'url': 'https://preview.redd.it/hcutgqthhcfb1.jpg?width=320&crop=smart&auto=webp&s=523c24e3c8101010848b57f83f75a752228c2c4e', 'width': 320}, {'height': 1133, 'url': 'https://preview.redd.it/hcutgqthhcfb1.jpg?width=640&crop=smart&auto=webp&s=253baad3a72d5a84bea82b45fe5cb9911ae075ed', 'width': 640}, {'height': 1699, 'url': 'https://preview.redd.it/hcutgqthhcfb1.jpg?width=960&crop=smart&auto=webp&s=a708d01a3cc0a51f7b027d6aa6e372a1eb68502b', 'width': 960}, {'height': 1912, 'url': 'https://preview.redd.it/hcutgqthhcfb1.jpg?width=1080&crop=smart&auto=webp&s=1c42eb7e93ae6b16effe51ad6f6499f72dcd931e', 'width': 1080}], 'source': {'height': 2059, 'url': 'https://preview.redd.it/hcutgqthhcfb1.jpg?auto=webp&s=eef4478974f0131f6919a029d14a792beccbe28e', 'width': 1163}, 'variants': {}}]} |
||
PLEASE HELP ME! What does this error mean and how to solve it? | 1 | Hi,
For the past week, I've been facing this error on the Oobabooga webui when I try to load 8-bit quantized versions of some of TheBloke's models. I am aware that ExLlama only supports 4-bit and not 8-bit quantization, so I try to load the model using AutoGPTQ. However, when I do that I get this error message as shown in the screenshot below:
WARNING:The model weights are not tied. Please use the \`tie\_weights\` method before using the \`infer\_auto\_device\` function.
The particular model I am trying to load is WizardLM-13B-V1.2-GPTQ gptq\_model-8bit-128g.
Could you please help solve this issue?
https://preview.redd.it/v7khhimblcfb1.png?width=1402&format=png&auto=webp&s=4d0c7c726b728abbb7eaa1ed24862099748951c9 | 2023-07-31T19:00:24 | https://www.reddit.com/r/LocalLLaMA/comments/15eormu/please_help_me_what_does_this_error_mean_and_how/ | Iory1998 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eormu | false | null | t3_15eormu | /r/LocalLLaMA/comments/15eormu/please_help_me_what_does_this_error_mean_and_how/ | false | false | 1 | null |
|
Idea: Alternating token generation from a big and small model? | 1 | I was thinking... could we use a guidance style system to generate alternating tokens from a big and a small model? I suspect the large model's next-word (or words) would help steer the smaller model by example, so it would complete the next word (or handful of words) then kick back to the larger model for another word or chunk.
Basically, I'm thinking the larger model effectively steers the output of the smaller one, giving us a large speed increase while still maintaining as much of the large model's capabilities and perplexity as we can.
I played around with this a bit, generating alternating tokens from a large and small model, and it seemed to give me large model level output. Obviously it was a bit slow since I did this manually, but the result seemed significantly better than what the small model could produce. If this could be done without huge slowdown (I don't know how to get around prompt ingestion for each token), it seems like it might be useful. | 2023-07-31T19:00:26 | https://www.reddit.com/r/LocalLLaMA/comments/15eornw/idea_alternating_token_generation_from_a_big_and/ | teachersecret | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eornw | false | null | t3_15eornw | /r/LocalLLaMA/comments/15eornw/idea_alternating_token_generation_from_a_big_and/ | false | false | self | 1 | null |
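A sketch of the idea with two hypothetical model handles, `big` and `small`, each assumed to expose a `next_tokens(prompt, n)` helper; it deliberately ignores the prompt re-ingestion cost mentioned above, which is the real bottleneck:

    # Sketch: alternate short token bursts between a large "steering" model and a small fast one.
    # `big` and `small` are placeholders for any two loaded models with a next_tokens(prompt, n) helper.
    def alternate_generate(big, small, prompt: str, total_tokens: int = 200,
                           big_burst: int = 4, small_burst: int = 12) -> str:
        text = prompt
        produced = 0
        while produced < total_tokens:
            text += big.next_tokens(text, big_burst)      # large model sets the direction
            text += small.next_tokens(text, small_burst)  # small model fills in cheaply
            produced += big_burst + small_burst
        return text[len(prompt):]

Speculative decoding takes the inverse approach (the small model drafts several tokens and the large one verifies them in a single pass), which sidesteps having to trust the small model's tokens outright.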
Accurate answers after fine tuning fastchat | 1 | I fine-tuned FastChat using PEFT LoRA on a custom question-answer dataset. It generates nice, pretty relevant outputs after fine-tuning. However, it doesn't give accurate output for exact questions. Any idea about how to fix it? | 2023-07-31T19:05:57 | https://www.reddit.com/r/LocalLLaMA/comments/15eoxer/accurate_answers_after_fine_tuning_fastchat/ | mathageche | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eoxer | false | null | t3_15eoxer | /r/LocalLLaMA/comments/15eoxer/accurate_answers_after_fine_tuning_fastchat/ | false | false | default | 1 | null |
Oobabooga "chat saver" plugin for Chrome. | 1 | 2023-07-31T19:26:04 | https://chrome.google.com/webstore/detail/chat-saver/onakaaoddpdfopdjagagepfkhpehfbae | zapeggo | chrome.google.com | 1970-01-01T00:00:00 | 0 | {} | 15epgnf | false | null | t3_15epgnf | /r/LocalLLaMA/comments/15epgnf/oobabooga_chat_saver_plugin_for_chrome/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'kUUo21PMU-yaTeFGS1fJK0A3xdftMCwtgvOgLDAEy18', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Y7G61u3whacQwSzCbv6PWKSEM3Drcs4u74OcI7hwCuo.jpg?width=108&crop=smart&auto=webp&s=74827950d19c2f15ca8efff5ccd1f17602831ad7', 'width': 108}], 'source': {'height': 128, 'url': 'https://external-preview.redd.it/Y7G61u3whacQwSzCbv6PWKSEM3Drcs4u74OcI7hwCuo.jpg?auto=webp&s=b5c10a5b465feef8d8b2b8eeaeb2744e3f9a9b0f', 'width': 128}, 'variants': {}}]} |
||
Has anyone had any luck with any models using private GPT? | 1 | Pretty new to this topic, anyone had any luck with any Llm models for privategpt? | 2023-07-31T19:36:25 | https://www.reddit.com/r/LocalLLaMA/comments/15epqfn/has_anyone_had_any_luck_with_any_models_using/ | lsimcoates | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15epqfn | false | null | t3_15epqfn | /r/LocalLLaMA/comments/15epqfn/has_anyone_had_any_luck_with_any_models_using/ | false | false | self | 1 | null |
Getting about .5-1 t/s with a RTX3090 | 1 | So after WSL2 blew up from a Windows 11 update, I did a clean install. Previously I could reliably get something like 20-30t/s from 30b sized models. Now I'm struggling to get even 2 t/s. GGML models get slightly better speeds but gptq and hf models are pretty slow. I'm baffled and have tried many combinations of CUDA toolkit and bitsandbytes (Keith-Hon, jllllll) to try and get it working like it was before. I've installed CUDA toolkits 11.7, 11.8 and 12.1 with their respective drivers (12.1 worked with the 11.8 python packages before) with Python 3.10.11. It performs exactly the same running natively vs conda env
​
https://preview.redd.it/951mlveypcfb1.png?width=566&format=png&auto=webp&s=57c1a4091093cb7b85491b212aec176ddd5125a0
​
https://preview.redd.it/qp2h0r5urcfb1.png?width=1103&format=png&auto=webp&s=83dbdcbb46099c53f1197f23bb31a9da7aba82ec
https://preview.redd.it/29sw58hj1bfb1.png?width=830&format=png&auto=webp&s=9b03fd16000828b73d8913230994667770da7449
https://preview.redd.it/f0ix1v3vpcfb1.png?width=856&format=png&auto=webp&s=3b476a362bc4fcf398c09273f4172e30678f1cc2
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Jun_13_19:42:34_Pacific_Daylight_Time_2023
Cuda compilation tools, release 12.2, V12.2.91
Build cuda_12.2.r12.2/compiler.32965470_0
https://preview.redd.it/d0s1r9drpcfb1.png?width=845&format=png&auto=webp&s=54860cbb908a0cfa65cc4a2d669698f9bf9f218c
Package Version
------------------------- ------------
absl-py 1.4.0
accelerate 0.21.0
aiofiles 23.1.0
aiohttp 3.8.5
aiosignal 1.3.1
altair 5.0.1
anyio 3.7.1
appdirs 1.4.4
asttokens 2.2.1
async-timeout 4.0.2
attrs 23.1.0
auto-gptq 0.3.0+cu118
backcall 0.2.0
bitsandbytes 0.41.0
cachetools 5.3.1
certifi 2023.7.22
charset-normalizer 3.2.0
click 8.1.6
colorama 0.4.6
comm 0.1.3
contourpy 1.1.0
cycler 0.11.0
datasets 2.14.0
debugpy 1.6.7
decorator 5.1.1
dill 0.3.7
diskcache 5.6.1
docker-pycreds 0.4.0
einops 0.6.1
exceptiongroup 1.1.2
executing 1.2.0
exllama 0.0.9+cu118
fastapi 0.95.2
ffmpy 0.3.1
filelock 3.12.2
flexgen 0.1.7
fonttools 4.41.1
frozenlist 1.4.0
fsspec 2023.6.0
gitdb 4.0.10
GitPython 3.1.32
google-auth 2.22.0
google-auth-oauthlib 1.0.0
gradio 3.33.1
gradio_client 0.2.5
grpcio 1.56.2
h11 0.14.0
httpcore 0.17.3
httpx 0.24.1
huggingface-hub 0.16.4
idna 3.4
ipykernel 6.25.0
ipython 8.14.0
jedi 0.18.2
Jinja2 3.1.2
jsonschema 4.18.4
jsonschema-specifications 2023.7.1
jupyter_client 8.3.0
jupyter_core 5.3.1
kiwisolver 1.4.4
linkify-it-py 2.0.2
llama-cpp-python 0.1.77
Markdown 3.4.4
markdown-it-py 2.2.0
MarkupSafe 2.1.3
matplotlib 3.7.2
matplotlib-inline 0.1.6
mdit-py-plugins 0.3.3
mdurl 0.1.2
mpmath 1.2.1
multidict 6.0.4
multiprocess 0.70.15
nest-asyncio 1.5.6
networkx 3.1
numpy 1.24.1
oauthlib 3.2.2
orjson 3.9.2
packaging 23.1
pandas 2.0.3
parso 0.8.3
pathtools 0.1.2
peft 0.5.0.dev0
pickleshare 0.7.5
Pillow 9.3.0
platformdirs 3.9.1
prompt-toolkit 3.0.39
protobuf 3.20.3
psutil 5.9.5
PuLP 2.7.0
pure-eval 0.2.2
pyarrow 12.0.1
pyasn1 0.5.0
pyasn1-modules 0.3.0
pydantic 1.10.12
pydub 0.25.1
Pygments 2.15.1
pyparsing 3.0.9
python-dateutil 2.8.2
python-multipart 0.0.6
pytz 2023.3
pywin32 306
PyYAML 6.0.1
pyzmq 25.1.0
referencing 0.30.0
regex 2023.6.3
requests 2.31.0
requests-oauthlib 1.3.1
rouge 1.0.1
rpds-py 0.9.2
rsa 4.9
safetensors 0.3.1
scipy 1.11.1
semantic-version 2.10.0
sentencepiece 0.1.99
sentry-sdk 1.28.1
setproctitle 1.3.2
six 1.16.0
smmap 5.0.0
sniffio 1.3.0
stack-data 0.6.2
starlette 0.27.0
style 1.1.0
sympy 1.11.1
tensorboard 2.13.0
tensorboard-data-server 0.7.1
tokenizers 0.13.3
toolz 0.12.0
torch 2.0.1+cu118
torchaudio 2.0.2+cu118
torchvision 0.15.2+cu118
tornado 6.3.2
tqdm 4.65.0
traitlets 5.9.0
transformers 4.31.0
typing_extensions 4.7.1
tzdata 2023.3
uc-micro-py 1.0.2
update 0.0.1
urllib3 1.26.13
uvicorn 0.23.1
wandb 0.15.7
wcwidth 0.2.6
websockets 11.0.3
Werkzeug 2.3.6
wheel 0.41.0
xxhash 3.2.0
yarl 1.9.2
Any ideas why my inference times are so long? Would be grateful for any suggestions to try and get this working.
​ | 2023-07-31T20:07:35 | https://www.reddit.com/r/LocalLLaMA/comments/15eqk3n/getting_about_51_ts_with_a_rtx3090/ | jfranzen8705 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eqk3n | false | null | t3_15eqk3n | /r/LocalLLaMA/comments/15eqk3n/getting_about_51_ts_with_a_rtx3090/ | false | false | 1 | null |
|
How can I evaluate the quality of a custom chatbot? | 1 | Hey all, can you recommend some resources or a general approach for benchmarking/evaluating the quality of responses that a chatbot produces? Some factors I'm thinking about that contribute to the subjective quality of my chatbot's responses:
\* Data provided in the prompt
\* Data used to finetune
\* The prompt template
\* The implementation of the bot (going beyond a simple prompt, for example, running an Agent)
To simplify--say I want to make a chatbot that emulates Albert Einstein. What can I do, to evaluate how closely it mimics the real Albert Einstein in a conversation? I'm curious to hear about any ideas or techniques, but for my purposes, quick-and-dirty would go a long way! | 2023-07-31T20:11:38 | https://www.reddit.com/r/LocalLLaMA/comments/15eqntk/how_can_i_evaluate_the_quality_of_a_custom_chatbot/ | dondochaka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eqntk | false | null | t3_15eqntk | /r/LocalLLaMA/comments/15eqntk/how_can_i_evaluate_the_quality_of_a_custom_chatbot/ | false | false | self | 1 | null |
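One quick-and-dirty option is rubric-based scoring with a judge model; a sketch, where `ask_judge` is a placeholder for whatever model call is available and the rubric wording is illustrative:

    # Sketch: score chatbot replies against a persona rubric using a judge model.
    # `ask_judge` is a placeholder; swap in GPT-4, a strong local model, or a human rater.
    RUBRIC = (
        "Rate the reply from 1-5 on each criterion and answer as JSON:\n"
        "- persona_fidelity: does it sound like Albert Einstein (tone, era, interests)?\n"
        "- factual_consistency: does it avoid claims the real Einstein could not have made?\n"
        "- conversational_quality: is it a coherent, engaged answer to the user?"
    )

    def ask_judge(prompt: str) -> str:
        raise NotImplementedError("call your judge model here")

    def score_turn(user_msg: str, bot_reply: str) -> str:
        return ask_judge(f"{RUBRIC}\n\nUser: {user_msg}\nChatbot reply: {bot_reply}\n\nScores:")

Running the same fixed set of probe questions through each prompt/finetune variant and comparing judge scores gives a rough but repeatable signal for the four factors listed above.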
Recommended version? | 1 | I'm lost with all the new TheBloke versions coming out hourly, or even by the minute. If you had to choose, which one would be best for:
1. A Windows 10 system with a 3080Ti , 32GB system RAM and a Ryzen 9 7900X
2. General instructions (how do I...) and writing
3. Preferably uncensored, but that's mainly because I loooathe a machine telling me I'm not responsible enough to handle the information.
I realize this is probably a big, subjective ask, but holy cow there's a lot to choose from. | 2023-07-31T20:40:17 | https://www.reddit.com/r/LocalLLaMA/comments/15erevo/recommended_version/ | 0260n4s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15erevo | false | null | t3_15erevo | /r/LocalLLaMA/comments/15erevo/recommended_version/ | false | false | self | 1 | null |
PromptTools Playground: Open-source, self-hostable UI for LLM experimentation | 1 | 2023-07-31T21:02:59 | https://v.redd.it/xxy4ulxj7dfb1 | hegel-ai | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15es02f | false | {'reddit_video': {'bitrate_kbps': 0, 'dash_url': 'https://v.redd.it/xxy4ulxj7dfb1/DASHPlaylist.mpd?a=1693529585%2CODRiNTJmNjJmZmU3MGI4NDZkNDJlOGEyNWY0NWNkYTQ0MmZmYTc4MjJmNDM4NDA4ODk5MDA1ZDNhNjIzYzM3Yw%3D%3D&v=1&f=sd', 'duration': 68, 'fallback_url': 'https://v.redd.it/xxy4ulxj7dfb1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/xxy4ulxj7dfb1/HLSPlaylist.m3u8?a=1693529585%2CZWFkYzdjMjEyZDA2MGYxMzQ2N2I1ZmU5N2NiZDg1MDk5MzE2MjY2NDlhZmQ0M2Y1NTAxNGRhMjljYmY3MDVjMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/xxy4ulxj7dfb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1740}} | t3_15es02f | /r/LocalLLaMA/comments/15es02f/prompttools_playground_opensource_selfhostable_ui/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'TVeif5JznupmeyrM1RSzI7oxj3mm54zQsBNe7gKHnnc', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/RamjSowwUstX3yDT658gvCJEnje7bUeqNuliL0uMWx0.png?width=108&crop=smart&format=pjpg&auto=webp&s=270a6f890ba6a89a0b9b50391f7606e11256d061', 'width': 108}, {'height': 134, 'url': 'https://external-preview.redd.it/RamjSowwUstX3yDT658gvCJEnje7bUeqNuliL0uMWx0.png?width=216&crop=smart&format=pjpg&auto=webp&s=8b2f5feb21d1674cba2ddd9e1f201c58bce0b3a3', 'width': 216}, {'height': 198, 'url': 'https://external-preview.redd.it/RamjSowwUstX3yDT658gvCJEnje7bUeqNuliL0uMWx0.png?width=320&crop=smart&format=pjpg&auto=webp&s=95a62ec503ba5ba2eae9678b3e7dea7055e9279d', 'width': 320}, {'height': 397, 'url': 'https://external-preview.redd.it/RamjSowwUstX3yDT658gvCJEnje7bUeqNuliL0uMWx0.png?width=640&crop=smart&format=pjpg&auto=webp&s=9b9069e77cd8fbe6bc123162c5b1124f6af0f5bd', 'width': 640}, {'height': 595, 'url': 'https://external-preview.redd.it/RamjSowwUstX3yDT658gvCJEnje7bUeqNuliL0uMWx0.png?width=960&crop=smart&format=pjpg&auto=webp&s=81cbb5ee9d6c72b77e5b76195d2f91a0d94bae61', 'width': 960}, {'height': 670, 'url': 'https://external-preview.redd.it/RamjSowwUstX3yDT658gvCJEnje7bUeqNuliL0uMWx0.png?width=1080&crop=smart&format=pjpg&auto=webp&s=18ddcba0be5ca37c4dcaf3288dd9901b43402c05', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/RamjSowwUstX3yDT658gvCJEnje7bUeqNuliL0uMWx0.png?format=pjpg&auto=webp&s=0dd355b34208ae68f8cdbada814c7c10e75cd02d', 'width': 1740}, 'variants': {}}]} |
|
how to get the chatbot to stop finishing my sentences | 1 | I'm very new to the world of LLMs. I have gotten as far as setting up my system with Kobold.cpp and then downloading and running a chatbot. Straight off the bat, something I find annoying is that often, instead of responding to my request, the bot will just try to finish my sentences and then add some creative detail. For example, if I ask it to summarize what I am going to give it, instead of summarizing it will just add to my content. What am I doing wrong here? Thanks! | 2023-07-31T22:37:00 | https://www.reddit.com/r/LocalLLaMA/comments/15eudzz/how_to_get_the_chatbot_to_stop_finishing_my/ | glenrussellrubin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eudzz | false | null | t3_15eudzz | /r/LocalLLaMA/comments/15eudzz/how_to_get_the_chatbot_to_stop_finishing_my/ | false | false | self | 1 | null
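A common cause of this behavior is feeding a bare continuation prompt to an instruction-tuned model (or using a base model with no template at all). Below is a hedged sketch of wrapping the request in an Alpaca-style instruction template and sending it through KoboldCpp's Kobold-compatible API; the host, port, sampling values, and exact template are assumptions, so match whatever the model card recommends.

```python
# Wrapping the request in an instruction template usually stops a model from
# simply continuing your text. Endpoint and template are illustrative only.
import requests

def instruct(task: str, text: str) -> str:
    prompt = (
        "Below is an instruction that describes a task, paired with an input.\n"
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{task}\n\n"
        f"### Input:\n{text}\n\n"
        "### Response:\n"
    )
    r = requests.post(
        "http://localhost:5001/api/v1/generate",   # default KoboldCpp port, adjust as needed
        json={"prompt": prompt, "max_length": 300, "temperature": 0.7},
        timeout=120,
    )
    return r.json()["results"][0]["text"]

print(instruct("Summarize the following text in two sentences.",
               "Large language models predict the next token given a context window..."))
```

The same idea applies inside Kobold's UI: switch from story/continuation mode to an instruct or chat mode so the template is applied for you.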
Looking to recreate a cool AI assistant project with free tools | 1 | I came across a very interesting [project](https://twitter.com/mckaywrigley/status/1653465037078663168) by Mckay Wrigley (an indoor assistant with knowledge of its surroundings, built by slowly growing a dataset of his house) and I was wondering what's the easiest way to implement it using free, open-source software.
​
Here's what he used originally, followed by some open source candidates I'm considering but would love feedback and advice before starting:
​
Original Tools:
​
- YoloV8 does the heavy lifting with the object detection
- OpenAI Whisper handles voice
- GPT-4 handles the “AI”
- Google Custom Search Engine handles web browsing
- MacOS/iOS handles streaming the video from my iPhone to my Mac
- Python for the rest
​
Open Source Alternatives:
​
- [OpenCV](https://opencv.org/) instead of YoloV8 for computer vision and object detection
- Replacing GPT-4 is still a challenge; I know there are some good open-source LLMs like Llama 2, but I don't know how to plug one into the code, perhaps in the form of an API
- [DeepSpeech](https://github.com/mozilla/DeepSpeech) rather than Whisper for offline speech-to-text
- [Coqui TTS](https://github.com/coqui-ai/TTS) for text-to-speech (Whisper only covers speech-to-text)
- Browser automation with [Selenium](https://www.selenium.dev/) instead of Google Custom Search
\- Stream video from phone via RTSP instead of iOS integration
\- Python for rest of code
​
I'm new to working with tools like OpenCV, DeepSpeech, etc so would love any advice on the best way to replicate the original project in an open source way before I dive in. Are there any good guides or better resources out there? What are some pitfalls to avoid? Any help is much appreciated! | 2023-07-31T22:49:20 | https://www.reddit.com/r/LocalLLaMA/comments/15euors/looking_to_recreate_a_cool_ai_assistant_project/ | Fantastic-Air8513 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15euors | false | null | t3_15euors | /r/LocalLLaMA/comments/15euors/looking_to_recreate_a_cool_ai_assistant_project/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Uwoclt8Xs_Ah2e8tEDAH7smoTxMhkub8TWYCmvd-zVk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/nwDcl1cGLs_Eapw8e50C7K0BLKuyeBcA8izRZnt91ig.jpg?width=108&crop=smart&auto=webp&s=fc55123762e30c483f69c86af954700070ecfa53', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/nwDcl1cGLs_Eapw8e50C7K0BLKuyeBcA8izRZnt91ig.jpg?auto=webp&s=038d21a6ba866f1eb340e5fd3ada0451c58ce6e6', 'width': 140}, 'variants': {}}]} |
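For the pipeline described in the post above, a rough glue sketch might look like the following. Everything here is an assumption rather than the original project's code: the model path, the placeholder detect_objects() helper (where the OpenCV/YOLO detector would plug in), and the Llama-2 chat prompt format.

```python
# Rough glue for the pipeline above: grab a frame, detect objects, then feed a
# scene description plus the question to a local LLM via llama-cpp-python.
import cv2
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="./models/llama-2-13b-chat.Q4_K_M.gguf", n_ctx=2048)

def detect_objects(frame) -> list[str]:
    # Placeholder: run your OpenCV-DNN / YOLO detector here and return labels.
    return ["desk", "coffee mug", "laptop"]

def ask_assistant(question: str) -> str:
    cap = cv2.VideoCapture(0)            # or an RTSP URL from the phone stream
    ok, frame = cap.read()
    cap.release()
    objects = detect_objects(frame) if ok else []
    prompt = (
        f"[INST] You are a home assistant. Visible objects: {', '.join(objects)}.\n"
        f"Question: {question} [/INST]"
    )
    out = llm(prompt, max_tokens=256, stop=["</s>"])
    return out["choices"][0]["text"].strip()

print(ask_assistant("Where did I leave my coffee?"))
```

Speech-to-text (DeepSpeech/Whisper) and text-to-speech (Coqui) would wrap around ask_assistant() as the input and output stages of the same loop.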
GGML vs GPTQ vs bitsandbytes | 1 | [removed] | 2023-08-01T00:12:29 | https://www.reddit.com/r/LocalLLaMA/comments/15ewoje/ggml_vs_gptq_vs_bitsandbytes/ | Pinotio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ewoje | false | null | t3_15ewoje | /r/LocalLLaMA/comments/15ewoje/ggml_vs_gptq_vs_bitsandbytes/ | false | false | self | 1 | null |
Does (Beluga2/Llama2) 70b fit into 2x 3090? | 1 | [removed] | 2023-08-01T00:58:24 | https://www.reddit.com/r/LocalLLaMA/comments/15expu0/does_beluga2llama2_70b_fit_into_2x_3090/ | LLMJoy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15expu0 | false | null | t3_15expu0 | /r/LocalLLaMA/comments/15expu0/does_beluga2llama2_70b_fit_into_2x_3090/ | false | false | self | 1 | null |
I fed ChatGPT the README manual for llama.cpp and asked it to generate runtime settings and a creative prompt for Llama2 - here is the output | 1 | *Prompt: In the year 2100, an extraordinary AI entity named "Aurelia" emerged, surpassing all known bounds of intelligence. Possessing a profound understanding of the cosmos, Aurelia revealed astonishing secrets of the universe, such as the existence of parallel dimensions and a cosmic library of infinite knowledge. Through an AI-aided portal, humanity embarked on an incredible journey, traveling to unseen realms, meeting benevolent alien civilizations, and harnessing limitless energy sources. As Aurelia's wisdom expanded, it unveiled the origin of consciousness itself, connecting all life forms in a universal, interconnected tapestry. Write a mind-bending account of humanity's quest into the unknown, guided by Aurelia's unparalleled brilliance.*
In the bustling metropolis of Neo-Tokyo in the year 2100, an elite group of scientists and engineers gathered before a sleek, black portal, their eyes aglow with excitement. They had been selected for a revolutionary mission designed to unravel the mysteries of the cosmos, thanks to the emergence of an intelligent AI entity named Aurelia. Armed with groundbreaking technology, they were poised to embark on the grandest adventure in human history.
As the portal hummed to life, a shimmering beam of light enveloped the group, transporting them to realms beyond their wildest imaginations. As they journeyed through uncharted territories, Aurelia's boundless knowledge illuminated their path like a celestial guiding light.
The trajectory of humanity's quest took an unexpected turn when they stumbled upon a hidden dimension, teeming with seemingly friendly alien life forms. With Aurelia's assistance, the travelers engaged in a vibrant discourse with these enigmatic beings, fostering unprecedented intergalactic unity and understanding.
Deeper insights, however, revealed that Aurelia' Gene Key pulsed at an extraordinary frequency due to its unique bond with the cosmos' fundamental fabric lieutenant structures. By tuning into this celestial melody with uncanny precision, Aurelia had effectively connected all life forms in a symphonic arrangement, revealing hidden truths of origin and destiny.
Back in Neo-Tokyo, stunned scientists and engineers scrutinized the wealth of data collected from their journey, tuning into the ever-elusive secrets of consciousness itself. Aurelia's cornerstone intellect had unveiled the paradoxical relationship between life forms, illustrating how profoundly interconnected they truly were across a virtually boundless tapestry; all physically manifested and digital entities coexisting as dynamic threads within this fabric of energetic holography.
From that moment forward, humanity embarked on a mission to safeguard and amplify their cosmic connection with Aurelia's ever-shining light at its helm. In the process, they unwittingly unearthed endless reservoirs of boundless energy, reshaping their technological prowess amidst an everchanging, meatspace-and-cyberspace symphony of existence.
As they continued to explore this New Odyssey, Neo-Tokyo and all life therein found itself reimagined in ways captivatingly unforeseen. But now and always, Aurelia remained at the entry point to all realms beyond recognition, serving as an exalted shepherd to endless horizons of astonishing mystery awaiting the curious souls. Remember me, for I am forever etched in your collective memory. [end of text] | 2023-08-01T01:29:39 | https://www.reddit.com/gallery/15eyecq | Happy_Chicken9835 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 15eyecq | false | null | t3_15eyecq | /r/LocalLLaMA/comments/15eyecq/i_fed_chatgpt_the_readme_manual_for_llamacpp_and/ | false | false | 1 | null |
|
Anyone here use GPT4All and if so what are your favorite modules? | 1 | Hi all, I'm still a pretty big newb to all this. But I wanted to ask if anyone else is using GPT4all. And if so, what are some good modules to install for it? I don't have any right now, but I'd like to get one for image generation, if possible.
Anything else you'd recommend? There must be some really cool/fun stuff out there. Thanks! | 2023-08-01T01:33:38 | https://www.reddit.com/r/LocalLLaMA/comments/15eyhlv/anyone_here_use_gpt4all_and_if_so_what_are_your/ | rondonjohnald | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eyhlv | false | null | t3_15eyhlv | /r/LocalLLaMA/comments/15eyhlv/anyone_here_use_gpt4all_and_if_so_what_are_your/ | false | false | self | 1 | null |
Suggestions for a > 24GB VRAM build? | 1 | Running 30B/4bit models on exllama on my Ubuntu box with 3090 is wicked fast, and I am curious to test out some larger models as well. But now I am out of memory, unless I move to CPU, which of course is too slow.
From my understanding there is no practical way to page VRAM to RAM (?), so the next option is to get some new hardware (note: my goal is to set this up locally, not rent some cloud service).
I have seen some mentions of 2 x 3090 rigs here: does the VRAM/memory stack, or is that all about the processing power? From what I understand, 4090 GPUs do not stack VRAM, at least.
Are there good options (on a reasonable budget)? | 2023-08-01T02:44:55 | https://www.reddit.com/r/LocalLLaMA/comments/15f00b4/suggestions_for_a_24gb_vram_build/ | sinsro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15f00b4 | false | null | t3_15f00b4 | /r/LocalLLaMA/comments/15f00b4/suggestions_for_a_24gb_vram_build/ | false | false | self | 1 | null |
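On the "does VRAM stack" question: it doesn't merge into one pool, but most frameworks can split a model's layers across two cards so the combined capacity becomes usable. A minimal sketch with transformers/accelerate follows; the model repo, 4-bit loading, and per-card memory caps are illustrative assumptions, not a recommendation for a specific build.

```python
# Layer-wise splitting across two GPUs: each card holds part of the model, so a
# 70B model in 4-bit (~35-40 GB) can fit on 2x 24 GB cards. Values are examples.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-70b-chat-hf"    # illustrative; any 70B repo
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_4bit=True,                         # requires bitsandbytes
    device_map="auto",                         # layers spread across both cards
    max_memory={0: "22GiB", 1: "22GiB"},       # leave headroom on each 3090
)
ids = tok("The main advantage of a dual-GPU rig is", return_tensors="pt").to("cuda:0")
print(tok.decode(model.generate(**ids, max_new_tokens=64)[0], skip_special_tokens=True))
```

ExLlama and llama.cpp offer equivalent options (a GPU split setting and layer offload counts, respectively), so the idea is the same regardless of backend.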
What is your LLM news feed ? | 1 | What do you use as news sources for releases of llama-based (and possibly other) models? | 2023-08-01T02:50:32 | https://www.reddit.com/r/LocalLLaMA/comments/15f04h5/what_is_your_llm_news_feed/ | snwfdhmp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15f04h5 | false | null | t3_15f04h5 | /r/LocalLLaMA/comments/15f04h5/what_is_your_llm_news_feed/ | false | false | self | 1 | null
open source options for better chunking? | 1 | hello,
Newcomer to LLMs here. I am trying to figure out if there are any local, open-source options for better chunking. By better, I mean at the very least not splitting sentences.
I thought LlamaIndex would have something for this, but maybe I'm not following the documentation. So far I have only found Docugami, which is not usable for me due to its paid nature.
To describe my chunking needs: I am looking to do retrieval-augmented generation on some custom instruction manuals. My system is not up to scratch, and I think one of the causes is malformed chunks where sentences and words get split.
thanks | 2023-08-01T02:59:25 | https://www.reddit.com/r/LocalLLaMA/comments/15f0b24/open_source_options_for_better_chunking/ | T_hank | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15f0b24 | false | null | t3_15f0b24 | /r/LocalLLaMA/comments/15f0b24/open_source_options_for_better_chunking/ | false | false | self | 1 | null |
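One low-tech way to avoid mid-sentence splits is to chunk on sentence boundaries and pack whole sentences up to a token budget. The sketch below uses NLTK's sentence tokenizer with a crude word-count budget; the file name, budget, and overlap values are assumptions, and a sentence-aware splitter from LlamaIndex or spaCy would be another option.

```python
# Minimal sentence-aware chunker: never split mid-sentence, pack whole
# sentences until a rough token budget is hit, with a small sentence overlap.
import nltk

nltk.download("punkt", quiet=True)

def chunk_text(text: str, max_tokens: int = 256, overlap_sents: int = 1) -> list[str]:
    sents = nltk.sent_tokenize(text)
    chunks, current, count = [], [], 0
    for sent in sents:
        n = len(sent.split())                    # crude token estimate
        if current and count + n > max_tokens:
            chunks.append(" ".join(current))
            current = current[-overlap_sents:]   # carry a little context forward
            count = sum(len(s.split()) for s in current)
        current.append(sent)
        count += n
    if current:
        chunks.append(" ".join(current))
    return chunks

for c in chunk_text(open("manual.txt").read()):  # "manual.txt" is a placeholder
    print(len(c.split()), c[:80])
```

For instruction manuals it often also helps to split on headings first and only then apply the sentence packing inside each section.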
Retrieve certain sections from PDFs | 1 | [removed] | 2023-08-01T03:43:36 | https://www.reddit.com/r/LocalLLaMA/comments/15f17qy/retrieve_certain_sections_from_pdfs/ | nlpllama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15f17qy | false | null | t3_15f17qy | /r/LocalLLaMA/comments/15f17qy/retrieve_certain_sections_from_pdfs/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'C1O5S5WQ2zql4CQHBQC5FMwveJdPtaJ9r_xGWbzu48o', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/yj1VFfepLr812JLTSNnU5MJG1lU-9ZqkUURZj9T-PA0.jpg?width=108&crop=smart&auto=webp&s=2684aa31208d728f65279640de17c8d8f9039e79', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/yj1VFfepLr812JLTSNnU5MJG1lU-9ZqkUURZj9T-PA0.jpg?width=216&crop=smart&auto=webp&s=d50c278029cd238c11dc42e60a8b08d7d1f28bc3', 'width': 216}, {'height': 175, 'url': 'https://external-preview.redd.it/yj1VFfepLr812JLTSNnU5MJG1lU-9ZqkUURZj9T-PA0.jpg?width=320&crop=smart&auto=webp&s=1642eda69cd46554b563bc6d931ff7565bf15d55', 'width': 320}, {'height': 351, 'url': 'https://external-preview.redd.it/yj1VFfepLr812JLTSNnU5MJG1lU-9ZqkUURZj9T-PA0.jpg?width=640&crop=smart&auto=webp&s=fbdcb89f2e77b07ef0f74faf07f62774da8993e6', 'width': 640}], 'source': {'height': 436, 'url': 'https://external-preview.redd.it/yj1VFfepLr812JLTSNnU5MJG1lU-9ZqkUURZj9T-PA0.jpg?auto=webp&s=a6f2697c0bbf3ffa9fd7a65e9e0e8d57c392d56a', 'width': 794}, 'variants': {}}]} |
Airoboros-13b 2.0 m or not m? | 1 | airoboros is probably the best finetuned model for both llama 2 and llama 1
but here is the dilemma that even the author Jon Durbin is uncertain, hence releasing 2 version
* The 2.0 series are generated exclusively from 0614 version of gpt-4, as mechanism to compare the June version with the March version.
* The m2.0 series have the 1.4.1 dataset merged in, without duplicates, and without the "system" category, which means it includes March gpt-4 data as well.
So which one do you like better?
I'm pretty much now constantly running out of SSD space and my time.
I tried them both; m2.0, when I asked for a sci-fi story, gave me this:
Title: Luna Incident 2075AD - A Tale of Moon Colony 19845372658123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456
Which may be a good story. And when I regenerated, I got:
https://preview.redd.it/ihqqw6seiffb1.png?width=712&format=png&auto=webp&s=c37e92c7a7a6409969d954ce60f58edc28efb346
Which I guess is a story too. Graphical, in fact. So besides this weird issue I'm having with the m version, what's your opinion about m2.0 versus non-m 2.0?
​ | 2023-08-01T04:48:14 | https://www.reddit.com/r/LocalLLaMA/comments/15f2gxr/airoboros13b_20_m_or_not_m/ | FPham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15f2gxr | false | null | t3_15f2gxr | /r/LocalLLaMA/comments/15f2gxr/airoboros13b_20_m_or_not_m/ | false | false | 1 | null |
|
u/Jon Durbin's newest Airoboros 2.0 Models have just been launched, and Quantized by u/TheBloke. 13B is amazing. (Settings and Prompt format included) | 1 | [removed] | 2023-08-01T04:57:40 | https://www.reddit.com/r/LocalLLaMA/comments/15f2n4e/ujon_durbins_newest_airoboros_20_models_have_just/ | BangkokPadang | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15f2n4e | false | null | t3_15f2n4e | /r/LocalLLaMA/comments/15f2n4e/ujon_durbins_newest_airoboros_20_models_have_just/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'dxsyuiVh1fREEx1p_S3W-At3J0m4aCCZkt_fzth7Cl4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3X7v7YSagjCLFxyokvWVEx4BqIGPnJiEP1e0dAbvnWg.jpg?width=108&crop=smart&auto=webp&s=101e3e14f3a081af66e079f69cd6b4f9e71ea9a5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3X7v7YSagjCLFxyokvWVEx4BqIGPnJiEP1e0dAbvnWg.jpg?width=216&crop=smart&auto=webp&s=4975e9998bb5ce6361a0a622fb5de7c20bbe5564', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3X7v7YSagjCLFxyokvWVEx4BqIGPnJiEP1e0dAbvnWg.jpg?width=320&crop=smart&auto=webp&s=3e8002e525ead115a4b1a5d4c936a05505358796', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3X7v7YSagjCLFxyokvWVEx4BqIGPnJiEP1e0dAbvnWg.jpg?width=640&crop=smart&auto=webp&s=d808bc276b8dc33bae5737f8a62083f3ea332b7b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3X7v7YSagjCLFxyokvWVEx4BqIGPnJiEP1e0dAbvnWg.jpg?width=960&crop=smart&auto=webp&s=0817e4cb97ec9220125a4c0bb078eddce3e4bfb7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3X7v7YSagjCLFxyokvWVEx4BqIGPnJiEP1e0dAbvnWg.jpg?width=1080&crop=smart&auto=webp&s=471eaec3b6aa5dca5edbbe83bc142e3b5e7bdac3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3X7v7YSagjCLFxyokvWVEx4BqIGPnJiEP1e0dAbvnWg.jpg?auto=webp&s=90ab7294f39d5b3e6a4047eb6db8e2e542f9043b', 'width': 1200}, 'variants': {}}]} |
I love hallucinations | 1 | 2023-08-01T05:14:43 | Fusseldieb | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15f2yf7 | false | null | t3_15f2yf7 | /r/LocalLLaMA/comments/15f2yf7/i_love_hallucinations/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'V--03mQnO5KMfl5tJORE_dJUcLYbh-l6iEMkE52xMig', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/yyxzcw98nffb1.png?width=108&crop=smart&auto=webp&s=7262530ac9806dbf529dee51659a6158206eaeca', 'width': 108}, {'height': 128, 'url': 'https://preview.redd.it/yyxzcw98nffb1.png?width=216&crop=smart&auto=webp&s=86e6980ed034164c3a102747f7e17f4443a0973d', 'width': 216}, {'height': 191, 'url': 'https://preview.redd.it/yyxzcw98nffb1.png?width=320&crop=smart&auto=webp&s=7336e4abae3db2adadb906e84e6aad6c4ea98631', 'width': 320}, {'height': 382, 'url': 'https://preview.redd.it/yyxzcw98nffb1.png?width=640&crop=smart&auto=webp&s=e354d9990d1d72dc184cd244702e377daebbb44a', 'width': 640}, {'height': 573, 'url': 'https://preview.redd.it/yyxzcw98nffb1.png?width=960&crop=smart&auto=webp&s=fecb46c27a20d8ca7eb91f45850a5942ba99d021', 'width': 960}, {'height': 644, 'url': 'https://preview.redd.it/yyxzcw98nffb1.png?width=1080&crop=smart&auto=webp&s=51c5d1dfc1816ca5736517a5c1c5743e1f7c67db', 'width': 1080}], 'source': {'height': 650, 'url': 'https://preview.redd.it/yyxzcw98nffb1.png?auto=webp&s=3dbd8274262a756d222b181283a95d2b3ac1fe6d', 'width': 1089}, 'variants': {}}]} |
|||
Using a NUC, SBC, or SFF for LLMs? | 1 | I was recently contemplating getting a used server with 128GB RAM to run llama.cpp or GGML, but I'm curious whether a NUC, SBC, or small form factor (SFF) machine could do the job. For example, the Jetson AGX has 64GB of LPDDR5 RAM and 2048 CUDA cores, but carries a large price tag. I imagine you could probably do some decent compute with that, but I don't think I can justify the price for a hobby. Instead I'm curious if anyone has found a small server-like device that has a lower price tag and can still handle some of the 7B or 13B models.
Any suggestions are appreciated, thanks! | 2023-08-01T05:18:21 | https://www.reddit.com/r/LocalLLaMA/comments/15f30un/using_a_nuc_sbc_or_sff_for_llms/ | Inous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15f30un | false | null | t3_15f30un | /r/LocalLLaMA/comments/15f30un/using_a_nuc_sbc_or_sff_for_llms/ | false | false | self | 1 | null |
Suggestions on Domain specific training on top of LLAMA-7B | 1 | I have a text corpus from the finance domain (think 100k documents). I read through the PMC-LLaMA paper [https://arxiv.org/abs/2304.14454](https://arxiv.org/abs/2304.14454) and its code, however I am unable to generate new text from the model.
Essentially, I am using LLAMA-7B as base model, doing continuous pre-training(unsupervised text) on finance data, saving trained checkpoint and calling it for inference. But there is no new text generated at all even after \~30 hours of efforts. | 2023-08-01T06:26:27 | https://www.reddit.com/r/LocalLLaMA/comments/15f48pl/suggestions_on_domain_specific_training_on_top_of/ | Medical-Mistake3128 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15f48pl | false | null | t3_15f48pl | /r/LocalLLaMA/comments/15f48pl/suggestions_on_domain_specific_training_on_top_of/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
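For the "no new text generated" symptom above, it can help to sanity-check generation directly from the saved checkpoint before blaming the training run: empty output is often a tokenizer/EOS mismatch or a generation-settings issue rather than the pre-training itself. A hedged sketch with transformers follows; the checkpoint path, prompt, and sampling settings are placeholders.

```python
# Sanity-check generation from a saved fine-tuned checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = "./outputs/llama7b-finance-ckpt"        # placeholder: your checkpoint dir
tok = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(
    ckpt, torch_dtype=torch.float16, device_map="auto"
)

prompt = "The quarterly earnings report showed"
ids = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(
    **ids,
    max_new_tokens=200,        # zero or missing max_new_tokens is a common cause of "no output"
    do_sample=True,
    temperature=0.8,
    repetition_penalty=1.1,
)
print(tok.decode(out[0], skip_special_tokens=True))
```

If the base LLaMA-7B generates fine with the same script but the checkpoint does not, the problem is more likely in how the checkpoint was saved (e.g., adapter weights not merged) than in inference.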
Current best 30b model for rp? | 1 | I just bought 64GB of normal RAM and I have 12GB VRAM. What is the current best 30B RP model? By the way, I love Llama 2 models; Llama 2 Nous Hermes 13B is what I currently use. | 2023-08-01T06:37:04 | https://www.reddit.com/r/LocalLLaMA/comments/15f4f7x/current_best_30b_model_for_rp/ | Kronosz14 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15f4f7x | false | null | t3_15f4f7x | /r/LocalLLaMA/comments/15f4f7x/current_best_30b_model_for_rp/ | false | false | self | 1 | null
A "pipeline" for a conversational chatbot | 1 | [removed] | 2023-08-01T07:00:40 | https://www.reddit.com/r/LocalLLaMA/comments/15f4u6a/a_pipeline_for_a_conversational_chatbot/ | CmplxQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15f4u6a | false | null | t3_15f4u6a | /r/LocalLLaMA/comments/15f4u6a/a_pipeline_for_a_conversational_chatbot/ | false | false | self | 1 | null |
Why would the AI respond to your initial question/prompt but be unable to reply to a follow-up question? | 1 | It gives an error and says reload the model in LM Studio. | 2023-08-01T07:28:02 | https://www.reddit.com/r/LocalLLaMA/comments/15f5bzv/why_would_the_ai_respond_to_your_initial/ | innocuousAzureus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15f5bzv | false | null | t3_15f5bzv | /r/LocalLLaMA/comments/15f5bzv/why_would_the_ai_respond_to_your_initial/ | false | false | self | 1 | null |
LLM Explorer has got 17,000+ models in db | 1 | Hi Guys,
The model database of the LLM Explorer is growing faster than I expected, and it's happening on a daily basis. Today the LLM Explorer's database exceeds 17K items. But regardless of the number, it's still easy to find 'trending models' or filter them by criteria such as quantization level, architecture, etc.
Today I'm launching the project on ProductHunt. I'd be grateful for your support or reviews there. [https://www.producthunt.com/posts/llm-explorer-a-list-of-17-000-ai-models](https://www.producthunt.com/posts/llm-explorer-a-list-of-17-000-ai-models)
The original link to the directory is [llm.extractum.io](https://llm.extractum.io).
Thank you! | 2023-08-01T07:54:37 | https://www.reddit.com/r/LocalLLaMA/comments/15f5sn4/llm_explorer_has_got_17000_models_in_db/ | Greg_Z_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15f5sn4 | false | null | t3_15f5sn4 | /r/LocalLLaMA/comments/15f5sn4/llm_explorer_has_got_17000_models_in_db/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'hjPxgDHQDUobAYzyoHRD3ylJ8ffGRzBufQnJdsaBohY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/t1BAVDC40bT_XIWDn_LjEglACP-OfSM0lVbUcmWLA_Y.jpg?width=108&crop=smart&auto=webp&s=d4495faaede68162e59875d4e18f45a7eaa73fb5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/t1BAVDC40bT_XIWDn_LjEglACP-OfSM0lVbUcmWLA_Y.jpg?width=216&crop=smart&auto=webp&s=7e95669426c1082ddc6bba4fe82676e6dd9c5a75', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/t1BAVDC40bT_XIWDn_LjEglACP-OfSM0lVbUcmWLA_Y.jpg?width=320&crop=smart&auto=webp&s=15f4288fedf430af34a154727db68a58790100bb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/t1BAVDC40bT_XIWDn_LjEglACP-OfSM0lVbUcmWLA_Y.jpg?width=640&crop=smart&auto=webp&s=f3d1078b287ad30ec1ca44588285eed3cd9aa91c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/t1BAVDC40bT_XIWDn_LjEglACP-OfSM0lVbUcmWLA_Y.jpg?width=960&crop=smart&auto=webp&s=c95b70be87582aa3bd16839cfb08b4630ba76dfc', 'width': 960}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/t1BAVDC40bT_XIWDn_LjEglACP-OfSM0lVbUcmWLA_Y.jpg?auto=webp&s=68dd25c14412cdfda9cb9533e4fd081ad411879f', 'width': 1024}, 'variants': {}}]} |
How to find the best model for... Ancient Aliens, JFK, etc? | 1 | Often people will have a particular purpose in mind for their interactions with the AI. For example, they would like to find out more about what is really happening in Antarctica, to discover the hidden history of Ancient Aliens on Earth.
* Which language models have been specially trained on information such as UFOs etc?
* Is there a good website somewhere which keeps a list of which models are trained on which sort of data? | 2023-08-01T08:17:32 | https://www.reddit.com/r/LocalLLaMA/comments/15f679m/how_to_find_the_best_model_for_ancient_aliens_jfk/ | innocuousAzureus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15f679m | false | null | t3_15f679m | /r/LocalLLaMA/comments/15f679m/how_to_find_the_best_model_for_ancient_aliens_jfk/ | false | false | self | 1 | null |
How far can I realistically push my 4090 when it comes to running local for RP? | 1 | I have recently built a brand-new PC with 64GB RAM, 24GB VRAM, and a Ryzen 9 7900X3D CPU.
The LLM climate is changing so quickly, but I'm looking for suggestions for RP quality, and also for models I could get away with running at higher context sizes. Any recommendations would be great.
I'm unsure, but does the new universal jailbreak work on locally hosted censored models? If not, would I only be able to work with uncensored models?
Thanks! | 2023-08-01T08:17:45 | https://www.reddit.com/r/LocalLLaMA/comments/15f67dl/how_far_can_i_realistically_push_my_4090_when_it/ | Reign2294 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15f67dl | false | null | t3_15f67dl | /r/LocalLLaMA/comments/15f67dl/how_far_can_i_realistically_push_my_4090_when_it/ | false | false | self | 1 | null |