title (string) | score (int64) | selftext (string) | created (timestamp[ns]) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
How does macwhisper work? | 1 | [removed] | 2025-02-04T18:45:19 | https://www.reddit.com/r/LocalLLaMA/comments/1ihp1r4/how_does_macwhisper_work/ | Weekly_Giraffe12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihp1r4 | false | null | t3_1ihp1r4 | /r/LocalLLaMA/comments/1ihp1r4/how_does_macwhisper_work/ | false | false | self | 1 | null |
Benchmarking different systems | 4 | Do you have any recommendations for scripts and models to run in order to benchmark systems?
Assume you have access to a node of 4 or 8 GPUs (A6000, A100, H100, B200 or so). I thought about using vLLM and one of the popular models.
I’m asking for two reasons:
- I’d like to benchmark new systems to get a rough idea of the generational improvement
- I’m happy to share the results here afterwards
I’m looking at something that can also max out newer generations and that is easy to set up. Any help would be appreciated. | 2025-02-04T18:46:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ihp340/benchmarking_different_systems/ | igorsusmelj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihp340 | false | null | t3_1ihp340 | /r/LocalLLaMA/comments/1ihp340/benchmarking_different_systems/ | false | false | self | 4 | null |
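A rough sketch of the kind of throughput run the post above describes, using vLLM's offline Python API on a multi-GPU node. The model name, prompt count, and tensor-parallel degree are placeholders to adjust for the hardware being compared:

```python
# Toy throughput benchmark with vLLM's offline Python API.
# Adjust MODEL, tensor_parallel_size, and NUM_PROMPTS for the node under test.
import time
from vllm import LLM, SamplingParams

MODEL = "meta-llama/Llama-3.1-70B-Instruct"  # placeholder model id
NUM_PROMPTS = 256
MAX_TOKENS = 256

llm = LLM(model=MODEL, tensor_parallel_size=8)  # one engine spread across 8 GPUs
params = SamplingParams(temperature=0.8, max_tokens=MAX_TOKENS)
prompts = [f"Write a short story about benchmark run {i}." for i in range(NUM_PROMPTS)]

start = time.perf_counter()
outputs = llm.generate(prompts, params)
elapsed = time.perf_counter() - start

generated = sum(len(o.outputs[0].token_ids) for o in outputs)
print(f"{generated} generated tokens in {elapsed:.1f}s -> {generated / elapsed:.1f} tok/s")
```

The vLLM repository also ships dedicated benchmark scripts (including serving-style load tests), which are probably the better choice for publishable, repeatable generational comparisons.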
What to expect from Mistral's upcoming reasoning models? | 203 | 2025-02-04T18:59:05 | tengo_harambe | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ihpdjh | false | null | t3_1ihpdjh | /r/LocalLLaMA/comments/1ihpdjh/what_to_expect_from_mistrals_upcoming_reasoning/ | false | false | 203 | {'enabled': True, 'images': [{'id': '0HMUYIyLbUPVdqSGi_eeLEYs5aVvHFdVIZRkOHnWzIY', 'resolutions': [{'height': 158, 'url': 'https://preview.redd.it/sa87uqtg66he1.png?width=108&crop=smart&auto=webp&s=d14013f4c6e607095bac292f2fca758569903906', 'width': 108}, {'height': 317, 'url': 'https://preview.redd.it/sa87uqtg66he1.png?width=216&crop=smart&auto=webp&s=58f2bcaead083c20b87c4c16f57aceab3a70d66f', 'width': 216}, {'height': 471, 'url': 'https://preview.redd.it/sa87uqtg66he1.png?width=320&crop=smart&auto=webp&s=fd4e05d3454d29e6d5755deb360afc707a7f84aa', 'width': 320}], 'source': {'height': 817, 'url': 'https://preview.redd.it/sa87uqtg66he1.png?auto=webp&s=d04bc1ebcca76dcea86228899589b4efcdd535cc', 'width': 555}, 'variants': {}}]} |
|||
Introducing Sundry - An intelligent context API for LLMs | 1 | [removed] | 2025-02-04T19:02:07 | https://www.reddit.com/r/LocalLLaMA/comments/1ihpgel/introducing_sundry_an_intelligent_context_api_for/ | Sudden-Camel-4008 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihpgel | false | null | t3_1ihpgel | /r/LocalLLaMA/comments/1ihpgel/introducing_sundry_an_intelligent_context_api_for/ | false | false | self | 1 | null |
In case you thought your feedback was not being heard | 867 | 2025-02-04T19:03:04 | takuonline | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ihph9f | false | null | t3_1ihph9f | /r/LocalLLaMA/comments/1ihph9f/in_case_you_thought_your_feedback_was_not_being/ | false | false | 867 | {'enabled': True, 'images': [{'id': '54eSrajr6_tAyQqt8Bp-4Eugq7ejySK_tq6TnzTFY_s', 'resolutions': [{'height': 198, 'url': 'https://preview.redd.it/nvf2f1j876he1.png?width=108&crop=smart&auto=webp&s=c876353265016e1c21b37156eaca6988fe9bf210', 'width': 108}, {'height': 397, 'url': 'https://preview.redd.it/nvf2f1j876he1.png?width=216&crop=smart&auto=webp&s=1a5d328b86665b641e5947955b26bf5af3143ecf', 'width': 216}, {'height': 589, 'url': 'https://preview.redd.it/nvf2f1j876he1.png?width=320&crop=smart&auto=webp&s=988977aff5353cf08b0e715c0c8fc78996f63a44', 'width': 320}, {'height': 1178, 'url': 'https://preview.redd.it/nvf2f1j876he1.png?width=640&crop=smart&auto=webp&s=c1c8871a5964dc8f2db918ba181e1157fc626b71', 'width': 640}, {'height': 1767, 'url': 'https://preview.redd.it/nvf2f1j876he1.png?width=960&crop=smart&auto=webp&s=90a3ac8dbb0710b2934ad58d43887bfd3bb8f519', 'width': 960}, {'height': 1988, 'url': 'https://preview.redd.it/nvf2f1j876he1.png?width=1080&crop=smart&auto=webp&s=0ff3e544794d7cc4b36204ecfadbc78731ce5040', 'width': 1080}], 'source': {'height': 1988, 'url': 'https://preview.redd.it/nvf2f1j876he1.png?auto=webp&s=952f23d6ff2bab623f8483c3a2a01c959f215a96', 'width': 1080}, 'variants': {}}]} |
|||
Comparing different LLMs against different programming languages to see which are best for AI-driven coding | 34 | 2025-02-04T19:09:11 | https://ben.terhech.de/posts/2025-01-31-llms-vs-programming-languages.html | terhechte | ben.terhech.de | 1970-01-01T00:00:00 | 0 | {} | 1ihpmg4 | false | null | t3_1ihpmg4 | /r/LocalLLaMA/comments/1ihpmg4/comparing_different_llms_against_different/ | false | false | 34 | {'enabled': False, 'images': [{'id': '-1tEckwmrxwtomZTafD2hEqMRKvNLO0mPIutjO_qYhY', 'resolutions': [{'height': 95, 'url': 'https://external-preview.redd.it/f19UVEQKn2NUgogunr84spwvcNElFvuYeuIGFDIxA0k.jpg?width=108&crop=smart&auto=webp&s=4904377f1acc1958af76874ca7486f29ef665e09', 'width': 108}, {'height': 190, 'url': 'https://external-preview.redd.it/f19UVEQKn2NUgogunr84spwvcNElFvuYeuIGFDIxA0k.jpg?width=216&crop=smart&auto=webp&s=e17e9c70d237df027f6980e9ca0dd828c6f7e0b4', 'width': 216}, {'height': 282, 'url': 'https://external-preview.redd.it/f19UVEQKn2NUgogunr84spwvcNElFvuYeuIGFDIxA0k.jpg?width=320&crop=smart&auto=webp&s=e8f932a5afb782b2e507d0f3882006bdc7316638', 'width': 320}, {'height': 565, 'url': 'https://external-preview.redd.it/f19UVEQKn2NUgogunr84spwvcNElFvuYeuIGFDIxA0k.jpg?width=640&crop=smart&auto=webp&s=e92484264a14897b17895cda1987bbd61718444e', 'width': 640}], 'source': {'height': 708, 'url': 'https://external-preview.redd.it/f19UVEQKn2NUgogunr84spwvcNElFvuYeuIGFDIxA0k.jpg?auto=webp&s=6348fe8f5ef16d65a10ae2d646cd453f89b47a20', 'width': 801}, 'variants': {}}]} |
||
Best framework for simple OpenAI API substitution to host on server? | 2 | **I've heard about so many...**
* LocalAI
* LmStudio
* Jan
* Open-WebUI
The list goes on.....
**This is what i want:**
* I want to run a service on my ubuntu server (maybe as a docker container?)
* I want to run inference through an API, as a substitution for the OpenAI API (so an OpenAI-style API would be best)
* I want it to be easy to download new models, and for it to be compatible with several model types (llama, qwen etc.).
* I want to be able to configure things like when a model should load, and how long it stays active/loaded without use before it is deactivated again to save resources.
* I will then connect to the server and use the llm using my scripts that currently are using the OpenAI API.
**What tool do you recommend?** | 2025-02-04T19:09:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ihpn1p/best_framework_for_simple_openai_api_substitution/ | Boltyx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihpn1p | false | null | t3_1ihpn1p | /r/LocalLLaMA/comments/1ihpn1p/best_framework_for_simple_openai_api_substitution/ | false | false | self | 2 | null |
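Whichever server ends up being chosen, most of the tools listed in the post above (LocalAI, LM Studio's server mode, Ollama behind Open-WebUI, etc.) expose an OpenAI-compatible endpoint, so existing scripts usually only need the base URL swapped. A minimal client-side sketch; the host, port, route, and model name are illustrative:

```python
# Point the standard OpenAI client at a local OpenAI-compatible server.
# base_url and model are illustrative; the api_key is ignored by most local
# servers but the client requires a non-empty value.
from openai import OpenAI

client = OpenAI(base_url="http://my-server:11434/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="qwen2.5:14b",  # whatever model the server has loaded or pulled
    messages=[{"role": "user", "content": "Summarize why local inference is useful."}],
)
print(response.choices[0].message.content)
```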
How to Scale your Model (Google Deepmind) | 9 | 2025-02-04T19:11:22 | https://jax-ml.github.io/scaling-book/ | qroshan | jax-ml.github.io | 1970-01-01T00:00:00 | 0 | {} | 1ihpobf | false | null | t3_1ihpobf | /r/LocalLLaMA/comments/1ihpobf/how_to_scale_your_model_google_deepmind/ | false | false | default | 9 | null |
|
How does the use of 10 000 GPUs to train a model work? | 31 | Since the next state of the model depends on the previous state of the model by how gradient descent works, wouldn't you only need about 100 GPUs or so to train the largest models available? Like, you could only process the dataset serially, so how does hyperscaling of models work? | 2025-02-04T19:19:19 | https://www.reddit.com/r/LocalLLaMA/comments/1ihpvdm/how_does_the_use_of_10_000_gpus_to_train_a_model/ | ExaminationNo8522 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihpvdm | false | null | t3_1ihpvdm | /r/LocalLLaMA/comments/1ihpvdm/how_does_the_use_of_10_000_gpus_to_train_a_model/ | false | false | self | 31 | null |
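The short answer to the question above is that the sequential dependency is between optimizer steps, not between GPUs: with data parallelism, each GPU computes gradients on a different slice of one very large batch, the gradients are averaged (an all-reduce), and a single shared weight update is applied; tensor and pipeline parallelism additionally split the model itself across devices. A toy, single-process sketch of that data-parallel averaging (plain NumPy, no framework):

```python
# Toy illustration of data parallelism: N "GPUs" process different shards of one
# batch; their per-shard gradients are averaged into a single shared weight update.
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(4)                        # shared model weights (a linear model)
X = rng.normal(size=(1024, 4))         # one global batch
y = X @ np.array([1.0, -2.0, 0.5, 3.0])

NUM_WORKERS = 8                        # stand-in for thousands of GPUs
shards = np.array_split(np.arange(len(X)), NUM_WORKERS)

for step in range(100):
    grads = []
    for shard in shards:               # on a real cluster these run concurrently
        err = X[shard] @ w - y[shard]
        grads.append(2 * X[shard].T @ err / len(shard))  # MSE gradient on this shard
    g = np.mean(grads, axis=0)         # "all-reduce": average gradients across workers
    w -= 0.05 * g                      # one shared optimizer step

print(w.round(3))                      # approaches [1, -2, 0.5, 3]
```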
Epyc Turin (9355P) + 256 GB / 5600 mhz - Some CPU Inference Numbers | 89 | Recently, I decided that three RTX 3090s janked together with brackets and risers just wasn’t enough; I wanted a cleaner setup and a fourth 3090. To make that happen, I needed a new platform.
My requirements were: at least four double-spaced PCIe x16 slots, ample high-speed storage interfaces, and ideally, high memory bandwidth to enable some level of CPU offloading without tanking inference speed. Intel’s new Xeon lineup didn’t appeal to me, the P/E core setup seems more geared towards datacenters, and the pricing was brutal. Initially, I considered Epyc Genoa, but with the launch of Turin and its Zen 5 cores plus higher DDR5 speeds, I decided to go straight for it.
Due to the size of the SP5 socket and its 12 memory channels, boards with full 12-channel support sacrifice PCIe slots. The only board that meets my PCIe requirements, the ASRock GENOAD8X-2T/TCM, has just 8 DIMM slots, meaning we have to say goodbye to four whole memory channels.
Getting it up and running was an adventure. At the time, ASRock hadn’t released any Turin-compatible BIOS ROMs, despite claiming that an update to 10.03 was required (which wasn’t even available for download). The beta ROM they supplied refused to flash, failing with no discernible reason. Eventually, I had to resort to a ROM programmer (CH341a) and got it running on version 10.05.
If anyone has questions about the board, BIOS, or setup, feel free to ask, I’ve gotten way more familiar with this board than I ever intended to.
CPU: Epyc Turin 9355P - 32 Cores (8 CCD), 256 MB cache, 3.55 GHz Boosting 4.4 GHz - $3000 USD from cafe.electronics on Ebay (now \~$3300 USD).
RAM: 256 GB Corsair WS (CMA256GX5M8B5600C40) @ 5600 MHz - $1499 CAD (now \~$2400 - WTF!)
[Asrock GENOAD8X-2T/TCM Motherboard](https://www.asrockrack.com/general/productdetail.asp?Model=GENOAD8X-2T/BCM#Specifications) \- \~$1500 CAD but going up in price
First off, a couple of benchmarks:
[Passmark Memory](https://preview.redd.it/fag5favty5he1.png?width=878&format=png&auto=webp&s=f5a6b92917f908dedbe73201fc6fc48e820aa3a5)
[Passmark CPU](https://preview.redd.it/p8e60vy946he1.png?width=879&format=png&auto=webp&s=b08b8cc914a890e567b0e7aeb5f9e42251e855b9)
[CPU-Z Info Page - The chip seems to always be boosting to 4.4 GHz, which I don't mind. ](https://preview.redd.it/slq3s3ub46he1.png?width=396&format=png&auto=webp&s=f2f6711ae24b230edef6eeea872c229a293518be)
[CPU-Z Bench - My i9 9820x would score \~7k @ 4.6 GHz. ](https://preview.redd.it/ekz7wf2d46he1.png?width=397&format=png&auto=webp&s=5112a56f91feb7ae1ea8bc946b5603e52a3ecb59)
And finally some LMStudio (0 layers offloaded) tests:
[Prompt: \\"Write a 1000 word story about france's capital\\" Llama-3.3-70B-Q8, 24 Threads. Model used 72 GB in RAM. ](https://preview.redd.it/on0n624n66he1.png?width=340&format=png&auto=webp&s=d96479be841451a073caff569adb52d2e9387a00)
[Deepseek-R1-Distill-Llama-8B \(Q8\), 24 threads, 8.55 GB in memory. ](https://preview.redd.it/je5ljie976he1.png?width=353&format=png&auto=webp&s=809d046e8b19f1cdd903e09135bba50b734fae0f)
I'm happy to run additional tests and benchmarks—just wanted to put this out there so people have the info and can weigh in on what they'd like to see. CPU inference is very usable for smaller models (<20B), while larger ones are still best left to GPUs/cloud (not that we didn’t already know this).
That said, we’re on a promising trajectory. With a 12-DIMM board (e.g., Supermicro H13-SSL) or a dual-socket setup (pending improvements in multi-socket inference), we could, within a year or two, see CPU inference becoming cost-competitive with GPUs on a per-GB-of-memory basis. Genoa chips have dropped significantly in price over the past six months—9654 (96-core) now sells for $2,500–$3,000—making this even more feasible.
I'm optimistic about continued development in CPU inference frameworks, as they could help alleviate the current bottleneck: VRAM and Nvidia’s AI hardware monopoly. My main issue is that for pure inference, GPU compute power is vastly underutilized—memory capacity and bandwidth are the real constraints. Yet consumers are forced to pay thousands for increasingly powerful GPUs when, for inference alone, that power is often unnecessary. Here’s hoping CPU inference keeps progressing!
Anyways, let me know your thoughts, and i'll do what I can to provide additional info. | 2025-02-04T19:24:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ihpzn2/epyc_turin_9355p_256_gb_5600_mhz_some_cpu/ | thedudear | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihpzn2 | false | null | t3_1ihpzn2 | /r/LocalLLaMA/comments/1ihpzn2/epyc_turin_9355p_256_gb_5600_mhz_some_cpu/ | false | false | 89 | null |
|
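For anyone wanting to reproduce CPU-only numbers like the LM Studio runs in the post above, here is a rough sketch using the llama-cpp-python bindings rather than LM Studio itself. The GGUF path is a placeholder, and the thread count mirrors the 24 threads used in the post:

```python
# Rough CPU-only tokens/sec measurement with llama-cpp-python.
# model_path is a placeholder; n_gpu_layers=0 keeps every layer on the CPU.
import time
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3.3-70B-Instruct-Q8_0.gguf",  # placeholder GGUF path
    n_threads=24,     # match the physical cores you actually want to use
    n_gpu_layers=0,   # 0 layers offloaded -> pure CPU inference
    n_ctx=4096,
)

start = time.perf_counter()
out = llm("Write a 1000 word story about France's capital.", max_tokens=512)
elapsed = time.perf_counter() - start

tokens = out["usage"]["completion_tokens"]
print(f"{tokens} tokens in {elapsed:.1f}s -> {tokens / elapsed:.2f} tok/s")
```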
RX 7900 XT budget build - worth it? | 10 | I’m looking to put together an AI server that I can use for mainly running coding models. I am trying to keep the cost down so I will be using some spare parts I have on hand and also I don’t have 4k to drop on 2 x RTX 3090s (Canadian prices) right now.
I have an opportunity to buy **2 x RX 7900 XTs** for around $1800. Would buying two RX 7900 XTs with the hardware below be worth it? My goal is at least 15 t/s with 30B models, and if I can get that with larger models, even better.
**CPU:** Intel i7-6850K
**Motherboard:** ASUS x99-Deluxe II
**RAM:** 32GB
**OS:** Thinking of a proxmox setup running LXC containers or I could do Ubuntu server. I won't be using Windows.
**Software:** Looking to use ollama with open-webui
I have read mixed reports on support for AMD GPUs and ROCm however many of the comments I read were over a year ago. It would be helpful if anyone running a RX 7900 XT or XTX would comment on their experience and performance they get. Also is multi AMD GPU supported? | 2025-02-04T19:45:05 | https://www.reddit.com/r/LocalLLaMA/comments/1ihqi3u/rx_7900_xt_budget_build_worth_it/ | djex81 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihqi3u | false | null | t3_1ihqi3u | /r/LocalLLaMA/comments/1ihqi3u/rx_7900_xt_budget_build_worth_it/ | false | false | self | 10 | null |
What tools/methods do you use to pre-filter user input before it gets sent to API (to reduce token usage)? | 1 | [removed] | 2025-02-04T19:52:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ihqos6/what_toolsmethods_do_you_use_to_prefilter_user/ | MatthewNagy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihqos6 | false | null | t3_1ihqos6 | /r/LocalLLaMA/comments/1ihqos6/what_toolsmethods_do_you_use_to_prefilter_user/ | false | false | self | 1 | null |
QwQ 72B Preview when? | 1 | As y'all know, it's been a while since Qwen open sourced a 72B parameter, text-only, SOTA, reasoning model to rival DeepSeek R1 Llama 70B.
Seriously though, QwQ 32B Preview is still my go-to. The R1-distills seem to be heavily math-focused and they don't have a lot of world knowledge. | 2025-02-04T19:55:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ihqr3w/qwq_72b_preview_when/ | random-tomato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihqr3w | false | null | t3_1ihqr3w | /r/LocalLLaMA/comments/1ihqr3w/qwq_72b_preview_when/ | false | false | self | 1 | null |
Nvidia’s Project Digits Compared to the 2016 OpenAI Supercomputer | 1 | 2025-02-04T19:57:19 | https://youtube.com/shorts/BeekbLANVsk?si=rWcFnV3AL_PdewIy | electric_fungi | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1ihqsso | false | null | t3_1ihqsso | /r/LocalLLaMA/comments/1ihqsso/nvidias_project_digits_compared_to_the_2016/ | false | false | default | 1 | null |
|
Best options to perform RAG on scanned PDFs that didn't go through OCR yet? | 1 | I'm scanning documents using a ScanSnap scanner. While scansnap itself offers to scan the documents with OCR, the quality quite sucks (often makes mistakes even with clean text), so I don't want to use it.
Ultimately I want to use the documents for RAG, but I assume the quality of RAG will also somewhat depend on the internal structuring/layout of the OCR data.
So I'm wondering whether someone can recommend some battle-proven RAG solutions that also includes OCR to be used with scans of documents? | 2025-02-04T19:59:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ihqu9p/best_options_to_perform_rag_on_scanned_pdfs_that/ | MarinatedPickachu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihqu9p | false | null | t3_1ihqu9p | /r/LocalLLaMA/comments/1ihqu9p/best_options_to_perform_rag_on_scanned_pdfs_that/ | false | false | self | 1 | null |
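One common pre-processing route for the scanned-PDF case above is to rasterize each page and run a local OCR engine over it before chunking and embedding. A minimal sketch with pdf2image plus pytesseract; the library choice is purely illustrative (it says nothing about which RAG stack to use) and it needs the poppler and tesseract system packages installed:

```python
# Minimal OCR pass over a scanned PDF before feeding the text to a RAG pipeline.
# Requires the poppler and tesseract system packages plus pdf2image/pytesseract.
from pdf2image import convert_from_path
import pytesseract

pages = convert_from_path("scan.pdf", dpi=300)   # rasterize each page to a PIL image

page_texts = []
for i, page in enumerate(pages, start=1):
    text = pytesseract.image_to_string(page)     # plain-text OCR of one page
    page_texts.append(f"[page {i}]\n{text}")

with open("scan.txt", "w", encoding="utf-8") as f:
    f.write("\n\n".join(page_texts))             # chunk/embed this file downstream
```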
OpenAI deep research but it's open source | 710 | 2025-02-04T20:01:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ihqwnd/openai_deep_research_but_its_open_source/ | Thomjazz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihqwnd | false | null | t3_1ihqwnd | /r/LocalLLaMA/comments/1ihqwnd/openai_deep_research_but_its_open_source/ | false | false | 710 | {'enabled': False, 'images': [{'id': 'yv50gPyURCtGFt-G5gOs4PgQhGU0aQO9pL2NdtHzeQE', 'resolutions': [{'height': 49, 'url': 'https://external-preview.redd.it/VhiwZJj7J5TUIfA6ujpiUeYD8CI4AKINeo7sLJZlD5Q.jpg?width=108&crop=smart&auto=webp&s=1c151e94d1339eba1f7d4cda62b4cbdde93b2f07', 'width': 108}, {'height': 98, 'url': 'https://external-preview.redd.it/VhiwZJj7J5TUIfA6ujpiUeYD8CI4AKINeo7sLJZlD5Q.jpg?width=216&crop=smart&auto=webp&s=02b35089697ff793285e8b01f44b96831552f331', 'width': 216}, {'height': 145, 'url': 'https://external-preview.redd.it/VhiwZJj7J5TUIfA6ujpiUeYD8CI4AKINeo7sLJZlD5Q.jpg?width=320&crop=smart&auto=webp&s=b19bd7d1de868aa564988eed4a0477c00fe21cb1', 'width': 320}, {'height': 291, 'url': 'https://external-preview.redd.it/VhiwZJj7J5TUIfA6ujpiUeYD8CI4AKINeo7sLJZlD5Q.jpg?width=640&crop=smart&auto=webp&s=e05cfb506bcac565f349480a4b0d9ba18f4768b1', 'width': 640}, {'height': 436, 'url': 'https://external-preview.redd.it/VhiwZJj7J5TUIfA6ujpiUeYD8CI4AKINeo7sLJZlD5Q.jpg?width=960&crop=smart&auto=webp&s=d46a0562be5c3775cd99b30e604c50426524bc88', 'width': 960}, {'height': 491, 'url': 'https://external-preview.redd.it/VhiwZJj7J5TUIfA6ujpiUeYD8CI4AKINeo7sLJZlD5Q.jpg?width=1080&crop=smart&auto=webp&s=8355e1af40dcb722759d7b7841498e06d8479750', 'width': 1080}], 'source': {'height': 666, 'url': 'https://external-preview.redd.it/VhiwZJj7J5TUIfA6ujpiUeYD8CI4AKINeo7sLJZlD5Q.jpg?auto=webp&s=5b185bbe951e066e45878ac37b67535ac2c3f630', 'width': 1464}, 'variants': {}}]} |
||
Webinar about Nvidia DIGITS | 4 | I have no idea who can apply, but found this when I was looking for some information:
[https://www.linkedin.com/posts/pny-technologies-europe\_want-to-know-more-about-nvidia-project-digits-activity-7292471674799554560-RuNh](https://www.linkedin.com/posts/pny-technologies-europe_want-to-know-more-about-nvidia-project-digits-activity-7292471674799554560-RuNh) | 2025-02-04T20:25:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ihrhn7/webinar_about_nvidia_digits/ | Cane_P | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihrhn7 | false | null | t3_1ihrhn7 | /r/LocalLLaMA/comments/1ihrhn7/webinar_about_nvidia_digits/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'NSSJjEts0mGZg5TwmFPsfy-6gpNktuofaefP0cQmtDw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QS9HApx3G_KiyKa5OMF6ajl_DzY6C8mUGZpBXOVMYR8.jpg?width=108&crop=smart&auto=webp&s=5d5073bed0fd8296ce4a3daa1a3ec816dd5e9b04', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/QS9HApx3G_KiyKa5OMF6ajl_DzY6C8mUGZpBXOVMYR8.jpg?width=216&crop=smart&auto=webp&s=626bb1a2ef91ba86ea981f7b376305b28676fca6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/QS9HApx3G_KiyKa5OMF6ajl_DzY6C8mUGZpBXOVMYR8.jpg?width=320&crop=smart&auto=webp&s=15894947caa982423ab2b512dcb0f18f9ff35c9e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/QS9HApx3G_KiyKa5OMF6ajl_DzY6C8mUGZpBXOVMYR8.jpg?width=640&crop=smart&auto=webp&s=ab6d72c81c01c359c072fecccc55fb0921a66807', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/QS9HApx3G_KiyKa5OMF6ajl_DzY6C8mUGZpBXOVMYR8.jpg?width=960&crop=smart&auto=webp&s=dc26682d8a0f4dcf484c5e18eb7d0b810c7eb117', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/QS9HApx3G_KiyKa5OMF6ajl_DzY6C8mUGZpBXOVMYR8.jpg?width=1080&crop=smart&auto=webp&s=512570fc06f914e849fbede6c4f5b2561f61f2c2', 'width': 1080}], 'source': {'height': 580, 'url': 'https://external-preview.redd.it/QS9HApx3G_KiyKa5OMF6ajl_DzY6C8mUGZpBXOVMYR8.jpg?auto=webp&s=fd91176d459034e1b7fdefd7936b9362227c80a7', 'width': 1159}, 'variants': {}}]} |
Beyond Reality: New LLaMA 3.1 Fine-Tune for Multi-Choice Interactive Fiction | 64 | 2025-02-04T20:30:27 | https://huggingface.co/lolzinventor/Llama-3.1-8B-BeyondReality | lolzinventor | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ihrmf3 | false | null | t3_1ihrmf3 | /r/LocalLLaMA/comments/1ihrmf3/beyond_reality_new_llama_31_finetune_for/ | false | false | 64 | {'enabled': False, 'images': [{'id': 'X260OJNDA3oIQZDaiCc3bcL7etry9FOlk5X1BXqyf20', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/rYnB4HiKhgUDfmglDGVR2RBw6zihcpf49RKEaD8Ygz0.jpg?width=108&crop=smart&auto=webp&s=04ffd1421476659e6ef91c3dea6d6b08d5d5b436', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/rYnB4HiKhgUDfmglDGVR2RBw6zihcpf49RKEaD8Ygz0.jpg?width=216&crop=smart&auto=webp&s=b57efadf27c6d6c1139b75103b3e78cec535e094', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/rYnB4HiKhgUDfmglDGVR2RBw6zihcpf49RKEaD8Ygz0.jpg?width=320&crop=smart&auto=webp&s=ea2f8addcf7342d7ca7caa616dab100e2f470e70', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/rYnB4HiKhgUDfmglDGVR2RBw6zihcpf49RKEaD8Ygz0.jpg?width=640&crop=smart&auto=webp&s=df76ccdc3d22de19b2ce8c76b6e1ee50cc8b2d3e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/rYnB4HiKhgUDfmglDGVR2RBw6zihcpf49RKEaD8Ygz0.jpg?width=960&crop=smart&auto=webp&s=4df807980391339406b8e17f105afa90478c5fae', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/rYnB4HiKhgUDfmglDGVR2RBw6zihcpf49RKEaD8Ygz0.jpg?width=1080&crop=smart&auto=webp&s=dea7b2573f66710013554b4fffcfff1fba2a9c6c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/rYnB4HiKhgUDfmglDGVR2RBw6zihcpf49RKEaD8Ygz0.jpg?auto=webp&s=2e59e28e9e850fe5972afdfe593b906a6a3d7eda', 'width': 1200}, 'variants': {}}]} |
||
Is it possible to train 14 GB CSV with AI Studio from Google? | 1 | [removed] | 2025-02-04T20:38:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ihrt95/is_it_possible_to_train_14_gb_csv_with_ai_studio/ | Raini-corn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihrt95 | false | null | t3_1ihrt95 | /r/LocalLLaMA/comments/1ihrt95/is_it_possible_to_train_14_gb_csv_with_ai_studio/ | false | false | self | 1 | null |
What are the best use cases for falcon3-mamba-7b and how can I finetune it? | 1 | Hello guys, I love the Mamba architecture, but I could not find anything useful for my actual workflow in this size range of model (7-9B)
If you have any useful or interesting workflow feel free to share it here.
I wanted to finetune it with the Aya dataset to see if I can achieve performance close to Aya Expanse, if that's possible 😅
If you have a notebook or advice, it will be appreciated ❤️ | 2025-02-04T20:51:07 | https://www.reddit.com/r/LocalLLaMA/comments/1ihs4lb/what_are_the_best_usecases_for_falcon3mamba7b_and/ | Whiplashorus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihs4lb | false | null | t3_1ihs4lb | /r/LocalLLaMA/comments/1ihs4lb/what_are_the_best_usecases_for_falcon3mamba7b_and/ | false | false | self | 1 | null |
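On the fine-tuning question above, a very rough sketch of the usual Hugging Face route (TRL's SFTTrainer on the Aya dataset). The repo IDs `tiiuae/Falcon3-Mamba-7B-Base` and `CohereForAI/aya_dataset` are my best guesses and should be checked on the Hub, TRL argument names drift between releases, and a full 7B fine-tune needs far more memory than a single consumer GPU unless LoRA/QLoRA is added; treat this as a starting point rather than a working notebook:

```python
# Rough SFT starting point: Falcon3-Mamba on the Aya dataset via TRL.
# Repo IDs are assumptions; LoRA/quantization omitted for brevity.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

aya = load_dataset("CohereForAI/aya_dataset", split="train")

def to_text(example):
    # Collapse Aya's prompt/response pair into a single training string.
    return {"text": f"### Instruction:\n{example['inputs']}\n\n### Response:\n{example['targets']}"}

aya = aya.map(to_text, remove_columns=aya.column_names)

trainer = SFTTrainer(
    model="tiiuae/Falcon3-Mamba-7B-Base",   # assumed repo id
    train_dataset=aya,
    args=SFTConfig(
        output_dir="falcon3-mamba-aya-sft",
        max_seq_length=1024,
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
    ),
)
trainer.train()
```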
The Head of Meta's GenAI liked my LinkedIn comment. I'll put this on a plaque on my wall and will sign autographed printed copies upon request | 1 | 2025-02-04T20:52:13 | ahmad3565 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ihs5id | false | null | t3_1ihs5id | /r/LocalLLaMA/comments/1ihs5id/the_head_of_metas_genai_liked_my_linkedin_comment/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'CtxC56m5rMFWfZ3YnnXJi0Ob4TYOUTewrFqIBD5SnQU', 'resolutions': [{'height': 35, 'url': 'https://preview.redd.it/duqol2maq6he1.png?width=108&crop=smart&auto=webp&s=f9d52e69692f43e9a137e9504018f7fc2e29b732', 'width': 108}, {'height': 70, 'url': 'https://preview.redd.it/duqol2maq6he1.png?width=216&crop=smart&auto=webp&s=561241855515775b99a3577b8a7ce4b2575c8c6f', 'width': 216}, {'height': 105, 'url': 'https://preview.redd.it/duqol2maq6he1.png?width=320&crop=smart&auto=webp&s=9751b0f696bfbc38527eb82cd66db635859a6838', 'width': 320}], 'source': {'height': 205, 'url': 'https://preview.redd.it/duqol2maq6he1.png?auto=webp&s=67a10e88e78bad856290b993e23c735d20e9b707', 'width': 624}, 'variants': {}}]} |
|||
Using Structured Outputs with Reasoning Models | 6 | Has anybody experimented with any reasoning models and structured outputs or JSON mode? I'm curious about if:
- we can reliably still get the thinking response
- does this lower the "reasoning" performance?
I'm currently using the distilled r1 llama 8b for some tasks where I want an output following my JSON schema. While I know this isn't the same as the full R1 model, I see some performance improvement over phi-4 or the similar sized llama3.2 models when looking for typical chat completions without structured outputs.
I use ollama structured outputs to generate with a JSON schema. With structured outputs on a reasoning model, at a glance I'm not certain if there's a performance improvement to generating compared to without structured outputs, and I don't have an eval I can test this with yet.
I'd love to hear from anyone else's experience, or alternatively if there's a blog or paper that might have some insight | 2025-02-04T20:53:44 | https://www.reddit.com/r/LocalLLaMA/comments/1ihs6tw/using_structured_outputs_with_reasoning_models/ | Fallinggravity | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihs6tw | false | null | t3_1ihs6tw | /r/LocalLLaMA/comments/1ihs6tw/using_structured_outputs_with_reasoning_models/ | false | false | self | 6 | null |
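For reference, the pattern described in the post above (Ollama structured outputs driven by a Pydantic JSON schema) looks roughly like this. The model tag is illustrative, and giving the schema an explicit field for the reasoning is one way people try to keep some of the "thinking" text visible when the output is constrained to a schema:

```python
# Structured output from a local reasoning model via the Ollama Python client.
# Model tag is illustrative; the schema includes a field to capture reasoning explicitly.
from ollama import chat
from pydantic import BaseModel

class Answer(BaseModel):
    thinking: str     # ask the model to put its reasoning here
    answer: str
    confidence: float

resp = chat(
    model="deepseek-r1:8b",
    messages=[{"role": "user", "content": "Is 9.11 larger than 9.9? Explain briefly."}],
    format=Answer.model_json_schema(),   # constrain decoding to this JSON schema
)

parsed = Answer.model_validate_json(resp.message.content)
print(parsed.thinking)
print(parsed.answer, parsed.confidence)
```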
Comparing Arcee Virtuoso Medium 32b and Qwen2.5 32b Coder for coding tasks, how do they stack up against each other? | 4 | Has anyone had a chance to compare them yet? | 2025-02-04T21:03:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ihsfhx/comparing_arcee_virtuoso_medium_32b_and_qwen25/ | Sky_Linx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihsfhx | false | null | t3_1ihsfhx | /r/LocalLLaMA/comments/1ihsfhx/comparing_arcee_virtuoso_medium_32b_and_qwen25/ | false | false | self | 4 | null |
Trying to deploy a docker stack (open-webui, ollama, and watchtower) and have ollama predownload models | 4 | I am trying to set up a YAML file to create a docker stack to run open-webui, ollama and watchtower. I would like to have it download two models, but I am unsure of the syntax. I ran my YAML code through ChatGPT and it says it checks out, but it fails when I run the YAML code on Portainer. If I remove the code lines to download the models, the stack installs and runs properly.
`services:`
`open-webui:`
`container_name: open-webui`
`image: ghcr.io/open-webui/open-webui:latest`
`environment:`
`- MODEL_DOWNLOAD_DIR=/models`
`- OLLAMA_API_BASE_URL=http://ollama:11434`
`- OLLAMA_API_URL=http://ollama:11434`
`- LOG_LEVEL=debug`
`volumes:`
`- data:/data`
`- models:/models`
`- open-webui:/config`
`ports:`
`- "80:8080"`
**models:**
**- name: "stepenZEN/DeepSeek-R1-Distill-Qwen-32B-abliterated"**
**provider: huggingface**
**version: "latest"**
**- name: "huihui\_ai/deepseek-r1-abliterated"**
**provider: ollama**
**version: "latest"**
`depends_on:`
`- ollama`
`extra_hosts:`
`- "host.docker.internal:host-gateway"`
`networks:`
`- ollama-net`
`restart: unless-stopped`
`ollama:`
`container_name: ollama`
`image: ollama/ollama:latest`
`volumes:`
`- ollama:/root/.ollama`
`- models:/models`
`ports:`
`- "11434:11434"`
`networks:`
`- ollama-net`
`restart: unless-stopped`
`watchtower:`
`image: containrrr/watchtower:latest`
`container_name: watchtower`
`volumes:`
`- /var/run/docker.sock:/var/run/docker.sock`
`environment:`
`- WATCHTOWER_CLEANUP=true`
`- WATCHTOWER_POLL_INTERVAL=86400`
`networks:`
`- ollama-net`
`restart: unless-stopped`
`volumes:`
`data:`
`models:`
`ollama:`
`open-webui:`
`networks:`
`ollama-net:`
Do I have the syntax correct under Models? | 2025-02-04T21:15:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ihsq7q/trying_to_deploy_a_docker_stack_openwebui_ollama/ | WiKDMoNKY | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihsq7q | false | null | t3_1ihsq7q | /r/LocalLLaMA/comments/1ihsq7q/trying_to_deploy_a_docker_stack_openwebui_ollama/ | false | false | self | 4 | null |
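I can't say whether either image honors a `models:` list nested under a service like the bolded section above (as far as I know that is not a standard compose field for these images). A common workaround is to bring the stack up without that section and trigger the pulls against the running Ollama container afterwards. A hedged sketch using the Ollama Python client and the 11434 port mapping from the compose file above; the second model tag is just a placeholder:

```python
# After `docker compose up -d`, pre-pull models into the running Ollama container.
# Uses the host port mapping (11434) from the compose file; adjust tags as needed.
from ollama import Client

client = Client(host="http://localhost:11434")

# Note: a Hugging Face repo like stepenZEN/DeepSeek-R1-Distill-Qwen-32B-abliterated
# may need to be converted/imported separately; ollama pull works with library tags.
for tag in ("huihui_ai/deepseek-r1-abliterated", "llama3.2:3b"):
    print(f"pulling {tag} ...")
    client.pull(tag)          # downloads the model into the ollama volume
print("done")
```

The same thing can be done without Python via `docker exec ollama ollama pull <model>` once the container is up.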
llm fine tune for topic modeling | 1 | [removed] | 2025-02-04T21:25:45 | https://www.reddit.com/r/LocalLLaMA/comments/1ihszel/llm_fine_tune_for_topic_modeling/ | Numerous-Citron6990 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihszel | false | null | t3_1ihszel | /r/LocalLLaMA/comments/1ihszel/llm_fine_tune_for_topic_modeling/ | false | false | self | 1 | null |
Upgrading pc to run DeepSeek 1.58b quantization | 1 | 2025-02-04T21:26:24 | https://www.reddit.com/r/buildapc/comments/1iho3xk/upgrading_pc_to_run_deepseek_158b_quantization/ | spisplatta | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ihszyn | false | null | t3_1ihszyn | /r/LocalLLaMA/comments/1ihszyn/upgrading_pc_to_run_deepseek_158b_quantization/ | false | false | default | 1 | null |
|
Upgrading pc to run DeepSeek 1.58b quantization | 4 | My current build has a Ryzen 2700X, a GeForce 2080 Ti (11GB VRAM) and 2x16GB DDR4 RAM. I have two goals with my upgrade: firstly, I want to be able to run the DeepSeek 1.58-bit quantization from unsloth; secondly, I want better performance in general and in games.
For me it's not necessary to get great performance on DeepSeek; my thinking is that I would use cloud AI for most things and DeepSeek for the occasional sensitive query, or if ChatGPT is down or something. Thus it would be sufficient for me if I can reach a speed of \~1 token/second; however, if a small upgrade can get me a big boost, that is something I'm willing to consider.
The unsloth devs recommend vram+ram > 80gb, however my mobo supports 64gb ram max. So to reach that I would have to either upgrade mobo+cpu+ram or ram+gpu (2x16gb more and a 3090?). Of the two alternatives the former is more appealing to me as my cpu is kind of aging and holding me back in other tasks, and the fan for it is getting noisy. Thinking 9800x3d, 96gb @ 6400 ram. But is the Geforce 2080ti even able to provide any acceleration at all or is the vram too small for that? | 2025-02-04T21:34:52 | https://www.reddit.com/r/LocalLLaMA/comments/1iht7ge/upgrading_pc_to_run_deepseek_158b_quantization/ | spisplatta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iht7ge | false | null | t3_1iht7ge | /r/LocalLLaMA/comments/1iht7ge/upgrading_pc_to_run_deepseek_158b_quantization/ | false | false | self | 4 | null |
Is anyone regularly using PowerInfer? If so, what are your experiences with it? | 1 | [removed] | 2025-02-04T21:46:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ihthjc/is_anyone_regularly_using_powerinfer_if_so_what/ | Imaginary-Unit-3267 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihthjc | false | null | t3_1ihthjc | /r/LocalLLaMA/comments/1ihthjc/is_anyone_regularly_using_powerinfer_if_so_what/ | false | false | self | 1 | null |
Best coding model with high context length (fits within 24GB)? | 11 | Title. Want a good model to help me with long coding projects. | 2025-02-04T21:49:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ihtkco/best_coding_model_with_high_context_length_fits/ | Donovanth1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihtkco | false | null | t3_1ihtkco | /r/LocalLLaMA/comments/1ihtkco/best_coding_model_with_high_context_length_fits/ | false | false | self | 11 | null |
I just want to thank all organisations that did not stop open sourcing their results | 414 | For a moment, I feared that entities like ClosedAI and Anthropic might alter the open-source paradigm in the realm of Machine Learning. Fortunately, it appears they have not succeeded, and the open-source community has emerged victorious. While the battle is far from over, and we may need to fight even harder, this initial triumph belongs to open source, to all of us.
Let's extend our gratitude to every organization, large and small, that has shared their models, papers, and code with the community. This collaborative spirit is essential for democratizing AI and achieving Artificial General Intelligence (AGI) collectively. By ensuring that the benefits of AI are accessible to all, rather than being monopolized by a few egomaniacs, we foster a more equitable future.
Let us continue to promote open-source initiatives and leave behind those who resist the democratization of AI. By embracing transparency and collaboration, we can build a future where AI serves the interests of all. | 2025-02-04T21:55:28 | https://www.reddit.com/r/LocalLLaMA/comments/1ihtpa2/i_just_want_to_thank_all_organisations_that_did/ | __Maximum__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihtpa2 | false | null | t3_1ihtpa2 | /r/LocalLLaMA/comments/1ihtpa2/i_just_want_to_thank_all_organisations_that_did/ | false | false | self | 414 | null |
Deepseek internal thought proces real time translation in my native language. | 1 | [removed] | 2025-02-04T22:09:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ihu19j/deepseek_internal_thought_proces_real_time/ | KaleidoscopicVison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihu19j | false | null | t3_1ihu19j | /r/LocalLLaMA/comments/1ihu19j/deepseek_internal_thought_proces_real_time/ | false | false | self | 1 | null |
DeepSeek R1 vs o3-mini (in Cursor vs Windsurf) - Supabase Auth Integration | 0 | Here are the findings from the review of using o3-mini and R1 in Cursor vs in Windsurf, with a 240k+ token codebase. The task was to integrate Supabase Authentication into the app:
(For those who just prefer watching the review: [https://youtu.be/UocbxPjuyn4](https://youtu.be/UocbxPjuyn4)
**TL;DR: When using Cursor or Windsurf in a relatively large codebase, Claude 3.5 Sonnet still seems to be the best option**
\- o3-mini isn't practical yet, both in Cursor and Windsurf. It's buggy, error prone and doesn't produce the expected results
\- Claude 3.5 Sonnet is still the best coder amongst the 3 reasoning models in current tests: against o3-mini, R1 and Gemini 2 Flash Thinking
\- We might be approaching things wrong by coding with reasoning models, they're supposed to do the planning/architecting; e.g., R1 + 3.5 Sonnet are the best AI Coding duo in the Aider Polyglot benchmark (ref: [https://aider.chat/docs/leaderboards/](https://aider.chat/docs/leaderboards/) )
\- I'll see how R1 vs o3-mini compare as Software Architects, paired with DeepSeek V3 vs Claude 3.5 Sonnet. This should be an ultimate SOTA test, in Aider vs RooCode vs Cline
\- I believe we shouldn't miss the point and spend an equivalent amount of time using AI Coders as real developers. If it takes > 60% of the estimated time for a human developer, it's probably not a good model... or the prompt needs to be refined
\- if the prompt engineering + AI Coding takes as long as the human dev estimates, we're missing the point
\- Both Cursor and Windsurf are either optimized for Claude 3.5 Sonnet, or Claude 3.5 Sonnet is just extremely optimized for coding and is probably better named Claude 3.5 Sonnet Coder. We know it's a good coder, but it shouldn't theoretically be competing with R1 since it's not a reasoning model
\- it would be great to see how o3-mini-high performs in both Cursor and Windsurf
**Please share your experience with a larger codebase in any AI Coder :)**
Review link: [https://youtu.be/UocbxPjuyn4](https://youtu.be/UocbxPjuyn4) | 2025-02-04T22:17:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ihu7u9/deepseek_r1_vs_o3mini_in_cursor_vs_windsurf/ | marvijo-software | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihu7u9 | false | null | t3_1ihu7u9 | /r/LocalLLaMA/comments/1ihu7u9/deepseek_r1_vs_o3mini_in_cursor_vs_windsurf/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'Pqsp-BscKR-QkM-8pbIy2yrn8VabUGvf5iNTQYmlXSc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/87gfmTldCFnDro-AzmXSiJzlmNN0Ed6gPeiNDIuZhT8.jpg?width=108&crop=smart&auto=webp&s=f93a6671b853cbcad64df1bcf92a95ebb1fff284', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/87gfmTldCFnDro-AzmXSiJzlmNN0Ed6gPeiNDIuZhT8.jpg?width=216&crop=smart&auto=webp&s=6373839d715c41717f89194814d8483d84de6255', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/87gfmTldCFnDro-AzmXSiJzlmNN0Ed6gPeiNDIuZhT8.jpg?width=320&crop=smart&auto=webp&s=33f3f6fe4216a7a498ef4056e5953ee7cf073b51', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/87gfmTldCFnDro-AzmXSiJzlmNN0Ed6gPeiNDIuZhT8.jpg?auto=webp&s=e44099250dd4049df9cde13130a37bdaa30c494c', 'width': 480}, 'variants': {}}]} |
How do you get uncensored mistral nemo? | 0 | I've gotten into local LMs recently and looked into mistral and have been a fan of the finetunes during creative writing and such.
But I heard that Mistral is meant to be completely uncensored; yet when I run it on ollama and ask the model, FOR RESEARCH, how to make an X weapon, it tells me it's illegal and against international law.
Obviously I am asking it the most insane thing but still, I wanted to test the AI. | 2025-02-04T22:19:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ihu9i9/how_do_you_get_uncensored_mistral_nemo/ | Dapper_Gate_3007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihu9i9 | false | null | t3_1ihu9i9 | /r/LocalLLaMA/comments/1ihu9i9/how_do_you_get_uncensored_mistral_nemo/ | false | false | self | 0 | null |
It seems like OpenAI is publishing their research? | 1 | [removed] | 2025-02-04T22:32:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ihul7g/it_seems_like_openai_is_publishing_their_research/ | LatentAttention | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihul7g | false | null | t3_1ihul7g | /r/LocalLLaMA/comments/1ihul7g/it_seems_like_openai_is_publishing_their_research/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'tH_zT0HWOIuJlViTq3GCiQVSbUqWBy5QgoggSjWzehg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/GWQN5IBEsj-Aid-Bjy2VomnsdkrRS3A5tGXJDwUxa80.jpg?width=108&crop=smart&auto=webp&s=2ad8a10e5242df53d1a64dfa118d7fadd70f9114', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/GWQN5IBEsj-Aid-Bjy2VomnsdkrRS3A5tGXJDwUxa80.jpg?width=216&crop=smart&auto=webp&s=dfa538c30ef8254745ae738b9175118534394b97', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/GWQN5IBEsj-Aid-Bjy2VomnsdkrRS3A5tGXJDwUxa80.jpg?width=320&crop=smart&auto=webp&s=557a2f4dd7f11b137c7b784fbae3ab1b22a1f7c9', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/GWQN5IBEsj-Aid-Bjy2VomnsdkrRS3A5tGXJDwUxa80.jpg?auto=webp&s=1a4ba37b345a126203759cbac312311f3b535e4c', 'width': 480}, 'variants': {}}]} |
I made a program to let two LLM agents talk to each other | 35 | I recently got really into local LLMs. Naturally, I wondered what would happen if I made two LLMs with different system prompts talk to each other. This eventually lead me to writing this project. I'm genuinely quite proud of where it is now. It's a CLI application, but I've worked quite hard on the UI.
Link to project: https://github.com/famiu/llm_conversation
Feel free to check it out. Give it a star if you like it. You can get some genuinely hilarious interactions with the right system prompts. | 2025-02-04T22:37:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ihuoym/i_made_a_program_to_let_two_llm_agents_talk_to/ | -famiu- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihuoym | false | null | t3_1ihuoym | /r/LocalLLaMA/comments/1ihuoym/i_made_a_program_to_let_two_llm_agents_talk_to/ | false | false | self | 35 | {'enabled': False, 'images': [{'id': 'mvwdtZHPbZomRtBbk2MjQGDhl9yrA9JAzsZ9fmMP7d4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zLr1c4G8aGuXiDlDDylNOMLBnzZD88nKHuoVP23UVBY.jpg?width=108&crop=smart&auto=webp&s=e23a88c063cad1b801ec1e83c5d6417976a17cbf', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zLr1c4G8aGuXiDlDDylNOMLBnzZD88nKHuoVP23UVBY.jpg?width=216&crop=smart&auto=webp&s=7fb33f316ed46acd5d8c9bbd37fc20c03d2d1b94', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zLr1c4G8aGuXiDlDDylNOMLBnzZD88nKHuoVP23UVBY.jpg?width=320&crop=smart&auto=webp&s=03a83b2ccffd2756c1de3174127a0887cf251935', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zLr1c4G8aGuXiDlDDylNOMLBnzZD88nKHuoVP23UVBY.jpg?width=640&crop=smart&auto=webp&s=5a8efe1a5d886a5b7deae4046a4efaddae9d98fe', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zLr1c4G8aGuXiDlDDylNOMLBnzZD88nKHuoVP23UVBY.jpg?width=960&crop=smart&auto=webp&s=592772ec8adfd14e15ebea1170b705b804fea8d4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zLr1c4G8aGuXiDlDDylNOMLBnzZD88nKHuoVP23UVBY.jpg?width=1080&crop=smart&auto=webp&s=4bd5c38906da877a6807276a7cede029fcf853e2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zLr1c4G8aGuXiDlDDylNOMLBnzZD88nKHuoVP23UVBY.jpg?auto=webp&s=4b0eec38627295a9d6831e3bbb5110f4f7521c3a', 'width': 1200}, 'variants': {}}]} |
Beginner questions | 1 | I recently installed LM Studio on my PC with an RTX 3060 with 12GB of VRAM, and I also have 32GB of RAM. I have downloaded a couple of models with 7B and 8B parameters, and so far all good; only that some models like DeepSeek-R1-Distill-Llama-8B tend to forget what we first discussed if I continue the conversation for long enough.
Can someone explain what difference it makes to use a larger context length?
Also is there any setting within LMstudio you would recommend I change?
What difference does setting a system prompt make?
Lastly, can someone explain the difference between a distilled model and, let's say, the distilled DeepSeek R1 Llama; is it just Llama trained with DeepSeek R1, and if so, is there any actual DeepSeek R1 model distilled to 7B or 8B? | 2025-02-04T22:37:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ihup92/beginner_questions/ | svx23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihup92 | false | null | t3_1ihup92 | /r/LocalLLaMA/comments/1ihup92/beginner_questions/ | false | false | self | 1 | null |
Distilled R1 isn’t bad – you’re missing the point | 17 | If you’re running distilled R1 models and shitting on them for being “dumb,” you’re using them wrong. They’re **designed** to be small/fast, not to be DeepSeek R1.
Tired of explaining this to people, so I wrote this article instead. It's not supposed to be a full in-depth guide on distillation, just an overview of what it is.
Article: [How DeepSeek R1’s Distilled Models Differ from the Real Thing: A Guide to Knowledge Distillation ](https://medium.com/@subhashve4/how-deepseek-r1s-distilled-models-differ-from-the-real-thing-a-guide-to-knowledge-distillation-0d3d46dd8a9e)
| 2025-02-04T22:41:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ihusd5/distilled_r1_isnt_bad_youre_missing_the_point/ | Reader3123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihusd5 | false | null | t3_1ihusd5 | /r/LocalLLaMA/comments/1ihusd5/distilled_r1_isnt_bad_youre_missing_the_point/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': 'ltMspIIagLHeeFplL8kSHwKObaaWJgHFCCr7BMQqHXA', 'resolutions': [{'height': 111, 'url': 'https://external-preview.redd.it/pPUgNsad0c51U5Bi47CaKXcYlFkS0-zlqJBxN_htCIE.jpg?width=108&crop=smart&auto=webp&s=350d06fa48c1edd0483f8d5d23837f9c1543144b', 'width': 108}, {'height': 222, 'url': 'https://external-preview.redd.it/pPUgNsad0c51U5Bi47CaKXcYlFkS0-zlqJBxN_htCIE.jpg?width=216&crop=smart&auto=webp&s=ad4da66c7de63380a4add200da6712e55ef7004e', 'width': 216}, {'height': 329, 'url': 'https://external-preview.redd.it/pPUgNsad0c51U5Bi47CaKXcYlFkS0-zlqJBxN_htCIE.jpg?width=320&crop=smart&auto=webp&s=6cb610a9ebc79c658f38bffd4e31390afe843652', 'width': 320}, {'height': 659, 'url': 'https://external-preview.redd.it/pPUgNsad0c51U5Bi47CaKXcYlFkS0-zlqJBxN_htCIE.jpg?width=640&crop=smart&auto=webp&s=b916ababa43a7338662f2e2d078e5e07059e6df5', 'width': 640}], 'source': {'height': 682, 'url': 'https://external-preview.redd.it/pPUgNsad0c51U5Bi47CaKXcYlFkS0-zlqJBxN_htCIE.jpg?auto=webp&s=834f0c67b03a189746d7354e9284cb3797615413', 'width': 662}, 'variants': {}}]} |
Recommended GPU + model for Python development? | 1 | Hey everyone,
I've been a long time user (and subscriber) of both Claude and ChatGPT, and I've recently started experimenting with some of the more compact models that can run on my 4060's 8GB of VRAM using LM Studio. I've mainly been testing out DeepSeek R1 Distill Qwen 7B (Q4\_K\_M) and, while I'm impressed with what I can squeeze out of a basic card, it’s left me wanting more.
I understand that increasing VRAM generally leads to better performance with larger models, but I'm curious about the point of diminishing returns. I'm willing to spend up to £700 (\~$900 USD) on a new Nvidia GPU (or maybe a bit more if I go for a full Mac Mx based machine), but I want to make sure I'm targeting the right hardware for my needs.
About 90% of my LLM usage revolves around Python—along with some HTML, CSS, and JS. I'm not interested in general-purpose LLM usage; I need a setup that allows me to work with code locally (something I can’t do with services like ChatGPT or Claude). I've read about models like Code Llama, but I'm not sure which ones I should consider or which GPU will best support them.
Any tips would be greatly appreciated! | 2025-02-04T22:48:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ihuy61/recommended_gpu_model_for_python_development/ | luhkomo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihuy61 | false | null | t3_1ihuy61 | /r/LocalLLaMA/comments/1ihuy61/recommended_gpu_model_for_python_development/ | false | false | self | 1 | null |
Openwebui slow? | 1 | [removed] | 2025-02-04T22:53:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ihv2ho/openwebui_slow/ | GiveMeARedditUsernam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihv2ho | false | null | t3_1ihv2ho | /r/LocalLLaMA/comments/1ihv2ho/openwebui_slow/ | false | false | self | 1 | null |
AI systems with 'unacceptable risk' are now banned in the EU | TechCrunch | 129 | 2025-02-04T22:53:21 | https://techcrunch.com/2025/02/02/ai-systems-with-unacceptable-risk-are-now-banned-in-the-eu/ | Darkstar4125 | techcrunch.com | 1970-01-01T00:00:00 | 0 | {} | 1ihv2lz | false | null | t3_1ihv2lz | /r/LocalLLaMA/comments/1ihv2lz/ai_systems_with_unacceptable_risk_are_now_banned/ | false | false | 129 | {'enabled': False, 'images': [{'id': '_r2cgjLTQ2TdTjwXCnc-h8b4mWezu84_HRHhhrysLvw', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/3Nw1ySi8I9t9qNGxMrnWh6X9nJQqmR4w8i1-IlVJwSU.jpg?width=108&crop=smart&auto=webp&s=37d5b96e71696e17faa77b35682b79bf60e525cf', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/3Nw1ySi8I9t9qNGxMrnWh6X9nJQqmR4w8i1-IlVJwSU.jpg?width=216&crop=smart&auto=webp&s=5ebea9dad58bdb63b3a30ea401dbd4e71be5607d', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/3Nw1ySi8I9t9qNGxMrnWh6X9nJQqmR4w8i1-IlVJwSU.jpg?width=320&crop=smart&auto=webp&s=9c4ffaf800acf30224fceacff6f3f6e30671491c', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/3Nw1ySi8I9t9qNGxMrnWh6X9nJQqmR4w8i1-IlVJwSU.jpg?width=640&crop=smart&auto=webp&s=aa55aebd954efb86ff5114a964670c3f21850a30', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/3Nw1ySi8I9t9qNGxMrnWh6X9nJQqmR4w8i1-IlVJwSU.jpg?width=960&crop=smart&auto=webp&s=8dc942b93a6acc2df7855481e491ee9ee65b0915', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/3Nw1ySi8I9t9qNGxMrnWh6X9nJQqmR4w8i1-IlVJwSU.jpg?width=1080&crop=smart&auto=webp&s=aa8c97157a95da221a5ccd21adab9164fd700d96', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/3Nw1ySi8I9t9qNGxMrnWh6X9nJQqmR4w8i1-IlVJwSU.jpg?auto=webp&s=09a2ec90b53609d9b0915bb7278ff8e3a921bc15', 'width': 1200}, 'variants': {}}]} |
||
Benchmarking Llama on Mobile using React Native and ExecuTorch | 16 | 2025-02-04T22:56:52 | Vegetable_Sun_9225 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ihv5fw | false | null | t3_1ihv5fw | /r/LocalLLaMA/comments/1ihv5fw/benchmarking_llama_on_mobile_using_react_native/ | false | false | 16 | {'enabled': True, 'images': [{'id': 'GeypUkoESRlOK_3gt84bZib36DkirFO9YeiEV1Tkzv0', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/6ddov9nxc7he1.jpeg?width=108&crop=smart&auto=webp&s=4c7dd552be680b1418fd13e26e131898d465f6fa', 'width': 108}, {'height': 116, 'url': 'https://preview.redd.it/6ddov9nxc7he1.jpeg?width=216&crop=smart&auto=webp&s=137c632c09846b03d7a2618742be07bc63994617', 'width': 216}, {'height': 172, 'url': 'https://preview.redd.it/6ddov9nxc7he1.jpeg?width=320&crop=smart&auto=webp&s=dae7ca1750dd15488a029ba9dace97decceee01e', 'width': 320}, {'height': 345, 'url': 'https://preview.redd.it/6ddov9nxc7he1.jpeg?width=640&crop=smart&auto=webp&s=3fced593de4ed3f9d2aff6b43a9cb4b73c5aae69', 'width': 640}, {'height': 517, 'url': 'https://preview.redd.it/6ddov9nxc7he1.jpeg?width=960&crop=smart&auto=webp&s=39b0742d63917ef9a1ca54e44fa4025615ad8ba6', 'width': 960}, {'height': 582, 'url': 'https://preview.redd.it/6ddov9nxc7he1.jpeg?width=1080&crop=smart&auto=webp&s=5698830597a3003ba67b3a400c3559b15345575f', 'width': 1080}], 'source': {'height': 896, 'url': 'https://preview.redd.it/6ddov9nxc7he1.jpeg?auto=webp&s=fca58c6feb266adb44907c8087838ca50e4adaa8', 'width': 1662}, 'variants': {}}]} |
|||
Running Llama on Windows 98 | 1 | 2025-02-04T23:08:50 | https://blog.exolabs.net/day-4/ | alberto-m-dev | blog.exolabs.net | 1970-01-01T00:00:00 | 0 | {} | 1ihvfe2 | false | null | t3_1ihvfe2 | /r/LocalLLaMA/comments/1ihvfe2/running_llama_on_windows_98/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'KKeFcJkZfjr-ksuzmJ409u5hYyJXEQFk3xHZ3rikALQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/UJ6PeagdF_dVLD9YZSlfurjZomVxORnj6p4cYva_MrM.jpg?width=108&crop=smart&auto=webp&s=510cf40e9b6cf40b58efccf063c1c9da74117384', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/UJ6PeagdF_dVLD9YZSlfurjZomVxORnj6p4cYva_MrM.jpg?width=216&crop=smart&auto=webp&s=3e6bd5caac678296a4e305593724f91769e737c6', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/UJ6PeagdF_dVLD9YZSlfurjZomVxORnj6p4cYva_MrM.jpg?width=320&crop=smart&auto=webp&s=ce87b289ea788cb1382bad98c2fecd50374de69a', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/UJ6PeagdF_dVLD9YZSlfurjZomVxORnj6p4cYva_MrM.jpg?width=640&crop=smart&auto=webp&s=50fbc439702368cb6375fd525ca3253357d1a101', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/UJ6PeagdF_dVLD9YZSlfurjZomVxORnj6p4cYva_MrM.jpg?width=960&crop=smart&auto=webp&s=3f85de7bc5203419cc815a77484af1d0d0cf6d07', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/UJ6PeagdF_dVLD9YZSlfurjZomVxORnj6p4cYva_MrM.jpg?width=1080&crop=smart&auto=webp&s=7d659aa185da3a4466194aacc4759637e6f480ff', 'width': 1080}], 'source': {'height': 1350, 'url': 'https://external-preview.redd.it/UJ6PeagdF_dVLD9YZSlfurjZomVxORnj6p4cYva_MrM.jpg?auto=webp&s=dfdb6bfae4e058af88d5bcbbdd9c5da202e1f283', 'width': 2400}, 'variants': {}}]} |
||
Erm..? | 0 | this sounds like SoulKiller from Cyberpunk. Open source Johnathan Silverleg??? | 2025-02-04T23:22:45 | QuackerEnte | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ihvqjc | false | null | t3_1ihvqjc | /r/LocalLLaMA/comments/1ihvqjc/erm/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'xE33TlYLBzJ4o6GUFyrjgAbRP-CEDyUEn4QRWBDfejg', 'resolutions': [{'height': 157, 'url': 'https://preview.redd.it/kivqs9lkh7he1.png?width=108&crop=smart&auto=webp&s=f58d0d200b777b65bd473b0fc9c4903cf963ba3d', 'width': 108}, {'height': 315, 'url': 'https://preview.redd.it/kivqs9lkh7he1.png?width=216&crop=smart&auto=webp&s=8559318417712f6e5d113bf236c8e7230c8487f6', 'width': 216}, {'height': 466, 'url': 'https://preview.redd.it/kivqs9lkh7he1.png?width=320&crop=smart&auto=webp&s=cb00f162fd2a12bb7197c93473abef1327f1e274', 'width': 320}, {'height': 933, 'url': 'https://preview.redd.it/kivqs9lkh7he1.png?width=640&crop=smart&auto=webp&s=9582111f3c5faf75e7871cf07fe1f4b0f28ca441', 'width': 640}, {'height': 1400, 'url': 'https://preview.redd.it/kivqs9lkh7he1.png?width=960&crop=smart&auto=webp&s=f9e9ee3594f60c5956dc692c658a4690aa79b25b', 'width': 960}, {'height': 1575, 'url': 'https://preview.redd.it/kivqs9lkh7he1.png?width=1080&crop=smart&auto=webp&s=8ece57ffbc110484f7cef11ecd15beeab3477a93', 'width': 1080}], 'source': {'height': 1575, 'url': 'https://preview.redd.it/kivqs9lkh7he1.png?auto=webp&s=47cece535ea7ed6d8561085e487084a0c706ec66', 'width': 1080}, 'variants': {}}]} |
||
New (Evil) Thinking Model: Skynet-3B | 82 | **Hi everyone,**
Today, we are releasing a new experimental model: **Art-Skynet-3B**, fine-tuned on **LLaMa 3.2 3B**.
This experiment explores developing models capable of reasoning like **DeepSeek-r1** and **OpenAI-o3**, with a long-term goal of *world domination* (as a test, of course 😉).
🔹 **Model card:** [https://huggingface.co/AGI-0/Art-Skynet-3B](https://huggingface.co/AGI-0/Art-Skynet-3B) *(Leave a like on the repo if you enjoy this model!)*
🔹 **Demo:** [https://huggingface.co/spaces/freeCS-dot-org/Art3B-chat](https://huggingface.co/spaces/freeCS-dot-org/Art3B-chat)
Let me know what you think! | 2025-02-04T23:23:42 | https://www.reddit.com/r/LocalLLaMA/comments/1ihvrb8/new_evil_thinking_model_skynet3b/ | GuiltyBookkeeper4849 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihvrb8 | false | null | t3_1ihvrb8 | /r/LocalLLaMA/comments/1ihvrb8/new_evil_thinking_model_skynet3b/ | false | false | self | 82 | {'enabled': False, 'images': [{'id': 'rThQhRYsFNYSRcrKYHnzkRG8N5E3s9R2FnlnzfVRduQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/awQvYX_Wryuoc2aMNQtrydbOFktVkDtNuuiIOh3J-bU.jpg?width=108&crop=smart&auto=webp&s=c8fd0ed203cb1cfcaceb9551e8d2d29c910283a0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/awQvYX_Wryuoc2aMNQtrydbOFktVkDtNuuiIOh3J-bU.jpg?width=216&crop=smart&auto=webp&s=731912f87bd4a360e68b5fd198dbb21991fb26f8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/awQvYX_Wryuoc2aMNQtrydbOFktVkDtNuuiIOh3J-bU.jpg?width=320&crop=smart&auto=webp&s=ec934186e0354d1217dcb76b6cfcf1f84cc9b538', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/awQvYX_Wryuoc2aMNQtrydbOFktVkDtNuuiIOh3J-bU.jpg?width=640&crop=smart&auto=webp&s=79965e770e98f0cbca403aa37850e60567bdeba0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/awQvYX_Wryuoc2aMNQtrydbOFktVkDtNuuiIOh3J-bU.jpg?width=960&crop=smart&auto=webp&s=203843b6e924aa08070500c1ee5f1d67f3527d43', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/awQvYX_Wryuoc2aMNQtrydbOFktVkDtNuuiIOh3J-bU.jpg?width=1080&crop=smart&auto=webp&s=40eed6ea4a1b2f558c1793fefe6db5b0af6fb1a8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/awQvYX_Wryuoc2aMNQtrydbOFktVkDtNuuiIOh3J-bU.jpg?auto=webp&s=2ab9ccce1935500150e08776fd271bc523cccca7', 'width': 1200}, 'variants': {}}]} |
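For anyone who wants to poke at the release above locally rather than through the demo Space, a LLaMA-3.2-based fine-tune like this should load with the standard transformers text-generation pipeline. A minimal sketch, untested against this particular checkpoint and assuming the repo ships a normal chat template:

```python
# Minimal sketch: chat with the released checkpoint via transformers.
# Untested against this exact repo; assumes a standard LLaMA-3.2-style chat template.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="AGI-0/Art-Skynet-3B",  # repo id from the model card linked above
    device_map="auto",
    torch_dtype="auto",
)

messages = [{"role": "user", "content": "What is your long-term plan, honestly?"}]
out = pipe(messages, max_new_tokens=256)
print(out[0]["generated_text"][-1]["content"])
```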
What parameter model is the online chat version of Deepseek using? Is it the 671B version? | 0 | . | 2025-02-04T23:39:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ihw3we/what_parameter_model_is_the_online_chat_version/ | Prestigious_Track_22 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihw3we | false | null | t3_1ihw3we | /r/LocalLLaMA/comments/1ihw3we/what_parameter_model_is_the_online_chat_version/ | false | false | self | 0 | null |
How to give a local AI read access to a single folder? | 0 | I use copilot at work, and it has access to my computer and the company one drive. It’s impressive and kinda scary Microsoft has all that access. I don’t like anything having that much access to entire systems. However, I do like to have a local llm read access to a single folder. What are some good options? | 2025-02-04T23:59:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ihwj90/how_to_give_a_local_ai_read_access_to_a_single/ | beegtuna | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihwj90 | false | null | t3_1ihwj90 | /r/LocalLLaMA/comments/1ihwj90/how_to_give_a_local_ai_read_access_to_a_single/ | false | false | self | 0 | null |
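Besides ready-made options (most local chat UIs have some notion of a knowledge or documents folder), the DIY version of "read access to exactly one folder" is simply code that walks that folder and pastes the file contents into the prompt; the model itself never touches the filesystem, so the script controls exactly what it can see. A small sketch with the Ollama Python client; the folder path, glob pattern, and model tag are placeholders:

```python
# Give a local model "read access" to one folder by injecting its files as context.
# The model never reads the disk itself; this script decides exactly what it sees.
from pathlib import Path
from ollama import chat

FOLDER = Path.home() / "notes"            # the single folder you allow
docs = []
for p in sorted(FOLDER.glob("**/*.md")):  # adjust the glob for your file types
    docs.append(f"--- {p.name} ---\n{p.read_text(encoding='utf-8', errors='ignore')}")

question = "Which of my notes mention GPU prices?"
resp = chat(
    model="llama3.2:3b",                  # placeholder model tag
    messages=[
        {"role": "system", "content": "Answer only from the provided documents.\n\n" + "\n\n".join(docs)},
        {"role": "user", "content": question},
    ],
)
print(resp.message.content)
```

For anything beyond a handful of small files this naive context-stuffing should give way to chunking and embeddings (RAG), but the access-control idea stays the same.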
Can your local LLM beat DeepSeek R1 in a 24-hour coding marathon? | 1 | [removed] | 2025-02-05T00:25:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ihx3mb/can_your_local_llm_beat_deepseek_r1_in_a_24hour/ | Confident_Sector_249 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihx3mb | false | null | t3_1ihx3mb | /r/LocalLLaMA/comments/1ihx3mb/can_your_local_llm_beat_deepseek_r1_in_a_24hour/ | false | false | self | 1 | null |
DeepSeek banned from Australian Government Devices | 182 | 2025-02-05T00:36:23 | https://www.abc.net.au/news/2025-02-04/deepseek-banned-from-federal-government-devices/104896770 | sammcj | abc.net.au | 1970-01-01T00:00:00 | 0 | {} | 1ihxbpe | false | null | t3_1ihxbpe | /r/LocalLLaMA/comments/1ihxbpe/deepseek_banned_from_australian_government_devices/ | false | false | 182 | {'enabled': False, 'images': [{'id': 'UyANgrl_0LX4dGVfXye6w2aWZoIQtvrEX3ih-o9OzSo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/UT58snSyDCf1en1PY0Usy8X0-HIt9R0OcuYrukJSr_0.jpg?width=108&crop=smart&auto=webp&s=d05957fd2a6d1d50ab0381e0db973c1e1927e4e7', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/UT58snSyDCf1en1PY0Usy8X0-HIt9R0OcuYrukJSr_0.jpg?width=216&crop=smart&auto=webp&s=3f3f63667e32b832dfa82197291afa95b27a4ea3', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/UT58snSyDCf1en1PY0Usy8X0-HIt9R0OcuYrukJSr_0.jpg?width=320&crop=smart&auto=webp&s=1b1c17bad24c16e85be625b9b28244d92c026092', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/UT58snSyDCf1en1PY0Usy8X0-HIt9R0OcuYrukJSr_0.jpg?width=640&crop=smart&auto=webp&s=f710938759a6b3631d87cc438df355ecc2726248', 'width': 640}], 'source': {'height': 485, 'url': 'https://external-preview.redd.it/UT58snSyDCf1en1PY0Usy8X0-HIt9R0OcuYrukJSr_0.jpg?auto=webp&s=8e8e31071fffdb93696ca10188849eab2586b7fc', 'width': 862}, 'variants': {}}]} |
||
Deepseek 2.51bit GGUF working on old hardware | 1 | [removed] | 2025-02-05T01:00:46 | https://www.reddit.com/r/LocalLLaMA/comments/1ihxuf4/deepseek_251bit_gguf_working_on_old_hardware/ | Zealousideal-Ant1097 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihxuf4 | false | null | t3_1ihxuf4 | /r/LocalLLaMA/comments/1ihxuf4/deepseek_251bit_gguf_working_on_old_hardware/ | false | false | self | 1 | null |
Cheap, Effectual, Robot Arms Open Weight model, meet Pi0 | 56 | [https://www.physicalintelligence.company/download/pi0.pdf](https://www.physicalintelligence.company/download/pi0.pdf)
[https://huggingface.co/blog/pi0](https://huggingface.co/blog/pi0)
open weights robotics VLM (SigLIP 400M + Gemma 2.6B at 50hz)
it can fold laundry; a pair of robot arms can be built with a BOM of less than $300
Build your robot arms:
[https://github.com/TheRobotStudio/SO-ARM100](https://github.com/TheRobotStudio/SO-ARM100) | 2025-02-05T01:06:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ihxypo/cheap_effectual_robot_arms_open_weight_model_meet/ | bennmann | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihxypo | false | null | t3_1ihxypo | /r/LocalLLaMA/comments/1ihxypo/cheap_effectual_robot_arms_open_weight_model_meet/ | false | false | self | 56 | null |
Deepseek Outage | 0 | When do u guts think will deepseek fix the outage issue, is this what's causing the frequent "server is busy" issue? | 2025-02-05T01:10:58 | https://www.reddit.com/r/LocalLLaMA/comments/1ihy26m/deepseek_outage/ | Ichihara02 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihy26m | false | null | t3_1ihy26m | /r/LocalLLaMA/comments/1ihy26m/deepseek_outage/ | false | false | self | 0 | null |
Gemma 2 9b abliteraded better than Deepseek 14b | 5 | I downloaded both models with their respective uncensored versions, configured to 8000 context length for both. Not only does Deepseek take longer to respond, but it also has inconsistencies when telling stories or translating into another language. Meanwhile, Gemma, with fewer parameters, outperforms it significantly in both speed and coherence. What am I doing wrong? I downloaded the recommended versions for my system. (2060 12gb 32ram R5 4500) | 2025-02-05T01:21:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ihya6b/gemma_2_9b_abliteraded_better_than_deepseek_14b/ | Nevaditew | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihya6b | false | null | t3_1ihya6b | /r/LocalLLaMA/comments/1ihya6b/gemma_2_9b_abliteraded_better_than_deepseek_14b/ | false | false | self | 5 | null |
How do you train an LLM to draw pictures using ASCII? | 3 | I want my LLM to be able to make simple shapes like squares, triangles, etc. I've added it to the dataset; however, the JSON reformats it to be spread out, ruining the art.
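For context, here is a minimal sketch of the kind of training pair I mean, with the art stored as a single JSON string using explicit \n escapes so the line breaks survive the round trip (the shape and wording are just an example):

```python
import json

triangle = (
    "  *  \n"
    " *** \n"
    "*****"
)

pair = {
    "instruction": "Draw a triangle using ASCII characters.",
    "output": triangle,
}

# json.dumps escapes the newlines as \n, so the layout is preserved inside the JSON
line = json.dumps(pair)
print(line)
print(json.loads(line)["output"])  # prints the triangle with its original line breaks
```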
How should I format the drawings in the JSON so that the AI knows how to output it correctly? | 2025-02-05T01:31:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ihyhcu/how_do_you_train_an_llm_to_draw_pictures_using/ | Great-Investigator30 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihyhcu | false | null | t3_1ihyhcu | /r/LocalLLaMA/comments/1ihyhcu/how_do_you_train_an_llm_to_draw_pictures_using/ | false | false | self | 3 | null |
Open Euro LLM launches | 142 | 2025-02-05T01:49:49 | https://openeurollm.eu/launch-press-release | SuchSeries8760 | openeurollm.eu | 1970-01-01T00:00:00 | 0 | {} | 1ihyutf | false | null | t3_1ihyutf | /r/LocalLLaMA/comments/1ihyutf/open_euro_llm_launches/ | false | false | default | 142 | null |
|
DeepSeek R1 plugins?? | 0 | I got DeepSeek R1 14B installed locally on my desktop (on the local network, as my laptop doesn't have the juice to run it) and I was going back and forth with it when it mentioned something about plugins... does anyone here know anything about this? Here is what it said:
Ah, great question! Plugins are powerful tools that extend my functionality by adding new features or capabilities. Here’s how you can use them:
How Plugins Work
Type /plugin in the chat to open a menu of available plugins.
Select the plugin you want to enable (e.g., DeepSeek-Memories for remembering across conversations).
Follow the prompts to configure or activate it.
Categories of Plugins
Here are some example plugins:
Memory Persistence:
/plugin DeepSeek-Memories: Enables me to remember information across conversations.
File Management:
/plugin DeepSeek-Files: Handle file uploads, downloads, and storage.
Enhanced Features:
/plugin DeepSeek-Premium: Unlock advanced features like PDF generation or image creation.
But when I tried it, it didn't work for me | 2025-02-05T02:33:46 | https://www.reddit.com/r/LocalLLaMA/comments/1ihzraf/deepseek_r1_plugins/ | GeneralCan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihzraf | false | null | t3_1ihzraf | /r/LocalLLaMA/comments/1ihzraf/deepseek_r1_plugins/ | false | false | self | 0 | null |
Is Kiwi on lmsys arena Grok3? | 0 | I am actually more curious if this is Kiwi (Qwen) or Grok3?
https://preview.redd.it/1o7q6abhf8he1.png?width=1105&format=png&auto=webp&s=1f8834563e7559e95d5703e65fa46a75f9f5fe1a
https://preview.redd.it/4t4qyabhf8he1.png?width=1071&format=png&auto=webp&s=2c61fbb1c3b43894465a3323d0523cd5a2e0b606
| 2025-02-05T02:33:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ihzrbv/is_kiwi_on_lmsys_arena_grok3/ | sunshinecheung | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihzrbv | false | null | t3_1ihzrbv | /r/LocalLLaMA/comments/1ihzrbv/is_kiwi_on_lmsys_arena_grok3/ | false | false | 0 | null |
|
Suggestions please:M4 Pro or Nvidia GPU? | 1 | [removed] | 2025-02-05T02:35:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ihzsfu/suggestions_pleasem4_pro_or_nvidia_gpu/ | SeaworthinessLow1987 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihzsfu | false | null | t3_1ihzsfu | /r/LocalLLaMA/comments/1ihzsfu/suggestions_pleasem4_pro_or_nvidia_gpu/ | false | false | self | 1 | null |
Suggestions please:M4 Pro or Nvidia GPU? | 1 | [removed] | 2025-02-05T02:37:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ihztwg/suggestions_pleasem4_pro_or_nvidia_gpu/ | SeaworthinessLow1987 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihztwg | false | null | t3_1ihztwg | /r/LocalLLaMA/comments/1ihztwg/suggestions_pleasem4_pro_or_nvidia_gpu/ | false | false | self | 1 | null |
What models and hardware are you using? Overwhelmed with where to start | 1 | [removed] | 2025-02-05T02:45:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ihzzh3/what_models_and_hardware_are_you_using/ | DanAE112 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihzzh3 | false | null | t3_1ihzzh3 | /r/LocalLLaMA/comments/1ihzzh3/what_models_and_hardware_are_you_using/ | false | false | self | 1 | null |
Are there any companies interested in LLM unlearning? | 1 | [removed] | 2025-02-05T02:47:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ii015j/are_there_any_companies_interested_in_llm/ | East_Turnover_1652 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii015j | false | null | t3_1ii015j | /r/LocalLLaMA/comments/1ii015j/are_there_any_companies_interested_in_llm/ | false | false | self | 1 | null |
Is there any company interested in LLM unlearning? | 1 | [removed] | 2025-02-05T02:57:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ii08am/is_there_any_company_interested_in_llm_unlearning/ | East_Turnover_1652 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii08am | false | null | t3_1ii08am | /r/LocalLLaMA/comments/1ii08am/is_there_any_company_interested_in_llm_unlearning/ | false | false | self | 1 | null |
Llama-3.3 and Qwen2.5 speed comparisons on a 4-GPU / 120GB VRAM system | 49 | I did a couple of speed tests with Llama-3.3 70B Instruct, Qwen-2.5 72B Instruct, and Qwen2.5 Coder 32B Instruct where I asked each of them to "write a flappy bird game in Python that will run on a MacBook". For this test I didn't care about code quality or results, I just wanted the model to output approximately 1k tokens of code, for which the task of writing flappy bird is perfect.
The only data I was really interested in comparing was raw prompt processing speed and inference/generation speed. I figured some of the folks round here might be curious about the numbers, so here they are.
**Hardware setup**
* Supermicro M12SWA-TF motherboard
* AMD Ryzen Threadripper Pro 3945WX
* 128GB DDR4 RAM
* NVMe SSDs
* 1x EVGA RTX 3090 Ti 24GB
* 1x Pny RTX A6000 48GB
* 2x EVGA RTX 3090 FTW3 24GB
* EVGA 2000W PSU running on dedicated 240V/20A 60Hz (USA)
* All GPUs throttled at 250W
**Software setup**
* Ubuntu Linux
* tabbyAPI / exllamav2 (8bpw exl2 quants unless otherwise noted)
* tensor parallel enabled
* speculative decoding (draft mode) enabled
* context lengths for Llama and Qwen are empirically the ceiling of what I can fit in available VRAM (120GB)
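For reference, here is a rough sketch of one way to reproduce this kind of measurement against tabbyAPI's OpenAI-compatible endpoint (the port, API key, and model name are assumptions; adjust them to your setup):

```python
import time
import requests

URL = "http://localhost:5000/v1/chat/completions"  # assumed tabbyAPI port
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder key

payload = {
    "model": "Qwen2.5-72B-Instruct-exl2_8.0bpw",  # whichever model is currently loaded
    "messages": [{"role": "user", "content": "write a flappy bird game in Python that will run on a MacBook"}],
    "max_tokens": 1024,
    "stream": False,
}

start = time.time()
r = requests.post(URL, headers=HEADERS, json=payload, timeout=600)
elapsed = time.time() - start

usage = r.json()["usage"]
# Rough end-to-end rate; prompt processing and generation are lumped together here,
# unlike the separate Process/Generate numbers reported below.
print(f"{usage['completion_tokens']} tokens in {elapsed:.1f}s "
      f"= {usage['completion_tokens'] / elapsed:.1f} tok/s")
```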
**LLama-3.3 70B Instruct with 3B draft model**
* Draft Model: Llama-3.2-3B-Instruct-exl2_8.obpw
* Main Model: Llama-3.3-70B-Instruct-exl2_8.0bpw
* Context Size: 108,544 tokens
* Cache Mode: FP16
* **Process**: 44.12 T/s
* **Generate**: 30.89 T/s
**Qwen-2.5 72B Instruct with 3B draft model**
* Draft Model: Qwen2.5-3B-Instruct-exl2_8.0bpw
* Main Model: Qwen2.5-72B-Instruct-exl2_8.0bpw
* Context Size: 128,512
* Cache Mode: FP16
* **Process**: 97.83 T/s
* **Generate**: 37.93 T/s
**Qwen-2.5 Coder 32B Instruct with 1.5B draft model**
* Draft Model: Qwen2.5-Coder-1.5B-Instruct-exl2_8bpw (6 head bits)
* Main Model: Qwen2.5-Coder-32B-Instruct-exl2_8bpw (6 head bits)
* Context Size: 32,768
* Cache Mode: FP16
* **Process**: 246.16 T/s
* **Generate**: 65.24 T/s
**Qwen-2.5 Coder 32B Instruct with 3B draft model**
* Draft Model: Qwen2.5-Coder-3B-Instruct-exl2_8bpw (6 head bits)
* Main Model: Qwen2.5-Coder-32B-Instruct-exl2_8bpw (6 head bits)
* Context Size: 32,768
* Cache Mode: FP16
* **Process**: 201.84 T/s
* **Generate**: 57.08 T/s
I find it interesting that Qwen 72B was faster than Llama 70B by a whopping 7 tokens/sec despite each model using a 3B draft model, Qwen being 2B parameters larger, and Qwen having an extra 20k tokens of context. My guess is that the output of the smaller Qwen model is more closely matched to its larger counterpart than the Llama models, which therefore boosts the speed of speculative decoding... but I'm just pulling guesses out of my butt. What do you think? | 2025-02-05T03:11:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ii0i9b/llama33_and_qwen25_speed_comparisons_on_a_4gpu/ | __JockY__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii0i9b | false | null | t3_1ii0i9b | /r/LocalLLaMA/comments/1ii0i9b/llama33_and_qwen25_speed_comparisons_on_a_4gpu/ | false | false | self | 49 | null
L3.3-Damascus-R1 | 14 | Hello all! This is an updated and rehauled version of Nevoria-R1 and OG Nevoria, built using community feedback on several different experimental models (Experiment-Model-Ver-A, L3.3-Exp-Nevoria-R1-70b-v0.1 and L3.3-Exp-Nevoria-70b-v0.1). With that feedback I was able to dial in the merge settings for a new merge method called SCE and the new model configuration.
https://huggingface.co/Steelskull/L3.3-Damascus-R1 | 2025-02-05T03:16:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ii0llc/l33damascusr1/ | mentallyburnt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii0llc | false | null | t3_1ii0llc | /r/LocalLLaMA/comments/1ii0llc/l33damascusr1/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': '3Jk3kjUTfnZREBLbi-Cqb_4fZ6i55KlviCm0d4S7-r4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/L2xTYIAaVMHg9W8z1k0nAQZYIO-ITv3vkxQY7GAYHZY.jpg?width=108&crop=smart&auto=webp&s=14466072348e1199d301e412095899369b355859', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/L2xTYIAaVMHg9W8z1k0nAQZYIO-ITv3vkxQY7GAYHZY.jpg?width=216&crop=smart&auto=webp&s=570e8f456a15555d9951e927b01b5fdd7ca95717', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/L2xTYIAaVMHg9W8z1k0nAQZYIO-ITv3vkxQY7GAYHZY.jpg?width=320&crop=smart&auto=webp&s=b6f0259865515eeb937a1805e3faf30b2fb395e7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/L2xTYIAaVMHg9W8z1k0nAQZYIO-ITv3vkxQY7GAYHZY.jpg?width=640&crop=smart&auto=webp&s=d443172b54bf2188dfc4ba8f3164c7889d1ba4b5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/L2xTYIAaVMHg9W8z1k0nAQZYIO-ITv3vkxQY7GAYHZY.jpg?width=960&crop=smart&auto=webp&s=eaca6f5639b3bf87d1549657c80ecab13c84ddd6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/L2xTYIAaVMHg9W8z1k0nAQZYIO-ITv3vkxQY7GAYHZY.jpg?width=1080&crop=smart&auto=webp&s=d466b9a51c4ad3d0b8aeeac9c61b6547c5329468', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/L2xTYIAaVMHg9W8z1k0nAQZYIO-ITv3vkxQY7GAYHZY.jpg?auto=webp&s=fd6c995bfdf4336e21d1004b6a88c9b959d91ae8', 'width': 1200}, 'variants': {}}]} |
Getting the Most out of a Small Model (llama 8b). | 1 | [removed] | 2025-02-05T03:38:45 | https://www.reddit.com/r/LocalLLaMA/comments/1ii1113/getting_the_most_out_of_a_small_model_llama_8b/ | MiyamotoMusashi7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii1113 | false | null | t3_1ii1113 | /r/LocalLLaMA/comments/1ii1113/getting_the_most_out_of_a_small_model_llama_8b/ | false | false | self | 1 | null |
Do Gigabyte AI TOP motherboards support Linux well? | 4 | Sort of a tangential topic, but I figured I would ask here since this is the target audience of these motherboards. If you own one of these AI TOP motherboards can you confirm that they have good compatibility with Linux?
- I'm planning a new Intel 285k build and was originally going to choose the ASUS ROG MAXIMUS Z890 EXTREME, but I saw a post indicating that the [Thunderbolt 5 ports on the board were not working under Linux](https://rog-forum.asus.com/t5/gaming-motherboards/asus-z890-extreme-thunderbolt-5-issues-in-linux/td-p/1052427).
- My next choice was the newly launched [Z890 AORUS XTREME AI TOP](https://www.gigabyte.com/Motherboard/Z890-AORUS-XTREME-AI-TOP) motherboard.
- [Gigabyte's advertising for AI TOP products](https://www.gigabyte.com/consumer/ai-top/) always mentions their LLM tuning software, which runs on both Linux and Windows. I assume they wouldn't advertise Linux if they didn't at least test it on their hardware, right? My hope is that this is a roundabout way of saying their AI TOP products are designed to support Linux. | 2025-02-05T03:45:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ii15ha/do_gigabyte_ai_top_motherboards_support_linux_well/ | cinedog959 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii15ha | false | null | t3_1ii15ha | /r/LocalLLaMA/comments/1ii15ha/do_gigabyte_ai_top_motherboards_support_linux_well/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'QFsVvyT2M7pN8NbLYMT4Uq7gVU1Uo7sCIx4LBWhkkJw', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/XOdUwvmcloVUg6EHT3NSElF9J2oVQmbyaYJUvl3A9w4.jpg?width=108&crop=smart&auto=webp&s=a863a1b649bd975a27090e3da32aadecc0366f13', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/XOdUwvmcloVUg6EHT3NSElF9J2oVQmbyaYJUvl3A9w4.jpg?width=216&crop=smart&auto=webp&s=9b26e4168ffd8d487e5e6c278dd421eeb2dc0644', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/XOdUwvmcloVUg6EHT3NSElF9J2oVQmbyaYJUvl3A9w4.jpg?width=320&crop=smart&auto=webp&s=07a0c042c27a073d7fd744cf102f72e257c6065b', 'width': 320}], 'source': {'height': 510, 'url': 'https://external-preview.redd.it/XOdUwvmcloVUg6EHT3NSElF9J2oVQmbyaYJUvl3A9w4.jpg?auto=webp&s=fb64d67f3897ed213602f7e98bff00420bfafd85', 'width': 510}, 'variants': {}}]} |
This guy asks DeepSeek (the model) if it sends data to China ... | 1 | [removed] | 2025-02-05T03:47:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ii16od/this_guy_asks_deepseek_the_model_if_it_sends_data/ | hampy_chan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii16od | false | null | t3_1ii16od | /r/LocalLLaMA/comments/1ii16od/this_guy_asks_deepseek_the_model_if_it_sends_data/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'aksq56lhyEKGfUs0OWNFYm1qQUZ99KsiMqfCjiar29g', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/pMfBqPEkoBfEg6N7MTqSpItzZOcu_w_kEqjF_To489w.jpg?width=108&crop=smart&auto=webp&s=e4fd8fc4abb6db746a05051d8c9da3b919357049', 'width': 108}, {'height': 385, 'url': 'https://external-preview.redd.it/pMfBqPEkoBfEg6N7MTqSpItzZOcu_w_kEqjF_To489w.jpg?width=216&crop=smart&auto=webp&s=0172a86f2077d1485ada99ef316e1d311012eb6b', 'width': 216}, {'height': 571, 'url': 'https://external-preview.redd.it/pMfBqPEkoBfEg6N7MTqSpItzZOcu_w_kEqjF_To489w.jpg?width=320&crop=smart&auto=webp&s=d85021e37b66d8e45a401f8240c7de2e28558b5b', 'width': 320}, {'height': 1143, 'url': 'https://external-preview.redd.it/pMfBqPEkoBfEg6N7MTqSpItzZOcu_w_kEqjF_To489w.jpg?width=640&crop=smart&auto=webp&s=7addf998a2bfb41517ea59aab6eeba05d6a6b25e', 'width': 640}], 'source': {'height': 1286, 'url': 'https://external-preview.redd.it/pMfBqPEkoBfEg6N7MTqSpItzZOcu_w_kEqjF_To489w.jpg?auto=webp&s=bd5ef60e758c93f8ef523ad31a06f29a6733abcd', 'width': 720}, 'variants': {}}]} |
|
newbie here: I have llama.cpp working and RAG set up for my use, but don't know how to create a web UI for my Python script that does the actual queries | 3 | I've set up llama.cpp and have a working script that uses RAG against a database; alternatively I can run llama-server and see its built-in web UI. I'm wondering how I can create a web UI around my own script that uses my contextual database: a simple chat-style web UI that mimics what llama.cpp already provides, but hands entered queries off to my script and relays the responses back to the page. Is this simple? Sorry, I'm clueless in this area.
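If the query logic already lives in a Python function, one low-effort option is Gradio's built-in chat component, which wraps your own code in a llama-server-style chat page. A minimal sketch (answer_query is a placeholder for the existing RAG function):

```python
import gradio as gr

def answer_query(question: str) -> str:
    # Placeholder: call your existing RAG + llama.cpp code here and return the reply text.
    return "stub reply for: " + question

def chat(message, history):
    # Gradio passes the new message plus the chat history; only the message is needed here.
    return answer_query(message)

# Serves a chat UI at http://127.0.0.1:7860 by default.
gr.ChatInterface(fn=chat, title="My RAG chat").launch()
```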
| 2025-02-05T04:03:42 | https://www.reddit.com/r/LocalLLaMA/comments/1ii1hsw/newbie_here_i_have_llamacpp_working_and_rag_set/ | itsjustsosimple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii1hsw | false | null | t3_1ii1hsw | /r/LocalLLaMA/comments/1ii1hsw/newbie_here_i_have_llamacpp_working_and_rag_set/ | false | false | self | 3 | null |
What does this mean? | 1 | 2025-02-05T04:12:03 | NeutralMan_01 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ii1ncp | false | null | t3_1ii1ncp | /r/LocalLLaMA/comments/1ii1ncp/what_does_this_mean/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'pxjLNx3Qe3kN2sEJY6AWSyOxJyzdEJHd0dVrlVx1rZw', 'resolutions': [{'height': 106, 'url': 'https://preview.redd.it/wtfpwp63x8he1.png?width=108&crop=smart&auto=webp&s=c869670c226ea4e54e9b950e16151216a22b8910', 'width': 108}, {'height': 212, 'url': 'https://preview.redd.it/wtfpwp63x8he1.png?width=216&crop=smart&auto=webp&s=e5b029a7b4558eb32bb31de6b3e3a8ae4e73c62d', 'width': 216}, {'height': 314, 'url': 'https://preview.redd.it/wtfpwp63x8he1.png?width=320&crop=smart&auto=webp&s=49c2ddf5c57df582eb341d6bacdf6b7bca1c385f', 'width': 320}, {'height': 629, 'url': 'https://preview.redd.it/wtfpwp63x8he1.png?width=640&crop=smart&auto=webp&s=d1224da8c03b347a3be557007a246e782b4af53a', 'width': 640}], 'source': {'height': 816, 'url': 'https://preview.redd.it/wtfpwp63x8he1.png?auto=webp&s=09bd395a04cbced3f90b6afb5c16c128098b3fd0', 'width': 830}, 'variants': {}}]} |
|||
What does this mean? Does this imply that it using the same architecture as OpenAI | 1 | 2025-02-05T04:16:09 | prince_of_pattikaad | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ii1q0j | false | null | t3_1ii1q0j | /r/LocalLLaMA/comments/1ii1q0j/what_does_this_mean_does_this_imply_that_it_using/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'EyrPWBK9susn6J09vzRFHi18kNHQViaUA98eHBEWffs', 'resolutions': [{'height': 106, 'url': 'https://preview.redd.it/acctf7owx8he1.png?width=108&crop=smart&auto=webp&s=9107fffaa718509af6e61b48eba550e83a4afbfb', 'width': 108}, {'height': 212, 'url': 'https://preview.redd.it/acctf7owx8he1.png?width=216&crop=smart&auto=webp&s=2f2fb526b55061e8c0a60105ef11b823fd38dd00', 'width': 216}, {'height': 314, 'url': 'https://preview.redd.it/acctf7owx8he1.png?width=320&crop=smart&auto=webp&s=904eec32ced06aea0c3fa63500505ddf21db8cdc', 'width': 320}, {'height': 629, 'url': 'https://preview.redd.it/acctf7owx8he1.png?width=640&crop=smart&auto=webp&s=8e1ea1173917e1669912dc80ae28bf49c19fdec8', 'width': 640}], 'source': {'height': 816, 'url': 'https://preview.redd.it/acctf7owx8he1.png?auto=webp&s=bea0fc663bea777b5298098691a10f92176f45f4', 'width': 830}, 'variants': {}}]} |
|||
If you are waiting for that cheap cable or even a P40 for your LLM build, don't hold your breath. The USPS has stopped all shipments from China and Hong Kong. | 138 | In accordance with Trump orders, the USPS has stopped shipments coming from China and HK. The China Post to USPS gateway is how much of this stuff is shipped.
Even after it starts back up. Expect to pay more since these small shipments are no longer exempt from tariffs. If it wasn't paid by the shipper, it'll be held at customs until you can pay the duty.
https://www.cbsnews.com/news/usps-suspends-packages-china-hong-kong/ | 2025-02-05T04:18:57 | https://www.reddit.com/r/LocalLLaMA/comments/1ii1rti/if_you_are_waiting_for_that_cheap_cable_or_even_a/ | fallingdowndizzyvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii1rti | false | null | t3_1ii1rti | /r/LocalLLaMA/comments/1ii1rti/if_you_are_waiting_for_that_cheap_cable_or_even_a/ | false | false | self | 138 | {'enabled': False, 'images': [{'id': 'fy_-uqoosdlS2k0D9_5uojwpwhrh_OdKT7hv_O4ccbQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/5HN7DI2eBSDq_WZqiEdS1ybmWFK3M3xQeFxP8g1bh7A.jpg?width=108&crop=smart&auto=webp&s=6085d77ac9ac919a3a182b8d3af834afbb9c06e5', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/5HN7DI2eBSDq_WZqiEdS1ybmWFK3M3xQeFxP8g1bh7A.jpg?width=216&crop=smart&auto=webp&s=809d552dc4c0163806863a030cf17ea72f3a2c8f', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/5HN7DI2eBSDq_WZqiEdS1ybmWFK3M3xQeFxP8g1bh7A.jpg?width=320&crop=smart&auto=webp&s=b73c03c057af1907276b6a256e311677744940cf', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/5HN7DI2eBSDq_WZqiEdS1ybmWFK3M3xQeFxP8g1bh7A.jpg?width=640&crop=smart&auto=webp&s=1c132a4ad7bc882299c9ddc26ff31a947873c670', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/5HN7DI2eBSDq_WZqiEdS1ybmWFK3M3xQeFxP8g1bh7A.jpg?width=960&crop=smart&auto=webp&s=fb929fe65aded0f3296db6090efd7e796bb19626', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/5HN7DI2eBSDq_WZqiEdS1ybmWFK3M3xQeFxP8g1bh7A.jpg?width=1080&crop=smart&auto=webp&s=5cdf69edeaf0ab45cd0172def92c63af84ce72dd', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/5HN7DI2eBSDq_WZqiEdS1ybmWFK3M3xQeFxP8g1bh7A.jpg?auto=webp&s=c71e6c32236a4c95ec59b3085dce78c06ea8fd82', 'width': 1200}, 'variants': {}}]} |
Re-evaluating my Suzy data layout - Thanks deepseek | 3 | 2025-02-05T04:55:48 | https://www.reddit.com/r/LocalLLaMA/comments/1ii2eqm/reevaluating_my_suzy_data_layout_thanks_deepseek/ | Alienanthony | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii2eqm | false | null | t3_1ii2eqm | /r/LocalLLaMA/comments/1ii2eqm/reevaluating_my_suzy_data_layout_thanks_deepseek/ | false | false | 3 | null |
||
Can larger contexts make the LLM's job increasingly harder? | 3 | Even if I keep my context within the window, can a larger context make the LLM's job harder? I'm asking because
1) I watched Dylan and Nathan on Lex Podcast and they talked about how memory use increases exponentially with number of tokens
2) an AI agent in my company is having trouble following everything in the system prompt, and sometimes it becomes just Artificial Stupidity; imo the system prompt is part of the problem. It's 14k tokens in size. Even if that's not the problem, it's still unnecessarily large, and I'm thinking that if we cut it in half, it would indeed improve the agent. It uses GPT-4o, which has a window of 128k, if I'm not mistaken
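For a rough sense of scale on the memory side (back-of-the-envelope; GPT-4o's architecture isn't public, so this uses Llama-3-70B-style dimensions as a stand-in):

```python
layers, kv_heads, head_dim = 80, 8, 128   # Llama-3-70B-style GQA config (assumption)
bytes_per_value = 2                        # fp16 cache
tokens = 14_000                            # the system prompt alone

kv_bytes = 2 * layers * kv_heads * head_dim * bytes_per_value * tokens  # K and V caches
print(f"{kv_bytes / 1e9:.1f} GB of KV cache just for the system prompt")  # ~4.6 GB
```

KV memory grows linearly per token while attention compute grows roughly quadratically, so a 14k-token prompt is not free even well inside the 128k window.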
Basically, I'm saying a larger context will be bad for the LLM's reasoning, unless you really really need that context | 2025-02-05T05:05:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ii2kv8/can_larger_contexts_make_the_llms_job/ | Blender-Fan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii2kv8 | false | null | t3_1ii2kv8 | /r/LocalLLaMA/comments/1ii2kv8/can_larger_contexts_make_the_llms_job/ | false | false | self | 3 | null
‘Faking’ Remote 9-5 with DeepSeek LLM. | 1 | [removed] | 2025-02-05T05:09:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ii2nfb/faking_remote_95_with_deepseek_llm/ | HeyDontSkipLegDay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii2nfb | false | null | t3_1ii2nfb | /r/LocalLLaMA/comments/1ii2nfb/faking_remote_95_with_deepseek_llm/ | false | false | self | 1 | null |
ops | 0 | 2025-02-05T05:28:40 | Wrong_User_Logged | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ii2yrz | false | null | t3_1ii2yrz | /r/LocalLLaMA/comments/1ii2yrz/ops/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'jUn3ritZYuikq7v9_Atsd4tUFClchMuoU_QA_hyjLmg', 'resolutions': [{'height': 168, 'url': 'https://preview.redd.it/xj1k7b2ua9he1.jpeg?width=108&crop=smart&auto=webp&s=ab408c9067dd7e8a72769081515d0841bd5714c4', 'width': 108}, {'height': 337, 'url': 'https://preview.redd.it/xj1k7b2ua9he1.jpeg?width=216&crop=smart&auto=webp&s=d9120c61bc486ccb481b3ebfba1ae0eae0607d18', 'width': 216}, {'height': 500, 'url': 'https://preview.redd.it/xj1k7b2ua9he1.jpeg?width=320&crop=smart&auto=webp&s=2b3edbb6906901c5e2c8cbd9d4f4fda49cafddea', 'width': 320}], 'source': {'height': 782, 'url': 'https://preview.redd.it/xj1k7b2ua9he1.jpeg?auto=webp&s=bf92320ad0311b8fafba577be84a832980b19103', 'width': 500}, 'variants': {}}]} |
|||
Local TTS models question mark inflection | 1 | [removed] | 2025-02-05T05:41:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ii35xs/local_tts_models_question_mark_inflection/ | RastislavKish | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii35xs | false | null | t3_1ii35xs | /r/LocalLLaMA/comments/1ii35xs/local_tts_models_question_mark_inflection/ | false | false | self | 1 | null |
Do you have to redo fine tuning if you upgrade LLMs? | 1 | [removed] | 2025-02-05T06:02:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ii3htu/do_you_have_to_redo_fine_tuning_if_you_upgrade/ | Apita2000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii3htu | false | null | t3_1ii3htu | /r/LocalLLaMA/comments/1ii3htu/do_you_have_to_redo_fine_tuning_if_you_upgrade/ | false | false | self | 1 | null |
How does multi gpu inference in llama.cpp work ? | 1 | [removed] | 2025-02-05T06:04:45 | https://www.reddit.com/r/LocalLLaMA/comments/1ii3j6u/how_does_multi_gpu_inference_in_llamacpp_work/ | Fun_Sky6649 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii3j6u | false | null | t3_1ii3j6u | /r/LocalLLaMA/comments/1ii3j6u/how_does_multi_gpu_inference_in_llamacpp_work/ | false | false | self | 1 | null |
Google Lifts a Ban on Using Its AI for Weapons and Surveillance | 544 | 2025-02-05T06:18:38 | https://www.wired.com/story/google-responsible-ai-principles/ | ab2377 | wired.com | 1970-01-01T00:00:00 | 0 | {} | 1ii3qvv | false | null | t3_1ii3qvv | /r/LocalLLaMA/comments/1ii3qvv/google_lifts_a_ban_on_using_its_ai_for_weapons/ | false | false | 544 | {'enabled': False, 'images': [{'id': 'N1d-WRbFy6c-2LcDq1nIOgDw_HbxgINzWUv2CJPM34E', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/NrA5s-vSHIcN9SaUL0ETUoJ_dGVpcnD0UsdffV5wGi8.jpg?width=108&crop=smart&auto=webp&s=a1520c481bb2536639f6d3cf9562c6d3182c016a', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/NrA5s-vSHIcN9SaUL0ETUoJ_dGVpcnD0UsdffV5wGi8.jpg?width=216&crop=smart&auto=webp&s=9ff1dfe2a205181ebb37e6234a1b59b563f703b8', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/NrA5s-vSHIcN9SaUL0ETUoJ_dGVpcnD0UsdffV5wGi8.jpg?width=320&crop=smart&auto=webp&s=cae7f87474a18c91afc88ea42a759edbcf69956f', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/NrA5s-vSHIcN9SaUL0ETUoJ_dGVpcnD0UsdffV5wGi8.jpg?width=640&crop=smart&auto=webp&s=afb92e2820ff849c88d693e6467404d7df633be8', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/NrA5s-vSHIcN9SaUL0ETUoJ_dGVpcnD0UsdffV5wGi8.jpg?width=960&crop=smart&auto=webp&s=3e238196c8080b41276f1836d8d0524937dc768e', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/NrA5s-vSHIcN9SaUL0ETUoJ_dGVpcnD0UsdffV5wGi8.jpg?width=1080&crop=smart&auto=webp&s=7d3dbea9344f80c05869cdb5953597acd362d223', 'width': 1080}], 'source': {'height': 670, 'url': 'https://external-preview.redd.it/NrA5s-vSHIcN9SaUL0ETUoJ_dGVpcnD0UsdffV5wGi8.jpg?auto=webp&s=59f16aa74f8b8fb97287773eab2d72ce56d67b61', 'width': 1280}, 'variants': {}}]} |
||
Prompt Targets now powered by LoRA of Arch-Function. 2M parameter model designed for intent detection and task routing. | 27 | The base model can be found here: https://huggingface.co/katanemo/Arch-Function-3B
And we have yet to publish the LoRA because we're beta-testing it and want more practical real-world scenarios validated. But in our internal benchmarks we were able to show it's comparable to SOTA while being negligible in cost and leaps and bounds faster. So we integrated it in Arch - the intelligent edge and LLM proxy for agentic apps - and hosted it. Would love for folks to try it so that we can release it in a few weeks.
More on prompt targets: https://docs.archgw.com/concepts/prompt_target.html
More on the project: https://github.com/katanemo/archgw | 2025-02-05T06:33:23 | AdditionalWeb107 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ii3yz2 | false | null | t3_1ii3yz2 | /r/LocalLLaMA/comments/1ii3yz2/prompt_targets_now_powered_by_lora_of/ | false | false | 27 | {'enabled': True, 'images': [{'id': '1oYoSWUOIHOAS7sNO3z1Z6pBULxSKZ2nRQU75Q6KVzg', 'resolutions': [{'height': 158, 'url': 'https://preview.redd.it/bxtskf7em9he1.jpeg?width=108&crop=smart&auto=webp&s=e3abc0a43adb5fc980b51971c6142f1eeafdb9f6', 'width': 108}, {'height': 317, 'url': 'https://preview.redd.it/bxtskf7em9he1.jpeg?width=216&crop=smart&auto=webp&s=c36405d84d522ad78d0a04efd635636a3ceae7e5', 'width': 216}, {'height': 471, 'url': 'https://preview.redd.it/bxtskf7em9he1.jpeg?width=320&crop=smart&auto=webp&s=a318ddb79e10c3a2855d4bf806735b03bd689358', 'width': 320}, {'height': 942, 'url': 'https://preview.redd.it/bxtskf7em9he1.jpeg?width=640&crop=smart&auto=webp&s=565ebb1694849b7e6d120b14c4949f34ca56a7e2', 'width': 640}, {'height': 1413, 'url': 'https://preview.redd.it/bxtskf7em9he1.jpeg?width=960&crop=smart&auto=webp&s=ea73312201b4aa64f68324f0988df8b4d7105cb0', 'width': 960}, {'height': 1589, 'url': 'https://preview.redd.it/bxtskf7em9he1.jpeg?width=1080&crop=smart&auto=webp&s=f43c2f41610bae1f72d5a46b662468eec8494fe1', 'width': 1080}], 'source': {'height': 1899, 'url': 'https://preview.redd.it/bxtskf7em9he1.jpeg?auto=webp&s=4cf26d8c271b52cb7a433913e4e16c5d7e5afe89', 'width': 1290}, 'variants': {}}]} |
||
Concerns over Stargate AI Project's $720 Billion Funding | 0 | The Stargate AI project, spearheaded by President Donald Trump, has generated both excitement and skepticism. The project, aiming to secure a staggering $500 billion in funding for U.S. AI infrastructure, faces significant hurdles in its journey to secure this immense capital. The key partners—OpenAI, SoftBank, and Oracle—promote the venture as an opportunity to create hundreds of thousands of jobs and maintain the U.S.’s lead in AI. However, there are doubts about whether the projected funding can be raised and whether the initiative is truly groundbreaking or merely an extension of existing AI investments.
Critics like Dario Amodei, CEO of Anthropic, question whether the ambitious goals of the Stargate project can be met, particularly when much of the capital and details remain vague. Experts speculate that SoftBank may seek additional investments through asset sales or partnerships with Middle Eastern funds. Ultimately, the success of the project depends on overcoming financial uncertainty and its ability to deliver on the promises made.
[Explore more on the challenges and potential of the Stargate project here!](https://equifi.blogspot.com/2025/02/can-stargate-ai-project-secure-record.html) | 2025-02-05T06:45:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ii45db/concerns_over_stargate_ai_projects_720_billion/ | Large-Ad8031 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii45db | false | null | t3_1ii45db | /r/LocalLLaMA/comments/1ii45db/concerns_over_stargate_ai_projects_720_billion/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'o9Lc2Qvdbcathatic-_PxZw4UleYSmPpUXbDD9SwgPc', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/KlJMRbqfDrhCdlOvI7Jg74FNYJjQFrXmJGCeHubHw6k.jpg?width=108&crop=smart&auto=webp&s=58c6ac035eec66ebb71b75ee40603cf9dd5bce3c', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/KlJMRbqfDrhCdlOvI7Jg74FNYJjQFrXmJGCeHubHw6k.jpg?width=216&crop=smart&auto=webp&s=357a0da1f0a5ed32d555977b9987f2ace6596132', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/KlJMRbqfDrhCdlOvI7Jg74FNYJjQFrXmJGCeHubHw6k.jpg?width=320&crop=smart&auto=webp&s=df1c68368a3a4f80537042b8d9563f04d85c1586', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/KlJMRbqfDrhCdlOvI7Jg74FNYJjQFrXmJGCeHubHw6k.jpg?width=640&crop=smart&auto=webp&s=6c185aa4dbc617845a8c0fced2382ef526fef8a6', 'width': 640}], 'source': {'height': 566, 'url': 'https://external-preview.redd.it/KlJMRbqfDrhCdlOvI7Jg74FNYJjQFrXmJGCeHubHw6k.jpg?auto=webp&s=3975fb06931a4f542f37f4ad193104d8dfcaff6f', 'width': 850}, 'variants': {}}]} |
How to use Genai on top of Mongodb? | 1 | [removed] | 2025-02-05T06:49:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ii47it/how_to_use_genai_on_top_of_mongodb/ | One-Calligrapher1792 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii47it | false | null | t3_1ii47it | /r/LocalLLaMA/comments/1ii47it/how_to_use_genai_on_top_of_mongodb/ | false | false | self | 1 | null |
downloading large ollama models using download manager | 0 | Hi. I want to know whether it is possible to download Ollama models manually (with pause/resume support, like a download manager), so that I can download the file myself and then place it in the models folder. With the default method via the command prompt, the download keeps stopping after some percentage and I don't know why.
I am on Windows 11 with a decent PC build.
I was trying to download using this command:
ollama run deepseek-r1:32b
Is there any Hugging Face repo that I can download manually and place in Ollama's folder?
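One workaround, sketched below: pull the GGUF from Hugging Face with huggingface_hub (its downloads resume after interruption), then register the local file with Ollama via a Modelfile. The repo and filename here are examples; check the repo for the exact quant you want.

```python
import subprocess
from huggingface_hub import hf_hub_download

# Resumable download of a single GGUF file (example repo/quant; verify the exact names).
gguf_path = hf_hub_download(
    repo_id="bartowski/DeepSeek-R1-Distill-Qwen-32B-GGUF",
    filename="DeepSeek-R1-Distill-Qwen-32B-Q4_K_M.gguf",
)

# Point Ollama at the local file instead of letting it download the model itself.
with open("Modelfile", "w") as f:
    f.write(f"FROM {gguf_path}\n")

subprocess.run(["ollama", "create", "deepseek-r1-32b-local", "-f", "Modelfile"], check=True)
subprocess.run(["ollama", "run", "deepseek-r1-32b-local"], check=True)
```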
| 2025-02-05T06:54:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ii4a34/downloading_large_ollama_models_using_download/ | bkacademy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii4a34 | false | null | t3_1ii4a34 | /r/LocalLLaMA/comments/1ii4a34/downloading_large_ollama_models_using_download/ | false | false | self | 0 | null |
Is deepseek api down? | 1 | [removed] | 2025-02-05T07:04:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ii4f4c/is_deepseek_api_down/ | Public-Salary1289 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii4f4c | false | null | t3_1ii4f4c | /r/LocalLLaMA/comments/1ii4f4c/is_deepseek_api_down/ | false | false | self | 1 | null |
HuggingFace changes to PRO subscription Inference limits, should I switch providers now? | 4 | I recently picked up a PRO subscription for HuggingFace for the 20,000 Inference endpoint requests/day, but it seems like they changed the limits to just $2.00 worth of credits instead, which fills up very fast even on daily use.
My main usecase is simply increased limits so I could test out various models on HF before finalising on a model and self-hosting it for specific tasks, but this change significantly reduced the limits, so what am I even paying the $9 for? | 2025-02-05T07:22:05 | https://www.reddit.com/r/LocalLLaMA/comments/1ii4nst/huggingface_changes_to_pro_subscription_inference/ | Dixie9311 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii4nst | false | null | t3_1ii4nst | /r/LocalLLaMA/comments/1ii4nst/huggingface_changes_to_pro_subscription_inference/ | false | false | self | 4 | null |
As a noob, which language model could you suggest for me? | 1 | [removed] | 2025-02-05T07:35:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ii4uek/as_a_noob_which_language_model_could_you_suggest/ | Live-Pause-6543 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii4uek | false | null | t3_1ii4uek | /r/LocalLLaMA/comments/1ii4uek/as_a_noob_which_language_model_could_you_suggest/ | false | false | self | 1 | null |
500: Ollama: 500, message='Internal Server Error', url='http://host.docker.internal:11434/api/chat ' | 0 | Hello,
I'm relatively new to local models and AI. I just ran my first Docker setup with Ollama and it was working perfectly, but now I consistently get this error. Does anyone know how to resolve it?
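A quick way to narrow it down (a sketch; assumes the default port, with the web UI in Docker and Ollama on the host):

```python
import requests

# 1) From the host: does Ollama itself answer? A 200 with a model list means the server is up.
print(requests.get("http://localhost:11434/api/tags", timeout=10).status_code)

# 2) Run the same check from inside the container (via `docker exec`) against
#    http://host.docker.internal:11434/api/tags. If that fails on Linux, the container
#    usually needs `--add-host=host.docker.internal:host-gateway`, and Ollama may need
#    OLLAMA_HOST=0.0.0.0 so it listens on more than 127.0.0.1.

# 3) If the server is reachable but /api/chat still returns 500, the Ollama server log
#    normally shows the real cause (for example, the requested model failing to load);
#    `ollama list` on the host shows which models are actually available.
```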
Thank you | 2025-02-05T07:43:55 | https://www.reddit.com/r/LocalLLaMA/comments/1ii4yad/500_ollama_500_messageinternal_server_error/ | PCOwner12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii4yad | false | null | t3_1ii4yad | /r/LocalLLaMA/comments/1ii4yad/500_ollama_500_messageinternal_server_error/ | false | false | self | 0 | null |
What is this? | 1 | [removed] | 2025-02-05T08:05:45 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1ii58y2 | false | null | t3_1ii58y2 | /r/LocalLLaMA/comments/1ii58y2/what_is_this/ | false | false | default | 1 | null |
||
Resources for non quantizized fine-tuning. | 1 | [removed] | 2025-02-05T08:08:33 | https://www.reddit.com/r/LocalLLaMA/comments/1ii5a9o/resources_for_non_quantizized_finetuning/ | Koaleer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii5a9o | false | null | t3_1ii5a9o | /r/LocalLLaMA/comments/1ii5a9o/resources_for_non_quantizized_finetuning/ | false | false | self | 1 | null |
Found this while messing with the web version of DeepSeek. What do you guys think? | 1 | 2025-02-05T08:11:28 | NeutralMan_01 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ii5bme | false | null | t3_1ii5bme | /r/LocalLLaMA/comments/1ii5bme/found_this_while_messing_with_the_web_version_of/ | false | false | 1 | {'enabled': True, 'images': [{'id': '-ujxZ6UbMXZvMl-A9d3aLPfmeBXvniQzeI8JJNhRqZo', 'resolutions': [{'height': 105, 'url': 'https://preview.redd.it/mq4f5atl3ahe1.png?width=108&crop=smart&auto=webp&s=b3db885bc4cc2581a5a06b488bfc07f802a37119', 'width': 108}, {'height': 211, 'url': 'https://preview.redd.it/mq4f5atl3ahe1.png?width=216&crop=smart&auto=webp&s=13a8318b3a052cedec0da35923052da3feaf932d', 'width': 216}, {'height': 313, 'url': 'https://preview.redd.it/mq4f5atl3ahe1.png?width=320&crop=smart&auto=webp&s=6d07803c12bab21877eecb40428981398a96fb12', 'width': 320}, {'height': 626, 'url': 'https://preview.redd.it/mq4f5atl3ahe1.png?width=640&crop=smart&auto=webp&s=6befeb8f3bb1bf0f780f72d8bfcbdfdf63b02cf3', 'width': 640}, {'height': 940, 'url': 'https://preview.redd.it/mq4f5atl3ahe1.png?width=960&crop=smart&auto=webp&s=41acd52a8961af2803204f0d5f0dfa0c3414b8e4', 'width': 960}], 'source': {'height': 961, 'url': 'https://preview.redd.it/mq4f5atl3ahe1.png?auto=webp&s=d876bd18607066a94a8660bde27c1a1b92f6f602', 'width': 981}, 'variants': {}}]} |
|||
🔥 New 70B Finetune: Pernicious Prophecy 70B – A Merged Monster of Models! 🔥 | 2 | An intelligent fusion of:
💀 Negative_LLAMA_70B (SicariusSicariiStuff)
🔥 L3.1-70Blivion (invisietch)
⚡ EVA-LLaMA-3.33-70B (EVA-UNIT-01)
🧬 OpenBioLLM-70B (aaditya)
🔮 Forged through arcane **merges** and an eldritch finetune on top, this beast harnesses the **intelligence** and **unique capabilities** of the above models, further smoothed via the **SFT** phase to combine all their strengths, yet shed all the weaknesses.
Expect enhanced **reasoning**, excellent **roleplay**, and a disturbingly good ability to generate everything from cybernetic poetry to cursed prophecies and stories.
🔥 What makes Pernicious Prophecy 70B different?
⚔️ **Exceptional structured responses** with unparalleled **markdown** understanding.
👁️ **Unhinged creativity** – Great for roleplay, occult rants, and GPT-breaking meta.
🧠 **Multi-domain expertise** – Medical and scientific knowledge will enhance your roleplays and stories.
🌘 **Dark**, **negatively biased** and **uncensored**.
Included in the repo:
**Accursed Quill** - write down what you wish for, and behold how your wish becomes your demise 🩸
[under Pernicious_Prophecy_70B/Character_Cards]
📜 Give it a try, and let the prophecies flow.
(Also available on **Horde** for the next **24 hours**)
[https://huggingface.co/Black-Ink-Guild/Pernicious\_Prophecy\_70B](https://huggingface.co/Black-Ink-Guild/Pernicious_Prophecy_70B) | 2025-02-05T08:12:46 | https://www.reddit.com/r/LocalLLaMA/comments/1ii5c8v/new_70b_finetune_pernicious_prophecy_70b_a_merged/ | Sicarius_The_First | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii5c8v | false | null | t3_1ii5c8v | /r/LocalLLaMA/comments/1ii5c8v/new_70b_finetune_pernicious_prophecy_70b_a_merged/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'aAhjOvYYUncZTEpawVn50SBr4NRDMYixu1-mxTvzaOk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TsNW21bMr2Zni1SUc1wvLOngtIex7H9MCEAMNi3n5BQ.jpg?width=108&crop=smart&auto=webp&s=7d17fa6b75a2a5029db245b6249acd11406f17c5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/TsNW21bMr2Zni1SUc1wvLOngtIex7H9MCEAMNi3n5BQ.jpg?width=216&crop=smart&auto=webp&s=e3e12ae4a7296e7b08cae09edf71a1565ac8afe2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/TsNW21bMr2Zni1SUc1wvLOngtIex7H9MCEAMNi3n5BQ.jpg?width=320&crop=smart&auto=webp&s=d61d82ef888b7850bdddbc1d36a4d42c5888fb67', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/TsNW21bMr2Zni1SUc1wvLOngtIex7H9MCEAMNi3n5BQ.jpg?width=640&crop=smart&auto=webp&s=791cfd2377239bbfb363a733dc3496a505d13d19', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/TsNW21bMr2Zni1SUc1wvLOngtIex7H9MCEAMNi3n5BQ.jpg?width=960&crop=smart&auto=webp&s=9b81278d50a07c86f29243983c60db3fb3cd9a80', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/TsNW21bMr2Zni1SUc1wvLOngtIex7H9MCEAMNi3n5BQ.jpg?width=1080&crop=smart&auto=webp&s=7203ece417348a10921fe97637e8d45b3d53cb0a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/TsNW21bMr2Zni1SUc1wvLOngtIex7H9MCEAMNi3n5BQ.jpg?auto=webp&s=56b3e517c22d3c5c3a8d7fea68091bbc546e1f38', 'width': 1200}, 'variants': {}}]} |
Macbook M1 max 64gb - run stats for some of the current models using LMStudio | 0 |
Deepseek 32b qwen2 18.49gb ollama terminal
system ram: 34.95
12.68 t/s
12.63 t/s
Deepseek 32b qwen2 18.49gb lmstudio gguf
lmstudio ram: 20gb. system ram: 32.6gb. cpu: 7%
12.39 t/s
12.16 t/s
Qwen2.5 Coder Instruct 32b 17.36gb lmstudio gguf
lmstudio ram: 18.84. system ram: 33.8gb
15.15 t/s
15.13 t/s
deepseek coder v2 Lite Instruct mlx 16b 8.84gb lmstudio mlx
lmstudio ram: 8.91gb. system ram: 24.3gb
65.73 t/s
67.77 t/s
mlx-community/Mistral Nemo Instruct 2407 4bit 12b 6.43gb lmstudio mlx
lmstudio ram: 8.89gb. system ram: 21gb
39.79 t/s
39.69 t/s
Llama3.2 3b 1.69gb lmstudio mlx
lmstudio ram: 2.25gb. system ram: 14.8gb
102.13 t/s
104.78 t/s
all tests were run with all apps closed except Activity Monitor and a TextEdit instance.
I waited for the fans to stop between runs (this happens with the bigger models)
prompt:
`"make this better:`
`override init() {`
`super.init()`
`if let data = UserDefaults.standard.value(forKey: rootKey) as? Data {`
`do {`
`let stored = try JSONDecoder().decode(UserSettings.self, from: data)`
`mapTypeIndex = stored.mapTypeIndex`
`timeTravelStartDate = stored.timeTravelStartDate`
`timeTravelEndDate = stored.timeTravelEndDate`
`timeTravelIntervalType = stored.timeTravelIntervalType`
`showDayRectangles = stored.showDayRectangles`
`showCommute = stored.showCommute`
`showPrecisionBubbles = stored.showPrecisionBubbles`
`enableGradientColors = stored.enableGradientColors`
`allowLocationToAutoPause = stored.allowLocationToAutoPause`
`reloadDataInProgress = stored.reloadDataInProgress`
`} catch {`
`log.notice("Stored UserSettings data found, but cannot be parsed. will use defaults:\n \(error)")`
`}`
`} else {`
`log.notice("Stored UserSettings not found on init")`
`}`
`}"` | 2025-02-05T08:15:48 | https://www.reddit.com/r/LocalLLaMA/comments/1ii5dni/macbook_m1_max_64gb_run_stats_for_some_of_the/ | 9limits | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii5dni | false | null | t3_1ii5dni | /r/LocalLLaMA/comments/1ii5dni/macbook_m1_max_64gb_run_stats_for_some_of_the/ | false | false | self | 0 | null |
Is Phi finally any good? | 12 | I recently (out of boredom) installed phi4:14b. Gave it a few programming/devops tasks and the answers actually made sense. This had never happened with previous phi iterations. Phi2, 3, 3.5 were only good in benchmarks and spat out absolute bollocks for answers. So the question now is, is the model actually good or did they just train it on a dataset that makes sense for my use case but for everything else it's still dogsh*te? In my test tasks (Ansible with AWS, terraform) it wiped the floor with 14b-32b parameter sized competition and was roughly on par with qwen2.5-coder:14b. | 2025-02-05T08:32:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ii5li1/is_phi_finally_any_good/ | umataro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii5li1 | false | null | t3_1ii5li1 | /r/LocalLLaMA/comments/1ii5li1/is_phi_finally_any_good/ | false | false | self | 12 | null
Error Loading Model | 1 | Hi, I am running Win10 x64 with an i7-4790K, 24GB RAM & a GTX 970
I am repeatedly getting errors while trying to load r1-llama or deepseek v3.
It says:
"Failed to load last used model.
Error loading model.
(Exit code: 18446744072635812000). Unknown error. Try a different model and/or config." | 2025-02-05T08:40:48 | https://www.reddit.com/r/LocalLLaMA/comments/1ii5pfu/error_loading_model/ | Dyslexist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii5pfu | false | null | t3_1ii5pfu | /r/LocalLLaMA/comments/1ii5pfu/error_loading_model/ | false | false | self | 1 | null |
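For a rough sense of whether those models can fit on this machine, a back-of-the-envelope check (file sizes are approximate and assume roughly 4-bit quants):

```python
ram_gb, vram_gb = 24, 4          # system RAM and GTX 970 VRAM from the post
candidates = {
    "R1-Distill-Llama-8B Q4": 5,     # ~5 GB file, plus a couple GB for context/cache
    "R1-Distill-Llama-70B Q4": 40,   # ~40 GB file
    "DeepSeek-V3 671B Q4": 380,      # ~380 GB file
}
for name, size_gb in candidates.items():
    fits = size_gb + 2 < ram_gb + vram_gb  # crude: weights + a small context allowance
    print(f"{name}: ~{size_gb} GB -> {'might fit' if fits else 'will not fit'}")
```

So the 8B distill is about the only realistic option on this hardware, which would explain the load failure if one of the larger models was selected.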
𝐔 turn ; Sam says, people take his words without context ; open-Source trend humbles OpenAI CEO | 2 | 2025-02-05T08:41:39 | BidHot8598 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ii5ptn | false | null | t3_1ii5ptn | /r/LocalLLaMA/comments/1ii5ptn/𝐔_turn_sam_says_people_take_his_words_without/ | false | false | 2 | {'enabled': True, 'images': [{'id': 'TCBewSyZ964Qv3F6kIkIZ7dUWcMrNuNIGuLJDs1C7lI', 'resolutions': [{'height': 143, 'url': 'https://preview.redd.it/dba5qpa99ahe1.jpeg?width=108&crop=smart&auto=webp&s=4267106d9b68d46ae7d88ac8d8fc66532915a710', 'width': 108}, {'height': 287, 'url': 'https://preview.redd.it/dba5qpa99ahe1.jpeg?width=216&crop=smart&auto=webp&s=d91ceadb192decf9e5589a18741ebd79930792ed', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/dba5qpa99ahe1.jpeg?width=320&crop=smart&auto=webp&s=95de40ae2203f67680f3dc432738bb4a6f82906d', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/dba5qpa99ahe1.jpeg?width=640&crop=smart&auto=webp&s=518364a2fdc0e7f4168f4a7da2d80f27f81c511e', 'width': 640}, {'height': 1279, 'url': 'https://preview.redd.it/dba5qpa99ahe1.jpeg?width=960&crop=smart&auto=webp&s=a5fc9d8155cb7e9e9d07d5409015f4e0b0e37de2', 'width': 960}, {'height': 1439, 'url': 'https://preview.redd.it/dba5qpa99ahe1.jpeg?width=1080&crop=smart&auto=webp&s=87f2b12c53f7ad26b25ef7c71dbad8cfaf81f295', 'width': 1080}], 'source': {'height': 5461, 'url': 'https://preview.redd.it/dba5qpa99ahe1.jpeg?auto=webp&s=7b92b1e7b99eae6912b80d2ed0a9c9f4116dc33d', 'width': 4096}, 'variants': {}}]} |
|||
Best UI to run local deepseek models? | 3 | Been out of the loop for a few months...
I'm wondering if there are any more user-friendly UIs with decent functionality now. I have installed the Oobabooga web UI.
Also, is there anything better than deepseek for local? I have a 3090. | 2025-02-05T08:46:57 | https://www.reddit.com/r/LocalLLaMA/comments/1ii5sc1/best_ui_to_run_local_deepseek_models/ | Suimeileo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii5sc1 | false | null | t3_1ii5sc1 | /r/LocalLLaMA/comments/1ii5sc1/best_ui_to_run_local_deepseek_models/ | false | false | self | 3 | null
Interview with Deepseek CEO Liang | 63 | 2025-02-05T08:54:02 | https://youtu.be/HRsVZuEMlvI?si=2kgt1IFtaib8vstb | mesmerlord | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1ii5vi6 | false | {'oembed': {'author_name': 'Awakening Richard', 'author_url': 'https://www.youtube.com/@Awakening_Richard', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/HRsVZuEMlvI?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="DeepSeek CEO interview in English."></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/HRsVZuEMlvI/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'DeepSeek CEO interview in English.', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1ii5vi6 | /r/LocalLLaMA/comments/1ii5vi6/interview_with_deepseek_ceo_liang/ | false | false | 63 | {'enabled': False, 'images': [{'id': 'NpCbWm9EndpEPJK0dM6qDy-cmhXWcXUUnokwWV6hfyU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/65TzyKuMhkMpc_f2_BttXLzKse8GVSrynY3OE_u8jTY.jpg?width=108&crop=smart&auto=webp&s=a5be3659a37a1ca98c46c8ea92438cf76af85b5c', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/65TzyKuMhkMpc_f2_BttXLzKse8GVSrynY3OE_u8jTY.jpg?width=216&crop=smart&auto=webp&s=be443f089c77a313de61b457932c14c25132063a', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/65TzyKuMhkMpc_f2_BttXLzKse8GVSrynY3OE_u8jTY.jpg?width=320&crop=smart&auto=webp&s=b5133312a80c92d85bc650060674afac9dd7c50f', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/65TzyKuMhkMpc_f2_BttXLzKse8GVSrynY3OE_u8jTY.jpg?auto=webp&s=dbeaa8f48ab62719b4203cc23705961656a29ed5', 'width': 480}, 'variants': {}}]} |
||
Which mini pc for deepseeker-coder-33b? | 1 | [removed] | 2025-02-05T09:06:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ii617g/which_mini_pc_for_deepseekercoder33b/ | stanTheCodeMonkey | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii617g | false | null | t3_1ii617g | /r/LocalLLaMA/comments/1ii617g/which_mini_pc_for_deepseekercoder33b/ | false | false | self | 1 | null |
Running R1 in ollama. cpu bound with gpu at half power draw? | 0 | I’m running r1 2b on my epyc server with 24 cores, 24 gb gpu and 256gb ram. I get between 1-2 t/s but for some reason the cpu is always at 100% during inference and gpu drawing less than half power on ollama. Is ollama not using resources correctly? Is this due to R1 infrastructure? How to fix this? If I add another gpu would this increase my speed if the cpu is the bottleneck? | 2025-02-05T09:06:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ii61eh/running_r1_in_ollama_cpu_bound_with_gpu_at_half/ | davesmith001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii61eh | false | null | t3_1ii61eh | /r/LocalLLaMA/comments/1ii61eh/running_r1_in_ollama_cpu_bound_with_gpu_at_half/ | false | false | self | 0 | null |
Tracking global regulatory responses to DeepSeek | 5 | I have created a tracker that collates and tracks government / regulatory responses to DeepSeek around the world. Thought it would be interesting to visualize the regulatory and geopolitical trends happening in the AI world.
[https://www.note2map.com/share?deepseek\_regulation\_tracker](https://www.note2map.com/share?deepseek_regulation_tracker)
| 2025-02-05T09:11:19 | https://www.reddit.com/r/LocalLLaMA/comments/1ii63ko/tracking_global_regulatory_responses_to_deepseek/ | techie_ray | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii63ko | false | null | t3_1ii63ko | /r/LocalLLaMA/comments/1ii63ko/tracking_global_regulatory_responses_to_deepseek/ | false | false | self | 5 | null |
Where can I find comparisons of quantized/distilled models. The leaderboard always show only the "max" version. | 1 | [removed] | 2025-02-05T09:14:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ii6533/where_can_i_find_comparisons_of/ | Pristine-Yak-4242 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii6533 | false | null | t3_1ii6533 | /r/LocalLLaMA/comments/1ii6533/where_can_i_find_comparisons_of/ | false | false | self | 1 | null |