[FEEDBACK] Local apps
Please share your feedback about the Local Apps integration in model pages.
On compatible models, you'll be offered the option to launch supported local apps:
In your settings, you can configure the list of apps and their order:
The list of available local apps is defined in https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/local-apps.ts
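For anyone proposing a new app there, each entry in that file roughly follows a shape like this simplified sketch. It is illustrative only: the "my-app" name and the exact fields are assumptions, so check the actual LocalApp type in the linked file before opening a PR.

// Simplified, hypothetical local-apps.ts entry; see the LocalApp type in the
// linked file for the authoritative field names.
"my-app": {
  prettyLabel: "My App",
  docsUrl: "https://example.com/docs",
  mainTask: "text-generation",
  // Only surface the app on models it can actually run (e.g. GGUF weights).
  displayOnModelPage: (model) => model.tags.includes("gguf"),
  // Apps expose either a deeplink into the app or a copyable snippet.
  deeplink: (model) => new URL(`myapp://open?model=${model.id}`),
},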
I think the tensor-core FP16 FLOPS should be used for GPUs that support them. I note that the V100 is listed with far less than its theoretical 125 TFLOPS, quoted e.g. here: https://images.nvidia.com/content/technologies/volta/pdf/tesla-volta-v100-datasheet-letter-fnl-web.pdf
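For reference, the datasheet figure checks out from first principles. A back-of-the-envelope sketch, using the V100 SXM2 specs (640 tensor cores, ~1530 MHz boost clock, 64 FMAs = 128 FLOPs per tensor core per cycle):

// Theoretical V100 tensor-core FP16 throughput from datasheet specs.
const tensorCores = 640;          // V100 (SXM2)
const boostClockHz = 1.53e9;      // ~1530 MHz boost clock
const flopsPerCorePerCycle = 128; // 64 FMAs = 128 FLOPs per tensor core per cycle
const peakTflops = (tensorCores * boostClockHz * flopsPerCorePerCycle) / 1e12;
console.log(peakTflops.toFixed(1)); // ~125.3, matching the quoted 125 TFLOPS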
Hey! Have you guys heard of LangFlow? It is a neat solution for developing AI-powered apps as well!
The GPU list is missing the RTX A4000 (16GB)
Would be nice to get Ollama integration.
I suggest adding Ollama as a local app to run LLMs.
I use GPT4All and it is not listed here.
Ollama
Local app to run LLMs
https://github.com/ollama/ollama
transformerlab-app
Open Source Application for Advanced LLM Engineering: interact, train, fine-tune, and evaluate large language models on your own computer.
https://github.com/transformerlab/transformerlab-app
Perplexica
Perplexica is an AI-powered search engine. It is an open-source alternative to Perplexity AI.
https://github.com/ItzCrazyKns/Perplexica
Maybe HuggingChat could be added in the future?
HuggingChat macOS is a native chat interface designed specifically for macOS users, leveraging the power of open-source language models. It brings the capabilities of advanced AI conversation right to your desktop, offering a seamless and intuitive experience.
Missing from the Hardware lists:
GPU: Nvidia RTX 4070 Laptop (8 GB VRAM)
CPU: Intel Core Ultra 7 (14th generation)
Hi @tkowalsky , would you like to open a PR? :) Here's another one you can use as an example to get started, if you're up for it: https://github.com/huggingface/huggingface.js/pull/880/files
Missing from the Hardware lists:
GPU: Nvidia RTX 2060 Super (8 GB VRAM)
@alarianb would you be able to open a PR on https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/hardware.ts?
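In case it helps, a minimal sketch of what such an addition might look like. The TFLOPS value and the memory field here are assumptions and should be verified against the file's existing entries before opening the PR:

// Hypothetical hardware.ts entry for the RTX 2060 Super.
"RTX 2060 Super": {
  tflops: 14.36, // commonly quoted peak FP16 figure; confirm which precision the file expects
  memory: [8],   // 8 GB VRAM
},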
Not sure how to add my CPU.
It's this:
https://www.intel.com/content/www/us/en/products/sku/241062/intel-core-ultra-7-processor-265kf-30m-cache-up-to-5-50-ghz/specifications.html
I can't see any FLOPS advertised... it seems like Intel likes to hide this now.
I looked around for tools, trying perf and python-papi, to no avail.
AIDA64 on Windows measures 1531 double-precision GFLOPS... is that a value we can use?
- That looks about right against the theoretical 20 cores * 1 thread * 5200 MHz clock * 16 FLOPS/cycle ≈ 1.66 TFLOPS (see the sanity-check sketch after the PR link below).
Any other recommendations for how to fill this?
I'll raise a PR with this to see if it's useful, or to gather alternative suggestions:
"Intel Core Ultra 7 265KF": {
  tflops: 1.53,
},
https://github.com/huggingface/huggingface.js/pull/1329
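For what it's worth, a quick sanity check of the theoretical figure above. This is a sketch under the stated assumptions; real chips mix P- and E-cores at different clocks, so it is only an upper bound:

// Back-of-the-envelope peak FP64 throughput from the figures quoted above.
const cores = 20;
const clockHz = 5.2e9;    // 5200 MHz, assuming an all-core boost
const flopsPerCycle = 16; // e.g. two 256-bit FMA units: 2 x 4 doubles x 2 ops
const peakTflops = (cores * clockHz * flopsPerCycle) / 1e12;
console.log(peakTflops.toFixed(2)); // ≈ 1.66, near AIDA64's measured 1.53 TFLOPS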
When using LM Studio, MLX is supported. Navigating to "Browse compatible models" should show MLX as an active filter.
What about KoboldCpp?
Since transformers supports GGUF, I'd love to see transformers as a library option for GGUF files. In particular, here
My niche NVIDIA Quadro RTX 8000 (48 GB) wise tortoise could use a spot please, and Intel is up to at least 14th gen now. Thanks!
There's an open issue about ComfyUI in https://github.com/huggingface/huggingface.js, if I'm not mistaken!
Surprised not to see GPT4All at this point?