kalkey 
posted an update Mar 20
We are using Hugging Face Pro, but all of a sudden we are getting the error below: "Error: The model meta-llama/Llama-3.2-11B-Vision-Instruct is too large to be loaded automatically (21GB > 10GB)." Need help, please.

It's the default behavior for normal models... @victor
[Attached screenshot: llama32v.png]


So how do I resolve this issue?

I need to know whether my Pro access would help, because it's not working with hf-inference anymore. Is this a provider issue or a model issue?

Do I need to change providers?


I'm trying this with a Pro subscription too, so it's not a Pro-subscription issue. As you say, it does work if you use a different inference provider.
However, only HF staff can tell whether the current state is a bug or intended behavior.
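If switching providers unblocks you, a minimal sketch of how to route the request through a different inference provider looks like this. It assumes a recent `huggingface_hub` (0.28 or later, which added the `provider` argument to `InferenceClient`), a valid token in the `HF_TOKEN` environment variable, and a hypothetical image URL; the provider name `"together"` is just an example, not a recommendation.

```python
# Sketch: call a vision-instruct model through an alternative inference
# provider instead of the default "hf-inference" backend.
# Assumes huggingface_hub >= 0.28 and an HF token in HF_TOKEN.
import os

MODEL_ID = "meta-llama/Llama-3.2-11B-Vision-Instruct"

def build_messages(image_url: str, question: str) -> list[dict]:
    """Build an OpenAI-style multimodal chat payload for a vision model."""
    return [{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": image_url}},
            {"type": "text", "text": question},
        ],
    }]

def ask(image_url: str, question: str, provider: str = "together") -> str:
    # Imported lazily so the helper above can be used without the package.
    from huggingface_hub import InferenceClient

    client = InferenceClient(provider=provider, api_key=os.environ["HF_TOKEN"])
    response = client.chat.completions.create(
        model=MODEL_ID,
        messages=build_messages(image_url, question),
        max_tokens=128,
    )
    return response.choices[0].message.content
```

If a provider-routed call succeeds while the default `hf-inference` backend still returns the "too large to be loaded automatically" error, that points at the hf-inference deployment rather than your subscription.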

For reporting Hub problems:

https://github.com/huggingface/hub-docs/issues
[email protected]