```python
from transformers import AutoModelForCausalLM
gemma = AutoModelForCausalLM.from_pretrained("google/gemma-7b", torch_dtype="auto")
```

You can also set the data type to use for models instantiated from scratch.
```python
import torch
from transformers import AutoConfig, AutoModel
my_config = AutoConfig.from_pretrained("google/gemma-2b", torch_dtype=torch.float16)
model = AutoModel.from_config(my_config)
```
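As a sketch of the from-scratch path, the same idea with a hypothetical tiny GPT-2 config (so no hub download is needed): the dtype is stored on the config object itself, since `torch_dtype` is a common attribute accepted by every config class.

```python
import torch
from transformers import AutoModel, GPT2Config

# Hypothetical tiny config; torch_dtype can be passed to any config class
config = GPT2Config(n_layer=2, n_head=2, n_embd=64, torch_dtype=torch.float16)
model = AutoModel.from_config(config)
print(config.torch_dtype)
print(model.dtype)
```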