Update README.md
README.md
To load and run plutus-8B-instruct, you can use the Hugging Face Transformers library. For example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "TheFinAI/plutus-8B-instruct"

# Load the tokenizer and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Greek for "Please give me an analysis of the economy"
# (the original prompt was truncated; its ending is completed here so the example runs).
prompt = "Παρακαλώ δώσε μου ανάλυση για την οικονομία"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Additional details for running with mixed precision, LoRA configuration, or int4 quantization can be found in the training documentation.
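
For orientation, here is a minimal sketch of what those configurations might look like using Transformers' `BitsAndBytesConfig` (backed by bitsandbytes) and a PEFT `LoraConfig`. The specific values below (bfloat16 compute dtype, NF4 quantization, LoRA rank/alpha, and the Llama-style `q_proj`/`v_proj` target modules) are illustrative assumptions, not the settings documented for plutus-8B-instruct; consult the training documentation for the authoritative configuration.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_name = "TheFinAI/plutus-8B-instruct"

# Mixed precision: load the weights directly in bfloat16.
model_bf16 = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# int4 quantization via bitsandbytes (NF4 and compute dtype are assumed values).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model_int4 = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapters via PEFT (rank, alpha, dropout, and target modules are assumed).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model_lora = get_peft_model(model_int4, lora_config)
```

Attaching LoRA adapters to the int4-quantized base, as in the last step, is the usual QLoRA-style recipe for fine-tuning large models on limited hardware.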