Update README.md
README.md CHANGED
@@ -21,13 +21,13 @@ The model determines when to execute functions, whether in parallel or serially,

## How to Get Started

-We provide custom code for
+We provide custom code for parsing raw model responses into a JSON object containing `role`, `content` and `tool_calls` fields. This enables users to read the function-calling output of the model easily.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

-tokenizer = AutoTokenizer.from_pretrained("meetkai/functionary-medium-v3.0"
+tokenizer = AutoTokenizer.from_pretrained("meetkai/functionary-medium-v3.0")
model = AutoModelForCausalLM.from_pretrained(
    "meetkai/functionary-medium-v3.0",
    device_map="auto",
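
# Note (illustration, not part of this commit): the next hunk's header shows it
# starts inside a `tools = [ ... ]` definition whose contents the diff does not
# display. A hypothetical `tools` list, written in the OpenAI-style
# function-calling schema that transformers' apply_chat_template accepts, could
# look like this; the get_current_weather function is assumed, not taken from
# the README.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",  # hypothetical function matching the weather example below
            "description": "Get the current weather for a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city to look up, e.g. Istanbul",
                    }
                },
                "required": ["location"],
            },
        },
    }
]
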
@@ -57,7 +57,6 @@ tools = [
messages = [{"role": "user", "content": "What is the weather in Istanbul and Singapore respectively?"}]

final_prompt = tokenizer.apply_chat_template(messages, tools, add_generation_prompt=True, tokenize=False)
-tokenizer.padding_side = "left"
inputs = tokenizer(final_prompt, return_tensors="pt").to("cuda")
pred = model.generate_tool_use(**inputs, max_new_tokens=128, tokenizer=tokenizer)
print(tokenizer.decode(pred.cpu()[0]))
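
# Note (illustration, not part of this commit): the paragraph added in the first
# hunk says the repo's custom code parses the raw generation above into a JSON
# object with `role`, `content` and `tool_calls` fields. The exact layout is not
# shown in this diff; an assumed, hypothetical shape for a parsed response with
# two tool calls might be:
parsed_response = {
    "role": "assistant",
    "content": None,  # would hold plain text if the model answered without calling tools
    "tool_calls": [
        {"name": "get_current_weather", "arguments": {"location": "Istanbul"}},
        {"name": "get_current_weather", "arguments": {"location": "Singapore"}},
    ],
}
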
@@ -67,7 +66,7 @@ print(tokenizer.decode(pred.cpu()[0]))

We convert function definitions to text similar to TypeScript definitions. Then we inject these definitions as system prompts. After that, we inject the default system prompt, followed by the conversation messages.

-This formatting is also available via our vLLM server which we process the functions into Typescript definitions encapsulated in a system message
+This formatting is also available via our vLLM server, where we process the functions into TypeScript definitions encapsulated in a system message using a pre-defined Transformers Jinja chat template. This means that lists of messages can be formatted for you with the apply_chat_template() method within our server:

```python
from openai import OpenAI
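
The unchanged paragraph in the hunk above describes how function definitions end up in the prompt: they are converted to TypeScript-like text and injected as system messages ahead of the conversation. One way to see the result is simply to print the string returned by apply_chat_template in the earlier snippet; this is only a usage note, not part of the diff.

```python
# Print the fully formatted prompt to inspect the injected TypeScript-style
# definitions and the default system prompt ahead of the conversation.
final_prompt = tokenizer.apply_chat_template(
    messages, tools, add_generation_prompt=True, tokenize=False
)
print(final_prompt)
```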
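
The final hunk stops right after `from openai import OpenAI`, where the README's vLLM-server example begins. A rough sketch of how such a client call presumably continues, assuming a functionary vLLM server is already running locally; the base URL, port and API key below are placeholders, not taken from this commit.

```python
from openai import OpenAI

# Placeholder endpoint and key for a locally running functionary vLLM server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="functionary")

response = client.chat.completions.create(
    model="meetkai/functionary-medium-v3.0",
    messages=[{"role": "user", "content": "What is the weather in Istanbul and Singapore respectively?"}],
    tools=tools,  # the same OpenAI-style tool definitions sketched earlier
    tool_choice="auto",
)

# The server applies the chat template described above and returns OpenAI-style tool calls.
print(response.choices[0].message.tool_calls)
```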