Next, let's define a list of tools:

```python
def get_current_temperature(location: str, unit: str) -> float:
    """
    Get the current temperature at a location.

    Args:
        location: The location to get the temperature for, in the format "City, Country"
        unit: The unit to return the temperature in. (choices: ["celsius", "fahrenheit"])

    Returns:
        The current temperature at the specified location in the specified units, as a float.
    """
    return 22.  # A real function should probably actually get the temperature!

def get_current_wind_speed(location: str) -> float:
    """
    Get the current wind speed in km/h at a given location.

    Args:
        location: The location to get the wind speed for, in the format "City, Country"

    Returns:
        The current wind speed at the given location in km/h, as a float.
    """
    return 6.  # A real function should probably actually get the wind speed!

tools = [get_current_temperature, get_current_wind_speed]
```
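Under the hood, the functions are converted into JSON schemas by parsing their signatures and docstrings before being passed to the chat template. A simplified, self-contained sketch of that conversion (the real implementation, `transformers.utils.get_json_schema`, also parses `Args:` sections, choices, and more):

```python
import inspect

def simple_json_schema(func):
    """Build a rough JSON-schema-style description of a function.

    Simplified sketch only: it maps annotated parameter types to JSON types
    and uses the first docstring line as the description.
    """
    sig = inspect.signature(func)
    type_names = {str: "string", int: "integer", float: "number", bool: "boolean"}
    properties = {
        name: {"type": type_names.get(param.annotation, "string")}
        for name, param in sig.parameters.items()
    }
    return {
        "name": func.__name__,
        "description": inspect.getdoc(func).split("\n")[0],
        "parameters": {"type": "object", "properties": properties, "required": list(properties)},
    }

def get_current_temperature(location: str, unit: str) -> float:
    """Get the current temperature at a location."""
    return 22.

schema = simple_json_schema(get_current_temperature)
print(schema["name"])                      # get_current_temperature
print(schema["parameters"]["properties"])  # {'location': {'type': 'string'}, 'unit': {'type': 'string'}}
```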
Now, let's set up a conversation for our bot:

```python
messages = [
    {"role": "system", "content": "You are a bot that responds to weather queries. You should reply with the unit used in the queried location."},
    {"role": "user", "content": "Hey, what's the temperature in Paris right now?"}
]
```
Now, let's apply the chat template and generate a response:

```python
inputs = tokenizer.apply_chat_template(messages, chat_template="tool_use", tools=tools, add_generation_prompt=True, return_dict=True, return_tensors="pt")
inputs = {k: v.to(model.device) for k, v in inputs.items()}
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][len(inputs["input_ids"][0]):]))
```
And we get:

```text
<tool_call>
{"arguments": {"location": "Paris, France", "unit": "celsius"}, "name": "get_current_temperature"}
</tool_call><|im_end|>
```
The model has called the function with valid arguments, in the format requested by the function docstring. It has
inferred that we're most likely referring to the Paris in France, and it remembered that, as the home of SI units,
the temperature in France should certainly be displayed in Celsius.
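If you want to act on the tool call programmatically rather than reading it off, the JSON payload between the `<tool_call>` tags can be extracted and parsed directly. A minimal sketch, assuming the Hermes-style output format shown above (other models may use different tool-call markers):

```python
import json
import re

# The decoded generation from the example above
generated = '<tool_call>\n{"arguments": {"location": "Paris, France", "unit": "celsius"}, "name": "get_current_temperature"}\n</tool_call><|im_end|>'

# Pull out the JSON object between the <tool_call> tags
match = re.search(r"<tool_call>\s*(\{.*\})\s*</tool_call>", generated, re.DOTALL)
tool_call = json.loads(match.group(1))

print(tool_call["name"])       # get_current_temperature
print(tool_call["arguments"])  # {'location': 'Paris, France', 'unit': 'celsius'}
```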
Let's append the model's tool call to the conversation. Note that we generate a random `tool_call_id` here. These IDs
are not used by all models, but they allow models to issue multiple tool calls at once and keep track of which response
corresponds to which call. You can generate them any way you like, but they should be unique within each chat.
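Any generation scheme works as long as IDs don't repeat within a chat; one simple option is a short random hex string:

```python
import uuid

# A short random hex string is plenty to keep tool calls distinguishable
tool_call_id = uuid.uuid4().hex[:8]
print(tool_call_id)  # e.g. 'a3f1c9d2'
```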
```python
tool_call_id = "vAHdf3"  # Random ID, should be unique for each tool call
tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France", "unit": "celsius"}}
messages.append({"role": "assistant", "tool_calls": [{"id": tool_call_id, "type": "function", "function": tool_call}]})
```
Now that we've added the tool call to the conversation, we can call the function and append the result to the
conversation. Since we're just using a dummy function for this example that always returns 22.0, we can just append
that result directly. Again, note the `tool_call_id` - this should match the ID used in the tool call above.
```python
messages.append({"role": "tool", "tool_call_id": tool_call_id, "name": "get_current_temperature", "content": "22.0"})
```
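With real tools, rather than hard-coding the result, you would typically dispatch on the tool name and serialize the return value yourself. A minimal sketch, reusing the dummy functions and the parsed `tool_call` dict from the example above:

```python
import json

def get_current_temperature(location: str, unit: str) -> float:
    return 22.  # dummy implementation, as above

def get_current_wind_speed(location: str) -> float:
    return 6.  # dummy implementation, as above

# Map tool names to the actual Python callables
tool_registry = {
    "get_current_temperature": get_current_temperature,
    "get_current_wind_speed": get_current_wind_speed,
}

tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France", "unit": "celsius"}}

# Look up the function by name and call it with the model-supplied arguments
result = tool_registry[tool_call["name"]](**tool_call["arguments"])
tool_message = {
    "role": "tool",
    "tool_call_id": "vAHdf3",
    "name": tool_call["name"],
    "content": json.dumps(result),
}
print(tool_message["content"])  # 22.0
```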
Finally, let's let the assistant read the function outputs and continue chatting with the user:

```python
inputs = tokenizer.apply_chat_template(messages, chat_template="tool_use", tools=tools, add_generation_prompt=True, return_dict=True, return_tensors="pt")
inputs = {k: v.to(model.device) for k, v in inputs.items()}
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][len(inputs["input_ids"][0]):]))
```
And we get:

```text
The current temperature in Paris, France is 22.0 ° Celsius.<|im_end|>
```
Although this was a simple demo with dummy tools and a single call, the same technique works with
multiple real tools and longer conversations. This can be a powerful way to extend the capabilities of conversational
agents with real-time information, computational tools like calculators, or access to large databases.