Function Calling using OctoAI Models

Introduction

OctoAI’s Chat Completions API supports function calling, as introduced by OpenAI. You describe the functions and arguments you want to make available to the model. This is a powerful technique that turns LLMs into “agents” that can take action within your application.

When the model makes a function call, it outputs a JSON object containing arguments for one or more of the tools that have been defined. After calling any invoked tools, you can provide their results back to the model in subsequent Chat Completions API calls.
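For illustration, here is a minimal sketch of what decoding such a tool call looks like. The payload values below are made up for the example, not real API output:

```python
import json

# Hypothetical function payload, shaped like one entry of
# response.choices[0].message.tool_calls in the Chat Completions API
tool_call_function = {
    "name": "get_flight_status",
    "arguments": '{"flight_number": "AA100", "date": "2024-06-17"}',
}

# The model returns arguments as a JSON-encoded string,
# so decode them before invoking your function
args = json.loads(tool_call_function["arguments"])
print(args["flight_number"], args["date"])  # AA100 2024-06-17
```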

Typically, the model autonomously determines which functions to use (the default, tool_choice="auto"). However, you can force a specific function with the tool_choice parameter. For instance, in the examples below, you can replace tool_choice="auto" with tool_choice={"type": "function", "function": {"name": "get_flight_status"}}.
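As a quick sketch, the two forms of tool_choice look like this (get_flight_status is the function defined in the example further below):

```python
# Default: the model decides whether to call a tool, and which one
tool_choice_auto = "auto"

# Forced: the model must call the named function on this turn
tool_choice_forced = {
    "type": "function",
    "function": {"name": "get_flight_status"},
}
```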

In the example below, we’ll use the OpenAI SDK, but override the base URL and model arguments to call OctoAI’s Chat Completions API.

Supported Models

  • mistral-7b-instruct
  • hermes-2-pro-llama-3-8b
  • meta-llama-3-8b-instruct
  • meta-llama-3-70b-instruct
  • qwen2-7b-instruct

Function Calling Example: Flight Status in Python

In the Python example below, we:

  • Define the get_flight_status function.
  • Create a list of tools to specify the function available to the model.
  • Define the messages for the conversation.
  • Create a chat completion request with the model, messages, and tools.
  • Process the tool calls returned by the model and get the function’s output.
  • Append the function’s response to the messages and generate the final enriched response.
```python
import os
import json
import openai

# Initialize the OpenAI client with OctoAI's base URL and your API key
client = openai.OpenAI(
    base_url="https://text.octoai.run/v1",
    api_key=os.environ["OCTOAI_API_KEY"],
)

# Set the model that you want to use
model = "hermes-2-pro-llama-3-8b"

# Mock function to simulate getting flight status
def get_flight_status(flight_number, date):
    return json.dumps({"flight_number": flight_number, "status": "On Time", "date": date})

# Define the function and its parameters to be available for the model
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_flight_status",
            "description": "Get the current status of a flight",
            "parameters": {
                "type": "object",
                "properties": {
                    "flight_number": {
                        "type": "string",
                        "description": "The flight number, e.g., AA100"
                    },
                    "date": {
                        "type": "string",
                        "format": "date",
                        "description": "The date of the flight, e.g., 2024-06-17"
                    }
                },
                "required": ["flight_number", "date"]
            }
        }
    }
]

# Initial conversation setup with the system and user roles
messages = [
    {"role": "system", "content": "You are a helpful assistant that can help with flight information and status."},
    {"role": "user", "content": "I have a flight booked for tomorrow with American Airlines, flight number AA100. Can you check its status for me?"}
]

# Create a chat completion request with the model, messages, and the tools available to the model
response = client.chat.completions.create(
    model=model,
    messages=messages,
    tools=tools,
    tool_choice="auto",
    temperature=0,
)

# Extract the agent's response from the API response
agent_response = response.choices[0].message

# Process any tool calls made by the model
tool_calls = agent_response.tool_calls
if tool_calls:
    # Append the response from the model to keep state in the conversation
    messages.append(
        {
            "role": agent_response.role,
            "content": "",
            "tool_calls": [tool_call.model_dump() for tool_call in tool_calls],
        }
    )

    for tool_call in tool_calls:
        function_name = tool_call.function.name
        function_args = json.loads(tool_call.function.arguments)

        # Look up the function by name and call it to get the response
        function_response = locals()[function_name](**function_args)

        # Add the function response to the messages block
        messages.append(
            {
                "tool_call_id": tool_call.id,
                "role": "tool",
                "name": function_name,
                "content": function_response,
            }
        )

    # Pass the updated messages to the model to get the final enriched response
    function_enriched_response = client.chat.completions.create(
        model=model,
        messages=messages,
    )
    print(json.dumps(function_enriched_response.choices[0].message.model_dump(), indent=2))
```

Output

```json
{
  "role": "assistant",
  "content": "The current status of American Airlines flight AA100 on 2024-06-17 is On Time."
}
```

And there you have it!

In this tutorial, we explored how to use OctoAI’s Chat Completions API to integrate function calling into a chatbot scenario. We demonstrated setting up the OpenAI client with OctoAI’s endpoint, defining a function (get_flight_status), and configuring the chatbot to handle and process function calls dynamically. By following these steps, you can extend this approach to additional functions and enhance the capabilities of your chatbot applications using the scalable, low-latency, low-cost endpoints offered by OctoAI.