Implement function calling with the Granite-3.0-8B-Instruct model in Python with watsonx
Authors: Erika Russi, Anna Gutowska, Jess Bozorg
In this tutorial, you will use the IBM® Granite-3.0-8B-Instruct model now available on watsonx.ai™ to perform custom function calling.
Traditional large language models (LLMs), like the OpenAI GPT-4 (generative pre-trained transformer) model available through ChatGPT and the IBM Granite™ models that we'll use in this tutorial, are limited in their knowledge and reasoning. They produce responses based on the data they were trained on and are difficult to adapt to personalized user queries. To obtain missing information, these generative AI models can integrate external tools through function calling. This method is one way to avoid fine-tuning a foundation model for each specific use case. The function calling examples in this tutorial will implement external API calls.
The Granite-3.0-8B-Instruct model and tokenizer use natural language processing (NLP) to parse query syntax. The model also uses the provided function descriptions and function parameters to determine the appropriate tool calls. Key information is then extracted from user queries and passed as function arguments.
Steps
Step 1. Set up your environment
While you can choose from several tools, this tutorial walks you through how to set up an IBM account to use a Jupyter Notebook.
Log in to watsonx.ai using your IBM Cloud account.
Create a watsonx.ai project.
You can get your project ID from within your project. Click the Manage tab. Then, copy the project ID from the Details section of the General page. You need this ID for this tutorial.
Create a Jupyter Notebook.
This step opens a notebook environment where you can copy the code from this tutorial. Alternatively, you can download this notebook to your local system and upload it to your watsonx.ai project as an asset in Step 2. To view more Granite tutorials, check out the IBM Granite Community. This Jupyter Notebook is also available on GitHub.
Step 2. Set up watsonx.ai Runtime service and API key
Create a watsonx.ai Runtime service instance (choose the Lite plan, which is a free instance).
Generate an API Key.
Associate the watsonx.ai Runtime service to the project you created in watsonx.ai.
Step 3. Install and import relevant libraries and set up your credentials
We'll need a few libraries and modules for this tutorial. Make sure to import the following ones; if they're not installed, you can resolve this with a quick pip install. If you are running this tutorial locally, we recommend setting up a virtual environment to avoid Python package dependency conflicts.
# installations
%pip install -q transformers
%pip install -q torch torchvision
%pip install -q langchain-ibm
# imports
import requests
import ast
import re
import getpass
from transformers import AutoTokenizer
from transformers.utils import get_json_schema
from langchain_ibm import WatsonxLLM
Next, we can prepare our environment by setting the model ID for the granite-3-8b-instruct model and loading the tokenizer for the same Granite model. Upon running the following cell, input the WATSONX_APIKEY and WATSONX_PROJECT_ID that you created in steps 1 and 2.
MODEL_ID = "ibm/granite-3-8b-instruct"
TOKENIZER = AutoTokenizer.from_pretrained("ibm-granite/granite-3.0-8b-instruct")
WATSONX_URL = "https://us-south.ml.cloud.ibm.com"
WATSONX_APIKEY = getpass.getpass("Please enter your watsonx.ai Runtime API key (hit enter): ")
WATSONX_PROJECT_ID = getpass.getpass("Please enter your project ID (hit enter): ")
The get_stock_price function in this tutorial requires an AV_STOCK_API_KEY. To generate a free AV_STOCK_API_KEY, please visit the Alpha Vantage website.
The get_current_weather function requires a WEATHER_API_KEY. To generate one, please create an OpenWeather account. Upon creating an account, select the "API Keys" tab to display your free key.
AV_STOCK_API_KEY = getpass.getpass("Please enter your AV_STOCK_API_KEY (hit enter): ")
WEATHER_API_KEY = getpass.getpass("Please enter your WEATHER_API_KEY (hit enter): ")
Step 4. Define the functions
We can now define our functions. Each function's docstring and type hints are important because they are used to generate the proper tool information for the model.
In this tutorial, the get_stock_price function uses the Stock Market Data API available through Alpha Vantage.
def get_stock_price(ticker: str, date: str) -> dict:
    """
    Retrieves the lowest and highest stock prices for a given ticker and date.

    Args:
        ticker: The stock ticker symbol, e.g., "IBM".
        date: The date in "YYYY-MM-DD" format for which you want to get stock prices.

    Returns:
        A dictionary containing the low and high stock prices on the given date.
    """
    print(f"Getting stock price for {ticker} on {date}")
    try:
        stock_url = f"https://www.alphavantage.co/query?function=TIME_SERIES_DAILY&symbol={ticker}&apikey={AV_STOCK_API_KEY}"
        stock_data = requests.get(stock_url)
        stock_low = stock_data.json()["Time Series (Daily)"][date]["3. low"]
        stock_high = stock_data.json()["Time Series (Daily)"][date]["2. high"]
        return {
            "low": stock_low,
            "high": stock_high
        }
    except Exception as e:
        print(f"Error fetching stock data: {e}")
        return {
            "low": "none",
            "high": "none"
        }
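As an optional sanity check (assuming your AV_STOCK_API_KEY is valid), you can call the function directly before involving the model. The date below is the one used later in this tutorial; the exact prices returned depend on Alpha Vantage's data:

# Optional: call the function directly to confirm the API key works
get_stock_price("IBM", "2024-10-07")
# Expected shape: {'low': '...', 'high': '...'}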
The get_current_weather function retrieves the real-time weather in a given location using the Current Weather Data API via OpenWeather.
def get_current_weather(location: str) -> dict:
    """
    Fetches the current weather for a given location (default: San Francisco).

    Args:
        location: The name of the city for which to retrieve the weather information.

    Returns:
        A dictionary containing weather information such as temperature, weather description, and humidity.
    """
    print(f"Getting current weather for {location}")
    try:
        # API request to fetch weather data
        weather_url = f"http://api.openweathermap.org/data/2.5/weather?q={location}&appid={WEATHER_API_KEY}&units=metric"
        weather_data = requests.get(weather_url)
        data = weather_data.json()
        # Extracting relevant weather details
        weather_description = data["weather"][0]["description"]
        temperature = data["main"]["temp"]
        humidity = data["main"]["humidity"]
        # Returning weather details
        return {
            "description": weather_description,
            "temperature": temperature,
            "humidity": humidity
        }
    except Exception as e:
        print(f"Error fetching weather data: {e}")
        return {
            "description": "none",
            "temperature": "none",
            "humidity": "none"
        }
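The same kind of optional direct call works here, assuming a valid WEATHER_API_KEY:

# Optional: call the function directly to confirm the API key works
get_current_weather("San Francisco")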
Step 5. Set up the API request
Now that our functions are defined, we can create a helper function that sends the provided instructions to the watsonx API endpoint and returns the model's response. We will use this function each time we make a request.
def make_api_request(instructions: str) -> str:
    model_parameters = {
        "decoding_method": "greedy",
        "max_new_tokens": 200,
        "repetition_penalty": 1.05,
        "stop_sequences": [TOKENIZER.eos_token]
    }
    model = WatsonxLLM(
        model_id=MODEL_ID,
        url=WATSONX_URL,
        apikey=WATSONX_APIKEY,
        project_id=WATSONX_PROJECT_ID,
        params=model_parameters
    )
    response = model.invoke(instructions)
    return response
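Before moving on, you can optionally confirm that your credentials, project ID, and model ID are wired up correctly by sending a trivial prompt through the helper. This quick check is our own suggestion, and the exact reply will vary:

# Optional smoke test: any response confirms connectivity to watsonx.ai
make_api_request("Say hello in one short sentence.")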
Next, we can create a list of available tools. Here, get_json_schema generates our function definitions, which include the function names, descriptions, parameters, and required properties, from each function's signature and docstring.
tools = [get_json_schema(tool) for tool in (get_stock_price, get_current_weather)]
tools
[{'type': 'function', 'function': {'name': 'get_stock_price', 'description': 'Retrieves the lowest and highest stock prices for a given ticker and date.', 'parameters': {'type': 'object', 'properties': {'ticker': {'type': 'string', 'description': 'The stock ticker symbol, e.g., "IBM".'}, 'date': {'type': 'string', 'description': 'The date in "YYYY-MM-DD" format for which you want to get stock prices.'}}, 'required': ['ticker', 'date']}, 'return': {'type': 'object', 'description': 'A dictionary containing the low and high stock prices on the given date.'}}}, {'type': 'function', 'function': {'name': 'get_current_weather', 'description': 'Fetches the current weather for a given location (default: San Francisco).', 'parameters': {'type': 'object', 'properties': {'location': {'type': 'string', 'description': 'The name of the city for which to retrieve the weather information.'}}, 'required': ['location']}, 'return': {'type': 'object', 'description': 'A dictionary containing weather information such as temperature, weather description, and humidity.'}}}]
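The schema prints as a single line, which is hard to scan. If you want a more readable view while developing, you can optionally pretty-print it:

import json
print(json.dumps(tools, indent=2))  # indented, multi-line view of the same schemas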
Step 6. Perform function calling
Step 6a. Calling the get_stock_price function
To prepare for the API requests, we must set the query used in the tokenizer chat template.
query = "What were the IBM stock prices on October 7, 2024?"
Applying a chat template is useful for breaking up long strings of text into one or more messages with corresponding labels. This allows the LLM to process the input in a format that it expects. Because we want our output in string form, we can set the tokenize parameter to False. The add_generation_prompt parameter can be set to True to append the tokens indicating the beginning of an assistant message to the output. This will be useful when generating chat completions with the model.
conversation = [
    {"role": "system", "content": "You are a helpful assistant with access to the following function calls. Your task is to produce a list of function calls necessary to generate response to the user utterance. Use the following function calls as required."},
    {"role": "user", "content": query},
]
instruction_1 = TOKENIZER.apply_chat_template(conversation=conversation, tools=tools, tokenize=False, add_generation_prompt=True)
instruction_1
'<|start_of_role|>available_tools<|end_of_role|>\n{\n "type": "function",\n "function": {\n "name": "get_stock_price",\n "description": "Retrieves the lowest and highest stock prices for a given ticker and date.",\n "parameters": {\n "type": "object",\n "properties": {\n "ticker": {\n "type": "string",\n "description": "The stock ticker symbol, e.g., \\"IBM\\"."\n },\n "date": {\n "type": "string",\n "description": "The date in \\"YYYY-MM-DD\\" format for which you want to get stock prices."\n }\n },\n "required": [\n "ticker",\n "date"\n ]\n },\n "return": {\n "type": "object",\n "description": "A dictionary containing the low and high stock prices on the given date."\n }\n }\n}\n\n{\n "type": "function",\n "function": {\n "name": "get_current_weather",\n "description": "Fetches the current weather for a given location (default: San Francisco).",\n "parameters": {\n "type": "object",\n "properties": {\n "location": {\n "type": "string",\n "description": "The name of the city for which to retrieve the weather information."\n }\n },\n "required": [\n "location"\n ]\n },\n "return": {\n "type": "object",\n "description": "A dictionary containing weather information such as temperature, weather description, and humidity."\n }\n }\n}<|end_of_text|>\n<|start_of_role|>system<|end_of_role|>You are a helpful assistant with access to the following function calls. Your task is to produce a list of function calls necessary to generate response to the user utterance. Use the following function calls as required.<|end_of_text|>\n<|start_of_role|>user<|end_of_role|>What were the IBM stock prices on October 7, 2024?<|end_of_text|>\n<|start_of_role|>assistant<|end_of_role|>'
Now, we can call the make_api_request function and pass the instructions we generated.
data_1 = make_api_request(instruction_1)
data_1
'[{"name": "get_stock_price", "arguments": {"ticker": "IBM", "date": "2024-10-07"}}]'
As you can see from the function name in the JSON object produced by the model, the appropriate get_stock_price tool was selected from the set of available functions. To run the API call within the function, let's extract the relevant information from the output. With the function name and arguments extracted, we can call the function. To call the function using its name as a string, we can use the globals() function.
def tool_call(llm_response: str):
    # Extract the first JSON object from the model's response and parse it
    tool_request = ast.literal_eval(re.search("({.+})", llm_response).group(0))
    tool_name = tool_request["name"]
    tool_arguments = tool_request["arguments"]
    # Look up the function by name and call it with the generated arguments
    tool_response = globals()[tool_name](**tool_arguments)
    return tool_response
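Note that tool_call parses only the first JSON object in the model's output, which is sufficient for the single-call responses in this tutorial. Because the system prompt asks the model for a list of function calls, a response could in principle contain several. Here is a minimal sketch of a variant that handles that case, assuming the model returns a well-formed JSON array (the helper name tool_call_all is our own):

import json

def tool_call_all(llm_response: str) -> list:
    """Run every function call in the model's JSON array and collect the results."""
    tool_requests = json.loads(re.search(r"(\[.+\])", llm_response, re.DOTALL).group(0))
    # Look up each requested function by name and call it with the generated arguments
    return [globals()[request["name"]](**request["arguments"]) for request in tool_requests]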
Get the response from the requested tool.
tool_response = tool_call(data_1)
tool_response
Getting stock price for IBM on 2024-10-07
{'low': '225.0200', 'high': '227.6700'}
The function successfully retrieved the requested stock price. To generate a synthesized final response, we can pass another prompt to the Granite model along with the information collected from function calling.
conversation2 = conversation + [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Display the tool response in natural language."},
    {"role": "tool_response", "content": str(tool_response)},
]
instruction_2 = TOKENIZER.apply_chat_template(conversation=conversation2, tools=tools, tokenize=False, add_generation_prompt=True)
data_2 = make_api_request(instruction_2)
data_2
'On October 7, 2024, the IBM stock prices ranged from a low of $225.02 to a high of $227.67.'
Step 6b. Calling the get_current_weather function
As our next query, let's inquire about the current weather in San Francisco. We can follow the same steps as in Step 6a by adjusting the query.
query = "What is the current weather in San Francisco?"
conversation = [
    {"role": "system", "content": "You are a helpful assistant with access to the following function calls. Your task is to produce a list of function calls necessary to generate response to the user utterance. Use the following function calls as required."},
    {"role": "user", "content": query},
]
instruction_1 = TOKENIZER.apply_chat_template(conversation=conversation, tools=tools, tokenize=False, add_generation_prompt=True)
instruction_1
'<|start_of_role|>available_tools<|end_of_role|>\n{\n "type": "function",\n "function": {\n "name": "get_stock_price",\n "description": "Retrieves the lowest and highest stock prices for a given ticker and date.",\n "parameters": {\n "type": "object",\n "properties": {\n "ticker": {\n "type": "string",\n "description": "The stock ticker symbol, e.g., \\"IBM\\"."\n },\n "date": {\n "type": "string",\n "description": "The date in \\"YYYY-MM-DD\\" format for which you want to get stock prices."\n }\n },\n "required": [\n "ticker",\n "date"\n ]\n },\n "return": {\n "type": "object",\n "description": "A dictionary containing the low and high stock prices on the given date."\n }\n }\n}\n\n{\n "type": "function",\n "function": {\n "name": "get_current_weather",\n "description": "Fetches the current weather for a given location (default: San Francisco).",\n "parameters": {\n "type": "object",\n "properties": {\n "location": {\n "type": "string",\n "description": "The name of the city for which to retrieve the weather information."\n }\n },\n "required": [\n "location"\n ]\n },\n "return": {\n "type": "object",\n "description": "A dictionary containing weather information such as temperature, weather description, and humidity."\n }\n }\n}<|end_of_text|>\n<|start_of_role|>system<|end_of_role|>You are a helpful assistant with access to the following function calls. Your task is to produce a list of function calls necessary to generate response to the user utterance. Use the following function calls as required.<|end_of_text|>\n<|start_of_role|>user<|end_of_role|>What is the current weather in San Francisco?<|end_of_text|>\n<|start_of_role|>assistant<|end_of_role|>'
data_1 = make_api_request(instruction_1)
data_1
'[{"name": "get_current_weather", "arguments": {"location": "San Francisco"}}]'
Once again, the model chooses the appropriate tool, in this case get_current_weather, and extracts the location correctly. Now, let's call the function with the argument generated by the model.
tool_response = tool_call(data_1)
tool_response
Getting current weather for San Francisco
{'description': 'clear sky', 'temperature': 15.52, 'humidity': 68}
The function response correctly describes the current weather in San Francisco. Lastly, let's generate a synthesized final response with the results of this function call.
conversation2 = conversation + [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Display the tool response in natural language."},
    {"role": "tool_response", "content": str(tool_response)},
]
instruction_2 = TOKENIZER.apply_chat_template(conversation=conversation2, tools=tools, tokenize=False, add_generation_prompt=True)
data_2 = make_api_request(instruction_2)
data_2
'The current weather in San Francisco is clear with a temperature of 15.52 degrees and a humidity of 68%.'
Summary
In this tutorial, you built custom functions and used the Granite-3.0-8B-Instruct model to determine which function to call based on key information from user queries. With this information, you called the function with the arguments stated in the model response. The function calls produced the expected output. Finally, you called the Granite-3.0-8B-Instruct model again to synthesize the information returned by the functions.