In today’s API-driven world, the ability to seamlessly interact with external services unlocks incredible possibilities. Azure OpenAI Service’s Function Calling feature takes this a step further by enabling Large Language Models (LLMs) such as GPT-3.5-Turbo-16k and GPT-4 to decide which API calls to make and to process their responses intelligently.
Additionally, this capability integrates AI with external data sources, opening the door to innovative applications.
In this article, we will take a deep dive into this feature by walking through a scenario and showing how to implement Function Calling for it in practice.
Pre-requisites
- If you are new to Function Calling, this article on Function Calling may be a good place to start for an overview of this powerful feature.
- An Azure subscription and an Azure OpenAI resource.
- A deployed model that supports Function Calling, e.g. gpt-35-turbo-16k or gpt-4.
- A valid API key to access and get responses from the APIs (a minimal .NET setup sketch follows this list).
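Before diving in, here is a minimal setup sketch. The package reference and using directives below are assumptions based on the types used in the snippets that follow, since the Azure OpenAI .NET SDK ships as a prerelease NuGet package.

// Assumed package reference (prerelease NuGet package for the Azure OpenAI .NET SDK):
//   dotnet add package Azure.AI.OpenAI --prerelease
using Azure;               // AzureKeyCredential
using Azure.AI.OpenAI;     // OpenAIClient, ChatCompletionsOptions, ChatCompletionsFunctionToolDefinition
using System.Text.Json;    // JsonSerializer, JsonSerializerOptions, JsonNamingPolicy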
Scenario
Here I would like to demonstrate one of the many ways to get responses from multiple APIs using the Function Calling feature. I will be using two publicly available APIs from WeatherAPI.com (the Forecast Weather API and the Astronomy API), and defining each API's metadata in its own class, in a format that Azure OpenAI Function Calling supports and understands.
ForecastWeatherAPI.cs (Class)
public class ForecastWeather
{
    // Name of the forecast weather tool, as exposed to the LLM.
    public static string Name = "forcast_weather";

    // API endpoint URL for the weather forecast service.
    public static string ApiEndpoint = "https://weatherapi-com.p.rapidapi.com/forecast.json";

    // Tool (function) definition that tells the LLM what this function does and which parameters it accepts.
    // Declared static so it can be registered via ForecastWeather.requiredFunction in Program.cs.
    public static ChatCompletionsFunctionToolDefinition requiredFunction = new ChatCompletionsFunctionToolDefinition()
    {
        // Set the name of the function to the value of the static field 'Name'.
        Name = Name,

        // Describe what the function returns so the LLM knows when to call it.
        Description = "This method returns up to the next 14-day weather forecast and weather alerts as JSON. It contains astronomy data, daily weather forecasts and hourly-interval weather information for a given city.",

        // Define the parameters for the function as a JSON schema.
        Parameters = BinaryData.FromObjectAsJson(new
        {
            Type = "object",
            Properties = new
            {
                // 'q' holds the city name.
                q = new
                {
                    Type = "string",
                    Description = "The city name e.g.: q=Paris",
                },
                // 'days' holds the number of days of forecast required.
                days = new
                {
                    Type = "number",
                    Description = "Number of days of forecast required.",
                }
            },
            // Only the 'q' parameter is mandatory.
            Required = new[] { "q" },
        },
        // Serialize the schema with camelCase property names.
        new JsonSerializerOptions() { PropertyNamingPolicy = JsonNamingPolicy.CamelCase })
    };

    // Input class that the LLM's JSON arguments are deserialized into.
    public class WeatherForcastInput
    {
        // City name query.
        public string q { get; set; } = string.Empty;

        // Number of days for the weather forecast; defaults to 1 when the model omits this optional parameter.
        public int days { get; set; } = 1;
    }
}
Astronomy.cs (Class)
public class Astronomy
{
    // Name of the astronomy tool, as exposed to the LLM.
    public static string Name = "astronomy";

    // API endpoint URL for the astronomy service.
    public static string ApiEndpoint = "https://weatherapi-com.p.rapidapi.com/astronomy.json";

    // Tool (function) definition for retrieving astronomy data.
    // Declared static so it can be registered via Astronomy.requiredFunction in Program.cs.
    public static ChatCompletionsFunctionToolDefinition requiredFunction = new ChatCompletionsFunctionToolDefinition()
    {
        // Set the name of the function to the value of the static field 'Name'.
        Name = Name,

        // Describe what the function returns so the LLM knows when to call it.
        Description = "This method allows a user to get up-to-date information for sunrise, sunset, moonrise, moonset, moon phase and illumination as JSON.",

        // Define the parameters for the function as a JSON schema.
        Parameters = BinaryData.FromObjectAsJson(new
        {
            Type = "object",
            Properties = new
            {
                // 'q' holds the city name.
                q = new
                {
                    Type = "string",
                    Description = "The city name e.g.: q=Paris",
                },
                // 'dt' holds the date.
                dt = new
                {
                    Type = "string",
                    Description = "Date. e.g.: dt=2024-06-11",
                }
            },
            // Only the 'q' parameter is mandatory.
            Required = new[] { "q" },
        },
        // Serialize the schema with camelCase property names.
        new JsonSerializerOptions() { PropertyNamingPolicy = JsonNamingPolicy.CamelCase })
    };

    // Input class that the LLM's JSON arguments are deserialized into.
    public class AstronomyInput
    {
        // City name query.
        public string q { get; set; } = string.Empty;

        // Date for the astronomy data; defaults to today when the model omits this optional parameter.
        public DateOnly dt { get; set; } = DateOnly.FromDateTime(DateTime.Today);
    }
}
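A quick aside on the camelCase JsonSerializerOptions passed in both classes above: the anonymous schema objects use Pascal-cased members (Type, Properties, Required), while the function-calling schema expects lowercase JSON keys, so the naming policy performs that conversion. The snippet below is purely illustrative; it serializes the same anonymous object directly (WriteIndented is added here only for readability) so you can preview the shape the tool definition sends.

// Serialize the same anonymous parameter object to see the JSON schema the tool definition sends.
var schemaPreview = JsonSerializer.Serialize(new
{
    Type = "object",
    Properties = new
    {
        q = new { Type = "string", Description = "The city name e.g.: q=Paris" },
        days = new { Type = "number", Description = "Number of days of forecast required." }
    },
    Required = new[] { "q" },
}, new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase, WriteIndented = true });

// Prints a schema with lowercase keys: "type", "properties", "q", "days", "required".
Console.WriteLine(schemaPreview);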
Now let’s see how to register these tool definitions in Program.cs.
// Create an Azure OpenAI client object.
OpenAIClient client = new(new Uri("<OpenAiEndpoint>"), new AzureKeyCredential("<OpenAiKey>"));

// Chat options carrying the deployment name, the conversation messages and the tool definitions.
var chatCompletionsOptions = new ChatCompletionsOptions()
{
    DeploymentName = "<Your-AI-Deployment-Name>"
};

// Define the user's question and add it to the Messages list as a user message.
string question = "What's the weather in Delhi?";
chatCompletionsOptions.Messages.Add(new ChatRequestUserMessage(question));

// Register both ChatCompletionsFunctionToolDefinition objects so the LLM knows which tools it can call.
chatCompletionsOptions.Tools.Add(ForecastWeather.requiredFunction);
chatCompletionsOptions.Tools.Add(Astronomy.requiredFunction);
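As a side note, rather than hard-coding the endpoint and key placeholders above, they can be read from configuration. Here is a minimal sketch using environment variables; the variable names are assumptions for illustration, not part of the original sample.

// Read the Azure OpenAI endpoint and key from environment variables (assumed names).
string endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")
                  ?? throw new InvalidOperationException("AZURE_OPENAI_ENDPOINT is not set.");
string key = Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")
             ?? throw new InvalidOperationException("AZURE_OPENAI_KEY is not set.");

// Create the client exactly as above, just with configuration-driven values.
OpenAIClient client = new(new Uri(endpoint), new AzureKeyCredential(key));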
Next, feed the API metadata and the user’s question to the Azure OpenAIClient so the model can analyse them and decide which tools to call.
// Send the initial chat request with the user's question and the registered tools.
Azure.Response<ChatCompletions> responseWithoutStream = await client.GetChatCompletionsAsync(chatCompletionsOptions);
ChatChoice responseChoice = responseWithoutStream.Value.Choices[0];

// Check whether the LLM wants to call tools (functions).
if (responseChoice.FinishReason == CompletionsFinishReason.ToolCalls)
{
    // Tool call(s) exist in the response.
    // Add the assistant message with tool calls to the conversation history.
    ChatRequestAssistantMessage toolCallHistoryMessage = new(responseChoice.Message);
    chatCompletionsOptions.Messages.Add(toolCallHistoryMessage);

    // Add a new tool message for each tool call that is resolved.
    foreach (ChatCompletionsToolCall toolCall in responseChoice.Message.ToolCalls)
    {
        var toolResponseMessage = await GetToolCallResponseMessageAsync(toolCall);
        chatCompletionsOptions.Messages.Add(toolResponseMessage);
    }

    // Send a new request with the updated chatCompletionsOptions, which now includes the tool-call
    // history and the tool responses, so the LLM can use the API data in its final answer.
    responseWithoutStream = await client.GetChatCompletionsAsync(chatCompletionsOptions);
    responseChoice = responseWithoutStream.Value.Choices[0];
}

// Return the final Azure OpenAI response after the respective API call(s).
return responseChoice.Message.Content;

// Local helper method that executes the API call requested by a tool call
// and wraps the API response in a tool message.
async Task<ChatRequestToolMessage> GetToolCallResponseMessageAsync(ChatCompletionsToolCall toolCall)
{
    // Only function tool calls are expected here.
    if (toolCall is not ChatCompletionsFunctionToolCall functionToolCall)
    {
        return new ChatRequestToolMessage("Unsupported tool call type.", toolCall.Id);
    }

    // If the forcast_weather API needs to be called,
    if (functionToolCall.Name == ForecastWeather.Name)
    {
        // Validate and process the JSON arguments for the function call.
        string unvalidatedArguments = functionToolCall.Arguments;
        ForecastWeather.WeatherForcastInput inputParams = JsonSerializer.Deserialize<ForecastWeather.WeatherForcastInput>(
            unvalidatedArguments,
            new JsonSerializerOptions() { PropertyNamingPolicy = JsonNamingPolicy.CamelCase })!;

        // Build the request endpoint for the forecast API.
        string requestEndpoint = $"{ForecastWeather.ApiEndpoint}?q={inputParams.q}&days={inputParams.days}";

        var httpClient = new HttpClient();
        var request = new HttpRequestMessage
        {
            Method = HttpMethod.Get,
            RequestUri = new Uri(requestEndpoint),
            Headers =
            {
                { "X-RapidAPI-Key", "SIGN-UP-FOR-KEY" },
                { "X-RapidAPI-Host", "weatherapi-com.p.rapidapi.com" },
            },
        };

        using (var response = await httpClient.SendAsync(request))
        {
            // Throw if the API call did not succeed.
            response.EnsureSuccessStatusCode();

            // Read the response content as a string and return it as a tool message.
            var responseBody = await response.Content.ReadAsStringAsync();
            return new ChatRequestToolMessage(responseBody, toolCall.Id);
        }
    }
    // If the astronomy API needs to be called,
    else if (functionToolCall.Name == Astronomy.Name)
    {
        // Validate and process the JSON arguments for the function call.
        string unvalidatedArguments = functionToolCall.Arguments;
        Astronomy.AstronomyInput inputParams = JsonSerializer.Deserialize<Astronomy.AstronomyInput>(
            unvalidatedArguments,
            new JsonSerializerOptions() { PropertyNamingPolicy = JsonNamingPolicy.CamelCase })!;

        // Build the request endpoint for the astronomy API.
        string requestEndpoint = $"{Astronomy.ApiEndpoint}?q={inputParams.q}&dt={inputParams.dt:yyyy-MM-dd}";

        var httpClient = new HttpClient();
        var request = new HttpRequestMessage
        {
            Method = HttpMethod.Get,
            RequestUri = new Uri(requestEndpoint),
            Headers =
            {
                { "X-RapidAPI-Key", "SIGN-UP-FOR-KEY" },
                { "X-RapidAPI-Host", "weatherapi-com.p.rapidapi.com" },
            },
        };

        using (var response = await httpClient.SendAsync(request))
        {
            // Throw if the API call did not succeed.
            response.EnsureSuccessStatusCode();

            // Read the response content as a string and return it as a tool message.
            var responseBody = await response.Content.ReadAsStringAsync();
            return new ChatRequestToolMessage(responseBody, toolCall.Id);
        }
    }

    // Fallback for any tool name that is not recognised.
    return new ChatRequestToolMessage("Requested tool is not supported.", toolCall.Id);
}
Conclusion
So, when we execute the above code with valid API keys, the expected behaviour is as follows:
- Initially, all the defined functions (tools) and the user’s question are sent to the LLM.
- After the initial analysis, the LLM decides which functions (tools) need to be called.
- The arguments returned for the respective tool call, i.e. the parameters it requires, are checked and deserialized into the defined input class.
- The required API is called, and once a valid response is received, it is fed back into the LLM as a tool message.
- Finally, the LLM analyses the API response together with the user’s question to compose a human-like, natural-language answer.
Beyond the Basics
Function Calling unlocks a vast array of possibilities. Here are some potential applications:
- News Summarization: Call multiple news APIs to gather diverse perspectives on a specific topic and generate a concise summary.
- Travel Recommendations: Combine data from weather, flight, and accommodation APIs to recommend the best time to travel to a destination.
- Personalized Shopping Assistant: Integrate with product information and user preference APIs to suggest items that align with a user’s interests.
The possibilities are endless. With Function Calling, Azure OpenAI Service lets you build intelligent applications that combine the power of AI with the vast ecosystem of APIs, ultimately leading to richer, more seamless experiences.