Azure OpenAI Tool Calling

Overview

Use this Snap to provide external tools that the model can call, so the tools can supply internal data and information for the model's responses.



Prerequisites

None.

Limitations and known issues

None.

Snap views

View Description Examples of upstream and downstream Snaps
Input This Snap has one document input view, typically carrying the input message for the Azure OpenAI model.
Output This Snap has two output views: one contains the response from the LLM; the other contains the tool call details extracted from that response, including a json_arguments field (see the sketch after this table).
Error

Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter when running the pipeline by choosing one of the following options from the When errors occur list under the Views tab. The available options are:

  • Stop Pipeline Execution Stops the current pipeline execution when an error occurs.
  • Discard Error Data and Continue Ignores the error, discards that record, and continues with the remaining records.
  • Route Error Data to Error View Routes the error data to an error view without stopping the Snap execution.

Learn more about Error handling in Pipelines.
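
For illustration only, the document on the second output view (the extracted tool call) might resemble the following sketch. Only the json_arguments field is described above; the surrounding field names and values shown here are hypothetical and will vary with your model and tool definitions.

    {
      "name": "get_weather",
      "json_arguments": {
        "location": "Seattle"
      }
    }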

Snap settings

Note:
  • Suggestion icon: Indicates a list that is dynamically populated based on the configuration.
  • Expression icon: Indicates whether the value is an expression (if enabled) or a static value (if disabled). Learn more about Using Expressions in SnapLogic.
  • Add icon (+): Indicates that you can add fields in the field set.
  • Remove icon (-): Indicates that you can remove fields from the field set.
Field / Field set Type Description
Label String

Required. Specify a unique name for the Snap. Modify this to be more specific, especially if there is more than one of the same Snap in the pipeline.

Default value: Azure OpenAI Tool Calling

Example: Tool Calling

Deployment ID String/Expression/Suggestion Required. Specify the ID of the model deployment to use for chat completions.

Default value: N/A

Example: gpt-4o

Message payload String/Expression Required. Specify the message payload that will be processed by the model. This payload typically includes input messages in JSON format.

Default value: N/A

Example: $inputMessage
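
For example, a message payload referenced through an expression such as $inputMessage typically follows the chat completions message format. The roles and content below are illustrative only:

    [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What is the weather in Seattle today?"}
    ]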

Tool payload String/Expression Required. Specify the tool payload that defines the tools available for the model to call.

Default value: N/A

Example: $specifiedTools
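
For example, a tool payload referenced through an expression such as $specifiedTools generally follows the chat completions tool definition format, where each tool is a function described by a JSON Schema. The get_weather function below is illustrative only:

    [
      {
        "type": "function",
        "function": {
          "name": "get_weather",
          "description": "Get the current weather for a city",
          "parameters": {
            "type": "object",
            "properties": {
              "location": {"type": "string", "description": "The city name"}
            },
            "required": ["location"]
          }
        }
      }
    ]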

Tool Parameters Use this field set to configure advanced options for tool calling, giving you fine-grained control over how the model selects and calls tools within the pipeline.
Tool choice Dropdown list/Expression Required. Choose how the model selects the tool or function to call. The available options are:
  • SPECIFY A FUNCTION
  • REQUIRED
  • NONE
  • AUTO

Default value: AUTO

Example: REQUIRED

Function name String/Expression Required.

Appears when SPECIFY A FUNCTION is selected in Tool choice or when Tool choice is expression-enabled.

Specify the name of the function that you want the model to call. This field is required when you specify a function.

Default value: None

Example: Analyze sentiment
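
For reference, in the underlying chat completions request, SPECIFY A FUNCTION generally corresponds to a tool_choice object that names the function to call. The function name below is illustrative only:

    {"type": "function", "function": {"name": "analyze_sentiment"}}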

Parallel tool calling Checkbox/Expression Select this checkbox to enable parallel calls to multiple tools.

Default status: Deselected

Model parameters Configure the parameters to tune the model runtime.
Maximum tokens Integer/Expression

Specify the maximum number of tokens to generate in the chat completion. If left blank, the default value of the endpoint is used.

Default value: N/A

Example: 50

Temperature Decimal/Expression

Specify the sampling temperature to use, as a decimal value between 0 and 1. If left blank, the default value of the endpoint is used.

Default value: N/A

Example: 0.2

Top P Decimal/Expression

Specify the nucleus sampling value, a decimal value between 0 and 1. If left blank, the default value of the endpoint is used.

Default value: N/A

Example: 0.2

Stop sequences String/Expression

Specify sequences of text or tokens that stop the model from generating further output. Learn more.

Note:
  • You can configure up to four stop sequences when generating the text. These stop sequences tell the model to halt further text generation if any of the specified sequences are encountered.
  • The returned text does not contain the stop sequence.

Default value: N/A

Example: pay, ["amazing"], ["September", "paycheck"]
Advanced prompt configuration Configure the prompt settings to guide the model responses and optimize output processing.
JSON mode Checkbox/Expression Select this checkbox to enable the model to generate strings that can be parsed into valid JSON objects. The output includes the json_output field that contains the parsed JSON object, encapsulating the data.

Default status: Deselected
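
For reference, JSON mode generally corresponds to setting response_format to the JSON object type in the underlying chat completions request; the prompt should also instruct the model to respond in JSON:

    "response_format": {"type": "json_object"}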

Snap execution Dropdown list Select one of the three modes in which the Snap executes.
Available options are:
  • Validate & Execute. Performs limited execution of the Snap and generates a data preview during pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during pipeline runtime.
  • Execute only. Performs full execution of the Snap during pipeline execution without generating preview data.
  • Disabled. Disables the Snap and all Snaps that are downstream from it.