Azure OpenAI Chat Completions

Overview

You can use this Snap to generate chat completions with the specified model and model parameters.



Prerequisites

Deploy the specific model in the Azure OpenAI Studio portal. Learn more about how to access Azure OpenAI.

Limitations and known issues

None.

Snap views

View Description Examples of upstream and downstream Snaps
Input This Snap supports at most one binary or document input view. When the Input type is Document, you must provide a field that specifies the path to the input prompt. The Snap requires a prompt, which can be generated by the Azure OpenAI Prompt Generator Snap or supplied as any user-defined prompt intended for submission to the chat completions LLM API. Mapper
Output This Snap has at most one document output view. The Snap provides the result generated by the Azure OpenAI LLMs. Mapper
Error

Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter when running the pipeline by choosing one of the following options from the When errors occur list under the Views tab. The available options are:

  • Stop Pipeline Execution Stops the current pipeline execution when the Snap encounters an error.
  • Discard Error Data and Continue Ignores the error, discards that record, and continues with the remaining records.
  • Route Error Data to Error View Routes the error data to an error view without stopping the Snap execution.

Learn more about Error handling in Pipelines.

Snap settings

Note:
  • Suggestion icon: Indicates a list that is dynamically populated based on the configuration.
  • Expression icon: Indicates whether the value is an expression (if enabled) or a static value (if disabled). Learn more about Using Expressions in SnapLogic.
  • Add icon (+): Indicates that you can add fields in the field set.
  • Remove icon (-): Indicates that you can remove fields from the field set.
Field / Field set Type Description
Label String

Required. Specify a unique name for the Snap. Modify this label to be more descriptive, especially when the pipeline contains more than one of the same Snap.

Default value: Azure OpenAI Chat Completions

Example: Create customer support chatbots
Deployment ID String/Expression/Suggestion
Required. Specify the model or deployment ID for the model from the Azure OpenAI Studio portal. Learn more about how to retrieve the ID and the list of compatible models.
Note: Not all available deployment IDs might be listed in the Suggestions list because of the limitations of Azure APIs.

Workaround: Enter the Deployment ID manually (found on your Deployments page within the Azure OpenAI Portal) associated with the model you plan to use.

Default value: N/A

Example: snaplogic-gpt-4
Use message payload Checkbox

Select this checkbox to generate responses using the messages specified in the Message payload field.

Note:
  • When you select this checkbox, the Snap hides the Prompt and System prompt fields and displays the Message payload field instead.
  • When the input view is Binary, this field is hidden.

Default status: Deselected

Message payload String/Expression

Appears when you select the Use message payload checkbox.

Required. Specify the prompt to send to the chat completions endpoint as the user message. The expected data type for this field is a list of objects (a list of messages). You can generate this list with the Azure OpenAI Prompt Generator Snap.

For example,
[
    {
        "content": "You are a helpful assistant",
        "sl_role": "SYSTEM"
    },
    {
        "content": "Who won the world series in 2020?",
        "sl_role": "USER",
        "name": "Snap-User"
    },
    {
        "content": "The Los Angeles Dodgers won the World Series in 2020",
        "sl_role": "ASSISTANT"
    },
    {
        "content": "Where was it played?",
        "sl_role": "USER",
        "name": "Snap-User2"
    }
]

Default value: N/A

Example: $messages
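As a sketch of how the message-payload format above maps onto the standard chat-completions messages array, the following hypothetical Python function converts a list of `sl_role` entries (as produced by the Azure OpenAI Prompt Generator) into `role`/`content` messages. The function name and conversion logic are illustrative assumptions, not the Snap's internals.

```python
# Hypothetical conversion from the Snap's message-payload format (with
# "sl_role" keys) to the standard chat-completions "messages" array.
def to_chat_messages(payload):
    """Map each {"sl_role": ..., "content": ...} entry to {"role": ..., "content": ...}."""
    messages = []
    for entry in payload:
        message = {
            "role": entry["sl_role"].lower(),  # SYSTEM -> system, USER -> user, ...
            "content": entry["content"],
        }
        if "name" in entry:  # optional participant name, carried through unchanged
            message["name"] = entry["name"]
        messages.append(message)
    return messages

payload = [
    {"content": "You are a helpful assistant", "sl_role": "SYSTEM"},
    {"content": "Who won the world series in 2020?", "sl_role": "USER", "name": "Snap-User"},
]
print(to_chat_messages(payload))
```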
Prompt String/Expression

Appears when you select Document as the Input type.

Required. Specify the prompt to send to the chat completions endpoint as the user message.

Default value: N/A

Example: $msg
Model parameters Configure the parameters to tune the model runtime.
Maximum tokens Integer/Expression

Specify the maximum number of tokens to generate in the chat completion. If left blank, the default value of the endpoint is used.

Default value: N/A

Example: 50

Temperature Decimal/Expression

Specify the sampling temperature, a decimal value between 0 and 1. If left blank, the default value of the endpoint is used.

Default value: N/A

Example: 0.2

Top P Decimal/Expression

Specify the nucleus sampling value, a decimal value between 0 and 1. If left blank, the default value of the endpoint is used.

Default value: N/A

Example: 0.2

Response count Integer/Expression

Specify the number of responses to generate for each input (the model parameter n). If left blank, the default value of the endpoint is used.

Important:
  • This field is hidden when either Simplify response or Continuation requests is enabled, because both options support only a single-choice response.
  • If Simplify response or Continuation requests is enabled and Response count is set to a value other than 1, the following warning appears: Snap's advanced response configuration for 'Response count' is ignored since 'Simplify response' or 'Continuation requests' is enabled. Set the Response count value to 1 to resolve.

Default value: N/A

Example: 2

Stop sequences String/Expression

Specify sequences of text or tokens that stop the model from generating further output. Learn more.

Note:
  • You can configure up to four stop sequences when generating the text. These stop sequences tell the model to halt further text generation if any of the specified sequences are encountered.
  • The returned text does not contain the stop sequence.

Default value: N/A

Example: pay, ["amazing"], ["September", "paycheck"]
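To illustrate the behavior described in the note above, the following local simulation (not an API call; the helper name is an assumption for illustration) shows how generation halts at the earliest configured stop sequence, with the sequence itself excluded from the returned text.

```python
# Local simulation of stop-sequence behavior: output is truncated at the
# first occurrence of any configured sequence, excluding the sequence itself.
def apply_stop_sequences(text, stop_sequences):
    """Truncate text at the earliest stop sequence, excluding the sequence."""
    cut = len(text)
    for seq in stop_sequences:
        idx = text.find(seq)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

output = "Your September paycheck will arrive soon."
print(apply_stop_sequences(output, ["September", "paycheck"]))
# -> "Your " (truncated before the first stop sequence)
```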
Advanced prompt configuration Configure the advanced prompt settings.
System prompt String/Expression
Specify the initial instruction prompt. This prompt prepares the conversation by defining the role, personality, tone, and other relevant details for the specified Azure OpenAI Service model, so it can understand and respond to the user's input.
Note:
  • If you leave this field empty or set it to null, it is ignored.
  • This field supports input document values from the upstream Snap.
  • The output represents the result generated by the Azure OpenAI Chat Completions response.

Default value: N/A

Example: Explain the answer to a 6-year-old child.

JSON mode Checkbox/Expression

Select this checkbox to enable the model to generate strings that can be parsed into valid JSON objects. The output includes the parsed JSON object in a field named json_output that contains the data.

Note:
  • When the output from the model is not valid JSON, the Snap fails, indicating that it could not parse the JSON in the output. However, the Snap provides the full output from the LLM model in the error view along with the error message.
  • When you select this checkbox, ensure the word JSON is included in the prompt, either in the Prompt field or the System prompt field. Otherwise, the Snap results in an error. Learn more.
    Azure OpenAI Chat Completions Prompt field with JSON enabled

  • When the model's output indicates that there are not enough tokens, the Snap fails with the reason that it could not parse the JSON in the output. However, the Snap provides the full output from the LLM model in the error view.
  • When you select JSON mode, the Continuation requests checkbox is hidden, as this feature is not supported in JSON mode.

Default status: Deselected
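The post-processing described above can be sketched as follows. This hedged example shows the general idea of JSON mode: parse the model's string output into a `json_output` field, and on failure preserve the raw output alongside the error, mirroring how the Snap routes the full LLM output to the error view. The function and field names other than `json_output` are illustrative, not the Snap's internals.

```python
# Sketch of JSON-mode post-processing: parse the model output string and
# expose the result under "json_output"; on failure, keep the raw output
# together with the error (as the Snap does via its error view).
import json

def parse_json_mode_output(model_output):
    try:
        return {"json_output": json.loads(model_output)}
    except json.JSONDecodeError as err:
        # Equivalent of the Snap failing while attaching the full raw output
        return {"error": f"Failed to parse JSON in the output: {err}",
                "raw_output": model_output}

print(parse_json_mode_output('{"winner": "Los Angeles Dodgers", "year": 2020}'))
print(parse_json_mode_output("not valid json"))
```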

Advanced response configurations Configure the response settings to customize the responses and optimize output processing.
Important:
  • The Response count field under Model parameters is hidden when you select either Simplify response or Continuation requests because these options support only a single-choice response.
  • If the Response count value is not set to 1, the following warning is displayed: Snap's advanced response configuration for 'Response count' is ignored since 'Simplify response' or 'Continuation requests' is enabled. To resolve the issue, set the Response count value to 1.
Simplify response Checkbox/Expression Select this checkbox to receive a simplified response format that retains only the most commonly used fields and standardizes the output for compatibility with other models. This option supports only a single choice response. Here's an example of a simplified output format.
{
  "role": <string/null>,
  "content": <string/JSON (for JSON mode)>,
  "tool_calls": <array of tool call information objects>, // optional
  "finish_reason": <string>,
  "usage": {
    "prompt_tokens": <integer/null>,
    "output_tokens": <integer/null>,
    "total_tokens": <integer/null>
  },
  "_sl_responses": <object/array of the raw responses for debug mode>, // optional
  "original": {}
}
Note: This field does not support upstream values.

Default status: Deselected
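The flattening shown above can be sketched in a few lines. This illustrative function picks the single choice from a raw chat-completions response (standard API schema) and keeps only the commonly used fields; the exact fields the Snap retains may differ.

```python
# Illustrative simplification of a raw chat-completions response into the
# single-choice format documented above.
def simplify_response(raw):
    choice = raw["choices"][0]          # single-choice response only
    usage = raw.get("usage", {})
    return {
        "role": choice["message"].get("role"),
        "content": choice["message"].get("content"),
        "finish_reason": choice.get("finish_reason"),
        "usage": {
            "prompt_tokens": usage.get("prompt_tokens"),
            "output_tokens": usage.get("completion_tokens"),
            "total_tokens": usage.get("total_tokens"),
        },
        "original": raw,                # full raw response kept for reference
    }

raw = {
    "choices": [{"message": {"role": "assistant", "content": "Hi"},
                 "finish_reason": "stop"}],
    "usage": {"prompt_tokens": 5, "completion_tokens": 1, "total_tokens": 6},
}
print(simplify_response(raw)["content"])
```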

Continuation requests Checkbox/Expression

Select this checkbox to enable continuation requests. When selected, the Snap automatically requests additional responses if the finish reason is length.

Important: This Snap uses the same schema as the Azure OpenAI response. However, when multiple responses are merged through Continuation requests, certain fields may not merge correctly, such as id, system_fingerprint, prompt_filter_results, and content_filter_results. This is because of the structure of the responses, where specific fields are not designed to be combined across multiple entries.
The following example represents the format of the output when you select the Continuation requests checkbox:
{
  "object": <object>,
  "created":  <created from latest response>,
  "model": <model>,
  "service_tier": <service_tier>,
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": <response1> + <response2> + ... <response n>,
        "tool_calls": [...], // if has finish by tool calling
        "refusal": null
      },
      "finish_reason": <finish reason from latest response>
      
      // include if has only one repsonse
      "logprobs": null, 
      "content_filter_results": {
        ...
      },
    }
  ],
  "usage": {
    "prompt_tokens": <sum of prompt_tokens>,
    "completion_tokens": <sum of completion_tokens>,
    "total_tokens": <sum of total_tokens>
  },
  
  // included only if there is a single response
  "id": <id>,
  "system_fingerprint": null,
  "prompt_filter_results": [
    ...
  ],
  
  // When JSON mode is enabled
  "json_output": <parse json from output>,
  // When debug mode is enabled
  "_sl_responses": [<raw response1>, <raw response2>, ... ,<raw response n>]
}
Note: This field does not support upstream values.

Default status: Deselected
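The merge behavior documented above can be sketched as follows: content strings are concatenated in order, token usage is summed, and metadata such as `created` and `finish_reason` is taken from the latest response. This mirrors the documented output format, not the Snap's actual implementation; the function name and sample responses are assumptions.

```python
# Sketch of merging multiple continuation responses into one result:
# concatenated content, summed usage, latest metadata.
def merge_continuations(responses):
    merged = dict(responses[-1])  # model, created, etc. from the latest response
    merged["choices"] = [{
        "index": 0,
        "message": {
            "role": "assistant",
            "content": "".join(r["choices"][0]["message"]["content"] for r in responses),
        },
        "finish_reason": responses[-1]["choices"][0]["finish_reason"],
    }]
    merged["usage"] = {
        key: sum(r["usage"][key] for r in responses)
        for key in ("prompt_tokens", "completion_tokens", "total_tokens")
    }
    return merged

r1 = {"model": "gpt-4", "created": 1,
      "choices": [{"index": 0,
                   "message": {"role": "assistant", "content": "Part one, "},
                   "finish_reason": "length"}],
      "usage": {"prompt_tokens": 10, "completion_tokens": 50, "total_tokens": 60}}
r2 = {"model": "gpt-4", "created": 2,
      "choices": [{"index": 0,
                   "message": {"role": "assistant", "content": "part two."},
                   "finish_reason": "stop"}],
      "usage": {"prompt_tokens": 60, "completion_tokens": 5, "total_tokens": 65}}
merged = merge_continuations([r1, r2])
print(merged["choices"][0]["message"]["content"])  # -> Part one, part two.
```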

Continuation requests limit Integer/Expression

Appears when you select the Continuation requests checkbox.

Required. Specify the maximum number of continuation requests to be made.

Note: This field does not support upstream values.

Minimum value: 1

Maximum value: 20

Default value: N/A

Example: 3
Debug mode Checkbox/Expression

Appears when you select the Simplify response or Continuation requests checkbox.

Select this checkbox to enable debug mode. This mode provides the raw response in the _sl_responses field and is recommended for debugging purposes only. If Continuation requests is enabled, the _sl_responses field contains an array of raw response objects, one from each individual request.

Note: This field does not support upstream values.

Default status: Deselected

Snap execution Dropdown list
Select one of the three modes in which the Snap executes. Available options are:
  • Validate & Execute. Performs limited execution of the Snap and generates a data preview during pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during pipeline runtime.
  • Execute only. Performs full execution of the Snap during pipeline execution without generating preview data.
  • Disabled. Disables the Snap and all Snaps that are downstream from it.

Default value: Validate & Execute

Example: Execute only

Troubleshooting

Error: Unable to generate JSON response
Reason: Not enough tokens.
Resolution: Modify the settings and try again.

Error: Unable to parse JSON content string
Reason: The JSON output is malformed.
Resolution: Try again.

Error: Request encountered an error when connecting to Azure OpenAI (status code: 400)
Reason: JSON_object is not supported with this model, or the message must contain the word JSON in some form.
Resolution: Verify the account credentials and model parameters, and try again.

Error: Snap's model parameters for 'Response count' is ignored since advanced response configurations 'Simplify response' or 'Continuation requests' is enabled
Reason: The Response count value is ignored because Simplify response or Continuation requests supports only a single response.
Resolution: Set the Response count value to 1.

Error: Continuation requests limit error
Reason: The Continuation requests limit value is invalid.
Resolution: Provide a Continuation requests limit value between 1 and 20.

Examples