Azure OpenAI Chat Completions
Overview
You can use this Snap to generate chat completions with the specified model and model parameters.
- Transform-type Snap
- Works in Ultra Pipelines
Prerequisites
Deploy the specific model in the Azure OpenAI Studio portal. Learn more about how to access Azure OpenAI.
Limitations and known issues
None.
Snap views
View | Description | Examples of upstream and downstream Snaps |
---|---|---|
Input | This Snap supports at most one binary or document input view. When the Input type is Document, you must provide a field that specifies the path to the input prompt. The Snap requires a prompt, which can be generated by the Azure OpenAI Prompt Generator Snap or be any user-provided prompt intended for submission to the chat completions LLM API (see the sketch after this table). | Mapper |
Output | This Snap supports at most one document output view. The Snap provides the result generated by the Azure OpenAI LLMs. | Mapper |
Error | Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter when running the pipeline by choosing one of the following options from the When errors occur list under the Views tab: Stop Pipeline Execution, Discard Error Data and Continue, or Route Error Data to Error View. Learn more about Error handling in Pipelines. | |
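The following minimal sketch illustrates the two input shapes described above, using hypothetical field names (msg, messages) and values; an upstream Mapper or the Azure OpenAI Prompt Generator Snap typically produces these documents, and the Prompt and Message payload settings described below reference them.

```python
# Hypothetical input document when the Input type is Document and the Prompt field is set to $msg.
input_doc = {"msg": "Summarize the customer's last three support tickets."}

# Hypothetical input document when Use message payload is selected and Message payload is set to
# $messages, e.g. as produced upstream by the Azure OpenAI Prompt Generator Snap:
# a list of message objects, each with a role and content.
payload_doc = {
    "messages": [
        {"role": "system", "content": "You are a helpful support assistant."},
        {"role": "user", "content": "Summarize the customer's last three support tickets."},
    ]
}
```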
Snap settings
- Suggestion icon: Indicates a list that is dynamically populated based on the configuration.
- Expression icon: Indicates whether the value is an expression (if enabled) or a static value (if disabled). Learn more about Using Expressions in SnapLogic.
- Add icon: Indicates that you can add fields in the field set.
- Remove icon: Indicates that you can remove fields from the field set.
Field / Field set | Type | Description |
---|---|---|
Label | String |
Required. Specify a unique name for the Snap. Modify the default name to be more descriptive, especially if the pipeline contains more than one instance of this Snap. Default value: Azure OpenAI Chat Completions Example: Create customer support chatbots |
Deployment ID | String/Expression/Suggestion |
Required. Specify the deployment ID of the model that you deployed in the Azure OpenAI Studio portal.
Learn more about how to retrieve the ID and the list of compatible models.
Note: Not all available deployment IDs might be listed in the Suggestions list because of limitations of the Azure APIs.
Workaround: Manually enter the Deployment ID associated with the model you plan to use (found on the Deployments page in the Azure OpenAI portal). Default value: N/A Example: snaplogic-gpt-4 |
Use message payload | Checkbox |
Select this checkbox to generate responses using the messages specified in the Message payload field. Note:
Default status: Deselected |
Message payload | String/Expression |
Appears when you select the Use message payload checkbox. Required. Specify the prompt to send to the chat completions endpoint. The expected data type for this field is a list of objects (a list of messages), which you can generate with the Azure OpenAI Prompt Generator Snap. For example, see the request sketch after this table.
Default value: N/A Example: $messages |
Prompt | String/Expression |
Appears when you select Document as the Input type. Required. Specify the prompt to send to the chat completions endpoint as the user message. Default value: N/A Example: $msg |
Model parameters | | Configure the parameters to tune the model runtime. |
Maximum tokens | Integer/Expression |
Specify the maximum number of tokens to generate in the chat completion. If left blank, the default value of the endpoint is used. Default value: N/A Example: 50 |
Temperature | Decimal/Expression |
Specify the sampling temperature as a decimal value between 0 and 1. If left blank, the default value of the endpoint is used. Default value: N/A Example: 0.2 |
Top P | Decimal/Expression |
Specify the nucleus sampling value, a decimal value between 0 and 1. If left blank, the default value of the endpoint is used. Default value: N/A Example: 0.2 |
Response count | Integer/Expression |
Specify the number of responses to generate for each input (the n model parameter). If left blank, the default value of the endpoint is used. Default value: N/A Example: 2 |
Stop sequences | String/Expression |
Specify one or more text sequences or tokens that stop the model from generating further output. Learn more. Note:
Default value: N/A Example: pay, ["amazing"], ["September", "paycheck"] |
Advanced prompt configuration | | Configure the advanced prompt settings. |
System prompt | String/Expression |
Specify the system prompt (initial instruction).
This prompt prepares the model for the conversation by defining the role, personality, tone, and other relevant details so that the specified Azure OpenAI Service model can understand and respond to the user's input.
Note:
Default value: N/A Example: Explain the answer to a 6-year-old child. |
JSON mode | Checkbox/Expression |
Select this checkbox to enable the model to generate strings that can be parsed into valid JSON objects. The output includes a field named json_output that contains the parsed JSON object, encapsulating the data. Note:
Default status: Deselected |
Snap execution | Dropdown list |
Select one of the three modes in which the Snap executes. Available options are: Validate & Execute, Execute only, and Disabled.
Default value: Validate & Execute Example: Execute only |
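The model parameters and prompt settings above correspond to the standard Azure OpenAI chat completions request parameters. For orientation only, the following minimal Python sketch shows an equivalent direct call with the openai package's AzureOpenAI client; the deployment name, endpoint, credentials, and prompts are hypothetical placeholders, and the Snap makes this call for you at runtime.

```python
from openai import AzureOpenAI  # pip install openai (v1.x)

# Placeholder endpoint and key; in SnapLogic these come from the Azure OpenAI account settings.
client = AzureOpenAI(
    api_key="<your-api-key>",
    api_version="2024-02-01",
    azure_endpoint="https://<your-resource>.openai.azure.com",
)

response = client.chat.completions.create(
    model="snaplogic-gpt-4",                  # Deployment ID
    messages=[                                # System prompt + Prompt / Message payload
        {"role": "system", "content": "Respond in JSON. Explain the answer to a 6-year-old child."},
        {"role": "user", "content": "Why is the sky blue?"},
    ],
    max_tokens=50,                            # Maximum tokens
    temperature=0.2,                          # Temperature
    top_p=0.2,                                # Top P
    n=2,                                      # Response count
    stop=["September", "paycheck"],           # Stop sequences
    response_format={"type": "json_object"},  # JSON mode (the prompt must mention JSON)
)
print(response.choices[0].message.content)
```

In practice, set either Temperature or Top P rather than both; both appear here only to map each Snap setting to its API parameter.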
Troubleshooting
Error | Reason | Resolution |
---|---|---|
Unable to generate JSON response | Not enough tokens. | Modify the settings and try again. |
Unable to parse JSON content string | The JSON output is malformed. | Try again. |
Request encountered an error when connecting to Azure OpenAI (status code: 400) | JSON_object is not supported with this model, or the message must contain the word JSON in some form. | Verify the account credentials and model parameters, and try again. |