OpenAI Chat Completions


You can use this Snap to generate chat completions using the specified model and model parameters.

OpenAI Chat Completions Overview



Limitations and Known Issues


Snap Views

View Description Examples of Upstream and Downstream Snaps
Input This Snap supports at most one binary or document input view. When the input type is a document, you must provide a field that specifies the path to the input prompt. The Snap requires a prompt, which can be generated by the OpenAI Prompt Generator or supplied directly as any prompt you intend to submit to the chat completions LLM API. Mapper
Output This Snap supports at most one document output view. The Snap provides the result generated by the OpenAI LLM. Mapper

Error handling lets you manage errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter while running the pipeline by choosing one of the following options from the When errors occur list under the Views tab:

  • Stop Pipeline Execution: Stops the current pipeline execution when the Snap encounters an error.
  • Discard Error Data and Continue: Ignores the error, discards that record, and continues with the remaining records.
  • Route Error Data to Error View: Routes the error data to an error view without stopping the Snap execution.

Learn more about Error handling in Pipelines.
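The behavior of the three modes can be illustrated with a small sketch. The function names and record shapes here are illustrative only; SnapLogic implements this routing internally.

```python
def process_record(record):
    """Stand-in for the Snap's work: fails on records missing a 'prompt' field."""
    if "prompt" not in record:
        raise ValueError("missing prompt")
    return {"completion": f"echo: {record['prompt']}"}

def run_snap(records, on_error="Route Error Data to Error View"):
    output, error_view = [], []
    for record in records:
        try:
            output.append(process_record(record))
        except ValueError as exc:
            if on_error == "Stop Pipeline Execution":
                raise                      # fail the whole pipeline run
            elif on_error == "Discard Error Data and Continue":
                continue                   # drop the record silently
            else:                          # route to the error view
                error_view.append({"record": record, "error": str(exc)})
    return output, error_view

docs = [{"prompt": "Hi"}, {"note": "no prompt here"}]
ok, errs = run_snap(docs)
print(len(ok), len(errs))  # 1 good record, 1 routed error
```

With Route Error Data to Error View, the failing record is preserved alongside its error message instead of being lost, which is why it is the usual choice when downstream Snaps should inspect failures.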

Snap Settings

  • Suggestion icon: Indicates a list that is dynamically populated based on the configuration.
  • Expression icon: Indicates whether the value is an expression (if enabled) or a static value (if disabled). Learn more about Using Expressions in SnapLogic.
  • Add icon: Indicates that you can add fields in the field set.
  • Remove icon: Indicates that you can remove fields from the field set.
Field / Field Set Type Description
Label (String)

Required. Specify a unique name for the Snap. Modify this name to be more descriptive, especially if the pipeline contains more than one Snap of the same type.

Default value: OpenAI Chat Completions

Example: Create customer support chatbots
Model name (String/Expression)

Required. Specify the model name to use for the chat completion. Learn more about the list of models from OpenAI that are compatible with the chat completions API.

Default value: N/A

Example: gpt-3.5-turbo
Prompt (String/Expression)

Appears when you select Document as the Input type.

Required. Specify the prompt to send to the chat completions endpoint as the user message.

Default value: N/A

Example: $msg
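When the expression toggle is enabled, a value such as $msg is resolved against the incoming document rather than used literally. A rough sketch of that lookup (simplified: the real SnapLogic expression language also supports nested paths, functions, and operators):

```python
def resolve_expression(expression, document):
    """Resolve a simple $field expression against an input document.
    Illustrative sketch only; SnapLogic evaluates expressions internally."""
    if expression.startswith("$"):
        return document[expression[1:]]  # top-level field lookup
    return expression                    # static value, used as-is

doc = {"msg": "Summarize this ticket for a support agent."}
prompt = resolve_expression("$msg", doc)
print(prompt)
```

So an upstream Mapper that writes the user message into a msg field lets this Snap pick it up with the $msg expression shown in the example above.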
Model parameters

Specify the parameters to tune the model runtime.
Maximum tokens (Integer/Expression)

Specify the maximum number of tokens to generate in the chat completion. If left blank, the default value of the endpoint is used.

Default value: N/A

Example: 50

Temperature (Number/Expression)

Specify the sampling temperature, a decimal value between 0 and 2. If left blank, the default value of the endpoint is used.

Default value: N/A

Example: 0.2
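Temperature rescales the model's token probabilities before sampling: values below 1 sharpen the distribution toward the most likely tokens, while values above 1 flatten it. A minimal illustration using a softmax over made-up logits (the numbers are arbitrary; this only demonstrates the effect, not OpenAI's implementation):

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide the logits by the temperature before normalizing; as the
    # temperature rises, the resulting distribution flattens toward uniform.
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                  # made-up scores for three tokens
cold = softmax_with_temperature(logits, 0.2)   # near-greedy: top token dominates
warm = softmax_with_temperature(logits, 2.0)   # more exploratory sampling
print(round(cold[0], 3), round(warm[0], 3))
```

A low value such as the 0.2 in the example above makes responses more deterministic, which suits factual or extraction-style prompts.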

Top P (Number/Expression)

Specify the nucleus sampling value, a decimal value between 0 and 1. If left blank, the default value of the endpoint is used.

Default value: N/A

Example: 0.2
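Top P (nucleus sampling) restricts sampling to the smallest set of tokens whose cumulative probability reaches the threshold. A sketch of the selection step, with made-up token probabilities (the endpoint applies this server-side):

```python
def nucleus(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability reaches
    top_p, then renormalize. Illustrative sketch only."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break                          # the nucleus is complete
    total = sum(p for _, p in kept)
    return {token: p / total for token, p in kept}

probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "zebra": 0.05}
print(nucleus(probs, 0.8))                 # only "the" and "a" survive
```

OpenAI's documentation generally recommends adjusting either Temperature or Top P, not both at once.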

Response count (Integer/Expression)

Specify the number of responses to generate for each input (the model's n parameter). If left blank, the default value of the endpoint is used.

Default value: N/A

Example: 2
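Taken together, the settings above map onto the body of an OpenAI chat completions request. A sketch of the payload such a request would carry, using the example values from this page (the exact JSON the Snap emits may differ; blank parameters are simply omitted so the endpoint falls back to its defaults):

```python
import json

# Example values drawn from the settings documented above.
settings = {
    "model": "gpt-3.5-turbo",
    "prompt": "Summarize this ticket for a support agent.",
    "max_tokens": 50,
    "temperature": 0.2,
    "top_p": 0.2,
    "n": 2,
}

# The prompt becomes the user message; tuning parameters are copied over
# only when a value was provided.
payload = {
    "model": settings["model"],
    "messages": [{"role": "user", "content": settings["prompt"]}],
}
for key in ("max_tokens", "temperature", "top_p", "n"):
    if settings.get(key) is not None:
        payload[key] = settings[key]

print(json.dumps(payload, indent=2))
```

With n set to 2 as in the example, the endpoint returns two completion choices for the single input document.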

Snap execution (Dropdown list)
Select one of the three modes in which the Snap executes. Available options are:
  • Validate & Execute: Performs limited execution of the Snap and generates a data preview during pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during pipeline runtime.
  • Execute only: Performs full execution of the Snap during pipeline execution without generating preview data.
  • Disabled: Disables the Snap and all Snaps that are downstream from it.

Default value: Validate & Execute

Example: Execute only