Amazon Bedrock Converse API

Overview

You can use this Snap to generate messages using the specified Amazon Bedrock Converse API model and model parameters.


Amazon Bedrock Converse API Overview

Prerequisites

None.

Known issues

None.

Limitations

  • When you select JSON mode with Claude models, they may produce malformed JSON, causing parsing errors.

    Workaround: Ensure your prompt clearly asks for a valid JSON response, such as: Respond with a valid JSON object.

Snap views

View Description Examples of upstream and downstream Snaps
Input This Snap supports a maximum of one binary or document input view.
  • Binary Input type: Requires a binary input to be used as the prompt. When you select the Binary input view, the Prompt field is hidden.
  • Document Input type: Requires a field that specifies the path to the input prompt. The prompt can be generated by the Amazon Bedrock Prompt Generator Snap or can be any prompt you intend to submit to the Amazon Bedrock Converse API.
Output This Snap has at most one document output view. The Snap provides the result generated by the Amazon Bedrock Converse API. Mapper
Error

Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter when running the pipeline by choosing one of the following options from the When errors occur list under the Views tab. The available options are:

  • Stop Pipeline Execution: Stops the current pipeline execution when an error occurs.
  • Discard Error Data and Continue: Ignores the error, discards that record, and continues with the remaining records.
  • Route Error Data to Error View: Routes the error data to an error view without stopping the Snap execution.

Learn more about Error handling in Pipelines.

Snap settings

Legend:
  • Expression icon (): JavaScript syntax to access SnapLogic Expressions to set field values dynamically (if enabled). If disabled, you can provide a static value. Learn more.
  • SnapGPT (): Generates SnapLogic Expressions based on natural language using SnapGPT. Learn more.
  • Suggestion icon (): Populates a list of values dynamically based on your Account configuration.
  • Upload : Uploads files. Learn more.
Learn more about the icons in the Snap settings dialog.
Field / field set Type Description
Label String

Required. Specify a unique name for the Snap. Modify this to be more specific, especially if your pipeline contains more than one instance of the same Snap.

Default value: Amazon Bedrock Converse API

Example: Create customer support chatbots
Model name String/Expression/Suggestion

Required. Specify the name of the model to use for the Converse API. Learn more about the list of supported Amazon Bedrock Converse API models.

Default value: N/A

Example: anthropic.claude-3-sonnet
Use message payload Checkbox

Select this checkbox to generate responses using the messages specified in the Message payload field.

Note:
  • When you select this checkbox, the Snap hides the Prompt and System prompt fields and displays the Message payload field.
  • When the input view is Binary, this field is hidden.

Default status: Deselected

Message payload String/Expression

Appears when you select the Use message payload checkbox.

Required. Specify the prompt to send to the chat completions endpoint as the user message. The expected data type for this field is a list of objects (a list of messages). You can generate this list with the Amazon Bedrock Prompt Generator Snap.

For example,
[
    {
        "content": "You are a helpful assistant",
        "sl_role": "SYSTEM"
    },
    {
        "content": "Who won the world series in 2020?",
        "sl_role": "USER",
        "name": "Snap-User"
    },
    {
        "content": "The Los Angeles Dodgers won the World Series in 2020",
        "sl_role": "ASSISTANT"
    },
    {
        "content": "Where was it played?",
        "sl_role": "USER",
        "name": "Snap-User2"
    }
]
Note:
  • If a message contains an unsupported role (for example, SYSTEM), it is passed as is to the endpoint. This can result in a call failure because the endpoint does not support the role.
  • If a message contains an unsupported field (for example, the name field), that field is ignored and excluded from the final message sent to the API because the endpoint does not support it. Only the content and role fields are supported.

Default value: N/A

Example: $messages
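The normalization described in the notes above can be illustrated with a short Python sketch. This is a hypothetical helper, not the Snap's internal code: it shows how a message payload in the format above might be mapped to the fields the endpoint supports, with unsupported fields such as name dropped.

```python
# Illustrative sketch (not the Snap's actual implementation) of normalizing a
# message payload: only the content and role fields are kept, and supported
# sl_role values are mapped to lowercase API roles.

SUPPORTED_ROLES = {"USER": "user", "ASSISTANT": "assistant"}

def normalize_messages(payload):
    normalized = []
    for message in payload:
        role = message.get("sl_role", "")
        # Unsupported roles (for example, SYSTEM) are passed through as-is,
        # which may cause the API call to fail.
        normalized.append({
            "role": SUPPORTED_ROLES.get(role, role),
            "content": message["content"],  # fields such as "name" are ignored
        })
    return normalized

messages = normalize_messages([
    {"content": "Who won the world series in 2020?", "sl_role": "USER", "name": "Snap-User"},
    {"content": "The Los Angeles Dodgers won the World Series in 2020", "sl_role": "ASSISTANT"},
])
```

Note that the unsupported name field is absent from the normalized messages, while the content and mapped role are preserved.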
Prompt String/Expression

Appears when you select Document as the Input type.

Required. Specify the prompt to send to the chat completions endpoint as the user message.

Default value: N/A

Example: $msg
Model parameters Configure the parameters to tune the model runtime.
Maximum tokens Integer/Expression
Specify the maximum number of tokens to generate in the chat completion. If left blank, the default will be set to the specified model's maximum allowed value. Learn more.
Note: The response may be incomplete if the sum of the prompt tokens and Maximum tokens exceeds the allowed token limit for the model.
Minimum value: 1

Default value: N/A

Example: 100

Temperature Decimal/Expression

Specify the sampling temperature to use, a decimal value between 0 and 1. If left blank, the model uses its default value. Learn more.

Default value: N/A

Example: 0.2

Top P Decimal/Expression

Specify the nucleus sampling value, a decimal value between 0 and 1. If left blank, the model will use its default value. Learn more.

Default value: N/A

Example: 0.2

Stop sequences String/Expression

Specify a list of stop sequences. A stop sequence is a sequence of characters that causes the model to stop generating the response. To provide a list of strings, enable the expression toggle for this field.

Default value: N/A

Example:
  • coffee
  • ["coffee", "bean"]
Advanced prompt configuration

Appears when a compatible model is selected. Learn more about the supported models.

Configure the advanced prompt settings.
JSON mode Checkbox/Expression

Select this checkbox to enable the model to generate strings that can be parsed into valid JSON objects. The output includes the parsed JSON object in a field named json_output that contains the data. Learn more about the supported models.

Note:
  • This field does not support input document values from upstream Snaps.
  • When the output from the model is invalid JSON, the Snap fails, indicating that it could not parse the JSON in the output. However, the Snap provides the full output from the LLM model in the error view along with the error message.
  • When the output from the model indicates that there were not enough tokens, the Snap fails, indicating that it could not parse the JSON in the output. However, the Snap provides the full output from the LLM model in the error view.
  • When you select this checkbox and specify a Message payload, the Snap automatically adds an Assistant message at the end of the message payload to ensure the model returns JSON in its response. Therefore, if you use JSON mode and a Message payload, ensure your message list ends with a User message, not an Assistant message, to avoid errors.

Default status: Deselected
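The JSON-mode behavior described above can be sketched in Python. This is a hypothetical helper (not the Snap's internal code): the model's text output is parsed into a json_output field, and on failure the full model output is surfaced for troubleshooting, as the notes describe.

```python
import json

# Illustrative sketch (not the Snap's actual implementation) of JSON mode:
# parse the model's text output into a json_output field, or raise an error
# that carries the full model output.

def parse_json_mode_output(model_text):
    try:
        return {"json_output": json.loads(model_text)}
    except json.JSONDecodeError as exc:
        # The Snap routes the full output to the error view with the message.
        raise ValueError(
            f"Failed to parse JSON in the output: {exc}. Full output: {model_text}"
        )

result = parse_json_mode_output('{"city": "Arlington", "state": "Texas"}')
```

Malformed model output (the Claude limitation noted under Limitations) would take the error path, which is why prompting explicitly for a valid JSON object is recommended.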

System Prompt String/Expression

Specify the prompt (initial instruction). This prompt prepares the model for the conversation by defining its role, personality, tone, and other relevant details so that it can understand and respond to the user's input. Learn more about the supported models.

Note:
  • If you leave this field blank, empty, or null, the Snap processes the request without a system prompt.
  • This field supports input document values from upstream Snaps.
  • The output represents the result generated by the Amazon Bedrock Converse API.

Default value: N/A

Example:
  • Explain the answer to a 6-year-old child.
  • Explain your role as an AI assistant.
Advanced response configurations Configure the response settings to customize the responses and optimize output processing.
Simplify response Checkbox/Expression Select this checkbox to receive a simplified response format that retains only the most commonly used fields and standardizes the output for compatibility with other models. This option supports only a single choice response. Here's an example of a simplified output format.
{
  "role": <string/null>,
  "content": <string/JSON (for JSON mode)>,
  "tool_calls": <array of tool call information objects>, // optional
  "finish_reason": <string>,
  "usage": {
    "prompt_tokens": <integer/null>,
    "output_tokens": <integer/null>,
    "total_tokens": <integer/null>
  },
  "_sl_responses": <object/array of the raw responses for debug mode>, // optional
  "original": {}
}
Important: This field does not support upstream values.

Default status: Deselected
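The simplification described above can be sketched in Python. This is a hypothetical helper (not the Snap's internal code), assuming the standard Amazon Bedrock Converse response shape (output.message, stopReason, and usage with inputTokens/outputTokens/totalTokens).

```python
# Illustrative sketch (not the Snap's actual implementation) of mapping a raw
# Amazon Bedrock Converse response to the simplified format shown above.

def simplify_response(raw):
    message = raw["output"]["message"]
    usage = raw.get("usage", {})
    return {
        "role": message.get("role"),
        # Concatenate the text blocks into a single string.
        "content": "".join(block.get("text", "") for block in message["content"]),
        "finish_reason": raw.get("stopReason"),
        "usage": {
            "prompt_tokens": usage.get("inputTokens"),
            "output_tokens": usage.get("outputTokens"),
            "total_tokens": usage.get("totalTokens"),
        },
        "original": raw,  # the raw response is preserved
    }

simple = simplify_response({
    "output": {"message": {"role": "assistant", "content": [{"text": "Hello!"}]}},
    "stopReason": "end_turn",
    "usage": {"inputTokens": 10, "outputTokens": 3, "totalTokens": 13},
})
```

The mapping standardizes field names (for example, inputTokens becomes prompt_tokens) so downstream Snaps can process responses from different models uniformly.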

Continuation requests Checkbox/Expression

Appears when you select a Model name starting with anthropic.claude.

Select this checkbox to enable continuation requests. When selected, the Snap automatically requests additional responses if the finish reason is Maximum tokens.

Important: This Snap uses the same schema as the Amazon Bedrock Converse response. However, when multiple responses are merged through Continuation requests, certain fields, such as additionalModelResponseFields, metrics, and trace, may not merge correctly. This is due to the structure of the responses, where these fields are not designed to be combined across multiple entries.
The following example represents the format of the output when you select the Continuation requests checkbox:
{
  "additionalModelResponseFields": { ... }, // included only if there is a single response
  "metrics": { ... }, // included only if there is a single response
  "output": {
      "message": {
          "content": [
              {
                  "text": <response1> + <response2> + ... <response n>
              }
          ],
          "role": "assistant"
      }
  },
  "stop_reason": <stop_reason from the latest response>,
  "trace": { ... }, // included only if there is a single response
  "usage": {
      "inputTokens": <sum of inputTokens>,
      "outputTokens": <sum of outputTokens>,
      "totalTokens": <sum of totalTokens>
   },

  // When JSON mode is enabled
  "json_output": <parsed JSON from the output>,
  // When debug mode is enabled
  "_sl_responses": [<raw response1>, <raw response2>, ..., <raw response n>]
}
Important: This field does not support upstream values.

Default status: Deselected
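The merge behavior described above can be sketched in Python. This is a hypothetical helper (not the Snap's internal code): text parts are concatenated, token usage is summed, and the stop reason comes from the latest response, matching the output format shown above.

```python
# Illustrative sketch (not the Snap's actual implementation) of merging
# continuation responses from the Amazon Bedrock Converse API.

def merge_continuations(responses):
    # Concatenate the text of each partial response in order.
    merged_text = "".join(
        r["output"]["message"]["content"][0]["text"] for r in responses
    )
    return {
        "output": {
            "message": {"content": [{"text": merged_text}], "role": "assistant"}
        },
        # The stop reason of the final request wins (e.g. "end_turn").
        "stop_reason": responses[-1]["stopReason"],
        # Token counts are summed across all requests.
        "usage": {
            "inputTokens": sum(r["usage"]["inputTokens"] for r in responses),
            "outputTokens": sum(r["usage"]["outputTokens"] for r in responses),
            "totalTokens": sum(r["usage"]["totalTokens"] for r in responses),
        },
    }

merged = merge_continuations([
    {"output": {"message": {"content": [{"text": "Part one, "}]}},
     "stopReason": "max_tokens",
     "usage": {"inputTokens": 5, "outputTokens": 50, "totalTokens": 55}},
    {"output": {"message": {"content": [{"text": "part two."}]}},
     "stopReason": "end_turn",
     "usage": {"inputTokens": 60, "outputTokens": 12, "totalTokens": 72}},
])
```

Because each continuation request resends the conversation so far, summed inputTokens grow with each round, which is why the Continuation requests limit caps the number of rounds.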

Continuation requests limit Integer/Expression

Appears when you select the Continuation requests checkbox.

Required. Specify the maximum number of continuation requests to be made.

Important: This field does not support upstream values.

Minimum value: 1

Maximum value: 20

Default value: N/A

Example: 3
Debug mode Checkbox/Expression

Appears when you select the Simplify response or Continuation requests checkbox.

Select this checkbox to enable debug mode. This mode provides the raw response in the _sl_responses field and is recommended for debugging purposes only. If Continuation requests is enabled, the _sl_responses field contains an array of raw response objects, one from each individual request.

Important: This field does not support upstream values.

Default status: Deselected

Snap execution Dropdown list
Select one of the three modes in which the Snap executes. Available options are:
  • Validate & Execute: Performs limited execution of the Snap and generates a data preview during pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during pipeline runtime.
  • Execute only: Performs full execution of the Snap during pipeline execution without generating preview data.
  • Disabled: Disables the Snap and all Snaps that are downstream from it.

Default value: Validate & Execute

Example: Execute only

Additional information

The following table lists the supported models for the JSON Mode and System prompt fields:

Field name Supported models
JSON Mode This field supports only models whose names start with anthropic.claude:
  • anthropic.claude-instant-v1
  • anthropic.claude-v2:1
  • anthropic.claude-v2
  • anthropic.claude-3-sonnet-20240229-v1:0
  • anthropic.claude-3-haiku-20240307-v1:0
  • anthropic.claude-3-opus-20240229-v1:0
  • anthropic.claude-3-5-sonnet-20240620-v1:0
System prompt
  • anthropic.claude-instant-v1
  • anthropic.claude-v2:1
  • anthropic.claude-v2
  • anthropic.claude-3-sonnet-20240229-v1:0
  • anthropic.claude-3-haiku-20240307-v1:0
  • anthropic.claude-3-opus-20240229-v1:0
  • anthropic.claude-3-5-sonnet-20240620-v1:0
  • cohere.command-r-v1:0
  • cohere.command-r-plus-v1:0
  • meta.llama3-8b-instruct-v1:0
  • meta.llama3-70b-instruct-v1:0
  • meta.llama3-1-8b-instruct-v1:0
  • meta.llama3-1-70b-instruct-v1:0
  • meta.llama3-1-405b-instruct-v1:0
  • mistral.mistral-large-2402-v1:0
  • mistral.mistral-large-2407-v1:0

Troubleshooting

Continuation requests limit error.

The Continuation requests limit value is invalid.

Provide a valid value for Continuation requests limit that is between 1 and 20.

Unable to parse JSON content string

This error occurs because of a limitation in the Claude models, which sometimes generate incorrectly formatted JSON responses.

To avoid this, explicitly request a valid JSON response in your prompt. For example, Respond with a valid JSON object.

Examples