Amazon Bedrock Converse Agent

Overview

The Amazon Bedrock Converse API Agent Snap accepts input containing the initial request contents, a list of tools, and parameters that drive an Agent execution loop. The Snap calls the Amazon Bedrock Converse API, consumes the result, executes the requested tools, and collects their results, repeating until the model has no more tools to call.
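The execution loop described above can be sketched in Python as follows. This is an illustrative sketch, not the Snap's implementation: `call_model` and `run_tool` are hypothetical stand-ins for the Converse API round trip and the tool pipeline execution, and the message shapes follow the Converse API's `toolUse`/`toolResult` conventions.

```python
def agent_loop(messages, tools, call_model, run_tool, iteration_limit=10):
    """Sketch of an agent loop: call the model, run requested tools, repeat."""
    for _ in range(iteration_limit):
        reply = call_model(messages, tools)      # one Converse API round trip
        messages.append(reply["message"])
        if reply["stopReason"] != "tool_use":    # model is done: final answer
            return messages
        # Execute every tool the model requested and feed the results back.
        for block in reply["message"]["content"]:
            if "toolUse" in block:
                result = run_tool(block["toolUse"])
                messages.append({
                    "role": "user",
                    "content": [{"toolResult": {
                        "toolUseId": block["toolUse"]["toolUseId"],
                        "content": [{"text": str(result)}],
                    }}],
                })
    return messages  # iteration limit reached before a stop condition
```

The Iteration limit and tool-execution settings under Agent execution configuration correspond to the `iteration_limit` bound and the per-iteration tool calls in this sketch.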

Learn more about the Agent Snap in this developer blog.


Snap dialog with fieldsets expanded on the right side

  • Read-type Snap

Limitations

  • When you select JSON mode with Claude models, they may produce malformed JSON, causing parsing errors.

    Workaround: Ensure your prompt clearly asks for a valid JSON response, such as: Respond with a valid JSON object.

  • Selecting the Thinking checkbox entails the following limitations on the Model parameters fieldset:
    • Thinking is not compatible with modifications to Temperature, Top K, or Top P fields.
    • Thinking does not support forcing tool use.
    • You cannot pre-fill responses with Thinking enabled.
    • Changes to the thinking budget invalidate cached prompt prefixes that include messages. However, cached system prompts and tool definitions will continue to work when thinking parameters change.
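Under these constraints, a thinking-enabled Converse request might be shaped as below. This is a sketch: the model ID and token values are illustrative, and `thinking` is assumed to pass through `additionalModelRequestFields` as with Anthropic models on Bedrock. Note that `temperature`, `topP`, and `topK` are deliberately omitted because they are incompatible with Thinking.

```python
def build_thinking_request(messages, budget_tokens=1024, max_tokens=2048):
    """Shape a Converse request with extended thinking (illustrative values)."""
    assert budget_tokens < max_tokens, "the thinking budget must fit inside maxTokens"
    return {
        "modelId": "anthropic.claude-3-7-sonnet-20250219-v1:0",  # illustrative
        "messages": messages,
        # No temperature/topP/topK: sampling tweaks conflict with thinking.
        "inferenceConfig": {"maxTokens": max_tokens},
        "additionalModelRequestFields": {
            "thinking": {"type": "enabled", "budget_tokens": budget_tokens},
        },
    }
```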

Snap views

View Description Examples of upstream and downstream Snaps
Input This Snap supports a maximum of one binary or document input view.
  • Binary Input type: Requires a binary input to be used as the prompt. When you select the Binary input view, the Prompt field is hidden.
  • Document Input type: You must provide a field that specifies the path to the input prompt. The Snap requires a prompt, which can be generated by the Amazon Bedrock Prompt Generator Snap or be any user-provided prompt intended for submission to the Amazon Bedrock Converse API.
Output This Snap has at most one document output view. The Snap provides the result generated by the Amazon Bedrock Converse API. Mapper
Error

Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter when running the pipeline by choosing one of the following options from the When errors occur list under the Views tab. The available options are:

  • Stop Pipeline Execution Stops the current pipeline execution when an error occurs.
  • Discard Error Data and Continue Ignores the error, discards that record, and continues with the remaining records.
  • Route Error Data to Error View Routes the error data to an error view without stopping the Snap execution.

Learn more about Error handling in Pipelines.

Snap settings

Legend:
  • Expression icon (): Allows using JavaScript syntax to access SnapLogic Expressions to set field values dynamically (if enabled). If disabled, you can provide a static value. Learn more.
  • SnapGPT (): Generates SnapLogic Expressions based on natural language using SnapGPT. Learn more.
  • Suggestion icon (): Populates a list of values dynamically based on your Snap configuration. You can select only one attribute at a time using the icon. Type into the field if it supports a comma-separated list of values.
  • Upload icon (): Uploads files. Learn more.
Learn more about the icons in the Snap settings dialog.
Field / Field set Type Description
Label String

Required. Specify a unique name for the Snap. Modify this to be more appropriate, especially if there is more than one of the same Snap in the pipeline.

Default value: Amazon Bedrock Converse API

Example: Create customer support agents
Visualize agent flow String

Launch the Agent Visualizer UI.

Default value: N/A

Include Cross-Region Models Checkbox

Shows cross-region models in the Model Name suggestions if selected. Enabled only when Model Name is not an expression.

Default value: Deselected

Note: This setting only changes the suggestion behavior for the model listing; it does not affect runtime.
Model name String/Expression/ Suggestion

Required. Specify the model name to use with the Converse API. Learn more about the list of supported Amazon Bedrock Converse API models.

Default value: N/A

Example: anthropic.claude-3-sonnet
System prompt String/Expression

Specify the prompt (initial instruction). This prompt prepares the model for the conversation by defining its role, personality, tone, and other relevant details so that it understands and responds to the user's input. Learn more about the supported models.

Note:
  • If you leave this field blank, empty, or null, the Snap processes the request without using any system prompt.
  • This field supports input document values from upstream Snaps.
  • The output represents the result generated by the Amazon Bedrock Converse API response.

Default value: N/A

Example:
  • Explain the answer to a 6-year-old child.
  • Explain your role as an AI assistant.
Message payload String/Expression

Required. Specify the message payload for the associated Converse API model.

For example,
[
    {
        "content": "You are a helpful assistant",
        "sl_role": "SYSTEM"
    },
    {
        "content": "Who won the world series in 2020?",
        "sl_role": "USER",
        "name": "Snap-User"
    },
    {
        "content": "The Los Angeles Dodgers won the World Series in 2020",
        "sl_role": "ASSISTANT"
    },
    {
        "content": "Where was it played?",
        "sl_role": "USER",
        "name": "Snap-User2"
    }
]
Important: Here are the typical scenarios of how the tool calling Snap processes different types of message lists in the input document:
  • If a message does not contain sl_role, the Snap sends the message to the model without any modifications.
  • If a message does not contain sl_role but has sl_type in any item of the content list, the Snap sends it without any modifications.
  • If the sl_role is USER or ASSISTANT, the Snap parses the sl_role and content fields. All other fields are filtered out and are not sent to the model; however, they are displayed in the output document.
  • If the sl_role is SYSTEM, USER, or ASSISTANT and the content is multimodal content generated by the Multimodal Content Generator Snap, the Snap reformats the content as the model requires.
  • If the sl_role is TOOL, which is a function result generated by the Function Result Generator Snap, the Snap parses the sl_role, function_id, is_error, and content fields. All other fields are filtered out and are not sent to the model; however, they are displayed in the output document.

Default value: N/A

Example: $messages
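The sl_role rules above can be illustrated with a hypothetical normalization function. The field names used here (`sl_role`, `content`, `function_id`, `is_error`) follow the example payload, and the output shape is the Converse API's message format; this is a sketch, not the Snap's actual code.

```python
def normalize(message):
    """Sketch: map a SnapLogic sl_role message to a Converse API message."""
    role = message.get("sl_role")
    if role is None:
        return message  # no sl_role: passed through unmodified
    if role in ("USER", "ASSISTANT"):
        # Only sl_role and content are sent; other fields (e.g. name) are dropped.
        return {"role": role.lower(), "content": [{"text": message["content"]}]}
    if role == "TOOL":
        # A function result: parse sl_role, function_id, is_error, and content.
        return {"role": "user", "content": [{"toolResult": {
            "toolUseId": message["function_id"],
            "status": "error" if message.get("is_error") else "success",
            "content": [{"text": message["content"]}],
        }}]}
    return message
```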
Tool payload String/Expression

Required. Specify the list of tool definitions to send to the model.

Default value: N/A

Example: $messages
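For the Converse API, each tool definition is a `toolSpec` with a JSON schema describing its input. The `get_weather_info` function and its schema below are hypothetical, shown only to illustrate the expected shape of the tool payload.

```python
# Illustrative tool payload in the Converse API's toolSpec format.
tool_payload = [
    {
        "toolSpec": {
            "name": "get_weather_info",  # hypothetical tool
            "description": "Return the current weather for a city.",
            "inputSchema": {
                "json": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string", "description": "City name"},
                    },
                    "required": ["city"],
                }
            },
        }
    }
]
```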
Agent execution configuration

Configure the limits of the Agent execution loop.

Iteration limit Integer/Expression

Required. Specify the maximum number of iterations the agent can run.

Default value: 10

Example: 10
Monitor tool calls Checkbox

Monitor tool call parameters in pipeline statistics.

Default status: Selected

Pool size Integer/Expression

Required. The number of threads for parallel tool execution.

Default value: 1

Example: 1
Reuse tool pipeline Checkbox

Reuse the tool pipeline for tool execution.

Default status: Deselected

Tool configuration

Modify the tool call settings to guide the model responses and optimize output processing.

Tool choice Dropdown list/Expression
Required. Choose how the model selects the tool to call. Available options include:
  • ANY
  • SPECIFY A FUNCTION
  • AUTO
Important: The SPECIFY A FUNCTION option is only available for Anthropic Claude 3 models.

Default value: AUTO

Example: ANY
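These options map onto the Converse API's `toolChoice` field roughly as sketched below; SPECIFY A FUNCTION uses the name provided in the Function name field. This mapping is an assumption about the underlying API shape, not the Snap's documented internals.

```python
def tool_choice(option, function_name=None):
    """Sketch: map the Tool choice setting to the Converse API toolChoice field."""
    if option == "AUTO":
        return {"auto": {}}   # the model decides whether to call a tool
    if option == "ANY":
        return {"any": {}}    # the model must call some tool
    if option == "SPECIFY A FUNCTION":
        return {"tool": {"name": function_name}}  # force this exact function
    raise ValueError(option)
```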

Cache tools Checkbox/Expression

Select to cache the tool definitions, which reduces inference response latency and input token costs.

As of May 2025, supported models are:

  • Claude Opus 4.1
  • Claude Opus 4
  • Claude Sonnet 4.5
  • Claude Sonnet 4
  • Claude Haiku 4.5
  • Claude 3.7 Sonnet
  • Claude 3.5 Sonnet v2
  • Claude 3.5 Haiku
  • Amazon Nova Micro, Amazon Nova Lite, and Amazon Nova Pro

This field also displays under Advanced prompt configuration. Learn more about the supported models.
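With prompt caching, a `cachePoint` marker is placed after the tool definitions so the prefix up to that point can be reused across requests. The helper below is an illustrative sketch of that shape, assuming the Converse API's `cachePoint` convention; it is not the Snap's implementation.

```python
def with_tool_cache(tools, cache=True):
    """Sketch: append a cachePoint marker after the tool definitions."""
    tool_config = {"tools": list(tools)}  # copy so the input list is untouched
    if cache:
        # The prefix up to this marker becomes reusable across requests.
        tool_config["tools"].append({"cachePoint": {"type": "default"}})
    return tool_config
```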
Function name String/Expression

Appears when you select SPECIFY A FUNCTION as the Tool choice.

Required. Specify the name of the function to force the model to call.

Default value: N/A

Example: get_weather_info

Model parameters Configure the parameters to tune the model runtime.
Thinking Checkbox

Enabled if the model supports thinking or the model name is an expression.

Supporting models include the following options from the Model dropdown:

  • anthropic.claude-3-7-sonnet
  • anthropic.claude-sonnet-4
  • anthropic.claude-opus-4
  • anthropic.claude-sonnet-4-5

Learn more.

Default: Deselected

Budget tokens Integer

Specify the number of tokens reserved for reasoning, in addition to the output tokens.

Enabled if Thinking is selected or if the model name is an expression.

Maximum tokens Integer/Expression
Specify the maximum number of tokens to generate in the chat completion. If left blank, the default will be set to the specified model's maximum allowed value. Learn more.
Note: The response may be incomplete if the sum of the prompt tokens and Maximum tokens exceeds the allowed token limit for the model.
Minimum value: 1

Default value: N/A

Example: 100

Temperature Decimal/Expression

Specify the sampling temperature, a decimal value between 0 and 1. If left blank, the model uses its default value. Learn more.

Default value: N/A

Example: 0.2

Top P Decimal/Expression

Specify the nucleus sampling value, a decimal value between 0 and 1. If left blank, the model will use its default value. Learn more.

Default value: N/A

Example: 0.2

Stop sequences String/Expression

Specify sequences of text or tokens that stop the model from generating further output.

Default value: N/A

Example:
  • coffee
  • ["coffee", "bean"]
Advanced prompt configuration

Use this field set to configure the advanced prompt settings.

System prompt String/Expression

Specify the prompt (initial instruction). This prompt prepares the model for the conversation by defining its role, personality, tone, and other relevant details so that it understands and responds to the user's input. Learn more about the supported models.

Note:
  • If you leave this field blank, empty, or null, the Snap processes the request without using any system prompt.
  • This field supports input document values from upstream Snaps.
  • The output represents the result generated by the Amazon Bedrock Converse API response.

Default value: N/A

Example:
  • Explain the answer to a 6-year-old child.
  • Explain your role as an AI assistant.
Cache system prompt Checkbox/Expression

Appears when the model name contains anthropic.claude, cohere.command-r, meta.llama3, mistral.mistral-large, or amazon.nova, or when the Model name is an expression.

Select to cache the system prompt for requests.

Default status: Deselected

Snap execution Dropdown list
Choose one of the three modes in which the Snap executes. Available options are:
  • Validate & Execute: Performs limited execution of the Snap and generates a data preview during pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during pipeline runtime.
  • Execute only: Performs full execution of the Snap during pipeline execution without generating preview data.
  • Disabled: Disables the Snap and all Snaps that are downstream from it.

Default value: Validate & Execute

Example: Execute only

Troubleshooting

Error: The following function(s) does not have a corresponding tool pipeline or the schema is invalid: get_weather_api

Reason: The tool pipeline path is incorrect.

Resolution: Include a pipeline path for the defined function and ensure the tool type is valid.

Error: The Agent is not complete but the execution is completed.

Reason: The iteration limit is reached, but the finish reason is not a stop condition.

Resolution: Consider providing a greater iteration limit.

Error: Duplicate tool names found: get_weather_api

Reason: Duplicated tool names are automatically renamed to avoid conflicts.

Resolution: Rename the tools so that each name is unique.