Google Gemini Generate

Overview

You can use this Snap to generate text responses using the specified model and model parameters.



Prerequisites

Limitations

  • The gemini-1.0-pro and gemini-1.5-pro models do not support the Top K field.
  • The models/gemini-1.5-flash model does not support the JSON mode field.

Known issues

None.

Snap views

View Description Examples of upstream and downstream Snaps
Input This Snap supports a maximum of one binary or document input view. When the input type is a document, you must provide a field that specifies the path to the input prompt. The Snap requires a prompt, which can be generated by the Google GenAI Prompt Generator Snap or supplied as any user-defined prompt intended for submission to the Gemini API.
Output This Snap supports at most one document output view. The Snap provides the result generated by the Gemini API. Mapper
Error

Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter when running the pipeline by choosing one of the following options from the When errors occur list under the Views tab. The available options are:

  • Stop Pipeline Execution: Stops the current pipeline execution when the Snap encounters an error.
  • Discard Error Data and Continue: Ignores the error, discards that record, and continues with the remaining records.
  • Route Error Data to Error View: Routes the error data to an error view without stopping the Snap execution.

Learn more about Error handling in Pipelines.

Snap settings

Note:
  • Suggestion icon: Indicates a list that is dynamically populated based on the configuration.
  • Expression icon: Indicates whether the value is an expression (if enabled) or a static value (if disabled). Learn more about Using Expressions in SnapLogic.
  • Add icon: Indicates that you can add fields in the field set.
  • Remove icon: Indicates that you can remove fields from the field set.
Field / Field set Type Description
Label String

Required. Specify a unique name for the Snap. Modify the default name to be more specific and meaningful, especially if your pipeline contains more than one Snap of the same type.

Default value: Google Gemini Generate

Example: Create customer support chatbots
Model name String/Expression

Required. Specify the name of the model to use for generating text responses. Learn more about the list of compatible models in the Gemini API documentation.

Default value: N/A

Example: models/gemini-1.5-pro
Use content payload Checkbox

Select this checkbox to generate responses using the messages specified in the Content payload field.

Note:
  • When you select this checkbox, the Snap hides the Prompt field and displays the Content payload field.
  • When the input view is Binary, this field is hidden.

Default status: Deselected

Content payload String/Expression

Appears when you select the Use content payload checkbox.

Required. Specify the prompt to send to the Gemini API as the user message. The expected data type for this field is a list of objects (a list of messages). You can generate this list with the Google GenAI Prompt Generator Snap.

For example,
[
    {
        "contents": [
            {
                "content": "What day is it today?",
                "sl_role": "USER"
            },
            {
                "content": "Today is Monday.",
                "sl_role": "MODEL"
            },
            {
                "content": "What day is it tomorrow?",
                "sl_role": "USER"
            }
        ]
    }
]

Default value: N/A

Example: $messages
Prompt String/Expression

Appears when you select Document as the Input type.

Required. Specify the prompt to send to the Gemini API as the user message.

Default value: N/A

Example: $msg
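
For instance, suppose the upstream Snap produces a document similar to the following sketch, where the field name msg and its value are illustrative placeholders rather than required names:

[
    {
        "msg": "Summarize the customer's issue in two sentences."
    }
]

With this input document, enable the expression toggle and set Prompt to $msg so that the value of the msg field is sent to the Gemini API as the user message.
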
Model parameters Configure the parameters to tune the model runtime.
Maximum tokens Integer/Expression
Specify the maximum number of tokens to generate in the chat completion. If left blank, the default value of the model is used.
Note: The response may be incomplete if the sum of the prompt tokens and Maximum tokens exceeds the allowed token limit for the model.

Minimum value: 1

Maximum value: 8,192

Default value: N/A

Example: 50

Temperature Decimal/Expression

Specify the sampling temperature to use, as a decimal value. Lower values produce more focused and deterministic responses, while higher values produce more varied responses. If left blank, the default value of the model is used. Refer to the Additional information section below for the minimum, maximum, and default values for each model.

Minimum value: 0.0

Maximum value: 2.0

Default value: N/A

Example: 0.2

Top P Decimal/Expression

Specify the nucleus sampling value as a decimal between 0 and 1. This value sets the cumulative probability threshold for selecting tokens, which influences the diversity of the generated content. Lower values may result in more focused and deterministic responses, while higher values can increase content variability. If left blank, the default value of the model is used.

Refer to the following default values for each model:
  • gemini-1.5-pro Default value: 0.94
  • gemini-1.0-pro Default value: 1
  • gemini-1.0-pro-vision Default value: 1

Minimum value: 0.0

Maximum value: 1.0

Default value: N/A

Example: 0.2

Top K Integer/Expression
Specify a value to limit the number of high-probability tokens considered for each generation step to control the randomness of the output. If left blank, the default value of the model is used.
Note: The gemini-1.0-pro and gemini-1.5-pro models do not support this field.
Refer to the following default value for the model:
  • gemini-1.0-pro-vision Default value: 32

Minimum value: 1

Maximum value: 40

Default value: N/A

Example: 30
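
For reference, these model parameters correspond to the generationConfig fields of a Gemini API generateContent request. The following sketch shows how an equivalent request body might look when calling the API directly, using the example values above; it is illustrative only, and you do not configure this JSON directly in the Snap:

{
    "contents": [
        {
            "role": "user",
            "parts": [
                {
                    "text": "What day is it tomorrow?"
                }
            ]
        }
    ],
    "generationConfig": {
        "maxOutputTokens": 50,
        "temperature": 0.2,
        "topP": 0.2,
        "topK": 30
    }
}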

Advanced prompt configuration Configure the prompt settings to guide the model responses and optimize output processing.
JSON mode Checkbox/Expression

Select this checkbox to enable the model to generate strings that can be parsed into valid JSON objects. The output includes a field named json_output that contains the parsed JSON object, encapsulating the data.

Note:
  • models/gemini-1.5-flash is not supported for this field.
  • This field does not support input document values from upstream Snaps.
  • When the output from the model is invalid JSON, the Snap fails and indicates that it could not parse the JSON in the output. However, the Snap provides the full output from the LLM model in the error view along with the error message.
  • When the output from the model indicates that there were not enough tokens to complete the response, the Snap fails with the reason that it could not parse the JSON in the output. However, the Snap provides the full output from the LLM model in the error view.
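
For example, if the model responds with a string that parses to a valid JSON object, the output document contains a json_output field similar to the following sketch. The keys and values shown are illustrative, and other fields of the output document are omitted here:

{
    "json_output": {
        "sentiment": "positive",
        "summary": "The customer is satisfied with the response time."
    }
}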

Default status: Deselected

System Prompt String/Expression

Specify the persona for the model to adopt in its responses. This initial instruction guides the LLM's responses and actions. The system prompt sets up the conversation by defining the role, personality, tone, and other relevant details that help the model understand and respond to the user's input.

Note:
  • This field is also supported by the gemini-1.5-pro and gemini-1.0-pro-002 models.
  • If you leave this field blank, empty, or null, the Snap processes the request without a system prompt.

Default value: N/A

Example:
  • "You are a helpful assistant."
  • "You are an experienced software developer."
  • "You are a customer service representative for a tech company."
Snap execution Dropdown list
Select one of the three modes in which the Snap executes. Available options are:
  • Validate & Execute. Performs limited execution of the Snap and generates a data preview during pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during pipeline runtime.
  • Execute only. Performs full execution of the Snap during pipeline execution without generating preview data.
  • Disabled. Disables the Snap and all Snaps that are downstream from it.

Default value: Validate & Execute

Example: Execute only

Additional information

The following table lists the Models and their corresponding minimum, maximum, and default values for the Temperature field:

Model Name Minimum value Maximum value Default value
gemini-1.5-pro 0.0 2.0 1.0
gemini-1.0-pro-vision 0.0 1.0 0.4
gemini-1.0-pro-002 0.0 2.0 1.0
gemini-1.0-pro-001 0.0 1.0 0.9

Examples