Google Gemini Generate

Overview

You can use this Snap to generate text responses using the specified model and model parameters.


Google Gemini Generate Overview

Prerequisites

Limitations

  • gemini-1.0-pro and gemini-1.5-pro are not supported for the Top K field.
  • models/gemini-1.5-flash is not supported for the JSON mode field.

Known issues

None.

Snap views

View Description Examples of upstream and downstream Snaps
Input This Snap supports a maximum of one binary or document input view. When the input type is a document, you must provide a field that specifies the path to the input prompt. The Snap requires a prompt, which can be generated by the Google GenAI Prompt Generator Snap or supplied as any prompt you intend to submit to the Gemini API.
Output This Snap has at most one document output view. The Snap provides the result generated by the Gemini API. Mapper
Error

Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter when running the pipeline by choosing one of the following options from the When errors occur list under the Views tab. The available options are:

  • Stop Pipeline Execution: Stops the current pipeline execution when an error occurs.
  • Discard Error Data and Continue: Ignores the error, discards that record, and continues with the remaining records.
  • Route Error Data to Error View: Routes the error data to an error view without stopping the Snap execution.

Learn more about Error handling in Pipelines.

Snap settings

Legend:
  • Expression icon: JavaScript syntax to access SnapLogic Expressions to set field values dynamically (if enabled). If disabled, you can provide a static value. Learn more.
  • SnapGPT: Generates SnapLogic Expressions based on natural language using SnapGPT. Learn more.
  • Suggestion icon: Populates a list of values dynamically based on your Account configuration.
  • Upload: Uploads files. Learn more.
Learn more about the icons in the Snap settings dialog.
Field / field set Type Description
Label String

Required. Specify a unique name for the Snap. Modify this to be more specific, especially if the pipeline contains more than one Snap of the same type.

Default value: Google Gemini Generate

Example: Create customer support chatbots
Model name String/Expression

Required. Specify the model name to generate text responses. Learn more about the list of compatible models from Gemini API.

Default value: N/A

Example: models/gemini-1.5-pro
Use content payload Checkbox

Select this checkbox to generate responses using the messages specified in the Content payload field.

Note:
  • When you select this checkbox, the Snap hides the Prompt field and displays the Content payload field.
  • When the input view is Binary, this field is hidden.

Default status: Deselected

Content payload String/Expression

Appears when you select the Use content payload checkbox.

Required. Specify the prompt to send to the Gemini API as the user message. The expected data type for this field is a list of objects (a list of messages). You can generate this list with the Google GenAI Prompt Generator Snap.

For example,
[
    {
        "contents": [
            {
                "content": "What day is it today?",
                "sl_role": "USER"
            },
            {
                "content": "Today is Monday.",
                "sl_role": "MODEL"
            },
            {
                "content": "What day is it tomorrow?",
                "sl_role": "USER"
            }
        ]
    }
]

Default value: N/A

Example: $messages
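
For instance, if the upstream Google GenAI Prompt Generator Snap writes its message list to a field named messages (an assumed field name for illustration), the incoming document might resemble the following, and setting Content payload to $messages passes that list to the Gemini API:
{
    "messages": [
        {
            "contents": [
                {
                    "content": "What day is it today?",
                    "sl_role": "USER"
                }
            ]
        }
    ]
}
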
Prompt String/Expression

Appears when you select Document as the Input type.

Required. Specify the prompt to send to the Gemini API as the user message.

Default value: N/A

Example: $msg
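
As a minimal sketch, assuming the upstream document carries the prompt text in a field named msg (an illustrative field name), setting Prompt to $msg sends that text to the Gemini API as the user message:
{
    "msg": "Summarize the customer's issue in two sentences."
}
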
Model parameters Configure the parameters to tune the model runtime.
Maximum tokens Integer/Expression
Specify the maximum number of tokens to generate in the chat completion. If left blank, the default value of the model is used.
Note: The response may be incomplete if the sum of the prompt tokens and Maximum tokens exceeds the allowed token limit for the model.

Minimum value: 1

Maximum value: 8,192

Default value: N/A

Example: 50

Temperature Decimal/Expression

Specify the sampling temperature as a decimal value between 0 and 2; the exact range depends on the model. If left blank, the default value of the model is used. Learn more about the minimum, maximum, and default values for each model in the Additional information section below.

Minimum value: 0.0

Maximum value: 2.0

Default value: N/A

Example: 0.2

Top P Decimal/Expression

Specify the nucleus sampling value as a decimal between 0 and 1. This value sets the cumulative probability threshold for selecting tokens, which influences the diversity of the generated content. Lower values may result in more focused and deterministic responses, while higher values can increase content variability. If left blank, the default value of the model is used.

The default values for each model are as follows:
  • gemini-1.5-pro: 0.94
  • gemini-1.0-pro: 1
  • gemini-1.0-pro-vision: 1

Minimum value: 0.0

Maximum value: 1.0

Default value: N/A

Example: 0.2

Top K Integer/Expression
Specify a value to limit the number of high-probability tokens considered for each generation step to control the randomness of the output. If left blank, the default value of the model is used.
Note: The gemini-1.0-pro and gemini-1.5-pro models do not support this field.
Default value for gemini-1.0-pro-vision: 32

Minimum value: 1

Maximum value: 40

Default value: N/A

Example: 30

Stop sequences String/Expression

Specify a sequence of texts or tokens to stop the model from generating further output. Learn more.

Note:
  • You can configure up to five stop sequences when generating the text. These stop sequences tell the model to halt further text generation if any of the specified sequences are encountered.
  • The returned text does not contain the stop sequence.

Default value: N/A

Example: pay, ["amazing"], ["September", "paycheck"]
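
As an illustration with hypothetical values, a stop sequence of ["paycheck"] halts generation as soon as the model is about to emit that word, and the sequence itself is not included in the returned text:
Stop sequences: ["paycheck"]
Output without the stop sequence: "Your paycheck is deposited on the last business day of the month."
Output with the stop sequence:    "Your "
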
Advanced prompt configuration Configure the prompt settings to guide the model responses and optimize output processing.
JSON mode Checkbox/Expression

Select this checkbox to enable the model to generate strings that can be parsed into valid JSON objects. The output includes the parsed JSON object in a field named json_output that contains the data.

Note:
  • models/gemini-1.5-flash is not supported for this field.
  • This field does not support input document values from upstream Snaps.
  • When the output from the model is invalid JSON, the Snap fails with an error indicating that it could not parse the JSON in the output. However, the Snap provides the full output from the LLM model, along with the error message, in the error view.
  • When the output from the model indicates that there were not enough tokens to complete the response, the Snap fails for the same reason (it could not parse the JSON in the output) and provides the full output from the LLM model in the error view.

Default status: Deselected
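
For example, if the model returns the string {"city": "Berlin", "population": 3700000} while JSON mode is selected, the Snap output includes the parsed object in the json_output field alongside the regular response fields (structure abbreviated, values illustrative):
{
  "candidates": [ ... ],
  "usageMetadata": { ... },
  "json_output": {
    "city": "Berlin",
    "population": 3700000
  }
}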

System Prompt String/Expression

Specify the persona for the model to adopt in its responses. This initial instruction guides the LLM's responses and actions. It sets up the conversation by defining the role, personality, tone, and other relevant details the model needs to understand and respond to the user's input.

Note:
  • The gemini-1.5-pro and gemini-1.0-pro-002 models also support this field.
  • If you leave this field blank or set it to null, the Snap processes the request without a system prompt.

Default value: N/A

Example:
  • "You are a helpful assistant."
  • "You are an experienced software developer."
  • "You are a customer service representative for a tech company."
Advanced response configurations Configure the response settings to customize the responses and optimize output processing.
Simplify response Checkbox/Expression Select this checkbox to receive a simplified response format that retains only the most commonly used fields and standardizes the output for compatibility with other models. This option supports only a single choice response. Here's an example of a simplified output format.
{
  "role": <string/null>,
  "content": <string/JSON (for JSON mode)>,
  "tool_calls": <array of tool call information objects>, // optional
  "finish_reason": <string>,
  "usage": {
    "prompt_tokens": <integer/null>,
    "output_tokens": <integer/null>,
    "total_tokens": <integer/null>
  },
  "_sl_responses": <object/array of the raw responses for debug mode>, // optional
  "original": {}
}
Important: This field does not support upstream values.

Default status: Deselected
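
A filled-in instance of the simplified format might look like the following (values are illustrative only):
{
  "role": "model",
  "content": "Today is Monday.",
  "finish_reason": "STOP",
  "usage": {
    "prompt_tokens": 12,
    "output_tokens": 5,
    "total_tokens": 17
  },
  "original": { ... }
}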

Continuation requests Checkbox/Expression
Select this checkbox to enable continuation requests. When selected, the Snap automatically requests additional responses if the finish reason is Maximum tokens.
Important: This Snap uses the same schema as the Google Gemini generate response. However, when multiple responses are merged through Continuation requests, certain fields may not merge correctly, such as safetyRatings. This is because of the structure of the responses, where specific fields are not designed to be combined across multiple entries.
The following example represents the format of the output when you select the Continuation requests checkbox:
{
  "candidates": [
    {
      "content": {
        "parts": [
          {
            "text": <response1> + <response2> + ... <response n>,
          }
        ],
        "role": "model"
      },
      "finishReason": "STOP",
      "index": 0,
      "safetyRatings": [...],  // include if has only one repsonse
      "citationMetadata": {
        "citations": [
          <citation response 1>,
          <citation response 2>,
          ...
          <citation response n>,
        ]
      },
      "avgLogprobs": <avgLogprobs>  // include if has only one repsonse
    }
  ],
  "usageMetadata": {
    "promptTokenCount": <sum of promptTokenCount>,
    "candidatesTokenCount": <sum of candidatesTokenCount>,
    "totalTokenCount": <sum of totalTokenCount>
  },
  
  // When JSON mode is enabled
  "json_output": <parse json from output>,
  // When debug mode is enabled
  "_sl_responses": [<raw response1>, <raw response2>, ... ,<raw response n>]
}
Important: This field does not support upstream values.

Default status: Deselected

Continuation requests limit Integer/Expression

Appears when you select the Continuation requests checkbox.

Required. Specify the maximum number of continuation requests to be made.

Important: This field does not support upstream values.

Minimum value: 1

Maximum value: 20

Default value: N/A

Example: 3
Debug mode Checkbox/Expression

Appears when you select the Simplify response or Continuation requests checkbox.

Select this checkbox to enable debug mode. This mode provides the raw response in the _sl_responses field and is recommended for debugging purposes only. If Continuation requests is enabled, the _sl_responses field contains an array of raw response objects from each individual request.

Important: This field does not support upstream values.

Default status: Deselected
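
For example, when both Continuation requests and Debug mode are selected, the merged output carries the raw response from each individual request in the _sl_responses array (structure abbreviated, values illustrative):
{
  "candidates": [ ... ],
  "usageMetadata": { ... },
  "_sl_responses": [
    { <raw response from request 1> },
    { <raw response from request 2> }
  ]
}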

Snap execution Dropdown list
Select one of the three modes in which the Snap executes. Available options are:
  • Validate & Execute: Performs limited execution of the Snap and generates a data preview during pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during pipeline runtime.
  • Execute only: Performs full execution of the Snap during pipeline execution without generating preview data.
  • Disabled: Disables the Snap and all Snaps that are downstream from it.

Default value: Validate & Execute

Example: Execute only

Additional information

The following table lists the Models and their corresponding minimum, maximum, and default values for the Temperature field:

Model name Default value Minimum value Maximum value
gemini-1.5-pro 1.0 0.0 2.0
gemini-1.0-pro-vision 0.4 0.0 1.0
gemini-1.0-pro-002 1.0 0.0 2.0
gemini-1.0-pro-001 0.9 0.0 1.0

Troubleshooting

Error Reason Resolution
Continuation requests limit error. The Continuation requests limit value is invalid. Provide a valid value for the Continuation requests limit field, between 1 and 20.

Examples