Google Gemini Generate
Overview
You can use this Snap to generate text responses using the specified model and model parameters.

Transform-type Snap
Works in Ultra Tasks
Prerequisites
- A valid Google GenAI Service Account, Google GenAI Access Token Account, or Google Gemini API Key Account configured with the following OAuth scope: https://www.googleapis.com/auth/generative-language.
Limitations
- gemini-1.0-pro and gemini-1.5-pro are not supported for the Top K field.
- models/gemini-1.5-flash is not supported for the JSON mode field.
Known issues
None.
Snap views
View | Description | Examples of upstream and downstream Snaps |
---|---|---|
Input | This Snap supports a maximum of one binary or document input view. When the input type is a document, you must provide a field that specifies the path to the input prompt. The Snap requires a prompt, which can be generated by the Google GenAI Prompt Generator Snap or supplied as any prompt intended for submission to the Gemini API. | |
Output | This Snap has at most one document output view. The Snap provides the result generated by the Gemini API. | Mapper |
Error |
Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter when running the pipeline by choosing one of the following options from the When errors occur list under the Views tab:
- Stop Pipeline Execution
- Discard Error Data and Continue
- Route Error Data to Error View
Learn more about Error handling in Pipelines. |
Snap settings
- Expression icon ( ): JavaScript syntax to access SnapLogic Expressions to set field values dynamically (if enabled). If disabled, you can provide a static value. Learn more.
- SnapGPT ( ): Generates SnapLogic Expressions based on natural language using SnapGPT. Learn more.
- Suggestion icon ( ): Populates a list of values dynamically based on your Account configuration.
- Upload ( ): Uploads files. Learn more.
Field / field set | Type | Description |
---|---|---|
Label | String |
Required. Specify a unique name for the Snap. Modify this to be more appropriate, especially if the pipeline contains more than one Snap of the same type. Default value: Google Gemini Generate Example: Create customer support chatbots |
Model name | String/Expression |
Required. Specify the model name to generate text responses. Learn more about the list of compatible models from Gemini API. Default value: N/A Example: models/gemini-1.5-pro |
Use content payload | Checkbox |
Select this checkbox to generate responses using the messages specified in the Content payload field.
Default status: Deselected |
Content payload | String/Expression |
Appears when you select the Use content payload checkbox. Required. Specify the prompt to send to the Gemini API as the user message. The expected data type for this field is a list of objects (a list of messages), which you can generate with the Google GenAI Prompt Generator Snap.
Default value: N/A Example: $messages |
Prompt | String/Expression |
Appears when you select Document as the Input type. Required. Specify the prompt to send to the Gemini API as the user message. Default value: N/A Example: $msg |
Model parameters | Configure the parameters to tune the model runtime. | |
Maximum tokens | Integer/Expression |
Specify the maximum number of tokens to generate in the chat completion. If left blank, the default value of the model is used.
Note: The response may be incomplete if the sum of the prompt tokens and Maximum tokens exceeds the allowed token limit for the model.
Minimum value: 1 Maximum value: 8,192 Default value: N/A Example: 50 |
Temperature | Decimal/Expression |
Specify the sampling temperature as a decimal value. If left blank, the default value of the model is used. Learn more about the minimum, maximum, and default values for each model. Minimum value: 0.0 Maximum value: 2.0 Default value: N/A Example: 0.2 |
Top P | Decimal/Expression |
Specify the nucleus sampling value as a decimal between 0 and 1. This value sets the cumulative probability threshold for selecting tokens, which influences the diversity of the generated content. Lower values produce more focused and deterministic responses, while higher values increase content variability. If left blank, the default value of the model is used.
Minimum value: 0.0 Maximum value: 1.0 Default value: N/A Example: 0.2 |
Top K | Integer/Expression |
Specify a value to limit the number of high-probability tokens considered at each generation step, which controls the randomness of the output. If left blank, the default value of the model is used.
Note: Models gemini-1.0-pro and gemini-1.5-pro are not supported.
Minimum value: 1 Maximum value: 40 Default value: N/A Example: 30 |
Stop sequences | String/Expression |
Specify a sequence of texts or tokens that stops the model from generating further output. Learn more.
Default value: N/A Example: pay, ["amazing"], ["September", "paycheck"] |
Advanced prompt configuration | Configure the prompt settings to guide the model responses and optimize output processing. | |
JSON mode | Checkbox/Expression |
Select this checkbox to enable the model to generate strings that can be parsed into valid JSON objects. The output includes the parsed JSON object in a field named json_output that contains the data.
Default status: Deselected |
System Prompt | String/Expression |
Specify the persona for the model to adopt in its responses. This initial instruction guides the LLM's responses and actions, and sets up the conversation by defining the role, personality, tone, and other relevant details the model needs to understand and respond to the user's input.
Default value: N/A Example: |
Advanced response configurations | Configure the response settings to customize the responses and optimize output processing. | |
Simplify response | Checkbox/Expression |
Select this checkbox to receive a simplified response format that retains only the most commonly used fields and standardizes the output for compatibility with other models. This option supports only a single-choice response.
Important: This field does not support upstream values.
Default status: Deselected |
Continuation requests | Checkbox/Expression |
Select this checkbox to enable continuation requests. When selected, the Snap automatically requests additional responses if the finish reason is Maximum tokens.
Important: This Snap uses the same schema as the Google Gemini generate response. However, when multiple responses are merged through Continuation requests, certain fields, such as safetyRatings, may not merge correctly. This is because of the structure of the responses, where specific fields are not designed to be combined across multiple entries.
Important: This field does not support upstream values.
Default status: Deselected |
Continuation requests limit | Integer/Expression |
Appears when you select the Continuation requests checkbox. Required. Specify the maximum number of continuation requests to be made.
Important: This field does not support upstream values.
Minimum value: 1 Maximum value: 20 Default value: N/A Example: 3 |
Debug mode | Checkbox/Expression |
Appears when you select the Simplify response or Continuation requests checkbox. Select this checkbox to enable debug mode. This mode provides the raw response in the _sl_response field and is recommended for debugging purposes only. If Continuation requests is enabled, the _sl_responses field contains an array of raw response objects, one from each individual request.
Important: This field does not support upstream values.
Default status: Deselected |
Snap execution | Dropdown list |
Select one of the three modes in which the Snap executes. Available options are:
- Validate & Execute
- Execute only
- Disabled
Default value: Validate & Execute Example: Execute only |
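Outside the Snap, the model parameters above map onto the request body of the Gemini API's generateContent method. The following sketch shows how such a request might be assembled; the field names (generationConfig, maxOutputTokens, topP, topK, stopSequences, responseMimeType, systemInstruction) follow the public Gemini REST API, while the specific values and the helper function are illustrative only, not Snap internals.

```python
# Sketch: building a Gemini generateContent request body that mirrors
# the Snap settings above. Values are illustrative, not defaults.

def build_generate_request(prompt, system_prompt=None, json_mode=False,
                           max_tokens=None, temperature=None, top_p=None,
                           top_k=None, stop_sequences=None):
    """Return a request body for POST /v1beta/{model}:generateContent."""
    generation_config = {}
    if max_tokens is not None:
        generation_config["maxOutputTokens"] = max_tokens    # Maximum tokens
    if temperature is not None:
        generation_config["temperature"] = temperature       # Temperature
    if top_p is not None:
        generation_config["topP"] = top_p                    # Top P
    if top_k is not None:
        generation_config["topK"] = top_k                    # Top K
    if stop_sequences:
        generation_config["stopSequences"] = stop_sequences  # Stop sequences
    if json_mode:
        # JSON mode: request output that can be parsed as JSON.
        generation_config["responseMimeType"] = "application/json"

    body = {"contents": [{"role": "user", "parts": [{"text": prompt}]}]}
    if system_prompt:
        # System Prompt: persona/instruction for the model.
        body["systemInstruction"] = {"parts": [{"text": system_prompt}]}
    if generation_config:
        body["generationConfig"] = generation_config
    return body

request = build_generate_request(
    "Summarize our refund policy in two sentences.",
    system_prompt="You are a concise customer-support assistant.",
    max_tokens=50, temperature=0.2, top_p=0.2, top_k=30,
    stop_sequences=["paycheck"], json_mode=True)
```

When the Use content payload checkbox is selected, the list of messages supplied in Content payload takes the place of the single user entry in `contents`.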
Additional information
The following table lists the Models and their corresponding minimum, maximum, and default values for the Temperature field:
Model name | Default value | Minimum value | Maximum value |
---|---|---|---|
gemini-1.5-pro | 1.0 | 0.0 | 2.0 |
gemini-1.0-pro-vision | 0.4 | 0.0 | 1.0 |
gemini-1.0-pro-002 | 1.0 | 0.0 | 2.0 |
gemini-1.0-pro-001 | 0.9 | 0.0 | 1.0 |
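As a quick sanity check, the ranges in the table can be encoded so that a configured Temperature value is validated before a request is sent. The model names and ranges below are copied from the table; the helper itself is an illustrative sketch, not part of the Snap.

```python
# Temperature ranges per model, taken from the table above:
# model -> (default, minimum, maximum)
TEMPERATURE_RANGES = {
    "gemini-1.5-pro":        (1.0, 0.0, 2.0),
    "gemini-1.0-pro-vision": (0.4, 0.0, 1.0),
    "gemini-1.0-pro-002":    (1.0, 0.0, 2.0),
    "gemini-1.0-pro-001":    (0.9, 0.0, 1.0),
}

def resolve_temperature(model, value=None):
    """Return the model default when the field is blank, else validate the range."""
    default, lo, hi = TEMPERATURE_RANGES[model]
    if value is None:  # blank field: fall back to the model default
        return default
    if not lo <= value <= hi:
        raise ValueError(f"temperature {value} outside [{lo}, {hi}] for {model}")
    return value
```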
Troubleshooting
Error: Continuation requests limit error.
Reason: The Continuation requests limit value is invalid.
Resolution: Provide a valid value for Continuation requests limit that is between 1 and 20.
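The behavior described under Continuation requests can be sketched as a simple loop: keep re-requesting while the finish reason indicates the token limit was hit, stopping once the configured limit is reached. The `generate` callable and the `MAX_TOKENS` finish reason below are stand-ins for the real Gemini client, assumed here for illustration.

```python
def generate_with_continuation(generate, prompt, limit):
    """Call `generate` repeatedly while output is cut off by the token limit.

    `generate(prompt, history)` stands in for a Gemini call returning
    {"text": ..., "finishReason": ...}. `limit` mirrors the
    Continuation requests limit field (1-20).
    """
    if not 1 <= limit <= 20:
        raise ValueError("Continuation requests limit must be between 1 and 20")
    parts, history = [], []
    response = generate(prompt, history)
    parts.append(response["text"])
    continuations = 0
    while response["finishReason"] == "MAX_TOKENS" and continuations < limit:
        history.append(response["text"])          # carry prior output forward
        response = generate(prompt, history)      # ask the model to continue
        parts.append(response["text"])
        continuations += 1
    return "".join(parts)
```

A fake client is enough to observe the merging behavior; note that only the generated text is concatenated here, which matches the caveat above that per-response fields such as safetyRatings are not merged across entries.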