Snowflake Snap Pack examples
| Example | Snaps used |
| --- | --- |
| The following example pipeline demonstrates how you can convert the staged data into binary data using the binary file format before loading it into the Snowflake database. | |
| Transform data using a select query: The following example pipeline demonstrates how you can reorder the columns using the SELECT statement transform option before loading data into the Snowflake database. We use the Snowflake - Bulk Load Snap to accomplish this task (a rough SQL sketch follows the table). | Snowflake - Bulk Load |
| In the following example, we update a record using the Snowflake - Bulk Upsert Snap. Invalid records that cannot be inserted are routed to the error view (a rough SQL equivalent follows the table). | Snowflake - Bulk Upsert |
| In the following example, the Snowflake - Delete Snap deletes a record from the Snowflake database table ADOBEDATA123 under the schema PRASANNA. | Snowflake - Delete |
| The following example demonstrates the execution of a Snowflake SQL query using the Snowflake - Execute Snap. | Snowflake - Execute |
| This example pipeline demonstrates how to insert data into a table using the Snowflake - Execute Snap. | Snowflake - Execute |
| User-defined functions (UDFs) created in the Snowflake console can be executed using the Snowflake - Execute Snap. In the following example, the SQL statement is defined and the Snap is executed with those conditions (a rough sketch follows the table). | Snowflake - Execute |
| This pipeline reads data from a table in Oracle and inserts it into a Snowflake table using the Snowflake - Insert Snap. | Snowflake - Insert |
| The following example shows how a Snowflake record can be looked up using the Snowflake - Lookup Snap and how the data can be recorded using the Snowflake - Execute Snap. | Snowflake - Lookup, Snowflake - Execute |
| The following example pipeline demonstrates how to process multiple Snowflake queries in a single transaction (a rough SQL sketch follows the table). | |
| This example pipeline demonstrates how to auto-historize incoming records using the Snowflake - SCD2 Snap. The pipeline reads, parses, maps, historizes, and then upserts data into the Snowflake target table (see the sketch after the table). | Snowflake - SCD2 |
| This example demonstrates how to use the Snowflake - Select Snap to read data from a Snowflake table. | Snowflake - Select |
| In the following example, the Snowflake - Table List Snap retrieves all the tables under the schema PUBLIC and writes the list to the output view. | Snowflake - Table List |
| This example pipeline demonstrates how you can unload binary data from a Snowflake table and write it to an Amazon S3 bucket using the Snowflake - Unload Snap along with a JSON Formatter Snap and a File Writer Snap (a plain-SQL equivalent is sketched below the table). | Snowflake - Unload, JSON Formatter, File Writer |
| This example pipeline demonstrates how to encode binary data (such as employee biodata) and update the corresponding employee records in the Snowflake database using the Snowflake - Update Snap (a rough SQL equivalent is sketched below the table). | Snowflake - Update |
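The sketches below expand on some of the examples above. They are illustrative only: all table, stage, and column names are assumptions, and the Snaps may generate different SQL internally.

Transform data using a select query: reordering columns with the Bulk Load Snap's select-query option corresponds, at the SQL level, to a Snowflake COPY with a transformation, roughly as follows.

```sql
-- Minimal sketch of a COPY with a transformation that reorders staged columns.
-- Table name, stage, and column positions are hypothetical.
COPY INTO employee (id, first_name, last_name)
FROM (
    SELECT t.$2, t.$1, t.$3      -- staged file column order: first_name, id, last_name
    FROM @%employee t            -- table stage holding the staged CSV files
)
FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);
```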
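Upsert records: upserting a single record keyed on an ID column is roughly equivalent to a Snowflake MERGE; the table and key used here are assumptions, not taken from the example pipeline.

```sql
-- Rough SQL equivalent of upserting one record keyed on ID (all names hypothetical).
MERGE INTO employee t
USING (SELECT 101 AS id, 'Jane' AS first_name, 'Doe' AS last_name) s
    ON t.id = s.id
WHEN MATCHED THEN
    UPDATE SET t.first_name = s.first_name, t.last_name = s.last_name
WHEN NOT MATCHED THEN
    INSERT (id, first_name, last_name) VALUES (s.id, s.first_name, s.last_name);
```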
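Execute a UDF: a UDF defined once in the Snowflake console can be invoked from the Snowflake - Execute Snap like any other SQL statement. The function below is a made-up example, not the one from the pipeline.

```sql
-- Hypothetical SQL UDF created in the Snowflake console (or via an Execute Snap).
CREATE OR REPLACE FUNCTION area_of_circle(radius FLOAT)
RETURNS FLOAT
AS
$$
    pi() * radius * radius
$$;

-- Statement the Snowflake - Execute Snap could then run against the same account.
SELECT area_of_circle(2.0) AS area;
```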
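Multiple queries in a single transaction: the pipeline-level behaviour corresponds to wrapping the statements in an explicit Snowflake transaction; the statements below are placeholders.

```sql
BEGIN;                                                     -- start an explicit transaction
UPDATE account SET balance = balance - 100 WHERE id = 1;   -- placeholder statements
UPDATE account SET balance = balance + 100 WHERE id = 2;
COMMIT;                                                     -- make both updates permanent as one unit
```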
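Auto-historize with SCD2: at the SQL level, Type 2 historization amounts to closing out the current row for a changed key and inserting a new current row. The dimension table, tracked column, and flag/date columns below are all assumed for illustration; the Snap manages these details through its own settings.

```sql
-- 1. Close out the current version of customers whose tracked attribute changed.
UPDATE customer_dim
SET current_row = FALSE,
    end_date    = CURRENT_TIMESTAMP()
WHERE current_row = TRUE
  AND EXISTS (
      SELECT 1 FROM customer_stage s
      WHERE s.customer_id = customer_dim.customer_id
        AND s.address <> customer_dim.address
  );

-- 2. Insert new and changed customers as the new current version.
INSERT INTO customer_dim (customer_id, address, current_row, start_date, end_date)
SELECT s.customer_id, s.address, TRUE, CURRENT_TIMESTAMP(), NULL
FROM customer_stage s
LEFT JOIN customer_dim d
  ON d.customer_id = s.customer_id AND d.current_row = TRUE
WHERE d.customer_id IS NULL OR s.address <> d.address;
```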
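Unload binary data: in the example, the Snowflake - Unload Snap streams rows to the JSON Formatter and File Writer Snaps, which write the file to S3. Unloading the same data directly from Snowflake to an external S3 stage as JSON would look roughly like this; the stage, table, and column names are assumptions.

```sql
-- Unloading to JSON needs a single VARIANT-style column, hence OBJECT_CONSTRUCT;
-- the binary column is rendered as Base64 text. All names are hypothetical.
COPY INTO @my_s3_stage/employee_images/
FROM (
    SELECT OBJECT_CONSTRUCT(
               'id',    id,
               'photo', TO_VARCHAR(photo, 'BASE64')
           )
    FROM employee
)
FILE_FORMAT = (TYPE = JSON);
```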
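Update records with binary data: updating a BINARY column from a Base64-encoded string boils down to an UPDATE with TO_BINARY; the table, columns, and key are illustrative only.

```sql
-- Hypothetical table with a BINARY column holding employee biodata.
UPDATE employee
SET biodata = TO_BINARY('aGVsbG8gd29ybGQ=', 'BASE64')   -- Base64-encoded bytes
WHERE employee_id = 101;
```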