MySQL Bulk Load
Overview
This Snap performs a MySQL bulk load. It uses the LOAD DATA INFILE statement internally to perform the bulk load action. The SnapLogic Platform does not support the installation of utilities or processes on Cloudplexes. Learn more.
The input file is first copied to the JCC node, then to the MySQL server's tmp directory, and finally loaded into the target MySQL table.
Ensure sufficient free space in both the JCC and MySQL tmp locations.
The Snap will not automatically fix some errors encountered during table creation since they may require user intervention to resolve correctly. For example, if the source table contains a column with a type that does not have a direct mapping in the target database, the Snap will fail to execute. You will then need to add a Mapper (Data) Snap to change the metadata document to explicitly set the values needed to produce a valid CREATE TABLE statement.
The BLOB type is not supported by this Snap.
- This is a Write-type Snap.
- Does not support Ultra Tasks.
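As a rough illustration of what the Snap does internally, a LOAD DATA INFILE statement issued against the staged file has the general shape sketched below. This is a hypothetical, minimal sketch, not the Snap's actual implementation; the file path, table name, and delimiter options are invented for the example.

```python
def build_load_statement(file_path, table):
    """Compose a basic LOAD DATA INFILE statement for a CSV-style staged file.

    Illustrative only: the Snap composes the real statement internally,
    including options driven by its settings (partitions, columns, etc.).
    """
    return (
        f"LOAD DATA INFILE '{file_path}' "
        f"INTO TABLE {table} "
        "FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'"
    )

# Hypothetical staged file and target table:
stmt = build_load_statement("/tmp/people.csv", "people")
```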
Prerequisites
[None]
Limitations
Does not work in Ultra Tasks.
When MySQL execute Snaps (MySQL Execute and MySQL Multi Execute) are followed by MySQL non-execute Snaps, such as MySQL Insert and MySQL Merge, the following error is displayed on execution:
Table definition has changed, please retry transaction.
This happens due to a known issue in the MySQL Connector. For more information about this issue, see MySQL Bug #65378.
Account
This Snap uses account references created on the Accounts page of SnapLogic Manager to handle access to this endpoint. See MySQL Database Account for information on setting up this type of account.
MySQL account settings are shared across the different MySQL Snaps, and the batch size setting affects the performance of some of them. We recommend setting the batch size (within the account details) to 100k or 200k for the MySQL Bulk Load Snap only. The ideal batch size for bulk load may vary with your environment settings, but this range is a good starting point.
If you use other MySQL Snaps along with the MySQL Bulk Load Snap in the same pipeline, use a different account for each of these Snaps and increase the batch size setting (within the account details) only for the MySQL Bulk Load Snap, as described above.
Snap views
| Type | Description | Examples of upstream and downstream Snaps |
|---|---|---|
| Input | This Snap has one document input view by default. A second input view can be added to receive the table metadata as a document, so that a missing target table can be created in the MySQL database with a schema similar to the source table. This schema usually comes from the second output of a database Select Snap. If the schema comes from a different database, the data types might not be handled properly. | |
| Output | This Snap has at most one document output view. | |
| Error | This Snap has at most one document error view and produces zero or more documents in the view. | |
Snap settings
- Expression icon: Allows using pipeline parameters to set field values dynamically (if enabled). SnapLogic Expressions are not supported. If disabled, you can provide a static value.
- SnapGPT: Generates SnapLogic Expressions based on natural language using SnapGPT. Learn more.
- Suggestion icon: Populates a list of values dynamically based on your Snap configuration. You can select only one attribute at a time using the icon. Type into the field if it supports a comma-separated list of values.
- Upload: Uploads files. Learn more.
| Field/Field set | Description |
|---|---|
| Label (String) | Required. The name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your pipeline. |
| Schema name | The database schema name. If it is not defined, the suggestion for the Table name field retrieves the table names of all schemas. The field is suggestible and retrieves the available database schemas during suggest. Values can be passed using pipeline parameters but not upstream parameters. Example: SYS. Default value: [None] |
| Table name | Required. The table on which to execute the bulk load operation. Values can be passed using pipeline parameters but not upstream parameters. Example: people. Default value: [None] |
| Create table if not present | Whether the table should be automatically created if it is not already present. Learn more about Table creation. Default value: Deselected |
| Partitions | Specifies a list of one or more partitions and/or subpartitions. When used, any input document that cannot be inserted into one of the named partitions or subpartitions is ignored. Default value: Deselected |
| Columns | When no column list is provided, input documents are expected to contain a field for each table column. When a column list is provided, the Snap loads only the specified columns. Default value: Not selected |
| Set Clause | Assigns values to columns. For example, you can use "COLUMN1 = 1" to insert the integer 1 into column COLUMN1 for each input document. See this link for more information. Default value: [None] |
| On duplicates | Choose the action to take when duplicate records are found. A duplicate is a row that shares the same value for a primary key or unique index as an existing row. The available options are REPLACE and IGNORE. If you choose REPLACE, a data type mismatch or constraint violation displays an error and the transaction is rolled back based on the Chunk size. With IGNORE, no such error is displayed for a data type mismatch or constraint violation; column values are assigned according to their default values. Default value: IGNORE |
| Concurrency Option | Specifies how to handle the load process when other clients are reading from the table (the LOAD DATA concurrency modifiers). Default value: None |
| Character Set | The MySQL server uses the character set indicated by the character_set_database system variable to interpret the information in the input. If the contents of the input use a character set that differs from the default, specify the input's character set with this field. A character set of binary specifies "no conversion". It is not possible to load data that uses the ucs2, utf16, utf16le, or utf32 character sets. Default value: [None] |
| Chunk size | Specifies the number of records to be loaded at a time. Note: This field overrides the account's Batch size setting. Default value: 100000 |
| Snap execution (Dropdown list) | Choose one of the three modes in which the Snap executes: Validate & Execute, Execute only, or Disabled. |
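To see how Chunk size shapes the load, here is a minimal, hypothetical sketch of splitting input records into chunks that would each be loaded with one operation. The Snap's real chunking logic is internal and may differ; the numbers below are only an example.

```python
def chunks(records, chunk_size):
    """Yield successive slices of at most chunk_size records."""
    for start in range(0, len(records), chunk_size):
        yield records[start:start + chunk_size]

# With the default Chunk size of 100000, 250000 input documents
# would be loaded in three chunks of 100000, 100000, and 50000.
sizes = [len(c) for c in chunks(list(range(250000)), 100000)]
```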
Table Creation
If the table does not exist when the Snap tries to do the load, and the Create table if not present property is selected, the table is created with the columns and data types required to hold the values in the first input document. If you want the table to be created with the same schema as a source table, you can connect the second output view of a Select Snap to the second input view of this Snap. The extra views in the Select and Bulk Load Snaps are used to pass metadata about the table, effectively allowing you to replicate a table from one database to another.
The table metadata document that is read in by the second input view contains a dump of the JDBC DatabaseMetaData class. The document can be manipulated to affect the CREATE TABLE statement that is generated by this Snap. For example, to rename the name column to full_name, you can use a Mapper (Data) Snap that sets the path $.columns.name.COLUMN_NAME to full_name. The document contains the following fields:
- columns: Contains the result of the `getColumns()` method, with each column as a separate field in the object. Changing the `COLUMN_NAME` value changes the name of the column in the created table. Note that if you change a column name, you do not need to change the name of the field in the row input documents; the Snap automatically translates from the original name to the new one. For example, when renaming name to full_name, the name field in the input document is put into the full_name column. You can also drop a column by setting its `COLUMN_NAME` value to null or the empty string. The other fields of interest in the column definition are:
  - TYPE_NAME: The type to use for the column. If this type is not known to the database, the `DATA_TYPE` field is used as a fallback. If you want to explicitly set a type for a column, set the `DATA_TYPE` field.
  - _SL_PRECISION: Contains the result of the `getPrecision()` method. This field is used along with the `_SL_SCALE` field to set the precision and scale of a DECIMAL or NUMERIC field.
  - _SL_SCALE: Contains the result of the `getScale()` method. This field is used along with the `_SL_PRECISION` field to set the precision and scale of a DECIMAL or NUMERIC field.
- primaryKeyColumns: Contains the result of the `getPrimaryKeys()` method, with each column as a separate field in the object.
- declaration: Contains the result of the `getTables()` method for this table. The values in this object are informational only at the moment; the target table name is taken from the Snap property.
- importedKeys: Contains the foreign key information from the `getImportedKeys()` method. The generated CREATE TABLE statement includes FOREIGN KEY constraints based on the contents of this object. Note that you need to change the `PKTABLE_NAME` value if you changed the name of the referenced table when replicating it.
- indexInfo: Contains the result of the `getIndexInfo()` method for this table, with each index as a separate field in the object. Any UNIQUE indexes here are included in the CREATE TABLE statement generated by this Snap.
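The Mapper-based rename described above amounts to editing the metadata document before it reaches the Snap's second input view. The following is a minimal sketch of the equivalent transformation on a hypothetical, trimmed-down metadata document (the real document produced by a Select Snap contains many more fields):

```python
# Hypothetical table metadata document, shaped like the dump of the
# JDBC DatabaseMetaData results described above (heavily trimmed).
metadata = {
    "columns": {
        "name": {"COLUMN_NAME": "name", "TYPE_NAME": "VARCHAR"},
        "age": {"COLUMN_NAME": "age", "TYPE_NAME": "INT"},
    },
    "primaryKeyColumns": {},
}

# Equivalent of a Mapper setting $.columns.name.COLUMN_NAME to "full_name":
metadata["columns"]["name"]["COLUMN_NAME"] = "full_name"

# Equivalent of dropping a column by blanking its COLUMN_NAME:
metadata["columns"]["age"]["COLUMN_NAME"] = ""
```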
When invalid data is passed to the Snap, the Snap execution fails. The database administrator can set a global server variable (MySQL's sql_mode) so that an invalid value is either replaced with a default (for example, a string supplied for an integer column becomes 0) or reported as an error. See Load Data Syntax for more information.
If Auto commit on the account is set to true, and a downstream Snap depends on the data processed by an upstream Database Bulk Load Snap, use a Script Snap to add a delay so that the data becomes available.
For example, when performing create, insert, and delete operations sequentially in a pipeline, a Script Snap between the insert and the delete introduces a delay; otherwise, the delete may be triggered before the records are inserted into the table.
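A minimal sketch of the kind of delay such a Script Snap can introduce is shown below. The function name and the delay value are invented for illustration; a real Script Snap wraps logic like this in its own execute hook, and the appropriate delay depends on your environment.

```python
import time

def wait_for_commit(delay_seconds):
    """Pause this pipeline branch so bulk-loaded data has time to become
    visible before a dependent operation (e.g., a delete) runs."""
    start = time.monotonic()
    time.sleep(delay_seconds)
    return time.monotonic() - start

# A short delay for illustration; tune the value for your environment.
elapsed = wait_for_commit(0.1)
```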