Amazon Redshift as a destination

You can create a data pipeline that loads data from a source endpoint to Redshift. To provide the information SnapLogic® AutoSync needs to connect to Redshift, supply new credentials in the wizard or select saved credentials. The create credentials page lists the information necessary to create credentials in the AutoSync wizard.

AutoSync requires an S3 staging location for Amazon Redshift.

Supported account types

The IIP provides multiple Account types for most endpoints, but not all of them are compatible with AutoSync. When you create or edit a data pipeline, the existing credentials list includes only compatible Accounts. AutoSync supports the following IIP Account types:

  • Redshift Account

Known limitations

Amazon Redshift limits the number of columns in a table to 1,600.

Redshift as both source and target

  • When loading data from a Redshift source to a Redshift destination, the source schema must be different from the destination schema.
  • When using Redshift as both a source and a target, some columns in the loaded data might have a different data type than they had in the source. This is because AutoSync maps data through a common data model.

Connection configuration

Redshift properties include the following:

  • Account Properties
    • Credential label: A unique, meaningful name such as Redshift-Sales. If a configuration with the same name exists, AutoSync displays an Asset conflict error message.
    • Endpoint: The endpoint portion of the JDBC URL to access Redshift. For example, examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com.
    • Port Number: The port number to connect to the database.
    • Database name: The name of the destination database.
    • Username: A username for an account with the correct permissions for AutoSync to load and synchronize data.
    • Password: The password for the account. Multiple retries with an invalid password can cause your account to be locked.
    • S3 Bucket: The name of an S3 bucket in your AWS account to use for staging data for Redshift. For example, sl-bucket-ca.
    • S3 Folder: The relative path to a folder in the S3 bucket, used as the root folder for staging data. For example, san-francisco to use s3://sl-bucket-ca/san-francisco. To create files at the root level, append a forward slash ( / ) to the file path.
    • S3 Access-key ID: The S3 Access key ID part of the AWS authentication. For example, NAVRGGRV7EDCFVLKJH.
    • S3 Secret Key: The S3 Secret key part of the AWS authentication. For example, 2RGiLmL/6bCujkKLaRuUJHY9uSDEjNYr+ozHRtg.
    • Share: (Optional) Select a user group to share this configuration with. Org admins create user groups to share credentials. If you are a member of a user group, you can select it from the dropdown. You can also select the global shared folder, which shares the credentials with everyone in your Org.
  • Validate and Save: After saving and validating, AutoSync adds the configuration to the list of saved credentials.
  • Select schema: AutoSync populates this list from the account. Choose the schema to use as the destination.
  • S3 staging location region: Select the region of your S3 staging location. This field is optional if your S3 staging location is in the same region as your Redshift location.
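The endpoint, port, and database name combine into the JDBC URL for the cluster, and the bucket and folder combine into the s3:// staging location. A minimal sketch of how these properties fit together (the cluster hostname, bucket, and folder values are hypothetical placeholders, not real credentials):

```python
# Sketch: how the connection properties combine into a JDBC URL and an
# S3 staging path. All values below are hypothetical placeholders.

def jdbc_url(endpoint: str, port: int, database: str) -> str:
    """Build the Redshift JDBC URL from its parts."""
    return f"jdbc:redshift://{endpoint}:{port}/{database}"

def staging_path(bucket: str, folder: str) -> str:
    """Build the s3:// staging location from the bucket and folder."""
    return f"s3://{bucket}/{folder}".rstrip("/") + "/"

url = jdbc_url("examplecluster.abc123.us-west-2.redshift.amazonaws.com", 5439, "dev")
stage = staging_path("sl-bucket-ca", "san-francisco")
print(url)    # jdbc:redshift://examplecluster.abc123.us-west-2.redshift.amazonaws.com:5439/dev
print(stage)  # s3://sl-bucket-ca/san-francisco/
```

The trailing slash on the staging path matches the note above about appending a forward slash to create files at the root level.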

Load types

AutoSync supports the following load types for Redshift:
  • Full load
  • Incremental (insert new, update changed)
  • SCD2 (preserve change history) from Salesforce or ServiceNow sources
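The difference between a full load and an incremental load can be illustrated with a small sketch. Here a table is modeled as a dict keyed by primary key; the row data is hypothetical and this is only an illustration of the load semantics, not AutoSync's internal logic:

```python
# Illustration of load-type semantics, not AutoSync internals.
# A "table" is modeled as {primary_key: row}.

def full_load(target: dict, source: dict) -> dict:
    """Full load: replace the destination contents entirely."""
    return dict(source)

def incremental_load(target: dict, source: dict) -> dict:
    """Incremental: insert new rows and update changed ones.
    Rows absent from the source are left untouched in the target."""
    merged = dict(target)
    merged.update(source)
    return merged

target = {1: {"name": "Ada"}, 2: {"name": "Bob"}}
source = {2: {"name": "Bobby"}, 3: {"name": "Cal"}}

print(incremental_load(target, source))
# {1: {'name': 'Ada'}, 2: {'name': 'Bobby'}, 3: {'name': 'Cal'}}
```

An SCD2 load would instead keep both the old and new versions of row 2, marking the old one as expired so that change history is preserved.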