Amazon Redshift as a source

You can create a data pipeline that loads data from Amazon Redshift to a destination. To provide the information SnapLogic AutoSync needs to connect to Redshift, supply new credentials in the wizard or select saved credentials. The Connection configuration section below lists the information required to create credentials in the AutoSync wizard.

AutoSync requires an S3 staging location for Amazon Redshift.
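
If you want to confirm the staging location before you configure credentials, you can check that the bucket and folder are reachable with the S3 keys you plan to supply. The following is a minimal sketch using boto3, run outside of AutoSync; the bucket name, folder, and credential values are placeholders:

    import boto3

    # Placeholder values; use the same bucket, folder, and keys you will enter in AutoSync.
    s3 = boto3.client(
        "s3",
        aws_access_key_id="EXAMPLE_ACCESS_KEY_ID",
        aws_secret_access_key="EXAMPLE_SECRET_KEY",
    )

    bucket = "sl-bucket-ca"
    key = "san-francisco/autosync-staging-check.txt"

    s3.head_bucket(Bucket=bucket)                      # raises if the bucket is not reachable
    s3.put_object(Bucket=bucket, Key=key, Body=b"ok")  # confirms write access to the staging folder
    s3.delete_object(Bucket=bucket, Key=key)           # clean up the test object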

Supported account types

Designer and Classic Manager provide multiple account types for most endpoints, but not all types are compatible with AutoSync. When you create or edit a data pipeline, the existing credentials list includes only compatible Accounts.

  • Redshift Account

Redshift as both source and target

  • When loading data from a Redshift source to a Redshift destination, the source schema must be different from the destination schema (see the sketch after this list).
  • When using Redshift as both a source and a target, some columns in the loaded data might have a different data type than in the source. This is because AutoSync maps data through a common data model.
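
Because the source and destination schemas must differ for a Redshift-to-Redshift pipeline, you may need to create a separate destination schema before configuring the pipeline. The following is a minimal sketch using psycopg2; the schema name autosync_target and the connection values are placeholders, not AutoSync requirements:

    import psycopg2  # common PostgreSQL driver that also works with Redshift

    # Placeholder connection values; substitute your own cluster details.
    conn = psycopg2.connect(
        host="cluster.abc123xyz789.us-west-2.redshift.amazonaws.com",
        port=5439,
        dbname="dev",
        user="autosync_user",
        password="example-password",
    )
    with conn, conn.cursor() as cur:
        # Create a destination schema that is distinct from the source schema.
        cur.execute("CREATE SCHEMA IF NOT EXISTS autosync_target")

The user you supply needs sufficient privileges on the database for the CREATE SCHEMA statement to succeed.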

Connection configuration

Redshift properties include the following (a sketch after the list shows one way to verify these values before you enter them in the wizard):

  • Account Properties
    • Credential label: A unique, meaningful name such as Redshift-Sales. If a configuration with the same name exists, AutoSync displays an Asset conflict error message.
    • Endpoint: The endpoint portion of the JDBC URL to access Redshift. For example, cluster.abc123xyz789.us-west-2.redshift.amazonaws.com.
    • Port Number: The port number to connect to the database.
    • Database name: The name of the database to connect to.
    • Username: A username for an account with the correct permissions for AutoSync to load and synchronize data.
    • Password: The password for the account. Multiple retries with an invalid password can cause your account to be locked.
    • S3 Bucket: The name of an S3 bucket in your AWS account to use for staging data for Redshift. For example, sl-bucket-ca.
    • S3 Folder: The relative path to a folder in the S3 bucket, used as the root folder for staging data for Redshift. For example, san-francisco to use s3://sl-bucket-ca/san-francisco. To create files at the root level, append a forward slash ( / ) to the file path.
    • S3 Access-key ID: The S3 Access key ID part of the AWS authentication. For example, NAVRGGRV7EDCFVLKJH.
    • S3 Secret Key: The S3 Secret key part of the AWS authentication. For example, 2RGiLmL/6bCujkKLaRuUJHY9uSDEjNYr+ozHRtg.
    • Share: (Optional) Select a user group to share this configuration with. Org admins create user groups to share credentials. If you are a member of a user group, you can select it from the dropdown. You can also select the global shared folder to share the credentials with everyone in your Org.
  • Validate and Save: After saving and validating, AutoSync adds the configuration to the list of saved credentials.
  • Select schema: AutoSync populates this list from the account. Choose the schema that contains the data to load.
  • Select a table to synchronize: AutoSync populates the list from your Redshift account. Choose the table to load data from.
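
Before entering these values in the wizard, you can confirm that the endpoint, port, database name, username, and password work, and preview the schemas and tables the account can see. This is a minimal sketch using psycopg2 with placeholder values; it is not part of AutoSync:

    import psycopg2

    # Placeholder connection values corresponding to the properties above.
    conn = psycopg2.connect(
        host="cluster.abc123xyz789.us-west-2.redshift.amazonaws.com",
        port=5439,
        dbname="dev",
        user="autosync_user",
        password="example-password",
    )
    with conn.cursor() as cur:
        # List the schemas and tables this user can see, which correspond to
        # the choices offered in the Select schema and table steps.
        cur.execute("""
            SELECT table_schema, table_name
            FROM information_schema.tables
            WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
            ORDER BY table_schema, table_name
        """)
        for schema, table in cur.fetchall():
            print(f"{schema}.{table}")
    conn.close()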