Amazon Redshift as a target

AutoSync requires an S3 staging location to load data to Amazon Redshift.
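
This reflects the common stage-then-COPY load pattern for Redshift: extracted data is first written to S3 and then loaded into the destination table with a COPY command. The sketch below illustrates that general pattern only and is not AutoSync's implementation; the file, table, database, and user names are placeholders, and the endpoint, bucket, and folder come from the credential examples later in this topic.

  import boto3
  import psycopg2

  S3_BUCKET = "sl-bucket-ca"            # staging bucket (see S3 Bucket below)
  S3_KEY = "san-francisco/orders.csv"   # staging object under the S3 Folder

  # 1. Stage the extracted data in S3.
  boto3.client("s3").upload_file("orders.csv", S3_BUCKET, S3_KEY)

  # 2. Load the staged file into a placeholder destination table with COPY.
  conn = psycopg2.connect(
      host="cluster.abc123xyz789.us-west-2.redshift.amazonaws.com",
      port=5439, dbname="dev", user="autosync_user", password="********",
  )
  with conn, conn.cursor() as cur:
      cur.execute(f"""
          COPY sales.orders
          FROM 's3://{S3_BUCKET}/{S3_KEY}'
          ACCESS_KEY_ID '<access-key-id>'
          SECRET_ACCESS_KEY '<secret-key>'
          FORMAT AS CSV
          IGNOREHEADER 1;
      """)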

Supported load types

AutoSync supports the following load types for Amazon Redshift:
  • Full load
  • Incremental (insert new, update changed); see the sketch after this list
  • SCD2 (preserve change history) from Salesforce or ServiceNow sources
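
A common way to realize the incremental load type against Redshift is to stage the changed rows in a staging table and then upsert them into the destination. The sketch below shows that generic delete-then-insert pattern with placeholder schema, table, and key names (sales.orders, sales.orders_staging, order_id); it illustrates the load type, not AutoSync's actual implementation.

  import psycopg2

  conn = psycopg2.connect(
      host="cluster.abc123xyz789.us-west-2.redshift.amazonaws.com",
      port=5439, dbname="dev", user="autosync_user", password="********",
  )
  with conn, conn.cursor() as cur:
      # Remove destination rows that have a newer staged version ...
      cur.execute("""
          DELETE FROM sales.orders
          USING sales.orders_staging s
          WHERE sales.orders.order_id = s.order_id;
      """)
      # ... then insert every staged row (new and changed) in one pass.
      cur.execute("INSERT INTO sales.orders SELECT * FROM sales.orders_staging;")

Both statements run in a single transaction because the connection context manager commits on exit.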

Supported account types

Designer and Classic Manager provide multiple account types for most endpoints, but not all types are compatible with AutoSync. When you create or edit a data pipeline, the existing credentials list includes only compatible accounts.

For Redshift, AutoSync supports:

  • Redshift Account

Known limitations

Amazon Redshift limits the number of columns in a table to 1,600.
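
If you load wide source tables, you can check how close a destination table is to this limit before a sync fails. A minimal sketch, assuming a Postgres-compatible client such as psycopg2 and placeholder schema and table names:

  import psycopg2

  conn = psycopg2.connect(
      host="cluster.abc123xyz789.us-west-2.redshift.amazonaws.com",
      port=5439, dbname="dev", user="autosync_user", password="********",
  )
  with conn, conn.cursor() as cur:
      # Count the columns in one destination table; Redshift rejects tables
      # that would exceed 1,600 columns.
      cur.execute(
          """
          SELECT COUNT(*)
          FROM information_schema.columns
          WHERE table_schema = %s AND table_name = %s;
          """,
          ("sales", "orders"),
      )
      print(cur.fetchone()[0])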

Attention: Data pipelines with an Amazon Redshift target using the SCD2 load type now store timestamps with up to nanosecond precision. Historical records stored before the December release have timestamps limited to three fractional second digits (milliseconds). Historical records are those with a value of N in the autosync_currentrecordflag column of the destination table. Records updated after the December release store timestamps with up to nanosecond precision.
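
You can inspect this flag in the destination table directly. A minimal sketch, assuming psycopg2 and a placeholder destination table; only the autosync_currentrecordflag column and the N value for historical records come from the behavior described above:

  import psycopg2

  conn = psycopg2.connect(
      host="cluster.abc123xyz789.us-west-2.redshift.amazonaws.com",
      port=5439, dbname="dev", user="autosync_user", password="********",
  )
  with conn, conn.cursor() as cur:
      # Historical records carry N in autosync_currentrecordflag; those written
      # before the December release keep only millisecond timestamp precision.
      cur.execute("""
          SELECT autosync_currentrecordflag, COUNT(*)
          FROM sales.customers
          GROUP BY autosync_currentrecordflag;
      """)
      for flag, count in cur.fetchall():
          print(flag, count)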

Data pipelines using Amazon Redshift as both source and target

  • When loading data from a Redshift source to a Redshift target, the source schema name must be different from the destination schema name.
  • When using Redshift as both a source and a target, some columns in the loaded data might have a different data type than they had in the source, because AutoSync maps data through a common data model.

Connection configuration

When you create or edit Amazon Redshift credentials in AutoSync, you configure the following properties (a connection sketch that uses these example values follows the list):

  • Credential label: A unique, meaningful name such as Redshift-Sales. If a configuration with the same name exists, AutoSync displays an Asset conflict error message.
  • Endpoint: The endpoint portion of the JDBC URL to access Redshift. For example, cluster.abc123xyz789.us-west-2.redshift.amazonaws.com.
  • Port number: The port number to connect to the database.
  • Database name: The name of the destination database.
  • Username: A username for an account with the correct permissions for AutoSync to load and synchronize data.
  • Password: The password for the account. Multiple retries with an invalid password can cause your account to be locked.
  • S3 Bucket: The name of an S3 bucket in your AWS account to use for staging data before it is loaded to Redshift. For example, sl-bucket-ca.
  • S3 Folder: The relative path to a folder in the S3 Bucket, used as the root folder for staging data on Redshift. For example, san-francisco to use s3://sl-bucket-ca/san-francisco. To create files at the root level, append a forward slash ( / ) to the file path.
  • S3 Access-key ID: The S3 Access key ID part of the AWS authentication. For example, NAVRGGRV7EDCFVLKJH.
  • S3 Secret Key: The S3 Secret key part of the AWS authentication. For example, 2RGiLmL/6bCujkKLaRuUJHY9uSDEjNYr+ozHRtg.
  • Share: (Optional) Select a user group to share this configuration with. Org admins create user groups to share credentials. If you are a member of a user group, you can select it from the dropdown. You can also select the global shared folder, which shares the credentials with everyone in your Org.
  • Validate and save: Validates the credentials and saves the configuration. AutoSync then adds it to the list of saved credentials.
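
The properties above map onto a standard Redshift connection plus an S3 staging URI. A minimal sketch that assembles the example values from this list; psycopg2 is used only as an illustrative client, and the database and user names are placeholders:

  import psycopg2

  endpoint = "cluster.abc123xyz789.us-west-2.redshift.amazonaws.com"  # Endpoint
  port = 5439                                      # Port number (5439 is the Redshift default)
  staging_uri = "s3://sl-bucket-ca/san-francisco"  # S3 Bucket + S3 Folder

  # Verify that the endpoint, port, database, and account credentials work.
  conn = psycopg2.connect(
      host=endpoint,
      port=port,
      dbname="dev",                # Database name (placeholder)
      user="autosync_user",        # Username with load and synchronization permissions
      password="********",         # Password
  )
  with conn, conn.cursor() as cur:
      cur.execute("SELECT current_database(), current_schema();")
      print(cur.fetchone())
  print("Staging location:", staging_uri)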

After validating or selecting valid credentials, configure:

  • Select schema: AutoSync populates this list from the account. Choose the schema to use as the destination.
  • S3 staging location region: Select the region of your S3 staging location. This field is optional if your S3 staging location is in the same region as your Redshift cluster.