Databricks as a target

AutoSync supports Databricks as a target with the following limitations:

  • Source schema, table, and column names must contain only alphanumeric characters and underscores.
  • Because Databricks is case-insensitive, AutoSync creates lowercase table and column names when loading data.
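The two rules above can be checked before configuring a pipeline. A minimal Python sketch (a hypothetical helper, not part of AutoSync) that validates a source name against the allowed character set and returns the lowercase form AutoSync would create in Databricks:

```python
import re

# Allowed characters per the restriction above: letters, digits, underscores.
VALID_NAME = re.compile(r"^[A-Za-z0-9_]+$")

def check_source_name(name: str) -> str:
    """Reject unsupported characters, then lowercase the name, since
    AutoSync creates lowercase table and column names in Databricks."""
    if not VALID_NAME.match(name):
        raise ValueError(f"Unsupported characters in source name: {name!r}")
    return name.lower()

print(check_source_name("OrderDetails"))  # orderdetails
```

For example, a source table named OrderDetails loads as orderdetails, while a name such as order-details would need to be renamed at the source first.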

Supported Account types

Designer and Classic Manager provide multiple account types for most endpoints, but not all of them are compatible with AutoSync. When you create or edit a data pipeline, the existing credentials list includes only compatible Accounts.

For Databricks, AutoSync supports:

  • Databricks Account

Supported load types

  • Append for CSV files only
  • Full load
  • Incremental (insert new, update changed) load
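The incremental load type inserts rows whose key is new and updates rows whose values changed. A minimal Python sketch of that semantics, using in-memory dicts keyed by primary key as stand-ins for the target table and the change set (illustration only; AutoSync performs this inside Databricks):

```python
def incremental_merge(target: dict, changes: dict) -> dict:
    """Insert rows with new keys, update rows whose values changed,
    and leave unchanged rows untouched."""
    merged = dict(target)
    for key, row in changes.items():
        if key not in merged or merged[key] != row:
            merged[key] = row  # insert new or update changed
    return merged

existing = {101: {"qty": 5}, 102: {"qty": 7}}
changes = {102: {"qty": 9}, 103: {"qty": 1}}
print(incremental_merge(existing, changes))
```

Here row 101 is untouched, row 102 is updated, and row 103 is inserted. A full load, by contrast, replaces the target data entirely.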

Connection configuration

To configure a data pipeline to use Databricks as a target:

  • Select saved credentials or create new ones with the following properties:
    • Credential label: A unique, meaningful name such as Databricks-Finance. If a configuration with the same name exists, AutoSync displays an Asset conflict error message.
    • JDBC URL: The URL to connect to Databricks. Example value: jdbc:spark://;transportMode=http;ssl=1;httpPath=sql/protocolv1/o/2409532680880038/0326-212833-drier754;AuthMech=3;
    • Database name: The name of the destination database.
    • Use Token Based Authentication: Select to use a token instead of a username and password.
    • Token: The access token. This field displays only when token-based authentication is enabled.
    • Username: A username for an account with the correct permissions for AutoSync to load and synchronize data. This field displays only when token-based authentication is disabled.
    • Password: The password for the account. Multiple retries with an invalid password can cause your account to be locked. This field displays only when token-based authentication is disabled.
    • Share: (Optional) Select a user group to share this configuration with. Environment admins (formerly Org admins) create user groups to share credentials. If you are a member of a user group, you can select it from the dropdown. You can also select the global shared folder, which shares the credentials with everyone in your Org.
    • Validate and save: After validation succeeds, AutoSync saves the configuration and adds it to the list of saved credentials.
  • Select schema: From the drop-down list, choose the schema to load data into. AutoSync populates the list from your Databricks account.
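The example JDBC URL above is a semicolon-delimited list of driver properties following the jdbc:spark:// prefix (transportMode, ssl, httpPath, AuthMech, and so on). A small Python sketch (a hypothetical helper, not part of AutoSync) that splits such a URL into its properties, which can help when checking a URL copied from the Databricks console:

```python
def parse_jdbc_properties(url: str) -> dict:
    """Split the semicolon-delimited property list of a Databricks
    JDBC URL into a dict for inspection."""
    _, _, props = url.partition("//")
    pairs = [p for p in props.split(";") if "=" in p]
    return dict(p.split("=", 1) for p in pairs)

url = ("jdbc:spark://;transportMode=http;ssl=1;"
       "httpPath=sql/protocolv1/o/2409532680880038/0326-212833-drier754;"
       "AuthMech=3;")
print(parse_jdbc_properties(url)["transportMode"])  # http
```

For instance, the example URL in this section parses to transportMode=http, ssl=1, and AuthMech=3, alongside the workspace-specific httpPath.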