HDFS Writer

The HDFS Writer Snap reads a binary data stream from its input view and writes a file in HDFS (Hadoop Distributed File System).

Overview

This Snap reads a binary data stream from its input view and writes it to a file in HDFS. It also helps you pick a file by suggesting a list of directories and files. For the HDFS protocol, use a SnapLogic on-premises Groundplex, and ensure that its instance runs within the Hadoop cluster and that SSH authentication is already established. The Snap also supports writing to a Kerberized cluster through the HDFS protocol. This Snap supports the HDFS, ABFS (Azure Data Lake Storage Gen 2), and WASB (Azure storage) protocols; HDFS 2.4.0 is supported for the HDFS protocol. It also supports writing to HDFS encryption zones.

Limitations

  • File names with the following special characters are not supported in the HDFS Writer Snap: '+', '?', '/', ':'.

Known issues

The upgrade of the Azure Storage library from v3.0.0 to v8.3.0 causes the following issue when using the WASB protocol:

When you use invalid credentials for the WASB protocol in Hadoop Snaps (HDFS Reader, HDFS Writer, ORC Reader, Parquet Reader, Parquet Writer), the pipeline does not fail immediately; instead, it takes 13-14 minutes to display the following error:

reason=The request failed with error code null and HTTP code 0. , status_code=error

Learn more about the Azure Storage library upgrade.

Snap views

Type Description Examples of upstream and downstream Snaps
Input This Snap has exactly one binary input view. It accepts binary input data from upstream Snaps such as formatters.
Output This Snap has at most one document output view. It provides output with the filename and file action taken (overwritten, created, or ignored).

Example output:

{
  "filename": "hdfs://ec2-54-198-212-134.compute-1.amazonaws.com:8020/user/john/input/sample.csv",
  "fileAction": "overwritten"
}
Learn more about Error handling.

Snap settings

Note: Learn about the common controls in the Snap settings dialog.
Field/Field set Description
Label*

String

Required. Specify a unique name for the Snap. Modify this to be more appropriate, especially if there is more than one of the same Snap in the pipeline.

Default value: HDFS Writer

Example: HDFS Writer

Directory

String/Expression/Suggestion

Specify the URL for the HDFS directory. The URL must start with a supported file protocol, in one of the following formats:

  • hdfs://<hostname>:<port>/<path to directory>/
  • wasb:///<container name>/<path to directory>/
  • wasbs:///<container name>/<path to directory>/
  • abfs(s):///filesystem/<path>/
  • abfs(s)://<file system>@<account name>.dfs.core.windows.net/<path>

The Directory property is not used during pipeline execution or preview; it is used only in the Suggest operation. When you click the Suggest icon, the Snap displays a list of subdirectories under the specified directory. It generates the list by applying the value of the File filter property.

Note: SnapLogic automatically appends azuredatalakestore.net to the store name you specify when using Azure Data Lake; therefore, you do not have to add azuredatalakestore.net to the URI while specifying the directory.

Default value: hdfs://<hostname>:<port>/

Example:

  • hdfs://ec2-54-198-212-134.compute-1.amazonaws.com:8020/user/john/input/
  • wasb:///snaplogic/testDir/
  • wasbs:///snaplogic/testDir/
  • $dirname
  • abfs(s):///filesystem2/dir1
  • abfs(s)://filesystem2@<account name>.dfs.core.windows.net/dir1
File filter

String/Expression

Specify the Glob filter pattern.

Note: Use glob patterns to display a list of directories or files when you click the Suggest icon in the Directory or File property. A complete glob pattern is formed by combining the value of the Directory property with the value of the File filter property. If the value of the Directory property does not end with "/", the Snap appends one so that the File filter value is applied to the directory specified in the Directory property.

Default value: *
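
Example: *.csv

For instance, a Directory value of hdfs://<hostname>:<port>/user/john/ combined with a File filter of *.csv forms the glob pattern hdfs://<hostname>:<port>/user/john/*.csv, so the suggestion list shows only the CSV files in that directory.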

File

String/Expression/Suggestion

Specify the filename or a relative path to a file under the directory given in the Directory property. It should not start with the URL separator "/". The File property can be a JavaScript expression, which is evaluated with values from the input view document. When you click the Suggest icon, the Snap displays a list of regular files under the directory specified in the Directory property. It generates the list by applying the value of the File filter property.

Default value: N/A

Example:

  • sample.csv
  • tmp/another.csv
  • $filename
Flush interval (kB)

Interval

Specify the flush interval in kilobytes. The Snap flushes the output stream each time the specified amount of data has been written to the target file server.

Note: If the Flush interval is 0, the Snap flushes at the maximum frequency, after each byte block is written. The larger the flush interval, the less frequent the flushes. Flushing is useful if the file upload experiences intermittent failures; however, more frequent flushes result in slower file uploads. The default value of -1 indicates no flush during the upload.

Default value: -1

Example: 0
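
For instance, a Flush interval of 100 makes the Snap flush the output stream after every 100 kB written to the target, whereas the default value of -1 performs no intermediate flushes during the upload.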

Number Of Retries

Integer/Expression

Specify the maximum number of attempts to be made to receive a response.

Note:
  • The request is terminated if the attempts do not result in a response.
  • The retry operation occurs only when the Snap loses the connection to the server.

Default value: 0

Example: 1

Retry Interval (seconds)

Integer/Expression

Specify the time interval between two successive retry requests. A retry happens only when the previous attempt resulted in an exception.

Default value: 1

Example: 5
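
For instance, with Number Of Retries set to 3 and Retry Interval set to 5, the Snap waits five seconds after a lost connection before each of up to three retry attempts; if none succeeds, the request is terminated.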

File action*

Dropdown list

Select an action to perform if the specified file already exists:

  • Overwrite - The Snap attempts to write the file without first checking for the file's existence, for better performance; the "fileAction" field is "overwritten" in the output view data.
  • Append - The Snap appends records in the incoming documents to the existing file.
  • Ignore - If the file already exists, the Snap neither throws an exception nor overwrites the file, but writes an output document indicating that the file has been 'ignored'.
  • Error - If the file already exists, an error is displayed in the Pipeline Run Log.
Note: The Append operation is supported only for the FILE, SFTP, FTP, and FTPS protocols. For protocols that do not support Append, we recommend that you use the File Operation, File Writer, and File Delete Snaps and follow this procedure (a code sketch of this sequence appears below):
  1. Copy the blob file to your local drive from the endpoint.
  2. Append additional data to the local file.
  3. Delete the original file on the endpoint.
  4. Copy the modified temporary file back from the source to the target.

Note: This approach might involve disk overhead; therefore, ensure that you have enough disk space on your system.

Default value: Overwrite

Example: Append
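
For reference, the four-step workaround above can be sketched outside SnapLogic with the Hadoop FileSystem Java API. This is a minimal illustration under assumed host names and paths, not the Snap's implementation:

import java.io.FileOutputStream;
import java.io.OutputStream;
import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendWorkaround {
    public static void main(String[] args) throws Exception {
        // Hypothetical cluster URI and paths, for illustration only.
        URI cluster = URI.create("hdfs://example-host:8020");
        Path remote = new Path("/user/john/input/sample.csv");
        java.nio.file.Path local = Paths.get("/tmp/sample.csv");

        Configuration conf = new Configuration();
        try (FileSystem fs = FileSystem.get(cluster, conf)) {
            // 1. Copy the file from the endpoint to the local drive.
            //    useRawLocalFileSystem=true avoids writing a stale .crc file
            //    that would fail checksum verification on the copy back.
            fs.copyToLocalFile(false, remote, new Path(local.toString()), true);

            // 2. Append additional data to the local copy.
            try (OutputStream out = new FileOutputStream(local.toFile(), true)) {
                out.write("appended line\n".getBytes(StandardCharsets.UTF_8));
            }

            // 3. Delete the original file on the endpoint.
            fs.delete(remote, false);

            // 4. Copy the modified local file back to the endpoint.
            fs.copyFromLocalFile(new Path(local.toString()), remote);
        } finally {
            // Clean up the temporary local copy.
            Files.deleteIfExists(local);
        }
    }
}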

File permissions for various users

Use this field set to select the user and the desired file permissions.

Note:

Limitations with File Permissions for Various Users

  • With "File permissions for various users": File names with the following special characters are not supported in the HDFS Writer Snap: '+', '?', '/', ':'.
  • Without "File permissions for various users": File names with the following special characters are not supported in the HDFS Writer Snap: ':', '/'.
User type

String/Expression/Suggestion

Specify the user type: 'owner', 'group', or 'others'. Each row can have only one user type, and each user type should appear only once. Select one from the suggested list.

Default value: N/A

Example: owner, group, others

File permissions

String/Expression/Suggestion

Specify the file permissions: any combination of read, write, and execute, separated by the '+' character. Select one from the suggested list.

Default value: N/A

Example: read, write, execute, read+write, read+write+execute

User Impersonation

Checkbox

Select this checkbox to enable user impersonation.

Note: Hadoop allows you to configure proxy users to access HDFS on behalf of other users; this is called impersonation. When user impersonation is enabled on the Hadoop cluster, any jobs submitted using a proxy are executed with the impersonated user's existing privilege levels rather than those of the superuser associated with the cluster. For more information on user impersonation in this Snap, see the section on User Impersonation below.

Default value: Deselected
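
For example, Hadoop proxy users are typically defined in the cluster's core-site.xml. A minimal sketch, assuming a hypothetical proxy user named snapuser:

  • Name: hadoop.proxyuser.snapuser.hosts, Value: *
  • Name: hadoop.proxyuser.snapuser.groups, Value: *

These entries let snapuser impersonate members of any group from any host; in production, restrict them to specific hosts and groups.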

Output for each file written

Checkbox

Select this checkbox to produce a separate output document for each file that is written. If the Snap receives multiple binary inputs and the File property is an expression that dynamically evaluates to a filename (for example, by using the Content-Location field from the input metadata), each binary input can be written to a different target file.

By default, the Snap produces only one output document, with a filename that corresponds to the last file written.

Default value: Deselected
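
For example, assuming an upstream Snap sets a Content-Location header on each binary document, the File property could be set to an expression such as $['content-location'] (the exact header name depends on the upstream Snap), so that each binary input is written to its own target file.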

Write empty file

Checkbox

Select this checkbox to write an empty file to all the supported protocols when the binary input document has no data.

Default value: Deselected

Snap Execution

Dropdown list

Choose one of the three modes in which the Snap executes. Available options are:
  • Validate & Execute. Performs limited execution of the Snap and generates a data preview during pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during pipeline runtime.
  • Execute only. Performs full execution of the Snap during pipeline execution without generating preview data.
  • Disabled. Disables the Snap and all Snaps that are downstream from it.

Default value: Validate & Execute

Example: Execute only

Troubleshooting

Writing to S3 files with HDFS version CDH 5.8 or later

When running HDFS version CDH 5.8 or later, the Hadoop Snap Pack may fail to write to S3 files. To resolve this issue, make the following changes in Cloudera Manager:

  1. Go to HDFS configuration.
  2. In Cluster-wide Advanced Configuration Snippet (Safety Valve) for core-site.xml, add an entry with the following details (see the XML sketch after this procedure):
    • Name: fs.s3a.threads.max
    • Value: 15
  3. Click Save.
  4. Restart all the nodes.
  5. Under Restart Stale Services, select Re-deploy client configuration.
  6. Click Restart Now.
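
In XML form, the safety valve entry from step 2 corresponds to a standard Hadoop property element like the following sketch:

<property>
  <name>fs.s3a.threads.max</name>
  <value>15</value>
</property>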