Hadoop Snap Pack
Overview
Apache Hadoop is an open-source software framework for the storage and processing of large datasets.
Use Snaps in this Snap Pack to read data from and write data to the Hadoop File System (HDFS).
For Hadoop with Kerberos, you must install Kerberos client utilities, such as kinit and kdestroy, on the Snaplex.
Additionally, the Hadoop Snaps use Hadoop libraries that invoke a few system programs internally. The SnapLogic Platform does not support the installation of utilities or processes on Cloudplexes. Learn more about Snap execution on Groundplexes and Cloudplexes.
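For illustration, the sketch below shows the kind of Kerberos login that the Hadoop client libraries perform on a Snaplex node (the Snaps handle this internally). It is a minimal sketch, assuming the Hadoop client libraries are on the classpath; the principal and keytab path are hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.UserGroupInformation;

    public class KerberosLoginSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Tell the Hadoop libraries to authenticate via Kerberos.
            conf.set("hadoop.security.authentication", "kerberos");
            UserGroupInformation.setConfiguration(conf);
            // Hypothetical service principal and keytab location:
            UserGroupInformation.loginUserFromKeytab(
                    "svc-snaplogic@EXAMPLE.COM",
                    "/etc/security/keytabs/snaplogic.keytab");
            System.out.println("Logged in as " + UserGroupInformation.getLoginUser());
        }
    }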
Prerequisites
A Groundplex must be configured as a Hadoop client for this integration to work. The JAR files and property files that must be installed depend on the version and vendor of your Hadoop File System. Refer to your vendor's documentation for more information.
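To sanity-check the client configuration outside SnapLogic, you can run a minimal Java sketch such as the following, which lists the HDFS root directory. It assumes the vendor's client JARs and the core-site.xml/hdfs-site.xml files are on the classpath; the NameNode URI is hypothetical.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsClientCheck {
        public static void main(String[] args) throws Exception {
            // Picks up any *-site.xml files found on the classpath.
            Configuration conf = new Configuration();
            try (FileSystem fs = FileSystem.get(
                    new URI("hdfs://namenode.example.com:8020"), conf)) {
                // Listing the root directory proves connectivity and basic access.
                for (FileStatus status : fs.listStatus(new Path("/"))) {
                    System.out.println(status.getPath());
                }
            }
        }
    }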
Supported versions
This Snap Pack is tested against:
- CDH 5.8
- CDH 5.10
- CDH 5.16.1
- CDH 5.16
- CDH 6.1.1
- HDP 2.6.1
- HDP 2.6.3.1
- HDP 2.6.3
This Snap Pack contains the following Snaps:
- Hadoop Directory Browser: Browses directories and retrieves file and directory information from HDFS or Azure Data Lake Storage.
- HDFS Delete: Deletes files or directories from HDFS based on the specified path.
- HDFS Reader: Reads binary data from files in HDFS and outputs the data as a stream.
- HDFS Writer: Writes binary data to files in HDFS from an input stream.
- HDFS ZipFile Reader: Reads and extracts files from ZIP archives stored in HDFS.
- HDFS ZipFile Writer: Creates ZIP archives and writes them to HDFS.
- ORC Reader: Reads ORC files from HDFS, S3, or Azure storage and converts the data into documents.
- ORC Writer: Converts documents to ORC format and writes the data to HDFS, S3, or Azure storage.
- Parquet Reader: Reads Parquet files from HDFS, S3, or Azure storage and converts the data into documents.
- Parquet Writer: Converts documents to Parquet format and writes the data to HDFS, S3, or Azure storage.
- RC File Formatter: Formats incoming documents to RC (Row Columnar) file format for optimized storage.
- RC File Parser: Parses RC file data and converts it into documents for downstream processing.
- Sequence Formatter: Formats incoming documents to Hadoop sequence file format.
- Sequence Parser: Parses Hadoop sequence file data and converts it into documents.
Known Issues
The upgrade of the Azure Storage library from v3.0.0 to v8.3.0 caused the following issue when using the WASB protocol:
- When you use invalid credentials for the WASB protocol in the HDFS Reader, HDFS Writer, ORC Reader, Parquet Reader, and Parquet Writer Snaps, the pipeline does not fail immediately; instead, it takes 13-14 minutes to display this error:
reason=The request failed with error code null and HTTP code 0. , status_code=error
- After upgrading your Snaplex to the 4.20 GA version, Pipelines that contain an HDFS Reader Snap using Kerberos authentication might remain in the start state.
- ORC Reader/Writer Snaps fail on S3 when using the 4.20 Snaplex with the previous Snap Pack version (hadoop8270 and snapsmrc528), displaying this error: Unable to read input stream, Reason: For input string: "100M"
- ORC Reader/Writer and Parquet Reader/Writer Snaps fail in 4.20 when executing a Pipeline on S3, with this error: org.apache.hadoop.fs.StorageStatistics
Customizing the Location of the Temporary Directory
Snaps in the Hadoop Snap Pack briefly save a temporary file on the system during processing, before passing the contents to a downstream Snap. The temporary file is stored in a default location and is automatically deleted after the process completes.
- You can change the location of the temporary file to a custom location by using the global property jcc.jvm_options.
- You may use either of the two methods in this section to change the temporary file location.
Method 1: Specifying the Temporary Location in the SnapLogic Manager
- In SnapLogic Manager, on the Snaplexes tab, select the name of the applicable Snaplex.
- In the Update Snaplex dialog, on the Node Properties tab, under Global properties, add the global property jcc.jvm_options = -Dhadoop.tmp.dir=/tmp/syd, where /tmp/syd is the new location in which to save the temporary file.
- Click Update, and then click OK in the Snaplex Update Notice. This sets the new location for the temporary file and restarts the Snaplex with the new property setting.
Method 2: Manually Changing the Location of the Temporary File in Global Properties
You can change the default location of the temporary file to another location by editing global properties in your local SnapLogic environment.
- Access the /etc folder in your SnapLogic installation:
  - For a Linux installation, enter the command cd $SL_ROOT/etc at the command prompt.
  - For a Windows installation, enter the command cd %SL_ROOT%\etc at the command prompt.
- Open the global properties file and add the following entry:
  jcc.jvm_options = -Dhadoop.tmp.dir=/tmp/syd
- Save the file and restart the Snaplex to apply the new location for the temporary file.
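Whichever method you use, -Dhadoop.tmp.dir becomes an ordinary JVM system property once the Snaplex restarts. The following minimal, hypothetical Java sketch reads the property back, which can help confirm that the setting took effect:

    public class TmpDirCheck {
        public static void main(String[] args) {
            // Reads the JVM system property set via -Dhadoop.tmp.dir=...;
            // the fallback string is only for illustration.
            String tmpDir = System.getProperty("hadoop.tmp.dir",
                    "(not set; Hadoop defaults apply)");
            System.out.println("hadoop.tmp.dir = " + tmpDir);
        }
    }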