Unable to Import Data as Parquet into Encrypted HDFS Zone | Sqoop Parquet Import

Recently I discovered an issue in Sqoop: when importing data into a Hive table whose location is inside an encrypted HDFS zone, the import fails with a “can’t be moved into an encryption zone” error:

Command:

sqoop import --connect <postgres_url> --username <username> --password <password> \
--table sourceTable --split-by id --hive-import --hive-database staging \
--hive-table hiveTable --as-parquetfile

Errors:

2017-05-24 13:38:51,539 INFO [Thread-84] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: 
Setting job diagnostics to Job commit failed: org.kitesdk.data.DatasetIOException: Could not move contents of hdfs://nameservice1/tmp/staging/.
temp/job_1495453174050_1035/mr/job_1495453174050_1035 to 
hdfs://nameservice1/user/hive/warehouse/staging.db/hiveTable
        at org.kitesdk.data.spi.filesystem.FileSystemUtil.stageMove(FileSystemUtil.java:117)
        at org.kitesdk.data.spi.filesystem.FileSystemDataset.merge(FileSystemDataset.java:406)
        at org.kitesdk.data.spi.filesystem.FileSystemDataset.merge(FileSystemDataset.java:62)
        at org.kitesdk.data.mapreduce.DatasetKeyOutputFormat$MergeOutputCommitter.commitJob(DatasetKeyOutputFormat.java:387)
        at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobCommit(CommitterEventHandler.java:274)
        at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:237)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
/tmp/staging/.temp/job_1495453174050_1035/mr/job_1495453174050_1035/964f7b5e-2f55-421d-bfb6-7613cc4bf26e.parquet 
can't be moved into an encryption zone.
        at org.apache.hadoop.hdfs.server.namenode.EncryptionZoneManager.checkMoveValidity(EncryptionZoneManager.java:284)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedRenameTo(FSDirectory.java:564)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.renameTo(FSDirectory.java:478)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameToInternal(FSNamesystem.java:3929)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameToInt(FSNamesystem.java:3891)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:3856)

This is caused by a known Sqoop bug: SQOOP-2943. Sqoop uses the Kite SDK to generate Parquet files, and the Kite SDK writes them under the /tmp directory on the fly. Because /tmp is not encrypted while the Hive warehouse directory is, the final step that moves the Parquet files from /tmp into the warehouse fails: HDFS does not allow a file to be renamed from outside an encryption zone into one. The import only fails with the Parquet format; text file format works as expected.
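If you are unsure whether this applies to your cluster, you can confirm which paths are encryption zones (this requires HDFS administrator privileges):

```shell
# List all HDFS encryption zones (run as the hdfs superuser).
# If the Hive warehouse directory appears here but /tmp does not,
# the cross-zone rename at job commit time will be rejected.
hdfs crypto -listZones
```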

Currently SQOOP-2943 is not fixed, so there is no direct fix within Sqoop itself.

For the time being, the workaround is to either:

  1. Import the data in text file format into a temporary Hive table, then use a Hive query to copy it into the destination Parquet table; OR
  2. Import the data as Parquet files into a temporary directory outside of the Hive warehouse (and outside any encryption zone), then again use Hive to copy the data into the destination Parquet table.
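For example, workaround 1 might look like the following sketch (the temporary table name hiveTable_text is hypothetical, and the destination Parquet table staging.hiveTable is assumed to already exist):

```shell
# Step 1: import as text into a temporary Hive table; the text path
# does not go through the Kite SDK Parquet staging logic in /tmp.
sqoop import --connect <postgres_url> --username <username> --password <password> \
  --table sourceTable --split-by id --hive-import --hive-database staging \
  --hive-table hiveTable_text --as-textfile

# Step 2: copy into the destination Parquet table with Hive, which
# writes directly inside the encrypted warehouse directory.
hive -e "INSERT OVERWRITE TABLE staging.hiveTable SELECT * FROM staging.hiveTable_text;
         DROP TABLE staging.hiveTable_text;"
```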

Sqoop Teradata import truncates timestamp microseconds information

Last week, while I was working on Sqoop with Teradata, I noticed a bug: the microseconds part of a timestamp field is truncated after importing into HDFS. The following steps reproduce the issue:

1. Create a table in Teradata:

CREATE TABLE vmtest.test (a integer, b timestamp(6) 
FORMAT 'yyyy-mm-ddbhh:mi:ss.s(6)') PRIMARY INDEX (a);

INSERT INTO vmtest.test VALUES (1, '2017-03-14 15:20:20.711001');

2. Run the Sqoop import command:

sqoop import --connect jdbc:teradata://<teradata-host>/database=vmtest \
    --username dbc --password dbc --target-dir /tmp/test --delete-target-dir \
    --as-textfile --fields-terminated-by "," --table test

3. Check the data stored in HDFS:

[cloudera@quickstart ~]$ hadoop fs -cat /tmp/test/part*
1,2017-03-14 15:20:20.711

Notice that the microseconds part was truncated from 711001 to 711.

This is caused by a bug in TDCH (Teradata Connector for Hadoop) from Teradata, which is used by the Cloudera Connector Powered by Teradata.

The workaround is to make sure that the timestamp value is converted to a string before it is passed to Sqoop, so that no conversion happens on the connector side. The following Sqoop command is an example:

sqoop import --connect jdbc:teradata://<teradata-host>/database=vmtest \
    --username dbc --password dbc --target-dir /tmp/test \
    --delete-target-dir --as-textfile --fields-terminated-by "," \
    --query "SELECT a, cast(cast(b as format 'YYYY-MM-DD HH:MI:SS.s(6)') as char(40)) from test WHERE \$CONDITIONS" \
    --split-by a

After importing, data is stored in HDFS correctly:

[cloudera@quickstart ~]$ hadoop fs -cat /tmp/test/part*
1,2017-03-14 15:20:20.711001

As mentioned above, this is a bug in the Teradata connector, so we have to wait for it to be fixed in TDCH. At the time of writing, the issue still exists in CDH 5.8.x.

Sqoop Teradata import truncates timestamp nano seconds information

In the last few weeks, I have been dealing with Teradata Support on a Sqoop issue: values in a Timestamp(6) column in Teradata lose the last 3 digits of their fractional seconds after being imported into HDFS using a Sqoop command.

The following test case validates the issue:

  1. Create a table in Teradata:
    CREATE TABLE vmtest.test (a integer, b timestamp(6) 
    FORMAT 'yyyy-mm-ddbhh:mi:ss.s(6)') PRIMARY INDEX (a);
    
    INSERT INTO vmtest.test VALUES (1, '2016-04-05 11:27:24.699022');
    
  2. Run the Sqoop import command:
    sqoop import --connect jdbc:teradata://<teradata-host>/database=vmtest \
        --username dbc --password dbc --target-dir /tmp/test --delete-target-dir \
        --as-textfile --fields-terminated-by "," --table test
    
  3. Check the data stored in HDFS:
    [cloudera@quickstart ~]$ hadoop fs -cat /tmp/test/part*
    1,2016-04-05 11:27:24.699
    

Notice that the fractional seconds were truncated from 699022 to 699.

This is caused by a bug in TDCH (Teradata Connector for Hadoop) from Teradata, which is used by the Cloudera Connector Powered by Teradata.

The workaround is to make sure that the timestamp value is converted to a string before it is passed to Sqoop, so that no conversion happens on the connector side. The following Sqoop command is an example:

sqoop import --connect jdbc:teradata://<teradata-host>/database=vmtest \
    --username dbc --password dbc --target-dir /tmp/test \
    --delete-target-dir --as-textfile --fields-terminated-by "," \
    --query "SELECT a, cast(cast(b as format 'YYYY-MM-DD HH:MI:SS.s(6)') as char(40)) from test WHERE \$CONDITIONS" \
    --split-by a

After import, data is stored in HDFS correctly:

[cloudera@quickstart ~]$ hadoop fs -cat /tmp/test/part*
1,2016-04-05 11:27:24.699022

Sqoop job failed with “Malformed option in options file” error

Recently I discovered a bug in Sqoop: it wrongly reports a malformed query in an options file even though the query is perfectly valid. Take the following Sqoop command as an example:

sqoop --options-file /path/to/options-file.txt

where the file /path/to/options-file.txt contains the following content:

import
--connect
jdbc:mysql://......
--username
XXXXXXXX
--password
XXXXXXXX
--query
SELECT * FROM test_table WHERE a = 'b'
....

Sqoop will fail with the following error:

16/11/22 16:08:59 ERROR sqoop.Sqoop: Error while expanding arguments
java.lang.Exception: Malformed option in options file(/path/to/options-file.txt): 
SELECT * FROM test_table WHERE a = 'b'
at org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary(OptionsFileUtil.java:170)
at org.apache.sqoop.util.OptionsFileUtil.removeQuotesEncolosingOption(OptionsFileUtil.java:136)
at org.apache.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:90)
at com.cloudera.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:33)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:199)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
Malformed option in options file(/path/to/options-file.txt): SELECT * FROM test_table WHERE a = 'b'

This is caused by a bug in the Sqoop code that checks for beginning and ending quotes in each option read from the options file. If an option starts or ends (but not both) with a quote character, Sqoop fails with the error above.

In the example shown above, the query ends with a single quote (“‘”), so Sqoop wrongly treats it as an invalid option and throws an exception. Look at the function that performs the quote check on the option string:

https://github.com/apache/sqoop/blob/release-1.4.6-rc3/src/java/org/apache/sqoop/util/OptionsFileUtil.java#L156-L175

  private static String removeQuoteCharactersIfNecessary(String fileName,
      String option, char quote) throws Exception {
    boolean startingQuote = (option.charAt(0) == quote);
    boolean endingQuote = (option.charAt(option.length() - 1) == quote);

    if (startingQuote && endingQuote) {
      if (option.length() == 1) {
        throw new Exception("Malformed option in options file("
            + fileName + "): " + option);
      }
      return option.substring(1, option.length() - 1);
    }

    if (startingQuote || endingQuote) {
       throw new Exception("Malformed option in options file("
           + fileName + "): " + option);
    }

    return option;
  }

Based on the above code, a Sqoop command using an options file will fail if any option either only starts or only ends with a quote (single or double). I have created the upstream JIRA SQOOP-3061 for this bug and provided a patch to fix the issue.

Currently, the workaround is to NOT end the query with a quote character, either double or single; simply appending something like “AND 1=1” will resolve the issue.
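To illustrate the check, here is a small shell re-implementation of the logic above (illustrative only; the real check lives in OptionsFileUtil.java and also strips matching enclosing quotes):

```shell
# Mimics Sqoop's removeQuoteCharactersIfNecessary(): an option is
# rejected when exactly one end of it is a quote character.
check_option() {
  opt="$1"
  first="${opt%"${opt#?}"}"   # first character (POSIX-safe)
  last="${opt#"${opt%?}"}"    # last character (POSIX-safe)
  for q in "'" '"'; do
    if [ "$first" = "$q" ] && [ "$last" = "$q" ]; then
      echo "ok"; return        # quoted at both ends: accepted
    elif [ "$first" = "$q" ] || [ "$last" = "$q" ]; then
      echo "malformed"; return # quoted at one end only: rejected
    fi
  done
  echo "ok"                    # no quotes at either end: accepted
}

check_option "SELECT * FROM test_table WHERE a = 'b'"          # prints: malformed
check_option "SELECT * FROM test_table WHERE a = 'b' AND 1=1"  # prints: ok
```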

Does Sqoop Mainframe Import Support SFTP?

Currently Sqoop leverages the Apache Commons Net FTP library, with some customisations to handle the way mainframes return FTP listings. The current Apache Commons Net library does NOT support SFTP, but it does support FTPS.

FTPS and SFTP are very different protocols so we would have to use a different underlying library to incorporate SFTP capability into the mainframe module:

  • FTPS: FTPS is FTP with SSL for security. It uses a control channel and opens new connections for the data transfer. As it uses SSL, it requires a certificate.
  • SFTP: SFTP (SSH File Transfer Protocol/Secure File Transfer Protocol) was designed as an extension of SSH to provide file transfer capability, so it usually uses only the SSH port for both data and control.

In most SSH server installations you will have SFTP support, but FTPS would need the additional configuration of a supported FTP server.

However, even though Apache Commons Net supports FTPS, Sqoop currently only uses the class org.apache.commons.net.ftp.FTPClient, NOT org.apache.commons.net.ftp.FTPSClient, which is the one that supports FTPS. Please check the source code in MainframeFTPClientUtils.java.

So, the conclusion is that Sqoop currently supports neither FTPS nor SFTP; the only protocol it supports for mainframe imports is FTP.