Oracle Number(1,0) field maps to Boolean in Spark

Recently I was working on an issue where, when importing data from Oracle into a Hive table using Spark, columns of type Number(1,0) in Oracle were implicitly converted into the Boolean data type. On CDH5.5.x this worked correctly; after upgrading to CDH5.10.x, the issue appeared. See the Hive table output below after the import:

Before upgrade:

SELECT column1 FROM test_table limit 2;
0
1

After upgrade:

SELECT column1 FROM test_table limit 2;
False
True

After digging further, I discovered that this change was introduced by SPARK-16625, due to the integration required for Spark to work correctly with Oracle.

Since the change was intentional, the following workarounds are suggested:

  1. Cast the Boolean column to a type of your choosing in the Spark code before writing it to the Hive table (see the sketch after this list)
  2. Make sure that the mapped column in Hive is of a compatible data type, for example TinyInt rather than String, so that True or False is stored as 1 or 0 respectively, rather than as the string value “True” or “False” (the column in my case showed “False” and “True” because it was of String data type)
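
For the first workaround, below is a minimal sketch using the Spark DataFrame API. The JDBC connection details, table name and column name are placeholders, and “spark” is the Spark 2 SparkSession (on Spark 1.6 use sqlContext.read instead):

import org.apache.spark.sql.functions.col

// Placeholder JDBC connection details, for illustration only.
val oracleDF = spark.read
  .format("jdbc")
  .option("url", "jdbc:oracle:thin:@//oracle-host:1521/ORCL")
  .option("dbtable", "TEST_TABLE")
  .option("user", "test_user")
  .option("password", "test_password")
  .load()

// COLUMN1 is the Oracle NUMBER(1,0) column that now arrives as BooleanType;
// cast it back to a numeric type so that 0/1 is written to Hive instead of false/true.
val fixedDF = oracleDF.withColumn("COLUMN1", col("COLUMN1").cast("tinyint"))

fixedDF.write.mode("append").saveAsTable("test_table")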

Hope the above helps.

Spark jobs failed with delegation token renewal error

An Oozie Spark job failed with the following error:

Job aborted due to stage failure: Task 103 in stage 194576.0 failed 4 times, most recent failure: Lost task 103.3 in stage 194576.0 
(TID 119674041, ): org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): 
token (token for sparkpse: HDFS_DELEGATION_TOKEN owner=@HADOOP.CHARTER.COM, renewer=yarn, realUser=, issueDate=1482494610879, 
maxDate=1483099410879, sequenceNumber=274718, masterKeyId=166) can't be found in cache 
at org.apache.hadoop.ipc.Client.call(Client.java:1471) 
at org.apache.hadoop.ipc.Client.call(Client.java:1408) 
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230) 
at com.sun.proxy.$Proxy16.getFileInfo(Unknown Source)

This is caused by a long-running Spark job in a kerberized environment: checkpointing fails because the delegation token is not renewed properly.

The workaround is to add “--conf spark.hadoop.fs.hdfs.impl.disable.cache=true” to the Spark job’s command line parameters, which disables the HDFS filesystem cache on the Spark side so that a stale delegation token is not reused.
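
If it is more convenient to set this inside the job than on the command line, the same property can also be applied programmatically. Below is a minimal sketch in Scala; the application name is a placeholder:

import org.apache.spark.{SparkConf, SparkContext}

// Equivalent to passing the --conf flag above: any spark.hadoop.* setting is copied
// into the Hadoop Configuration, so the cached HDFS client instance (and the stale
// delegation token it holds) is not reused.
val conf = new SparkConf()
  .setAppName("long-running-job") // placeholder application name
  .set("spark.hadoop.fs.hdfs.impl.disable.cache", "true")

val sc = new SparkContext(conf)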

How to load a different version of Spark into Oozie

This article explains the steps needed to load Spark 2 into Oozie under CDH5.9.x, which ships with Spark 1.6. Although this was tested under CDH5.9.0, it should be similar for earlier releases.

Please follow the steps below:

  1. Locate the current shared-lib directory by running:
    oozie admin -oozie http://<oozie-server-host>:11000/oozie -sharelibupdate
    

    you will get output similar to the following:

    [ShareLib update status]
    host = http://<oozie-server-host>:11000/oozie
    status = Successful
    sharelibDirOld = hdfs://<oozie-server-host>:8020/user/oozie/share/lib/lib_20161202183044
    sharelibDirNew = hdfs://<oozie-server-host>:8020/user/oozie/share/lib/lib_20161202183044
    

    This tells me that the current sharelib directory is /user/oozie/share/lib/lib_20161202183044

  2. Create a new directory for Spark 2 under this directory:
    hadoop fs -mkdir /user/oozie/share/lib/lib_20161202183044/spark2
    
  3. Put all your Spark 2 JARs under this directory; please also make sure that oozie-sharelib-spark-4.1.0-cdh5.9.0.jar is there too
  4. Update the sharelib by running:
    oozie admin -oozie http://<oozie-server-host>:11000/oozie -sharelibupdate
    
  5. Confirm that spark2 has been added to the sharelib list:
    oozie admin -oozie http://<oozie-server-host>:11000/oozie -shareliblist
    

    you should get output similar to the following:

    [Available ShareLib]
    spark2
    oozie
    hive
    distcp
    hcatalog
    sqoop
    mapreduce-streaming
    spark
    pig
    
  6. Go back to the Spark workflow and add the following configuration under the Spark action:
    <property>
        <name>oozie.action.sharelib.for.spark</name>
        <value>spark2</value>
    </property>
    
  7. Save the workflow and run it to confirm that it now picks up the correct JARs.

Please be advised that although this can work, it puts the Spark action in Oozie into a configuration that is not supported by Cloudera, because it has not been tested and is not recommended. If you are still willing to go ahead, the steps above should help.

Enabling Snappy Support for Shark/Spark

Shark/Spark will not be able to read Snappy-compressed data out of the box. In my previous post I explained how to enable Snappy compression in Hadoop 2.4; once that is done, enabling Snappy in Spark is dead simple: all you need to do is set an environment variable for it.

In $SPARK_HOME/conf/spark-env.sh, add the following:

export SPARK_LIBRARY_PATH=$HADOOP_HOME/lib/native

Of course, this assumes that you have Hadoop’s native Snappy library, libsnappy.so, in the specified directory.

After distributing this conf file to all the slaves and restarting the cluster, Shark will be able to create or read Snappy compressed data through Spark.
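
As a quick sanity check from the Spark shell (the HDFS path below is only a placeholder for wherever your Snappy-compressed files live):

// With libsnappy.so on SPARK_LIBRARY_PATH this count succeeds;
// without the native library the read fails with a Snappy codec error.
val records = sc.textFile("hdfs:///user/hive/warehouse/snappy_table")
println(records.count())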

Getting “protobuf” Error When Using Shark 0.9.1 with Hadoop 2.4

Recently I needed to experiment with Shark so that I could compare its performance with Hive at work. In order to do this, I needed to set up a single-node cluster on one of my VirtualBox machines.

Installing Hadoop and Hive was quite smooth; you can refer back to my post for the instructions. However, I encountered problems when trying to install Shark and Spark.

Spark ran OK with no problems, but Shark kept failing when I tried to run some queries:


shark> select * from test;
17.096: [Full GC 71198K->24382K(506816K), 0.3150970 secs]
Exception in thread "main" java.lang.VerifyError: class org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$SetOwnerRequestProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
	at java.lang.ClassLoader.defineClass1(Native Method)
	at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
	at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
	at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at java.lang.Class.getDeclaredMethods0(Native Method)
	at java.lang.Class.privateGetDeclaredMethods(Class.java:2531)
	at java.lang.Class.privateGetPublicMethods(Class.java:2651)
	at java.lang.Class.privateGetPublicMethods(Class.java:2661)
	at java.lang.Class.getMethods(Class.java:1467)
	at sun.misc.ProxyGenerator.generateClassFile(ProxyGenerator.java:426)
	at sun.misc.ProxyGenerator.generateProxyClass(ProxyGenerator.java:323)
	at java.lang.reflect.Proxy.getProxyClass0(Proxy.java:636)
	at java.lang.reflect.Proxy.newProxyInstance(Proxy.java:722)
	at org.apache.hadoop.ipc.ProtobufRpcEngine.getProxy(ProtobufRpcEngine.java:92)
	at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:537)
	at org.apache.hadoop.hdfs.NameNodeProxies.createNNProxyWithClientProtocol(NameNodeProxies.java:334)
	at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:241)
	at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:141)
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:576)
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:521)
	at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:146)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2397)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
	at org.apache.hadoop.hive.ql.Context.getScratchDir(Context.java:180)
	at org.apache.hadoop.hive.ql.Context.getMRScratchDir(Context.java:231)
	at org.apache.hadoop.hive.ql.Context.getMRTmpFileURI(Context.java:288)
	at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:1274)
	at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:1059)
	at shark.parse.SharkSemanticAnalyzer.analyzeInternal(SharkSemanticAnalyzer.scala:137)
	at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:279)
	at shark.SharkDriver.compile(SharkDriver.scala:215)
	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:337)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:909)
	at shark.SharkCliDriver.processCmd(SharkCliDriver.scala:338)
	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
	at shark.SharkCliDriver$.main(SharkCliDriver.scala:235)
	at shark.SharkCliDriver.main(SharkCliDriver.scala)

It is not obvious what is happening, but the error is caused by the “protobuf” package bundled inside the jar file “hive-exec-0.11.0-shark-0.9.1.jar”, located at $SHARK_HOME/lib_managed/jars/edu.berkeley.cs.shark/hive-exec; the bundled copy conflicts with the protobuf version used by Hadoop 2.4, which is what triggers the VerifyError above. All you need to do is remove the bundled package completely:

cd $SHARK_HOME/lib_managed/jars/edu.berkeley.cs.shark/hive-exec
# Extract the jar, drop the bundled protobuf classes, then rebuild the jar
# from scratch so the removed classes do not linger in the archive.
unzip hive-exec-0.11.0-shark-0.9.1.jar
rm -f com/google/protobuf/*
rm -f hive-exec-0.11.0-shark-0.9.1.jar
zip -r hive-exec-0.11.0-shark-0.9.1.jar *
# Clean up the extracted files afterwards.
rm -fr com hive-exec-log4j.properties javaewah/ javax/ javolution/ META-INF/ org/

That’s it; restart your “shark” session and it should be working now.