HiveMetaStore Failed to Start in Cloudera Manager: cmf.service.config.ConfigGenException: Unable to generate config file creds.localjceks

Recently I was dealing with an issue where HiveMetaStore failed to start in a Cloudera Manager managed environment. It failed with the errors below:

Caused by: com.cloudera.cmf.service.config.ConfigGenException: Unable to generate config file creds.localjceks
        at com.cloudera.cmf.service.config.JceksConfigFileGenerator.generate(
        at com.cloudera.cmf.service.HandlerUtil.emitConfigFiles(
        at com.cloudera.cmf.service.AbstractRoleHandler.generateConfiguration(

This problem is very common if you have either of the following misconfigurations in your cluster:

1. Wrong version of Java being used. For the list of Java versions supported by Cloudera, please refer to the link below:
CDH and Cloudera Manager Supported JDK Versions

2. Different version of Java used across the cluster hosts.

So, on every host in the cluster, run:

java -version

and check the symlinks under /usr/java/jdk****-cloudera to confirm they are consistent across the whole cluster.
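Checking every host by hand gets tedious, so here is a minimal sketch of automating the comparison. Both `hosts.txt` (one hostname per line) and the `java_major_version` helper are assumptions for illustration, and passwordless SSH to each host is assumed:

```shell
# Hypothetical helper: extract the major version (e.g. "1.7") from the
# first line of "java -version" output.
java_major_version() {
  echo "$1" | sed -n 's/.*version "\([0-9]*\.[0-9]*\).*/\1/p'
}

# Print each host's Java version so mismatches stand out at a glance.
# hosts.txt is an assumed file listing the cluster nodes.
for host in $(cat hosts.txt 2>/dev/null); do
  ver=$(ssh "$host" 'java -version 2>&1 | head -1')
  echo "$host: $(java_major_version "$ver")"
done
```

Any host printing a different version from the rest is a candidate for causing this failure.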

After all of the above checks are done, try restarting the failed service; most likely the issue will be resolved. If not, please let me know in the comments below.

Securely Managing Passwords In Sqoop

Apache Sqoop became a Top-Level Project at Apache in March 2012. Since then, Sqoop has developed a lot and become very popular within the Hadoop ecosystem. In this post, I will cover ways to specify database passwords to Sqoop securely.

The following are the common ways to pass a database password to Sqoop:

sqoop import --connect jdbc:mysql:// \
             --username myuser -P \
             --table mytable

sqoop import --connect jdbc:mysql:// \
             --username myuser \
             --password mypassword \
             --table mytable

The first is reasonably secure: the -P option makes Sqoop prompt for the password, so it never appears on the command line. However, it is only practical for interactive use, not for scripts or scheduled jobs.

And we can all agree that the second is insecure, as the clear-text password is visible to anyone who can see the command, for example in the shell history or in ps output.

A more secure way of passing the password is through the use of a so-called password file. The commands are as follows:

echo -n "password" > /home/ericlin/.mysql.password
chmod 400 /home/ericlin/.mysql.password
sqoop import --connect jdbc:mysql:// \
             --username myuser \
             --password-file /home/ericlin/.mysql.password \
             --table mytable

Please note that we need the “-n” option for the “echo” command so that no newline is added to the end of the password. Also, please do not use “vim” to create the file, as “vim” automatically adds a newline at the end of the file, which will cause Sqoop to fail because the password will then contain a newline character.
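You can see the difference for yourself by counting bytes (a quick sketch; “mypassword” is just a 10-character stand-in):

```shell
# Without -n, echo appends a trailing newline, so Sqoop would send
# "mypassword\n" as the password and authentication would fail.
echo "mypassword" | wc -c         # 11 bytes: 10 characters + a trailing newline
printf '%s' "mypassword" | wc -c  # 10 bytes: the password only

# printf never appends a newline and is more portable than echo -n,
# so it is a safe way to write the password file:
printf '%s' "mypassword" > /tmp/.mysql.password
wc -c < /tmp/.mysql.password      # 10
```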

However, storing the password in a text file is still considered insecure, even though we have restricted its permissions. As of Sqoop 1.4.5, Sqoop supports the use of a Java KeyStore (JCEKS) to store passwords, so that you do not need to store them in clear text in a file.

To create the password alias in a Java keystore on HDFS:

[ericlin@localhost ~] $ hadoop credential create mydb.password.alias -provider jceks://hdfs/user/ericlin/mysql.password.jceks
Enter password: 
Enter password again: 
mydb.password.alias has been successfully created.
org.apache.hadoop.security.alias.JavaKeyStoreProvider has been updated.

At the prompt, enter the password that will be used to access the database.

“mydb.password.alias” is the alias that we can then pass to Sqoop when running the command, so that no clear-text password is needed.

Then you can run the following Sqoop command:

sqoop import \
             -Dhadoop.security.credential.provider.path=jceks://hdfs/user/ericlin/mysql.password.jceks \
             --connect 'jdbc:mysql://' \
             --table mytable \
             --username myuser \
             --password-alias mydb.password.alias

This way the password is stored encrypted inside jceks://hdfs/user/ericlin/mysql.password.jceks, and no one can see it on the command line.

Hope this helps.

Flume Collector Sink Decorator Plugin Finished

Before Christmas, I posted a blog about my trouble writing a Flume Collector Sink Decorator Plugin.

After doing some research and continuing to dig into the underlying issue, I finally got a solution, which makes me super happy.

The issue I had was caused by this error:

Exception in thread "main" java.lang.UnsupportedClassVersionError: garbagefilter/GarbageFilterDecorator : Unsupported major.minor version 51.0
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(
        at Method)
        at java.lang.ClassLoader.loadClass(
        at sun.misc.Launcher$AppClassLoader.loadClass(
        at java.lang.ClassLoader.loadClass(
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(
        at com.cloudera.flume.conf.SourceFactoryImpl.loadPluginBuilders(
        at com.cloudera.flume.conf.SourceFactoryImpl.(
        at com.cloudera.flume.conf.FlumeBuilder.(
        at com.cloudera.flume.agent.LogicalNodeManager.spawn(
        at com.cloudera.flume.agent.FlumeNode.setup(
        at com.cloudera.flume.agent.FlumeNode.main(

This was due to the fact that I compiled my plugin under Java 1.7 and then tried to run the .jar file under Java 1.6. This error did not appear in the Flume log when I ran it as a service using the command below, which was why I had no clue about what was going on:

$ service flume-node start

It only appeared when I ran it directly on the command line:

$ flume node -n collector1

Not exactly sure why, though. Anyway, it is now working; simply follow the steps outlined here.
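In hindsight, this kind of mismatch can be spotted up front by reading the class file's major version, which is what the error message's "51.0" refers to. A sketch (the `Sample.class` header bytes below simulate a Java 7-compiled class, purely for demonstration):

```shell
# A .class file starts with the magic number CAFEBABE, then a 2-byte minor
# version and a 2-byte big-endian major version: 50 = Java 6, 51 = Java 7.
# Simulate a Java 7 class header (major version 0x33 = 51):
printf '\312\376\272\276\000\000\000\063' > /tmp/Sample.class

# Real major versions are well below 256, so reading the single byte at
# offset 7 is enough. ("javap -verbose" reports the same "major version".)
major=$(od -An -j7 -N1 -t u1 /tmp/Sample.class | tr -d ' ')
echo "major version: $major"     # 51 -> compiled for Java 7
```

Running this against the plugin's actual class file would have shown 51 immediately, while the Java 6 runtime only accepts up to 50.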

This is my first Java code in the last 5 years, wow. I think I will spend more time writing more Flume plugins, as we need to push more jobs to Flume to do some post-processing for us.