Recently I was working on an issue where, when importing data from Oracle into a Hive table using Spark, columns of type Number(1,0) in Oracle were implicitly converted to the Boolean data type. On CDH5.5.x this worked correctly; after upgrading to CDH5.10.x, the issue appeared. See the Hive table output after import below:
On CDH5.5.x (before the upgrade):

SELECT column1 FROM test_table LIMIT 2;
0
1

On CDH5.10.x (after the upgrade):

SELECT column1 FROM test_table LIMIT 2;
False
True
After digging further, I discovered that this change was introduced by SPARK-16625, which adjusted Spark's type mappings so that it integrates correctly with Oracle.
Since the change was intentional, the following workarounds are suggested:
- Cast the Boolean to a type of your choosing in the Spark code before writing it to the Hive table.
- Make sure that the mapped column in Hive is of a compatible data type, for example TinyInt rather than String, so that True or False is stored as 1 or 0 respectively, rather than as the string “True” or “False”. (The column above showed “False” and “True” because it was of String data type.)
Hope the above helps.