Unable to import Oracle table with CLOB column into HDFS using Sqoop

If you encounter the following error while importing an Oracle table into HDFS:

org.apache.sqoop.mapreduce.TextExportMapper: Exception: java.io.IOException: Could not buffer record
	at org.apache.sqoop.mapreduce.AsyncSqlRecordWriter.write(AsyncSqlRecordWriter.java:218)
	at org.apache.sqoop.mapreduce.AsyncSqlRecordWriter.write(AsyncSqlRecordWriter.java:46)
	at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:655)
	at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
	at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
	at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:84)
	at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
	at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
	at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
	…
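The excerpt cuts off before the fix. As a rough sketch, a commonly suggested workaround when a CLOB column trips up Sqoop is to have Sqoop map that column to a plain Java String with --map-column-java. The connection string, credentials, table, column, and target directory below are placeholders, not values from the article:

	sqoop import \
	  --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
	  --username scott \
	  --password-file /user/scott/.password \
	  --table MY_SCHEMA.MY_TABLE \
	  --map-column-java MY_CLOB_COL=String \
	  --target-dir /data/my_table \
	  -m 1

With the column mapped to String, Sqoop serializes the CLOB contents as text instead of handling it as a large object, which sidesteps the buffering failure in many setups.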

Hive query failed with error: "Killing the Job. mapResourceReqt: 1638 maxContainerCapability:1200"

This article explains how to fix the following error when running a Hive query:

MAP capability required is more than the supported max container capability in the cluster. Killing the Job. mapResourceRequest: <memory:1638, vCores:1> maxContainerCapability:<memory:1200, vCores:2>

This error might not be obvious; however, it is caused by the following config …
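The excerpt does not name the offending config. As an assumption based on the numbers in the message, the map container request (mapreduce.map.memory.mb, here 1638 MB) exceeds the YARN per-container maximum (yarn.scheduler.maximum-allocation-mb, here 1200 MB). Under that assumption, one quick check is to lower the request for a single run; the query file name below is a placeholder:

	hive --hiveconf mapreduce.map.memory.mb=1024 \
	     --hiveconf mapreduce.map.java.opts=-Xmx820m \
	     -f my_query.hql

Alternatively, raising yarn.scheduler.maximum-allocation-mb in yarn-site.xml above the requested 1638 MB (and restarting the ResourceManager) would let the original request be granted.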