While running a join in a Hive query, the reducer hangs at 68% complete and the following exception is thrown.
java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=1) {"key":{"joinkey0":"12"},"value":{"_col2":"rs317647905"},"alias":1}
at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:270)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:506)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:447)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=1) {"key":{"joinkey0":"12"},"value":{"_col2":"rs317647905"},"alias":1}
at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:258)
... 7 more
Caused by: org.apache.hadoop.
Here are my query and table schema:
create table table_llv_N_C as
select table_line_n_passed.chromosome_number,
       table_line_n_passed.position,
       table_line_c_passed.id
from table_line_n_passed
join table_line_c_passed
  on (table_line_n_passed.chromosome_number = table_line_c_passed.chromosome_number);
hive> desc table_line_n_passed;
OK
chromosome_number string
position int
id string
ref string
alt string
quality double
filter string
info string
format string
line6 string
Time taken: 0.854 seconds
Why does this error occur, and how can I fix it?
The full stack trace is given below.
2015-03-09 10:19:09,347 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: 7 forwarding 1797000000 rows
2015-03-09 10:19:09,919 INFO org.apache.hadoop.hive.ql.exec.JoinOperator: 6 forwarding 1798000000 rows
2015-03-09 10:19:09,919 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: 7 forwarding 1798000000 rows
2015-03-09 10:19:10,495 INFO org.apache.hadoop.hive.ql.exec.JoinOperator: 6 forwarding 1799000000 rows
2015-03-09 10:19:10,495 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: 7 forwarding 1799000000 rows
2015-03-09 10:19:11,069 INFO org.apache.hadoop.hive.ql.exec.JoinOperator: 6 forwarding 1800000000 rows
2015-03-09 10:19:11,069 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: 7 forwarding 1800000000 rows
2015-03-09 10:19:11,644 INFO org.apache.hadoop.hive.ql.exec.JoinOperator: 6 forwarding 1801000000 rows
2015-03-09 10:19:11,644 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: 7 forwarding 1801000000 rows
2015-03-09 10:19:12,229 INFO org.apache.hadoop.hive.ql.exec.JoinOperator: 6 forwarding 1802000000 rows
2015-03-09 10:19:12,229 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: 7 forwarding 1802000000 rows
2015-03-09 10:19:13,310 INFO org.apache.hadoop.hive.ql.exec.JoinOperator: 6 forwarding 1803000000 rows
2015-03-09 10:19:13,310 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: 7 forwarding 1803000000 rows
2015-03-09 10:19:13,666 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive-root/hive_2015-03-09_10-03-59_970_3646456754594156815-1/_task_tmp.-ext-10001/_tmp.000000_0 could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1361)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2362)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:501)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44954)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1760)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1756)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1754)
at org.apache.hadoop.ipc.Client.call(Client.java:1238)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
at $Proxy10.addBlock(Unknown Source)
at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
at $Proxy10.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:291)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1228)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1081)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:502)
2015-03-09 10:19:14,043 FATAL ExecReducer: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=1) {"key":{"joinkey0":"12"},"value":{"_col2":"."},"alias":1}
at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:258)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:506)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:447)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive-root/hive_2015-03-09_10-03-59_970_3646456754594156815-1/_task_tmp.-ext-10001/_tmp.000000_0 could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1361)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2362)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:501)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44954)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1760)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1756)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1754)
at org.apache.hadoop.hive.ql.exec.JoinOperator.processOp(JoinOperator.java:134)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)
at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:249)
... 7 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive-root/hive_2015-03-09_10-03-59_970_3646456754594156815-1/_task_tmp.-ext-10001/_tmp.000000_0 could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1361)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2362)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:501)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44954)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1760)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1756)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1754)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:620)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:803)
at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:803)
at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genUniqueJoinObject(CommonJoinOperator.java:742)
at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genUniqueJoinObject(CommonJoinOperator.java:745)
at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:847)
at org.apache.hadoop.hive.ql.exec.JoinOperator.processOp(JoinOperator.java:109)
... 9 more
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive-root/hive_2015-03-09_10-03-59_970_3646456754594156815-1/_task_tmp.-ext-10001/_tmp.000000_0 could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1361)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2362)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:501)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44954)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1760)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1756)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1754)
at org.apache.hadoop.ipc.Client.call(Client.java:1238)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
at $Proxy10.addBlock(Unknown Source)
at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
at $Proxy10.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:291)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1228)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1081)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:502)
2015-03-09 10:19:14,800 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2015-03-09 10:19:14,806 WARN org.apache.hadoop.mapred.Child: Error running child
java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=1) {"key":{"joinkey0":"12"},"value":{"_col2":"."},"alias":1}
at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:270)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:506)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:447)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=1) {"key":{"joinkey0":"12"},"value":{"_col2":"."},"alias":1}
at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:258)
... 7 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive-root/hive_2015-03-09_10-03-59_970_3646456754594156815-1/_task_tmp.-ext-10001/_tmp.000000_0 could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1361)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2362)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:501)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44954)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1760)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1756)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1754)
at org.apache.hadoop.hive.ql.exec.JoinOperator.processOp(JoinOperator.java:134)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)
at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:249)
... 7 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive-root/hive_2015-03-09_10-03-59_970_3646456754594156815-1/_task_tmp.-ext-10001/_tmp.000000_0 could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1361)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2362)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:501)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44954)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1760)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1756)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1754)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:620)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:803)
at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:803)
at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genUniqueJoinObject(CommonJoinOperator.java:742)
at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genUniqueJoinObject(CommonJoinOperator.java:745)
at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:847)
Best answer
The root cause is most likely a lack of disk space in your HDFS cluster, based on the fact that the query only appears to fail after running for a while, combined with this message from the stack trace:
... could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.
That message tends to show up either when there is a network-communication problem (for example, a lost connection to a datanode) or when HDFS cannot complete a write because it cannot find a datanode with free blocks. Since your query does start up successfully, that tends to rule out a network problem for me; instead, your Hive query appears to be running out of disk space while generating the table. You may want to check the current usage on your cluster, which you can do through a tool such as Ambari (if installed) or from the command line with one of the following:
hdfs dfs -df -h
or, if you are running an older version, something like this:
hadoop fs -df -h
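As a fuller sketch of the capacity check described above (a suggestion, not from the original post; all commands are standard Hadoop 2.x CLI, and the scratch path is taken from the stack trace, so adjust it if `hive.exec.scratchdir` was customized on your cluster):

```shell
# Per-datanode capacity report: look at "DFS Remaining" for each of
# the 2 datanodes; if either is near zero, block allocation will fail
# exactly as in the stack trace.
hdfs dfsadmin -report

# Overall filesystem free space, human-readable.
hdfs dfs -df -h

# Size of Hive's scratch directory, where the failing _tmp.000000_0
# file was being written; stale job directories left over from killed
# queries can be removed once no queries are running.
hdfs dfs -du -h /tmp/hive-root
```

If space really is the problem, freeing capacity (or adding datanodes) and re-running the query is the straightforward fix; the join itself is syntactically fine.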
Regarding "hadoop - Row exception in Hive while using join", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/28874090/