The Oozie wordcount example fails with JA009: RPC response exceeds maximum data length. We doubled ipc.maximum.data.length and then restarted the NameNode. The Oozie server log shows:

2018-12-05 17:55:45,914  WARN MapReduceActionExecutor:523 - SERVER[******] USER[******] GROUP[-] TOKEN[] APP[map-reduce-wf] JOB[0000004-181205174411487-oozie-******-W] ACTION[0000004-181205174411487-oozie-******-W@mr-node] No credential properties found for action : 0000004-181205174411487-oozie-******-W@mr-node, cred : null
2018-12-05 18:10:46,019  WARN ActionStartXCommand:523 - SERVER[******] USER[******] GROUP[-] TOKEN[] APP[map-reduce-wf] JOB[0000004-181205174411487-oozie-******-W] ACTION[0000004-181205174411487-oozie-******-W@mr-node] Error starting action [mr-node]. ErrorType [TRANSIENT], ErrorCode [JA009], Message [JA009: RPC response exceeds maximum data length]
org.apache.oozie.action.ActionExecutorException: JA009: RPC response exceeds maximum data length
    at org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:463)
    at org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:437)
    at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1070)
    at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1512)
    at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:243)
    at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:68)
    at org.apache.oozie.command.XCommand.call(XCommand.java:290)
    at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:334)
    at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:263)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:181)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length
    at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1808)
    at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1163)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1059)

Any help would be greatly appreciated. Thanks.

Best answer

Have you tried modifying this setting in hdfs-site.xml?

<property>
     <name>ipc.maximum.data.length</name>
     <value>134217728</value>
</property>
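
Note that ipc.maximum.data.length is a Hadoop IPC setting (its default is 67108864 bytes, i.e. 64 MB), so it is also commonly placed in core-site.xml on the NameNode host. A minimal sketch, assuming the same doubled value and a NameNode restart afterwards:

<!-- core-site.xml on the NameNode host (sketch); 134217728 = 128 MB, double the 64 MB default -->
<property>
     <name>ipc.maximum.data.length</name>
     <value>134217728</value>
</property>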

If it is already high enough, make sure that fs.default.name in core-site.xml uses the machine's IP address rather than just localhost.
<configuration>
       ....
        <property>
                <name>fs.default.name</name>
                <value>hdfs://your-ip:9000</value>
        </property>
</configuration>
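
As a side note, fs.default.name is deprecated on Hadoop 2.x and later in favor of fs.defaultFS; the equivalent entry would look roughly like this (namenode-host and port 9000 are placeholders, not values taken from the question):

<configuration>
        <!-- fs.defaultFS replaces the deprecated fs.default.name -->
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://namenode-host:9000</value>
        </property>
</configuration>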
