Pseudo-distributed single-node installation: running the pi example fails:
[root@server- ~]# ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2..jar pi
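(Note: the pi example takes two arguments, the number of maps and the number of samples per map; the values used here were lost from this transcript. A hypothetical invocation, with 10 and 100 as illustrative values only, would look like:
[root@server- ~]# ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 10 100
)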
Error output:
Number of Maps =
Samples per Map =
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Wrote input for Map #
Wrote input for Map #
Wrote input for Map #
Wrote input for Map #
Wrote input for Map #
Starting Job
// :: INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:
// :: INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0004
// :: ERROR security.UserGroupInformation: PriviledgedActionException as:root (auth:SIMPLE) cause:org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/root/hadoop-2.2./QuasiMonteCarlo_1386644665974_1643821138/in
org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/root/hadoop-2.2./QuasiMonteCarlo_1386644665974_1643821138/in
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:)
at org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:)
at org.apache.hadoop.mapreduce.Job$.run(Job.java:)
at org.apache.hadoop.mapreduce.Job$.run(Job.java:)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:)
at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:)
at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:)
at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.util.RunJar.main(RunJar.java:)
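The stack trace shows FileInputFormat.listStatus failing on a file:/ path under the local Hadoop install directory, even though in a pseudo-distributed setup the example's scratch directory should live on HDFS. To see where the path resolution goes wrong, turn on client-side debug logging and re-run the job.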
Enable debug logging:
[root@server- hadoop-2.2.]# export HADOOP_ROOT_LOGGER=DEBUG,console
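(HADOOP_ROOT_LOGGER=DEBUG,console raises the root log level of the Hadoop client processes launched from this shell and sends the output to the console. It only affects the current shell session; it can also be set for a single command, e.g. this illustrative one:
[root@server- hadoop-2.2.]# HADOOP_ROOT_LOGGER=DEBUG,console ./bin/hadoop fs -ls /
)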
Detailed error output (with debug logging enabled):
[root@server- hadoop-2.2.]# ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2..jar pi
Number of Maps =
Samples per Map =
// :: DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)], about=, type=DEFAULT, always=false, sampleName=Ops)
// :: DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)], about=, type=DEFAULT, always=false, sampleName=Ops)
// :: DEBUG impl.MetricsSystemImpl: UgiMetrics, User and group related metrics
// :: DEBUG security.Groups: Creating new Groups object
// :: DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
// :: DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path
// :: DEBUG util.NativeCodeLoader: java.library.path=/root/hadoop-2.2./lib
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
// :: DEBUG security.JniBasedUnixGroupsMappingWithFallback: Falling back to shell based
// :: DEBUG security.JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
// :: DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=
// :: DEBUG security.UserGroupInformation: hadoop login
// :: DEBUG security.UserGroupInformation: hadoop login commit
// :: DEBUG security.UserGroupInformation: using local user:UnixPrincipal: root
// :: DEBUG security.UserGroupInformation: UGI loginUser:root (auth:SIMPLE)
// :: DEBUG util.Shell: setsid exited with exit code
// :: DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
// :: DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
// :: DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
// :: DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path =
// :: DEBUG impl.MetricsSystemImpl: StartupProgress, NameNode startup progress
// :: DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
// :: DEBUG ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@46ea3050
// :: DEBUG hdfs.BlockReaderLocal: Both short-circuit local reads and UNIX domain socket are disabled.
// :: DEBUG ipc.Client: The ping interval is ms.
// :: DEBUG ipc.Client: Connecting to /10.10.96.33:
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root: starting, having connections
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root sending #
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 33ms
// :: DEBUG hdfs.DFSClient: /user/root/QuasiMonteCarlo_1386646614155_1445162438/in: masked=rwxr-xr-x
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root sending #
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: mkdirs took 25ms
// :: DEBUG hdfs.DFSClient: /user/root/QuasiMonteCarlo_1386646614155_1445162438/in/part0: masked=rw-r--r--
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root sending #
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: create took 5ms
// :: DEBUG hdfs.DFSClient: computePacketChunkSize: src=/user/root/QuasiMonteCarlo_1386646614155_1445162438/in/part0, chunkSize=, chunksPerPacket=, packetSize=
// :: DEBUG hdfs.LeaseRenewer: Lease renewer daemon for [DFSClient_NONMAPREDUCE_-379311577_1] with renew id started
// :: DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=, src=/user/root/QuasiMonteCarlo_1386646614155_1445162438/in/part0, packetSize=, chunksPerPacket=, bytesCurBlock=
// :: DEBUG hdfs.DFSClient: Queued packet
// :: DEBUG hdfs.DFSClient: Queued packet
// :: DEBUG hdfs.DFSClient: Waiting for ack for:
// :: DEBUG hdfs.DFSClient: Allocating new block
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root sending #
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: addBlock took 2ms
// :: DEBUG hdfs.DFSClient: pipeline = 10.10.96.33:
// :: DEBUG hdfs.DFSClient: Connecting to datanode 10.10.96.33:
// :: DEBUG hdfs.DFSClient: Send buf size
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root sending #
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: getServerDefaults took 1ms
// :: DEBUG hdfs.DFSClient: DataStreamer block BP--10.10.96.33-:blk_1073741859_1035 sending packet packet seqno: offsetInBlock: lastPacketInBlock:false lastByteOffsetInBlock:
// :: DEBUG hdfs.DFSClient: DFSClient seqno: status: SUCCESS downstreamAckTimeNanos:
// :: DEBUG hdfs.DFSClient: DataStreamer block BP--10.10.96.33-:blk_1073741859_1035 sending packet packet seqno: offsetInBlock: lastPacketInBlock:true lastByteOffsetInBlock:
// :: DEBUG hdfs.DFSClient: DFSClient seqno: status: SUCCESS downstreamAckTimeNanos:
// :: DEBUG hdfs.DFSClient: Closing old block BP--10.10.96.33-:blk_1073741859_1035
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root sending #
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: complete took 13ms
Wrote input for Map #
// :: DEBUG hdfs.DFSClient: /user/root/QuasiMonteCarlo_1386646614155_1445162438/in/part1: masked=rw-r--r--
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root sending #
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: create took 12ms
// :: DEBUG hdfs.DFSClient: computePacketChunkSize: src=/user/root/QuasiMonteCarlo_1386646614155_1445162438/in/part1, chunkSize=, chunksPerPacket=, packetSize=
// :: DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=, src=/user/root/QuasiMonteCarlo_1386646614155_1445162438/in/part1, packetSize=, chunksPerPacket=, bytesCurBlock=
// :: DEBUG hdfs.DFSClient: Queued packet
// :: DEBUG hdfs.DFSClient: Queued packet
// :: DEBUG hdfs.DFSClient: Waiting for ack for:
// :: DEBUG hdfs.DFSClient: Allocating new block
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root sending #
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: addBlock took 3ms
// :: DEBUG hdfs.DFSClient: pipeline = 10.10.96.33:
// :: DEBUG hdfs.DFSClient: Connecting to datanode 10.10.96.33:
// :: DEBUG hdfs.DFSClient: Send buf size
// :: DEBUG hdfs.DFSClient: DataStreamer block BP--10.10.96.33-:blk_1073741860_1036 sending packet packet seqno: offsetInBlock: lastPacketInBlock:false lastByteOffsetInBlock:
// :: DEBUG hdfs.DFSClient: DFSClient seqno: status: SUCCESS downstreamAckTimeNanos:
// :: DEBUG hdfs.DFSClient: DataStreamer block BP--10.10.96.33-:blk_1073741860_1036 sending packet packet seqno: offsetInBlock: lastPacketInBlock:true lastByteOffsetInBlock:
// :: DEBUG hdfs.DFSClient: DFSClient seqno: status: SUCCESS downstreamAckTimeNanos:
// :: DEBUG hdfs.DFSClient: Closing old block BP--10.10.96.33-:blk_1073741860_1036
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root sending #
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: complete took 8ms
Wrote input for Map #
// :: DEBUG hdfs.DFSClient: /user/root/QuasiMonteCarlo_1386646614155_1445162438/in/part2: masked=rw-r--r--
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root sending #
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: create took 4ms
// :: DEBUG hdfs.DFSClient: computePacketChunkSize: src=/user/root/QuasiMonteCarlo_1386646614155_1445162438/in/part2, chunkSize=, chunksPerPacket=, packetSize=
// :: DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=, src=/user/root/QuasiMonteCarlo_1386646614155_1445162438/in/part2, packetSize=, chunksPerPacket=, bytesCurBlock=
// :: DEBUG hdfs.DFSClient: Queued packet
// :: DEBUG hdfs.DFSClient: Queued packet
// :: DEBUG hdfs.DFSClient: Waiting for ack for:
// :: DEBUG hdfs.DFSClient: Allocating new block
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root sending #
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: addBlock took 0ms
// :: DEBUG hdfs.DFSClient: pipeline = 10.10.96.33:
// :: DEBUG hdfs.DFSClient: Connecting to datanode 10.10.96.33:
// :: DEBUG hdfs.DFSClient: Send buf size
// :: DEBUG hdfs.DFSClient: DataStreamer block BP--10.10.96.33-:blk_1073741861_1037 sending packet packet seqno: offsetInBlock: lastPacketInBlock:false lastByteOffsetInBlock:
// :: DEBUG hdfs.DFSClient: DFSClient seqno: status: SUCCESS downstreamAckTimeNanos:
// :: DEBUG hdfs.DFSClient: DataStreamer block BP--10.10.96.33-:blk_1073741861_1037 sending packet packet seqno: offsetInBlock: lastPacketInBlock:true lastByteOffsetInBlock:
// :: DEBUG hdfs.DFSClient: DFSClient seqno: status: SUCCESS downstreamAckTimeNanos:
// :: DEBUG hdfs.DFSClient: Closing old block BP--10.10.96.33-:blk_1073741861_1037
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root sending #
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: complete took 12ms
Wrote input for Map #
Starting Job
// :: DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.connect(Job.java:)
// :: DEBUG mapreduce.Cluster: Trying ClientProtocolProvider : org.apache.hadoop.mapred.YarnClientProtocolProvider
// :: DEBUG service.AbstractService: Service: org.apache.hadoop.mapred.ResourceMgrDelegate entered state INITED
// :: DEBUG service.AbstractService: Service: org.apache.hadoop.yarn.client.api.impl.YarnClientImpl entered state INITED
// :: DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.yarn.client.RMProxy.getProxy(RMProxy.java:)
// :: DEBUG ipc.YarnRPC: Creating YarnRPC for org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
// :: DEBUG ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ApplicationClientProtocol
// :: INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:
// :: DEBUG service.AbstractService: Service org.apache.hadoop.yarn.client.api.impl.YarnClientImpl is started
// :: DEBUG service.AbstractService: Service org.apache.hadoop.mapred.ResourceMgrDelegate is started
// :: DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:)
// :: DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
// :: DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
// :: DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
// :: DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path =
// :: DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
// :: DEBUG hdfs.BlockReaderLocal: Both short-circuit local reads and UNIX domain socket are disabled.
// :: DEBUG mapreduce.Cluster: Picked org.apache.hadoop.mapred.YarnClientProtocolProvider as the ClientProtocolProvider
// :: DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Cluster.getFileSystem(Cluster.java:)
// :: DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.submit(Job.java:)
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root sending #
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
// :: DEBUG mapred.ResourceMgrDelegate: getStagingAreaDir: dir=/tmp/hadoop-yarn/staging/root/.staging
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root sending #
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root sending #
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
// :: DEBUG ipc.Client: The ping interval is ms.
// :: DEBUG ipc.Client: Connecting to /0.0.0.0:
// :: DEBUG ipc.Client: IPC Client () connection to /0.0.0.0: from root: starting, having connections
// :: DEBUG ipc.Client: IPC Client () connection to /0.0.0.0: from root sending #
// :: DEBUG ipc.Client: IPC Client () connection to /0.0.0.0: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: getNewApplication took 7ms
// :: DEBUG mapreduce.JobSubmitter: Configuring job job_1386598961500_0007 with /tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007 as the submit dir
// :: DEBUG mapreduce.JobSubmitter: adding the following namenodes' delegation tokens:[hdfs://10.10.96.33:8020]
// :: DEBUG mapreduce.JobSubmitter: default FileSystem: hdfs://10.10.96.33:8020
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root sending #
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 2ms
// :: DEBUG hdfs.DFSClient: /tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007: masked=rwxr-xr-x
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root sending #
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: mkdirs took 7ms
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root sending #
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: setPermission took 6ms
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root sending #
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
// :: DEBUG hdfs.DFSClient: /tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar: masked=rw-r--r--
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root sending #
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: create took 13ms
// :: DEBUG hdfs.DFSClient: computePacketChunkSize: src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, chunkSize=, chunksPerPacket=, packetSize=
// :: DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=, src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, packetSize=, chunksPerPacket=, bytesCurBlock=
// :: DEBUG hdfs.DFSClient: DFSClient writeChunk packet full seqno=, src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, bytesCurBlock=, blockSize=, appendChunk=false
// :: DEBUG hdfs.DFSClient: Queued packet
// :: DEBUG hdfs.DFSClient: computePacketChunkSize: src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, chunkSize=, chunksPerPacket=, packetSize=
// :: DEBUG hdfs.DFSClient: Allocating new block
// :: DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=, src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, packetSize=, chunksPerPacket=, bytesCurBlock=
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root sending #
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: addBlock took 1ms
// :: DEBUG hdfs.DFSClient: pipeline = 10.10.96.33:
// :: DEBUG hdfs.DFSClient: Connecting to datanode 10.10.96.33:
// :: DEBUG hdfs.DFSClient: Send buf size
// :: DEBUG hdfs.DFSClient: DataStreamer block BP--10.10.96.33-:blk_1073741862_1038 sending packet packet seqno: offsetInBlock: lastPacketInBlock:false lastByteOffsetInBlock:
// :: DEBUG hdfs.DFSClient: DFSClient writeChunk packet full seqno=, src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, bytesCurBlock=, blockSize=, appendChunk=false
// :: DEBUG hdfs.DFSClient: Queued packet
// :: DEBUG hdfs.DFSClient: computePacketChunkSize: src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, chunkSize=, chunksPerPacket=, packetSize=
// :: DEBUG hdfs.DFSClient: DataStreamer block BP--10.10.96.33-:blk_1073741862_1038 sending packet packet seqno: offsetInBlock: lastPacketInBlock:false lastByteOffsetInBlock:
// :: DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=, src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, packetSize=, chunksPerPacket=, bytesCurBlock=
// :: DEBUG hdfs.DFSClient: DFSClient seqno: status: SUCCESS downstreamAckTimeNanos:
// :: DEBUG hdfs.DFSClient: DFSClient seqno: status: SUCCESS downstreamAckTimeNanos:
// :: DEBUG hdfs.DFSClient: DFSClient writeChunk packet full seqno=, src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, bytesCurBlock=, blockSize=, appendChunk=false
// :: DEBUG hdfs.DFSClient: Queued packet
// :: DEBUG hdfs.DFSClient: computePacketChunkSize: src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, chunkSize=, chunksPerPacket=, packetSize=
// :: DEBUG hdfs.DFSClient: DataStreamer block BP--10.10.96.33-:blk_1073741862_1038 sending packet packet seqno: offsetInBlock: lastPacketInBlock:false lastByteOffsetInBlock:
// :: DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=, src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, packetSize=, chunksPerPacket=, bytesCurBlock=
// :: DEBUG hdfs.DFSClient: DFSClient seqno: status: SUCCESS downstreamAckTimeNanos:
// :: DEBUG hdfs.DFSClient: DFSClient writeChunk packet full seqno=, src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, bytesCurBlock=, blockSize=, appendChunk=false
// :: DEBUG hdfs.DFSClient: Queued packet
// :: DEBUG hdfs.DFSClient: computePacketChunkSize: src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, chunkSize=, chunksPerPacket=, packetSize=
// :: DEBUG hdfs.DFSClient: DataStreamer block BP--10.10.96.33-:blk_1073741862_1038 sending packet packet seqno: offsetInBlock: lastPacketInBlock:false lastByteOffsetInBlock:
// :: DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=, src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, packetSize=, chunksPerPacket=, bytesCurBlock=
// :: DEBUG hdfs.DFSClient: Queued packet
// :: DEBUG hdfs.DFSClient: Queued packet
// :: DEBUG hdfs.DFSClient: Waiting for ack for:
// :: DEBUG hdfs.DFSClient: DataStreamer block BP--10.10.96.33-:blk_1073741862_1038 sending packet packet seqno: offsetInBlock: lastPacketInBlock:false lastByteOffsetInBlock:
// :: DEBUG hdfs.DFSClient: DFSClient seqno: status: SUCCESS downstreamAckTimeNanos:
// :: DEBUG hdfs.DFSClient: DFSClient seqno: status: SUCCESS downstreamAckTimeNanos:
// :: DEBUG hdfs.DFSClient: DataStreamer block BP--10.10.96.33-:blk_1073741862_1038 sending packet packet seqno: offsetInBlock: lastPacketInBlock:true lastByteOffsetInBlock:
// :: DEBUG hdfs.DFSClient: DFSClient seqno: status: SUCCESS downstreamAckTimeNanos:
// :: DEBUG hdfs.DFSClient: Closing old block BP--10.10.96.33-:blk_1073741862_1038
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root sending #
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: complete took 6ms
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root sending #
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: setReplication took 6ms
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root sending #
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: setPermission took 12ms
// :: DEBUG mapreduce.JobSubmitter: Creating splits at hdfs://10.10.96.33:8020/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007
// :: INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root sending #
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: delete took 11ms
// :: ERROR security.UserGroupInformation: PriviledgedActionException as:root (auth:SIMPLE) cause:org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/root/hadoop-2.2./QuasiMonteCarlo_1386646614155_1445162438/in
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root sending #
// :: DEBUG ipc.Client: IPC Client () connection to /10.10.96.33: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: delete took 12ms
org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/root/hadoop-2.2./QuasiMonteCarlo_1386646614155_1445162438/in
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:)
at org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:)
at org.apache.hadoop.mapreduce.Job$.run(Job.java:)
at org.apache.hadoop.mapreduce.Job$.run(Job.java:)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:)
at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:)
at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:)
at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.util.RunJar.main(RunJar.java:)
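Reading the debug log: the inputs are written to HDFS successfully. The DFSClient creates /user/root/QuasiMonteCarlo_1386646614155_1445162438/in/part0 through part2 against the namenode at 10.10.96.33:8020, and JobSubmitter even reports "default FileSystem: hdfs://10.10.96.33:8020". Yet when the splits are computed, FileInputFormat looks for file:/root/hadoop-2.2./QuasiMonteCarlo_1386646614155_1445162438/in, i.e. the same relative scratch path resolved against the local filesystem and the local working directory instead of against HDFS and /user/root. One likely explanation (an assumption, not something this log proves on its own) is that the configuration visible to the job client at split time falls back to the default fs.defaultFS of file:///, for example because core-site.xml is missing a defaultFS entry or is not on the client's classpath. A minimal core-site.xml sketch for this setup, assuming the namenode address taken from the log above, would be:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://10.10.96.33:8020</value>
  </property>
</configuration>

The line "Connecting to ResourceManager at /0.0.0.0" similarly suggests that yarn-site.xml does not set yarn.resourcemanager.hostname (or the individual yarn.resourcemanager.*.address properties), so the client falls back to the 0.0.0.0 default; that happens to work on a single node but is worth fixing as well.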