Problem description
I am new to Hadoop. I am trying to set up pseudo-distributed mode on my Ubuntu machine and am facing an issue with the hadoop put command. My configuration details are available in this post --> What the command "hadoop namenode -format" will do
Now I am trying to add some files to HDFS using the commands below:
hadoop fs -mkdir /user/myuser
hadoop fs -lsr /
$ ./hadoop fs -lsr /
drwxr-xr-x   - myuser supergroup          0 2014-11-26 16:04 /tmp
drwxr-xr-x   - myuser supergroup          0 2014-11-26 16:04 /tmp/hadoop-myuser
drwxr-xr-x   - myuser supergroup          0 2014-11-26 16:04 /tmp/hadoop-myuser/dfs
-rw-r--r--   1 myuser supergroup          0 2014-11-26 16:04 /tmp/hadoop-myuser/dfs/name
drwxr-xr-x   - myuser supergroup          0 2014-11-26 16:04 /tmp/hadoop-myuser/mapred
drwx------   - myuser supergroup          0 2014-11-26 16:12 /tmp/hadoop-myuser/mapred/system
drwxr-xr-x   - myuser supergroup          0 2014-11-26 16:04 /user
drwxr-xr-x   - myuser supergroup          0 2014-11-26 16:06 /user/myuser
Now I am running the put command but getting an exception like this:
$ ./hadoop fs -put example.txt .
14/11/26 16:06:19 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/myuser/example.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
    at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
    at org.apache.hadoop.ipc.Client.call(Client.java:1113)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
    at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3720)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3580)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2783)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3023)
14/11/26 16:06:19 WARN hdfs.DFSClient: Error Recovery for null bad datanode[0] nodes == null
14/11/26 16:06:19 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/myuser/example.txt" - Aborting...
put: java.io.IOException: File /user/myuser/example.txt could only be replicated to 0 nodes, instead of 1
14/11/26 16:06:19 ERROR hdfs.DFSClient: Failed to close file /user/myuser/example.txt
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/myuser/example.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
    at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
    at org.apache.hadoop.ipc.Client.call(Client.java:1113)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
    at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3720)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3580)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2783)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3023)
Can someone please help me fix this issue?
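For context, the "could only be replicated to 0 nodes, instead of 1" message generally means the NameNode cannot find any live DataNode to write the block to. A quick check (my addition, not part of the original post) is to ask the NameNode for a cluster report:
$ ./hadoop dfsadmin -report
# If the report shows "Datanodes available: 0", no DataNode has registered with the NameNode, which matches the exception above.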
Solution to the issue:
Based on the answers provided, I was able to solve the issue by following the steps below:
1) Stop all services:
./stop-all.sh
2) Delete the data directory:
rm -rf /tmp/hadoop-myuser/dfs/data/
3) Start the services:
./start-all.sh
4) Then put the file into HDFS (a quick verification check follows these steps):
./hadoop fs -put example.txt .
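As a sanity check after step 4 (an addition of mine, not one of the original steps), listing the home directory should now show the uploaded file:
$ ./hadoop fs -ls /user/myuser
# expect an entry for /user/myuser/example.txt with replication factor 1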
Solution: This is due to a DataNode problem. Start your DataNode and do the operation again.
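A minimal sketch of acting on this answer, assuming the same Hadoop 1.x layout as in the question (scripts run from the Hadoop bin directory), is to start the DataNode daemon by hand and confirm the process is up before retrying the put:
$ ./hadoop-daemon.sh start datanode    # starts only the DataNode daemon
$ jps                                  # should now list a DataNode process
$ ./hadoop fs -put example.txt .       # retry the original command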