When I run jps on the namenode:

stillily@localhost:~$ jps
3669 SecondaryNameNode
3830 ResourceManager
3447 NameNode
4362 Jps

When I run jps on the datanode:
stillily@localhost:~$ jps
3574 Jps
3417 NodeManager
3292 DataNode

But when I put a file:
stillily@localhost:~$ hadoop fs  -put txt hdfs://hadoop:9000/txt
15/07/21 22:08:32 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1550)
...
put: File /txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
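
The message "There are 0 datanode(s) running" means the DataNode process is alive on its own machine but has never registered with the namenode. Two quick sanity checks (standard Hadoop 2.x commands; the log path assumes the default layout under $HADOOP_HOME/logs):

stillily@localhost:~$ hdfs dfsadmin -report
# the live-datanode count here should be 0, matching the error above
stillily@localhost:~$ tail -n 50 $HADOOP_HOME/logs/hadoop-*-datanode-*.log
# look for repeated connect/retry errors toward the namenode address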

I also noticed that there is no VERSION file on the datanode machine, even though a VERSION file is created no matter how many times I run hadoop namenode -format.
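
(For context: the datanode's VERSION file is written under its data directory, typically ${dfs.datanode.data.dir}/current/VERSION, and only appears after the datanode successfully registers with the namenode and receives the cluster ID; hadoop namenode -format only recreates the namenode-side VERSION file. A check on the datanode, with a placeholder data directory:

stillily@localhost:~$ cat /home/stillily/hadoop/dfs/data/current/VERSION
# substitute your configured dfs.datanode.data.dir for the placeholder path
)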

BTW, this is on Ubuntu.

Best answer

Now I know the cause: the VM's IP address had changed. I had updated /etc/hosts on the namenode, but not on the datanode.
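
In other words, every node needs a consistent /etc/hosts mapping for the namenode host ("hadoop" in the put URI above). A sketch of the entry that must be identical on the namenode and on each datanode (the IP is a placeholder for the VM's current address):

192.168.1.10    hadoop    # namenode's current IP; keep this line in sync on all nodes

After fixing /etc/hosts on the datanode, restart the datanode and verify with hdfs dfsadmin -report that the live-datanode count is no longer 0.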

Regarding "ubuntu - Why is my datanode running in the hadoop cluster, but I still can't put files into hdfs?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/31541768/
