I have set up a 2-node cluster with Cloudera Manager 5.4.1 in VMware Workstation, including HBase, Impala, Hive, Sqoop2, Oozie, ZooKeeper, NameNode, SecondaryNameNode, and YARN.
For each node I have provisioned three virtual disks: sda for the operating system, and sdb and sdc for Hadoop storage.

As allocated, a 16 GB sdb1 and a 16 GB sdc1 are dedicated to Hadoop storage on each node, so I assumed the total HDFS capacity across both nodes would be 64 GB. However, when I check the output of the dfsadmin command and the NameNode UI, I see that the Configured Capacity is smaller than the raw disk space allocated to HDFS.
Below is the output of the dfsadmin command, along with the output of df -h. Please help me understand why the Configured Capacity is smaller than my raw disk size.

[hduser@node1 ~]$ df -h


Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg_node1-LogVol00   40G   15G   23G  39% /
tmpfs                          3.9G   76K  3.9G   1% /dev/shm
/dev/sda1                      388M   39M  329M  11% /boot
/dev/sdb1                       16G  283M   15G   2% /disks/disk1/hdfsstorage/dfs
/dev/sdc1                       16G  428M   15G   3% /disks/disk2/hdfsstorage/dfs
/dev/sdb2                      8.1G  147M  7.9G   2% /disks/disk1/nonhdfsstorage
/dev/sdc2                      8.1G  147M  7.9G   2% /disks/disk2/nonhdfsstorage
cm_processes                   3.9G  5.8M  3.9G   1% /var/run/cloudera-scm-agent/process
[hduser@node1 ~]$


[hduser@node1 zookeeper]$ sudo -u hdfs hdfs dfsadmin -report
[sudo] password for hduser:
Configured Capacity: 47518140008 (44.25 GB)
Present Capacity: 47518140008 (44.25 GB)
DFS Remaining: 46728742571 (43.52 GB)
DFS Used: 789397437 (752.83 MB)
DFS Used%: 1.66%
Under replicated blocks: 385
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (2):

Name: 192.168.52.111:50010 (node1.example.com)
Hostname: node1.example.com
Rack: /default
Decommission Status : Normal
Configured Capacity: 23759070004 (22.13 GB)
DFS Used: 394702781 (376.42 MB)
Non DFS Used: 0 (0 B)
DFS Remaining: 23364367223 (21.76 GB)
DFS Used%: 1.66%
DFS Remaining%: 98.34%
Configured Cache Capacity: 121634816 (116 MB)
Cache Used: 0 (0 B)
Cache Remaining: 121634816 (116 MB)
Cache Used%: 0.00%
Cache Remaining%: 100.00%
Xceivers: 2
Last contact: Sun May 15 18:15:33 IST 2016


Name: 192.168.52.112:50010 (node2.example.com)
Hostname: node2.example.com
Rack: /default
Decommission Status : Normal
Configured Capacity: 23759070004 (22.13 GB)
DFS Used: 394694656 (376.41 MB)
Non DFS Used: 0 (0 B)
DFS Remaining: 23364375348 (21.76 GB)
DFS Used%: 1.66%
DFS Remaining%: 98.34%
Configured Cache Capacity: 523239424 (499 MB)
Cache Used: 0 (0 B)
Cache Remaining: 523239424 (499 MB)
Cache Used%: 0.00%
Cache Remaining%: 100.00%
Xceivers: 2
Last contact: Sun May 15 18:15:32 IST 2016

Best Answer

You should check this configuration:

<property>
  <name>dfs.datanode.du.reserved</name>
  <value>0</value>
  <description>Reserved space in bytes per volume. Always leave this much space free for non dfs use.
  </description>
</property>

The reserved space is not counted as part of the Configured Capacity.
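As a rough sanity check, the numbers in the report above are consistent with a few GiB being held back per volume. This is only a sketch: it assumes each df-reported "16G" volume is about 16 GiB (df -h rounds, and ext4 formatting overhead also consumes some space), so the derived per-volume figure is approximate.

```python
# Back-of-the-envelope check using the figures from the dfsadmin report.
GIB = 1024 ** 3

volume_size = 16 * GIB                 # approximate size of sdb1 / sdc1 per df -h
volumes_per_node = 2
node_configured = 23_759_070_004       # "Configured Capacity" per DataNode (bytes)

raw_per_node = volume_size * volumes_per_node
implied_reserved_per_volume = (raw_per_node - node_configured) / volumes_per_node

print(f"raw per node:                {raw_per_node / GIB:.2f} GiB")        # 32.00 GiB
print(f"configured per node:         {node_configured / GIB:.2f} GiB")     # 22.13 GiB
print(f"implied reserved per volume: {implied_reserved_per_volume / GIB:.2f} GiB")
```

The roughly 5 GiB gap per volume matches space that HDFS excludes from its capacity, such as a non-zero dfs.datanode.du.reserved set by Cloudera Manager.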

Regarding "hadoop - HDFS Configured Capacity smaller than raw disk size according to dfsadmin": a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/37238765/
