I am trying to set up a Hadoop 3 cluster on a local network of machines, starting small with one master node and two worker nodes.
Following the tutorial "configure hadoop 3.1.0 in multinodes cluster", I think I have something that should work.
I downloaded Hadoop version 3.1.1.
The dfsadmin report:

hadoop@######:~/hadoop3/hadoop-3.1.1$ hdfs dfsadmin -report
Configured Capacity: 1845878235136 (1.68 TB)
Present Capacity: 355431677952 (331.02 GB)
DFS Remaining: 355427651584 (331.02 GB)
DFS Used: 4026368 (3.84 MB)
DFS Used%: 0.00%
Replicated Blocks:
    Under replicated blocks: 6
    Blocks with corrupt replicas: 0
    Missing blocks: 0
    Missing blocks (with replication factor 1): 0
    Pending deletion blocks: 0
Erasure Coded Block Groups:
    Low redundancy block groups: 0
    Block groups with corrupt internal blocks: 0
    Missing block groups: 0
    Pending deletion blocks: 0

-------------------------------------------------
Live datanodes (2):

Name: ######:9866 (######)
Hostname: ######
Decommission Status : Normal
Configured Capacity: 147511238656 (137.38 GB)
DFS Used: 2150400 (2.05 MB)
Non DFS Used: 46601465856 (43.40 GB)
DFS Remaining: 93390856192 (86.98 GB)
DFS Used%: 0.00%
DFS Remaining%: 63.31%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Sep 06 18:44:21 CEST 2018
Last Block Report: Thu Sep 06 18:08:09 CEST 2018
Num of Blocks: 17


Name: ######:9866 (######)
Hostname: ######
Decommission Status : Normal
Configured Capacity: 1698366996480 (1.54 TB)
DFS Used: 1875968 (1.79 MB)
Non DFS Used: 1350032670720 (1.23 TB)
DFS Remaining: 262036795392 (244.04 GB)
DFS Used%: 0.00%
DFS Remaining%: 15.43%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Sep 06 18:44:22 CEST 2018
Last Block Report: Thu Sep 06 18:08:10 CEST 2018
Num of Blocks: 12
So, before going further and tuning resource management, I tried to run a simple test, and it failed.
Here is the pi example test:
hadoop@#####:~/hadoop3/hadoop-3.1.1$ ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar pi 2 10
Number of Maps  = 2
Samples per Map = 10
Wrote input for Map #0
Wrote input for Map #1
Starting Job
2018-09-06 18:51:29,277 INFO client.RMProxy: Connecting to ResourceManager at nameMasterhost/IP:8032
2018-09-06 18:51:29,589 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/hadoop/.staging/job_1536250099280_0005
2018-09-06 18:51:29,771 INFO input.FileInputFormat: Total input files to process : 2
2018-09-06 18:51:30,338 INFO mapreduce.JobSubmitter: number of splits:2
2018-09-06 18:51:30,397 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
2018-09-06 18:51:30,967 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1536250099280_0005
2018-09-06 18:51:30,970 INFO mapreduce.JobSubmitter: Executing with tokens: []
2018-09-06 18:51:31,175 INFO conf.Configuration: resource-types.xml not found
2018-09-06 18:51:31,175 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2018-09-06 18:51:31,248 INFO impl.YarnClientImpl: Submitted application application_1536250099280_0005
2018-09-06 18:51:31,295 INFO mapreduce.Job: The url to track the job: http://nameMAster:8088/proxy/application_1536250099280_0005/
2018-09-06 18:51:31,296 INFO mapreduce.Job: Running job: job_1536250099280_0005
2018-09-06 18:51:44,388 INFO mapreduce.Job: Job job_1536250099280_0005 running in uber mode : false
2018-09-06 18:51:44,390 INFO mapreduce.Job:  map 0% reduce 0%
2018-09-06 18:51:44,409 INFO mapreduce.Job: Job job_1536250099280_0005 failed with state FAILED due to: Application application_1536250099280_0005 failed 2 times due to AM Container for appattempt_1536250099280_0005_000002 exited with  exitCode: 1
Failing this attempt.Diagnostics: [2018-09-06 18:51:38.416]Exception from container-launch.
Container id: container_1536250099280_0005_02_000001
Exit code: 1
Exception message: /bin/mv: target '/nm-local-dir/nmPrivate/application_1536250099280_0005/container_1536250099280_0005_02_000001/container_1536250099280_0005_02_000001.pid' is not a directory


[2018-09-06 18:51:38.421]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class .nm-local-dir.usercache.hadoop.appcache.application_1536250099280_0005.container_1536250099280_0005_02_000001.tmp


[2018-09-06 18:51:38.422]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class .nm-local-dir.usercache.hadoop.appcache.application_1536250099280_0005.container_1536250099280_0005_02_000001.tmp


For more detailed output, check the application tracking page: http://nameMaster:8088/cluster/app/application_1536250099280_0005 Then click on links to logs of each attempt.
. Failing the application.
2018-09-06 18:51:44,438 INFO mapreduce.Job: Counters: 0
Job job_1536250099280_0005 failed!
I will add any information requested, but I do not want to flood the question with every configuration file if it is not relevant; I simply do not understand what the problem is.
There is no "/nm-local-dir/" in the HDFS filesystem.
I have no idea where that path comes from.
Any help is welcome.

Best Answer

HDFS is storage and YARN is compute. If you want to use the cluster for anything other than pure storage, you need YARN, and that means you need NodeManagers (NM).
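As a quick sanity check, running the JDK's jps tool on each worker should show both a DataNode (HDFS) and a NodeManager (YARN); the PIDs below are just an example:

hadoop@######:~$ jps
21507 DataNode
21641 NodeManager
21799 Jps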

The NodeManager is the server that actually executes your tasks, so you need to define nm-local-dir in order to run a job like pi. nm-local-dir must be defined in yarn-site.xml, and it is a local directory (not HDFS!) on every host that runs a NodeManager.
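A minimal sketch of what that entry could look like is below. The property name yarn.nodemanager.local-dirs is the standard YARN setting (its default is ${hadoop.tmp.dir}/nm-local-dir, which is where the /nm-local-dir path in your error comes from); the path in the value is only an example, so pick a directory that exists and is writable on each worker:

<configuration>
  <property>
    <!-- Local (non-HDFS) scratch space for container launch files.
         Example path only; it must exist and be writable by the
         user running the NodeManager on each worker host. -->
    <name>yarn.nodemanager.local-dirs</name>
    <value>/home/hadoop/yarn/nm-local-dir</value>
  </property>
</configuration>

After changing this, restart the NodeManagers so they pick up the new directory.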
