This article describes how to handle an ExitCodeException thrown while starting the NameNode. It should be a useful reference for anyone troubleshooting the same problem.

Problem description

I have configured Hadoop 2.7.1 on a Solaris 10 server. When I start the Hadoop daemons using start-dfs.sh, the DataNode and SecondaryNameNode start, but the NameNode does not. I checked the NameNode logs, and they give me the following error message:

2015-12-08 16:24:47,703 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = psdrac2/192.168.106.109
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.7.1
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a; compiled by 'jenkins' on 2015-06-29T06:04Z
STARTUP_MSG:   java = 1.8.0_66
************************************************************/
2015-12-08 16:24:47,798 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2015-12-08 16:24:47,832 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode
2015-12-08 16:24:50,310 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-12-08 16:24:50,977 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-12-08 16:24:50,978 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2015-12-08 16:24:50,998 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://psdrac2:9000
2015-12-08 16:24:51,005 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use psdrac2:9000 to access this namenode/service.
2015-12-08 16:24:51,510 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-12-08 16:24:52,680 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
2015-12-08 16:24:53,177 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-12-08 16:24:53,239 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2015-12-08 16:24:53,289 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2015-12-08 16:24:53,336 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-12-08 16:24:53,354 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2015-12-08 16:24:53,355 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2015-12-08 16:24:53,356 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-12-08 16:24:53,544 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2015-12-08 16:24:53,556 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2015-12-08 16:24:53,673 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2015-12-08 16:24:53,674 INFO org.mortbay.log: jetty-6.1.26
2015-12-08 16:24:56,059 INFO org.mortbay.log: Started [email protected]:50070
2015-12-08 16:24:56,310 WARN org.apache.hadoop.hdfs.server.common.Util: Path /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2015-12-08 16:24:56,313 WARN org.apache.hadoop.hdfs.server.common.Util: Path /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2015-12-08 16:24:56,315 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2015-12-08 16:24:56,315 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2015-12-08 16:24:56,362 WARN org.apache.hadoop.hdfs.server.common.Util: Path /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2015-12-08 16:24:56,364 WARN org.apache.hadoop.hdfs.server.common.Util: Path /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2015-12-08 16:24:56,701 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2015-12-08 16:24:56,702 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2015-12-08 16:24:57,154 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2015-12-08 16:24:57,155 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2015-12-08 16:24:57,171 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2015-12-08 16:24:57,191 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2015 Dec 08 16:24:57
2015-12-08 16:24:57,215 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2015-12-08 16:24:57,216 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2015-12-08 16:24:57,232 INFO org.apache.hadoop.util.GSet: 2.0% max memory 889 MB = 17.8 MB
2015-12-08 16:24:57,233 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 = 2097152 entries
2015-12-08 16:24:57,368 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2015-12-08 16:24:57,370 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 3
2015-12-08 16:24:57,370 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
2015-12-08 16:24:57,371 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
2015-12-08 16:24:57,371 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
2015-12-08 16:24:57,371 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
2015-12-08 16:24:57,372 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2015-12-08 16:24:57,372 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
2015-12-08 16:24:57,372 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2015-12-08 16:24:57,422 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
2015-12-08 16:24:57,423 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
2015-12-08 16:24:57,423 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2015-12-08 16:24:57,424 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2015-12-08 16:24:57,435 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2015-12-08 16:24:58,543 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2015-12-08 16:24:58,543 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2015-12-08 16:24:58,544 INFO org.apache.hadoop.util.GSet: 1.0% max memory 889 MB = 8.9 MB
2015-12-08 16:24:58,544 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
2015-12-08 16:24:58,554 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2015-12-08 16:24:58,554 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2015-12-08 16:24:58,555 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Maximum size of an xattr: 16384
2015-12-08 16:24:58,556 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2015-12-08 16:24:58,625 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2015-12-08 16:24:58,625 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2015-12-08 16:24:58,626 INFO org.apache.hadoop.util.GSet: 0.25% max memory 889 MB = 2.2 MB
2015-12-08 16:24:58,626 INFO org.apache.hadoop.util.GSet: capacity      = 2^18 = 262144 entries
2015-12-08 16:24:58,640 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2015-12-08 16:24:58,641 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2015-12-08 16:24:58,641 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
2015-12-08 16:24:58,665 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2015-12-08 16:24:58,665 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2015-12-08 16:24:58,666 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2015-12-08 16:24:58,677 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2015-12-08 16:24:58,678 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2015-12-08 16:24:58,695 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2015-12-08 16:24:58,696 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2015-12-08 16:24:58,697 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
2015-12-08 16:24:58,697 INFO org.apache.hadoop.util.GSet: capacity      = 2^15 = 32768 entries
2015-12-08 16:24:58,790 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode/in_use.lock acquired by nodename 15020@psdrac2
2015-12-08 16:24:59,268 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode/current
2015-12-08 16:24:59,272 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: No edit log streams selected.
2015-12-08 16:24:59,600 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 1 INodes.
2015-12-08 16:24:59,878 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2015-12-08 16:24:59,879 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 0 from /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode/current/fsimage_0000000000000000000
2015-12-08 16:24:59,956 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2015-12-08 16:24:59,958 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 1
2015-12-08 16:25:01,370 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2015-12-08 16:25:01,371 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 2645 msecs
2015-12-08 16:25:03,759 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to psdrac2:9000
2015-12-08 16:25:03,809 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2015-12-08 16:25:03,909 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 9000
2015-12-08 16:25:04,108 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
2015-12-08 16:25:04,116 WARN org.apache.hadoop.hdfs.server.common.Util: Path /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2015-12-08 16:25:04,169 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
2015-12-08 16:25:04,170 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 1
2015-12-08 16:25:04,173 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 5 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 25
2015-12-08 16:25:04,184 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode/current/edits_inprogress_0000000000000000001 -> /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode/current/edits_0000000000000000001-0000000000000000002
2015-12-08 16:25:04,202 INFO org.apache.hadoop.ipc.Server: Stopping server on 9000
2015-12-08 16:25:04,294 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
2015-12-08 16:25:04,294 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
2015-12-08 16:25:04,315 INFO org.mortbay.log: Stopped [email protected]:50070
2015-12-08 16:25:04,329 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2015-12-08 16:25:04,333 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2015-12-08 16:25:04,335 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2015-12-08 16:25:04,380 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
    at org.apache.hadoop.util.Shell.run(Shell.java:456)
    at org.apache.hadoop.fs.DF.getFilesystem(DF.java:76)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker$CheckedVolume.<init>(NameNodeResourceChecker.java:69)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.addDirToCheck(NameNodeResourceChecker.java:165)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.<init>(NameNodeResourceChecker.java:134)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startCommonServices(FSNamesystem.java:1058)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:678)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:664)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
2015-12-05 16:46:08,229 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-12-05 16:46:08,239 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
2015-12-08 16:25:04,418 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at psdrac2/192.168.106.109

Why am I getting this error?

Solution

@Rohan's answer was correct.
I had the very same issue on Solaris 10.
Here is an in-depth analysis of the root cause.

DF.java (line 144) builds and runs the following command:

return new String[] {"bash","-c","exec 'df' '-k' '-P' '" + dirPath + "' 2>/dev/null"};
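Hadoop's Shell.runCommand() executes that string via `bash -c` and turns any nonzero exit status into an ExitCodeException, which is exactly the exception in the stack trace above. A minimal sketch of that call path, runnable on any system with bash and df (on Solaris 10 the invocation exits nonzero because the default /usr/bin/df rejects -P; on Linux/GNU systems it exits 0):

```shell
# Replicate the command DF.java builds, the way Shell.runCommand runs it.
dirPath=/
bash -c "exec 'df' '-k' '-P' '$dirPath' 2>/dev/null"
status=$?

# Hadoop maps a nonzero status to: ExitCodeException exitCode=<status>
if [ "$status" -ne 0 ]; then
  echo "ExitCodeException exitCode=$status"
else
  echo "df -P supported (exit 0)"
fi
```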

The default 'df' binary on Solaris does not accept the -P option. Hence you have to use the POSIX-compliant "/usr/xpg4/bin/df" to make it work.
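One way to apply the fix without patching Hadoop is to ensure the XPG4 utilities come first on the PATH the Hadoop daemons inherit, since the `bash -c "exec 'df' ..."` child resolves `df` through PATH. This is a sketch under assumptions: the hadoop-env.sh location and the /usr/xpg4/bin directory reflect a standard Hadoop 2.7.1 and Solaris 10 layout, and are not stated in the original answer.

```shell
# In $HADOOP_HOME/etc/hadoop/hadoop-env.sh (assumed standard location),
# prepend the POSIX-compliant utility directory so the 'df' that Hadoop
# shells out to is /usr/xpg4/bin/df rather than /usr/bin/df:
export PATH=/usr/xpg4/bin:$PATH
```

After restarting the daemons with stop-dfs.sh and start-dfs.sh, the NameNodeResourceChecker's df probe should then succeed and the NameNode should stay up.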

That concludes this article on the ExitCodeException thrown while starting the NameNode. We hope the answer above helps anyone hitting the same problem.
