The HBase query feature went live yesterday and passed its functional tests, but today it stopped working. The backend threw the following exception:

11:01:06,255 [org.apache.hadoop.security.UserGroupInformation]-[ERROR] PriviledgedActionException as:root (auth:SIMPLE) cause:org.apache.hadoop.util.Shell$ExitCodeException: chmod: cannot access `/tmp/hadoop-root/mapred/staging/root933593746/.staging/job_local933593746_0001': No such file or directory
11:01:06,259 [com.cmcc.aoi.selfhelp.service.impl.HbaseTagTokenServiceImpl]-[ERROR]
org.apache.hadoop.util.Shell$ExitCodeException: chmod: cannot access `/tmp/hadoop-root/mapred/staging/root933593746/.staging/job_local933593746_0001': No such file or directory
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:261)
    at org.apache.hadoop.util.Shell.run(Shell.java:188)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:467)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:450)
    at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:593)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:584)
    at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:427)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:579)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:171)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:293)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:364)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1286)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1283)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1283)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1304)
    at com.cmcc.aoi.selfhelp.service.impl.HbaseTagTokenServiceImpl.simpleTagUsercount(HbaseTagTokenServiceImpl.java:571)
    ......
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.springframework.scheduling.support.ScheduledMethodRunnable.run(ScheduledMethodRunnable.java:64)
    at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:53)
    at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:81)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
From the log it looked like a permissions problem, so I changed /tmp to mode 777. That had no effect.
Searching online, I found a claim that an ext3 directory can hold at most 32000 entries. Could the directory be full? I counted the files in the staging directory: 31998. After clearing them all out and rerunning, the following exception was thrown:
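The count check above can be done with a quick one-liner. `STAGING_DIR` is a placeholder I introduce here; on the actual machine it would be the staging directory from the exception, and the snippet falls back to /tmp so it runs anywhere:

```shell
# Count entries directly inside a directory; ext3 tops out around
# 32000 entries per directory, which matches the 31998 found here.
# STAGING_DIR is hypothetical, e.g. /tmp/hadoop-root/mapred/staging
DIR=${STAGING_DIR:-/tmp}
count=$(find "$DIR" -maxdepth 1 -mindepth 1 | wc -l)
echo "$DIR contains $count entries"
```

`find -maxdepth 1 -mindepth 1` counts only direct children and, unlike `ls | wc -l`, is not thrown off by filenames containing newlines only rarely, so treat the number as an estimate in pathological cases.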

11:58:20,527 [org.apache.hadoop.security.UserGroupInformation]-[ERROR] PriviledgedActionException as:root (auth:SIMPLE) cause:java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: mkdir of /tmp/hadoop-root/mapred/local/-2428263295616411587 failed
11:58:20,527 [com.cmcc.aoi.selfhelp.service.impl.HbaseTagTokenServiceImpl]-[ERROR]
java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: mkdir of /tmp/hadoop-root/mapred/local/-2428263295616411587 failed
    at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:144)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:155)
    at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:625)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:407)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1286)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1283)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1283)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1304)
    .....
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:501)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
    at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
    at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1040)
    at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:607)
    at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:314)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: mkdir of /tmp/hadoop-root/mapred/local/-2428263295616411587 failed
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:188)
    at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:140)
    ... 59 more
Caused by: java.io.IOException: mkdir of /tmp/hadoop-root/mapred/local/-2428263295616411587 failed
    at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:1042)
    at org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:150)
    at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:190)
    at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:698)
    at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:695)
    at org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2325)
    at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:695)
    at org.apache.hadoop.yarn.util.FSDownload.createDir(FSDownload.java:88)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:274)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:51)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    ... 1 more
This time the exception changed: now it was a directory-creation failure. I checked /tmp/hadoop-root/mapred/local/ and found it was full of files as well, so I deleted everything there and ran the program again.
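The cleanup boils down to two commands, reconstructed from the paths in the two exceptions above. Stop the application first: a job that is still running would lose its staging files mid-flight.

```shell
# Clear stale per-job scratch files left behind by LocalJobRunner.
# Paths are the ones from the two exceptions; -f keeps rm quiet
# if the directories are already empty or missing.
rm -rf /tmp/hadoop-root/mapred/staging/*   # path from the first exception
rm -rf /tmp/hadoop-root/mapred/local/*     # path from the second exception
```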

This time it went through.
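A preventive thought, beyond the fix described above: these files pile up under /tmp because Hadoop's scratch space defaults to /tmp/hadoop-${user.name} via hadoop.tmp.dir. One option, sketched here as an assumption rather than something I have applied, is to point it at a dedicated partition with headroom in core-site.xml (the path below is hypothetical):

```xml
<property>
  <name>hadoop.tmp.dir</name>
  <!-- hypothetical path; choose a partition with room to spare -->
  <value>/data/hadoop-tmp</value>
</property>
```

Relocating the directory does not stop the accumulation itself, so periodic cleanup of old job directories would still be worth scheduling.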

09-25 11:34