This article describes how to deal with a job that needs the hadoop native libraries. It should be a useful reference for anyone troubleshooting the same problem; if that's you, read on!

Problem Description


I am invoking a MapReduce job from my Java program. Today, when I set the MapReduce job's input format to LzoTextInputFormat, the job failed.
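For context, the input format is wired up in the driver roughly as in the sketch below (a minimal sketch; the LzoJobDriver class and its argument handling are placeholders, not my actual company.Validation code):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import com.hadoop.mapreduce.LzoTextInputFormat;

public class LzoJobDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "lzo-input-job");
        job.setJarByClass(LzoJobDriver.class);
        // The call in question: read the input through the LZO-aware format.
        // Its getSplits() is what triggers loading of the native LZO codec.
        job.setInputFormatClass(LzoTextInputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

The job fails with: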

Could not load native gpl library
java.lang.UnsatisfiedLinkError: no gplcompression in java.library.path
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1738)
at java.lang.Runtime.loadLibrary0(Runtime.java:823)
at java.lang.System.loadLibrary(System.java:1028)
at com.hadoop.compression.lzo.GPLNativeCodeLoader.<clinit>(GPLNativeCodeLoader.java:32)
at com.hadoop.compression.lzo.LzoCodec.<clinit>(LzoCodec.java:67)
at com.hadoop.mapreduce.LzoTextInputFormat.listStatus(LzoTextInputFormat.java:58)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:241)
at com.hadoop.mapreduce.LzoTextInputFormat.getSplits(LzoTextInputFormat.java:85)
at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:885)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:779)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
at company.Validation.run(Validation.java:99)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at company.mapreduceTest.main(mapreduceTest.java:18)
Apr 5, 2012 4:40:29 PM com.hadoop.compression.lzo.LzoCodec <clinit>
SEVERE: Cannot load native-lzo without native-hadoop
java.lang.IllegalArgumentException: Wrong FS: hdfs://D-SJC-00535164:9000/local/usecases/gbase014/outbound/seed_2012-03-12_06-34-39/1_1.lzo.index, expected: file:///
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:310)
at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:47)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:357)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:648)
at com.hadoop.compression.lzo.LzoIndex.readIndex(LzoIndex.java:169)
at com.hadoop.mapreduce.LzoTextInputFormat.listStatus(LzoTextInputFormat.java:69)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:241)
at com.hadoop.mapreduce.LzoTextInputFormat.getSplits(LzoTextInputFormat.java:85)
at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:885)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:779)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
at company.Validation.run(Validation.java:99)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at company.stopTransfer.mapreduceTest.main(mapreduceTest.java:18)
Apr 5, 2012 4:40:29 PM company.Validation run
 SEVERE: LinkExtractor: java.lang.IllegalArgumentException: Wrong FS: hdfs://D-SJC-00535164:9000/local/usecases/gbase014/outbound/seed_2012-03-12_06-34-39/1_1.lzo.index, expected: file:///
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:310)
at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:47)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:357)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:648)
at com.hadoop.compression.lzo.LzoIndex.readIndex(LzoIndex.java:169)
at com.hadoop.mapreduce.LzoTextInputFormat.listStatus(LzoTextInputFormat.java:69)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:241)
at com.hadoop.mapreduce.LzoTextInputFormat.getSplits(LzoTextInputFormat.java:85)
at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:885)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:779)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
at company.Validation.run(Validation.java:99)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at company.stopTransfer.mapreduceTest.main(mapreduceTest.java:18)

But in lib/native there are some files with extensions like .a, .la, .so... I tried to set them in my path environment variable, but it still doesn't work.

Could anyone please give me a suggestion?

Thank you very much!

Solution

Your error is caused by the LZO shared library not being present in the hadoop native library folder.

The code in GPLNativeCodeLoader is looking for a shared library called gplcompression. On Linux, Java actually looks for a file named libgplcompression.so. If this file doesn't exist in your lib/native/${arch} folder, you'll see this error.
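To confirm exactly which file name the JVM expects and which directories it searches, a small standalone check can help (a sketch; LibNameCheck is a hypothetical helper, not part of hadoop-lzo):

public class LibNameCheck {
    public static void main(String[] args) {
        // The JVM maps the logical name "gplcompression" to a
        // platform-specific file name: "libgplcompression.so" on Linux.
        System.out.println(System.mapLibraryName("gplcompression"));
        // System.loadLibrary(...) searches each directory listed here for that file:
        System.out.println(System.getProperty("java.library.path"));
    }
}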

In a terminal, navigate to your hadoop base directory and execute the following to dump the installed native libraries, then post the output back to your original question:

uname -a
find lib/native
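
If find does show libgplcompression.so under a platform subfolder but the JVM still throws UnsatisfiedLinkError, a common cause is that the folder isn't on java.library.path. It can be supplied explicitly when launching the job; for example (the platform folder name here is illustrative and depends on your system):

java -Djava.library.path=/path/to/hadoop/lib/native/Linux-amd64-64 company.mapreduceTest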

That's all for this article on needing the hadoop native libraries. We hope the recommended answer helps, and thank you for your continued support!
