Problem Description
I'm trying to run a small Spark application and am getting the following exception:

Exception in thread "main" java.lang.IllegalAccessError: tried to access method com.google.common.base.Stopwatch.<init>()V from class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:262)
    at org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:217)
    at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:95)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
The relevant gradle dependencies section:
compile('org.apache.spark:spark-core_2.10:1.3.1')
compile('org.apache.hadoop:hadoop-mapreduce-client-core:2.6.2') { force = true }
compile('org.apache.hadoop:hadoop-mapreduce-client-app:2.6.2') { force = true }
compile('org.apache.hadoop:hadoop-mapreduce-client-shuffle:2.6.2') { force = true }
compile('com.google.guava:guava:19.0') { force = true }
Answer

Version 2.6.2 of org.apache.hadoop:hadoop-mapreduce-client-core can't be used together with Guava's newer versions (I tried 17.0 - 19.0), since Guava's Stopwatch constructor can no longer be accessed: Guava 17.0 made the constructors package-private in favor of static factories such as Stopwatch.createStarted(), while FileInputFormat.listStatus in Hadoop 2.6.x still calls new Stopwatch() directly, which causes the IllegalAccessError above.
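If it isn't obvious which versions Gradle actually resolves, the dependencyInsight report shows the selected version of an artifact and which dependency pulled it in. For example (the configuration name is an assumption and varies with the Gradle version and plugin setup):

./gradlew dependencyInsight --dependency guava --configuration compile
./gradlew dependencyInsight --dependency hadoop-mapreduce-client-core --configuration compile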
Using hadoop-mapreduce-client-core's latest version, 2.7.2 (in which that method no longer uses Guava's Stopwatch but org.apache.hadoop.util.StopWatch instead), solved the problem, together with two additional dependencies that were required:

compile('org.apache.hadoop:hadoop-mapreduce-client-core:2.7.2') { force = true }
compile('org.apache.hadoop:hadoop-common:2.7.2') { force = true } // required for org.apache.hadoop.util.StopWatch
compile('commons-io:commons-io:2.4') { force = true } // required for org.apache.commons.io.Charsets, which is used internally
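For reference, a consolidated dependencies block combining the Spark dependency from the question with the fix above might look roughly like the sketch below; it assumes the remaining mapreduce-client artifacts are also bumped to 2.7.2 to keep the Hadoop versions consistent:

dependencies {
    compile('org.apache.spark:spark-core_2.10:1.3.1')
    compile('org.apache.hadoop:hadoop-mapreduce-client-core:2.7.2') { force = true }
    compile('org.apache.hadoop:hadoop-mapreduce-client-app:2.7.2') { force = true }     // assumed bump from 2.6.2
    compile('org.apache.hadoop:hadoop-mapreduce-client-shuffle:2.7.2') { force = true } // assumed bump from 2.6.2
    compile('org.apache.hadoop:hadoop-common:2.7.2') { force = true }                   // provides org.apache.hadoop.util.StopWatch
    compile('commons-io:commons-io:2.4') { force = true }                               // provides org.apache.commons.io.Charsets
    compile('com.google.guava:guava:19.0') { force = true }
}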
Note: there are two org.apache.commons.io packages:
commons-io:commons-io (the one used here), and
org.apache.commons:commons-io (the old one, from 2007).
Make sure to include the correct one.
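If the old artifact keeps showing up transitively, one way to keep it off the classpath is a global exclude; a minimal sketch (it assumes org.apache.commons:commons-io is in fact being dragged in by some transitive dependency):

configurations.all {
    // keep the outdated 2007 artifact out so that only commons-io:commons-io:2.4 is used
    exclude group: 'org.apache.commons', module: 'commons-io'
}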