Hi, I have a file that my teacher gave me. It is related to Scala and Spark.
When I run the code, it gives me this exception:

  (run-main-0) scala.ScalaReflectionException: class java.sql.Date in
  JavaMirror with ClasspathFilter


The file itself looks like this:

import org.apache.spark.ml.feature.Tokenizer
import org.apache.spark.sql.Dataset
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._
object Main {
   type Embedding       = (String, List[Double])
   type ParsedReview    = (Integer, String, Double)
   org.apache.log4j.Logger getLogger "org"  setLevel (org.apache.log4j.Level.WARN)
   org.apache.log4j.Logger getLogger "akka" setLevel (org.apache.log4j.Level.WARN)
   val spark =  SparkSession.builder
     .appName ("Sentiment")
     .master  ("local[9]")
     .getOrCreate

import spark.implicits._

val reviewSchema = StructType(Array(
        StructField ("reviewText", StringType, nullable=false),
        StructField ("overall",    DoubleType, nullable=false),
        StructField ("summary",    StringType, nullable=false)))

// Read the file and merge the text and summary into a single text column

def loadReviews (path: String): Dataset[ParsedReview] =
    spark
        .read
        .schema (reviewSchema)
        .json (path)
        .rdd
        .zipWithUniqueId
        .map[(Integer,String,Double)] { case (row,id) => (id.toInt, s"${row getString 2} ${row getString 0}", row getDouble 1) }
        .toDS
        .withColumnRenamed ("_1", "id" )
        .withColumnRenamed ("_2", "text")
        .withColumnRenamed ("_3", "overall")
        .as[ParsedReview]

 // Load the GloVe embeddings file

def loadGlove (path: String): Dataset[Embedding] =
    spark
        .read
        .text (path)
        .map  { _ getString 0 split " " }
        .map  (r => (r.head, r.tail.toList.map (_.toDouble))) // yuck!
        .withColumnRenamed ("_1", "word" )
        .withColumnRenamed ("_2", "vec")
        .as[Embedding]

def main(args: Array[String]) = {

  val glove  = loadGlove ("Data/glove.6B.50d.txt") // take glove

  val reviews = loadReviews ("Data/Electronics_5.json") // FIXME

  // replace the following with the project code



  glove.show
  reviews.show

  spark.stop
}

}


I want to keep the
import org.apache.spark.sql.Dataset
line, because some of the code depends on it, but it is precisely because of this import that the exception is thrown.

My build.sbt file looks like this:

  name := "Sentiment Analysis Project"

  version := "1.1"

  scalaVersion := "2.11.12"

  scalacOptions ++= Seq("-unchecked", "-deprecation")

  initialCommands in console :=
  """
  import Main._
  """

  libraryDependencies += "org.apache.spark" %% "spark-core" % "2.3.0"

  libraryDependencies += "org.apache.spark" %% "spark-mllib" % "2.3.0"

  libraryDependencies += "org.scalactic" %% "scalactic" % "3.0.5"

  libraryDependencies += "org.scalatest" %% "scalatest" % "3.0.5" % "test"

Best Answer

The Scala guide recommends compiling with Java 8:


We recommend using Java 8 for compiling Scala code. Since the JVM is backward compatible, it is usually safe to use a newer JVM to run your code compiled by the Scala compiler for older JVM versions.


Although this is only a recommendation, I have found that it fixes the problem you mention.
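
If you want sbt to fail fast whenever it is accidentally started on a newer JVM, you can also add a small guard to build.sbt. This is only a sketch of one possible check using sbt's initialize key; the exact condition and message are my own choice and I have not tested it against your setup:

  initialize := {
    val _ = initialize.value // keep the default initialization
    val javaVersion = sys.props("java.specification.version")
    // Spark 2.3.0 with Scala 2.11 is expected to be built and run on Java 8
    require(javaVersion == "1.8",
      s"Java 8 is required for this project; sbt is currently running on Java $javaVersion")
  }

With something like this in place, starting sbt under Java 9 or later should abort immediately with a clear message instead of failing later with the reflection error.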

To install Java 8 using Homebrew, it is best to also use jenv, which will help you handle multiple Java versions:

brew install jenv


Then run the following command to add the cask-versions tap (repository), since Java 8 is no longer in the default tap:

brew tap homebrew/cask-versions


To install Java 8:

brew cask install homebrew/cask-versions/adoptopenjdk8


Run the following command to add the previously installed Java version to jenv's list of versions:

jenv add /Library/Java/JavaVirtualMachines/<installed_java_version>/Contents/Home


Finally, run

jenv global 1.8


or

jenv local 1.8


to use Java 1.8 globally or locally (in the current folder).
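
To double-check that the switch actually took effect for sbt, you can print the JVM version from the Scala REPL started by sbt console; this is just a quick sanity check I like to use (the version string shown in the comment is only an example):

  // Inside `sbt console`, verify which JVM is actually running:
  println(System.getProperty("java.version"))               // e.g. 1.8.0_252
  println(System.getProperty("java.specification.version")) // should print 1.8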

For more information, follow the instructions on jenv's website.

Regarding scala - (run-main-0) scala.ScalaReflectionException: class java.sql.Date in JavaMirror with ClasspathFilter, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/53458002/
