This article explains how to handle the SparkR CSV-reading error "returned status == 0 is not TRUE". It should be a useful reference for anyone who runs into the same problem.

Problem description

I start my SparkR shell with:

 >> ./bin/sparkR --packages com.databricks:spark-csv_2.10:1.2.0

Now I want to read a CSV file in the SparkR shell:

  d <- read.df(sqlContext,
    "data/mllib/sample_tree_data.csv", "com.databricks.spark.csv", header = "true")

But every time I get the error "returned status == 0 is not TRUE".

The logs while starting the sparkR shell are as below:

R version 3.0.3 (2014-03-06) -- "Warm Puppy"
Copyright (C) 2014 The R Foundation for Statistical Computing
Platform: x86_64-unknown-linux-gnu (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

  Natural language support but running in an English locale

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

Launching java with spark-submit command /opt/spark/bin/spark-submit   "--packages" "com.databricks:spark-csv_2.10:1.2.0" "sparkr-shell" /tmp/RtmpXZM96Y/backend_porte2670057297 
Ivy Default Cache set to: /home/ravimanik/.ivy2/cache
The jars for the packages stored in: /home/ravimanik/.ivy2/jars
:: loading settings :: url = jar:file:/opt/alti-spark-1.4.1.hadoop24.hive13/assembly/target/scala-2.10/spark-assembly-1.4.1-hadoop2.4.1.jar!/org/apache/ivy/core/settings/ivysettings.xml
com.databricks#spark-csv_2.10 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent;1.0
    confs: [default]
    found com.databricks#spark-csv_2.10;1.2.0 in central
    found org.apache.commons#commons-csv;1.1 in central
    found com.univocity#univocity-parsers;1.5.1 in central
:: resolution report :: resolve 297ms :: artifacts dl 30ms
    :: modules in use:
    com.databricks#spark-csv_2.10;1.2.0 from central in [default]
    com.univocity#univocity-parsers;1.5.1 from central in [default]
    org.apache.commons#commons-csv;1.1 from central in [default]
    ---------------------------------------------------------------------
    |                  |            modules            ||   artifacts   |
    |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
    ---------------------------------------------------------------------
    |      default     |   3   |   0   |   0   |   0   ||   3   |   0   |
    ---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent
    confs: [default]
    0 artifacts copied, 3 already retrieved (0kB/21ms)

 Welcome to SparkR!
 Spark context is available as sc, SQL context is available as sqlContext
Solution

I ran into exactly the same problem. It worked after I restarted the R session.
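The restart can also be done from within R. The sketch below assumes SparkR 1.4.x (matching the logs above), where `sparkR.stop()`, `sparkR.init()`, and `sparkRSQL.init()` are the session-management functions; note that the spark-csv package still has to be on the classpath, e.g. via the `--packages` flag used when launching the shell:

```r
# Tear down the current SparkR session; a stale JVM backend is one known
# cause of "returned status == 0 is not TRUE" from read.df()
sparkR.stop()

# Re-create the Spark and SQL contexts (SparkR 1.4.x API)
sc <- sparkR.init()
sqlContext <- sparkRSQL.init(sc)

# Retry the read, naming the spark-csv source explicitly
d <- read.df(sqlContext, "data/mllib/sample_tree_data.csv",
             source = "com.databricks.spark.csv", header = "true")
```

This is a sketch, not a guaranteed fix: if the package was never resolved at launch, restarting the session alone will not help, and the shell should be relaunched with `--packages com.databricks:spark-csv_2.10:1.2.0` as shown in the question.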

That concludes this article on the SparkR CSV-reading error "returned status == 0 is not TRUE". We hope the answer above helps.
