Spark SQL saveAsTable returns an empty result

This article walks through a fix for Spark SQL's saveAsTable returning an empty result; hopefully it is a useful reference for anyone facing the same problem.
Problem description

I am using the following code for creating / inserting data into a Hive table in Spark SQL:

val sc = SparkSession
  .builder()
  .appName("App")
  .master("local[2]")
  .config("spark.sql.warehouse.dir", "file:///tmp/spark-warehouse")
  .enableHiveSupport()
  .getOrCreate()

// actual code

result.createOrReplaceTempView("result")
result.write.format("parquet").partitionBy("year", "month").mode(SaveMode.Append).saveAsTable("tablename")

This runs without errors, and a result.show(10) confirms the data is there. The input files are CSV files on the local FS.

It creates Parquet files under ./spark-warehouse/tablename/ and also creates the table in Hive, with what looks like a correct CREATE TABLE statement.

git:(master) ✗ tree
.
└── tablename
    ├── _SUCCESS
    └── year=2017
        └── month=01
            ├── part-r-00013-abaea338-8ed3-4961-8598-cb2623a78ce1.snappy.parquet
            ├── part-r-00013-f42ce8ac-a42c-46c5-b188-598a23699ce8.snappy.parquet
            ├── part-r-00018-abaea338-8ed3-4961-8598-cb2623a78ce1.snappy.parquet
            └── part-r-00018-f42ce8ac-a42c-46c5-b188-598a23699ce8.snappy.parquet

In Hive:

hive> show create table tablename;
OK
CREATE TABLE `tablename`(
  `col` array<string> COMMENT 'from deserializer')
ROW FORMAT SERDE
  'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
  'path'='file:/Users/IdeaProjects/project/spark-warehouse/tablename')
STORED AS INPUTFORMAT
  'org.apache.hadoop.mapred.SequenceFileInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat'
LOCATION
  'file:/tmp/spark-warehouse/tablename'
TBLPROPERTIES (
  'EXTERNAL'='FALSE',
  'spark.sql.sources.provider'='parquet',
  'spark.sql.sources.schema.numPartCols'='2',
  'spark.sql.sources.schema.numParts'='1',
  'spark.sql.sources.schema.part.0'='{
  // fields
  }',
  'spark.sql.sources.schema.partCol.0'='year',
  'spark.sql.sources.schema.partCol.1'='month',
  'transient_lastDdlTime'='1488157476')

However, the table is empty when queried from Hive:

hive> select count(*) from tablename;
...
OK
0
Time taken: 1.89 seconds, Fetched: 1 row(s)
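
For contrast, querying the same table from Spark itself should return the rows, because Spark resolves the table through the Parquet schema it stored in TBLPROPERTIES rather than through the SerDe that Hive uses. A minimal check, assuming the SparkSession sc from the code above:

// Run in the same SparkSession: Spark reads the table via the schema
// stored in TBLPROPERTIES, not via the SequenceFile SerDe that Hive sees,
// so the rows written by saveAsTable are visible here.
sc.sql("SELECT COUNT(*) FROM tablename").show()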

Software used: Spark 2.1.0 with spark-sql and spark-hive_2.10, Hive 2.1.0 with a MySQL metastore, Hadoop 2.7.0, macOS 10.12.3

Solution

Spark SQL partitioning is not compatible with Hive here. The issue is documented in SPARK-14927: as the SHOW CREATE TABLE output above shows, the metastore entry Spark creates describes a SequenceFile table with a LazySimpleSerDe and hides the real Parquet schema in TBLPROPERTIES, and the year=.../month=... partitions are never registered in the metastore, so Hive finds no partitions (and hence no rows) to read.

As a recommended workaround, you can create the partitioned table in Hive first and only insert into it from Spark.
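
A minimal sketch of that workaround, assuming the SparkSession sc and the DataFrame result from the question, and two hypothetical data columns col1 and col2 (adjust names and types to the real schema):

import org.apache.spark.sql.SaveMode

// Step 1: create the table with Hive-compatible DDL (this could equally
// be run in the Hive CLI). The metastore entry now describes the real
// storage format, so Hive can read what Spark writes.
sc.sql("""
  CREATE TABLE IF NOT EXISTS tablename (
    col1 STRING,
    col2 INT
  )
  PARTITIONED BY (year STRING, month STRING)
  STORED AS PARQUET
""")

// Step 2: allow dynamic partitioning, so the partition values are taken
// from the data instead of being spelled out per INSERT.
sc.sql("SET hive.exec.dynamic.partition = true")
sc.sql("SET hive.exec.dynamic.partition.mode = nonstrict")

// Step 3: insert from Spark into the existing Hive table. insertInto
// matches columns by position, and the partition columns must come last;
// note that insertInto must not be combined with partitionBy.
result
  .select("col1", "col2", "year", "month")
  .write
  .mode(SaveMode.Append)
  .insertInto("tablename")

After this, select count(*) from tablename in Hive should report the inserted rows, because the partitions are registered in the metastore as part of the insert.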

That wraps up this look at Spark SQL's saveAsTable returning an empty result; we hope the answer above helps.
