Problem Description
I am using the following code for creating / inserting data into a Hive table in Spark SQL:
import org.apache.spark.sql.{SaveMode, SparkSession}

val sc = SparkSession
  .builder()
  .appName("App")
  .master("local[2]")
  .config("spark.sql.warehouse.dir", "file:///tmp/spark-warehouse")
  .enableHiveSupport()
  .getOrCreate()

// actual code

result.createOrReplaceTempView("result")
result.write
  .format("parquet")
  .partitionBy("year", "month")
  .mode(SaveMode.Append)
  .saveAsTable("tablename")
This runs without errors, and a result.show(10) confirms the data is present. The input files are CSVs on the local FS.
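For reference, a minimal sketch of how result might have been built from the CSV input; the path and read options below are assumptions, since the question does not show this step:

// hypothetical input step: the path and options are assumptions
val result = sc.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("file:///path/to/input")

result.show(10)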
It creates Parquet files under ./spark-warehouse/tablename/
and also creates the table in Hive, using a correct create table statement.
git:(master) ✗ tree
.
└── tablename
├── _SUCCESS
└── year=2017
└── month=01
├── part-r-00013-abaea338-8ed3-4961-8598-cb2623a78ce1.snappy.parquet
├── part-r-00013-f42ce8ac-a42c-46c5-b188-598a23699ce8.snappy.parquet
├── part-r-00018-abaea338-8ed3-4961-8598-cb2623a78ce1.snappy.parquet
└── part-r-00018-f42ce8ac-a42c-46c5-b188-598a23699ce8.snappy.parquet
Hive:
hive> show create table tablename;
OK
CREATE TABLE `tablename`(
`col` array<string> COMMENT 'from deserializer')
ROW FORMAT SERDE
'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
'path'='file:/Users/IdeaProjects/project/spark-warehouse/tablename')
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.SequenceFileInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat'
LOCATION
'file:/tmp/spark-warehouse/tablename'
TBLPROPERTIES (
'EXTERNAL'='FALSE',
'spark.sql.sources.provider'='parquet',
'spark.sql.sources.schema.numPartCols'='2',
'spark.sql.sources.schema.numParts'='1',
'spark.sql.sources.schema.part.0'='{
// fields
}',
'spark.sql.sources.schema.partCol.0'='year',
'spark.sql.sources.schema.partCol.1'='month',
'transient_lastDdlTime'='1488157476')
However, the table is empty:
hive> select count(*) from tablename;
...
OK
0
Time taken: 1.89 seconds, Fetched: 1 row(s)
Software used: Spark 2.1.0 with spark-sql and spark-hive_2.10, Hive 2.1.0 with a MySQL metastore, Hadoop 2.7.0, macOS 10.12.3
Recommended Answer
Spark SQL partitioning is not compatible with Hive; the issue is documented in SPARK-14927. When saveAsTable writes a partitioned datasource table, Spark keeps the real schema and partition layout in the table properties (the spark.sql.sources.* keys above) and registers only a placeholder definition with the metastore. That is why Hive shows a SequenceFile table with a single col array<string> column and returns zero rows.
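As a quick check (a sketch, assuming the session from the question), the data itself is still readable through Spark, which resolves the table from the spark.sql.sources.* properties rather than the Hive definition:

// Spark reads the Parquet files via the datasource table properties,
// so it should see the rows that the Hive CLI does not
sc.table("tablename").count()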
As a recommended workaround, you can create the partitioned table with Hive and only insert into it from Spark, as sketched below.
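A minimal sketch of that workaround, assuming the single-column schema from the DDL above and hypothetical integer partition columns; note that insertInto matches columns by position, with the partition columns last:

// create the table through Hive DDL so that Hive owns the schema
// and partition metadata (column types here are assumptions)
sc.sql("""
  CREATE TABLE IF NOT EXISTS tablename (col array<string>)
  PARTITIONED BY (year int, month int)
  STORED AS PARQUET
""")

// allow dynamic partition inserts, then append with insertInto,
// which writes into the existing Hive table instead of replacing it
sc.sql("SET hive.exec.dynamic.partition = true")
sc.sql("SET hive.exec.dynamic.partition.mode = nonstrict")

result.write.mode(SaveMode.Append).insertInto("tablename")

Hive then reads the same Parquet files directly, so select count(*) from tablename should return the actual row count.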