I want to create a Hive table using Presto, with the data stored as CSV files in S3.
I have already uploaded the file to S3, and I am sure Presto can connect to the bucket.
Now, after I run the create table command and then query the table, every value (row) comes back as NULL.
I tried to research similar issues, but it turns out Presto is not that well covered on Stack Overflow.
Some rows of the file are:
PassengerId,Survived,Pclass,Name,Sex,Age,SibSp,Parch,Ticket,Fare,Cabin,Embarked
1,0,3,"Braund, Mr. Owen Harris",male,22,1,0,A/5 21171,7.25,,S
2,1,1,"Cumings, Mrs. John Bradley (Florence Briggs Thayer)",female,38,1,0,PC 17599,71.2833,C85,C
3,1,3,"Heikkinen, Miss. Laina",female,26,0,0,STON/O2. 3101282,7.925,,S
4,1,1,"Futrelle, Mrs. Jacques Heath (Lily May Peel)",female,35,1,0,113803,53.1,C123,S
5,0,3,"Allen, Mr. William Henry",male,35,0,0,373450,8.05,,S
6,0,3,"Moran, Mr. James",male,,0,0,330877,8.4583,,Q
7,0,1,"McCarthy, Mr. Timothy J",male,54,0,0,17463,51.8625,E46,S
8,0,3,"Palsson, Master. Gosta Leonard",male,2,3,1,349909,21.075,,S
9,1,3,"Johnson, Mrs. Oscar W (Elisabeth Vilhelmina Berg)",female,27,0,2,347742,11.1333,,S
10,1,2,"Nasser, Mrs. Nicholas (Adele Achem)",female,14,1,0,237736,30.0708,,C
11,1,3,"Sandstrom, Miss. Marguerite Rut",female,4,1,1,PP 9549,16.7,G6,S
12,1,1,"Bonnell, Miss. Elizabeth",female,58,0,0,113783,26.55,C103,S
13,0,3,"Saundercock, Mr. William Henry",male,20,0,0,A/5. 2151,8.05,,S
14,0,3,"Andersson, Mr. Anders Johan",male,39,1,5,347082,31.275,,S
15,0,3,"Vestrom, Miss. Hulda Amanda Adolfina",female,14,0,0,350406,7.8542,,S
16,1,2,"Hewlett, Mrs. (Mary D Kingcome) ",female,55,0,0,248706,16,,S
17,0,3,"Rice, Master. Eugene",male,2,4,1,382652,29.125,,Q
18,1,2,"Williams, Mr. Charles Eugene",male,,0,0,244373,13,,S
19,0,3,"Vander Planke, Mrs. Julius (Emelia Maria Vandemoortele)",female,31,1,0,345763,18,,S
20,1,3,"Masselmani, Mrs. Fatima",female,,0,0,2649,7.225,,C
My csv file is the Titanic train.csv shown above. My Presto command is:

create table testing_nan_4 (
    PassengerId integer,
    Survived integer,
    Pclass integer,
    Name varchar,
    Sex varchar,
    Age integer,
    SibSp integer,
    Parch integer,
    Ticket integer,
    Fare double,
    Cabin varchar,
    Embarked varchar
)
with (
    external_location = 's3://my_bucket/titanic_train/',
    format = 'textfile'
);
The result is:
passengerid | survived | pclass | name | sex | age | sibsp | parch | ticket | fare | cabin | embarked
-------------+----------+--------+------+------+------+-------+-------+--------+------+-------+----------
NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL
NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL
NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL
NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL
NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL
NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL
NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL
NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL
whereas I expected to see the actual data.
Best Answer
Starburst Presto currently supports the CSV Hive storage format, see: https://docs.starburstdata.com/latest/release/release-302-e.html?highlight=csv
There is also ongoing work to bring this to PrestoSQL, see: https://github.com/prestosql/presto/pull/920
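As background on why the question's table returns only NULLs: with format = 'textfile', the Hive connector uses Hive's default field delimiter (the \001 control character) rather than a comma, so each entire CSV line lands in the first column, where it cannot be parsed as an integer, and every column comes back NULL. A minimal diagnostic sketch (the table name is made up; catalog and schema follow the examples below) that makes this visible by reading each line into a single varchar column:

-- Hypothetical table over the same S3 location, one varchar column per line.
CREATE TABLE hive.default.titanic_raw_lines (
    line varchar
)
WITH (
    external_location = 's3://my_bucket/titanic_train/',
    format = 'TEXTFILE'
);

-- Each row shows a full, un-split CSV line, confirming the delimiter mismatch.
SELECT line FROM hive.default.titanic_raw_lines LIMIT 3;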
With that, you can create a table like the following in the Presto Hive connector:
CREATE TABLE hive.default.csv_table_with_custom_parameters (
    c_bigint varchar,
    c_varchar varchar)
WITH (
    csv_escape = '',
    csv_quote = '',
    csv_separator = U&'\0001', -- to pass unicode character
    external_location = 'hdfs://hadoop/datacsv_table_with_custom_parameters',
    format = 'CSV')
In your case, it would be:
CREATE TABLE hive.default.csv_table_with_custom_parameters (
    -- the Hive connector's CSV format only supports varchar columns,
    -- so everything is declared as varchar and cast when reading
    PassengerId varchar, Survived varchar, Pclass varchar, Name varchar,
    Sex varchar, Age varchar, SibSp varchar, Parch varchar,
    Ticket varchar, Fare varchar, Cabin varchar, Embarked varchar)
WITH (
    csv_escape = '\',
    csv_quote = '"',
    csv_separator = ',',
    external_location = 's3://my_bucket/titanic_train/',
    format = 'CSV')
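Because the CSV format stores every column as varchar, a natural follow-up (a sketch, not part of the original answer; the view name is made up) is to expose typed columns with TRY_CAST, which also gives a convenient place to drop the header row if it is still present in the file:

CREATE VIEW hive.default.titanic_train_typed AS
SELECT
    TRY_CAST(PassengerId AS integer) AS PassengerId,
    TRY_CAST(Survived AS integer) AS Survived,
    TRY_CAST(Pclass AS integer) AS Pclass,
    Name,
    Sex,
    TRY_CAST(Age AS integer) AS Age,
    TRY_CAST(SibSp AS integer) AS SibSp,
    TRY_CAST(Parch AS integer) AS Parch,
    Ticket, -- values like 'A/5 21171' are not numeric, so keep it varchar
    TRY_CAST(Fare AS double) AS Fare,
    Cabin,
    Embarked
FROM hive.default.csv_table_with_custom_parameters
WHERE PassengerId <> 'PassengerId'; -- skip the header row if it was not removed

TRY_CAST returns NULL instead of failing on blank values such as the missing Age fields, so queries against the view behave like the typed table the question originally tried to create.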
Note that the csv_escape, csv_quote and csv_separator table properties only accept single-character values. Also, for CSV tables there is no Presto equivalent of the Hive table property "skip.header.line.count"="1" yet, so I recommend removing the header line from the data file.
Original question on Stack Overflow: sql - Unable to create a Hive table from a CSV file using Presto: https://stackoverflow.com/questions/56629498/