Automating a Hive Activity with AWS

Problem Description

I would like to automate my Hive script to run every day, and one option for doing that is AWS Data Pipeline. The problem is that I am exporting data from DynamoDB to S3 and then manipulating that data with a Hive script, and I give the input and output inside the script itself. That is where the trouble starts: a HiveActivity has to have an input and an output, but I have to specify them in the script file.

I am trying to find a way to automate this Hive script and would appreciate any ideas.

Cheers,
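
For context, the kind of script the question describes might look roughly like the sketch below. The table name, columns, and S3 paths are illustrative assumptions, not details from the question; the point is that both the input (a DynamoDB export landed in S3) and the output location live inside the script, which clashes with HiveActivity's expectation of separate input and output nodes.

-- Hypothetical sketch of a script that carries its own input and output.
-- It reads a DynamoDB export that was landed in S3...
CREATE EXTERNAL TABLE IF NOT EXISTS raw_events (
  id STRING,
  payload STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION 's3://my-bucket/dynamodb-export/';

-- ...and writes the processed result to another S3 prefix.
INSERT OVERWRITE DIRECTORY 's3://my-bucket/processed/'
SELECT id, payload
FROM raw_events;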

Recommended Answer

You can disable staging on the HiveActivity to run any arbitrary Hive script:

stage = false

Do something like:

{
  "name": "DefaultActivity1",
  "id": "ActivityId_1",
  "type": "HiveActivity",
  "stage": "false",
  "scriptUri": "s3://baucket/query.hql",
  "scriptVariable": [
    "param1=value1",
    "param2=value2"
  ],
  "schedule": {
    "ref": "ScheduleId_l"
  },
  "runsOn": {
    "ref": "EmrClusterId_1"
  }
}
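
With staging disabled, the HiveActivity no longer requires input and output data nodes, so the script is free to define its own. The scriptVariable entries are passed to Hive and can be referenced in the script through Hive's variable substitution. As a rough sketch only (the table, columns, and the use of param1/param2 as input and output paths are assumptions for illustration, not from the original answer), s3://bucket/query.hql could then look like:

-- Hypothetical query.hql: param1 and param2 arrive from the
-- scriptVariable list in the pipeline definition and are expanded
-- via Hive variable substitution.
CREATE EXTERNAL TABLE IF NOT EXISTS raw_events (
  id STRING,
  payload STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '${param1}';    -- e.g. param1=s3://my-bucket/dynamodb-export/

INSERT OVERWRITE DIRECTORY '${param2}'    -- e.g. param2=s3://my-bucket/processed/
SELECT id, payload
FROM raw_events;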
