This article covers how to deal with an AWS Lambda function that times out after 60 seconds when connecting to Redshift; hopefully the answer below is a useful reference.

Problem Description

I created an AWS Lambda function that:

  • logs into Redshift via a JDBC URL
  • runs a query

Locally, using Node, I can successfully connect to the Redshift instance via JDBC and execute a query.

var pg = require('pg');  // node-postgres client (require was missing from the original snippet)

var conString = "postgresql://USER_NAME:PASSWORD@JDBC_URL";
var client = new pg.Client(conString);
client.connect(function(err) {
  if (err) {
    console.log('could not connect to redshift', err);
    return;
  }
  // query code omitted due to above error
});

However, when I execute the function on AWS Lambda (where it's wrapped in an async#waterfall block), the AWS CloudWatch logs tell me that the Lambda function timed out after 60 seconds.
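For reference, a minimal sketch of what such a handler might look like, assuming the 'async' and 'pg' npm packages are bundled with the deployment; the handler and query shown here are hypothetical, not the asker's actual code:

// Hypothetical sketch of the Lambda handler described above, using the
// async#waterfall pattern with the node-postgres ("pg") client.
var async = require('async');
var pg = require('pg');

exports.handler = function(event, context) {
  var conString = "postgresql://USER_NAME:PASSWORD@JDBC_URL";
  var client = new pg.Client(conString);

  async.waterfall([
    function connect(next) {
      // If the security group / network path blocks Lambda, this call
      // never returns and the function eventually hits its timeout.
      client.connect(function(err) {
        next(err);
      });
    },
    function runQuery(next) {
      client.query('SELECT 1', function(err, result) {
        next(err, result);
      });
    }
  ], function(err, result) {
    client.end();
    if (err) {
      console.log('query failed or connection hung', err);
      return context.fail(err);
    }
    context.succeed(result.rows);
  });
};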

Any ideas on why my function is not able to connect?

Recommended Answer

I find that you either open your Redshift security group to all sources, or to none, because a Lambda function doesn't run on a fixed address or even a fixed range of IP addresses; that is completely transparent to users (AKA serverless).
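If you do take the open-to-all-sources route (with the obvious security caveat), a rough sketch of adding such a rule with the AWS SDK for JavaScript might look like the following; the security group ID, region, and port are placeholders, not values from the original question:

// Hedged sketch: adding a wide-open ingress rule (0.0.0.0/0) for the
// Redshift port to the cluster's VPC security group via aws-sdk v2.
var AWS = require('aws-sdk');
var ec2 = new AWS.EC2({ region: 'us-east-1' });   // placeholder region

ec2.authorizeSecurityGroupIngress({
  GroupId: 'sg-xxxxxxxx',                 // placeholder: the Redshift cluster's security group
  IpPermissions: [{
    IpProtocol: 'tcp',
    FromPort: 5439,                       // Redshift's default port
    ToPort: 5439,
    IpRanges: [{ CidrIp: '0.0.0.0/0' }]   // open to all sources, as described above
  }]
}, function(err, data) {
  if (err) console.log('failed to open security group', err);
  else console.log('ingress rule added', data);
});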

I just saw that Amazon announced a new Lambda feature to support VPCs yesterday. I guess if we can run the Redshift cluster inside a VPC, this could solve the problem.
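Once that feature is available, a rough sketch of placing the function inside the cluster's VPC via the AWS SDK for JavaScript could look like this; the function name, subnet, and security group IDs are placeholders, and the cluster's security group would still need to allow inbound traffic from the Lambda function's security group:

// Hedged sketch: attaching an existing Lambda function to the
// Redshift cluster's VPC using UpdateFunctionConfiguration (aws-sdk v2).
var AWS = require('aws-sdk');
var lambda = new AWS.Lambda({ region: 'us-east-1' });   // placeholder region

lambda.updateFunctionConfiguration({
  FunctionName: 'myRedshiftQueryFunction',   // placeholder function name
  VpcConfig: {
    SubnetIds: ['subnet-xxxxxxxx'],          // subnets in the cluster's VPC
    SecurityGroupIds: ['sg-yyyyyyyy']        // security group for the Lambda ENIs
  }
}, function(err, data) {
  if (err) console.log('failed to update VPC config', err);
  else console.log('function now runs inside the VPC', data);
});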

That's all for this article on connecting AWS Lambda to Redshift and the 60-second timeout; hopefully the recommended answer above is helpful.
