Problem Description
We are using Google Cloud Bigtable, accessing it from GCE instances using the Go library. For some ReadRow queries we get the following error:
rpc error: code = 13 desc = "server closed the stream without sending trailers"
It is noteworthy that these errors are consistent. In other words, if we retry the same query (waiting ~15 minutes between attempts), we (almost?) always get the same error again. So it does not appear to be simply a transient error, but is probably somehow related to the data being fetched. Here is the specific query we are running:
row, err := tbl.ReadRow(ctx, <my-row-id>,
    bigtable.RowFilter(bigtable.ChainFilters(
        bigtable.FamilyFilter(<my-column-family>),
        bigtable.LatestNFilter(1))))
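For anyone trying to reproduce this outside our codebase, here is a minimal, self-contained sketch of the same kind of call. The project, instance, table, row, and column-family names here are placeholders of my own, not the real ones from our setup:

package main

import (
    "context"
    "fmt"
    "log"

    "cloud.google.com/go/bigtable"
)

func main() {
    ctx := context.Background()

    // Placeholder project/instance/table names.
    client, err := bigtable.NewClient(ctx, "my-project", "my-instance")
    if err != nil {
        log.Fatalf("NewClient: %v", err)
    }
    defer client.Close()
    tbl := client.Open("my-table")

    // Same shape of query as above: restrict to one column family and keep
    // only the latest cell version per column.
    row, err := tbl.ReadRow(ctx, "my-row-id",
        bigtable.RowFilter(bigtable.ChainFilters(
            bigtable.FamilyFilter("my-column-family"),
            bigtable.LatestNFilter(1))))
    if err != nil {
        log.Fatalf("ReadRow: %v", err) // this is where the code = 13 error shows up
    }

    for family, items := range row {
        for _, item := range items {
            fmt.Printf("%s %s = %q\n", family, item.Column, item.Value)
        }
    }
}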
Could this just mean "you are trying to fetch too much"?
Recommended Answer
For anyone following along at home, there was actually a Bigtable bug that was causing this error. To be clear, trying to read too much (> 256MB) can also cause this error, but that was not the only error condition: I was able to reproduce it on rows well under 256MB. With this information, the Bigtable team identified the bug, and a fix was recently (~Feb 12) rolled out to production. After the release I confirmed that these errors disappeared from my application logs.
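If you want to check programmatically which gRPC code you are getting (code = 13 is codes.Internal), one way is to inspect the error with the grpc status package. This is a sketch, assuming the error returned by the client surfaces as a standard gRPC status error; the example error below is constructed by hand just to exercise the helper:

package main

import (
    "fmt"

    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/status"
)

// isInternal reports whether err carries gRPC code 13 (codes.Internal).
// status.Code returns codes.OK for nil and codes.Unknown for non-status errors.
func isInternal(err error) bool {
    return status.Code(err) == codes.Internal
}

func main() {
    // Hand-constructed example error; in practice you would pass the err
    // returned by tbl.ReadRow.
    err := status.Error(codes.Internal, "server closed the stream without sending trailers")
    fmt.Println(isInternal(err)) // true
}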