This article describes how to handle I/O timeouts when running map-reduce jobs with mgo and MongoDB.

Problem Description


I am running a map-reduce job from mgo. It runs on a collection with a little more than 3.5M records. For some reasons I cannot port this to aggregation right now; maybe later. So map-reduce is the thing I am looking at. This job runs fine when I run it from the original js files I created to test the code and output. I tried to put the map and reduce code inside two strings and then call mgo.MapReduce to do the map-reduce for me, writing the output to a different collection. And it gives me:


read tcp 127.0.0.1:27017: i/o timeout
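For reference, here is a rough sketch of how such a call might look in mgo. The database name, collection names, and map/reduce bodies below are placeholders, not the actual job from the question:

package main

import (
	"fmt"
	"log"

	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

func main() {
	session, err := mgo.Dial("127.0.0.1:27017")
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Map and reduce functions are passed as JavaScript source strings.
	job := &mgo.MapReduce{
		Map:    "function() { emit(this.key, 1) }",
		Reduce: "function(key, values) { return Array.sum(values) }",
		// Write the results into a separate collection instead of returning them inline.
		Out: bson.M{"replace": "mr_results"},
	}

	coll := session.DB("mydb").C("mycollection")
	info, err := coll.Find(nil).MapReduce(job, nil)
	if err != nil {
		// On a large collection this is where "read tcp ...: i/o timeout" surfaces,
		// even though the map-reduce keeps running on the server.
		log.Fatal(err)
	}
	fmt.Printf("emitted: %d, output: %d\n", info.EmitCount, info.OutputCount)
}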


Though, as the job has been fired in the background, it is still running. Now, according to this thread here --- http://grokbase.com/t/gg/mgo-users/1396d9wyk3/i-o-timeout-in-statistics-generation-upsert


it is easy to solve by calling session.SetSocketTimeout. But I do not want to do that, because the total number of documents this map-reduce runs over will vary, and so, I believe, will the time it takes. So I don't think I can ever solve the problem that way.

Is there any other way to do this?

Please help me.

Recommended Answer

Moving my comment to an answer.


I believe the only way to fix this is simply setting the socket timeout to something ridiculously high, for example:

session.SetSocketTimeout(1 * time.Hour)
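In context, a minimal sketch of where that call might go, assuming the session is created with mgo.Dial; the one-hour value is just an example, so pick something comfortably above the longest expected run:

package main

import (
	"log"
	"time"

	"gopkg.in/mgo.v2"
)

func main() {
	session, err := mgo.Dial("127.0.0.1:27017")
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// A generous socket timeout keeps the driver from closing the
	// connection while the server-side map-reduce is still running.
	session.SetSocketTimeout(1 * time.Hour)

	// ... build the *mgo.MapReduce job and run it as in the sketch above ...
}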

That concludes this article on I/O timeouts with mgo and MongoDB; hopefully the recommended answer above helps.
