connect EADDRNOTAVAIL in Node.js under high load - how to free or reuse TCP ports quicker?

This post covers the problem of hitting connect EADDRNOTAVAIL in Node.js under high load, and how to free up or reuse TCP ports more quickly.

Problem description

I have a small wiki-like web application based on the express framework which uses Elasticsearch as its back-end. For each request it basically only goes to the Elasticsearch DB, retrieves the object and returns it rendered by the handlebars template engine. The communication with Elasticsearch is over HTTP.

This works great as long as I have only one node-js instance running. After I updated my code to use the cluster module (as described in the nodejs documentation), I started to encounter the following error: connect EADDRNOTAVAIL

This error shows up when I have 3 or more python scripts running which constantly retrieve some URL from my server. With 3 scripts I can retrieve ~45,000 pages; with 4 or more scripts running it is between 30,000 and 37,000 pages. Running only 1 or 2 scripts, I stopped them after half an hour, by which point they had retrieved 160,000 and 310,000 pages respectively.

I've found this similar question and tried changing http.globalAgent.maxSockets, but that didn't have any effect.

This is the part of the code (CoffeeScript) which listens for the URLs and retrieves the data from Elasticsearch:

app.get('/wiki/:contentId', (req, res) ->
    http.get(elasticSearchUrl(req.params.contentId), (innerRes) ->
        if (innerRes.statusCode != 200)
            res.send(innerRes.statusCode)
            innerRes.resume()
        else
            body = ''
            innerRes.on('data', (bodyChunk) ->
                body += bodyChunk
            )
            innerRes.on('end', () ->
                res.render('page', {'title': req.params.contentId, 'content': JSON.parse(body)._source.html})
            )
    ).on('error', (e) ->
        console.log('Got error: ' + e.message)  # the error is reported here
    )
)

Update:

After looking more into it, I now understand the root of the problem. I ran the command netstat -an | grep -e tcp -e udp | wc -l several times during my test runs to see how many ports were in use, as described in the post Linux: EADDRNOTAVAIL (Address not available) error. I could observe that at the time I received the EADDRNOTAVAIL error, 56,677 ports were in use (instead of ~180 normally).
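The ports piling up this way are almost certainly sockets in TIME_WAIT, which the kernel holds for about 60 seconds after close before the port can be handed out again. On Linux this can be inspected more precisely than with the raw netstat count above (a sketch; /proc paths are Linux-specific):

```shell
# /proc/net/tcp lists every IPv4 TCP socket, one per line after the header.
# The 4th field is the state in hex; 06 is TIME_WAIT.
awk 'NR > 1 && $4 == "06"' /proc/net/tcp | wc -l

# The ephemeral range the kernel draws outgoing ports from (low high):
cat /proc/sys/net/ipv4/ip_local_port_range
```

The second number pair bounds how many outgoing connections to a given destination can exist (in any state) at once, which is the ceiling being hit here.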

Also, when using only 2 simultaneous scripts, the number of used ports saturates at around 40,000 (+/- 2,000); that means ~20,000 ports are used per script (that is the point at which node-js cleans up old ports before new ones are created), and with 3 scripts running the demand (~60,000) breaks through the 56,677-port ceiling. This explains why it fails with 3 scripts requesting data, but not with 2.

So now my question changes to: how can I force node-js to free up ports more quickly, or to reuse the same port all the time (the latter would be the preferable solution)?

Thanks

Recommended answer

For now, as per the documentation

as a result my number of used ports doesn't exceed 26,000 - this is still not a great solution, even more so since I don't understand why reusing ports doesn't work, but it solves the problem for now.
