Question
We are using seaweedfs, a filesystem written in Go. It exposes a REST API on port 8888 for posting files. The issue we are having is HTTPoison timeouts.
We POST files again and again, and we get HTTPoison request timeouts.
Some facts:
- Files do get updated on seaweedfs; we can see the modified date.
- The HTTPoison request response is always a timeout.
- I have tried with a curl POST loop:
  for ((i=1;i<=100;i++)); do curl -F file=@00_13_000.jpg -X POST http://188.xx.xx.xx.217:8888/everc-dupzo/snapshots/recordings/2019/11/22/09/00_13_000.jpg; done
  which works fine without any timeouts.
- I have also tried it from my local machine with HTTPoison, and there it works fine.
In production, we send almost 1K POST requests with HTTPoison, of which about 10% give a timeout error, mostly on files that already exist. Those files do get updated, but the HTTPoison request still comes back as a timeout.
The code we are using to do the POST request is as follows.
def seaweedfs_save(camera_exid, timestamp, image, _notes) do
  [{_, _, _, _, [server]}] = :ets.match_object(:storage_servers, {:_, "RW", :_, :_, :_})
  hackney = [pool: :seaweedfs_upload_pool]
  directory_path = construct_directory_path(camera_exid, timestamp, "recordings", "")
  file_name = construct_file_name(timestamp)
  file_path = directory_path <> file_name

  case HTTPoison.post("#{server.url}#{file_path}", {:multipart, [{file_path, image, []}]}, [], hackney: hackney) do
    {:ok, response} -> response
    {:error, error} -> Logger.info("[seaweedfs_save] [#{file_path}] [#{camera_exid}] [#{inspect(error)}]")
  end
end
The hackney pool is set up with:
:hackney_pool.child_spec(:seaweedfs_upload_pool, [timeout: 5000, max_connections: 1000])
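If the pool is not already supervised elsewhere, that child spec is typically attached to the application's supervision tree. A minimal sketch, assuming a standard OTP application (the module name MyApp.Application is a placeholder, not from the original code):

```elixir
defmodule MyApp.Application do
  use Application

  # Hypothetical application module: starts the hackney pool used by
  # seaweedfs_save/4 under the application's supervision tree.
  def start(_type, _args) do
    children = [
      :hackney_pool.child_spec(:seaweedfs_upload_pool,
        timeout: 5000, max_connections: 1000)
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end
```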
The author of seaweedfs has a hunch that the HTTPoison requests are not getting closed or being reused. The author of hackney suggests:
But HTTPoison doesn't allow it: https://github.com/edgurgel/httpoison/blob/master/lib/httpoison/base.ex#L812
I am at quite a dead end with this. Any help would be appreciated on:
- How should we be making the HTTPoison requests?
- Should we switch, perhaps to hackney directly?
- Or is there a better way to solve this, or any way to get more information about why the requests time out?
Answer
I believe the issue is network bandwidth and/or latency. Basically, with max_connections: 1000 you open up to a thousand connections simultaneously. I am pretty sure neither the filesystem itself nor the network is happy about that. By contrast, the curl requests in your example run synchronously, one after another.
Decrease the value of max_connections down to 100, or even less, and see whether the timeouts go away.
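Applied to the question's code, that means shrinking the pool and, optionally, raising the per-request receive timeout to rule out slow responses (HTTPoison's recv_timeout defaults to 5000 ms). A sketch under those assumptions, with the pool name taken from the question and the concrete values purely illustrative:

```elixir
# Smaller pool: at most 100 simultaneous connections to seaweedfs.
:hackney_pool.child_spec(:seaweedfs_upload_pool,
  timeout: 5000, max_connections: 100)

# Per-request options: reuse the pool and allow more time to receive
# the response (HTTPoison's default recv_timeout is 5000 ms).
HTTPoison.post(url, {:multipart, [{file_path, image, []}]}, [],
  hackney: [pool: :seaweedfs_upload_pool],
  recv_timeout: 30_000)
```

If the timeouts disappear with the smaller pool, the bottleneck was concurrency; if they persist but the error changes, the recv_timeout result should at least tell you whether the server is responding slowly rather than not at all.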
This concludes this article on HTTPoison POST request timeouts in Elixir. We hope the recommended answer is helpful.