Proper usage of && then wait in bash

This article discusses the proper usage of && then wait in bash. The recommended answer below should be a useful reference for anyone facing the same problem.

Problem Description

Here is a shell script which takes a domain and its parameters and finds the status code. It runs much faster due to threading, but misses a lot of requests.

while IFS= read -r url <&3; do
    while IFS= read -r uri <&4; do
        urlstatus=$(curl -o /dev/null --insecure --silent --head --write-out '%{http_code}' "${url}""${uri}" --max-time 5) &&
            echo "$url  $urlstatus $uri" >> urlstatus.txt &
    done 4<uri.txt
done 3<url.txt

If I run it normally, it processes all requests, but the speed is very low. Is there a way to keep the speed up while also not missing requests?

Recommended Answer

The symptom you describe sounds like some resource outside your program can't cope with the parallelism, but you have not provided any diagnostics to help troubleshoot the problem. If your local network bandwidth is sufficient but the remote server is throttling you, there isn't much you can do to persuade it to stop; if, on the other hand, the problem is caused by local resource depletion, increasing bandwidth through the choke point (your DNS server?) should let you proceed without any actual code changes.

A common workaround is to throttle things on your end. Use xargs or parallel to run a controlled number of parallel processes, instead of unleashing them all at once to compete fiercely over your finite resources.

For a practical example, see Bash: limit the number of concurrent jobs?
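The xargs route might look like the following sketch. The `probe_urls` function name and the `MAX_JOBS` default are illustrative (not from the original question); `url.txt` and `uri.txt` are the input files from the question. It relies on the url/uri values containing no whitespace, which matches the original format of `urlstatus.txt`.

```shell
# Sketch only: throttle the curl probes with `xargs -P` instead of bare `&`.
probe_urls() {
    # Emit every url+uri combination, two fields per line ...
    while IFS= read -r url; do
        while IFS= read -r uri; do
            printf '%s %s\n' "$url" "$uri"
        done < uri.txt
    done < url.txt |
    # ... then let xargs run at most MAX_JOBS curl processes at a time.
    # -n 2 hands each worker one url/uri pair as $1 and $2.
    xargs -n 2 -P "${MAX_JOBS:-10}" sh -c '
        status=$(curl -o /dev/null --insecure --silent --head \
                      --write-out "%{http_code}" --max-time 5 "$1$2")
        printf "%s %s %s\n" "$1" "$status" "$2"
    ' _ >> urlstatus.txt
}
```

Unlike the original loop, no request is started until a worker slot is free, so the load stays bounded at `MAX_JOBS` concurrent connections.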

Maybe add a small sleep between runs to calm things down, if throttling alone is insufficient.
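If you want to keep the question's nested-loop structure, the same throttling idea can be done with plain wait, which is where && and wait come in. A sketch under stated assumptions: the `check_urls` function name and the `MAX_JOBS` value are illustrative, not part of the original answer; `url.txt` and `uri.txt` are as in the question.

```shell
# Sketch: keep the question's nested loops, but wait after every batch of
# MAX_JOBS background jobs instead of backgrounding everything at once.
check_urls() {
    MAX_JOBS=10
    running=0
    while IFS= read -r url <&3; do
        while IFS= read -r uri <&4; do
            {
                urlstatus=$(curl -o /dev/null --insecure --silent --head \
                                 --write-out '%{http_code}' --max-time 5 "${url}${uri}") &&
                    echo "$url $urlstatus $uri" >> urlstatus.txt
            } &
            running=$((running + 1))
            if [ "$running" -ge "$MAX_JOBS" ]; then
                wait        # let the current batch finish before starting more
                running=0
            fi
        done 4< uri.txt
    done 3< url.txt
    wait                    # catch any stragglers from the last batch
}
```

With bash 4.3 or later, `wait -n` can reclaim one finished slot at a time instead of draining a whole batch, which keeps the pipeline fuller.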
