Question
Is there an easy way to limit the number of concurrent jobs in bash? By that I mean making `&` block when there are more than n concurrent jobs running in the background.
I know I can implement this with `ps | grep`-style tricks, but is there an easier way?
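For reference, the kind of workaround being asked about can be written without `ps | grep` at all, using the shell's own job control. This is only a sketch, and it assumes bash 4.3 or newer, where the `wait -n` builtin blocks until any one background job exits:

```shell
#!/usr/bin/env bash
# Keep at most $max_jobs background jobs running at once.
# Requires bash >= 4.3 for `wait -n`. The value 4 and the
# gzip-of-*.log workload are just illustrative.
max_jobs=4

for f in *.log; do
  # While we are at the limit, block until one job finishes.
  while (( $(jobs -rp | wc -l) >= max_jobs )); do
    wait -n
  done
  gzip "$f" &
done
wait   # reap the remaining jobs
```

`jobs -rp` lists the PIDs of currently running background jobs, so the loop only launches a new `gzip` once a slot is free.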
Answer
If you have GNU Parallel (http://www.gnu.org/software/parallel/) installed you can do this:
parallel gzip ::: *.log
which will run one gzip per CPU core until all log files are gzipped.
If it is part of a larger loop, you can use sem instead:
for i in *.log ; do
    echo "$i: Do more stuff here"
    sem -j+0 gzip $i ";" echo done
done
sem --wait
It will do the same, but gives you a chance to do more stuff for each file.
If GNU Parallel is not packaged for your distribution, you can install it simply by:
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash
It will download, check the signature, and do a personal installation if it cannot install globally.
Watch the intro videos for GNU Parallel to learn more: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1