Calling shell commands from a command list


This article describes how to call shell commands from a command list, limiting how many run at once, until all commands have completed. Hopefully it serves as a useful reference.

Problem Description


I have a list of shell commands that I'd like to call. Up to four processes shall run at the same time.

My basic idea would be to send the commands to the shell until 4 commands are active. The script then constantly checks the process count of all processes by looking for a common string, e.g. "nohup scrapy crawl urlMonitor".

As soon as the process count drops below 4, the next command is sent to the shell, until all commands have finished.

Is there a way to do this with a shell script? I suppose it would involve some kind of endless loop, a break condition, and a method to check for the active processes. Unfortunately I am not that good at shell scripting, so perhaps someone can guide me in the right direction?

nohup scrapy crawl urlMonitor -a slice=0 &
nohup scrapy crawl urlMonitor -a slice=1 &
nohup scrapy crawl urlMonitor -a slice=2 &
nohup scrapy crawl urlMonitor -a slice=3 &
nohup scrapy crawl urlMonitor -a slice=4 &
nohup scrapy crawl urlMonitor -a slice=5 &
nohup scrapy crawl urlMonitor -a slice=6 &
nohup scrapy crawl urlMonitor -a slice=7 &
nohup scrapy crawl urlMonitor -a slice=8 &
nohup scrapy crawl urlMonitor -a slice=9 &
nohup scrapy crawl urlMonitor -a slice=10 &
nohup scrapy crawl urlMonitor -a slice=11 &
nohup scrapy crawl urlMonitor -a slice=12 &
nohup scrapy crawl urlMonitor -a slice=13 &
nohup scrapy crawl urlMonitor -a slice=14 &
nohup scrapy crawl urlMonitor -a slice=15 &
nohup scrapy crawl urlMonitor -a slice=16 &
nohup scrapy crawl urlMonitor -a slice=17 &
nohup scrapy crawl urlMonitor -a slice=18 &
nohup scrapy crawl urlMonitor -a slice=19 &
nohup scrapy crawl urlMonitor -a slice=20 &
nohup scrapy crawl urlMonitor -a slice=21 &
nohup scrapy crawl urlMonitor -a slice=22 &
nohup scrapy crawl urlMonitor -a slice=23 &
nohup scrapy crawl urlMonitor -a slice=24 &
nohup scrapy crawl urlMonitor -a slice=25 &
nohup scrapy crawl urlMonitor -a slice=26 &
nohup scrapy crawl urlMonitor -a slice=27 &
nohup scrapy crawl urlMonitor -a slice=28 &
nohup scrapy crawl urlMonitor -a slice=29 &
nohup scrapy crawl urlMonitor -a slice=30 &
nohup scrapy crawl urlMonitor -a slice=31 &
nohup scrapy crawl urlMonitor -a slice=32 &
nohup scrapy crawl urlMonitor -a slice=33 &
nohup scrapy crawl urlMonitor -a slice=34 &
nohup scrapy crawl urlMonitor -a slice=35 &
nohup scrapy crawl urlMonitor -a slice=36 &
nohup scrapy crawl urlMonitor -a slice=37 &
nohup scrapy crawl urlMonitor -a slice=38 &
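
For illustration only, here is a minimal sketch of the polling idea described in the question: count the crawls that are still running by matching the common command-line string with pgrep, and sleep until a slot frees up. The loop bounds and the 5-second polling interval are assumptions, not part of the original question, and the solution below is more robust.

#!/bin/bash
# Naive polling approach: keep at most 4 crawls running by counting the
# processes whose full command line matches the common string.
max_procs=4
for slice in $(seq 0 38); do
   # wait while 4 or more matching crawls are still running
   while [ "$(pgrep -fc 'scrapy crawl urlMonitor')" -ge "$max_procs" ]; do
      sleep 5
   done
   nohup scrapy crawl urlMonitor -a slice=$slice &
done
wait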
Solution

Here's a general method that always ensures there are fewer than 4 running jobs before launching another one (there may still be more than 4 jobs running simultaneously if a single line launches several jobs at once):

#!/bin/bash

max_nb_jobs=4
commands_file=$1

while IFS= read -r line; do
   # Block until fewer than max_nb_jobs jobs are running
   while :; do
      mapfile -t jobs < <(jobs -pr)
      ((${#jobs[@]}<max_nb_jobs)) && break
      wait -n
   done
   eval "$line"
done < "$commands_file"

# Wait for all remaining jobs to finish
wait

Use this script with your commands file as its first argument.
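
For example, assuming the script is saved as run_limited.sh (an illustrative name, not from the original answer) and the command list from the question is stored in commands.txt, one command per line and each ending in &:

# commands.txt holds the lines from the question, e.g.
#   nohup scrapy crawl urlMonitor -a slice=0 &
#   nohup scrapy crawl urlMonitor -a slice=1 &
#   ...
chmod +x run_limited.sh
./run_limited.sh commands.txt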

How does it work? For each line read, we first ensure that there are fewer than max_nb_jobs running jobs by counting them (obtained from jobs -pr). If there are max_nb_jobs or more, we wait for the next job to terminate (wait -n) and count the running jobs again. Once there are fewer than max_nb_jobs running, we eval the line.
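
As a side note, wait -n (available since Bash 4.3) returns as soon as any one background job exits, which is what lets the loop top the pool back up to max_nb_jobs. A tiny standalone illustration:

# wait -n returns as soon as ANY background job exits (Bash 4.3+)
sleep 3 &
sleep 1 &
sleep 2 &
wait -n                                 # returns after roughly 1 second
echo "one job finished, $(jobs -pr | wc -l) still running"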


Update

Here's a similar script that doesn't use wait -n: instead, a trap on SIGCHLD feeds a line to a coproc that is blocked on read, so waiting on that coprocess returns as soon as any child terminates. It seems to do the job all right (tested on Debian with Bash 4.2):

#!/bin/bash

set -m

max_nb_jobs=4
file_list=$1

sleep_jobs() {
   # This function sleeps until there are fewer than $1 jobs running
   # Make sure that you have set -m before using this function!
   local n=$1 jobs
   while mapfile -t jobs < <(jobs -pr) && ((${#jobs[@]}>=n)); do
      # Start a coprocess that blocks on read; the SIGCHLD trap writes a
      # line to it when a child exits, so waiting on the coprocess
      # returns as soon as any job terminates
      coproc read
      trap "echo >&${COPROC[1]}; trap '' SIGCHLD" SIGCHLD
      wait $COPROC_PID
   done
}

while IFS= read -r line; do
   sleep_jobs $max_nb_jobs
   eval "$line"
done < "$file_list"

wait
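
To see the coproc/SIGCHLD trick in isolation, here is a small standalone sketch built on the same assumptions as the answer (set -m enabled, a Bash with coproc support); it only illustrates the mechanism and is not part of the original answer:

set -m
sleep 1 &                 # some background job whose exit we want to detect
coproc read               # coprocess that blocks until it receives a line
# when any child exits, feed the coprocess a line and disarm the trap
trap "echo >&${COPROC[1]}; trap '' SIGCHLD" SIGCHLD
wait $COPROC_PID          # interrupted by SIGCHLD, returns once the sleep exits
echo "a background job has terminated"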

This concludes this article on calling shell commands from a command list until all commands have completed. Hopefully the answer above is helpful.
