This article looks at naive parallelization inside a .pbs file; it may be a useful reference if you are facing the same problem.

Problem description

Is it possible to parallelize across a for loop in a PBS file?

Below is my attempt at a .pbs file. I would like to allocate 4 nodes with 16 processes per node. I have done that successfully, but now I have 4 jobs and I would like to send one job to each node. (I need to do this because the queuing algorithm would make me wait several days if I submitted 4 separate jobs on the cluster I'm using.)

#!/bin/bash
#PBS -q normal
#PBS -l nodes=4:ppn=16:native
#PBS -l walltime=10:00:00
#PBS -N HuMiBi000
#PBS -o HuMiBi.000.out
#PBS -e HuMiBi.000.err
#PBS -A csd399
#PBS -m abe
#PBS -V

./job1.sh
./job2.sh
./job3.sh
./job4.sh

The jobs run independently and don't use the same data. Can I run 1 job per node from the same .pbs script?

Thanks.

Recommended answer

The standard way to achieve this is through a Message Passing Interface (MPI) library. Open MPI is a fine implementation you can work with, and basic examples and Open MPI tutorials are easy to find if you want to learn more.
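As a rough sketch of how that can be applied to the script above (assuming Open MPI's mpirun and its OMPI_COMM_WORLD_RANK environment variable; run_by_rank.sh is a hypothetical wrapper name, and only the job*.sh scripts come from the question), you can launch one process per allocated node and let each process pick a different job based on its MPI rank:

#!/bin/bash
#PBS -q normal
#PBS -l nodes=4:ppn=16:native
#PBS -l walltime=10:00:00
#PBS -N HuMiBi000
#PBS -o HuMiBi.000.out
#PBS -e HuMiBi.000.err
#PBS -A csd399
#PBS -m abe
#PBS -V

cd "$PBS_O_WORKDIR"

# Start exactly one process on each of the 4 allocated nodes.
mpirun -npernode 1 ./run_by_rank.sh

with run_by_rank.sh along these lines:

#!/bin/bash
# Hypothetical wrapper: Open MPI sets OMPI_COMM_WORLD_RANK (0-3 here),
# so each node runs a different job, and each job has its node's 16 cores to itself.
case "$OMPI_COMM_WORLD_RANK" in
  0) ./job1.sh ;;
  1) ./job2.sh ;;
  2) ./job3.sh ;;
  3) ./job4.sh ;;
esac

Whether -npernode works as-is depends on the Open MPI version and on it being built with PBS/Torque support (so mpirun picks up the node list from the scheduler); recent releases spell the same mapping as --map-by ppr:1:node.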

That concludes this look at naive parallelization in a .pbs file; hopefully the recommended answer is helpful.
