EDIT
What I am really looking for is a way to simulate SLURM: something interactive and reasonably easy to use that I can install on my own machine.
Original post
I want to use SLURM to test some minimal examples, and I am trying to install everything on my local machine running Ubuntu 16.04. I am following the most recent slurm install guide I could find, and I got as far as starting slurmd with sudo /etc/init.d/slurmd start.
[....] Starting slurmd (via systemctl): slurmd.service
Job for slurmd.service failed because the control process exited with error code. See "systemctl status slurmd.service" and "journalctl -xe" for details.
failed!
I do not know how to interpret the systemctl log:
● slurmd.service - Slurm node daemon
Loaded: loaded (/lib/systemd/system/slurmd.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2017-10-26 22:49:27 EDT; 12s ago
Process: 5951 ExecStart=/usr/sbin/slurmd $SLURMD_OPTIONS (code=exited, status=1/FAILURE)
Oct 26 22:49:27 Haggunenon systemd[1]: Starting Slurm node daemon...
Oct 26 22:49:27 Haggunenon systemd[1]: slurmd.service: Control process exited, code=exited status=1
Oct 26 22:49:27 Haggunenon systemd[1]: Failed to start Slurm node daemon.
Oct 26 22:49:27 Haggunenon systemd[1]: slurmd.service: Unit entered failed state.
Oct 26 22:49:27 Haggunenon systemd[1]: slurmd.service: Failed with result 'exit-code'.
lsb_release -a gives the following. (Yes, I know: strictly speaking, KDE Neon is not exactly Ubuntu.)
No LSB modules are available.
Distributor ID: neon
Description: KDE neon User Edition 5.11
Release: 16.04
Codename: xenial
Unlike the guide, I used my own user name, wlandau, and I made sure to chown /var/lib/slurm-llnl and /var/run/slurm-llnl to myself, roughly as sketched below.
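In terms of commands, that was roughly the following (a sketch; the exact invocation may have differed):
sudo chown -R wlandau /var/lib/slurm-llnl
sudo chown -R wlandau /var/run/slurm-llnl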
Here is my /etc/slurm-llnl/slurm.conf.
# slurm.conf file generated by configurator.html.
# Put this file on all nodes of your cluster.
# See the slurm.conf man page for more information.
#
ControlMachine=linux0
#ControlAddr=
#BackupController=
#BackupAddr=
#
AuthType=auth/munge
CacheGroups=0
#CheckpointType=checkpoint/none
CryptoType=crypto/munge
#DisableRootJobs=NO
#EnforcePartLimits=NO
#Epilog=
#EpilogSlurmctld=
#FirstJobId=1
#MaxJobId=999999
#GresTypes=
#GroupUpdateForce=0
#GroupUpdateTime=600
#JobCheckpointDir=/var/lib/slurm-llnl/checkpoint
#JobCredentialPrivateKey=
#JobCredentialPublicCertificate=
#JobFileAppend=0
#JobRequeue=1
#JobSubmitPlugins=1
#KillOnBadExit=0
#LaunchType=launch/slurm
#Licenses=foo*4,bar
#MailProg=/usr/bin/mail
#MaxJobCount=5000
#MaxStepCount=40000
#MaxTasksPerNode=128
MpiDefault=none
#MpiParams=ports=#-#
#PluginDir=
#PlugStackConfig=
#PrivateData=jobs
ProctrackType=proctrack/pgid
#Prolog=
#PrologFlags=
#PrologSlurmctld=
#PropagatePrioProcess=0
#PropagateResourceLimits=
#PropagateResourceLimitsExcept=
#RebootProgram=
ReturnToService=1
#SallocDefaultCommand=
SlurmctldPidFile=/var/run/slurm-llnl/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurm-llnl/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/lib/slurm-llnl/slurmd
SlurmUser=wlandau
#SlurmdUser=root
#SrunEpilog=
#SrunProlog=
StateSaveLocation=/var/lib/slurm-llnl/slurmctld
SwitchType=switch/none
#TaskEpilog=
TaskPlugin=task/none
#TaskPluginParam=
#TaskProlog=
#TopologyPlugin=topology/tree
#TmpFS=/tmp
#TrackWCKey=no
#TreeWidth=
#UnkillableStepProgram=
#UsePAM=0
#
#
# TIMERS
#BatchStartTimeout=10
#CompleteWait=0
#EpilogMsgTime=2000
#GetEnvTimeout=2
#HealthCheckInterval=0
#HealthCheckProgram=
InactiveLimit=0
KillWait=30
#MessageTimeout=10
#ResvOverRun=0
MinJobAge=300
#OverTimeLimit=0
SlurmctldTimeout=120
SlurmdTimeout=300
#UnkillableStepTimeout=60
#VSizeFactor=0
Waittime=0
#
#
# SCHEDULING
#DefMemPerCPU=0
FastSchedule=1
#MaxMemPerCPU=0
#SchedulerRootFilter=1
#SchedulerTimeSlice=30
SchedulerType=sched/backfill
SchedulerPort=7321
SelectType=select/linear
#SelectTypeParameters=
#
#
# JOB PRIORITY
#PriorityFlags=
#PriorityType=priority/basic
#PriorityDecayHalfLife=
#PriorityCalcPeriod=
#PriorityFavorSmall=
#PriorityMaxAge=
#PriorityUsageResetPeriod=
#PriorityWeightAge=
#PriorityWeightFairshare=
#PriorityWeightJobSize=
#PriorityWeightPartition=
#PriorityWeightQOS=
#
#
# LOGGING AND ACCOUNTING
#AccountingStorageEnforce=0
#AccountingStorageHost=
#AccountingStorageLoc=
#AccountingStoragePass=
#AccountingStoragePort=
AccountingStorageType=accounting_storage/none
#AccountingStorageUser=
AccountingStoreJobComment=YES
ClusterName=cluster
#DebugFlags=
#JobCompHost=
#JobCompLoc=
#JobCompPass=
#JobCompPort=
JobCompType=jobcomp/none
#JobCompUser=
#JobContainerPlugin=job_container/none
JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/none
SlurmctldDebug=3
SlurmctldLogFile=/var/log/slurm-llnl/slurmctld.log
SlurmdDebug=3
SlurmdLogFile=/var/log/slurm-llnl/slurmd.log
#SlurmSchedLogFile=
#SlurmSchedLogLevel=
#
#
# POWER SAVE SUPPORT FOR IDLE NODES (optional)
#SuspendProgram=
#ResumeProgram=
#SuspendTimeout=
#ResumeTimeout=
#ResumeRate=
#SuspendExcNodes=
#SuspendExcParts=
#SuspendRate=
#SuspendTime=
#
#
# COMPUTE NODES
NodeName=linux[1-32] CPUs=1 State=UNKNOWN
PartitionName=debug Nodes=linux[1-32] Default=YES MaxTime=INFINITE State=UP
Follow-up
After rewriting my slurm.conf with the help of @damienfrancois, slurmd now starts. But unfortunately, sinfo hangs when I call it, and I get the same error message as before.
$ sudo /etc/init.d/slurmctld stop
[ ok ] Stopping slurmctld (via systemctl): slurmctld.service.
$ sudo /etc/init.d/slurmctld start
[ ok ] Starting slurmctld (via systemctl): slurmctld.service.
$ sinfo
slurm_load_partitions: Unable to contact slurm controller (connect failure)
$ slurmd -Dvvv
slurmd: fatal: Frontend not configured correctly in slurm.conf. See man slurm.conf look for frontendname.
Then I tried to restart the daemons, and slurmd would not restart.
$ sudo /etc/init.d/slurmd start
[....] Starting slurmd (via systemctl): slurmd.service
Job for slurmd.service failed because the control process exited with error code. See "systemctl status slurmd.service" and "journalctl -xe" for details.
failed!
Best answer
The value of ControlMachine must match the output of hostname -s on the machine where slurmctld is started. The same goes for NodeName; it must match the output of hostname -s on the machine where slurmd is started. Since you have only one machine, and it appears to be called Haggunenon, the relevant lines of slurm.conf should be:
ControlMachine=Haggunenon
[...]
NodeName=Haggunenon CPUs=1 State=UNKNOWN
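A quick way to verify the match (a sketch; Haggunenon is the host name taken from the systemd log above):
$ hostname -s
Haggunenon
$ grep -E '^(ControlMachine|NodeName)' /etc/slurm-llnl/slurm.conf
ControlMachine=Haggunenon
NodeName=Haggunenon CPUs=1 State=UNKNOWN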
If you want to start several slurmd daemons to emulate a larger cluster, you will need to start slurmd with the -N option (but this requires Slurm to have been built with the --enable-multiple-slurmd configure option), as in the sketch below.
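A minimal sketch of that multi-daemon setup, assuming Slurm is built from source (the prefix, node names, and ports below are illustrative, not taken from any guide):
$ ./configure --enable-multiple-slurmd --prefix=/opt/slurm
$ make && sudo make install
# slurm.conf then defines several virtual nodes on the one physical host, e.g.:
#   NodeName=node1 NodeHostname=Haggunenon Port=6001 CPUs=1 State=UNKNOWN
#   NodeName=node2 NodeHostname=Haggunenon Port=6002 CPUs=1 State=UNKNOWN
# and each slurmd is started under its own node name:
$ sudo slurmd -N node1
$ sudo slurmd -N node2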
Update. Here is a walkthrough. I set up a VM with Vagrant and VirtualBox (vagrant init ubuntu/xenial64 ; vagrant up), and then ran the following commands after vagrant ssh:
ubuntu@ubuntu-xenial:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.3 LTS
Release: 16.04
Codename: xenial
ubuntu@ubuntu-xenial:~$ sudo apt-get update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
[...]
Get:35 http://archive.ubuntu.com/ubuntu xenial-backports/universe Translation-en [3,060 B]
Fetched 23.6 MB in 4s (4,783 kB/s)
Reading package lists... Done
ubuntu@ubuntu-xenial:~$ sudo apt-get install munge libmunge2
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
libmunge2 munge
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 102 kB of archives.
After this operation, 351 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 libmunge2 amd64 0.5.11-3ubuntu0.1 [18.4 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 munge amd64 0.5.11-3ubuntu0.1 [83.9 kB]
Fetched 102 kB in 0s (290 kB/s)
Selecting previously unselected package libmunge2.
(Reading database ... 57914 files and directories currently installed.)
Preparing to unpack .../libmunge2_0.5.11-3ubuntu0.1_amd64.deb ...
Unpacking libmunge2 (0.5.11-3ubuntu0.1) ...
Selecting previously unselected package munge.
Preparing to unpack .../munge_0.5.11-3ubuntu0.1_amd64.deb ...
Unpacking munge (0.5.11-3ubuntu0.1) ...
Processing triggers for libc-bin (2.23-0ubuntu9) ...
Processing triggers for man-db (2.7.5-1) ...
Processing triggers for systemd (229-4ubuntu21) ...
Processing triggers for ureadahead (0.100.0-19) ...
Setting up libmunge2 (0.5.11-3ubuntu0.1) ...
Setting up munge (0.5.11-3ubuntu0.1) ...
Generating a pseudo-random key using /dev/urandom completed.
Please refer to /usr/share/doc/munge/README.Debian for instructions to generate more secure key.
Processing triggers for libc-bin (2.23-0ubuntu9) ...
Processing triggers for systemd (229-4ubuntu21) ...
Processing triggers for ureadahead (0.100.0-19) ...
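(An optional check, not part of the original walkthrough: at this point munge can be verified by round-tripping a credential; it should report Success.)
ubuntu@ubuntu-xenial:~$ munge -n | unmunge | grep STATUS
STATUS:           Success (0)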
ubuntu@ubuntu-xenial:~$ sudo apt-get install slurm-wlm slurm-wlm-basic-plugins
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
fontconfig fontconfig-config fonts-dejavu-core freeipmi-common libcairo2 libdatrie1 libdbi1 libfontconfig1 libfreeipmi16 libgraphite2-3
[...]
python-minimal python2.7 python2.7-minimal slurm-client slurm-wlm slurm-wlm-basic-plugins slurmctld slurmd
0 upgraded, 43 newly installed, 0 to remove and 0 not upgraded.
Need to get 20.8 MB of archives.
After this operation, 87.3 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 fonts-dejavu-core all 2.35-1 [1,039 kB]
[...]
Get:43 http://archive.ubuntu.com/ubuntu xenial/universe amd64 slurm-wlm amd64 15.08.7-1build1 [6,482 B]
Fetched 20.8 MB in 3s (5,274 kB/s)
Extracting templates from packages: 100%
Selecting previously unselected package fonts-dejavu-core.
(Reading database ... 57952 files and directories currently installed.)
[...]
Processing triggers for libc-bin (2.23-0ubuntu9) ...
Processing triggers for systemd (229-4ubuntu21) ...
Processing triggers for ureadahead (0.100.0-19) ...
ubuntu@ubuntu-xenial:~$ sudo vim /etc/slurm-llnl/slurm.conf
ubuntu@ubuntu-xenial:~$ grep -v \# /etc/slurm-llnl/slurm.conf
ControlMachine=ubuntu-xenial
AuthType=auth/munge
CacheGroups=0
CryptoType=crypto/munge
MpiDefault=none
ProctrackType=proctrack/pgid
ReturnToService=1
SlurmctldPidFile=/var/run/slurm-llnl/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurm-llnl/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/lib/slurm-llnl/slurmd
SlurmUser=ubuntu
StateSaveLocation=/var/lib/slurm-llnl/slurmctld
SwitchType=switch/none
TaskPlugin=task/none
InactiveLimit=0
KillWait=30
MinJobAge=300
SlurmctldTimeout=120
SlurmdTimeout=300
Waittime=0
FastSchedule=1
SchedulerType=sched/backfill
SchedulerPort=7321
SelectType=select/linear
AccountingStorageType=accounting_storage/none
AccountingStoreJobComment=YES
ClusterName=cluster
JobCompType=jobcomp/none
JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/none
SlurmctldDebug=3
SlurmctldLogFile=/var/log/slurm-llnl/slurmctld.log
SlurmdDebug=3
SlurmdLogFile=/var/log/slurm-llnl/slurmd.log
NodeName=ubuntu-xenial CPUs=1 State=UNKNOWN
PartitionName=debug Nodes=ubuntu-xenial Default=YES MaxTime=INFINITE State=UP
ubuntu@ubuntu-xenial:~$ sudo chown ubuntu /var/log/slurm-llnl
ubuntu@ubuntu-xenial:~$ sudo chown ubuntu /var/lib/slurm-llnl/slurmctld
ubuntu@ubuntu-xenial:~$ sudo chown ubuntu /var/run/slurm-llnl
ubuntu@ubuntu-xenial:~$ sudo /etc/init.d/slurmctld start
[ ok ] Starting slurmctld (via systemctl): slurmctld.service.
ubuntu@ubuntu-xenial:~$ sudo /etc/init.d/slurmd start
[ ok ] Starting slurmd (via systemctl): slurmd.service.
Finally, it gives me the expected result:
ubuntu@ubuntu-xenial:~$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
debug* up infinite 1 idle ubuntu-xenial
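(As an extra sanity check, not part of the original walkthrough: a trivial job can be submitted; with the single node idle, srun should simply print the node's host name.)
ubuntu@ubuntu-xenial:~$ srun hostname
ubuntu-xenial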
If following the exact steps here does not help, try running
sudo slurmctld -Dvvv
and
sudo slurmd -Dvvv
The messages should be explicit enough.