Enables and starts the given systemd service.
Prepare the master node names so that they can be reused, for instance in config preparation, munge key distribution, etc. The naming follows the general master-slave pattern.
Prepare the compute node names so that they can be reused, for instance in config preparation, munge key distribution, etc. The naming follows the general master-slave pattern.
Prepare all node names so that they can be reused.
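For illustration, a hedged sketch of how these name helpers might be used from a test module; the helper names master_node_names, slave_node_names and cluster_names, the $self-> method call style, and the use of testapi's record_info are assumptions:

    # Assumed helper names; each returns a list of node names following the
    # master-slave naming pattern described above.
    my @master_nodes = $self->master_node_names();
    my @slave_nodes  = $self->slave_node_names();
    my @all_nodes    = $self->cluster_names();
    # The names can then be reused, e.g. logged or iterated over per node.
    record_info('cluster nodes', join(' ', @all_nodes));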
Distributes the munge key across all compute nodes of the cluster. This should usually be called from the master node. If a replica master node is expected, the key should be copied to it as well.
Distributes the slurm config across all compute nodes of the cluster. This should usually be called from the master node. If a replica master node is expected, the config file should be copied to it as well.
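A hedged sketch of the expected call site; the helper names distribute_munge_key and distribute_slurm_conf and the way the master role is detected are assumptions for illustration:

    # Distribution is done once, from the master node, after munge and slurm
    # have been set up locally.
    if ($is_master_node) {    # however the test determines the master role
        $self->distribute_munge_key();
        $self->distribute_slurm_conf();
    }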
generate_and_distribute_ssh($user)
Generates and distributes ssh keys across the compute nodes. user defaults to root unless another value is passed as a parameter. user determines the user on the remote machine to which the ssh_id will be copied. This should usually be called from the master node. If a replica master node is expected, the ssh keys should also be distributed to it.
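A hedged usage sketch from the master node, assuming the $self-> method call style; the user name in the second call is purely illustrative:

    # Default: the keys are generated and copied for the root user.
    $self->generate_and_distribute_ssh();
    # Alternatively, copy the ssh_id for a different remote user.
    $self->generate_and_distribute_ssh('slurm');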
Checks that all listed HPC cluster nodes are reachable (via ping).
Ensure the correct directory is created and the correct NFS directory is mounted on the SUT.
Check the IP of the master node
Check the IP of the slave node
Creates the slurm user and group with a pre-defined ID.
prepare_spack_env($mpi)
After installing spack and the required HPC mpi packages, prepares the environment variables. The HPC packages (*-gnu-hpc) use an installation path that is separate from the rest and can be exported via a network file system.
After prepare_spack_env runs, spack should be ready to build the entire tool stack, downloading and installing all bits required for whatever package or compiler.
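A hedged usage sketch, assuming the mpi flavour is carried in a job variable named MPI, that testapi helpers are available, and that the spack spec shown is only an example:

    my $mpi = get_required_var('MPI');    # e.g. 'openmpi' or 'mpich'
    $self->prepare_spack_env($mpi);
    # spack should now be able to build the tool stack on top of that mpi.
    assert_script_run("spack install boost+mpi^$mpi", timeout => 3600);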
uninstall_spack_module($module)
Unload and uninstall module from the spack stack.
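A hedged sketch of the corresponding cleanup; the spack spec is illustrative and mirrors the install example above:

    # Unload and remove the previously built module from the spack stack.
    $self->uninstall_spack_module('boost+mpi^openmpi');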
get_compute_nodes_deps($mpi)
This function selects the dependency packages that must be installed on the HPC compute nodes in order to run code against a particular mpi implementation. get_compute_nodes_deps returns an array of packages.
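A hedged sketch of how the returned list might be consumed on a compute node; the MPI job variable and the zypper invocation are illustrative:

    my $mpi  = get_required_var('MPI');
    my @deps = $self->get_compute_nodes_deps($mpi);
    # Install the dependency packages before running the MPI code.
    assert_script_run("zypper -n in @deps");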
Prepare an NFS server on the so-called management node of the HPC setup. In a minimal setup, the management node should provide the directories of the installed *-gnu-hpc libraries and the directory with the binaries.
exports takes a hash reference with the paths which NFS should make available to the compute nodes in order to run MPI software.
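A hedged sketch for the management-node side; the helper name setup_nfs_server, the hash keys and the paths are all assumptions, since only a hash reference with paths is documented:

    my $exports = {
        hpc_lib_dir => '/usr/lib/hpc',        # *-gnu-hpc installed libraries
        bin_dir     => '/home/tester/bin',    # directory with the MPI binaries
    };
    $self->setup_nfs_server($exports);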
Make the HPC libraries and the location of the binaries available to the so-called compute nodes from the management node.
exports takes a hash reference with the paths which the management node shares in order to run the MPI binaries.
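A hedged sketch for the compute-node side; the helper name mount_nfs_exports is an assumption, and the same illustrative hash reference as above is reused:

    # Mount the paths shared by the management node so the MPI binaries and
    # the *-gnu-hpc libraries are visible locally.
    $self->mount_nfs_exports($exports);
    assert_script_run('ls /home/tester/bin');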