lib/hpcbase.pm

enable_and_start

Enables and starts the given systemd service.
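
A minimal usage sketch, assuming a test module that inherits from hpcbase; the service names are examples only:

    # inside the test module's run(), $self being the hpcbase-derived object
    $self->enable_and_start('munge');        # roughly: systemctl enable munge && systemctl start munge
    $self->enable_and_start('slurmctld');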

master_node_names

Prepare master node names so that they can be reused, for instance in config preparation, munge key distribution, etc. The naming follows the general master-slave pattern.

slave_node_names

Prepare compute node names so that they can be reused, for instance in config preparation, munge key distribution, etc. The naming follows the general master-slave pattern.

cluster_names

Prepare all node names (master and compute) so that they can be reused.
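
A hypothetical sketch of how the generated names could be consumed; the exact naming pattern (e.g. "master-node00", "slave-node01") is an assumption, not something the module guarantees:

    my @master_nodes = $self->master_node_names();
    my @slave_nodes  = $self->slave_node_names();
    foreach my $node ($self->cluster_names()) {
        # record every cluster member in the test log (record_info comes from openQA's testapi)
        record_info('node', "cluster member: $node");
    }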

distribute_munge_key

Distributes munge keys across all compute nodes of the cluster. This should usually be called from the master node. If a replica master node is expected, the key should be copied to it as well.

distribute_slurm_conf

Distributes the slurm config across all compute nodes of the cluster. This should usually be called from the master node. If a replica master node is expected, the config file should be copied to it as well.
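
A combined sketch for the two distribution helpers above; the file locations in the comments are the usual defaults, not paths this module is guaranteed to use:

    # called from the master node's test module
    $self->distribute_munge_key();
    $self->distribute_slurm_conf();
    # conceptually, per compute node, this amounts to copying:
    #   /etc/munge/munge.key  -> <node>:/etc/munge/munge.key
    #   /etc/slurm/slurm.conf -> <node>:/etc/slurm/slurm.conf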

generate_and_distribute_ssh

generate_and_distribute_ssh($user)

Generates and distributes SSH keys across compute nodes. $user defaults to root unless another value is passed; it determines the user on the remote machine to which the ssh_id will be copied. This should usually be called from the master node. If a replica master node is expected, the SSH keys should be distributed to it as well.
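
A hedged usage sketch; 'geeko' is just an example remote user:

    $self->generate_and_distribute_ssh();         # defaults to root on the compute nodes
    $self->generate_and_distribute_ssh('geeko');  # copy the identity for a non-root user instead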

check_nodes_availability

Checks if all listed HPC cluster nodes are available (ping)
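
A sketch of the kind of check this performs, assuming the node names come from cluster_names(); the ping invocation is illustrative only (assert_script_run comes from openQA's testapi):

    foreach my $node ($self->cluster_names()) {
        # fail the test step if a node does not answer a single ping
        assert_script_run("ping -c 1 $node");
    }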

mount_nfs

Ensure the correct directory is created and the correct NFS directory is mounted on the SUT.
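
A conceptual sketch of the underlying steps; the export path, mount point and host name are placeholders, not the module's actual defaults:

    # create the mount point and mount the NFS export on the SUT
    assert_script_run('mkdir -p /shared/hpc');
    assert_script_run('mount -t nfs master-node00:/shared/hpc /shared/hpc');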

get_master_ip

Check the IP of the master node

get_slave_ip

Check the IP of the slave node
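
A hypothetical way such an address could be consumed, assuming the methods return the IP as a string; the hostname and target file are assumptions:

    my $master_ip = $self->get_master_ip();
    assert_script_run("echo '$master_ip master-node00' >> /etc/hosts");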

prepare_user_and_group

Creates the slurm user and group with a pre-defined ID.
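
Conceptually this boils down to creating matching IDs on every node so that the daemons agree on file ownership; the numeric ID below is only an example, not the module's pre-defined value:

    assert_script_run('groupadd -g 7777 slurm');
    assert_script_run('useradd -u 7777 -g 7777 -s /bin/false slurm');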

prepare_spack_env

prepare_spack_env($mpi)

After installing spack and the required HPC MPI packages, prepares environment variables. The HPC packages (*-gnu-hpc) use an installation path that is separate from the rest and can be exported via a network file system.

After prepare_spack_env runs, spack should be ready to build the entire tool stack, downloading and installing all bits required for any given package or compiler.
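
A hedged usage sketch; reading the MPI flavour from a job setting and the follow-up spack command are assumptions, used only to illustrate that the stack is ready to build packages afterwards:

    my $mpi = get_required_var('MPI');    # e.g. 'openmpi' or 'mpich'
    $self->prepare_spack_env($mpi);
    assert_script_run("spack install $mpi", timeout => 3600);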

uninstall_spack_module

uninstall_spack_module($module)

Unload and uninstall a module from the spack stack.
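
Conceptually this pairs an unload with an uninstall; a sketch, with 'boost' as an example module name:

    $self->uninstall_spack_module('boost');
    # roughly equivalent to running:
    #   spack unload boost
    #   spack uninstall -y boost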

get_compute_nodes_deps

get_compute_nodes_deps($mpi)

This function selects the dependency packages which need to be installed on HPC compute nodes in order to run code against a particular MPI implementation. get_compute_nodes_deps returns an array of packages.

CAVEATS

Obsolete function, not in use since SLE 15 SP5. It was used to install dependencies of the HPC modules when the binaries were shared through NFS. Changes in openmpi break this on SLE 15 SP5, so it needs to be updated to be functional again. For now it can be used to find those dependencies prior to that version.
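
Despite the caveat above, the return value is straightforward to consume on versions where the helper still applies; a sketch assuming zypper_call from openQA's utils is available:

    my @deps = $self->get_compute_nodes_deps('openmpi');   # 'openmpi' is only an example value
    zypper_call("in @deps");                                # install the returned package list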

setup_nfs_server

Prepare an NFS server on the so-called management node of the HPC setup. In a minimal setup, the management node should provide the directories of the *-gnu-hpc installed libraries and the directory with the binaries.

exports takes a hash reference with the paths which NFS should make available to the compute nodes in order to run MPI software.
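
A hedged call sketch; the hash keys and paths are placeholders rather than the module's actual defaults, and the /etc/exports lines only illustrate what an NFS export of such paths looks like:

    $self->setup_nfs_server({
        binaries  => '/shared/bin',
        libraries => '/usr/lib/hpc',
    });
    # on the management node this corresponds to /etc/exports entries such as:
    #   /usr/lib/hpc *(rw,no_root_squash,sync,no_subtree_check)
    # followed by re-exporting: exportfs -ra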

mount_nfs_exports

Make the HPC libraries and the location of the binaries available to the so-called compute nodes from the management one. exports takes a hash reference with the paths which the management node shares in order to run the MPI binaries.
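
The compute-node counterpart, again as a sketch with placeholder paths, assuming the same hash reference that was handed to setup_nfs_server:

    my %exports = (
        binaries  => '/shared/bin',
        libraries => '/usr/lib/hpc',
    );
    $self->mount_nfs_exports(\%exports);
    # roughly, per path: mount -t nfs <management-node>:/usr/lib/hpc /usr/lib/hpc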