lib/hpcbase.pm

enable_and_start

Enables and starts the given systemd service.
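
A minimal sketch of how such a helper could look, assuming the systemctl wrapper provided by the test distribution's utils module; this is illustrative, not the module's actual code:

    use utils;

    # Sketch only: systemctl comes from utils.pm and runs the command on the SUT.
    sub enable_and_start {
        my ($self, $service) = @_;
        systemctl "enable $service";
        systemctl "start $service";
    }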

master_node_names

Prepares master node names so that they can be reused, for instance in config preparation, munge key distribution, etc. The naming follows the general master-slave pattern.

slave_node_names

Prepares compute node names so that they can be reused, for instance in config preparation, munge key distribution, etc. The naming follows the general master-slave pattern.

cluster_names

Prepares all node names (master and compute) so that they can be reused.
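
A combined sketch of the three naming helpers, assuming the node counts come from MASTER_NODES and CLUSTER_NODES test variables and that names follow the master-slave pattern described above; both the variables and the exact name format are assumptions:

    use testapi;

    # Sketch only: the test variables and the name format are illustrative.
    sub master_node_names {
        my ($self) = @_;
        my $count = get_var('MASTER_NODES', 1);    # assume a single master by default
        return map { sprintf('master-node%02d', $_) } (0 .. $count - 1);
    }

    sub slave_node_names {
        my ($self) = @_;
        my $count = get_required_var('CLUSTER_NODES') - 1;    # everything but the master
        return map { sprintf('slave-node%02d', $_) } (0 .. $count - 1);
    }

    sub cluster_names {
        my ($self) = @_;
        return ($self->master_node_names, $self->slave_node_names);
    }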

distribute_munge_key

Distributes the munge key across all compute nodes of the cluster. This should usually be called from the master node. If a replica master node is expected, the key should be copied to it as well.
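
A sketch of the distribution step, assuming passwordless ssh to the compute nodes has already been set up (see generate_and_distribute_ssh) and that slave_node_names returns their host names:

    use testapi;

    # Sketch only: copies the local munge key to every compute node.
    sub distribute_munge_key {
        my ($self) = @_;
        assert_script_run("scp -o StrictHostKeyChecking=no /etc/munge/munge.key root\@$_:/etc/munge/munge.key")
          for $self->slave_node_names;
    }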

distribute_slurm_conf

Distributes the slurm configuration across all compute nodes of the cluster. This should usually be called from the master node. If a replica master node is expected, the config file should be copied to it as well.
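
The slurm configuration can be pushed the same way; the config path below is the common SUSE default and is an assumption here:

    use testapi;

    # Sketch only: mirrors distribute_munge_key for /etc/slurm/slurm.conf.
    sub distribute_slurm_conf {
        my ($self) = @_;
        assert_script_run("scp -o StrictHostKeyChecking=no /etc/slurm/slurm.conf root\@$_:/etc/slurm/slurm.conf")
          for $self->slave_node_names;
    }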

generate_and_distribute_ssh

generate_and_distribute_ssh($user)

Generates and distributes ssh keys across compute nodes. $user defaults to root unless another value is passed; it determines the user on the remote machine to which the ssh id will be copied. This should usually be called from the master node. If a replica master node is expected, the ssh keys should be distributed to it as well.
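
A sketch of key generation and distribution, assuming exec_and_insert_password from utils.pm is available to answer the password prompt of ssh-copy-id:

    use testapi;
    use utils;

    # Sketch only: generates a key pair once and copies it to every compute node.
    sub generate_and_distribute_ssh {
        my ($self, $user) = @_;
        $user //= 'root';
        assert_script_run('ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa');
        exec_and_insert_password("ssh-copy-id -o StrictHostKeyChecking=no $user\@$_")
          for $self->slave_node_names;
    }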

check_nodes_availability

Checks whether all listed HPC cluster nodes are reachable (via ping).
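
A simple reachability check could look like this, reusing the naming helpers sketched above:

    use testapi;

    # Sketch only: fails the test if any node does not answer to ping.
    sub check_nodes_availability {
        my ($self) = @_;
        assert_script_run("ping -c 3 $_") for $self->cluster_names;
    }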

mount_nfs

Ensures the correct directory is created and the correct NFS share is mounted on the SUT.
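
A sketch of the mount step; the export path, mount point, and server name below are placeholders, not the module's actual values:

    use testapi;

    # Sketch only: creates the mount point and mounts the NFS export on the SUT.
    sub mount_nfs {
        my ($self) = @_;
        assert_script_run('mkdir -p /shared/hpc');
        assert_script_run('mount -t nfs master-node00:/shared/hpc /shared/hpc');
    }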

get_master_ip

Gets the IP address of the master node.

get_slave_ip

Gets the IP address of the slave node.
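
Both lookups could be sketched as a name resolution on the SUT; the node names follow the naming sketch above and are assumptions:

    use testapi;

    # Sketch only: resolves the node name to its IP address on the SUT.
    sub get_master_ip {
        my ($self) = @_;
        return script_output("getent hosts master-node00 | awk '{print \$1}'");
    }

    sub get_slave_ip {
        my ($self) = @_;
        return script_output("getent hosts slave-node00 | awk '{print \$1}'");
    }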

prepare_user_and_group

Creates the slurm user and group with a pre-defined ID.
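
A sketch of the user setup; the numeric ID below is an arbitrary example, not necessarily the one the module uses:

    use testapi;

    # Sketch only: fixed IDs keep slurm's UID/GID identical on all nodes.
    sub prepare_user_and_group {
        my ($self) = @_;
        assert_script_run('groupadd -g 468 slurm');
        assert_script_run('useradd -u 468 -g 468 -s /sbin/nologin -M slurm');
    }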

prepare_spack_env

prepare_spack_env($mpi)

Installs spack and the required HPC MPI packages, then prepares the environment variables. The HPC packages (*-gnu-hpc) use an installation path that is separate from the rest of the system and can be exported via a network file system.

After prepare_spack_env runs, spack should be ready to build the entire tool stack, downloading and installing everything required for any given package or compiler.

This sub is designed to install one of the MPI implementations, although there are thousands of packages available. Spack checks whether an MPI is already installed and builds the package if it is not found. To speed up the test run, you can install the $mpi-gnu-hpc and $mpi-gnu-hpc-devel packages in advance.

LD_LIBRARY_PATH is removed and is not exported by spack. If LD_LIBRARY_PATH is required, it has to be added to .spack/modules.yaml.
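
A rough sketch of the overall flow, assuming zypper_call from utils.pm, the *-gnu-hpc package names mentioned above, and the setup-env.sh path of the spack package; all of these are assumptions rather than the module's actual steps:

    use testapi;
    use utils;

    # Sketch only: installs spack plus the MPI packages and prepares the environment.
    sub prepare_spack_env {
        my ($self, $mpi) = @_;
        $mpi //= 'mpich';
        # installing the *-gnu-hpc packages in advance shortens the spack build
        zypper_call("in spack $mpi-gnu-hpc $mpi-gnu-hpc-devel");
        # make the spack shell integration available in the current session
        assert_script_run('. /usr/share/spack/setup-env.sh');
        # build (if needed) and load the requested MPI implementation
        assert_script_run("spack install $mpi", 3600);
        assert_script_run("spack load $mpi");
    }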