OpenMPI is a Message Passing Interface (MPI) implementation for C, Fortran, Java, and other languages.
Ubuntu (APT):
apt install build-essential
apt install openmpi-bin openmpi-doc libopenmpi-dev
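
To verify the installation, check the versions reported by the compiler wrapper and the launcher:

mpicc --version
mpirun --version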
Compile with mpicc (C) or mpic++ (C++); these wrapper scripts automatically add the OpenMPI include paths, options, and libraries.

For VS Code IntelliSense, add

{"C_Cpp.default.includePath": ["${myDefaultIncludePath}","/usr/lib/x86_64-linux-gnu/openmpi/include/"]}

in your project settings (.vscode/settings.json or ~/.config/Code/User/settings.json).
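
A minimal hello-world sketch to test the toolchain (the file name hello.c is illustrative):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of ranks */
    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}

Build it with mpicc -o hello hello.c and launch it with mpirun -n 4 ./hello (see the options below). Each rank prints independently, so the output order is arbitrary.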
Run applications with mpirun [opts] <app> [app_opts]. Useful options:
-n <n>: run <n> copies of the program.
--oversubscribe: allow more processes than available slots.
--allow-run-as-root: permit running as root (refused by default).
--mca btl_tcp_if_<include|exclude> <subnet|device>,...: restrict the TCP transport to (or exclude) the given subnets or devices.
--mca btl ^tcp: disable the TCP transport (the ^ negates the list).
--mca btl self,vader,tcp: use exactly the listed transports (self: loopback, vader: shared memory, tcp: network).
--mca pml ob1: select the ob1 point-to-point messaging layer.
The following applies to clusters using the Slurm workload manager. srun may be used instead of mpirun to let Slurm run the tasks directly through the PMI2/PMIx APIs. Unlike mpirun, this defaults to 1 process per node. Pass --mpi={pmi2|pmix} to explicitly use PMI2 or PMIx.
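
A minimal batch script sketch, assuming the Slurm installation has PMIx support (node/task counts and the program name are illustrative):

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4

# Slurm launches the MPI ranks directly; no mpirun needed
srun --mpi=pmix ./hello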
Use mpicc --show or mpic++ --show to see the underlying GCC command line, including the required MPI arguments.
The environment variables PMI_SIZE and PMI_RANK for PMI2, or PMIX_RANK for PMIx (there is no PMIX_SIZE), may be used to get the MPI world size and rank.
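
A sketch of reading these variables directly, without calling MPI_Init; which ones are set depends on the --mpi plugin chosen above:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* PMI2 sets both rank and size; PMIx only sets the rank. */
    const char *rank = getenv("PMI_RANK");
    const char *size = getenv("PMI_SIZE");
    if (rank == NULL)
        rank = getenv("PMIX_RANK");  /* PMIx fallback */
    printf("rank=%s size=%s\n", rank ? rank : "unset", size ? size : "unset");
    return 0;
}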