OpenMPI
OpenMPI is an implementation of the Message Passing Interface (MPI) specification for writing distributed computing applications. MPI provides message passing between processes without shared-memory access, while OpenMP exposes shared-memory parallelism within a single process. Both can be combined, allowing MPI ranks on the same node to share memory through OpenMP threads.
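As a minimal sketch of such a hybrid approach (the file name, compiler flags, and output format below are assumptions, not part of this article's example), each MPI rank can spawn a team of OpenMP threads:
! hybrid.f90 - hypothetical hybrid MPI/OpenMP sketch.
! Compile, for instance, with: mpifort -fopenmp -o hybrid hybrid.f90
program hybrid
    use :: mpi
    !$ use :: omp_lib
    implicit none
    integer :: provided, rank, rc, tid

    ! Request MPI thread support suitable for OpenMP regions.
    call mpi_init_thread(mpi_thread_funneled, provided, rc)
    call mpi_comm_rank(mpi_comm_world, rank, rc)

    tid = 0
    ! Each MPI rank spawns a team of shared-memory OpenMP threads.
    !$omp parallel private(tid)
    !$ tid = omp_get_thread_num()
    print '("rank ", i0, ", thread ", i0)', rank, tid
    !$omp end parallel

    call mpi_finalize(rc)
end program hybrid
Built with OpenMP support, every rank prints one line per thread; without it, the directives are ignored and the program still runs as a plain MPI application.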
Installation
On FreeBSD, install OpenMPI 4 with:
$ pkg install net/openmpi4
It may be necessary to add the OpenMPI binary directory to the system search path manually. In KornShell, extend the environment variable PATH with:
$ export PATH="$PATH:/usr/local/mpi/openmpi/bin/"
Add the export statement to your profile file (for example, ~/.kshrc) to make the changes permanent. Optionally, we may change the default Fortran compiler invoked by OpenMPI:
$ export OMPI_FC="gfortran13"
$ mpifort --version
GNU Fortran (FreeBSD Ports Collection) 13.2.0
Copyright (C) 2023 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Example
This basic example program consists of a master routine, a worker routine, and a main program that sets up OpenMPI and calls them:
! example.f90
module mpi_master
    ! The OpenMPI master process module.
    implicit none
    private
    public :: master
contains
    subroutine master(nworkers)
        character(len=*), parameter :: FMT = '("--- Master [", i4, 5(a, i2.2), "]")'
        integer, intent(in) :: nworkers
        integer :: dt(8)

        call date_and_time(values=dt)
        print FMT, dt(1), '/', dt(2), '/', dt(3), ' ', dt(5), ':', dt(6), ':', dt(7)
    end subroutine master
end module mpi_master

module mpi_worker
    ! The OpenMPI worker process module.
    implicit none
    private
    public :: worker
contains
    subroutine worker(nworkers, rank)
        integer, intent(in) :: nworkers
        integer, intent(in) :: rank

        print '(a, i0)', '>>> Hello from worker ', rank
        call sleep(1)
    end subroutine worker
end module mpi_worker

program main
    ! Example program that calls OpenMPI.
    use, intrinsic :: iso_fortran_env, only: r8 => real64
    use :: mpi
    use :: mpi_master
    use :: mpi_worker
    implicit none
    integer       :: master_rank
    integer       :: nproc
    integer       :: nworkers
    integer       :: rank
    integer       :: rc
    real(kind=r8) :: t1, t2

    ! Initialise OpenMPI communication infrastructure.
    call mpi_init(rc)

    ! Get number of active processes.
    call mpi_comm_size(mpi_comm_world, nproc, rc)

    master_rank = 0
    nworkers = nproc - 1

    ! Identify process.
    call mpi_comm_rank(mpi_comm_world, rank, rc)

    if (rank == master_rank) then
        ! Master process.
        t1 = mpi_wtime()
        call master(nworkers)
        t2 = mpi_wtime()
        print '("--- Timing: ", f6.4, " sec on ", i0, " workers")', t2 - t1, nworkers
    else
        ! Worker process.
        call worker(nworkers, rank)
    end if

    ! Terminate OpenMPI communication infrastructure.
    call mpi_finalize(rc)
end program main
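The example above does not yet exchange any messages between the ranks. As a sketch of the point-to-point routines (a separate, assumed file send_recv.f90, not part of the example program), each worker could send its rank to the master, which collects the values with mpi_recv:
! send_recv.f90 - hypothetical point-to-point sketch: every worker sends its
! rank to the master process, which prints the received values.
program send_recv
    use :: mpi
    implicit none
    integer :: buf, i, nproc, rank, rc
    integer :: stat(mpi_status_size)

    call mpi_init(rc)
    call mpi_comm_size(mpi_comm_world, nproc, rc)
    call mpi_comm_rank(mpi_comm_world, rank, rc)

    if (rank == 0) then
        ! Master: receive one integer from every worker, in any order.
        do i = 1, nproc - 1
            call mpi_recv(buf, 1, mpi_integer, mpi_any_source, 0, mpi_comm_world, stat, rc)
            print '("--- Received rank ", i0)', buf
        end do
    else
        ! Worker: send own rank with tag 0 to the master (rank 0).
        call mpi_send(rank, 1, mpi_integer, 0, 0, mpi_comm_world, rc)
    end if

    call mpi_finalize(rc)
end program send_recv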
Inside our Fortran project directory, we have to create a file hostfile with the following contents:
localhost slots=25
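Further machines could be listed in the same hostfile, one per line, each with its number of available slots; the host names below are placeholders:
node1.example.org slots=8
node2.example.org slots=8
For the single-machine example in this section, the localhost entry above is sufficient.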
We are then ready to compile and run the example OpenMPI program. Instead of calling GNU Fortran directly, we have to run the compiler wrapper mpifort. Afterwards, the program is executed through mpirun, which provides the runtime environment:
$ mpifort -o example example.f90
$ mpirun --hostfile ./hostfile -np 4 ./example
>>> Hello from worker 2
>>> Hello from worker 3
>>> Hello from worker 1
--- Master [2020/01/13 19:09:14]
--- Timing: 0.0011 sec on 3 workers
The argument -np sets the number of processes: one master and three workers. Please note that mpirun requires at least the loopback device to be available. If in doubt, make sure an Ethernet or Wi-Fi adapter is enabled.
Makefile
To simplify the build process, we can write a short Makefile that handles compilation and execution for us:
.POSIX:

FC       = mpifort
HOSTFILE = hostfile
NPROC    = 4
TARGET   = example

.PHONY: all clean run

all: $(TARGET)

$(TARGET):
	$(FC) -o $(TARGET) example.f90

run:
	mpirun --hostfile ./$(HOSTFILE) -np $(NPROC) ./$(TARGET)

clean:
	rm *.mod $(TARGET)
Now, we just have to run BSD make or GNU make:
$ make
mpifort -o example example.f90
$ make run
mpirun --hostfile ./hostfile -np 4 ./example
>>> Hello from worker 1
>>> Hello from worker 3
>>> Hello from worker 2
--- Master [2020/01/13 19:50:33]
--- Timing: 0.0010 sec on 3 workers