How to Compile and Run MPI Programs on Oracle Solaris 11

Using the Oracle Message Passing Toolkit (OMPT)

by Terry Dontje, December 2011


How to use the Oracle Message Passing Toolkit (OMPT) to compile and run a Message Passing Interface (MPI) application on Solaris 11.




Introduction

MPI is a standard specification for message-passing library calls between parallel processes, which might be running on different nodes in a cluster. Open MPI (OMPI) is an open-source implementation of MPI; version 1.5 currently conforms to the MPI-2 standard.


OMPT is the Oracle implementation of MPI. It is a preconfigured and built version of OMPI optimized for Oracle Solaris 11 platforms that includes enhancements, such as instrumented versions of the MPI libraries and hooks for use with DTrace or the Oracle Solaris Studio Performance Analyzer.

Using OMPT for Oracle Solaris 11: An Example

The following sections use an example to describe the process for using the OMPT to compile and run MPI programs.

Obtaining OMPT

To use OMPT, you need Oracle Solaris 11 installed and running on your system. Aside from root privileges during installation, no special privileges are typically required to run the OMPT utilities.

If you want to compile MPI programs, you also need to install Oracle Solaris Studio 12.1 or later. In a cluster of nodes on which you run MPI programs, you can install Oracle Solaris Studio on one node and compile there only, but you must install the openmpi-15 package on all of the nodes.

To obtain OMPT, you install the openmpi-15 package, which is hosted by an Image Packaging System (IPS) repository that should already be configured on your system. To verify that the IPS repository is configured, run the pkg publisher command.

If the IPS repository is not already configured, configure it before you attempt to add the openmpi-15 package. Otherwise, the package addition will fail.

The system should be configured either with network access (if the IPS repository is to be accessed over the network) or with a locally configured IPS repository that is served from the same system.
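If you need to point the system at a repository yourself, you can do so with the pkg set-publisher command. The following is only an illustration; the repository URI shown is the public Oracle release repository, so substitute whatever origin applies in your environment:

pkg set-publisher -g http://pkg.oracle.com/solaris/release solaris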

A locally configured IPS repository was used for the example in this article, as shown in the following output from the pkg publisher command, which indicates that the IPS repository is hosted over HTTP on localhost.

root@solarix:/usr/share/distro_const# pkg publisher
PUBLISHER                             TYPE     STATUS   URI
solaris                              origin   online   http://localhost/

After you have verified that the repository is configured, add the openmpi-15 package as root by running the pkg install openmpi-15 command. The openmpi-15 package is then downloaded from the IPS repository and installed immediately.

The pkg install openmpi-15 command displays the status of the operation, as shown in Listing 1, which makes it easy to observe the progress.

Listing 1: Output of the pkg install openmpi-15 Command
root@solarix:~# pkg install openmpi-15

Creating Plan  
                
    Packages to install:  2
Create boot environment: No

DOWNLOAD                                  PKGS       FILES    XFER (MB)
service/picl                               0/2      0/1430     0.0/11.9
developer/openmpi-15                       1/2   1430/1430    11.9/11.9
Completed                                  2/2   1430/1430    11.9/11.9

PHASE                                        ACTIONS
Install Phase                              1639/1639

PHASE                                          ITEMS
Package State Update Phase                       1/2 
Package State Update Phase                       2/2

Image State Update Phase                         1/2 
Image State Update Phase                         2/2

PHASE                                          ITEMS
Reading Existing Index                           1/8 
Reading Existing Index                           5/8 
Reading Existing Index                           8/8

Indexing Packages                                2/2

Once the openmpi-15 package has been installed, you just need to add /usr/openmpi/ompi-15/bin to your PATH environment variable, and then you are good to go.
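For example, in a Bourne-compatible shell (such as bash or ksh), you could add the following line to your shell startup file:

export PATH=/usr/openmpi/ompi-15/bin:$PATH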

Compiling an MPI Program with OMPT

There are two ways to compile MPI programs with OMPT. You can either use the compiler wrappers (mpicc, mpiCC, mpif77, and mpif90) included in the package or invoke the compilers directly.

The first method is recommended because the wrappers automatically supply the include, library, and run paths that must otherwise be added by hand. The wrappers pass all options they do not understand through to the Oracle Solaris Studio compilers, so you should be able to replace all instances of a compiler in a makefile with the wrapper utilities.
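For example, a minimal makefile fragment might look like the following (the target and source names here are hypothetical, and the command line under the target must be indented with a tab):

# Use the OMPT wrapper in place of the Oracle Solaris Studio cc driver.
CC     = mpicc
CFLAGS = -O

prog.x: prog.c
	$(CC) $(CFLAGS) -o prog.x prog.c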

Using Listing 2 as our example MPI program, we can compile the program with a one-line command:

mpicc hello.c -o hello.x

Listing 2: Example hello.c Program
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
  int np, me; 
  MPI_Init(&argc,&argv);
  MPI_Comm_size(MPI_COMM_WORLD,&np);
  MPI_Comm_rank(MPI_COMM_WORLD,&me);
  printf("hello from %d of %d\n", me, np);
  MPI_Finalize();
  return 0;
}

You can also do a two-phase compile and link:

mpicc -c hello.c
mpicc hello.o -o hello.x

If you want to execute the compilers directly instead of using the wrappers, you can determine the options that the wrappers use by providing the -showme option to the wrappers:

mpicc -showme hello.c -o hello.x

Running an MPI Program with OMPT

Once you have an executable compiled and linked with the OMPT libraries, you can run the program as a singleton or with the parallel job launcher utility mpirun.

To run a program as a singleton, you just run the executable as you would a normal non-parallel executable. However, it is then up to the program to call the appropriate MPI APIs (such as MPI_Comm_spawn or MPI_Comm_spawn_multiple) to spawn additional processes for the MPI job.
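The following sketch (not taken from the original article) illustrates that idea; the child program name worker.x and the process count of 4 are hypothetical, and you would compile this with mpicc like any other MPI program:

#include <mpi.h>

int main(int argc, char **argv) {
  MPI_Comm children;

  MPI_Init(&argc, &argv);

  /* Launch 4 instances of the (hypothetical) child binary worker.x.
     The children are reached through the intercommunicator returned
     in 'children'. */
  MPI_Comm_spawn("worker.x", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                 0, MPI_COMM_SELF, &children, MPI_ERRCODES_IGNORE);

  /* ... exchange messages with the children over 'children' ... */

  MPI_Comm_disconnect(&children);
  MPI_Finalize();
  return 0;
}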

To create an MPI job with multiple processes at the start, use the mpirun utility. For example, if you wanted to run our example MPI program over two nodes using eight processes in total, you could use the following command:

mpirun -np 8 -host hostname1,hostname2 hello.x
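The order of the output lines is nondeterministic, but each of the eight processes prints one line from the hello.c program in Listing 2, similar to the following:

hello from 0 of 8
hello from 4 of 8
hello from 1 of 8
...
hello from 7 of 8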

There are several options that you can use with mpirun to control process binding, layout, redirection of output, and much more. You can get more information about these options by running mpirun -h or by reading the mpirun man page (man mpirun).
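For example, a command along the following lines places four processes on each host, binds each process to a core, and reports the resulting placement; the exact option names can differ between Open MPI releases, so confirm them against your mpirun man page:

mpirun -np 8 -host hostname1,hostname2 -npernode 4 -bind-to-core -report-bindings hello.x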

Revision 1.0, 12/22/2011
