Introduction to MPI
Message Passing Interface (MPI) is a portable programming model used widely on parallel computers, especially Scalable Parallel Computers (SPCs) with distributed memory, and on Networks of Workstations (NOWs). Many vendors have their own MPI implementation, including Microsoft (with its MSMPI), Intel, and HP. I am happy that Microsoft finally released Windows HPC Server 2008, with MSMPI, in September 2008.
In this post, I would like to introduce you to MPI using MPICH2 from Argonne National Laboratory before jumping into the details of MSMPI inside Windows HPC Server 2008. Learning directly from MPICH2 is a good foundation because MPICH2 is distributed as source code (open source, with a free license) and has been tested on several platforms, including Windows (32- and 64-bit). The MPICH2 documentation is freely available there as well. My goal in this post is to show you how to get started with MPICH2 and create your first MPICH2 C++ program.
What you need to do first is download and install MPICH2; then you will have all the header files (*.h), the libraries (*.lib), and MPIEXEC.EXE to run jobs. Before you run any jobs, please install the MPICH2 Process Manager by running SMPD.EXE -install. There are many other SMPD options that you can learn about from the -help option. SMPD is installed as a Windows service, and you can manage it using the SMPD.EXE tool itself or through services.msc.
The MPI-2 Standard describes MPIEXEC as a suggested way to run MPI programs. A sample command to start an MPI job is MPIEXEC -n 32 [yourprogram], which starts your program with 32 processes (providing an MPI_COMM_WORLD of size 32 inside the MPI application). The -configfile argument lets you specify a file containing the specifications for process sets, one per line, so you do not need long MPIEXEC command lines. Please refer to the MPICH2 User's Guide for all MPIEXEC options. In Windows HPC Server 2008, you will have a nicer MPI launcher to help you run MPI programs.
Compiling the MPICH2 Source Code in Windows Server 2008
Now let's learn how to compile MPICH2 from its source code. You will need Perl because mpi.h and the other header files are generated automatically by a Perl script. I am using Windows Server 2008 x64 with Visual Studio 2008 and ActivePerl installed. To compile the MPICH2 source code, download and extract it from Argonne National Laboratory, then run the file called winconfigure.wsf. That script generates the required header files depending on your configuration. Once it finishes, open the Visual Studio solution (*.sln) and try to compile. I believe you will find around 50 errors of:
'_vsnprintf': attributes inconsistent with previous declaration
in your first compilation. To solve this, find the following config header (I'm using x64):
..\MPICH2-1.0.8\src\include\win64\mpichconf.h
Then change the code as shown below:
#if !defined(_MSC_VER) || defined(__MINGW32__)
#define snprintf _snprintf
#define vsnprintf _vsnprintf
#endif
If you are using a 64-bit machine like me, make sure you change the include directory that contains mpichconf.h in all project properties (initially it points to the win32 folder). Then you can try to compile the ch3sock, ch3shm, ch3sshm, ch3ssm, and ch3ib projects (Release or Debug). The last part of the compilation is a solution build with the Release or Debug configuration to build MPIEXEC and SMPD. The MPICH2 DLLs can be built to use shared-memory communication, TCP socket communication, or both:
- Select the ch3sockDebug or ch3sockRelease configuration to build the sockets-only DLLs.
- Select the ch3shmDebug or ch3shmRelease configuration to build the shared-memory-only DLLs.
- Select the ch3sshmDebug or ch3sshmRelease configuration to build the scalable shared-memory-only DLLs.
- Select the ch3ssmDebug or ch3ssmRelease configuration to build the sockets+shared-memory DLLs.
- Select the ch3ibDebug or ch3ibRelease configuration to build the InfiniBand DLLs.
I hope you are done with the compilation. Now let's look at the simplest MPI program, which calculates the value of pi:
// This is an interactive version of cpi
#include "mpi.h"
#include <stdio.h>
#include <math.h>

// The integrand: integrating 4/(1+x^2) over [0,1] gives pi
double f(double a)
{
    return (4.0 / (1.0 + a*a));
}

int main(int argc, char *argv[])
{
    int done = 0, n, myid, numprocs, i;
    double PI25DT = 3.141592653589793238462643;
    double mypi, pi, h, sum, x;
    double startwtime = 0.0, endwtime;
    int namelen;
    char processor_name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Get_processor_name(processor_name, &namelen);
    /*
    fprintf(stdout, "Process %d of %d is on %s\n", myid, numprocs, processor_name);
    fflush(stdout);
    */
    while (!done) {
        if (myid == 0) {
            fprintf(stdout, "Enter the number of intervals: (0 quits) ");
            fflush(stdout);
            if (scanf("%d", &n) != 1) {
                fprintf(stdout, "No number entered; quitting\n");
                n = 0;
            }
            startwtime = MPI_Wtime();
        }
        /* Send n from the root process to all other processes */
        MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
        if (n == 0)
            done = 1;
        else {
            h = 1.0 / (double) n;
            sum = 0.0;
            /* Each process sums every numprocs-th interval midpoint */
            for (i = myid + 1; i <= n; i += numprocs) {
                x = h * ((double)i - 0.5);
                sum += f(x);
            }
            mypi = h * sum;
            /* Add the partial sums together on the root process */
            MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
            if (myid == 0) {
                printf("pi is approximately %.16f, Error is %.16f\n",
                       pi, fabs(pi - PI25DT));
                endwtime = MPI_Wtime();
                printf("wall clock time = %f\n", endwtime - startwtime);
                fflush(stdout);
            }
        }
    }
    MPI_Finalize();
    return 0;
}
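For reference, the inner loop of cpi is just the midpoint rule applied to a well-known integral for pi (this note is mine, not part of the original cpi source):

```latex
\pi = \int_0^1 \frac{4}{1+x^2}\,dx
    \approx h \sum_{i=1}^{n} f\!\left(h\left(i - \tfrac{1}{2}\right)\right),
\qquad h = \frac{1}{n},\quad f(x) = \frac{4}{1+x^2}
```

Each process with rank myid sums only the terms i = myid+1, myid+1+numprocs, and so on, and MPI_Reduce adds the partial sums together on rank 0.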
I will explain more about all the MPI code in an upcoming post. For now, just try to install MPICH2, compile it, and run your first MPI program (like cpi). Let me know if you have problems with the compilation.
Cheers – RAM