Parallel Programming With MPI

MPI Hello World Example

The following is a sample MPI program that prints a greeting message. At run time, the program creates four processes, each of which prints a greeting message that includes its process ID (rank).

  • C program mpi_hello_world.c

  • bash script mjob.sh

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv)
{
	// Initialize the MPI environment
	MPI_Init(NULL, NULL);

	// Get the rank of the process
	int PID;
	MPI_Comm_rank(MPI_COMM_WORLD, &PID);

	// Get the number of processes
	int number_of_processes;
	MPI_Comm_size(MPI_COMM_WORLD, &number_of_processes);

	// Get the name of the processor
	char processor_name[MPI_MAX_PROCESSOR_NAME];
	int name_length;
	MPI_Get_processor_name(processor_name, &name_length);

	// Print off a hello world message
	printf("Hello MPI user: from process PID %d out of %d processes on machine %s\n", PID, number_of_processes, processor_name);

	// Finalize the MPI environment
	MPI_Finalize();

	return 0;
}
#!/bin/bash

# Set the name of the source code file and the executable
SRC=mpi_hello_world.c
OBJ=mpi_hello_world

# A command to compile a parallel program with MPI
MPICC=mpicc

# A command to execute a parallel program with MPI
MPIRUN=mpirun

# number of processes to be spawned
NUM=4

# Compile the source code
$MPICC $FLAGS -o $OBJ $SRC

# Run the executable file
$MPIRUN -n $NUM ./$OBJ

# Delete the executable file
rm $OBJ

Explanation

This section provides a brief explanation of the aim of each MPI routine used in the program above.

MPI Routines Description

MPI_Init(NULL, NULL)

MPI_Init initializes the MPI execution environment. It forms the communicator MPI_COMM_WORLD around the spawned processes and assigns a unique process ID (rank) to each of them.

MPI_Comm_rank(MPI_COMM_WORLD, &PID)

MPI_Comm_rank returns the rank of the current process in the communicator MPI_COMM_WORLD.

MPI_Comm_size(MPI_COMM_WORLD, &number_of_processes)

MPI_Comm_size returns the size of the communicator MPI_COMM_WORLD, which is the total number of spawned processes.

MPI_Get_processor_name(processor_name, &name_length)

MPI_Get_processor_name returns the name of the machine on which the corresponding process is executing and the length of the name.

MPI_Finalize()

MPI_Finalize cleans up the MPI execution environment. After this point, the spawned processes no longer exist, and the program runs in serial mode.

The MPI_Init routine accepts pointers to the C/C++ arguments argc and argv, as in MPI_Init(&argc, &argv), but they are not required; passing NULL for both is also acceptable.

Compiling and Running the MPI Hello World Program

  • Make sure the OpenMPI library is installed on your computer.

  • Copy the C program mpi_hello_world.c and the bash script mjob.sh to your computer.

  • Launch the terminal application and change the current working directory to the directory that contains the files you copied.

  • Make sure the bash script file is executable by executing the command below:

chmod +x mjob.sh
  • Execute the command below to compile and run the MPI program:

./mjob.sh

The variable NUM in the bash script file sets the number of processes the MPI program creates when it runs.

To interactively compile and run mpi_hello_world.c from the terminal, please execute the following commands:

mpicc -o mpi_hello_world mpi_hello_world.c # compile the MPI program into executable

mpirun -n 4 ./mpi_hello_world # Run the executable

Output

Hello MPI user: from process PID 1 out of 4 processes on machine discovery-l2
Hello MPI user: from process PID 2 out of 4 processes on machine discovery-l2
Hello MPI user: from process PID 3 out of 4 processes on machine discovery-l2
Hello MPI user: from process PID 0 out of 4 processes on machine discovery-l2

Note that the processes print independently, so the order of the lines varies from run to run.

MPI Sequential Search Example

The MPI program below finds the frequency of a number in a list of numbers. In detail, the program does the following:

  • Receives a number from the program user.

  • Creates an array of random numbers.

  • Creates a number of processes.

  • Lets the master process split the array into parts and distribute them among all worker processes.

  • Lets the master process pass the user input to all worker processes.

  • Lets each worker process find the frequency of the user input in its part of the array and send the result to the master process.

  • Lets the master process collect the results from all worker processes and print the total.
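The split-and-count logic in the steps above can be exercised without MPI. The sketch below is illustrative only (the helper names are not taken from the program); it mimics the master's chunking arithmetic, where every process gets ARRAY_SIZE / number_of_processes elements and the last one also takes the remainder:

```c
#include <stddef.h>

/* Count how many times `key` occurs in arr[0..n-1] -- the linear scan
 * each process performs on its portion of the array. */
static unsigned long count_frequency(const int *arr, size_t n, int key)
{
	unsigned long frequency = 0;
	for (size_t i = 0; i < n; ++i)
		if (arr[i] == key)
			frequency += 1;
	return frequency;
}

/* Split an array of `total` elements among `p` processes the same way
 * the master does: equal chunks, with the last process also taking the
 * remainder, then sum the per-chunk counts. */
static unsigned long parallel_count(const int *arr, size_t total, int p, int key)
{
	size_t chunk = total / p;
	unsigned long frequency = 0;
	for (int rank = 0; rank < p; ++rank) {
		size_t begin = (size_t)rank * chunk;
		size_t len = (rank == p - 1) ? total - begin : chunk;
		frequency += count_frequency(arr + begin, len, key);
	}
	return frequency;
}
```

Summing the per-chunk counts yields the same result as a single scan over the whole array; in the MPI program, MPI_Send and MPI_Recv simply move the chunks and the partial counts between processes.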

  • C program mpi_sequential_search.c

  • bash script ssjob.sh

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <mpi.h>

#define ARRAY_SIZE 1000000

int main(int argc, char** argv)
{
	if (argc < 2) {
		fprintf(stderr, "usage: %s <number>\n", argv[0]);
		return 1;
	}
	int num = atoi(argv[1]);
	static int list_of_numbers[ARRAY_SIZE];
	int i;
	time_t t;


	// Use current time as seed for random generator
	srand((unsigned) time(&t));

	//Fill the array with numbers randomly generated
	for( i = 0 ; i < ARRAY_SIZE ; ++i )
		list_of_numbers[i] = rand() % 100;

	// a data struct that provides more information on the received  message
	MPI_Status status;

	// Initialize the MPI environment
	MPI_Init(NULL, NULL);

	// Get the rank of the process
	int pid;
	MPI_Comm_rank(MPI_COMM_WORLD, &pid);

	// Get the number of processes
	int number_of_processes;
	MPI_Comm_size(MPI_COMM_WORLD, &number_of_processes);

	if (pid == 0) {
		// master process

		int index, i;
		int elements_per_process;
		unsigned long frequency;

		elements_per_process = ARRAY_SIZE / number_of_processes;

		// check if more than one process is running
		if (number_of_processes > 1) {
			// distributes the portion of the array among all processes
			for (i = 1; i < number_of_processes - 1; i++) {
				index = i * elements_per_process;

				MPI_Send(&elements_per_process,
						1, MPI_INT, i, 0,
						MPI_COMM_WORLD);
				MPI_Send(&list_of_numbers[index],
						elements_per_process,
						MPI_INT, i, 0,
						MPI_COMM_WORLD);
			}

			// the last process receives the remaining elements
			index = i * elements_per_process;
			int elements_left = ARRAY_SIZE - index;

			MPI_Send(&elements_left,
					1, MPI_INT,
					i, 0,
					MPI_COMM_WORLD);
			MPI_Send(&list_of_numbers[index],
					elements_left,
					MPI_INT, i, 0,
					MPI_COMM_WORLD);
		}

		// master process computes the frequency in its portion of the array
		frequency = 0;
		for(i = 0; i < elements_per_process; ++i)
			if(list_of_numbers[i] == num)
				frequency += 1;

		// collect partial frequency from other processes
		unsigned long buffer = 0;
		for (i = 1; i < number_of_processes; i++) {
			MPI_Recv(&buffer, 1, MPI_UNSIGNED_LONG,
					MPI_ANY_SOURCE, 0,
					MPI_COMM_WORLD,
					&status);
			frequency += buffer;
		}

		// print the frequency of user input in the list of numbers
		printf("The frequency of %d in the list of numbers is %lu\n", num, frequency);
	}
	else {
		// worker processes

		static int buffer[ARRAY_SIZE];
		int num_of_elements_received = 0;
		unsigned long frequency = 0;

		MPI_Recv(&num_of_elements_received,
				1, MPI_INT, 0, 0,
				MPI_COMM_WORLD,
				&status);

		// store the received portion of the array in buffer
		MPI_Recv(buffer, num_of_elements_received,
				MPI_INT, 0, 0,
				MPI_COMM_WORLD,
				&status);

		// compute the frequency in the received portion of the array
		for(i = 0; i < num_of_elements_received; ++i)
			if(buffer[i] == num)
				frequency += 1;

		// send the computation result to the master process
		MPI_Send(&frequency, 1, MPI_UNSIGNED_LONG,
				0, 0, MPI_COMM_WORLD);
	}

	// Finalize the MPI environment
	MPI_Finalize();
	return 0;
}
#!/bin/bash

# Set the name of the source code file and the executable
SRC=mpi_sequential_search.c
OBJ=mpi_sequential_search

# A command to compile a parallel program with MPI
MPICC=mpicc

# A command to execute a parallel program with MPI
MPIRUN=mpirun

# number of processes to be spawned
NUM=4

# Compile the source code
$MPICC $FLAGS -o $OBJ $SRC

# Run the executable file
$MPIRUN -n $NUM ./$OBJ $1

# Delete the executable file
rm $OBJ

Compiling and Running the MPI Sequential Search Program

  • Make sure the OpenMPI library is installed on your computer.

  • Copy the C source code file mpi_sequential_search.c and the bash script ssjob.sh to your computer.

  • Launch the terminal application and change the current working directory to the directory that contains the files you copied.

  • Make sure the bash script file is executable by executing the command below:

chmod +x ssjob.sh
  • Execute the command below to compile and run the MPI program:

./ssjob.sh 50

When executing the bash script ssjob.sh, you must pass the number to search for as a command-line argument.
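The program reads this argument with atoi(argv[1]), which crashes if the argument is missing and silently yields 0 for non-numeric input. A more defensive parse is sketched below with a hypothetical helper that is not part of the original program:

```c
#include <stdlib.h>

/* Hypothetical helper: parse the number to search for from the command
 * line, falling back to a default when the argument is missing or is
 * not a valid integer. */
static int parse_search_number(int argc, char **argv, int fallback)
{
	if (argc < 2)
		return fallback;           /* no argument supplied */
	char *end;
	long value = strtol(argv[1], &end, 10);
	if (end == argv[1] || *end != '\0')
		return fallback;           /* not a pure number */
	return (int)value;
}
```

Using strtol instead of atoi makes the failure cases detectable, since atoi gives no way to distinguish "0" from unparseable input.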

Output

The frequency of 50 in the list of numbers is 9986

Because the array is refilled with new random numbers on each run, the reported frequency varies from run to run; with 1,000,000 values drawn uniformly from 0 to 99, it will be close to 10,000.

MPI Binary Search Example

The MPI program below uses the insertion sort algorithm and the binary search algorithm to search in parallel for a number in a list of numbers. In detail, the program does the following:

  • Generates an array of random numbers.

  • Creates a number of processes.

  • Lets the master process split the array into parts and distribute them among all worker processes.

  • Lets the master process generate a random key and send it to all worker processes.

  • Lets each worker process sort its part of the array using the insertion sort algorithm.

  • Lets each worker process search for the key in its sorted part of the array using the binary search algorithm and send the result to the master process.

  • Lets the master process collect the results from the worker processes to decide whether the key is in the array.
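The per-worker sort-then-search step can be verified serially. The sketch below is illustrative (snake_case names, not the program's identifiers); it uses the same insertion sort as the program and a binary search written with the conventional begin > end termination test, returning -1 when the key is absent:

```c
/* Sort arr[0..n-1] in place with insertion sort. */
static void insertion_sort(int arr[], int n)
{
	for (int i = 1; i < n; ++i) {
		int key = arr[i];
		int j = i - 1;
		while (j >= 0 && key < arr[j]) {
			arr[j + 1] = arr[j];
			j = j - 1;
		}
		arr[j + 1] = key;
	}
}

/* Search the sorted range arr[begin..end] (inclusive) for `key`;
 * return its index, or -1 if it is not present. */
static int binary_search(const int arr[], int key, int begin, int end)
{
	if (begin > end)          /* empty range: key not present */
		return -1;
	int mid = (begin + end) / 2;
	if (arr[mid] == key)
		return mid;
	if (key > arr[mid])
		return binary_search(arr, key, mid + 1, end);
	return binary_search(arr, key, begin, mid - 1);
}
```

Each MPI worker applies exactly this pair of steps to its received chunk before reporting the resulting index back to the master.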

  • C program mpi_binary_search.c

  • Bash script bsjob.sh

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <mpi.h>

#define ARRAY_SIZE 10

int binarySearch(int arr[], int key, int begin, int end)
{
	/*
	 * Binary Search Algorithm
	 * Searches arr[begin..end] (inclusive); returns the index of key,
	 * or -1 if key is not present.
	 */

	if(begin > end)
		return -1;

	int mid_point = (begin + end) / 2;

	if(arr[mid_point] == key)
		return mid_point;
	else if(key > arr[mid_point])
		return binarySearch(arr, key, mid_point + 1, end);
	else
		return binarySearch(arr, key, begin, mid_point - 1);
}

void insertionSort(int arr[], int n)
{
	/*
	 * Insertion Sort Algorithm
	 */

	int i, j, key;

	for(i = 1; i < n; ++i)
	{
		key = arr[i];
		j = i - 1;
		while(j >= 0 && key < arr[j])
		{
			arr[j + 1] = arr[j];
			j = j  - 1;
		}
		arr[j + 1] = key;
	}

}

int main(int argc, char** argv)
{
	static int arr[ARRAY_SIZE];
	time_t t;
	int i;
	size_t n = sizeof(arr)/sizeof(arr[0]);

	// a data struct that provides more information on the received  message
	MPI_Status status;

	// Initialize the MPI environment
	MPI_Init(NULL, NULL);

	// Get the rank of the process
	int pid;
	MPI_Comm_rank(MPI_COMM_WORLD, &pid);

	// Get the number of processes
	int number_of_processes;
	MPI_Comm_size(MPI_COMM_WORLD, &number_of_processes);

	if (pid == 0) {

		// master process

		srand((unsigned) time(&t));

		for( i = 0 ; i < n ; ++i )
			arr[i] = rand() % 50;


		int index, i;
		int elements_per_process;

		// pick a random key to search for
		int key = rand() % 50;


		elements_per_process = ARRAY_SIZE / number_of_processes;

		// check if more than one process is running
		if (number_of_processes > 1) {
			// distributes the portion of the array among all processes
			for (i = 1; i < number_of_processes - 1; i++) {
				index = i * elements_per_process;

				MPI_Send(&key,
						1, MPI_INT, i, 0,
						MPI_COMM_WORLD);
				MPI_Send(&elements_per_process,
						1, MPI_INT, i, 0,
						MPI_COMM_WORLD);
				MPI_Send(&arr[index],
						elements_per_process,
						MPI_INT, i, 0,
						MPI_COMM_WORLD);
			}

			// the last process receives the remaining elements
			index = i * elements_per_process;
			int elements_left = ARRAY_SIZE - index;

			MPI_Send(&key,
					1, MPI_INT,
					i, 0,
					MPI_COMM_WORLD);
			MPI_Send(&elements_left,
					1, MPI_INT,
					i, 0,
					MPI_COMM_WORLD);
			MPI_Send(&arr[index],
					elements_left,
					MPI_INT, i, 0,
					MPI_COMM_WORLD);
		}

		// master process sorts its portion of the array
		insertionSort(arr, elements_per_process);


		index = binarySearch(arr, key, 0, elements_per_process - 1);

		// master process collects the search results and prints the verdict
		int found = (index != -1);
		for (i = 1; i < number_of_processes; i++) {
			int worker_result;
			MPI_Recv(&worker_result, 1, MPI_INT,
					MPI_ANY_SOURCE, 0,
					MPI_COMM_WORLD,
					&status);
			if (worker_result != -1)
				found = 1;
		}

		if (found)
			printf("the key %d is found in the array\n", key);
		else
			printf("the key %d is not in the array\n", key);

	} else
	{
		// worker processes

		int key = 0;

		// receive the key
		MPI_Recv(&key,
				1, MPI_INT, 0, 0,
				MPI_COMM_WORLD,
				&status);

		// receive the size of its portion of the array
		int num_of_elements_received = 0;
		MPI_Recv(&num_of_elements_received,
				1, MPI_INT, 0, 0,
				MPI_COMM_WORLD,
				&status);

		int buffer[num_of_elements_received];
		size_t n = sizeof(buffer)/sizeof(buffer[0]);

		// receive its portion of the array and store it in buffer
		MPI_Recv(buffer, num_of_elements_received,
				MPI_INT, 0, 0,
				MPI_COMM_WORLD,
				&status);

		// sort its portion of the array
		insertionSort(buffer, n);

		// search for the key in its portion of the array
		int index = binarySearch(buffer, key, 0, n - 1);

		// send the searching result to the master process
		MPI_Send(&index, 1, MPI_INT,
				0, 0, MPI_COMM_WORLD);
	}

	// Finalize the MPI environment
	MPI_Finalize();

	return 0;
}
#!/bin/bash

# Set the name of the source code file and the executable
SRC=mpi_binary_search.c
OBJ=mpi_binary_search

# A command to compile a parallel program with MPI
MPICC=mpicc

# A command to execute a parallel program with MPI
MPIRUN=mpirun

# number of processes to be spawned
NUM=2

# Compile the source code
$MPICC $FLAGS -o $OBJ $SRC

# Run the executable file
$MPIRUN -n $NUM ./$OBJ

# Delete the executable file
rm $OBJ

Compiling and Running The MPI Binary Search Program

  • Make sure the OpenMPI library is installed on your computer.

  • Copy the C source code file mpi_binary_search.c and the bash script bsjob.sh to your computer.

  • Launch the terminal application and change the current working directory to the directory that contains the files you copied.

  • Make sure the bash script file is executable by executing the command below:

chmod +x bsjob.sh
  • Execute the command below to compile and run the MPI program:

./bsjob.sh

Output

the key 14 is found in the array