Package: pbdMPI 0.5-2

Wei-Chen Chen

pbdMPI: R Interface to MPI for HPC Clusters (Programming with Big Data Project)

A simplified, efficient interface to MPI for HPC clusters. It is a derivation and rethinking of the Rmpi package. pbdMPI embraces the prevalent parallel programming style on HPC clusters. Beyond the interface, a collection of functions for global work with distributed data and for resource-independent RNG reproducibility is included. It is based on S4 classes and methods.

Authors: Wei-Chen Chen [aut, cre], George Ostrouchov [aut], Drew Schmidt [aut], Pragneshkumar Patel [aut], Hao Yu [aut], Christian Heckendorf [ctb], Brian Ripley [ctb], R Core team [ctb], Sebastien Lamy de la Chapelle [aut]

pbdMPI_0.5-2.tar.gz
pbdMPI_0.5-2.tar.gz (r-4.5-noble) | pbdMPI_0.5-2.tar.gz (r-4.4-noble)
pbdMPI.pdf | pbdMPI.html
pbdMPI/json (API)

# Install 'pbdMPI' in R:
install.packages('pbdMPI', repos = 'https://cloud.r-project.org')

Bug tracker: https://github.com/snoweye/pbdmpi/issues (2 issues)

Uses libs:
  • openmpi – High performance message passing library

Score: 3.96 · 3 packages · 1.0k downloads · 2 mentions · 258 exports · 1 dependency

Last updated 7 months ago from commit b986e9b7c2. Checks: 1 OK. Indexed: no.

Target | Result | Latest binary
Doc / Vignettes | OK | Mar 12 2025

Exports: .mpiopt_init addr.mpi.comm.ptr allgather allreduce anysource anytag arrange.mpi.apts barrier bcast
comm.abort comm.accept comm.all comm.allcommon comm.allcommon.integer comm.allpairs comm.any comm.as.gbd
comm.balance.info comm.c2f comm.cat comm.chunk comm.connect comm.disconnect comm.dist comm.dist.common
comm.dist.gbd comm.dup comm.end.seed comm.free comm.get.streams comm.is.null comm.length comm.load.balance
comm.localrank comm.match.arg comm.max comm.mean comm.min comm.pairwise comm.pairwise.common comm.pairwise.gbd
comm.print comm.range comm.rank comm.read.csv comm.read.csv2 comm.read.table comm.reset.seed comm.Rprof comm.sd
comm.seed.state comm.set.errhandler comm.set.seed comm.set.stream comm.size comm.sort comm.sort.default
comm.sort.double comm.sort.integer comm.split comm.stop comm.stopifnot comm.sum comm.timer comm.unload.balance
comm.var comm.warning comm.warnings comm.which comm.which.max comm.which.min comm.write comm.write.csv
comm.write.csv2 comm.write.table execmpi finalize gather get.conf get.jid get.lib get.mpi.comm.ptr get.sourcetag
get.sysenv info.c2f info.create info.free info.set init intercomm.create intercomm.merge iprobe irecv
is.comm.null is.finalized isend pbd_opt pbdApply pbdLapply pbdSapply port.close port.open probe recv reduce
runmpi scatter send sendrecv sendrecv.replace serv.lookup serv.publish serv.unpublish spmd.allcheck.type
spmd.allgather.array spmd.allgather.default spmd.allgather.double spmd.allgather.integer spmd.allgather.object
spmd.allgather.raw spmd.allgatherv.default spmd.allgatherv.double spmd.allgatherv.integer spmd.allgatherv.raw
spmd.allreduce.array spmd.allreduce.default spmd.allreduce.double spmd.allreduce.float spmd.allreduce.float32
spmd.allreduce.integer spmd.allreduce.logical spmd.allreduce.object spmd.alltoall.double spmd.alltoall.integer
spmd.alltoall.raw spmd.alltoallv.double spmd.alltoallv.integer spmd.alltoallv.raw spmd.anysource spmd.anytag
spmd.barrier spmd.bcast.array spmd.bcast.default spmd.bcast.double spmd.bcast.integer spmd.bcast.message
spmd.bcast.object spmd.bcast.raw spmd.bcast.string spmd.check.type.recv spmd.check.type.send spmd.comm.abort
spmd.comm.accept spmd.comm.c2f spmd.comm.cat spmd.comm.connect spmd.comm.disconnect spmd.comm.dup spmd.comm.free
spmd.comm.get.parent spmd.comm.is.null spmd.comm.localrank spmd.comm.print spmd.comm.rank spmd.comm.set.errhandler
spmd.comm.size spmd.comm.spawn spmd.comm.split SPMD.CT SPMD.DT spmd.finalize spmd.gather.array spmd.gather.default
spmd.gather.double spmd.gather.integer spmd.gather.object spmd.gather.raw spmd.gatherv.default spmd.gatherv.double
spmd.gatherv.integer spmd.gatherv.raw spmd.get.count spmd.get.processor.name spmd.get.sourcetag spmd.hostinfo
spmd.info.c2f spmd.info.create spmd.info.free spmd.info.set spmd.init spmd.intercomm.create spmd.intercomm.merge
SPMD.IO spmd.iprobe spmd.irecv.default spmd.irecv.double spmd.irecv.integer spmd.irecv.raw spmd.is.comm.null
spmd.is.finalized spmd.is.manager spmd.isend.default spmd.isend.double spmd.isend.integer spmd.isend.raw SPMD.OP
spmd.port.close spmd.port.open spmd.probe spmd.recv.default spmd.recv.double spmd.recv.integer spmd.recv.raw
spmd.reduce.array spmd.reduce.default spmd.reduce.double spmd.reduce.float spmd.reduce.float32 spmd.reduce.integer
spmd.reduce.logical spmd.reduce.object spmd.scatter.array spmd.scatter.default spmd.scatter.double
spmd.scatter.integer spmd.scatter.object spmd.scatter.raw spmd.scatterv.default spmd.scatterv.double
spmd.scatterv.integer spmd.scatterv.raw spmd.send.default spmd.send.double spmd.send.integer spmd.send.raw
spmd.sendrecv.default spmd.sendrecv.double spmd.sendrecv.integer spmd.sendrecv.raw spmd.sendrecv.replace.default
spmd.sendrecv.replace.double spmd.sendrecv.replace.integer spmd.sendrecv.replace.raw spmd.serv.lookup
spmd.serv.publish spmd.serv.unpublish SPMD.TP spmd.wait spmd.waitall spmd.waitany spmd.waitsome task.pull
task.pull.manager task.pull.workers wait waitall waitany waitsome

Dependencies: float

pbdMPI-guide

Rendered from pbdMPI-guide.Rnw using utils::Sweave on Mar 12 2025.

Last update: 2016-12-18
Started: 2013-07-04

Citation

To cite pbdMPI in a publication use:

Chen W, Ostrouchov G, Schmidt D, Patel P, Yu H (2022). “pbdMPI: R Interface to MPI.” R Package, URL https://cran.r-project.org/package=pbdMPI.

Chen W, Ostrouchov G, Schmidt D, Patel P, Yu H (2012). A Quick Guide for the pbdMPI Package. R Vignette, URL https://cran.r-project.org/package=pbdMPI.

Corresponding BibTeX entries:

  @Misc{Chen2022pbdMPIpackage,
    title = {{pbdMPI}: R Interface to {MPI}},
    author = {Wei-Chen Chen and George Ostrouchov and Drew Schmidt and
      Pragneshkumar Patel and Hao Yu},
    year = {2022},
    note = {{R} Package, URL
      https://cran.r-project.org/package=pbdMPI},
  }
  @Manual{Chen2012pbdMPIvignette,
    title = {A Quick Guide for the {pbdMPI} Package},
    author = {Wei-Chen Chen and George Ostrouchov and Drew Schmidt and
      Pragneshkumar Patel and Hao Yu},
    year = {2012},
    note = {{R} Vignette, URL
      https://cran.r-project.org/package=pbdMPI},
  }

Readme and manuals

pbdMPI

  • Author: See section below.

This package provides a simplified, efficient interface to MPI for HPC clusters. This derivation and rethinking of the Rmpi package embraces the prevalent parallel programming style on HPC clusters. It is based on S4 classes and methods.

If you don't have access to an HPC cluster, consider applying for an allocation at an HPC facility in your country, for example US ACCESS, US INCITE, EU PRACE, Australia NCI, Canada RAC, Czechia IT4I, India NSM (National Super Computing Mission), or Japan HPCI. (Please notify us if you have more examples or updates from your country.) Applying for a startup allocation can be easier than most would expect, sometimes requiring as little as a paragraph describing your application and software. Large allocations require a full proposal.

With few exceptions, R does computations in memory. When data becomes too large to handle in the memory of a single node, or when more processors than those offered in commodity hardware are needed for a job, a typical strategy is to add more nodes. MPI, or the "Message Passing Interface", is the standard for managing multi-node computing. pbdMPI is a package that greatly simplifies the use of MPI from R.

In pbdMPI, we make extensive use of R's S4 system to simplify the interface. Instead of needing to specify the type (e.g., integer or double) of the data via function name (as in C implementations) or in an argument (as in Rmpi), you need only call the generic function on your data and we will always "do the right thing".
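
As a minimal sketch (not from the package documentation) of what this dispatch looks like in practice, the script below calls the same allreduce() generic on integer and double data; run it under MPI with something like mpirun -np 2 Rscript dispatch_demo.r, where the file name is only an example:

suppressMessages(library(pbdMPI, quietly = TRUE))

x_int <- 1L:5L        # integer data
x_dbl <- runif(5)     # double data

# The same generic handles both types; no type-specific function names needed.
sum_int <- allreduce(x_int, op = "sum")   # element-wise integer sums across ranks
sum_dbl <- allreduce(x_dbl, op = "sum")   # element-wise double sums across ranks

comm.print(list(sum_int = sum_int, sum_dbl = sum_dbl))
finalize()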

In pbdMPI, we write programs in the "Single Program/Multiple Data" or SPMD style, which is the prevalent style on HPC clusters. Contrary to the way much of the R world is acquainted with parallelism, there is no "manager". Each process (MPI rank) runs the same program as every other process, but operates on its own data or its own section of a global parameter space. This is arguably one of the simplest extensions of serial to massively parallel programming, and it has been the standard way of doing things in the large-scale HPC community for decades. The "single program" can be viewed as a generalization of the serial program.
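
To make the SPMD idea concrete, here is a small illustrative sketch (not taken from the package examples) in which every rank works on its own slice of a global index set and the ranks then combine their partial results without any manager process:

suppressMessages(library(pbdMPI, quietly = TRUE))

# Every rank runs this same script; only comm.rank() differs between copies.
n <- 1e6
my_idx <- seq.int(comm.rank() + 1, n, by = comm.size())  # this rank's slice of 1:n
partial <- sum(as.numeric(my_idx))                        # local work on local data
total <- allreduce(partial, op = "sum")                   # global combine across ranks

comm.print(total)  # printed by rank 0 by default
finalize()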

Installation

Installation with install.packages("pbdMPI") from CRAN or with remotes::install_github("RBigData/pbdMPI") from GitHub works on systems with MPI installed in a standard location. This is usually true on HPC Cluster Systems and also if you follow the Linux, MacOS, or Windows Notes below for MPI installation.

Usage

If you are comfortable with MPI concepts, you should find pbdMPI very agreeable and simple to use. Below is a basic "hello world" program:

# load the package and initialize MPI
suppressMessages(library(pbdMPI, quietly = TRUE))

# Hello world
message <- paste("Hello from rank", comm.rank(), "of", comm.size())
comm.print(message, all.rank = TRUE, quiet = TRUE)

# shut down the communicators and exit
finalize()

Save this as, say, mpi_hello_world.r and run it via:

mpirun -np 4 Rscript mpi_hello_world.r

The function comm.print() is a "sugar" function custom to pbdMPI that makes it simple to print in a distributed environment. The argument all.rank = TRUE specifies that all MPI ranks should print, and the quiet = TRUE argument tells each rank not to "announce" itself when it does its printing. This function and its companion comm.cat() automatically cooperate across the parallel instances of the single program to control printing.
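
For example, here is a short illustrative script (not from the vignette) showing rank-controlled output with comm.cat(), using the same all.rank and quiet arguments described above:

suppressMessages(library(pbdMPI, quietly = TRUE))

# Default behavior: only one rank (rank 0) prints.
comm.cat("This line appears once.\n", quiet = TRUE)

# all.rank = TRUE: every rank prints, without announcing itself.
comm.cat("Hello from rank", comm.rank(), "\n", all.rank = TRUE, quiet = TRUE)

finalize()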

Numerous other examples can be found in both the pbdMPI vignette as well as the pbdDEMO package and its corresponding vignette. While these were written for version 0.3-0 of pbdMPI, they are still highly relevant.

HPC Cluster Systems Notes

HPC clusters are Linux systems and use Environment Modules to manage software. Consult your local cluster documentation, as specifics with respect to R and MPI can differ. Usually, an MPI version is installed and works with a standard pbdMPI install, although sometimes a module load openmpi is needed to make OpenMPI available.

Some common module commands are:

module list  # lists currently loaded software modules
module avail # lists available software modules
module load <module_name> # loads module <module_name>

Available R modules are typically loaded via module load r or module load R, possibly with directory and version information. On some systems, this needs to be preceded by selecting a programming environment, which may be gnu, pgi, etc., while on others loading R automatically selects the correct programming environment. Please consult your HPC cluster documentation. Typically, software installations are done on login nodes and parallel debugging and production runs on compute nodes.
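
A typical login-node session might therefore look like the sketch below; the module names (openmpi, r) and the lack of a programming-environment step are assumptions, so substitute whatever module avail reports on your cluster:

module load openmpi
module load r
Rscript -e 'install.packages("pbdMPI", repos = "https://cloud.r-project.org")'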

A resource manager (usually Slurm, PBS, LSF, or SGE) is used to allocate compute nodes for a job. Consult your cluster documentation, as defaults tend to be site-specific.

Scripts are usually submitted as batch jobs but interactive allocations are possible too. For batch submission, we recommend writing a shell script. Here we give a shell script example for Slurm and note that a translation table is available to other resource managers.

#!/bin/bash
#SBATCH -J <my_job>
#SBATCH -A <my_account>
#SBATCH --nodes=4
#SBATCH --exclusive
#SBATCH -t 00:20:00
#SBATCH --mem=0

module load gcc
module load openmpi
module load r

mpirun --map-by ppr:4:node Rscript <your_r_script> 

This example runs 16 copies of <your_r_script> (4 per node) asynchronously in separate R sessions that communicate with each other via OpenMPI. If 128 cores are available on a node, further parallelism (32 cores per R session) is available for shared-memory parallel approaches, such as mclapply() or multithreaded libraries like OpenBLAS (possibly via FlexiBLAS). The --exclusive option requests exclusive access to all cores on the nodes, --mem=0 requests all of their memory, and -t 00:20:00 asks for 20 minutes of wall time. Save this Slurm script in a file <your_script.sh> and submit it with sbatch <your_script.sh>. To quickly troubleshoot a Slurm script at your site, replace Rscript <your_r_script> with hostname.
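
As a sketch of combining the two levels of parallelism just mentioned (MPI ranks across nodes plus forked workers within each rank), the script below uses parallel::mclapply() inside each rank; cores_per_rank is an assumed value you should match to your own allocation, and the 1000 task ids are purely illustrative:

suppressMessages(library(pbdMPI, quietly = TRUE))
library(parallel)

cores_per_rank <- 32      # e.g., a 128-core node divided by 4 ranks per node
my_jobs <- get.jid(1000)  # this rank's share of 1000 task ids

# Shared-memory parallelism inside each rank, distributed-memory across ranks.
local_res <- mclapply(my_jobs, function(i) sqrt(i), mc.cores = cores_per_rank)
grand_sum <- allreduce(sum(unlist(local_res)), op = "sum")

comm.print(grand_sum)
finalize()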

Linux Notes

See the INSTALL file for details.

Mac OS Notes

MacOS does not provide MPI, so first install a recent version of OpenMPI. This is best done via Homebrew, which will ask to install the Xcode Command Line Tools (CLT) if you have not yet done so (you don't need all of Xcode, just the CLT); see the Homebrew installation instructions. After installing Homebrew,

brew install openmpi

will install OpenMPI in a location that pbdMPI can find. Then, follow standard R package installation for pbdMPI.

Parallelizing with distributed-memory concepts (like MPI) on shared-memory platforms (like a single node or a laptop) can produce excellent speedups, but it does not extend the memory available for larger data objects. Chunking larger objects does not extend available memory either, but it does prevent duplication of the objects when several R sessions run in the shared memory of a laptop.

Windows Notes

Windows does not provide MPI, so an MPI installation (binaries, headers, and libraries) is needed first. We recommend installing Microsoft MPI (MS-MPI), which is based on MPICH.

Download MS-MPI v10.1.3 (msmpisetup.exe) and SDK (msmpisdk.msi) from the Microsoft Download Center.

See the INSTALL file for installation details and for the usage of mpiexec.exe.
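
Once MS-MPI is installed, a run typically looks like the sketch below (reusing the hello-world script from the Usage section; the process count is arbitrary, and the INSTALL file remains the authoritative reference):

mpiexec -np 2 Rscript mpi_hello_world.r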

Authors

pbdMPI is authored and maintained by the pbdR core team:

  • Wei-Chen Chen
  • George Ostrouchov
  • Drew Schmidt

With additional contributions from:

  • Pragneshkumar Patel
  • Hao Yu
  • Christian Heckendorf
  • Brian Ripley (Windows HPC Pack 2012)
  • The R Core team (some functions are modified from the base packages)

Help Manual

Help page: Topics
R Interface to MPI (Programming with Big Data in R Project): pbdMPI-package pbdMPI
All Ranks Gather Objects from Every Rank: allgather allgather,ANY,ANY,integer-method allgather,ANY,missing,integer-method allgather,ANY,missing,missing-method allgather,integer,integer,integer-method allgather,integer,integer,missing-method allgather,numeric,numeric,integer-method allgather,numeric,numeric,missing-method allgather,raw,raw,integer-method allgather,raw,raw,missing-method allgather-methods allgatherv
All Ranks Receive a Reduction of Objects from Every Rank: allreduce allreduce,ANY,missing-method allreduce,float32,float32-method allreduce,integer,integer-method allreduce,logical,logical-method allreduce,numeric,numeric-method allreduce-method
All to All: alltoall spmd.alltoall.double spmd.alltoall.integer spmd.alltoall.raw spmd.alltoallv.double spmd.alltoallv.integer spmd.alltoallv.raw
Parallel Apply and Lapply Functions: pbdApply pbdLapply pbdSapply
A Rank Broadcast an Object to Every Rank: bcast bcast,ANY-method bcast,integer-method bcast,numeric-method bcast,raw-method bcast-method
comm.chunk: comm.chunk
Communicator Functions: barrier comm.abort comm.accept comm.c2f comm.connect comm.disconnect comm.dup comm.free comm.is.null comm.localrank comm.rank comm.size comm.split finalize init intercomm.create intercomm.merge is.finalized port.close port.open serv.lookup serv.publish serv.unpublish
A Rank Gathers Objects from Every Rank: gather gather,ANY,ANY,integer-method gather,ANY,missing,integer-method gather,ANY,missing,missing-method gather,integer,integer,integer-method gather,integer,integer,missing-method gather,numeric,numeric,integer-method gather,numeric,numeric,missing-method gather,raw,raw,integer-method gather,raw,raw,missing-method gather-methods gatherv
Functions to Get MPI and/or pbdMPI Configures Used at Compiling Time: get.conf get.lib get.sysenv
Divide Job ID by Ranks: get.jid
Global All Pairs: comm.allpairs
Global Any and All Functions: comm.all comm.allcommon comm.any
Global As GBD Function: comm.as.gbd
Global Balance Functions: comm.balance.info comm.load.balance comm.unload.balance
Global Base Functions: comm.length comm.mean comm.sd comm.sum comm.var
Global Distance for Distributed Matrices: comm.dist
Global Argument Matching: comm.match.arg
Global Pairwise Evaluations: comm.pairwise
Global Print and Cat Functions: comm.cat comm.print
Global Range, Max, and Min Functions: comm.max comm.min comm.range
Global Reading Functions: comm.read.csv comm.read.csv2 comm.read.table
A Rprof Function for SPMD Routines: comm.Rprof
Global Quick Sort for Distributed Vectors or Matrices: comm.sort
Global Stop and Warning Functions: comm.stop comm.stopifnot comm.warning comm.warnings
A Timing Function for SPMD Routines: comm.timer
Global Which Functions: comm.which comm.which.max comm.which.min
Global Writing Functions: comm.write comm.write.csv comm.write.csv2 comm.write.table
Info Functions: info.c2f info.create info.free info.set
A Rank Receives (Nonblocking) an Object from the Other Rank: irecv irecv,ANY-method irecv,integer-method irecv,numeric-method irecv,raw-method irecv-method
Check if a MPI_COMM_NULL: is.comm.null
A Rank Send (Nonblocking) an Object to the Other Rank: isend isend,ANY-method isend,integer-method isend,numeric-method isend,raw-method isend-method
Set or Get MPI Array Pointers in R: arrange.mpi.apts
Functions for Get/Print MPI_COMM Pointer (Address): addr.mpi.comm.ptr get.mpi.comm.ptr
Probe Functions: iprobe probe
A Rank Receives (Blocking) an Object from the Other Rank: recv recv,ANY-method recv,integer-method recv,numeric-method recv,raw-method recv-method
A Rank Receive a Reduction of Objects from Every Rank: reduce reduce,ANY,missing-method reduce,float32,float32-method reduce,integer,integer-method reduce,logical,logical-method reduce,numeric,numeric-method reduce-method
A Rank Scatter Objects to Every Rank: scatter scatter,ANY,ANY,integer-method scatter,ANY,missing,integer-method scatter,ANY,missing,missing-method scatter,integer,integer,integer-method scatter,integer,integer,missing-method scatter,numeric,numeric,integer-method scatter,numeric,numeric,missing-method scatter,raw,raw,integer-method scatter,raw,raw,missing-method scatter-method
Parallel random number generation with reproducible results: comm.end.seed comm.get.streams comm.reset.seed comm.seed.state comm.set.seed comm.set.stream
A Rank Send (blocking) an Object to the Other Rank: send send,ANY-method send,integer-method send,numeric-method send,raw-method send-method
Send and Receive an Object to and from Other Ranks: sendrecv sendrecv,ANY,ANY-method sendrecv,integer,integer-method sendrecv,numeric,numeric-method sendrecv,raw,raw-method sendrecv-method
Send and Receive an Object to and from Other Ranks: sendrecv.replace sendrecv.replace,ANY-method sendrecv.replace,integer-method sendrecv.replace,numeric-method sendrecv.replace,raw-method sendrecv.replace-method
Set Global pbdR Options: pbd_opt
Functions to Obtain source and tag: anysource anytag get.sourcetag
Default control in pbdMPI: .pbd_env
Sets of controls in pbdMPI: .mpiopt_init SPMD.CT SPMD.DT SPMD.IO SPMD.OP SPMD.TP
Functions for Task Pull Parallelism: task.pull task.pull.manager task.pull.workers
Execute MPI code in system: execmpi runmpi
Wait Functions: wait waitall waitany waitsome