
MPI newbie: Requirements and installation of an MPI

July 31, 2013

I often get questions from those who are just starting with MPI; they want to know common things such as:

  • How to install / setup an MPI implementation
  • How to compile their MPI applications
  • How to run their MPI applications
  • How to learn more about MPI

This is the first of several blog entries that attempt to guide MPI newbies down the parallelization path.

There are a few generic steps:

  1. Get a parallel computing system
  2. Install MPI
  3. Set up your environment

1. Get a parallel computing system

There are many, many kinds of parallel computing systems; they range from a single laptop to high-end, dedicated supercomputers.

For the purposes of this blog entry, let's assume that you have a newly-installed cluster of Linux-based hosts. Hopefully you also have some kind of cluster management system - it's very painful, and doesn't scale at all, to try to maintain each individual host!  But this is outside the scope of this blog entry.

2. Install MPI

Once you have a newly-installed Linux cluster, you need to install MPI on it.  First, pick an MPI implementation.  There are two notable open source MPI implementations (proprietary, closed source implementations are also available - Google around):

  1. Open MPI from an open source community, including vendors, researchers, and academic institutions
  2. MPICH from Argonne National Laboratory (MVAPICH is an InfiniBand-based MPI that is derived from MPICH)

All of these MPI implementations have their own strengths and drawbacks.  Many HPC clusters have multiple MPI implementations installed and let their users choose which one to use.
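
For example, here is a minimal sketch of building Open MPI from a source tarball; the version number and installation prefix below are placeholders, so substitute whatever you actually downloaded and wherever you actually want it installed:

    # Hypothetical example: build and install Open MPI from a source tarball.
    # The version number and --prefix are placeholders; substitute your own.
    tar xf openmpi-1.6.5.tar.bz2
    cd openmpi-1.6.5
    ./configure --prefix=/opt/openmpi
    make -j 4 all
    make install    # may require sudo if the prefix isn't writable by your user

MPICH follows a broadly similar configure / make / make install pattern; check the documentation that ships with whichever implementation you pick.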

Sidenote: most HPC-oriented clusters have some kind of network file storage.  NFS server software is included in all Linux distributions, and isn't that hard to set up (Google around, you'll find lots of tutorials).  To be clear, you don't have to have your favorite network filesystem installed (e.g., NFS), but it certainly makes your cluster a lot easier to use.
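
As a rough illustration (the server name, network range, and paths below are assumptions, not a recommendation), sharing a directory over NFS can be as simple as:

    # On the NFS server (as root): export the directory and reload the export table.
    echo '/cluster/apps 192.168.1.0/24(ro,sync,no_subtree_check)' >> /etc/exports
    exportfs -ra
    # On each compute host (as root), or via an equivalent /etc/fstab entry:
    mkdir -p /cluster/apps
    mount -t nfs fileserver:/cluster/apps /cluster/apps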

In general, your MPI installation needs to be available on every host in your cluster.  "Available" typically means one of two things:

  1. Installed locally on each host
  2. Available via network filesystem on each host

Approach #1 has the advantage of not using any network bandwidth for the filesystem when you launch an MPI application.  However:

  • Unless you're operating at very large scale (e.g., running an MPI application across hundreds or thousands of hosts), the bandwidth consumed by the network filesystem loading MPI executables and libraries is fairly negligible.
  • It has the significant disadvantage (IMNSHO) of potentially complicating upgrades and maintenance of your MPI installation.

For my mid-sized clusters (under 256 hosts or so), I use network filesystem installs.  This lets me upgrade and/or tweak the MPI implementation whenever I want to - whatever change I make is instantly available across the whole cluster.

Regardless of which approach you use, you almost certainly want to ensure that the MPI implementation files are available in the same location on each host.  Some common examples (I use Open MPI because of my obvious bias as an Open MPI developer, but the same general principle applies to all MPI installations):

  • Install Open MPI locally to /opt/openmpi on all hosts
  • Install Open MPI in an NFS-exported directory that is mounted at /cluster/apps/openmpi on every host
  • Install Open MPI in your NFS-exported home directory

All of these are valid possibilities.
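
Whichever layout you pick, a quick sanity check is worthwhile. The sketch below assumes passwordless SSH, an Open MPI installation under /opt/openmpi, and a hypothetical hosts.txt file listing one hostname per line:

    # Verify the MPI installation is visible at the same path on every host.
    for h in $(cat hosts.txt); do
        ssh "$h" "ls /opt/openmpi/bin/mpicc" > /dev/null || echo "MPI missing on $h"
    done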

3. Set up your environment

All MPI implementations have both system administrator- and user-tweakable settings.  Many of these have to do with performance, and are unique to each MPI implementation.

But two things are common to most MPI implementations:

  1. Set your PATH (and possibly LD_LIBRARY_PATH) in your shell startup files to point to the MPI implementation that you want to use.  For example, you should add /opt/openmpi/bin to the PATH setting in your $HOME/.bashrc file (assuming Open MPI is installed in /opt/openmpi), and also add /opt/openmpi/lib to your LD_LIBRARY_PATH (see the sketch after this list). Two common mistakes that people make:

    • Not setting PATH/LD_LIBRARY_PATH in your shell startup files. You need these paths to be set for every new shell that you invoke (e.g., shells on remote hosts); you likely need to edit your .bashrc or .cshrc (or whatever the startup files are for your particular shell). Your sysadmin may have set up some convenience utilities for this kind of thing, such as environment modules, and/or tools to (effectively) automatically edit your shell startup files for you - consult the documentation available for your cluster.
    • Not setting PATH/LD_LIBRARY_PATH for non-interactive logins to remote hosts. Some shell startup files make a distinction between interactive and non-interactive logins; you need to be sure that your PATH/LD_LIBRARY_PATH is set properly for non-interactive logins. Try running "ssh REMOTE_HOST env | grep PATH" and see what your PATH is set to on the remote host. If the MPI path is not included in there, check your shell startup files.

  2. Depending on your cluster, you may need to set up SSH keys for passwordless logins to each host.  This allows MPI to launch processes on remote servers (a sketch follows below). Google around; there are many tutorials available on how to set up passwordless SSH logins. If your cluster is using a resource manager such as SLURM or Torque, setting up passwordless SSH logins may not be necessary. But it certainly doesn't hurt to do so.
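
Regarding the first point, here is roughly what the shell-startup additions might look like for a bash user; the /opt/openmpi prefix is just the example used above, so adjust it to match your actual installation:

    # Hypothetical additions to $HOME/.bashrc (assumes Open MPI in /opt/openmpi).
    export PATH=/opt/openmpi/bin:$PATH
    export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH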

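And for the second point, a minimal sketch of setting up passwordless SSH logins (the hostname is a placeholder; if your home directory is NFS-mounted on every host, the copy step only needs to be done once):

    ssh-keygen -t rsa        # accept the defaults; leave the passphrase empty
    ssh-copy-id node01       # repeat for each host in your cluster
    ssh node01 hostname      # should now run without prompting for a password
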
Yes, this is a lot of information, and most of what I said here is fairly high-level. Hopefully, this will guide you in the right direction to get all of this set up.

A future blog entry will continue down the list of MPI newbie tasks and talk about how to build / compile / link MPI applications.


Tags: HPC, MPI, MPI newbie
