A C++17 message passing library based on MPI

Overview

MPL - A message passing library

MPL is a message passing library written in C++17 based on the Message Passing Interface (MPI) standard. Since the C++ API was dropped from the MPI standard in version 3.1, it is the aim of MPL to provide a modern C++ message passing library for high-performance computing.

MPL will neither bring all functions of the C language MPI API to C++ nor provide a direct mapping of the C API to some C++ functions and classes. The library's focus lies on the MPI core message passing functions, ease of use, type safety, and elegance. The aim of MPL is to provide an idiomatic C++ message passing library without introducing significant overhead compared to utilizing MPI via its plain C API. This library is most useful for developers who have at least some basic knowledge of the Message Passing Interface standard and would like to utilize it via a more user-friendly interface in modern C++. Unlike [Boost.MPI](https://www.boost.org/doc/libs/1_77_0/doc/html/mpi.html), MPL does not rely on an external serialization library and has a negligible run-time overhead.

Supported features

MPL assumes that the underlying MPI implementation supports version 3.1 of the Message Passing Interface standard. Future versions of MPL may also employ features of MPI version 4.0 or later.

MPL currently gives access, via a convenient C++ interface, to the following features of the Message Passing Interface standard:

  • environmental management (implicit initialization and finalization, timers, but no error handling),
  • point-to-point communication (blocking and non-blocking),
  • collective communication (blocking and non-blocking),
  • derived data types (generated automatically for many custom data types or via the base_struct_builder helper class and the layout classes of MPL),
  • communicator and group management, and
  • process topologies (Cartesian and graph topologies).
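As an illustration of the derived-data-type support, here is a minimal, untested sketch (it requires an installed MPI implementation and at least two processes to do anything): a custom aggregate is registered via the MPL_REFLECTION helper macro, after which a matching MPI derived type is generated automatically and the struct can be sent like a built-in type:

```cpp
#include <mpl/mpl.hpp>

// a plain aggregate to be transmitted as an MPI derived data type
struct particle {
  double mass{0.0};
  double position[3]{0.0, 0.0, 0.0};
};

// register the struct's members so that MPL can build a matching MPI data type
MPL_REFLECTION(particle, mass, position)

int main() {
  const mpl::communicator &comm_world{mpl::environment::comm_world()};
  if (comm_world.size() >= 2) {
    if (comm_world.rank() == 0) {
      particle p{1.5, {0.0, 1.0, 2.0}};
      comm_world.send(p, 1);  // derived data type is generated behind the scenes
    } else if (comm_world.rank() == 1) {
      particle p;
      comm_world.recv(p, 0);
    }
  }
  return 0;
}
```

For custom types that the automatic mechanism cannot handle, the base_struct_builder helper class and the layout classes provide finer control over the generated data types.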

Currently, the following MPI features are not yet supported by MPL:

  • inter-communicators (planned for v0.2),
  • error handling,
  • process creation and management,
  • one-sided communication, and
  • I/O.

Although MPL covers only a subset of the MPI functionality, it probably has the largest MPI feature coverage among alternative C++ interfaces to MPI.

Installation

MPL is built on MPI. An MPI implementation, e.g., Open MPI or MPICH, needs to be installed as a prerequisite. As MPL is a header-only library, it suffices to download the source and copy the mpl directory, which contains all header files, to a place where the compiler will find it, e.g., /usr/local/include on a typical Unix/Linux system.

For convenience and better integration into various IDEs, MPL also comes with CMake support. To install MPL via CMake, get the sources and create a new build folder in the MPL source directory, e.g.,

user@host:~/mpl$ mkdir build
user@host:~/mpl$ cd build

Then call the CMake tool to detect all dependencies and to generate the project configuration for your build system or IDE, e.g.,

user@host:~/mpl/build$ cmake -DCMAKE_INSTALL_PREFIX:PATH=/usr/local ..

The option -DCMAKE_INSTALL_PREFIX:PATH specifies the installation path. CMake can also be utilized to install the MPL header files. Just call CMake a second time, now with the --install option, e.g.,

user@host:~/mpl/build$ cmake --install .

A set of unit tests and a collection of examples that illustrate the usage of MPL can be compiled via CMake, too, if required. To build the MPL unit tests, add the option -DBUILD_TESTING=ON to the initial CMake call. Similarly, -DMPL_BUILD_EXAMPLES=ON enables building the example codes. Thus,

user@host:~/mpl/build$ cmake -DCMAKE_INSTALL_PREFIX:PATH=/usr/local -DBUILD_TESTING=ON -DMPL_BUILD_EXAMPLES=ON ..

enables building both the unit tests and the examples. The MPL unit tests utilize the Boost.Test framework. Finally, build the unit tests and/or the example code via

user@host:~/mpl/build$ cmake --build .

After the unit tests have been built successfully, they can be run conveniently by utilizing the CTest tool, i.e., via

user@host:~/mpl/build$ ctest
Test project /home/user/mpl/build
      Start  1: test_communicator
 1/27 Test  #1: test_communicator ........................   Passed    0.19 sec
      Start  2: test_cartesian_communicator
 2/27 Test  #2: test_cartesian_communicator ..............   Passed    0.11 sec
      Start  3: test_graph_communicator
 3/27 Test  #3: test_graph_communicator ..................   Passed    0.07 sec
      Start  4: test_dist_graph_communicator
 4/27 Test  #4: test_dist_graph_communicator .............   Passed    0.11 sec
      Start  5: test_communicator_send_recv
 5/27 Test  #5: test_communicator_send_recv ..............   Passed    0.11 sec
      Start  6: test_communicator_isend_irecv
 6/27 Test  #6: test_communicator_isend_irecv ............   Passed    0.12 sec
      Start  7: test_communicator_init_send_init_recv
 7/27 Test  #7: test_communicator_init_send_init_recv ....   Passed    0.11 sec
      Start  8: test_communicator_sendrecv
 8/27 Test  #8: test_communicator_sendrecv ...............   Passed    0.11 sec
      Start  9: test_communicator_probe
 9/27 Test  #9: test_communicator_probe ..................   Passed    0.11 sec
      Start 10: test_communicator_mprobe_mrecv
10/27 Test #10: test_communicator_mprobe_mrecv ...........   Passed    0.11 sec
      Start 11: test_communicator_barrier
11/27 Test #11: test_communicator_barrier ................   Passed    0.11 sec
      Start 12: test_communicator_bcast
12/27 Test #12: test_communicator_bcast ..................   Passed    0.10 sec
      Start 13: test_communicator_gather
13/27 Test #13: test_communicator_gather .................   Passed    0.10 sec
      Start 14: test_communicator_gatherv
14/27 Test #14: test_communicator_gatherv ................   Passed    0.06 sec
      Start 15: test_communicator_allgather
15/27 Test #15: test_communicator_allgather ..............   Passed    0.11 sec
      Start 16: test_communicator_allgatherv
16/27 Test #16: test_communicator_allgatherv .............   Passed    0.14 sec
      Start 17: test_communicator_scatter
17/27 Test #17: test_communicator_scatter ................   Passed    0.12 sec
      Start 18: test_communicator_scatterv
18/27 Test #18: test_communicator_scatterv ...............   Passed    0.12 sec
      Start 19: test_communicator_alltoall
19/27 Test #19: test_communicator_alltoall ...............   Passed    0.11 sec
      Start 20: test_communicator_alltoallv
20/27 Test #20: test_communicator_alltoallv ..............   Passed    0.15 sec
      Start 21: test_communicator_reduce
21/27 Test #21: test_communicator_reduce .................   Passed    0.13 sec
      Start 22: test_communicator_allreduce
22/27 Test #22: test_communicator_allreduce ..............   Passed    0.13 sec
      Start 23: test_communicator_reduce_scatter_block
23/27 Test #23: test_communicator_reduce_scatter_block ...   Passed    0.12 sec
      Start 24: test_communicator_reduce_scatter
24/27 Test #24: test_communicator_reduce_scatter .........   Passed    0.08 sec
      Start 25: test_communicator_scan
25/27 Test #25: test_communicator_scan ...................   Passed    0.05 sec
      Start 26: test_communicator_exscan
26/27 Test #26: test_communicator_exscan .................   Passed    0.05 sec
      Start 27: test_displacements
27/27 Test #27: test_displacements .......................   Passed    0.02 sec

100% tests passed, 0 tests failed out of 27

Total Test time (real) =   2.86 sec

or via your IDE if it features support for CTest.

Alternatively, MPL may be installed via the Spack package manager. This will install the library headers only; it will not build the unit tests or the examples.

Usually, CMake will find the required MPI installation as well as the Boost Test library automatically. Depending on the local setup, however, CMake may need some hints to find these dependencies. See the CMake documentation on FindMPI and FindBoost for further details.
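Once the headers are installed, MPL can also be consumed from a downstream CMake project. The following is a minimal sketch; the package name mpl and the imported target mpl::mpl are assumptions, check the config files placed by the --install step for the exact names:

```cmake
cmake_minimum_required(VERSION 3.10)
project(hello_mpl CXX)

set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

# locate the installed MPL package; this also pulls in the required MPI
# implementation that MPL is built on
find_package(mpl REQUIRED)

add_executable(hello_world hello_world.cc)
target_link_libraries(hello_world PRIVATE mpl::mpl)
```

If MPL was installed to a non-standard prefix, point CMake at it via -DCMAKE_PREFIX_PATH when configuring the downstream project.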

Hello parallel world

MPL is built on top of the Message Passing Interface (MPI) standard. Therefore, MPL shares many concepts known from the MPI standard, e.g., the concept of a communicator. Communicators manage the message exchange between different processes, i.e., messages are sent and received with the help of a communicator.

The MPL environment provides a global default communicator comm_world, which will be used in the following Hello-World program. The program prints out some information about each process:

  • its rank,
  • the total number of processes and
  • the name of the computer the process is running on.

If there are two or more processes, a message is sent from process 0 to process 1, which is also printed.

#include <cstdlib>
#include <iostream>
// include MPL header file
#include <mpl/mpl.hpp>

int main() {
  // get a reference to communicator "world"
  const mpl::communicator &comm_world{mpl::environment::comm_world()};
  // each process prints a message containing the processor name, the rank
  // in communicator world and the size of communicator world
  // output may depend on the underlying MPI implementation
  std::cout << "Hello world! I am running on \"" << mpl::environment::processor_name()
            << "\". My rank is " << comm_world.rank() << " out of " << comm_world.size()
            << " processes.\n";
  // if there are two or more processes send a message from process 0 to process 1
  if (comm_world.size() >= 2) {
    if (comm_world.rank() == 0) {
      std::string message{"Hello world!"};
      comm_world.send(message, 1);  // send message to rank 1
    } else if (comm_world.rank() == 1) {
      std::string message;
      comm_world.recv(message, 0);  // receive message from rank 0
      std::cout << "got: \"" << message << "\"\n";
    }
  }
  return EXIT_SUCCESS;
}
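The program above is compiled with the MPI compiler wrapper and launched with the process starter of the installed MPI implementation. A sketch, assuming the wrapper is called mpicxx, the launcher is called mpirun and the source file is named hello_world.cc (the exact names vary between MPI implementations):

user@host:~$ mpicxx -std=c++17 -o hello_world hello_world.cc
user@host:~$ mpirun -np 2 ./hello_world

With two processes, each process prints its greeting with its rank, and process 1 additionally prints the message received from process 0.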

Documentation

For further documentation see the Doxygen-generated documentation, the blog posts, the presentation, the book, and the files in the examples directory of the source package.

Issues
  • Visibility is low due to name conflicting boost-mpl.

    Hello,

this is probably the most up to date MPI C++ interface, but finding it among hundreds of other libraries was impossible until you mentioned it in the MPI forum. This is because MPL is a very common abbreviation and even has correspondents in C++. Do you have any plans to change this?

    While I'm at it; do you have any plans to support the vcpkg build system https://github.com/microsoft/vcpkg which is the de-facto standard package manager for C++?

    opened by acdemiralp 7
  • MPI_Intercomm_create functionality

    Hi,

    Is MPI_Intercomm_create supported by your library? I couldn't see it. Is it something that could be added?

    Is there a recommended way to do it if I've already used:

    mpl::communicator group(mpl::communicator::split, comm_world, index);
    

    How would I make the call to MPI_Intercomm_create using group?

    Any help really appreciated, Thanks, Andy

    feature request 
    opened by a-jp 6
  • const problems with native handle

        const mpl::communicator &comm =
          mpl::environment::comm_world();
        MPI_Comm
          world_extract = comm.native_handle(),
    

    gives error: 'this' argument to member function 'native_handle' has type 'const mpl::communicator', but function is not marked const

    but without const

        mpl::communicator &comm =
          mpl::environment::comm_world();
    

    gives error: binding reference of type 'mpl::communicator' to value of type 'const mpl::communicator' drops 'const' qualifier

    bug api design 
    opened by VictorEijkhout 4
  • Obtain raw MPI_Comm from mpl::communicator - Version 2

    Leading on from our discussion on #23 with the requirements for this given in #23 and #22, would the following be more appealing for inclusion into MPL (implemented against 5264b90)?

    diff --git a/mpl/comm_group.hpp b/mpl/comm_group.hpp
    index a01fd30..1049c8d 100644
    --- a/mpl/comm_group.hpp
    +++ b/mpl/comm_group.hpp
    @@ -279,7 +279,34 @@ namespace mpl {
       protected:
         MPI_Comm comm_{MPI_COMM_NULL};
     
    +    /// \brief Obtain access to underlying mpi communicator
    +    /// \return raw MPI_Comm communicator
    +    const MPI_Comm& get_mpi_comm() const
    +    {
    +        return comm_;
    +    }
    +
       public:
    +
    +    /// \brief Allow raw MPI commands to be run 
    +    /// that need access to the MPI_Comm from this
    +    /// \param userCode a functor provided by the user taking a const MPI_Comm&
    +    template<typename UserCode>
    +    void execute_raw_mpi(const UserCode& userCode) const
    +    {
    +        userCode(get_mpi_comm());
    +    }
    +
    +    /// \brief Allow raw MPI commands to be run 
    +    /// that need access to the MPI_Comm from this, and another MPI_Comm from other
    +    /// \param userCode a functor provided by the user taking a const MPI_Comm& (from this as the first argument) and a const MPI_Comm& from o
    +    /// \param other another communicator whoes MPI_Comm is passed as the second argument
    +    template<typename UserCode>
    +    void execute_raw_mpi(const UserCode& userCode, const communicator& other) const
    +    {
    +        userCode(get_mpi_comm(), other.get_mpi_comm());
    +    }
    +
         /// \brief Equality types for communicator comparison.
         enum class equality_type {
           /// communicators are identical, i.e., communicators represent the same communication
    

    As an example, let's assume you hadn't implemented MPI_Comm_Size in your library (I know you have), then the use case for execute_raw_mpi(const UserCode& userCode) would be (assuming comm_world is an mpl::communicator):

            int checkSize = -1;
            const auto CheckSize = [&checkSize, &comm_world](const MPI_Comm &world)
            {
                const auto result = MPI_Comm_size(world, &checkSize);
                if (result != MPI_SUCCESS)
                {
                    std::cout << "mpl::communicator::execute_raw_mpi failed with mpi_error " << result << ", global rank: " << comm_world.rank() << std::endl;
                    comm_world.abort(EXIT_FAILURE);
                }  
            };
            comm_world.execute_raw_mpi(CheckSize);
            assert(checkSize == comm_world.size());
    

    This gives an example for the single argument version of execute_raw_mpi allowing users to call raw MPI for functions that take one MPI_Comm. For the use case that interests me (see #23 and #22), that is to say when two MPI_Comm's are required, then the two argument version of execute_raw_mpi can be called. Here is an example (where comm_world and groupcomm are mpl::communicators):

            const int intercomm_create_tag = 99;
            const auto Create = [this, &comm_world](const MPI_Comm &world, const MPI_Comm &group)
            {
                const auto result = MPI_Intercomm_create(group, 0, world, remoteleader, intercomm_create_tag, &intercomm);
                if (result != MPI_SUCCESS)
                {
                    std::cout << "mpl::communicator::execute_raw_mpi failed with mpi_error " << result << ", global rank: " << comm_world.rank() << std::endl;
                    comm_world.abort(EXIT_FAILURE);
                }
            };
            comm_world.execute_raw_mpi(Create, groupcomm);
    

    Here we can see that the stored comm_ is never actually returned to the user which was the case in my previous suggestion (see #23), which looked like this:

    MPI_Intercomm_create(groupcomm.get_mpi_comm(), 0, comm_world.get_mpi_comm(), remoteleader, intercomm_create_tag, &intercomm);
    

    I believe the intent is clear; that is to allow execution of arbitrary code that requires access to the underlying MPI_Comm. That being said, it can still be hijacked, see for example (where comm_world is an mpl::communicator):

    
            MPI_Comm stealComm = MPI_COMM_NULL;
            const auto Steal = [&stealComm](const MPI_Comm &world)
            {
                stealComm = world;
            };
            comm_world.execute_raw_mpi(Steal);
            int checkSizeSteal = -1;
            MPI_Comm_size(stealComm, &checkSizeSteal);
            assert(checkSizeSteal == comm_world.size());
    

    Clearly, one can use these mechanics to obtain the raw MPI_Comm as per the last code-stub, although it does appear more obvious that it's wrong to do so.

    What do you think?

    api design feature request 
    opened by a-jp 4
  • Obtain raw MPI_Comm from mpl::communicator

    In reference to #22 I am trying to use the raw c-api from MPI:

    int MPI_Intercomm_create(MPI_Comm local_comm, int local_leader,
                             MPI_Comm peer_comm, int remote_leader, int tag, MPI_Comm * newintercomm)
    

    How can I obtain the raw MPI communicator for comm_world, and the raw MPI communicator from my group, if the group was constructed from:

    mpl::communicator group(mpl::communicator::split, comm_world, index);
    

    where comm_world is an mpl type. I need these raw MPI variables for local_comm and peer_comm. I suppose I'm asking is there a function:

    MPI_Comm mpl::communicator::GetRawMPIComm()
    {
        return comm_;
    }
    

    Within the same code base how can I interact with the MPI library such as using MPI_Intercomm_create (see #22 for reasons why) and the MPL library if I need to get access to the underlying MPI "types/variables" from MPL?

    Any help and or advice welcome (and work arounds as I need to get this working).

    Many thanks,

    opened by a-jp 4
  • "on ramp" for legacy code and interop

    @rabauke thanks for writing a great library. I'd like to make use of it in some of my projects, but I face the issue that I have legacy code that expect MPI_Comm to be passed around and make use of other third party C libraries that expect the same. In some sense there is no "on ramp" for me to use to convert my code to mpl.

    Based on my understanding of the discussion here and here you have concerns about the user getting access to the raw MPI_Comm object for two reasons:

    1. Ownership may be unclear
    2. You don't want to enable mixing of MPI/MPL code because you would like MPL to be a complete self contained ecosystem

    Although I understand the sentiment I would like make the following comments on those two points and ask you to reconsider extracting the raw MPI_Comm handle.

In terms of ownership, MPI_Comm when extracted from MPL has pointer semantics, and in modern C++ people expect that a naked pointer means a non-owning pointer, because if you intend to transfer ownership you will use a smart pointer. In the case of the communicator handle, which isn't a pointer (so it will be MPI_COMM_NULL rather than nullptr), returning a reference has clear semantics that most C++ programmers should not have trouble with regarding object lifetime and scope.

    In terms of code mixing I understand that you want to maintain the cleanliness and purity of your library (probably partially why it is so nice). But, this makes mpl usage a bit of a walled garden that prevents me from slowly bringing it onboard to my C++ code, or interfacing with third party libraries over which I have no control.

    I understand why you have chosen the current design and respect that. However, I wanted to bring up the issue again as I'm sick of writing my own MPI wrappers...

    feature request 
    opened by jacobmerson 3
  • Cannot figure out waitany

    The crucial lines in my code are:

      if (procno==nprocs-1) {
        mpl::irequest_pool recv_requests;
        vector<int> recv_buffer(nprocs-1);
        for (int p=0; p<nprocs-1; p++) {
          recv_requests.push( comm_world.irecv( recv_buffer[p], p ) );
        }
        printf("Outstanding request #=%d\n",recv_requests.size());
        for (int p=0; p<nprocs-1; p++) {
          auto [success,index] = recv_requests.waitany();
    

    This gives on the waitany call:

    Assertion failed in file ./src/include/mpir_request.h at line 313: ((req))->ref_count >= 0
    0   libpmpi.12.dylib                    0x000000010a7b44de backtrace_libc + 62
    1   libpmpi.12.dylib                    0x000000010a7b4495 MPL_backtrace_show + 21
    2   libpmpi.12.dylib                    0x000000010a7502f4 MPIR_Assert_fail + 36
    3   libmpi.12.dylib                     0x000000010a579445 MPI_Waitany + 2469
    4   irecvsource                         0x000000010a4ea4f0 main + 672
    5   libdyld.dylib                       0x00007fff7319c3d5 start + 1
    

    Do you immediately see what I'm doing wrong or do I need to supply a fully functioning reproducer?

    opened by VictorEijkhout 3
  • Including mpl/mpl.hpp throwing errors

    Hello!

    I am trying to write a script with MPL. For now, I have written a simple hello world program but I have included the header file.

    Code:
    #include <iostream>
    #include <mpl/mpl.hpp>
    
    using namespace std;
    
    int main() {
        cout << "Hello, World!\n";
        return 0;
    }
    

    I am using the following command to compile it: g++ -std=c++17 -I./mpl hello_world.cpp

    However, I am getting the following errors:

    In file included from ./mpl/mpl/mpl.hpp:49,
                     from hello_world.cpp:3:
    ./mpl/mpl/layout.hpp:378:17: error: expected unqualified-id before ‘[’ token
         null_layout([[maybe_unused]] const null_layout &l) noexcept : null_layout() {}
                     ^
    ./mpl/mpl/layout.hpp:378:17: error: expected ‘)’ before ‘[’ token
         null_layout([[maybe_unused]] const null_layout &l) noexcept : null_layout() {}
                    ~^
                     )
    ./mpl/mpl/layout.hpp:380:17: error: expected unqualified-id before ‘[’ token
         null_layout([[maybe_unused]] null_layout &&l) noexcept : null_layout() {}
                     ^
    ./mpl/mpl/layout.hpp:380:17: error: expected ‘)’ before ‘[’ token
         null_layout([[maybe_unused]] null_layout &&l) noexcept : null_layout() {}
                    ~^
                     )
    In file included from ./mpl/mpl/mpl.hpp:54,
                     from hello_world.cpp:3:
    ./mpl/mpl/comm_group.hpp:3939:27: error: expected unqualified-id before ‘[’ token
         explicit communicator([[maybe_unused]] comm_collective_tag comm_collective,
                               ^
    ./mpl/mpl/comm_group.hpp:3939:27: error: expected ‘)’ before ‘[’ token
         explicit communicator([[maybe_unused]] comm_collective_tag comm_collective,
                              ~^
                               )
    ./mpl/mpl/comm_group.hpp:3952:27: error: expected unqualified-id before ‘[’ token
         explicit communicator([[maybe_unused]] group_collective_tag group_collective,
                               ^
    ./mpl/mpl/comm_group.hpp:3952:27: error: expected ‘)’ before ‘[’ token
         explicit communicator([[maybe_unused]] group_collective_tag group_collective,
                              ~^
                               )
    ./mpl/mpl/comm_group.hpp:3968:27: error: expected unqualified-id before ‘[’ token
         explicit communicator([[maybe_unused]] split_tag split, const communicator &other,
                               ^
    ./mpl/mpl/comm_group.hpp:3968:27: error: expected ‘)’ before ‘[’ token
         explicit communicator([[maybe_unused]] split_tag split, const communicator &other,
                              ~^
                               )
    ./mpl/mpl/comm_group.hpp:3987:27: error: expected unqualified-id before ‘[’ token
         explicit communicator([[maybe_unused]] split_shared_memory_tag split_shared_memory,
                               ^
    ./mpl/mpl/comm_group.hpp:3987:27: error: expected ‘)’ before ‘[’ token
         explicit communicator([[maybe_unused]] split_shared_memory_tag split_shared_memory,
                              ~^
                               )
    

    Can you please tell me how I can resolve this issue?

    Thank you for your time!

    question 
    opened by adarsh200496 2
  • Support for C++20 ranges

    It would be convenient to be able to transmit std::ranges view (C++20), e.g. a filtered or transformed sequence. I think there's a current limitation when passing iterators in that begin() and end() must be of the same type. This is usually not the case for ranges. Until C++20 I use range-v3 but currently need to copy into a STL vector before mpi communication, see here:

    https://github.com/mlund/faunus/blob/00ab258465bf09c02593fa66d85a405fc07b2a19/src/move.cpp#L391

    (Update: I realise that the example code is not representative as the buffer is here replaced)

    C++20 feature request 
    opened by mlund 2
  • send a std::variant with isend/irecv

    Hi,

    Very new to your library, it's really nice. For reference I only found out about it due to this thread I'm following:

    mpi-cxx-features

    I'd like to send a std::variant. At the moment I get: class mpl::struct_builder<std::variant<Class1, Class2> >' has no member named 'type'.

    Both Class1 and Class2 have been added via the MPL_REFLECTION macro. Before I debug, it occurred to me to check that it's possible to send a std::variant?

    Thanks, Andy

    opened by a-jp 2
  • bug in sendrecv?

    Some overloads in comm_group.hpp (the ones using iterators I think) such as this one:

    https://github.com/rabauke/mpl/blob/c4c83cfcf1f970bc1af6e5ba2667b40ebd97730e/mpl/comm_group.hpp#L1028-L1041

    Ignore the "source" argument and instead always pass dest twice to the underlying version.

    Is this by design? If so, could you give me some insight on why?

    Thanks!

    opened by RaulPPelaez 2
  • Support std::span

    The C++20 std::span should be supported in point-to-point communication operations in a similar way as std::vector is already supported. Note that std::span has no resize, which must be accounted for in the implementation and usage of recv and irecv.

    enhancement C++20 
    opened by rabauke 0
  • Inconsistent integer types in API

    For historical reasons MPI uses int, MPI_Aint, MPI_Offset, and MPI_Count, which may have different ranges and signedness. Integer usage should be harmonized for MPL:

    • size_t and ssize_t (when negative values are permissible) should be the only integer types.
    • It must be checked for narrowing when passing integer values to underlying MPI library functions.

    Edit: MPI 4.0 will introduce large-count support via new functions, see https://eurompi.github.io/assets/papers/2020-09-eurompi2020-mpi4.pdf

    api design MPI 4.0 
    opened by rabauke 2
  • MPI_ERRORS_RETURN

    My program bombs with

    Fatal error in MPI_Waitall: See the MPI_ERROR field in MPI_Status for the error code
    

    Undoubtedly a programming error by me. But I can not query that status because the code has exited. What is the MPL equivalent of MPI_Comm_set_errhandler(comm,MPI_ERRORS_RETURN)?

    api design 
    opened by VictorEijkhout 5
Owner

Heiko Bauke, expert in computational physics, digital imaging and high-performance computing