Header-only C++20 wrapper for MPI 4.0.

Overview

MPI

Modern C++20 message passing interface wrapper.

Examples

  • Initialization:
  mpi::environment environment;
  const auto& communicator = mpi::world_communicator;
  • Transmitting basic types:
  std::int32_t data = 0;
  if (communicator.rank() == 0)
  {
    data = 42;
    communicator.send(data, 1);
  }
  if (communicator.rank() == 1)
  {
    communicator.receive(data, 0);
  }
  • Transmitting a container of basic types:
  std::vector<std::int32_t> data_container(3);
  if (communicator.rank() == 0)
  {
    data_container = {1, 2, 3};
    communicator.send(data_container, 1);
  }
  if (communicator.rank() == 1)
  {
    communicator.receive(data_container, 0);
  }
  • Transmitting user-defined aggregates:
  struct user_type
  {
    std::int32_t         id      ;
    std::array<float, 3> position;
  };

  user_type user_object;
  if (communicator.rank() == 0)
  {
    user_object = {42, {0.0f, 1.0f, 2.0f}};
    communicator.send(user_object, 1);
  }
  if (communicator.rank() == 1)
  {
    communicator.receive(user_object, 0);
  }
  • Transmitting a container of user-defined aggregates:
  std::vector<user_type> user_object_container(2);
  if (communicator.rank() == 0)
  {
    user_object_container[0] = {42, {0.0f, 1.0f, 2.0f}};
    user_object_container[1] = {84, {3.0f, 4.0f, 5.0f}};
    communicator.send(user_object_container, 1);
  }
  if (communicator.rank() == 1)
  {
    communicator.receive(user_object_container, 0);
  }
  • See the tests for more.

Usage Notes

  • Define MPI_USE_EXCEPTIONS to check the return values of all viable functions against MPI_SUCCESS and throw an exception otherwise.
  • Define MPI_USE_RELAXED_TRAITS to prevent the library from checking the types of aggregate elements and triggering static asserts for non-aggregates (useful e.g. for testing).
  • Compliant types (satisfying mpi::is_compliant) are types whose corresponding mpi::data_type can be automatically generated:
    • Arithmetic types (satisfying std::is_arithmetic), enumerations (satisfying std::is_enum), and specializations of std::complex are compliant types.
    • C-style arrays, std::array, std::pair, std::tuple, and aggregate types (satisfying std::is_aggregate) consisting of other compliant types are also compliant types.
    • If your type is none of the above, you can manually specialize template <> struct mpi::type_traits<TYPE> { static data_type get_data_type() { return DATA_TYPE; } }; (a sketch follows this list).
  • The MPI functions accepting buffers may be used with:
    • Compliant types.
    • Contiguous sequential containers (i.e. std::string, std::span, std::valarray, std::vector) of compliant types.
  • Extension functions (starting with MPIX) are not included as they are often implementation-specific. You can nevertheless use them with the wrapper via the native handle getters.
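
  A minimal sketch of the manual type_traits specialization mentioned above, for illustration only: my_handle is a hypothetical non-compliant type, and constructing mpi::data_type from a native MPI_Datatype handle is an assumption about the wrapper's interface, which may differ.

  // Hypothetical non-aggregate type: the user-provided constructor prevents automatic data type generation.
  struct my_handle
  {
    explicit my_handle(std::int64_t value = 0) : value_(value) { }
    std::int64_t value_;
  };

  template <>
  struct mpi::type_traits<my_handle>
  {
    static mpi::data_type get_data_type()
    {
      return mpi::data_type(MPI_INT64_T); // assumed: data_type wraps a native MPI_Datatype handle
    }
  };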

Design Notes

Coverage (list from https://www.open-mpi.org/doc/v4.1/)

  • Constants
  • MPI_Abort
  • MPI_Accumulate
  • MPI_Add_error_class
  • MPI_Add_error_code
  • MPI_Add_error_string
  • MPI_Address
  • MPI_Aint_add
  • MPI_Aint_diff
  • MPI_Allgather
  • MPI_Allgatherv
  • MPI_Alloc_mem
  • MPI_Allreduce
  • MPI_Alltoall
  • MPI_Alltoallv
  • MPI_Alltoallw
  • MPI_Attr_delete
  • MPI_Attr_get
  • MPI_Attr_put
  • MPI_Barrier
  • MPI_Bcast
  • MPI_Bsend
  • MPI_Bsend_init
  • MPI_Buffer_attach
  • MPI_Buffer_detach
  • MPI_Cancel
  • MPI_Cart_coords
  • MPI_Cart_create
  • MPI_Cart_get
  • MPI_Cart_map
  • MPI_Cart_rank
  • MPI_Cart_shift
  • MPI_Cart_sub
  • MPI_Cartdim_get
  • MPI_Close_port
  • MPI_Comm_accept
  • MPI_Comm_c2f
  • MPI_Comm_call_errhandler
  • MPI_Comm_compare
  • MPI_Comm_connect
  • MPI_Comm_create
  • MPI_Comm_create_errhandler
  • MPI_Comm_create_group
  • MPI_Comm_create_keyval
  • MPI_Comm_delete_attr
  • MPI_Comm_disconnect
  • MPI_Comm_dup
  • MPI_Comm_dup_with_info
  • MPI_Comm_f2c
  • MPI_Comm_free
  • MPI_Comm_free_keyval
  • MPI_Comm_get_attr
  • MPI_Comm_get_errhandler
  • MPI_Comm_get_info
  • MPI_Comm_get_name
  • MPI_Comm_get_parent
  • MPI_Comm_group
  • MPI_Comm_idup
  • MPI_Comm_join
  • MPI_Comm_rank
  • MPI_Comm_remote_group
  • MPI_Comm_remote_size
  • MPI_Comm_set_attr
  • MPI_Comm_set_errhandler
  • MPI_Comm_set_info
  • MPI_Comm_set_name
  • MPI_Comm_size
  • MPI_Comm_spawn
  • MPI_Comm_spawn_multiple
  • MPI_Comm_split
  • MPI_Comm_split_type
  • MPI_Comm_test_inter
  • MPI_Compare_and_swap
  • MPI_Dims_create
  • MPI_Dist_graph_create
  • MPI_Dist_graph_create_adjacent
  • MPI_Dist_graph_neighbors
  • MPI_Dist_graph_neighbors_count
  • MPI_Errhandler_create
  • MPI_Errhandler_free
  • MPI_Errhandler_get
  • MPI_Errhandler_set
  • MPI_Error_class
  • MPI_Error_string
  • MPI_Exscan
  • MPI_Fetch_and_op
  • MPI_File_c2f
  • MPI_File_call_errhandler
  • MPI_File_close
  • MPI_File_create_errhandler
  • MPI_File_delete
  • MPI_File_f2c
  • MPI_File_get_amode
  • MPI_File_get_atomicity
  • MPI_File_get_byte_offset
  • MPI_File_get_errhandler
  • MPI_File_get_group
  • MPI_File_get_info
  • MPI_File_get_position
  • MPI_File_get_position_shared
  • MPI_File_get_size
  • MPI_File_get_type_extent
  • MPI_File_get_view
  • MPI_File_iread
  • MPI_File_iread_all
  • MPI_File_iread_at
  • MPI_File_iread_at_all
  • MPI_File_iread_shared
  • MPI_File_iwrite
  • MPI_File_iwrite_all
  • MPI_File_iwrite_at
  • MPI_File_iwrite_at_all
  • MPI_File_iwrite_shared
  • MPI_File_open
  • MPI_File_preallocate
  • MPI_File_read
  • MPI_File_read_all
  • MPI_File_read_all_begin
  • MPI_File_read_all_end
  • MPI_File_read_at
  • MPI_File_read_at_all
  • MPI_File_read_at_all_begin
  • MPI_File_read_at_all_end
  • MPI_File_read_ordered
  • MPI_File_read_ordered_begin
  • MPI_File_read_ordered_end
  • MPI_File_read_shared
  • MPI_File_seek
  • MPI_File_seek_shared
  • MPI_File_set_atomicity
  • MPI_File_set_errhandler
  • MPI_File_set_info
  • MPI_File_set_size
  • MPI_File_set_view
  • MPI_File_sync
  • MPI_File_write
  • MPI_File_write_all
  • MPI_File_write_all_begin
  • MPI_File_write_all_end
  • MPI_File_write_at
  • MPI_File_write_at_all
  • MPI_File_write_at_all_begin
  • MPI_File_write_at_all_end
  • MPI_File_write_ordered
  • MPI_File_write_ordered_begin
  • MPI_File_write_ordered_end
  • MPI_File_write_shared
  • MPI_Finalize
  • MPI_Finalized
  • MPI_Free_mem
  • MPI_Gather
  • MPI_Gatherv
  • MPI_Get
  • MPI_Get_accumulate
  • MPI_Get_address
  • MPI_Get_count
  • MPI_Get_elements
  • MPI_Get_elements_x
  • MPI_Get_library_version
  • MPI_Get_processor_name
  • MPI_Get_version
  • MPI_Graph_create
  • MPI_Graph_get
  • MPI_Graph_map
  • MPI_Graph_neighbors
  • MPI_Graph_neighbors_count
  • MPI_Graphdims_get
  • MPI_Grequest_complete
  • MPI_Grequest_start
  • MPI_Group_c2f
  • MPI_Group_compare
  • MPI_Group_difference
  • MPI_Group_excl
  • MPI_Group_f2c
  • MPI_Group_free
  • MPI_Group_incl
  • MPI_Group_intersection
  • MPI_Group_range_excl
  • MPI_Group_range_incl
  • MPI_Group_rank
  • MPI_Group_size
  • MPI_Group_translate_ranks
  • MPI_Group_union
  • MPI_Iallgather
  • MPI_Iallgatherv
  • MPI_Iallreduce
  • MPI_Ialltoall
  • MPI_Ialltoallv
  • MPI_Ialltoallw
  • MPI_Ibarrier
  • MPI_Ibcast
  • MPI_Ibsend
  • MPI_Iexscan
  • MPI_Igather
  • MPI_Igatherv
  • MPI_Improbe
  • MPI_Imrecv
  • MPI_Ineighbor_allgather
  • MPI_Ineighbor_allgatherv
  • MPI_Ineighbor_alltoall
  • MPI_Ineighbor_alltoallv
  • MPI_Ineighbor_alltoallw
  • MPI_Info_c2f
  • MPI_Info_create
  • MPI_Info_delete
  • MPI_Info_dup
  • MPI_Info_env
  • MPI_Info_f2c
  • MPI_Info_free
  • MPI_Info_get
  • MPI_Info_get_nkeys
  • MPI_Info_get_nthkey
  • MPI_Info_get_valuelen
  • MPI_Info_set
  • MPI_Init
  • MPI_Init_thread
  • MPI_Initialized
  • MPI_Intercomm_create
  • MPI_Intercomm_merge
  • MPI_Iprobe
  • MPI_Irecv
  • MPI_Ireduce
  • MPI_Ireduce_scatter
  • MPI_Ireduce_scatter_block
  • MPI_Irsend
  • MPI_Is_thread_main
  • MPI_Iscan
  • MPI_Iscatter
  • MPI_Iscatterv
  • MPI_Isend
  • MPI_Issend
  • MPI_Keyval_create
  • MPI_Keyval_free
  • MPI_Lookup_name
  • MPI_Message_c2f
  • MPI_Message_f2c
  • MPI_Mprobe
  • MPI_Mrecv
  • MPI_Neighbor_allgather
  • MPI_Neighbor_allgatherv
  • MPI_Neighbor_alltoall
  • MPI_Neighbor_alltoallv
  • MPI_Neighbor_alltoallw
  • MPI_Op_c2f
  • MPI_Op_commutative
  • MPI_Op_create
  • MPI_Op_f2c
  • MPI_Op_free
  • MPI_Open_port
  • MPI_Pack
  • MPI_Pack_external
  • MPI_Pack_external_size
  • MPI_Pack_size
  • MPI_Pcontrol
  • MPI_Probe
  • MPI_Publish_name
  • MPI_Put
  • MPI_Query_thread
  • MPI_Raccumulate
  • MPI_Recv
  • MPI_Recv_init
  • MPI_Reduce
  • MPI_Reduce_local
  • MPI_Reduce_scatter
  • MPI_Reduce_scatter_block
  • MPI_Register_datarep
  • MPI_Request_c2f
  • MPI_Request_f2c
  • MPI_Request_free
  • MPI_Request_get_status
  • MPI_Rget
  • MPI_Rget_accumulate
  • MPI_Rput
  • MPI_Rsend
  • MPI_Rsend_init
  • MPI_Scan
  • MPI_Scatter
  • MPI_Scatterv
  • MPI_Send
  • MPI_Send_init
  • MPI_Sendrecv
  • MPI_Sendrecv_replace
  • MPI_Sizeof
  • MPI_Ssend
  • MPI_Ssend_init
  • MPI_Start
  • MPI_Startall
  • MPI_Status_c2f
  • MPI_Status_f2c
  • MPI_Status_set_cancelled
  • MPI_Status_set_elements
  • MPI_Status_set_elements_x
  • MPI_T_category_changed
  • MPI_T_category_get_categories
  • MPI_T_category_get_cvars
  • MPI_T_category_get_info
  • MPI_T_category_get_num
  • MPI_T_category_get_pvars
  • MPI_T_cvar_get_info
  • MPI_T_cvar_get_num
  • MPI_T_cvar_handle_alloc
  • MPI_T_cvar_handle_free
  • MPI_T_cvar_read
  • MPI_T_cvar_write
  • MPI_T_enum_get_info
  • MPI_T_enum_get_item
  • MPI_T_finalize
  • MPI_T_init_thread
  • MPI_T_pvar_get_info
  • MPI_T_pvar_get_num
  • MPI_T_pvar_handle_alloc
  • MPI_T_pvar_handle_free
  • MPI_T_pvar_read
  • MPI_T_pvar_readreset
  • MPI_T_pvar_reset
  • MPI_T_pvar_session_create
  • MPI_T_pvar_session_free
  • MPI_T_pvar_start
  • MPI_T_pvar_stop
  • MPI_T_pvar_write
  • MPI_Test
  • MPI_Test_cancelled
  • MPI_Testall
  • MPI_Testany
  • MPI_Testsome
  • MPI_Topo_test
  • MPI_Type_c2f
  • MPI_Type_commit
  • MPI_Type_contiguous
  • MPI_Type_create_darray
  • MPI_Type_create_f90_complex
  • MPI_Type_create_f90_integer
  • MPI_Type_create_f90_real
  • MPI_Type_create_hindexed
  • MPI_Type_create_hindexed_block
  • MPI_Type_create_hvector
  • MPI_Type_create_indexed_block
  • MPI_Type_create_keyval
  • MPI_Type_create_resized
  • MPI_Type_create_struct
  • MPI_Type_create_subarray
  • MPI_Type_delete_attr
  • MPI_Type_dup
  • MPI_Type_extent
  • MPI_Type_f2c
  • MPI_Type_free
  • MPI_Type_free_keyval
  • MPI_Type_get_attr
  • MPI_Type_get_contents
  • MPI_Type_get_envelope
  • MPI_Type_get_extent
  • MPI_Type_get_extent_x
  • MPI_Type_get_name
  • MPI_Type_get_true_extent
  • MPI_Type_get_true_extent_x
  • MPI_Type_hindexed
  • MPI_Type_hvector
  • MPI_Type_indexed
  • MPI_Type_lb
  • MPI_Type_match_size
  • MPI_Type_set_attr
  • MPI_Type_set_name
  • MPI_Type_size
  • MPI_Type_size_x
  • MPI_Type_struct
  • MPI_Type_ub
  • MPI_Type_vector
  • MPI_Unpack
  • MPI_Unpack_external
  • MPI_Unpublish_name
  • MPI_Wait
  • MPI_Waitall
  • MPI_Waitany
  • MPI_Waitsome
  • MPI_Win_allocate
  • MPI_Win_allocate_shared
  • MPI_Win_attach
  • MPI_Win_c2f
  • MPI_Win_call_errhandler
  • MPI_Win_complete
  • MPI_Win_create
  • MPI_Win_create_dynamic
  • MPI_Win_create_errhandler
  • MPI_Win_create_keyval
  • MPI_Win_delete_attr
  • MPI_Win_detach
  • MPI_Win_f2c
  • MPI_Win_fence
  • MPI_Win_flush
  • MPI_Win_flush_all
  • MPI_Win_flush_local
  • MPI_Win_flush_local_all
  • MPI_Win_free
  • MPI_Win_free_keyval
  • MPI_Win_get_attr
  • MPI_Win_get_errhandler
  • MPI_Win_get_group
  • MPI_Win_get_info
  • MPI_Win_get_name
  • MPI_Win_lock
  • MPI_Win_lock_all
  • MPI_Win_post
  • MPI_Win_set_attr
  • MPI_Win_set_errhandler
  • MPI_Win_set_info
  • MPI_Win_set_name
  • MPI_Win_shared_query
  • MPI_Win_start
  • MPI_Win_sync
  • MPI_Win_test
  • MPI_Win_unlock
  • MPI_Win_unlock_all
  • MPI_Win_wait
  • MPI_Wtick
  • MPI_Wtime
  • MPI_Allgather_init
  • MPI_Allgatherv_init
  • MPI_Allreduce_init
  • MPI_Alltoall_init
  • MPI_Alltoallv_init
  • MPI_Alltoallw_init
  • MPI_Barrier_init
  • MPI_Bcast_init
  • MPI_Comm_create_from_group
  • MPI_Comm_idup_with_info
  • MPI_Exscan_init
  • MPI_Gather_init
  • MPI_Gatherv_init
  • MPI_Group_from_session_pset
  • MPI_Info_create_env
  • MPI_Info_get_string
  • MPI_Intercomm_create_from_groups
  • MPI_Isendrecv
  • MPI_Isendrecv_replace
  • MPI_Neighbor_allgather_init
  • MPI_Neighbor_allgatherv_init
  • MPI_Neighbor_alltoall_init
  • MPI_Neighbor_alltoallv_init
  • MPI_Neighbor_alltoallw_init
  • MPI_Parrived
  • MPI_Pready
  • MPI_Pready_list
  • MPI_Pready_range
  • MPI_Precv_init
  • MPI_Psend_init
  • MPI_Reduce_init
  • MPI_Reduce_scatter_block_init
  • MPI_Reduce_scatter_init
  • MPI_Scan_init
  • MPI_Scatter_init
  • MPI_Scatterv_init
  • MPI_Session_call_errhandler
  • MPI_Session_create_errhandler
  • MPI_Session_c2f
  • MPI_Session_f2c
  • MPI_Session_finalize
  • MPI_Session_get_errhandler
  • MPI_Session_get_info
  • MPI_Session_get_nth_pset
  • MPI_Session_get_num_psets
  • MPI_Session_get_pset_info
  • MPI_Session_init
  • MPI_Session_set_errhandler
  • MPI_T_category_get_events
  • MPI_T_category_get_index
  • MPI_T_category_get_num_events
  • MPI_T_cvar_get_index
  • MPI_T_pvar_get_index
  • MPI_T_event_callback_get_info
  • MPI_T_event_callback_set_info
  • MPI_T_event_copy
  • MPI_T_event_get_num
  • MPI_T_event_get_info
  • MPI_T_event_get_index
  • MPI_T_event_get_source
  • MPI_T_event_get_timestamp
  • MPI_T_event_handle_alloc
  • MPI_T_event_handle_free
  • MPI_T_event_handle_get_info
  • MPI_T_event_handle_set_info
  • MPI_T_event_read
  • MPI_T_event_register_callback
  • MPI_T_event_set_dropped_handler
  • MPI_T_source_get_info
  • MPI_T_source_get_num
  • MPI_T_source_get_timestamp

Future Work

Planned work is tracked in the open issues, summarized in the Comments section below.
Comments
  • Incorporate futures, coroutines, and detach callbacks (for requests).

    An MPI_Request is a handle for an MPI operation which will complete in the future. A std::future<T> is a handle for a C++ operation which will complete in the future.

    They are very much the same. Futures should be incorporated in the implementation of the request wrapper. Specifically, std::future<T>::then(), which is part of the C++ Concurrency TS proposal, would enable chaining non-blocking operations with a clean syntax.

    Callbacks are an alternative approach to asynchrony; the user provides a function which is run when the operation completes. Callbacks also enable chaining (through nesting) today.

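    A rough sketch of the analogy, using the raw MPI C API together with std::async rather than the wrapper's request type (receive_async is a hypothetical helper; running MPI calls on a worker thread assumes initialization with MPI_THREAD_MULTIPLE, and <future> plus <mpi.h> are required):

    std::future<std::int32_t> receive_async(MPI_Comm communicator, std::int32_t source, std::int32_t tag)
    {
      return std::async(std::launch::async, [=]
      {
        std::int32_t value   = 0;
        MPI_Request  request = MPI_REQUEST_NULL;
        MPI_Irecv(&value, 1, MPI_INT32_T, source, tag, communicator, &request);
        MPI_Wait (&request, MPI_STATUS_IGNORE); // blocks the worker thread, not the caller
        return value;
      });
    }

    With std::future<T>::then() or coroutines, such operations could be chained instead of occupying a thread per request.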
  • Combined collectives for convenient communication of varying sizes.

    Example: A user has to first MPI_Gather the sizes of each vector from each rank, and use that information to allocate space and fill in the sizes / displacements prior to calling MPI_Gatherv.

    Provide conveniences which combine such common collective operations into a single call.

    Blocked by #2 as chaining via futures or callbacks is necessary for immediate and persistent collectives.

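    A sketch of the manual pattern described above, using the raw MPI C API (local_data is an assumed std::vector<std::int32_t> whose size differs per rank; the receive-side arguments are only significant on the root):

    int size = 0;
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // 1. Gather the per-rank element counts on the root.
    const int        local_count = static_cast<int>(local_data.size());
    std::vector<int> counts(size);
    MPI_Gather(&local_count, 1, MPI_INT, counts.data(), 1, MPI_INT, 0, MPI_COMM_WORLD);

    // 2. Compute displacements and allocate the receive buffer.
    std::vector<int> displacements(size, 0);
    for (int i = 1; i < size; ++i)
      displacements[i] = displacements[i - 1] + counts[i - 1];
    std::vector<std::int32_t> gathered(displacements.back() + counts.back());

    // 3. Gather the variable-sized data itself.
    MPI_Gatherv(local_data.data(), local_count, MPI_INT32_T,
                gathered.data(), counts.data(), displacements.data(), MPI_INT32_T,
                0, MPI_COMM_WORLD);

    A combined convenience call would perform all three steps internally.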
  • Design and implement a high-level library on top of this one.

    Potential primitives:

    • [x] Distributed FSIO:
      • Distributed reading/writing of partial/complete data to/from multiple processes without user intervention.
      • Perhaps just accept HDF5 as a de-facto standard and write a wrapper for it (it is already based on MPI-IO).
      • Inspiration: HDF5, NetCDF, ADIOS.
      • Resolution: Accepted HDF5. See https://github.com/acdemiralp/hdf for a modern interface.
    • [x] Sync Variables:
      • Variables which are synchronized across all processes (in a communicator?) without explicit calls to collectives.
      • Inspiration: Unity3D/Unreal/RakNet network SyncVars.
      • Resolution: Implemented. See mpi/extensions/shared_variable.hpp (a rough sketch of the underlying idea follows this issue).
    • [ ] Sync Memory:
      • Abstraction of a memory region in which:
        • An object constructed or destroyed in one process is constructed or destroyed across all processes.
        • Members of all objects are kept in sync across all processes.
        • A function called in one process is called across all processes.
        • Objects can reference / point to each other (pointers are valid for use across processes - virtual pointers).
        • Builds on sync variables and RPCs.
        • Inspiration: Unity3D/Unreal maps with networking.
    • [ ] Distributed Shared Memory:
      • Treat the combined memory of all processes as one i.e. partitioned global address space (PGAS).
      • PGAS versions of STL containers.
      • PGAS versions of <algorithm>.
      • Probably using mpi::window.
      • See: DASH: Data Structures and Algorithms with Support for Hierarchical Locality.
      • See: UPC++: A PGAS Extension for C++
      • Objection: I doubt that I can create something as elaborate as DASH. I should just accept DASH as a de-facto standard for PGAS in C++. It even supports C++17.
      • Objection to objection: DASH is bloated to the throat with scripts, makefiles, and site-specifics, i.e. a workflow that disrupts the user's own workflow. The user has to take a break from programming their own work and spend time building, installing, and making sure that DASH works. I can just provide a few STL-style headers to achieve PGAS. Specifically:
        • <pgas/algorithm>
        • <pgas/array>
        • <pgas/iterator>
        • <pgas/numeric>
    • [ ] Task Graphs:
      • Define streamlined inputs and outputs (i.e. resources) for each point-to-point and collective operation (i.e. tasks).
      • Allow user to construct a directed acyclic graph of resources and tasks.
      • Automatically place barriers for resources referred by multiple tasks.
      • This would achieve complex communication operations with understandable code.
      • Inspiration: Intel TBB
    • [ ] Remote Procedure Calls:
    • [ ] Socket-like Communication:
      • Real-time (always on, passive), bidirectional, low-latency communication across all processes (in a communicator?).
      • Subscribe to a group/topic with a key and a callback which gets called whenever a message with the key is received.
      • Perhaps a notion of "lobby" and "room" for ranks.
      • Request/reply, publish/subscribe, pipeline, exclusive pair patterns.
      • Inspiration: Socket.io, ZeroMQ.
      • See: MPI/RT-an emerging standard for high-performance real-time systems
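
    A rough sketch of the sync variable idea, using the raw MPI C API. This is not the interface of mpi/extensions/shared_variable.hpp, whose design may differ; T is assumed to be trivially copyable.

    template <typename T>
    class synced_value
    {
    public:
      explicit synced_value(MPI_Comm communicator, int owner = 0)
      : communicator_(communicator), owner_(owner) { }

      // Called on the owning rank to change the value.
      void     set        (const T& value)       { value_ = value; }
      // Collective: every rank in the communicator adopts the owner's current value.
      void     synchronize()
      {
        MPI_Bcast(&value_, static_cast<int>(sizeof(T)), MPI_BYTE, owner_, communicator_);
      }
      const T& get        () const               { return value_; }

    private:
      MPI_Comm communicator_;
      int      owner_       ;
      T        value_       {};
    };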
  • Convert all viable functions to use MPI_Count once the functions ending with _c become widespread.

    Convert all the following functions to use std::int64_t once the big count proposal is supported by the majority of implementations.

    Reasoning for conversion rather than addition: We should not represent a memory address/size with a std::int32_t in the era of 64-bit computers.

    Including but not limited to:

    • [ ] MPIX_Accumulate_c
    • [ ] MPIX_Allgather_c
    • [ ] MPIX_Allgatherv_c
    • [ ] MPIX_Allreduce_c
    • [ ] MPIX_Alltoall_c
    • [ ] MPIX_Alltoallv_c
    • [ ] MPIX_Alltoallw_c
    • [ ] MPIX_Bcast_c
    • [ ] MPIX_Bcast_init_c
    • [ ] MPIX_Bsend_c
    • [ ] MPIX_Bsend_init_c
    • [ ] MPIX_Buffer_attach_c
    • [ ] MPIX_Buffer_detach_c
    • [ ] MPIX_Exscan_c
    • [ ] MPIX_Gather_c
    • [ ] MPIX_Gatherv_c
    • [ ] MPIX_Get_accumulate_c
    • [ ] MPIX_Get_c
    • [ ] MPIX_Get_count_c
    • [ ] MPIX_Get_elements_c
    • [ ] MPIX_Iallgather_c
    • [ ] MPIX_Iallgatherv_c
    • [ ] MPIX_Iallreduce_c
    • [ ] MPIX_Ialltoall_c
    • [ ] MPIX_Ialltoallv_c
    • [ ] MPIX_Ialltoallw_c
    • [ ] MPIX_Ibcast_c
    • [ ] MPIX_Ibsend_c
    • [ ] MPIX_Iexscan_c
    • [ ] MPIX_Igather_c
    • [ ] MPIX_Igatherv_c
    • [ ] MPIX_Imrecv_c
    • [ ] MPIX_Ineighbor_allgather_c
    • [ ] MPIX_Ineighbor_allgatherv_c
    • [ ] MPIX_Ineighbor_alltoall_c
    • [ ] MPIX_Ineighbor_alltoallv_c
    • [ ] MPIX_Ineighbor_alltoallw_c
    • [ ] MPIX_Irecv_c
    • [ ] MPIX_Ireduce_c
    • [ ] MPIX_Ireduce_scatter_block_c
    • [ ] MPIX_Ireduce_scatter_c
    • [ ] MPIX_Irsend_c
    • [ ] MPIX_Iscan_c
    • [ ] MPIX_Iscatter_c
    • [ ] MPIX_Iscatterv_c
    • [ ] MPIX_Isend_c
    • [ ] MPIX_Issend_c
    • [ ] MPIX_Mrecv_c
    • [ ] MPIX_Neighbor_allgather_c
    • [ ] MPIX_Neighbor_allgatherv_c
    • [ ] MPIX_Neighbor_alltoall_c
    • [ ] MPIX_Neighbor_alltoallv_c
    • [ ] MPIX_Neighbor_alltoallw_c
    • [ ] MPIX_Op_create_c
    • [ ] MPIX_Pack_c
    • [ ] MPIX_Pack_external_c
    • [ ] MPIX_Pack_external_size_c
    • [ ] MPIX_Pack_size_c
    • [ ] MPIX_Put_c
    • [ ] MPIX_Raccumulate_c
    • [ ] MPIX_Recv_c
    • [ ] MPIX_Recv_init_c
    • [ ] MPIX_Reduce_c
    • [ ] MPIX_Reduce_local_c
    • [ ] MPIX_Reduce_scatter_block_c
    • [ ] MPIX_Reduce_scatter_c
    • [ ] MPIX_Rget_accumulate_c
    • [ ] MPIX_Rget_c
    • [ ] MPIX_Rput_c
    • [ ] MPIX_Rsend_c
    • [ ] MPIX_Rsend_init_c
    • [ ] MPIX_Scan_c
    • [ ] MPIX_Scatter_c
    • [ ] MPIX_Scatterv_c
    • [ ] MPIX_Send_c
    • [ ] MPIX_Send_init_c
    • [ ] MPIX_Ssend_c
    • [ ] MPIX_Ssend_init_c
    • [ ] MPIX_Type_contiguous_c
    • [ ] MPIX_Type_create_darray_c
    • [ ] MPIX_Type_create_hindexed_block_c
    • [ ] MPIX_Type_create_hindexed_c
    • [ ] MPIX_Type_create_hvector_c
    • [ ] MPIX_Type_create_indexed_block_c
    • [ ] MPIX_Type_create_resized_c
    • [ ] MPIX_Type_create_struct_c
    • [ ] MPIX_Type_create_subarray_c
    • [ ] MPIX_Type_get_contents_c
    • [ ] MPIX_Type_get_envelope_c
    • [ ] MPIX_Type_get_extent_c
    • [ ] MPIX_Type_get_true_extent_c
    • [ ] MPIX_Type_indexed_c
    • [ ] MPIX_Type_size_c
    • [ ] MPIX_Type_vector_c
    • [ ] MPIX_Unpack_c
    • [ ] MPIX_Unpack_external_c
    • [ ] MPIX_Win_allocate_c
    • [ ] MPIX_Win_allocate_shared_c
    • [ ] MPIX_Win_create_c
    • [ ] MPIX_Win_shared_query_c

    See https://eurompi.github.io/assets/papers/2020-09-eurompi2020-mpi4.pdf for more functions.
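
    For illustration, the raw C API difference once the large-count variants are available (assuming an implementation that already provides MPI 4.0's standardized MPI_Send_c; the buffer size is chosen only to show the overflow problem):

    std::vector<std::uint8_t> huge(3ull * 1024 * 1024 * 1024);  // ~3.2e9 elements, more than INT_MAX
    // MPI_Send  (huge.data(), static_cast<int>(huge.size()), MPI_UINT8_T, 1, 0, MPI_COMM_WORLD); // 32-bit count overflows
    MPI_Send_c(huge.data(), static_cast<MPI_Count>(huge.size()), MPI_UINT8_T, 1, 0, MPI_COMM_WORLD);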

Releases(1.3.0)
Owner
Ali Can Demiralp