C++-based high-performance parallel environment execution engine for general RL environments.

Overview

EnvPool is a highly parallel reinforcement learning environment execution engine that significantly outperforms existing environment executors. With a curated design dedicated to the RL use case, we leverage a general asynchronous execution model implemented with a C++ thread pool for environment execution.

Here are EnvPool's several highlights:

  • Compatible with OpenAI gym APIs and DeepMind dm_env APIs;
  • Manages a pool of envs and interacts with them through batched APIs by default;
  • Both synchronous and asynchronous execution APIs;
  • Easy C++ developer API for adding new envs;
  • 1 million Atari frames per second of simulation with 256 CPU cores, ~13x the throughput of a Python subprocess-based vector env;
  • ~3x the throughput of a Python subprocess-based vector env on a low-resource setup such as 12 CPU cores;
  • Compared with existing GPU-based solutions (Brax / Isaac Gym), EnvPool is a general solution for speeding up a wide variety of RL environments via parallelization;
  • Compatible with some existing RL libraries, e.g., Tianshou.

Installation

PyPI

EnvPool is currently hosted on PyPI. It requires Python >= 3.7.

You can simply install EnvPool with the following command:

$ pip install envpool

After installation, open a Python console and type

import envpool
print(envpool.__version__)

If no error occurs, you have successfully installed EnvPool.

From Source

Please refer to the guideline.

Documentation

The tutorials and API documentation are hosted on envpool.readthedocs.io.

The example scripts are under the examples/ folder.

Supported Environments

We're in the process of open-sourcing all available envs from our internal version; stay tuned.

  • Atari via ALE
  • Single/multi-player ViZDoom
  • Classic RL envs, including CartPole, MountainCar, ...
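
To see exactly which environments your installed version provides, you can query the registry. A minimal sketch; the output depends on the installed envpool version:

import envpool

# Print the ids of all environments registered in this envpool build.
print(envpool.list_all_envs())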

Benchmark Results

We perform our benchmarks with the ALE Atari environment (with environment wrappers) on different hardware setups, including a TPUv3-8 virtual machine (VM) with 96 CPU cores and 2 NUMA nodes, and an NVIDIA DGX-A100 with 256 CPU cores and 8 NUMA nodes. The baselines are 1) a naive Python for-loop; 2) the most popular RL environment parallelization, Python subprocess-based vector envs, e.g., gym.vector_env; 3) Sample Factory, to our knowledge the fastest RL environment executor before EnvPool.

We report EnvPool performance in sync mode, async mode, and NUMA + async mode, compared with the baselines across different numbers of workers (i.e., numbers of CPU cores). As the results show, EnvPool achieves significant improvements over the baselines in all settings. On the high-end setup, EnvPool achieves 1 million frames per second on 256 CPU cores, 13.3x the throughput of the gym.vector_env baseline. On a typical PC setup with 12 CPU cores, EnvPool's throughput is 2.8x that of gym.vector_env.

Our benchmark script is examples/benchmark.py. The detailed configurations of the four systems are:

  • Personal laptop: 12 core Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz
  • TPU-VM: 96 core Intel(R) Xeon(R) CPU @ 2.00GHz
  • Apollo: 96 core AMD EPYC 7352 24-Core Processor
  • DGX-A100: 256 core AMD EPYC 7742 64-Core Processor

Highest FPS            Laptop (12)   TPU-VM (96)   Apollo (96)   DGX-A100 (256)
For-loop                     4,876         3,817         4,053            4,336
Subprocess                  18,249        42,885        19,560           79,509
Sample Factory              27,035       192,074       262,963          639,389
EnvPool (sync)              40,791       175,938       159,191          470,170
EnvPool (async)             50,513       352,243       410,941          845,537
EnvPool (numa+async)             /       367,799       458,414        1,060,371
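
For reference, here is a minimal sketch of how such a throughput number can be measured with the synchronous API. It is not the actual examples/benchmark.py script; the env id, step budget, and FPS accounting are illustrative only.

import time

import envpool
import numpy as np

num_envs = 32
env = envpool.make_gym("Pong-v5", num_envs=num_envs)
env.reset()
act = np.zeros(num_envs, dtype=int)

steps = 1000
start = time.time()
for _ in range(steps):
    env.step(act)  # one batched step advances all num_envs envs
elapsed = time.time() - start
# Env steps per second; a "frames per second" figure may additionally account
# for the frame skip applied inside the environment.
print("throughput:", steps * num_envs / elapsed)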

API Usage

The following shows both the synchronous and asynchronous API usage of EnvPool. You can also run the full script at examples/env_step.py.

Synchronous API

import envpool
import numpy as np

# make gym env
env = envpool.make("Pong-v5", env_type="gym", num_envs=100)
# or use envpool.make_gym(...)
obs = env.reset()  # should be (100, 4, 84, 84)
act = np.zeros(100, dtype=int)
obs, rew, done, info = env.step(act)

In synchronous mode, envpool closely resembles openai-gym/dm-env: it has reset and step functions with the same meaning. There is one exception though: in envpool, batched interaction is the default. Therefore, when creating the envpool, there is a num_envs argument that denotes how many envs you would like to run in parallel.

env = envpool.make("Pong-v5", env_type="gym", num_envs=100)

The first dimension of action passed to the step function should be equal to num_envs.

act = np.zeros(100, dtype=int)

You don't need to manually reset an environment when its done flag is true; instead, all envs in envpool have auto-reset enabled by default.
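
For example, a random-rollout loop never needs a manual reset. Below is a minimal sketch, assuming the gym-style env created above and a discrete action space (so env.action_space.n is available); note that, as discussed in the "Terminal observation missing" issue below, the observation returned for a finished env already belongs to the new episode.

import envpool
import numpy as np

env = envpool.make_gym("Pong-v5", num_envs=100)
obs = env.reset()
for _ in range(1000):
    act = np.random.randint(env.action_space.n, size=100)
    obs, rew, done, info = env.step(act)
    # No manual reset needed: envs whose done flag is True are reset
    # automatically, and their obs slots come from the new episode.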

Asynchronous API

import envpool
import numpy as np

# make asynchronous env
num_envs = 64
batch_size = 16
env = envpool.make("Pong-v5", env_type="gym", num_envs=num_envs, batch_size=batch_size)
env.async_reset()  # send the initial reset signal to all envs
while True:
    obs, rew, done, info = env.recv()
    env_id = info["env_id"]
    action = np.random.randint(env.action_space.n, size=len(env_id))
    env.send(action, env_id)

In asynchronous mode, the step function is split into two parts: the send and recv functions. send takes two arguments, a batch of actions and the corresponding env_id that each action should be sent to. Unlike step, send does not wait for the envs to execute and return the next state; it returns immediately after the actions are fed to the envs (which is why this is called async mode).

env.send(action, env_id)

To get the "next states", we need to call the recv function. However, recv does not guarantee that you will get back the "next states" of the envs you just called send on. Instead, whichever envs finish execution first are received first.

state = env.recv()
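
The env_id field in info identifies which envs produced the batch you just received, so each transition can be routed back to the right per-env trajectory. A minimal sketch:

obs, rew, done, info = env.recv()
env_id = info["env_id"]  # shape (batch_size,): ids of the envs in this batch
# obs[i], rew[i] and done[i] all describe environment env_id[i].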

Besides num_envs, there is one more argument: batch_size. While num_envs defines how many envs in total are managed by the envpool, batch_size defines how many envs are involved in each interaction with the envpool. For example, with 64 envs executing in the envpool, each send and recv call interacts with a batch of 16 envs.

envpool.make("Pong-v5", env_type="gym", num_envs=64, batch_size=16)

There are other configurable arguments for envpool.make; please check out the envpool interface introduction.
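
For instance, several options that appear in the issues below (seed, num_threads, stack_num, image size, thread affinity) can be passed directly when creating an env. A hedged example; argument availability varies by environment and envpool version:

import envpool

env = envpool.make_gym(
    "Breakout-v5",
    num_envs=8,
    seed=42,        # seed the whole pool of envs
    num_threads=0,  # 0 uses envpool's default thread count
    stack_num=4,    # number of stacked frames for Atari
)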

Contributing

EnvPool is still under development. More environments will be added, and we always welcome contributions to help make EnvPool better. If you would like to contribute, please check out our contribution guideline.

License

EnvPool is released under the Apache 2.0 license.

Other third-party source code and data are under their corresponding licenses. We do not include their source code and data in this repo.

Citing EnvPool

If you find EnvPool useful, please cite it in your publications.

[Coming soon!]

Disclaimer

This is not an official Sea Limited or Garena Online Private Limited product.

Issues

  • [Feature Request] Mujoco integration

    https://github.com/openai/gym/tree/master/gym/envs/mujoco

    Env List:

    • [x] Ant-v4 (#74)
    • [x] HalfCheetah-v4 (#75)
    • [x] Hopper-v4 (#76)
    • [x] Humanoid-v4 (#77)
    • [x] HumanoidStandup-v4 (#78)
    • [x] InvertedDoublePendulum-v4 (@Benjamin-eecs, #83)
    • [x] InvertedPendulum-v4 (#79)
    • [x] Pusher-v4 (#82)
    • [x] Reacher-v4 (#81)
    • [x] Swimmer-v4 (#80)
    • [x] Walker2d-v4 (@Benjamin-eecs, #86)
    • [x] add other options to align with gym (#93)

    Road Map:

    • [x] Get comfortable with current codebase, go through https://envpool.readthedocs.io/en/latest/pages/env.html and add a toy environment by yourself locally;
    • [x] Download Mujoco and run on your local machine [1] [5], try with different env settings and see the actual behavior;
    • [x] Go through their code [1] [2] (I think it's better to go through both openai and deepmind versions, but only use deepmind's solution as reference), understand their ctype APIs and what we can use to bind with EnvPool APIs [3];
    • [x] Integrate only one game and let it work;
    • [x] Add some unit tests (good to submit the first PR here);
    • [x] Integrate other environments (submit another PR) and related tests.

    Resources:

    1. https://github.com/openai/mujoco-py
    2. https://github.com/deepmind/dm_control/tree/master/dm_control/mujoco
    3. https://github.com/deepmind/mujoco/blob/main/doc/programming.rst
    4. It is quite similar to the Atari games which we have already integrated: https://github.com/mgbellemare/Arcade-Learning-Environment
    5. First install gym and mujoco, then run with
    import gym
    env = gym.make("Ant-v3")
    env.reset()
    for _ in range(10):
      env.step(env.action_space.sample())
      env.render()
    
    6. https://github.com/ikostrikov/gym_dmc/blob/master/compare.py (a checker)
    enhancement 
    opened by Trinkle23897 10
  • [BUG] Reward is not deterministic after seeding the env

    Describe the bug

    I use envpool to make HalfCheetah-v3 with a fixed seed, but the rewards are not the same across several runs. Specifically, only the reward returned by the first env is non-deterministic; the other envs are fine. And if num_envs is small, this bug does not occur.

    To Reproduce

    import envpool
    import numpy as np
    
    def random_rollout():
        np.random.seed(0)
        n = 32
        envs = envpool.make_gym('HalfCheetah-v3', num_envs=n, seed=123)
        envs.reset()
        rew_sum = 0
        for _ in range(10):
            action = np.random.rand(n, envs.action_space.shape[0])
            obs, rew, done, info = envs.step(action)
            rew_sum += rew
        envs.close()
        return rew_sum
    
    
    if __name__ == "__main__":
        a = random_rollout()
        b = random_rollout()
        print(a - b)
    

    Output:

    [-0.01131058  0.          0.          0.          0.          0.
      0.          0.          0.          0.          0.          0.
      0.          0.          0.          0.          0.          0.
      0.          0.          0.          0.          0.          0.
      0.          0.          0.          0.          0.          0.
      0.          0.        ]
    

    Expected behavior

    The reward should be deterministic after seeding.

    System info

    Describe the characteristic of your environment:

    • envpool version: '0.6.0'
    • envpool is installed via pip
    • Python version: 3.8.10
    0.6.0 1.21.5 3.8.10 (default, Jun  4 2021, 15:09:15) 
    [GCC 7.5.0] linux
    

    Checklist

    • [x] I have checked that there is no similar issue in the repo (required)
    • [x] I have read the documentation (required)
    • [x] I have provided a minimal working example to reproduce the bug (required)
    bug 
    opened by tongzhoumu 8
  • [Feature Request] ACME Integration

    https://github.com/deepmind/acme

    Road Map:

    @TianyiSun316

    • [ ] Go through ACME codebase and integrate vector_env to the available algorithms;
    • [ ] Write Atari examples;
    • [ ] Check Atari performance: Pong and Breakout;
    • [ ] Submit PR;

    @LeoGuo98

    • [ ] Do some experiments with sample efficiency (actually you can try out with different libraries, either ACME, tianshou, or sb3, this doesn't depend on the previous item)

    Resources:

    tianshou: #51
    stable-baselines3: #39
    cleanrl: #48 #53

    cc @zhongwen

    enhancement 
    opened by Trinkle23897 7
  • Atari option for repeat_action_probability

    The -v5 Gym Atari environments have sticky actions enabled by default (with repeat_action_probability=0.25, see here). This makes it impossible to replicate the original results from several key papers, especially the DQN Nature paper.

    Would it be possible to add an option to the Atari environment options that lets the user change repeat_action_probability to a different value? I believe that internally this can be accomplished by forwarding the argument to either gym.make or the ALE constructor.

    enhancement 
    opened by brett-daley 7
  • [BUG] Can't install on python 3.9 / 3.10 on macOS

    I tried to install envpool on python 3.9.5 and 3.10.0 with pip install envpool and got the following in both cases:

    ERROR: Could not find a version that satisfies the requirement envpool (from versions: none)
    ERROR: No matching distribution found for envpool
    

    I haven't checked other python versions though.

    question 
    opened by nico-bohlinger 6
  • [BUG] no Acc-v3 environment

    Describe the bug

    A clear and concise description of what the bug is.

    When using tianshou, there's no Acc-v3 environment.

    File "test_dqn_acc.py", line 246, in Acc_tain() File "test_dqn_acc.py", line 112, in Acc_tain args.task, num_envs=args.training_num, env_type="gym" File "/home/zhulin/.conda/envs/mytorch/lib/python3.7/site-packages/envpool/registration.py", line 43, in make f"{task_id} is not supported, envpool.list_all_envs() may help." AssertionError: Acc-v3 is not supported, envpool.list_all_envs() may help.

    question 
    opened by 963141377 5
  • [BUG] Breakout-v5 Performance Regression

    Describe the bug

    PPO can no longer reach a 400 game score on Breakout-v5 within 10M steps of training (same hyperparameters), as it does on BreakoutNoFrameskip-v4.

    (learning curve screenshot)

    To Reproduce

    Run the https://wandb.ai/costa-huang/cleanRL/runs/26k4q5jo/code?workspace=user-costa-huang to reproduce envpool's results and https://wandb.ai/costa-huang/cleanRL/runs/1ngqmz96/code?workspace=user-costa-huang to reproduce BreakoutNoFrameskip-v4 results.

    Expected behavior

    PPO should reach a 400 game score on Breakout-v5 within 10M steps of training.

    System info

    Describe the characteristic of your environment:

    • Describe how the library was installed (pip, source, ...)
    • Python version
    • Versions of any other relevant libraries
    import envpool, numpy, sys
    print(envpool.__version__, numpy.__version__, sys.version, sys.platform)
    
    >>> import envpool, numpy, sys
    >>> print(envpool.__version__, numpy.__version__, sys.version, sys.platform)
    0.4.3 1.21.5 3.9.5 (default, Jul 19 2021, 13:27:26) 
    [GCC 10.3.0] linux
    

    Reason and Possible fixes

    I ran gym's ALE/Breakout-v5 as well and also saw a regression, as shown below, but that turned out to be because ALE/Breakout-v5 uses the full action space by default (14 discrete actions), whereas Breakout-v5 here uses the minimal 4 discrete actions. So I have no idea why the regression happens with envpool...

    (learning curve screenshot)

    Checklist

    • [x] I have checked that there is no similar issue in the repo (required)
    • [x] I have read the documentation (required)
    • [x] I have provided a minimal working example to reproduce the bug (required)
    bug 
    opened by vwxyzjn 5
  • Add CleanRL examples: PPO solve Pong in 5 mins

    Kudos to this repo! This PR adds a CleanRL example. Interestingly, after increasing num_envs=32, I was able to solve Pong in 10 mins :D

    (learning curve screenshot)

    See the tracked experiment in costa-huang/cleanRL/runs/3rx432mj

    See also https://github.com/vwxyzjn/cleanrl/pull/100

    opened by vwxyzjn 5
  • [BUG] Terminal observation missing

    Describe the bug

    Unless I'm mistaken, the env resets automatically when an episode is over, which means the terminal observation is not accessible to the agent; this prevents proper bootstrapping for infinite-horizon problems.

    See https://github.com/openai/gym/pull/2484 and https://github.com/openai/gym/pull/1632

    Expected behavior

    https://github.com/openai/gym/pull/2484

    Reason and Possible fixes

    Add terminal_observation key to the info dict, as already done for the timelimit truncation.

    Checklist

    • [x] I have checked that there is no similar issue in the repo (required)
    • [x] I have read the documentation (required)
    • [ ] I have provided a minimal working example to reproduce the bug (required)
    documentation 
    opened by araffin 5
  • How to compile new environment into EnvPool

    opened by quark2019 5
  • Add acme JAX R2D2 example

    Description

    add acme DQN example

    Motivation and Context

    #60

    Types of changes

    What types of changes does your code introduce? Put an x in all the boxes that apply:

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds core functionality)
    • [ ] New environment (non-breaking change which adds 3rd-party environment)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)
    • [ ] Documentation (update in the documentation)
    • [x] Example (update in the folder of example)

    Checklist

    Go over all the following points, and put an x in all the boxes that apply. If you are unsure about any of these, don't hesitate to ask. We are here to help!

    • [x] I have read the CONTRIBUTION guide (required)
    • [x] My change requires a change to the documentation.
    • [ ] I have updated the tests accordingly (required for a bug fix or a new feature).
    • [x] I have updated the documentation accordingly.
    • [x] I have reformatted the code using make format (required)
    • [x] I have checked the code using make lint (required)
    • [x] I have ensured make bazel-test pass. (required)
    opened by TianyiSun316 4
  • [Feature Request] Comparison with ELF

    Motivation

    ELF hosts multiple games in parallel with C++ threading, and it says any game with a C/C++ interface can be plugged into this framework by writing a simple wrapper, so it can serve as a baseline for the Atari scenario.

    Resource

    Checklist

    • [x] I have checked that there is no similar issue in the repo (required)
    enhancement 
    opened by Benjamin-eecs 0
  • [Feature Request] SMAC Integration

    Motivation

    A typical large-scale multi-agent RL environment for large pretrained model learning

    Resource

    https://github.com/oxwhirl/smac

    Checklist

    • [x] I have checked that there is no similar issue in the repo (required)
    enhancement 
    opened by Benjamin-eecs 2
  • [Feature Request] Template for private env that uses envpool as a dependency

    Motivation

    It is desirable that one can develop their own env without having to work inside envpool's code base, while still being able to register the env with envpool and use the make function to create it. This already seems possible with our current code base; we just need a template repo.

    Solution

    import envpool
    import my_private_env
    my_env = envpool.make("MyPrivateEnv")
    

    Where my_private_env is developed as another package.

    Checklist

    • [x] I have checked that there is no similar issue in the repo (required)

    cc @zhongwen

    enhancement 
    opened by mavenlin 2
  • [BUG] Segfault when batch size is larger than 255 on Atari environments

    Describe the bug

    Segfault when batch size is larger than 255 on Atari environments

    MuJoCo environments seem to work well.

    To Reproduce

    Steps to reproduce the behavior.

    import time

    import envpool
    import numpy as np

    batch_size = 256  # setting this to 255 works

    env = envpool.make_gym(
        "Breakout-v5",
        stack_num=1,
        num_envs=batch_size * 2,
        batch_size=batch_size,
        use_inter_area_resize=False,
        img_width=88,
        img_height=88,
        num_threads=0,
        thread_affinity_offset=0,
    )
    action = np.array(
        [env.action_space.sample() for _ in range(batch_size)]
    )
    
    counter = 0
    
    env.async_reset()
    
    last_time = time.time()
    while True:
        obs, rew, done, info = env.recv()
    
        env_id = info["env_id"]
        env.send(action, env_id)
    
        counter += batch_size
        if counter >= 100000:
            cur_time = time.time()
            print("TPS", counter / (cur_time - last_time))
    
            counter = 0
            last_time = cur_time
    
    
    [1]    2959596 segmentation fault (core dumped)  python test_envpool.py
    

    Expected behavior

    Can run with large batch size, like 1024, 2048, etc.

    System info

    Describe the characteristic of your environment:

    • Describe how the library was installed (pip, source, ...)
    • Python version
    • Versions of any other relevant libraries
    import envpool, numpy, sys
    print(envpool.__version__, numpy.__version__, sys.version, sys.platform)
    
    0.6.1.post1 1.21.2 3.8.12 (default, Oct 12 2021, 13:49:34) 
    [GCC 7.5.0] linux
    

    Additional context

    Setting the batch size to 1024 works / segfaults randomly

    1024
    TPS 49611.30131772514
    TPS 57661.12695997062
    TPS 52648.235412990536
    TPS 52059.6945247295
    [1]    2971074 segmentation fault (core dumped)  python test_envpool.py
    

    Reason and Possible fixes

    Checklist

    • [x] I have checked that there is no similar issue in the repo (required)
    • [x] I have read the documentation (required)
    • [x] I have provided a minimal working example to reproduce the bug (required)
    bug 
    opened by imoneoi 5
  • [Feature Request] Google Research Football Integration

    Motivation

    A typical large-scale single/multi-agent RL environment in need of speedup

    Resource

    https://github.com/google-research/football

    Road Map

    • [ ] add game_engine into third_party
    • [ ] load assets
    • [ ] design class interface
    • [ ] align with gfootball

    Checklist

    • [x] I have checked that there is no similar issue in the repo (required)
    enhancement 
    opened by Benjamin-eecs 0