C++-based high-performance parallel environment execution engine for general RL environments.

Overview


EnvPool is a highly parallel reinforcement learning environment execution engine that significantly outperforms existing environment executors. With a design dedicated to the RL use case, it implements a general asynchronous execution model using a C++ thread pool for environment execution.

Here are EnvPool's several highlights:

  • Compatible with OpenAI gym APIs and DeepMind dm_env APIs;
  • Manage a pool of envs, interact with the envs in batched APIs by default;
  • Synchronous execution API and asynchronous execution API;
  • Easy C++ developer API to add new envs;
  • Simulates 1 million Atari frames per second with 256 CPU cores, ~13x the throughput of a Python subprocess-based vector env;
  • ~3x the throughput of a Python subprocess-based vector env on a low-resource setup like 12 CPU cores;
  • Compared with existing GPU-based solutions (Brax / Isaac Gym), EnvPool is a general solution for speeding up all kinds of RL environments via parallel execution;
  • Compatible with some existing RL libraries, e.g., Tianshou.

Installation

PyPI

EnvPool is currently hosted on PyPI. It requires Python >= 3.7.

You can simply install EnvPool with the following command:

$ pip install envpool

After installation, open a Python console and type

import envpool
print(envpool.__version__)

If no error occurs, you have successfully installed EnvPool.

From Source

Please refer to the guideline.

Documentation

The tutorials and API documentation are hosted on envpool.readthedocs.io.

The example scripts are in the examples/ folder.

Supported Environments

We're in the process of open-sourcing all available envs from our internal version; stay tuned.

  • Atari via ALE
  • Single/Multi players Vizdoom
  • Classic RL envs, including CartPole, MountainCar, ...

Benchmark Results

We perform our benchmarks with the ALE Atari environment (with environment wrappers) on different hardware setups, including a TPUv3-8 virtual machine (VM) with 96 CPU cores and 2 NUMA nodes, and an NVIDIA DGX-A100 with 256 CPU cores and 8 NUMA nodes. The baselines are: 1) a naive Python for-loop; 2) the most popular RL environment parallelization, Python subprocess-based vector envs, e.g., gym.vector_env; 3) Sample Factory, to our knowledge the fastest RL environment executor before EnvPool.

We report EnvPool performance in sync mode, async mode, and NUMA + async mode, compared with the baselines at different numbers of workers (i.e., numbers of CPU cores). As the results show, EnvPool achieves significant improvements over the baselines in all settings. On the high-end setup, EnvPool achieves 1 million frames per second with 256 CPU cores, which is 13.3x the throughput of the gym.vector_env baseline. On a typical PC setup with 12 CPU cores, EnvPool's throughput is 2.8x that of gym.vector_env.

Our benchmark script is in examples/benchmark.py. The detailed configurations of the four systems are listed below; a minimal throughput-measurement sketch follows the results table.

  • Personal laptop: 12 core Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz
  • TPU-VM: 96 core Intel(R) Xeon(R) CPU @ 2.00GHz
  • Apollo: 96 core AMD EPYC 7352 24-Core Processor
  • DGX-A100: 256 core AMD EPYC 7742 64-Core Processor
Highest FPS            Laptop (12)   TPU-VM (96)   Apollo (96)   DGX-A100 (256)
For-loop                     4,876         3,817         4,053           4,336
Subprocess                  18,249        42,885        19,560          79,509
Sample Factory              27,035       192,074       262,963         639,389
EnvPool (sync)              40,791       175,938       159,191         470,170
EnvPool (async)             50,513       352,243       410,941         845,537
EnvPool (numa+async)             /       367,799       458,414       1,060,371
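
As mentioned above, a rough throughput measurement only needs a short loop. The sketch below is an assumption of the general shape of such a benchmark, not a copy of examples/benchmark.py; the env name, num_envs, and step count are illustrative.

import time

import envpool
import numpy as np

num_envs = 32
env = envpool.make_gym("Pong-v5", num_envs=num_envs)
env.reset()
action = np.zeros(num_envs, dtype=int)

steps = 10000  # each env.step advances all num_envs envs by one step
start = time.time()
for _ in range(steps):
    env.step(action)
print("FPS:", steps * num_envs / (time.time() - start))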

API Usage

The following content shows both synchronous and asynchronous API usage of EnvPool. You can also run the full script at examples/env_step.py.

Synchronous API

import envpool
import numpy as np

# make gym env
env = envpool.make("Pong-v5", env_type="gym", num_envs=100)
# or use envpool.make_gym(...)
obs = env.reset()  # should be (100, 4, 84, 84)
act = np.zeros(100, dtype=int)
obs, rew, done, info = env.step(act)

In synchronous mode, envpool closely resembles openai-gym/dm-env: it has reset and step functions with the same meaning. There is one exception, though: in envpool, batch interaction is the default. Therefore, when creating the envpool, there is a num_envs argument that denotes how many envs you would like to run in parallel.

env = envpool.make("Pong-v5", env_type="gym", num_envs=100)

The first dimension of action passed to the step function should be equal to num_envs.

act = np.zeros(100, dtype=int)

You don't need to manually reset one environment when any done is true; instead, all envs in envpool have auto-reset enabled by default.
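
As a minimal illustration (a sketch assuming the same Pong env), the loop below never calls reset after the initial one:

import envpool
import numpy as np

env = envpool.make("Pong-v5", env_type="gym", num_envs=4)
obs = env.reset()
for _ in range(1000):
    obs, rew, done, info = env.step(np.zeros(4, dtype=int))
    # No manual reset: any env with done[i] == True is reset
    # automatically by envpool before its next step.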

Asynchronous API

import envpool
import numpy as np

# make asynchronous
num_envs = 64
batch_size = 16
env = envpool.make("Pong-v5", env_type="gym", num_envs=num_envs, batch_size=batch_size)
env.async_reset()  # send the initial reset signal to all envs
while True:
    obs, rew, done, info = env.recv()
    env_id = info["env_id"]
    action = np.random.randint(env.action_space.n, size=len(env_id))
    env.send(action, env_id)

In asynchronous mode, the step function is split into two parts: the send and recv functions. send takes two arguments, a batch of actions and the corresponding env_id that each action should be sent to. Unlike step, send does not wait for the envs to execute and return the next state; it returns immediately after the actions are fed to the envs (hence the name async mode).

env.send(action, env_id)

To get the "next states", we need to call the recv function. However, recv does not guarantee that you will get back the "next states" of the envs you just called send on. Instead, whichever envs finish execution first get received first.

state = env.recv()

Besides num_envs, there's one more argument: batch_size. While num_envs defines how many envs in total are managed by the envpool, batch_size defines the number of envs involved in each interaction with envpool. For example, with 64 envs executing in the envpool, each send and recv interacts with a batch of 16 envs.

envpool.make("Pong-v5", env_type="gym", num_envs=64, batch_size=16)

There are other configurable arguments for envpool.make; please check out the envpool interface introduction.
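
For instance, here is a sketch with a few commonly used options; treat the exact keyword names (seed, num_threads, max_episode_steps) as assumptions to verify against the interface introduction:

import envpool

env = envpool.make(
    "Pong-v5",
    env_type="gym",
    num_envs=64,
    batch_size=16,
    seed=42,                  # seed for the whole pool (assumed keyword)
    num_threads=8,            # size of the C++ thread pool (assumed keyword)
    max_episode_steps=27000,  # episode length cap (assumed keyword)
)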

Contributing

EnvPool is still under development. More environments are going to be added, and we always welcome contributions to help make EnvPool better. If you would like to contribute, please check out our contribution guideline.

License

EnvPool is released under the Apache 2.0 license.

Other third party source-code and data are under their corresponding licenses.

We do not include their source-code and data in this repo.

Citing EnvPool

If you find EnvPool useful, please cite it in your publications.

[Coming soon!]

Disclaimer

This is not an official Sea Limited or Garena Online Private Limited product.

Comments
  • [BUG] Processing slows down with 96 threads


    Describe the bug

    I have a test case that runs different numbers of threads on a 96-core server, and I get roughly these throughput numbers:

    # python run_slowdown.py
    1 thread: 17849.3 SPS
    2 threads: 22744.9 SPS
    4 threads: 30579.2 SPS
    8 threads: 39536.7 SPS
    16 threads: 46167.0 SPS
    32 threads: 48889.3 SPS
    64 threads: 40827.2 SPS
    96 threads: 17439.2 SPS
    

    The throughput at 96 threads (with 960 envs) is essentially the same as with just 1 thread (and 10 envs).
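
    A scaling test of this shape can be sketched as follows (an assumed outline that uses an Atari env as a stand-in; the actual repro lives in the slowdown branch):

    import time

    import envpool
    import numpy as np

    for num_threads in (1, 2, 4, 8, 16, 32, 64, 96):
        num_envs = 10 * num_threads  # 10 envs per thread, as in the report
        env = envpool.make_gym(
            "Pong-v5", num_envs=num_envs, num_threads=num_threads
        )
        env.reset()
        action = np.zeros(num_envs, dtype=int)
        steps = 200
        start = time.time()
        for _ in range(steps):
            env.step(action)
        print(f"{num_threads} threads: "
              f"{steps * num_envs / (time.time() - start):.1f} SPS")
        env.close()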

    To Reproduce

    I have a repro in the slowdown branch; see slowdown_envpool.h and run_slowdown.py for reproduction.

    Expected behavior

    The throughput at 96 threads (with 960 envs) should ideally be nearly 90x faster than with just 1 thread (and 10 envs).

    Screenshots

    When running with 96 threads, only one core is running at 100% and the other cores are almost idle.


    System info

    Describe the characteristic of your environment:

    • version 0.6.6 compiled from source (commit 505e669: https://github.com/jseppanen/envpool/commit/505e6692facfdbf0dbfcac338924e11bce51cdc5)
    • Envpool version 0.6.6
    • Python version 3.8.13
    • Numpy version 1.22.4
    • Ubuntu 20.04.4 LTS
    import envpool, numpy, sys
    print(envpool.__version__, numpy.__version__, sys.version, sys.platform)
    0.6.6 1.22.4 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:10) 
    [GCC 10.3.0] linux
    

    Reason and Possible fixes

    The env contains about 1.5 KB of state, split across 45 arrays. I suspect that with 96 threads, the main thread is bottlenecked at copying the states, and all workers are idling because of that.

    Checklist

    • [x] I have checked that there is no similar issue in the repo (required)
    • [x] I have read the documentation (required)
    • [x] I have provided a minimal working example to reproduce the bug (required)
    bug 
    opened by jseppanen 12
  • [Feature Request] Mujoco integration


    https://github.com/openai/gym/tree/master/gym/envs/mujoco

    Env List:

    • [x] Ant-v4 (#74)
    • [x] HalfCheetah-v4 (#75)
    • [x] Hopper-v4 (#76)
    • [x] Humanoid-v4 (#77)
    • [x] HumanoidStandup-v4 (#78)
    • [x] InvertedDoublePendulum-v4 (@Benjamin-eecs, #83)
    • [x] InvertedPendulum-v4 (#79)
    • [x] Pusher-v4 (#82)
    • [x] Reacher-v4 (#81)
    • [x] Swimmer-v4 (#80)
    • [x] Walker2d-v4 (@Benjamin-eecs, #86)
    • [x] add other options to align with gym (#93)

    Road Map:

    • [x] Get comfortable with current codebase, go through https://envpool.readthedocs.io/en/latest/pages/env.html and add a toy environment by yourself locally;
    • [x] Download Mujoco and run on your local machine [1] [5], try with different env settings and see the actual behavior;
    • [x] Go through their code [1] [2] (I think it's better to go through both openai and deepmind versions, but only use deepmind's solution as reference), understand their ctype APIs and what we can use to bind with EnvPool APIs [3];
    • [x] Integrate only one game and let it work;
    • [x] Add some unit tests (good to submit the first PR here);
    • [x] Integrate other environments (submit another PR) and related tests.

    Resources:

    1. https://github.com/openai/mujoco-py
    2. https://github.com/deepmind/dm_control/tree/master/dm_control/mujoco
    3. https://github.com/deepmind/mujoco/blob/main/doc/programming.rst
    4. It is quite similar to the Atari games which we have already integrated: https://github.com/mgbellemare/Arcade-Learning-Environment
    5. First install gym and mujoco, then run with
    import gym
    env = gym.make("Ant-v3")
    env.reset()
    for _ in range(10):
      env.step(env.action_space.sample())
      env.render()
    
    6. https://github.com/ikostrikov/gym_dmc/blob/master/compare.py (a checker)
    enhancement 
    opened by Trinkle23897 10
  • [BUG] `Atlantis-v5` does not reset life counter.


    Describe the bug

    When all the lives are exhausted in Atlantis-v5, making an additional step does not reset the life counter, whereas in Breakout-v5 it does. I am not sure if this is something particular to the Atlantis environment, though.

    To Reproduce

    Steps to reproduce the behavior.


    import envpool
    import numpy as np
    
    num_envs = 1
    print("making Atlantis-v5")
    envs = envpool.make(
        "Atlantis-v5",
        env_type="gym",
        num_envs=num_envs,
        episodic_life=True,
        reward_clip=True,
    )
    envs.reset()
    for i in range(10000):
        _, _, next_done, info = envs.step(np.random.randint(0, envs.action_space.n, num_envs))
        if info["lives"].sum() == 0:
            print(f"step={i}, lives is", info["lives"].sum())
            break
    
    _, _, next_done, info = envs.step(np.random.randint(0, envs.action_space.n, num_envs))
    print(f"step={i+1}, lives is", info["lives"].sum())
    print(f"notice how step={i+1} does not reset the life counter in Atlantis")
    
    print("making Atlantis-v5")
    envs = envpool.make(
        "Breakout-v5",
        env_type="gym",
        num_envs=num_envs,
        episodic_life=True,
        reward_clip=True,
    )
    envs.reset()
    for i in range(10000):
        _, _, next_done, info = envs.step(np.random.randint(0, envs.action_space.n, num_envs))
        if info["lives"].sum() == 0:
            print(f"step={i}, lives is", info["lives"].sum())
            break
    
    _, _, next_done, info = envs.step(np.random.randint(0, envs.action_space.n, num_envs))
    print(f"step={i+1}, lives is", info["lives"].sum())
    print(f"notice how step={i+1} does reset the life counter in Breakout")
    
    making Atlantis-v5
    step=1148, lives is 0
    step=1149, lives is 0
    notice how step=1149 does not reset the life counter in Atlantis
    making Breakout-v5
    step=123, lives is 0
    step=124, lives is 5
    notice how step=124 does reset the life counter in Breakout
    

    Expected behavior

    If all the lives are exhausted, making an additional step should reset the life counter.

    System info

    Describe the characteristic of your environment:

    import envpool, numpy, sys
    print(envpool.__version__, numpy.__version__, sys.version, sys.platform)
    
    >>> print(envpool.__version__, numpy.__version__, sys.version, sys.platform)
    0.6.2.post2 1.23.1 3.8.11 (default, Oct  9 2021, 12:06:05) 
    [GCC 10.3.0] linux
    

    Reason and Possible fixes

    Maybe this is

    Checklist

    • [x] I have checked that there is no similar issue in the repo (required)
    • [x] I have read the documentation (required)
    • [x] I have provided a minimal working example to reproduce the bug (required)
    bug 
    opened by vwxyzjn 9
  • [BUG] Can't install on python 3.9 / 3.10 on macOS


    I tried to install envpool on python 3.9.5 and 3.10.0 with pip install envpool and got the following in both cases:

    ERROR: Could not find a version that satisfies the requirement envpool (from versions: none)
    ERROR: No matching distribution found for envpool
    

    I haven't checked other python versions though.

    question 
    opened by nico-bohlinger 9
  • [BUG] Make bazel-test error on main branch


    Describe the bug

    make bazel-test errors out when cloning the main branch of envpool.

    To Reproduce

    $ git clone https://github.com/sail-sg/envpool.git
    $ make bazel-test
    
    ./envpool/core/xla.h:20:10: fatal error: cuda_runtime_api.h: No such file or directory
       20 | #include <cuda_runtime_api.h>
    

    Expected behavior

    No error.


    System info

    0.6.3.post1 1.23.1 3.8.10 (default, Jun 22 2022, 20:18:18) 
    [GCC 9.4.0] linux
    

    Checklist

    • [x] I have checked that there is no similar issue in the repo (required)
    • [x] I have read the documentation (required)
    • [x] I have provided a minimal working example to reproduce the bug (required)
    bug 
    opened by Benjamin-eecs 8
  • [BUG] Reward is not deterministic after seeding the env


    Describe the bug

    I use envpool to make HalfCheetah-v3 with a fixed seed, but the rewards are not the same across several runs. Specifically, only the reward returned by the first env is non-deterministic; the other envs are fine. And if num_envs is small, this bug does not occur.

    To Reproduce

    import envpool
    import numpy as np
    
    def random_rollout():
        np.random.seed(0)
        n = 32
        envs = envpool.make_gym('HalfCheetah-v3', num_envs=n, seed=123)
        envs.reset()
        rew_sum = 0
        for _ in range(10):
            action = np.random.rand(n, envs.action_space.shape[0])
            obs, rew, done, info = envs.step(action)
            rew_sum += rew
        envs.close()
        return rew_sum
    
    
    if __name__ == "__main__":
        a = random_rollout()
        b = random_rollout()
        print(a - b)
    

    Output:

    [-0.01131058  0.          0.          0.          0.          0.
      0.          0.          0.          0.          0.          0.
      0.          0.          0.          0.          0.          0.
      0.          0.          0.          0.          0.          0.
      0.          0.          0.          0.          0.          0.
      0.          0.        ]
    

    Expected behavior

    The reward should be deterministic after seeding.

    System info

    Describe the characteristic of your environment:

    • envpool version: '0.6.0'
    • envpool is installed via pip
    • Python version: 3.8.10
    0.6.0 1.21.5 3.8.10 (default, Jun  4 2021, 15:09:15) 
    [GCC 7.5.0] linux
    

    Checklist

    • [x] I have checked that there is no similar issue in the repo (required)
    • [x] I have read the documentation (required)
    • [x] I have provided a minimal working example to reproduce the bug (required)
    bug 
    opened by tongzhoumu 8
  • [Feature Request] ACME Integration


    https://github.com/deepmind/acme

    Road Map:

    @TianyiSun316

    • [ ] Go through ACME codebase and integrate vector_env to the available algorithms;
    • [ ] Write Atari examples;
    • [ ] Check Atari performance: Pong and Breakout;
    • [ ] Submit PR;

    @LeoGuo98

    • [ ] Do some experiments on sample efficiency (you can actually try out different libraries, either ACME, tianshou, or sb3; this doesn't depend on the previous item)

    Resources:

    • tianshou: #51
    • stable-baselines3: #39
    • cleanrl: #48 #53

    cc @zhongwen

    enhancement 
    opened by Trinkle23897 7
  • Atari option for repeat_action_probability


    The -v5 Gym Atari environments have sticky actions enabled by default (with repeat_action_probability=0.25, see here). This makes it impossible to replicate the original results from several key papers, especially the DQN Nature paper.

    Would it be possible to add an option to the Atari environment options that lets the user change repeat_action_probability to a different value? I believe that internally this can be accomplished by forwarding the argument to either gym.make or the ALE constructor.
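
    For comparison, Gym's ALE interface already exposes this knob. The envpool keyword in the comment below is only a guess at what such an option could look like, not an existing flag:

    import gym

    # Gym / ALE: turn off sticky actions explicitly.
    env = gym.make("ALE/Pong-v5", repeat_action_probability=0.0)

    # Hypothetical envpool equivalent (assumed keyword, not in the API yet):
    # env = envpool.make_gym("Pong-v5", repeat_action_probability=0.0)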

    enhancement 
    opened by brett-daley 7
  • [BUG] Segfault when batch size is larger than 255 on Atari environments


    Describe the bug

    Segfault when batch size is larger than 255 on Atari environments

    MuJoCo environments seem to work well.

    To Reproduce

    Steps to reproduce the behavior.

    import time
    
    import envpool
    import numpy as np
    
    batch_size = 256  # set to 255 works
    
    env = envpool.make_gym(
        "Breakout-v5",
        stack_num=1,
        num_envs=batch_size * 2,
        batch_size=batch_size,
        use_inter_area_resize=False,
        img_width=88,
        img_height=88,
        num_threads=0,
        thread_affinity_offset=0,
    )
    action = np.array(
        [env.action_space.sample() for _ in range(batch_size)]
    )
    
    counter = 0
    
    env.async_reset()
    
    last_time = time.time()
    while True:
        obs, rew, done, info = env.recv()
    
        env_id = info["env_id"]
        env.send(action, env_id)
    
        counter += batch_size
        if counter >= 100000:
            cur_time = time.time()
            print("TPS", counter / (cur_time - last_time))
    
            counter = 0
            last_time = cur_time
    
    
    [1]    2959596 segmentation fault (core dumped)  python test_envpool.py
    

    Expected behavior

    Can run with large batch size, like 1024, 2048, etc.

    System info

    Describe the characteristic of your environment:

    import envpool, numpy, sys
    print(envpool.__version__, numpy.__version__, sys.version, sys.platform)
    
    0.6.1.post1 1.21.2 3.8.12 (default, Oct 12 2021, 13:49:34) 
    [GCC 7.5.0] linux
    

    Additional context

    Setting the batch size to 1024 sometimes works and sometimes segfaults randomly:

    1024
    TPS 49611.30131772514
    TPS 57661.12695997062
    TPS 52648.235412990536
    TPS 52059.6945247295
    [1]    2971074 segmentation fault (core dumped)  python test_envpool.py
    

    Reason and Possible fixes

    Checklist

    • [x] I have checked that there is no similar issue in the repo (required)
    • [x] I have read the documentation (required)
    • [x] I have provided a minimal working example to reproduce the bug (required)
    bug 
    opened by imoneoi 6
  • examples now support gym 0.26


    Description

    Handle interface changes from gym >= 0.26, based on #205.

    Motivation and Context

    Some of the examples break with gym >= 0.26

    Types of changes

    What types of changes does your code introduce? Put an x in all the boxes that apply:

    • [x] Example (update in the folder of example)

    Implemented Tasks

    • [x] update files in examples: xla_step.py, acme_examples/, cleanrl_examples/ and ppo_atari/

    Checklist

    Go over all the following points, and put an x in all the boxes that apply. If you are unsure about any of these, don't hesitate to ask. We are here to help!

    • [x] I have read the CONTRIBUTION guide (required)
    • [ ] My change requires a change to the documentation.
    • [ ] I have updated the tests accordingly (required for a bug fix or a new feature).
    • [ ] I have updated the documentation accordingly.
    • [x] I have reformatted the code using make format (required)
    • [x] I have checked the code using make lint (required)
    • [x] I have ensured make bazel-test pass. (required)
    opened by 51616 5
  • [BUG] no Acc-v3 environment


    Describe the bug


    When using tianshou, there's no Acc-v3 environment.

    File "test_dqn_acc.py", line 246, in Acc_tain() File "test_dqn_acc.py", line 112, in Acc_tain args.task, num_envs=args.training_num, env_type="gym" File "/home/zhulin/.conda/envs/mytorch/lib/python3.7/site-packages/envpool/registration.py", line 43, in make f"{task_id} is not supported, envpool.list_all_envs() may help." AssertionError: Acc-v3 is not supported, envpool.list_all_envs() may help.

    question 
    opened by 963141377 5
  • [Feature Request] support for Python environments such as minigrid


    Motivation

    Currently, Python environments are first converted to C++ code before being supported. This isn't very scalable given all the environments that would need to be translated; besides, writing environment code in Python has its advantages, such as readability and ease of modification.

    We are wondering if it is possible to directly support running Python code. It's a bit tricky because the current architecture seems to rely on a thread pool. There may need to be support for a process pool before this can happen.

    Solution

    Support python environments directly.

    enhancement 
    opened by le-horizon 0
  • Windows Migration


    Description

    This PR attempts to migrate envpool to the Windows environment.

    Motivation and Context


    See details of my progress and Windows Environment setup in WINDOWS.md

    Closes #168.

    • [ ] I have raised an issue to propose this change (required for new features and bug fixes)

    Types of changes

    What types of changes does your code introduce? Put an x in all the boxes that apply:

    • [x] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds core functionality)
    • [ ] New environment (non-breaking change which adds 3rd-party environment)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)
    • [x] Documentation (update in the documentation)
    • [ ] Example (update in the folder of example)

    Implemented Tasks

    Details in WINDOWS.md

    Checklist

    Go over all the following points, and put an x in all the boxes that apply. If you are unsure about any of these, don't hesitate to ask. We are here to help!

    • [ ] I have read the CONTRIBUTION guide (required)
    • [ ] My change requires a change to the documentation.
    • [ ] I have updated the tests accordingly (required for a bug fix or a new feature).
    • [ ] I have updated the documentation accordingly.
    • [ ] I have reformatted the code using make format (required)
    • [ ] I have checked the code using make lint (required)
    • [ ] I have ensured make bazel-test pass. (required)
    opened by peilinrao 1
  • Change arXiv to official NeurIPS publication


    Description

    Closes #204

    Motivation and Context

    • [x] I have raised an issue to propose this change (required for new features and bug fixes)

    Types of changes

    What types of changes does your code introduce? Put an x in all the boxes that apply:

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds core functionality)
    • [ ] New environment (non-breaking change which adds 3rd-party environment)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)
    • [x] Documentation (update in the documentation)
    • [ ] Example (update in the folder of example)
    opened by vwxyzjn 1
  • [Feature Request] XLA interface speed comparison


    Motivation

    If I understand correctly, the speedup of envpool comes from its C++ implementation, as opposed to Python. So I wonder if the XLA interface will provide any more speedup when jitted by JAX, which can then be further fused with the NN policy using jax.lax.scan. It would be nice to have a benchmark for the XLA version of the environments. I suppose different hardware would yield different results, which is useful for deciding when to jit or not to jit the environment.

    Solution

    Benchmark of non-jitted XLA vs jitted XLA vs the default c++ implementation
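
    For reference, a jitted rollout with the XLA interface might look like the sketch below, based on the documented env.xla() handle API; the policy is a placeholder and the env settings are illustrative:

    import jax
    import jax.numpy as jnp

    import envpool

    env = envpool.make("Pong-v5", env_type="dm", num_envs=8)
    handle, recv, send, step = env.xla()  # XLA-compatible functions

    def policy(obs):
        # placeholder policy: always take action 0
        return jnp.zeros(obs.shape[0], dtype=jnp.int32)

    def actor_step(i, loop_var):
        handle0, states = loop_var
        action = policy(states.observation.obs)
        handle1 = send(handle0, action, states.observation.env_id)
        return recv(handle1)

    @jax.jit
    def rollout(num_steps, init):
        return jax.lax.fori_loop(0, num_steps, actor_step, init)

    # Obtaining the initial `states` (via env.async_reset() and a first recv)
    # is left out; timing rollout vs. the plain Python loop would give the
    # requested comparison.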

    Checklist

    • [x] I have checked that there is no similar issue in the repo (required)
    enhancement 
    opened by 51616 1
  • Change arXiv to official NeurIPS publication


    Motivation

    Change arXiv to official NeurIPS publication in README

    Checklist

    • [x] I have checked that there is no similar issue in the repo (required)
    enhancement 
    opened by zhongwen 0
  • Procgen Integration Draft


    Description

    Continued Procgen integration progress with @LeoGuo98 together.

    A few things:

    1. globalGameRegistry somehow doesn't successfully register each game during bazel build & compile. The temporary solution is to write a patch for each game to return a shared_ptr, in this commit.

    2. The batch size for the StateBufferQueue seems to be 0 even if we set it explicitly, which leads to a division-by-zero error. We haven't figured out the cause behind it. env._recv() and env._reset() are the "crash entry points"; see detailed comments in this commit. Trinkle points out that it's probably a memory allocation issue with regard to array indexing.

    3. There is also a problem with loading the game background picture. Currently we avoid such loading by setting game_->options.use_generated_assets = true;, but we are not sure if it will have adverse effects later on.

    4. Currently, when stepping, we see different RGB 64x64 observations, meaning the game character is indeed moving.

    5. We fixed the problem with IsDone always being True; the game now works. Using cv2.imwrite() we exported the RGB 64x64x3 pictures for verification, and they do look correct.

    6. We finished determinism and alignment tests between the raw Procgen environment and EnvPool's environment, with both gym and dmc.

    7. We support building the environment from scratch on Ubuntu 22.04 LTS (solved the Qt path dependency problem); a fresh Ubuntu needs sudo apt update && sudo apt install qt5-default && sudo apt-get install qtdeclarative5-dev to install the necessary Qt dependencies for Procgen compilation.

    PROBLEM AT HAND:

    • Cannot pip install procgen in the Bazel sandbox, because the original procgen folder name contains a space.

    Solution now:

    • Outsourcing the procgen repo to my own GitHub fork and making modifications there; see link.

    Progress:

    • Able to build the modified procgen on a new AWS Ubuntu 20.04 LTS instance.
    • Able to pip install procgen@git+https://github.com/YukunJ/procgen.git, then import this procgen library in Python and make environments (both on the AWS instance and my local macOS).

    But on the GitHub testing machine, building the project gives the following error:

    ...
    File "/home/ubuntu/.cache/bazel/_bazel_ubuntu/0afe9dcfb3afec104db6df7ee3f95f3c/external/pypi__setuptools/setuptools/_distutils/dist.py", line 986, in run_command
              cmd_obj.run()
            File "/tmp/pip-wheel-3u_a5q_c/procgen_c19c3bc9b11045eb8ba9f6db8928ae4c/setup.py", line 65, in run
              import builder
            File "/tmp/pip-wheel-3u_a5q_c/procgen_c19c3bc9b11045eb8ba9f6db8928ae4c/procgen/builder.py", line 11, in <module>
              import gym3
          ModuleNotFoundError: No module named 'gym3'
    

    even though I add gym3 to the requirements.txt:

    ...
    gym3
    procgen@git+https://github.com/YukunJ/procgen.git
    

    But I don't have this problem building the EnvPool project from scratch on an AWS instance. I am able to execute the following commands:

    // Start a new AWS Instance Ubuntu 22.04 LTS
    
    // Ubuntu Basic Build Tool Support
    sudo apt update &&
    sudo add-apt-repository ppa:ubuntu-toolchain-r/test &&
    sudo apt install -y gcc-9 g++-9 build-essential &&
    sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-9 60 --slave /usr/bin/g++ g++ /usr/bin/g++-9 &&
    sudo apt install -y python3-dev python3-pip &&
    sudo ln -sf /usr/bin/python3 /usr/bin/python &&
    sudo apt update && sudo apt install qt5-default && 
    sudo apt-get install qtdeclarative5-dev
    sudo apt install -y python3-dev python3-pip
    sudo ln -sf /usr/bin/python3 /usr/bin/python
    
    // Install my 3rd Procgen
    pip uninstall procgen
    pip install procgen@git+https://github.com/YukunJ/procgen.git
    
    // Run Procgen in Python
    $ python3
    >>> import gym
    >>> import procgen
    >>> import numpy as np
    >>> procgen_gym = gym.make("procgen:procgen-bigfish-v0", rand_seed=0, use_generated_assets=True)
    >>> procgen_gym.reset()
    >>> num_env, act_space = 1, procgen_gym.action_space
    >>> step_count = 0
    >>> raw_done = False
    >>> while (not raw_done):
    	action = np.array([act_space.sample() for _ in range(num_env)])
    	_, raw_reward, raw_done, _ = procgen_gym.step(action[0])
    	step_count += 1
    	print("Step=", step_count)
    
    opened by YukunJ 0