Thread-pool - Thread pool implementation using C++11 threads

Overview

Table of Contents

 Introduction
 Build instructions
 Thread pool
        Queue
        Submit function
        Thread worker
 Usage example
        Use-Case #1
        Use-Case #2
        Use-Case #3
 Future work
 References

Introduction

A thread pool is a technique that allows developers to exploit the concurrency of modern processors in an easy and efficient manner. It's easy because you send "work" to the pool and this work gets done without blocking the main thread. It's efficient because threads are not created and destroyed each time we want some work done: they are initialized once and remain inactive until work arrives. This way we minimize the overhead.

There are many other thread pool implementations in C++, and many of them are probably better (safer, faster...) than mine. However, I believe my implementation is very straightforward and easy to understand.

Disclaimer: Please do not use this project in a professional environment. It may contain bugs and/or not work as expected. I did this project to learn how C++11 threads work and to provide an easy way for other people to understand them too.

Build instructions

This project has been developed using NetBeans on Linux, but it should work on Windows, macOS and Linux. It can be easily built using CMake with different generators. The following commands generate the VS 2017 project files:

# VS 2017
cd <project-folder>
mkdir build
cd build/
cmake .. -G "Visual Studio 15 2017 Win64"

Then, from VS you can edit and build the project. Make sure that main is set as the startup project.

If you are using Linux, you need a different generator (the default works fine) and an extra command to actually build the executable:

# Linux
cd <project-folder>
mkdir build
cd build/
cmake ..
make

Thread pool

The way I understand things best is with images. So, let's take a look at the thread pool diagram from Wikipedia:

As you can see, we have three important elements here:

  • Tasks Queue. This is where the work that has to be done is stored.
  • Thread Pool. This is a set of threads (or workers) that continuously take work from the queue and do it.
  • Completed Tasks. When a thread finishes its work, it returns "something" to notify us that the work is done.

Queue

We use a queue to store the work because it's the most sensible data structure: we want the work to be started in the same order that we sent it. However, this queue is a little bit special. As I said in the previous section, threads are continuously (well, not really, but let's assume that they are) querying the queue to ask for work. When there's work available, threads take it from the queue and do it. What would happen if two threads tried to take the same work at the same time? Well, we'd have a data race: the program might crash or silently corrupt its data.

To avoid these kinds of problems, I implemented a wrapper over the standard C++ queue that uses a mutex to restrict concurrent access. Let's see a small sample of the SafeQueue class:

void enqueue(T& t) {
	std::unique_lock<std::mutex> lock(m_mutex);
	m_queue.push(t);
}

To enqueue, the first thing we do is lock the mutex to make sure that no one else is accessing the resource. Then, we push the element to the queue. When the lock goes out of scope it is automatically released. Easy, huh? This way, we make the queue thread-safe and thus we don't have to worry about many threads accessing and/or modifying it at the same "time".

Submit function

The most important method of the thread pool is the one responsible for adding work to the queue. I called this method submit. It's not difficult to understand how it works, but its implementation can seem scary at first. Let's think about what it should do, and after that we will worry about how to do it. What:

  • Accept any function with any parameters.
  • Return "something" immediately to avoid blocking main thread. This returned object should eventually contain the result of the operation.

Cool, let's see how we can implement it.

Submit implementation

The complete submit function looks like this:

// Submit a function to be executed asynchronously by the pool
template<typename F, typename...Args>
auto submit(F&& f, Args&&... args) -> std::future<decltype(f(args...))> {
	// Create a function with bounded parameters ready to execute
	std::function<decltype(f(args...))()> func = std::bind(std::forward<F>(f), std::forward<Args>(args)...);
	// Encapsulate it into a shared ptr in order to be able to copy construct / assign 
	auto task_ptr = std::make_shared<std::packaged_task<decltype(f(args...))()>>(func);

	// Wrap packaged task into void function
	std::function<void()> wrapper_func = [task_ptr]() {
	  (*task_ptr)(); 
	};

	// Enqueue generic wrapper function
	m_queue.enqueue(wrapper_func);

	// Wake up one thread if it's waiting
	m_conditional_lock.notify_one();

	// Return future from promise
	return task_ptr->get_future();
}

Nevertheless, we're going to inspect line by line what's going on in order to fully understand how it works.

Variadic template function

template<typename F, typename...Args>

This means that the next statement is templated. The first template parameter is called F (our function) and the second one is a parameter pack. A parameter pack is a special template parameter that can accept zero or more template arguments. It is, in fact, a way to express a variable number of arguments in a template. A template with at least one parameter pack is called a variadic template.

Summarizing, we are telling the compiler that our submit function is going to take one generic parameter of type F (our function) and a parameter pack Args (the parameters of the function F).

Function declaration

auto submit(F&& f, Args&&... args) -> std::future<decltype(f(args...))> {

This may seem weird, but it's not. A function can, in fact, be declared using two different syntaxes. The following is the best known:

return-type identifier ( argument-declarations... )

But, we can also declare the function like this:

auto identifier ( argument-declarations... ) -> return_type

Why two syntaxes? Well, imagine a function whose return type depends on its input parameters. Using the first syntax, you can't declare that function without getting a compiler error, since you would be using names in the return type that have not been declared yet (the return type declaration goes before the parameter type declarations).

Using the second syntax, you declare the function's return type as auto; then, after the ->, you can write the return type in terms of the function's arguments, which by that point have already been declared.

Now, let's inspect the parameters of the submit function. When the type of a parameter is declared as T&& for some deduced type T, that parameter is a universal reference. This term was coined by Scott Meyers, because T&& can also mean r-value reference; in the context of type deduction, however, it means that the parameter can bind to both l-values and r-values, unlike l-value references (which bind only to l-values) and r-value references (which bind only to r-values).

The return type of the function is a std::future. A std::future is a special type that provides a mechanism to access the result of an asynchronous operation, in our case, the result of executing a specific function. This makes sense with what we said earlier.

Finally, the template type of std::future is decltype(f(args...)). decltype is a special C++ keyword that inspects the declared type of an entity or the type and value category of an expression. In our case, we want to know the return type of the function f, so we give decltype the expression f(args...), formed from our generic function f and the parameter pack args.

Function body

// Create a function with bounded parameters ready to execute
std::function<decltype(f(args...))()> func = std::bind(std::forward<F>(f), std::forward<Args>(args)...);

There are many things happening here. First of all, std::bind(F, Args...) is a function that creates a wrapper for F with the given Args. Calling this wrapper is the same as calling F with the Args that were bound to it. Here, we simply call bind with our generic function f and the parameter pack args, wrapping each parameter in std::forward(t). This second wrapper is needed to achieve perfect forwarding of universal references. The result of this bind call is stored in a std::function. A std::function is a C++ object that encapsulates a function: it lets you execute the function as if it were a normal function, by calling operator() with the required parameters, BUT, because it is an object, you can store it, copy it and move it around. The template type of any std::function is the signature of that function: std::function<return-type(arguments)>. In this case, we already know how to get the return type of the function using decltype. But what about the arguments? Well, because we bound all the arguments args to the function f, we just have to add an empty pair of parentheses representing an empty argument list: decltype(f(args...))().

// Encapsulate it into a shared ptr in order to be able to copy construct / assign 
auto task_ptr = std::make_shared<std::packaged_task<decltype(f(args...))()>>(func);

The next thing we do is create a std::packaged_task. A packaged_task is a wrapper around a function that can be executed asynchronously. Its result is stored in a shared state, accessible through a std::future object. The template type T of a std::packaged_task is the signature of the function it is wrapping. As we said before, the signature of the bound function func is decltype(f(args...))(), so that is the template type of the packaged_task. Then, we wrap this packaged task inside a std::shared_ptr, using the factory function std::make_shared.

// Wrap packaged task into void function
std::function<void()> wrapper_func = [task_ptr]() {
  (*task_ptr)(); 
};

Again, we create a std::function, but note that this time its template type is void(). Independently of the function f and its parameters args, the return type of this wrapper_func will always be void. Since the various functions f may have different return types, the only way to store them in a container (our queue) is to wrap them in a generic void function. Here, we are just declaring wrapper_func to execute the actual task task_ptr, which in turn executes the bound function func.

// Enqueue generic wrapper function
m_queue.enqueue(wrapper_func);

We enqueue this wrapper_func.

// Wake up one thread if its waiting
m_conditional_lock.notify_one();

Before finishing, we wake up one thread in case it is waiting.

// Return future from promise
return task_ptr->get_future();

And finally, we return the future of the packaged_task. Because the returned future is bound to the packaged_task task_ptr, which in turn is bound to the function func, executing task_ptr will automatically update the future. And because we wrapped the execution of task_ptr inside the generic wrapper_func, it is really the execution of wrapper_func that updates the future. Aaaand, since we enqueued this wrapper function, it will be executed by a thread after being dequeued, by calling its operator().

Thread worker

Now that we understand how the submit method works, we're going to focus on how the work actually gets done. Probably the simplest implementation of a thread worker would use polling:

Loop
	If Queue is not empty
		Dequeue work
		Do it

This looks alright, but it's not very efficient. Do you see why? What would happen if there were no work in the queue? The threads would keep looping, asking all the time: is the queue empty?

The more sensible implementation puts the threads to "sleep" until some work is added to the queue. As we saw before, as soon as we enqueue work, a notify_one() signal is sent. This allows us to implement a more efficient algorithm:

Loop
	If Queue is empty
		Wait signal
	Dequeue work
	Do it

This signal system is implemented in C++ with condition variables. Condition variables are always bound to a mutex, so I added a mutex to the thread pool class just to manage this. The final code of a worker looks like this:

void operator()() {
	std::function<void()> func;
	bool dequeued;
	while (!m_pool->m_shutdown) {
		{
			std::unique_lock<std::mutex> lock(m_pool->m_conditional_mutex);
			if (m_pool->m_queue.empty()) {
				m_pool->m_conditional_lock.wait(lock);
			}
			dequeued = m_pool->m_queue.dequeue(func);
		}
		if (dequeued) {
			func();
		}
	}
}

The code is quite easy to understand, so I am not going to explain it line by line. The only thing to note here is that func is our wrapper function, declared as:

std::function<void()> wrapper_func = [task_ptr]() {
  (*task_ptr)(); 
};

So, executing this function will automatically update the future.

Usage example

Creating the thread pool is as easy as:

// Create pool with 3 threads
ThreadPool pool(3);

// Initialize pool
pool.init();

When we want to shutdown the pool just call:

// Shutdown the pool, releasing all threads
pool.shutdown();

If we want to send some work to the pool after we have initialized it, we just have to call the submit function:

pool.submit(work);

Depending on the type of work, I've distinguished different use-cases. Suppose the work we have to do is multiplying two numbers. We can do it in many different ways. I've implemented the three most common ways I can imagine:

  • Use-Case #1. Function returns the result
  • Use-Case #2. Function updates by ref parameter with the result
  • Use-Case #3. Function prints the result

Note: This is just to show how the submit function works. The options are not exclusive.

Use-Case #1

The multiply function with a return looks like this:

// Simple function that multiplies two numbers and returns the result
int multiply(const int a, const int b) {
  const int res = a * b;
  return res;
}

Then, the submit:

// The type of future is given by the return type of the function
std::future<int> future = pool.submit(multiply, 2, 3);

We can also use the auto keyword for convenience:

auto future = pool.submit(multiply, 2, 3);

Nice. When the work is finished by the thread pool, we know that the future will be updated, and we can retrieve the result by calling:

const int result = future.get();
std::cout << result << std::endl;

The get() function of std::future always returns the type T of the future. This type will always be equal to the return type of the function passed to the submit method; in this case, int.

Use-Case #2

The multiply function has a parameter passed by ref:

// Simple function that multiplies two numbers and updates the out_res variable passed by ref
void multiply(int& out_res, const int a, const int b) {
	out_res = a * b;
}

Now, we have to call the submit function with a subtle difference. Because we are using templates and type deduction (universal references), the parameter passed by ref needs to be wrapped in std::ref(param) to make sure that we pass it by reference and not by value.

int result = 0;
auto future = pool.submit(multiply, std::ref(result), 2, 3);
// result is 0
future.get();
// result is 6
std::cout << result << std::endl;

In this case, what's the type of the future? Well, as I said before, the return type will always be equal to the return type of the function passed to the submit method. Because this function returns void, the future is a std::future<void>. Calling future.get() returns void. That's not very useful as a value, but we still need to call get() to make sure that the work has been done.

Use-Case #3

The last case is the easiest one: our multiply function has no output parameters and simply prints the result. For this example I implemented the following multiplication function:

// Simple function that multiplies two numbers and prints the result
void multiply(const int a, const int b) {
  const int result = a * b;
  std::cout << result << std::endl;
}

Then, we can simply call:

auto future = pool.submit(multiply, 2, 3);
future.get();

In this case, we know that the result will be printed as soon as the multiplication is done. If we care about when this happens, we can wait for it by calling future.get().

Check out the main program for a complete example.

Future work

  • Make it more reliable and safer (exceptions)
  • Find a better way to use it with member functions (thanks to @rajenk)
  • Run benchmarks and improve performance if needed
  • Evaluate performance and impact of std::function in the heap and try alternatives if necessary. (thanks to @JensMunkHansen)

References

  • Thread pool — Wikipedia