Libgo - Go-style concurrency in C++11

Overview

libgo


libgo -- a coroutine library and a parallel programming library

Libgo is a stackful coroutine library for collaborative scheduling, written in C++11. It is also a powerful and easy-to-use parallel programming library.

Three platforms are currently supported:

Linux

MacOSX

Windows (Win7 or above, x86 or x64, compile with VS2015/2017)

Using libgo to write multi-threaded programs, development can be as fast and straightforward as in concurrency-oriented languages such as golang and Erlang, while keeping the performance advantages of native C++. It lets you have the best of both worlds.

Libgo has the following characteristics:

  • 1.Provides powerful golang-style coroutines: write code in a synchronous style while achieving asynchronous performance.

  • 2.Supports massive numbers of coroutines: creating 1 million coroutines requires only 4.5 GB of physical memory (measured in a real test, with no deliberate stack compression).

  • 3.Supports scheduling coroutines across multiple threads, with an efficient load-balancing strategy and synchronization primitives, making it easy to write efficient multi-threaded programs.

  • 4.The number of scheduler threads can scale dynamically, and a slow coroutine cannot cause head-of-line blocking.

  • 5.Uses hook technology to turn the synchronous calls of third-party libraries linked into the process into asynchronous calls, greatly improving their performance. There is no need to worry that some database vendors provide no asynchronous driver: client drivers such as hiredis and mysqlclient can be used directly and achieve performance comparable to asynchronous drivers.

  • 6.Both dynamic linking and fully static linking are supported, which makes it easy to build executables with C++11 static linking and deploy them to older Linux systems.

  • 7.Provides Channel, co_mutex, co_rwmutex, timers and other facilities to help users write programs more easily.

  • 8.Supports coroutine-local storage (CLS), covering every scenario that TLS covers (read the tutorial code sample13_cls.cpp for details).

  • User feedback over the past two years shows that many developers have an existing project built on an asynchronous non-blocking model (typically based on the epoll, libuv or ASIO network libraries) that then needs to access databases such as MySQL that provide no asynchronous driver. Conventional connection-pool plus thread-pool schemes are expensive in high-concurrency scenarios: for best performance each connection must correspond to a thread, a thread context switch costs thousands of instruction cycles, and too many active threads sharply degrade the OS scheduler's capacity — which is unacceptable to many developers.

  • In this situation, there is no need to restructure existing code in order to use libgo to solve the problem of blocking operations inside a non-blocking model. libgo 3.0 provides three dedicated tools for this scenario that solve the problem without intrusion: multiple schedulers with isolated run environments and easy interaction (read the tutorial code sample1_go.cpp for details), and a replacement for the traditional thread-pool scheme (read the tutorial code sample10_co_pool.cpp and sample11_connection_pool.cpp for details).

  • **The tutorial directory contains many tutorial programs, each with detailed instructions, so developers can learn the libgo library step by step.**

  • If you find any bugs, have good suggestions, or run into ambiguities, you can open an issue or contact the author directly: Email: [email protected]

Compile and use libgo:

  • Vcpkg:

If you have vcpkg installed, you can install libgo directly with it: $ vcpkg install libgo

  • Linux:

    1.Use cmake to compile and install:

      $ mkdir build
      $ cd build
      $ cmake ..
      $ make debug        # Skip this if you don't want a debuggable version.
      $ sudo make uninstall
      $ sudo make install

    2.Dynamic link against glibc: (put libgo at the front of the link list)

      g++ -std=c++11 test.cpp -llibgo -ldl [-lother_libs]
    

    3.Fully static link: (put libgo at the front of the link list)

      g++ -std=c++11 test.cpp -llibgo -Wl,--whole-archive -lstatic_hook -lc -lpthread -Wl,--no-whole-archive [-lother_libs] -static
    
  • Windows: (3.0 is compatible with windows, just use master branch directly!)

    0.When downloading the code from GitHub on Windows, pay attention to newline characters: install git correctly (using the default options) and use git clone to download the source code. (Do not download compressed archives.)

    1.Use CMake to build project.

      #For example vs2015(x64):
      $ cmake .. -G"Visual Studio 14 2015 Win64"
    
      #For example vs2015(x86):
      $ cmake .. -G"Visual Studio 14 2015"
    

    2.If you want to run the test code, link the boost library and set BOOST_ROOT in the cmake parameters:

      	For example:
      	$ cmake .. -G"Visual Studio 14 2015 Win64" -DBOOST_ROOT="e:\\boost_1_69_0"
    

Performance

Like golang, libgo implements a complete scheduler (users only create coroutines, without concern for their execution, suspension or resource reclamation). libgo is therefore fit for a single-threaded performance comparison with golang (it is not fit for comparisons across different capabilities).

Test environment: 2018 13-inch MacBook (base CPU). Operating system: Mac OSX. CPU: 2.3 GHz Intel Core i5 (4 cores, 8 threads). Test script: $ test/golang/test.sh thread_number

Matters needing attention (WARNING):

Avoid TLS, and non-reentrant library functions that depend on a TLS-based implementation, as far as possible. If their use is unavoidable, take care not to access TLS data obtained before a coroutine switch after the switch has happened.

Several kinds of behavior may cause a coroutine switch:

  • The user calls co_yield to voluntarily give up the CPU.
  • Contending for a coroutine lock, or reading/writing a Channel.
  • Sleep-family system calls.
  • System calls that wait for events, such as poll, select, epoll_wait.
  • DNS-related system calls (the gethostbyname family).
  • connect, accept and data read/write operations on blocking sockets.
  • Data read/write operations on pipes.

List of system calls hooked on Linux:

	connect   
	read      
	readv     
	recv      
	recvfrom  
	recvmsg   
	write     
	writev    
	send      
	sendto    
	sendmsg   
	poll      
	__poll
	select    
	accept    
	sleep     
	usleep    
	nanosleep
	gethostbyname                                                               
	gethostbyname2                                                              
	gethostbyname_r                                                             
	gethostbyname2_r                                                            
	gethostbyaddr                                                               
	gethostbyaddr_r

All of the system calls above can potentially block. Inside a coroutine, the whole thread is no longer blocked: during the blocking wait, the CPU can switch to other coroutines. When these hooked system calls are executed in a native thread, their behavior is 100% identical to the original system calls, with no change whatsoever.

	socket
	socketpair
	pipe
	pipe2
	close     
	__close
	fcntl     
	ioctl     
	getsockopt
	setsockopt
	dup       
	dup2      
	dup3      

The system calls above do not block. Although they are also hooked, their behavior is not changed; the hooks only track socket options and state.

List of system calls hooked on Windows:

	ioctlsocket                                                                        
	WSAIoctl                                                                           
	select                                                                             
	connect                                                                            
	WSAConnect                                                                         
	accept                                                                             
	WSAAccept                                                                          
	WSARecv                                                                            
	recv                                                                               
	recvfrom                                                                           
	WSARecvFrom                                                                        
	WSARecvMsg                                                                         
	WSASend                                                                            
	send                                                                               
	sendto                                                                             
	WSASendTo                                                                          
	WSASendMsg
Comments
  • Timer misbehaves after running for a while

    Note that timer 22575287 should have waited 2 s but was triggered immediately, with precision -2 s. Detailed log:

      [2019-04-10 13:39:53.883513][05434][0000][000003]hook.cpp:81:(libgo_poll) task(id:3, file:/work/RLL/src/linkconf/syncer.cpp, line:19) hook libgo_poll(first-fd=8, nfds=1, timeout=2000, nonblocking=1). In coroutine.
      [2019-04-10 13:39:53.883519][05434][0000][000003]timer.h:445:(Dispatch) [id=1]Timer Dispatch mainloop=0 element=22575287 into completeSlot
      [2019-04-10 13:39:53.883525][05434][0000][000005]hook.cpp:793:(usleep) task(id:5, file:/work/mcu/server.cpp, line:32) hook usleep(microseconds=1000000). In coroutine.
      [2019-04-10 13:39:53.883529][05434][0000][000005]timer.h:445:(Dispatch) [id=1]Timer Dispatch mainloop=0 element=22575288 into completeSlot
      [2019-04-10 13:39:53.883544][05434][-001][000000]hook.cpp:171:(read_write_mode) task(nil) hook write(fd=4, buflen=147). Not in coroutine.
      [2019-04-10 13:39:53.883555][05434][-001][000000]hook.cpp:171:(read_write_mode) task(nil) hook write(fd=4, buflen=104). Not in coroutine.
      [2019-04-10 13:39:53.883608][05434][-001][000000]timer.h:402:(Trigger) [id=1]Timer trigger element=22575285 precision= -999881 us
      [2019-04-10 13:39:53.883621][05434][-001][000000]timer.h:402:(Trigger) [id=1]Timer trigger element=22575286 precision= -9999879 us
      [2019-04-10 13:39:53.883625][05434][-001][000000]timer.h:402:(Trigger) [id=1]Timer trigger element=22575287 precision= -1999893 us
      [2019-04-10 13:39:53.883629][05434][-001][000000]timer.h:402:(Trigger) [id=1]Timer trigger element=22575288 precision= -999899 us

    opened by dearbird 20
  • co_yield crashes abnormally

    Recently I tried migrating to version 3.0, but the program core-dumps shortly after starting. After tracing and debugging, replacing every co_yield with co_sleep(0) makes the problem disappear.

    I also wrote test code to verify, and can basically confirm it:

      TEST(Coroutine, success2)
      {
          std::atomic<uint32_t> count{0};

          auto func = [&]() {
              co_yield; // crash
              co_sleep(100);
              count--;
          };

          auto test = [&]() {
              for (int i = 0; i < 10000; ++i)
              {
                  count++;
                  go func;
              }

              while (count > 0)
              {
                  co_sleep(10);
              }
              // EXPECT_EQ(count, 0);

              co_sched.Stop();
          };

          go test;

          co_sched.Start();
      }

    Program received signal SIGSEGV, Segmentation fault.
    [Switching to Thread 0x7ffff693c700 (LWP 29310)]
    0x00007ffff7b9ef66 in jump_fcontext () from /work/RLL/libs/libgo/lib/liblibgo.so

    opened by dearbird 13
  • Building version 3.0

    libgo version: 3.0 branch

    boost version: 1.67.0

    Environment: gcc 4.8.3

      cat /etc/release
      CentOS Linux release 7.3.1611 (Core)
      Derived from Red Hat Enterprise Linux 7.3 (Source)
      NAME="CentOS Linux"

    Build command:

      g++ -std=c++11 sample2_yield.cpp -I/home/huangtao/live-service/live-server/include/libgo -L/home/huangtao/live-service/live-server/libs -llibgo -L/usr/local/boost/lib/ -lboost_coroutine -lboost_context -lboost_thread -lboost_system -ldl -lpthread -static -static-libstdc++

    It crashes immediately:

      Program received signal SIGSEGV, Segmentation fault.
      0x0000000000000000 in ?? ()
      (gdb) bt
      #0  0x0000000000000000 in ?? ()
      #1  0x0000000000415205 in staticInitialize () at /home/huangtao/libgo/libgo/common/config.cpp:34
      #2  co::LibgoInitialize () at /home/huangtao/libgo/libgo/common/config.cpp:41
      #3  0x000000000040b5de in co::Scheduler::Scheduler (this=0x783a00 co::Scheduler::getInstance()::obj) at /home/huangtao/libgo/libgo/scheduler/scheduler.cpp:30
      #4  0x00000000004027fb in main ()

    Also, should the build ultimately depend on libc or on the pthread library? If it depends only on libc, the build fails with:
    /home/huangtao/libgo/libgo/common/config.cpp:115: undefined reference to `pthread_self'

    opened by ht101996 12
  • Some discussion of libgo 2.0's design

    @yyzybb537 Let me start with Windows, where I have read through the code. Because libgo's network handling hooks the system IO calls to convert blocking sockets to coroutines — a blocking socket is turned into a coroutine sleep — it must ensure the tasks in the sleep queue get to run fully, i.e. synchronous IO becomes asynchronous through coroutine sleeps (so the physical thread is never blocked). In effect the thread keeps polling every socket in the sleep queue, each socket repeatedly calling select, which is fairly inefficient. If the design did not have to accommodate third-party libraries, it could skip hooking system calls and expose the library's own APIs instead, e.g. go_send and go_recv. Then there would be no need to process the sleeping sockets at high frequency: all sockets could be placed in a single select/io_wait to be checked for readiness, without calling select/io_wait so often (the timeout there determines the precision of sleep and timers). If this is meant as a network library, the biggest load is IO processing, and efficient IO should be the library's focus.

    The Windows network IO path: a coroutine calls the system send; since some system calls are hooked, this goes to hook_send --> write_mode_hook: SetNonblocking(s, true) makes the socket non-blocking, then R ret = fn(s, std::forward(args)...) calls the original system send to try to send the data. On success the socket is restored to its previous mode and the call returns. If send fails, it may need to wait: select(1, NULL, &wfds, NULL, timeout ? &tm : NULL) --> hook_select loops calling the system select via safe_select(nfds, readfds, writefds, exceptfds) to check whether the socket can send. If the socket is not writable, it calls g_Scheduler.SleepSwitch(delta_time) to switch coroutines using a coroutine sleep; on every wakeup it checks for timeout, exiting the loop and returning 0 on timeout. If the socket is writable, it returns the system select's value (>0). Back in write_mode_hook, ret = fn(s, std::forward(args)...); runs and the system send completes the transfer.

    opened by bigbao9494 11
  • libgo performance problem

    Hello. While testing coroutine switching with libgo I profiled it and observed the following:

      Function name: <lambda_90f6debc7fcf34860066f531e8cc5f6b>::operator()
      Calls: 1   Inclusive elapsed %: 99.97   Exclusive elapsed %: 8.92
      Avg inclusive elapsed: 12,601.52   Avg exclusive elapsed: 1,124.35   Module: switch.t.exe

    Cause: this->fn_() inside Task::task_cb is called only once when entering the user function, yet its exclusive share is as high as 8.92%. If coroutines are created frequently and each finishes quickly, won't this become a bottleneck?

    I tested this hypothesis and saw an enormous performance drop, down to 1/20 of the original. My environment is vs2015 on win7:

      // Keep creating coroutines; only one coroutine exists at any time
      void task_test() { ++counter[0]; go task_test; }
      int main() { go task_test; co_sched.RunLoop(); }

    opened by bigbao9494 11
  • Random crash at program exit

    Stack trace below. Could Stop() be implemented to wait for all coroutines to exit, or could an extra interface be provided, so that graceful shutdown is possible?

      #0  _M_find_before_node (__code=3, __k=, __n=3, this=0x9230f0 co::HookHelper::getInstance()::obj+144) at /usr/include/c++/4.8.2/bits/hashtable.h:1162
      1162  __node_type* __p = static_cast<__node_type*>(__prev_p->_M_nxt);
      Missing separate debuginfos, use: debuginfo-install glibc-2.17-260.el7_6.3.x86_64 libgcc-4.8.5-36.el7.x86_64 libstdc++-4.8.5-36.el7.x86_64
      (gdb) bt
      #0  _M_find_before_node (__code=3, __k=, __n=3, this=0x9230f0 co::HookHelper::getInstance()::obj+144) at /usr/include/c++/4.8.2/bits/hashtable.h:1162
      #1  _M_find_node (__c=3, __key=, __bkt=3, this=0x9230f0 co::HookHelper::getInstance()::obj+144) at /usr/include/c++/4.8.2/bits/hashtable.h:604
      #2  find (__k=, this=0x9230f0 co::HookHelper::getInstance()::obj+144) at /usr/include/c++/4.8.2/bits/hashtable.h:1025
      #3  find (__x=, this=0x9230f0 co::HookHelper::getInstance()::obj+144) at /usr/include/c++/4.8.2/bits/unordered_map.h:543
      #4  co::HookHelper::GetSlot (this=, fd=fd@entry=3) at /git/libgo/libgo/netio/unix/hook_helper.cpp:61
      #5  0x00000000006093b6 in co::HookHelper::OnClose (this=, fd=fd@entry=3) at /git/libgo/libgo/netio/unix/hook_helper.cpp:36
      #6  0x00000000005fb69f in fclose (fp=0x24db2f0) at /git/libgo/libgo/netio/unix/hook.cpp:1038
      #7  0x00000000006229b9 in __gcov_close ()
      #8  0x0000000000623571 in gcov_exit ()
      #9  0x00007fd73ab64b69 in __run_exit_handlers () from /lib64/libc.so.6
      #10 0x00007fd73ab64bb7 in exit () from /lib64/libc.so.6
      #11 0x00007fd73ab4d3dc in __libc_start_main () from /lib64/libc.so.6
      #12 0x00000000004ed7b9 in _start ()

    opened by dearbird 10
  • sleep wakeup times have noticeable error

    Earlier tests were on a local virtualbox VM. On a physical machine the error is smaller, but a 10 ms sleep still sometimes takes more than 20 ms to wake.

    In practice, co_sleep(10) often takes tens of milliseconds, sometimes more than 100 ms, to wake up.

      [2018-04-10 15:35:49.215056][27545][00]sleep_wait.cpp:17:(CoSwitch) task(1 :{file:/work/src/public/coroutine.cpp, line:33}) will sleep 10 ms
      [2018-04-10 15:35:49.215063][27545][00]sleep_wait.cpp:23:(SchedulerSwitch) task(1 :{file:/work/src/public/coroutine.cpp, line:33}) begin sleep 10 ms
      [2018-04-10 15:35:49.351129][27545][00]sleep_wait.cpp:42:(WaitLoop) enter timer callback 495
      [2018-04-10 15:35:49.351154][27545][00]sleep_wait.cpp:53:(Wakeup) task(1 :{file:/work/src/public/coroutine.cpp, line:33}) wakeup
      [2018-04-10 15:35:49.351160][27545][00]sleep_wait.cpp:44:(WaitLoop) leave timer callback 495

    opened by dearbird 10
  • Is starting a coroutine from within a coroutine supported? My test crashes

      co_chan<std::shared_ptr<sql::Connection>> ch(3);
      uint64_t tmp;
      {
          std::lock_guard<std::mutex> guard(mu);
          tmp = reqKey++;
          connRequests[tmp] = ch;
      }

      go [&]
      {
          sleep(2);
          ch << nullptr; // crashes here
      };
      cout << "ci yield to get: " << tmp << endl;
      std::shared_ptr<sql::Connection> conn;
      ch >> conn;

    The code above runs inside a coroutine. It opens a channel ch to receive data; if nothing arrives within two seconds the call should return, so another coroutine is started that sleeps 2 s and then pushes an empty result. It crashes every single time.

    opened by lyfunny 9
  • udp socket receives data but it never comes back

    A wireshark capture confirms the packets reach the host. Attaching gdb confirms the epoll thread is running, and another TCP socket in the same process still receives data.

    Checking /proc/net/udp:

      866: 00000000:2328 00000000:0000 07 00000000:00201000 00:00000000 00000000 0 0 176774 2 ffff880139663740 7937300

    shows the tx_queue is already full and packets are being dropped.

    I suspect a bug in the epoll implementation: the fd is not put back into the epoll set, or the corresponding event is not registered.

    Also, this udp socket can still send packets normally.

    opened by dearbird 8
  • Could a coroutine priority setting be added?

    After a week of studying libgo I have learned a lot, and a possibly useful new feature came to mind.

    Scenario: a massive number of coroutines share a limited set of resources, e.g. a database connection pool whose capacity is far smaller than the coroutine count. Only coroutines that have acquired a resource can keep running; all others must wait for a resource to be released. As a result, most of the scheduler's time is spent switching coroutines and waiting, with very little actual run time (for the coroutines that hold resources).

    So I wonder whether coroutines could be given priorities, letting the scheduler run high-priority coroutines (those that already hold resources) first.

    (ps: I am not sure whether this goes against the intended use of coroutines.)

    opened by Archer-Hidden 6
  • Could coroutine-switch hooks be added? Already implemented on a private branch; if acceptable, I will submit a pull request

    Scenario: hooking system calls directly so that third-party libraries become asynchronous is a very good feature. The problem is that some third-party libraries make deep use of global variables or TLS, which makes them impossible to use together with libgo — and such libraries may have no source available, or their source cannot be modified for various reasons.

    One example is a project I am working on and preparing to open-source: phpgo, i.e. goroutines on PHP. It is implemented as a PHP extension that provides the core of go (based on the libgo coroutine library, essentially wrapping part of libgo's functionality — thanks, libgo!). The problem is that both PHP 5 and PHP 7 use TLS extensively in the main program, and we cannot simply modify PHP's core code.

    One solution: when a coroutine is switched in, save the scheduler-related globals/TLS and install the coroutine's context; when it is switched out, save the coroutine's globals/TLS and restore the scheduler's context. That is what we did: we added two hooks to the Scheduler's TaskListener — a SwapIn hook and a SwapOut hook — and perform the save/restore inside them.

    We forked libgo, made these changes, and use the fork in phpgo; so far phpgo has been running stably in limited deployment.

    Request: we believe coroutine-switch hooks could solve a good share of similar problems and would be a very worthwhile addition :) So we request adding two hooks to Scheduler::TaskListener: a SwapIn hook and a SwapOut hook.

    Would that be acceptable? Looking forward to your reply.

    opened by birdwyx 6
  • MacOS X M1 segmentation fault

    It builds successfully on a MacBook M1 (the bundled boost.context under third_party has to be updated to the latest boost 1.81; the previous copy dates from 2018, before the Mac M1 chip was released).

    But running test/golang/test.sh fails, and running libgo_test alone reports a segmentation fault:

      $ ./libgo_test
      std::atomic.add 10000000 9 ns/op 11059 w/s
      Segmentation fault: 11

    (vscode debugging screenshot attached)

    opened by akofer 0
  • Running the sample10_co_pool.cpp example crashes

    Operating system: centos7

    [lossv@relax 19:50:57 tutorial]$ gdb sample10_co_pool_t core.19029 
    GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-120.el7
    Copyright (C) 2013 Free Software Foundation, Inc.
    License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
    and "show warranty" for details.
    This GDB was configured as "x86_64-redhat-linux-gnu".
    For bug reporting instructions, please see:
    <http://www.gnu.org/software/gdb/bugs/>...
    Reading symbols from /data/home/lossv/test/libgo_redis/build/tutorial/sample10_co_pool_t...done.
    
    warning: core file may not match specified executable file.
    [New LWP 19029]
    [Thread debugging using libthread_db enabled]
    Using host libthread_db library "/usr/lib64/libthread_db.so.1".
    Core was generated by `./sample10_co_pool_t'.
    Program terminated with signal 11, Segmentation fault.
    #0  0x0000000000000000 in ?? ()
    Missing separate debuginfos, use: debuginfo-install cyrus-sasl-lib-2.1.26-23.el7.x86_64 glibc-2.17-325.el7_9.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-51.el7_9.x86_64 libcom_err-1.42.9-19.el7.x86_64 libcurl-7.29.0-59.el7_9.1.x86_64 libgcc-4.8.5-44.el7.x86_64 libidn-1.28-4.el7.x86_64 libselinux-2.5-15.el7.x86_64 libssh2-1.8.0-3.el7.x86_64 libstdc++-4.8.5-44.el7.x86_64 nspr-4.32.0-1.el7_9.x86_64 nss-3.44.0-4.el7.x86_64 nss-softokn-freebl-3.67.0-3.el7_9.x86_64 nss-util-3.67.0-1.el7_9.x86_64 openldap-2.4.44-21.el7_6.x86_64 openssl-libs-1.0.2k-25.el7_9.x86_64 pcre-8.32-17.el7.x86_64 zlib-1.2.7-19.el7_9.x86_64
    (gdb) bt
    #0  0x0000000000000000 in ?? ()
    #1  0x0000000000415133 in staticInitialize () at /data/home/lossv/download/libgo-3.1-stable/libgo/common/config.cpp:38
    #2  co::LibgoInitialize () at /data/home/lossv/download/libgo-3.1-stable/libgo/common/config.cpp:48
    #3  0x0000000000418f94 in co::Scheduler::Scheduler (this=0x1ab54e0) at /data/home/lossv/download/libgo-3.1-stable/libgo/scheduler/scheduler.cpp:59
    #4  0x00000000004191cc in co::Scheduler::Create () at /data/home/lossv/download/libgo-3.1-stable/libgo/scheduler/scheduler.cpp:50
    #5  0x000000000040ddf8 in co::AsyncCoroutinePool::AsyncCoroutinePool (this=0x1ab5010, maxCallbackPoints=128)
        at /data/home/lossv/download/libgo-3.1-stable/libgo/pool/async_coroutine_pool.cpp:75
    #6  0x000000000040e1d1 in co::AsyncCoroutinePool::Create (maxCallbackPoints=128)
        at /data/home/lossv/download/libgo-3.1-stable/libgo/pool/async_coroutine_pool.cpp:8
    #7  0x000000000040a8c1 in main () at /data/home/lossv/test/libgo_redis/tutorial/sample10_co_pool.cpp:50
    
    opened by lossv 3
  • The co_pool example crashes when tested with vs2015 on windows

      void done() { printf("done.\n"); }

      int calc() { return 1024; }

      void callback(int val) { printf("calc result: %d\n", val); }

      void main()
      {
          co::AsyncCoroutinePool * pPool = co::AsyncCoroutinePool::Create(1024);
          pPool->InitCoroutinePool(1024);
          pPool->Start(4, 128);

          auto cbp = new co::AsyncCoroutinePool::CallbackPoint;
          pPool->AddCallbackPoint(cbp);

          pPool->Post(&foo, &done);
          pPool->Post<int>(&calc, &callback);

          for (;;) {
              size_t trigger = cbp->Run();
              if (trigger > 0)
                  break;
          }
      }

    It seems to get stuck in this loop, like an infinite loop; it never exits and later crashes. Is it that there are no messages — that after Post, everything Popped inside Run comes back empty?

    opened by zsxcandy 0
Releases(v2.6)
  • v2.6(Oct 20, 2016)

    v2.6 is a libgo release optimized for HTTP-like half-duplex protocols; in such scenarios performance improves by 100% over the previous release.

    Major changes

    HOOK

    • Added a safe-signal feature so that signals are triggered inside Run; build with -DWITH_SAFE_SIGNAL=ON. This sidesteps Linux's requirement that functions called from signal handlers be reentrant.
    • Hooked the gethostbyname and gethostbyaddr families of functions, so blocking DNS resolution no longer blocks the thread. The implementation depends on libcares; build with -DWITH_CARES=ON.

    Coroutine scheduling

    • Optimized switch latency: with 100 coroutines switching frequently, throughput reaches 11 million switches per second

    Network IO

    • ET mode is now the default, improving half-duplex protocol performance by 100%

    Bug fixes

    • Fixed a bug where close could not be hooked when using std::fstream.
    Source code(tar.gz)
    Source code(zip)
  • v2.4-stable(May 31, 2016)

    v2.4-stable is libgo's first stable release. It has withstood large-scale production use, and many bugs found during online testing have been fixed. More than 200 Linux servers and over 600 processes currently run this version 24/7.

    Major changes

    Coroutine scheduling

    1.Removed the ENABLE_SHARED_STACK option
    2.Multi-threaded scheduling now uses work-stealing, which can be disabled via a setting
    3.Optimized switch speed, reaching tens of millions of switches per second
    4.Added go_dispatch, which lets a coroutine be assigned to a specific thread at creation

    Memory management

    1.Added a mechanism to customize the stack malloc/free functions, enabling memory-pool optimization, debug instrumentation, etc.
    2.Fixed a bug where a coroutine's function object was destructed outside the coroutine

    Network IO

    1.Each thread uses its own epoll

    Timers

    1.Depending on the timing parameter, timers use either the system clock (affected by system time changes) or the steady clock (unaffected): system_clock::time_point uses system timing; steady_clock::time_point and duration use steady timing.

    Source code(tar.gz)
    Source code(zip)
  • v2.3(Apr 17, 2016)

    1.Refactored the IO-layer hook code; by tracking socket state, redundant system calls are eliminated and IO performance improves by another ~30%
    2.Supports multi-process servers sharing a listening port
    3.Added a set_connect_timeout interface to set a connect timeout, making up for the native syscall's lack of one

    Source code(tar.gz)
    Source code(zip)
  • v2.2(Jan 29, 2016)

    Optimized performance in multi-threaded mode

    1.Redesigned the runnable-list data structure to reduce contention when processors grab coroutines
    2.Coroutine stacks are now allocated immediately when the coroutine is created (except in ENABLE_SHARED_STACK mode)
    3.Redesigned the delete-list structure to reduce multi-thread contention
    4.Rewrote the bm.cpp benchmark code

    Continuous integration

    1.Added automated tests for two more configurations on travis-ci: ENABLE_BOOST_COROUTINE and ENABLE_SHARED_STACK
    2.Switched the CI test environment to Ubuntu 14.04
    
    Source code(tar.gz)
    Source code(zip)
  • v2.1-beta(Jan 27, 2016)

    1.Added a go_stack macro to specify the stack size of a newly created coroutine

        Usage:
            go_stack(102400)  foo;

    2.Channel timeouts now use timers instead of polling

    3.Release builds also generate debug information

    Source code(tar.gz)
    Source code(zip)
  • v2.0(Jan 23, 2016)

    1.Full Windows support

    Supports x86 and x64 builds on Win7, Win8 and Win10

    2.More CMake build options

    ENABLE_BOOST_COROUTINE
        On Linux, libgo uses ucontext for coroutine context switching by default; this option replaces ucontext with boost.coroutine.
        Usage:
            $ cmake .. -DENABLE_BOOST_COROUTINE=1

    ENABLE_SHARED_STACK
        Available when using ucontext for context switching. When enabled, multiple coroutines share a single stack,
        which saves roughly 4x memory.
        It has side effects; see item 4 of the WARNING.
        This option cannot be combined with ENABLE_BOOST_COROUTINE.
        Usage:
            $ cmake .. -DENABLE_SHARED_STACK=1

    DISABLE_HOOK
        Disables syscall hooking. With this option, network-io-related syscalls revert to their default system behavior,
        and blocking network io inside a coroutine may genuinely block the thread. Do not enable this unless you have a special need.
        Usage:
            $ cmake .. -DDISABLE_HOOK=1

    3.Less burden on users

    Under the default build parameters there is no longer any restriction on accessing objects on a coroutine's stack

    4.Build

    libgo's source, unit tests and benchmarks are all built with CMake

    travis is used for continuous integration and automated testing to keep the code usable
    
    Source code(tar.gz)
    Source code(zip)