C++ Parallel Computing and Asynchronous Networking Engine

Overview

Chinese version entry

Sogou C++ Workflow


As Sogou's C++ server engine, Sogou C++ Workflow supports almost all back-end C++ online services of Sogou, including all search services, cloud input method, online advertisements, etc., handling more than 10 billion requests every day. It is an enterprise-level programming engine with a light and elegant design that can satisfy most C++ back-end development requirements.

You can use it:

  • To quickly build an HTTP server:
#include <stdio.h>
#include "workflow/WFHttpServer.h"

int main()
{
    WFHttpServer server([](WFHttpTask *task) {
        task->get_resp()->append_output_body("<html>Hello World!</html>");
    });

    if (server.start(8888) == 0) { // start server on port 8888
        getchar(); // press "Enter" to end.
        server.stop();
    }

    return 0;
}
  • As a multifunctional asynchronous client, it currently supports HTTP, Redis, MySQL and Kafka protocols.
  • To implement client/server on a user-defined protocol and build your own RPC system.
    • srpc is an independent open source project based on it, supporting the srpc, brpc and thrift protocols.
  • To build asynchronous workflows; common series and parallel structures are supported, as well as arbitrary DAG structures.
  • As a parallel computing tool. In addition to networking tasks, Sogou C++ Workflow also includes the scheduling of computing tasks. All types of tasks can be put into the same flow.
  • As an asynchronous file IO tool on Linux, with performance exceeding any synchronous system call. Disk file IO is also a task.
  • To realize any high-performance and high-concurrency back-end service with a very complex relationship between computing and networking.
  • To build a micro service system.
    • This project has built-in service governance and load balancing features.

Compiling and running environment

  • This project supports Linux, macOS, Windows and other operating systems.
    • The Windows version is currently released as a separate branch, using iocp to implement asynchronous networking. All user interfaces are consistent with the Linux version.
  • Supports all CPU platforms, including 32-bit and 64-bit x86 processors and big-endian or little-endian arm processors.
  • Relies on OpenSSL; OpenSSL 1.1 or above is recommended. If you don't want SSL, you may check out the nossl branch, but you still need to link crypto for md5 and sha1.
  • Uses the C++11 standard, so it should be compiled with a compiler that supports C++11. Does not rely on boost or asio.
  • No other dependencies. However, if you need the Kafka protocol, some compression libraries should be installed, namely lz4, zstd and snappy.

Try it!

System design features

We believe that a typical back-end program = protocol + algorithm + workflow, and that the three parts should be developed completely independently.

  • Protocol
    • In most cases, users use built-in common network protocols, such as HTTP, Redis or various rpc.
    • Users can also easily customize their own network protocols; they only need to provide the serialization and deserialization functions to define their own client/server.
  • Algorithm
    • In our design, the algorithm is a concept symmetrical to the protocol.
      • If a protocol invocation is an rpc, then an algorithm invocation is an apc (Async Procedure Call).
    • We have provided some general algorithms, such as sort, merge, psort, reduce, which can be used directly.
    • Compared with a user-defined protocol, a user-defined algorithm is much more common. Any complicated computation with clear boundaries should be packaged into an algorithm.
  • Workflow
    • Workflow is the actual business logic, which puts the protocols and algorithms into a flow graph for use.
    • The typical workflow is a closed series-parallel graph. Complex business logic may be a non-closed DAG.
    • The workflow graph can be constructed directly or dynamically generated based on the results of each step. All tasks are executed asynchronously.

Basic task, task factory and complex task

  • Our system contains six basic tasks: networking, file IO, CPU, GPU, timer, and counter.
  • All tasks are generated by the task factory and automatically recycled after callback.
    • A server task is a special kind of networking task, generated by the framework through the task factory and handed over to the user via the process function.
  • In most cases, the task generated by the user through the task factory is a complex task, which is transparent to the user.
    • For example, an HTTP request may include many asynchronous processes (DNS, redirection), but for the user, it is just one networking task.
    • File sorting seems to be an algorithm, but it actually includes many complex interaction processes between file IO and CPU computation.
    • If you think of business logic as building circuits with well-designed electronic components, then each electronic component may be a complex circuit.

Asynchrony and encapsulation based on C++11 std::function

  • Not based on user mode coroutines. Users need to know that they are writing asynchronous programs.
  • All calls are executed asynchronously, and there is almost no operation that occupies a thread.
    • Although we also provide some facilities with semi-synchronous interfaces, they are not core features.
  • We try to avoid requiring derivation by users, and encapsulate user behavior with std::function instead, including:
    • The callback of any task.
    • Any server's process. This conforms to the FaaS (Function as a Service) idea.
    • The realization of an algorithm is simply a std::function. But the algorithm can also be implemented by derivation.
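As a minimal sketch of this idea in plain C++ (run_algorithm and apc_sum are hypothetical names, not Workflow APIs), an algorithm and its completion are both just std::function objects:

```cpp
#include <functional>
#include <numeric>
#include <vector>

// Hypothetical sketch (not the Workflow API): an "algorithm" is a callable
// wrapped in std::function, and its completion is delivered through another
// std::function callback, mirroring how the framework encapsulates user code.
template <typename In, typename Out>
void run_algorithm(const In& input,
                   std::function<Out(const In&)> algorithm,
                   std::function<void(const Out&)> callback)
{
    // The real framework would run this on a compute thread and invoke the
    // callback asynchronously; here it runs inline for illustration only.
    Out result = algorithm(input);
    callback(result);
}

// Example "apc": summing a vector, packaged behind the callback interface.
inline int apc_sum(const std::vector<int>& v)
{
    int sum = 0;
    run_algorithm<std::vector<int>, int>(
        v,
        [](const std::vector<int>& in) {
            return std::accumulate(in.begin(), in.end(), 0);
        },
        [&sum](const int& out) { sum = out; });
    return sum;
}
```

The same shape works whether the callable is a lambda, a free function, or a class with operator(), which is why derivation is rarely needed.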

Memory reclamation mechanism

  • Every task will be automatically reclaimed after the callback. If a task is created but a user does not want to run it, the user needs to release it through the dismiss method.
  • Any data in the task, such as the response of the network request, will also be recycled with the task. At this time, the user can use std::move() to move the required data.
  • SeriesWork and ParallelWork are two kinds of framework objects, which are also recycled after their callback.
    • When a series is a branch of a parallel, it will be recycled after the callback of the parallel that it belongs to.
  • This project doesn’t use std::shared_ptr to manage memory.
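A minimal sketch of the move-out pattern, in plain C++ with a hypothetical FakeResponse type (not a Workflow class):

```cpp
#include <string>
#include <utility>

// Hypothetical illustration of the reclamation rule: the framework destroys
// the task and all data inside it right after the callback returns, so the
// callback must std::move anything it wants to keep.
struct FakeResponse { std::string body; };

// What a callback should do: move the payload out before the framework
// reclaims the response together with the task.
inline std::string keep_body(FakeResponse *resp)
{
    return std::move(resp->body);  // steals the buffer, no deep copy
}
```

After keep_body returns, resp->body is in a valid but unspecified state, which is fine because the framework is about to destroy it anyway.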

More design documents

To be continued...

Issues
  • Where is the benchmark code?

    Where can I see the benchmark code? The benchmark link only shows charts; please publish the test code as well.

    opened by qicosmos 42
  • FAQ (continuously updated)

    Project background and the problems it solves

    The C++ Workflow project originated as the communication engine of Sogou's distributed storage project and grew into Sogou's company-wide C++ standard, applied in most of Sogou's C++ back-end services. The project unifies communication and computation, helping users build high-performance services in which the relationship between the two is very complex. At the same time, you can also use it simply as an asynchronous network engine or a parallel computing framework.

    Getting started

    Taking Linux as an example:

    $ git clone https://github.com/sogou/workflow
    $ cd workflow
    $ make
    $ cd tutorial
    $ make
    

    Then you can happily run the examples. Each example has a corresponding document explaining it. If you need the Kafka protocol, please install snappy and lz4 first, and then:

    $ make KAFKA=y
    $ cd tutorial
    $ make KAFKA=y
    

    In addition, make DEBUG=y builds a debug version, and make REDIS=n MYSQL=n UPSTREAM=n strips one or more features, shrinking the library to as little as 400KB and making it better suited to embedded development.

    Advantages over other network engines and RPC projects

    • Simple and easy to get started, no dependencies
    • Excellent performance and stability (benchmark)
    • Rich implementations of common protocols
    • Unified communication and computation
    • Task flow management

    Advantages over other parallel computing frameworks

    • Simple to use
    • Has networking

    Features the project currently does not support

    • pipeline server
    • udp server (udp client is supported)
    • http/2
    • websocket (the websocket client is implemented; see the websocket branch)

    Network protocols natively included in the project

    Currently we have implemented the HTTP, Redis, MySQL and Kafka protocols. Except for Kafka, which is client-only for now, every protocol comes with both client and server, so users can build, for example, proxy servers for the Redis or MySQL protocol. The Kafka module is a plugin and is not compiled by default.

    Why callbacks

    We use C++11 std::function-typed callback and process to encapsulate user behavior, so users need to know that they are writing asynchronous programs. We believe that callbacks bring higher efficiency than futures or user-mode coroutines, and fit well with our unification of communication and computation. Thanks to the way we encapsulate tasks and the convenience of std::function, using callbacks in our framework carries little mental burden and is in fact quite clear and simple.

    In which thread is the callback invoked

    One characteristic of the project is that threads are managed by the framework. Apart from a few special cases, a callback is always invoked in a handler thread that processes network and file IO results (20 by default) or in a compute thread (defaulting to the number of CPU cores). Whichever thread it runs in, waiting or performing heavy computation inside a callback is not recommended. If you need to wait, use a counter task to wait without occupying a thread; heavy computation should be packaged as a compute task. Note that every resource in the framework is allocated on first use: if you never use network communication, no communication-related threads will be created.

    Why does nothing happen after my task starts

    int main(void)
    {
        ...
        task->start();
        return 0;
    }
    

    This is a problem many new users run into. Almost every call in the framework is non-blocking; in the code above, main returns immediately after the task starts and never waits for the task to finish. The correct approach is to wake up the main process in some way, for example:

    WFFacilities::WaitGroup wait_group(1);
    
    void callback(WFHttpTask *task)
    {
        ....
        wait_group.done();
    }
    
    int main(void)
    {
        WFHttpTask *task = WFTaskFactory::create_http_task(url, 0, 0, callback);
        task->start();
        wait_group.wait();
        return 0;
    }
    

    What is the lifetime of a task object

    Any task in the framework (as well as SeriesWork) is handed to the user as a raw pointer. The lifetime of every task object runs from the creation of the object to the completion of its callback. That is, after the callback the task pointer becomes invalid, and the data inside the task is destroyed along with it. If you need to keep some data, move it out with std::move(). For example, to keep the resp of an http task:

    void http_callback(WFHttpTask *task)
    {
        protocol::HttpResponse *resp = task->get_resp();
        protocol::HttpResponse *my_resp = new protocol::HttpResponse(std::move(*resp));
        /* or
        protocol::HttpResponse *my_resp = new protocol::HttpResponse;
        *my_resp = std::move(*resp);
        */
    }
    

    In some cases, if you have created a task but no longer want to start it, you must call task->dismiss() to destroy it directly. It should be stressed that the server's process function is not a callback; the server task's callback happens after the reply has completed, and it is nullptr by default.

    Why isn't SeriesWork (a series) a kind of task

    Our definition of series and parallel is:

    • A series is made of tasks
    • A parallel is made of series
    • A parallel is a kind of task

    Clearly, from these three rules we can recursively derive series-parallel structures of arbitrary complexity. If a series were also defined as a task, a series could be composed of sub-series, and usage would easily descend into confusion. Likewise, a parallel can only be a parallel of several series, also to avoid confusion. In practice you will find that a series is essentially our coroutine.
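The three rules can be modeled in a few lines of plain C++ (a hypothetical sketch, not the actual SeriesWork/ParallelWork classes), which makes the recursion explicit:

```cpp
#include <memory>
#include <vector>

// Hypothetical structural model of the three rules: a series is made of
// tasks, a parallel is made of series, and a parallel is itself a task.
struct Task {
    virtual ~Task() = default;
    virtual int leaf_count() const { return 1; }  // a plain task is one leaf
};

struct Series {                                // rule 1: series = tasks
    std::vector<std::unique_ptr<Task>> tasks;
    int leaf_count() const {
        int n = 0;
        for (const auto& t : tasks)
            n += t->leaf_count();
        return n;
    }
};

struct Parallel : Task {                       // rule 3: a parallel is a task
    std::vector<Series> branches;              // rule 2: parallel = series
    int leaf_count() const override {
        int n = 0;
        for (const auto& s : branches)
            n += s.leaf_count();
        return n;
    }
};
```

Because Parallel is a Task but Series is not, a series can contain a parallel (and thus nested series), yet a series can never directly contain another series, exactly as the rules require.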

    What if I need a more general directed acyclic graph

    You can use WFGraphTask, or construct one yourself with WFCounterTask. Example: https://github.com/sogou/workflow/blob/master/tutorial/tutorial-11-graph_task.cc

    Does the server reply to the request when the process function ends

    No. The server replies once there is no other task left in the series that the server task belongs to. If you don't add any task to that series, it is equivalent to replying when process ends. Note that you should not wait for a task's completion inside process; add the task to the series instead.

    How to make the server wait a short while before replying to a request

    The wrong way is to sleep directly in process. The right way is to add a timer task to the series the server task is in. Taking an http server as an example:

    void process(WFHttpTask *server_task)
    {
        WFTimerTask *timer = WFTaskFactory::create_timer_task(100000, nullptr);
        server_task->get_resp()->append_output_body("hello");
        series_of(server_task)->push_back(timer);
    }
    

    The code above implements an http server with a 100-millisecond delay. Everything runs asynchronously, and no thread is occupied while waiting.

    How do I know whether the reply succeeded

    First, a successful reply is defined as successfully writing the data into the tcp buffer, so if the reply packet is small and the client has not closed the connection (because of a timeout, for example), you can almost assume the reply always succeeds. To inspect the result of the reply, just set a callback on the server task; the state and error codes in the callback are defined the same way as for a client task, except that a server task will never see a dns error.

    Can I choose not to reply

    Yes. Call the server task's noreply() method at any time, and at the moment the reply would normally be sent, the connection is closed instead.

    What are the scheduling rules for compute tasks

    You will notice that every compute task, including WFGoTask, requires a compute queue name at creation; this queue name guides our internal scheduling policy. First, as long as an idle compute thread is available, the task is scheduled immediately and the queue name has no effect. When tasks cannot all be scheduled in real time, tasks under the same queue name are scheduled in FIFO order, while the queues themselves are treated equally. For example, start n tasks with queue name A in a row, then n tasks with queue name B. Then, regardless of each task's cpu time and regardless of the number of compute threads, the two queues will tend to finish at roughly the same time. This rule extends to any number of queues and any starting order.
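As a toy illustration of the saturated case just described (a hypothetical model, not the framework's actual scheduler), per-name FIFO order with equal treatment of queues can be approximated by round-robin draining:

```cpp
#include <deque>
#include <map>
#include <string>
#include <vector>

// Hypothetical model: tasks under one queue name run in FIFO order, and the
// scheduler treats queues equally, approximated here by draining round-robin.
class QueueModel {
public:
    void submit(const std::string& queue_name, int task_id)
    {
        queues_[queue_name].push_back(task_id);  // FIFO within a queue
    }

    // Order in which a single busy worker would pick the tasks.
    std::vector<int> drain_round_robin()
    {
        std::vector<int> order;
        bool any = true;
        while (any) {
            any = false;
            for (auto& kv : queues_) {     // one pass = one task per queue
                if (!kv.second.empty()) {
                    order.push_back(kv.second.front());
                    kv.second.pop_front();
                    any = true;
                }
            }
        }
        return order;
    }

private:
    std::map<std::string, std::deque<int>> queues_;
};
```

Even though all of A's tasks arrive before B's, the drained order interleaves the two queues instead of running A to completion first, which is why the two batches tend to finish at about the same time.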

    Why is there no need to establish a connection first when using the redis client

    First look at the creation interface of a redis client task:

    class WFTaskFactory
    {
    public:
        static WFRedisTask *create_redis_task(const std::string& url, int retry_max, redis_callback_t callback);
    };
    

    The url format is: redis://:password@host:port/dbnum. The default port is 6379 and the default dbnum is 0. An important feature of workflow is that connections are managed by the framework, which keeps the user interface extremely simple and achieves the most effective connection reuse. The framework looks for a reusable connection according to the task's user name, password and dbnum. If none is found, it initiates a new connection and performs the user login, database selection and other operations; for a new host, DNS resolution is performed as well, and the request may be retried on error. Every one of these steps is asynchronous and transparent: the user only needs to fill in the request and start the task, and gets the result in the callback. The only thing to keep in mind is that every task must be created with the password, because a login may be needed at any time. We can create mysql tasks the same way, but for mysql with transaction requirements, tasks must be created through our WFMySQLConnection, otherwise there is no guarantee that the whole transaction runs on the same connection. WFMySQLConnection still keeps the connecting and authentication processes asynchronous.

    What are the rules for connection reuse

    In most cases, the user cannot designate a specific connection for a client task generated by the framework. The framework's connection reuse policy is:

    • If the same address and port has idle connections that meet the requirements, pick the most recently released one. That is, idle connections are reused in last-in-first-out order.
    • When the address and port has no idle connection that meets the requirements:
      • If the current number of concurrent connections is below the maximum (200 by default), initiate a new connection immediately.
      • If the maximum has been reached, the task fails with the system error EAGAIN.
    • Not every connection to the same target address and port is eligible for reuse. For example, database connections under different user names or passwords cannot be reused.

    Although the framework cannot assign a specific connection to a task, we support connection contexts, a feature that is very important for implementing stateful servers. For details, see the documents about connection context.

    If a domain name has several IP addresses, is there load balancing

    Yes. We consider all target IPs under the same domain name equivalent, with the same service capability, so every request looks for the target that appears least loaded locally, and circuit breaking and recovery strategies are built in. For load balancing within one domain name, all targets must serve on the same port, and different weights cannot be configured. Load balancing takes priority over connection reuse: the communication target is chosen first, and connection reuse is considered afterwards.

    How to implement load balancing with weights or across different ports

    See the upstream documents. upstream can also fulfil many more complex service management needs.

    What is the most efficient way to access a chunked-encoded http body

    In many cases we use HttpMessage::get_parsed_body() to obtain the http message body. For efficiency, however, we do not automatically decode chunked encoding for the user, and return the raw body instead. Chunked encoding can be decoded with HttpChunkCursor, for example:

    #include "workflow/HttpUtil.h"
    
    void http_callback(WFHttpTask *task)
    {
        protocol::HttpResponse *resp = task->get_resp();
        protocol::HttpChunkCursor cursor(resp);
        const void *chunk;
        size_t size;
    
        while (cursor.next(&chunk, &size))
        {
            ...
        }
    }
    

    Each call to cursor.next yields the start pointer and size of one chunk, without any memory copy. There is no need to check whether the message is chunk-encoded before using HttpChunkCursor, because a non-chunked body can be treated as a single chunk.

    Can I synchronously wait for a task to finish inside a callback or process

    We do not recommend it, because any task can be chained into a task flow, with no need to occupy a thread waiting. If you must, you can use the WFFuture we provide. Do not use std::future directly, because all of our communication callbacks and process functions run on one group of threads; with std::future, all of those threads may end up waiting, deadlocking the whole program. WFFuture solves this by adding threads dynamically. When using WFFuture, also remember to std::move the data you want to keep (usually the resp) into the result inside the task's callback; otherwise the data is destroyed together with the task after the callback.

    How is data passed between tasks

    Most commonly, tasks in the same series share the series context, read and modified through the series' get_context() and set_context() methods. A parallel, in its callback, can obtain each of its series via series_at() (their callbacks have already been invoked, but they are not destroyed until after the parallel's callback), and thus their contexts. Since a parallel is also a task, it can pass the aggregated results onward through the context of the series it belongs to. In short: a series is a coroutine, the series context is the coroutine's local variables, and a parallel is parallel coroutines that can aggregate the results of all its coroutines.

    The relationship between Workflow and rpc

    In our architecture, rpc is an application on top of workflow, or rather a set of protocol implementations on workflow. If you need interface descriptions and remote interface calls, do try srpc: an rpc system that pushes workflow's capabilities to the limit while integrating with it perfectly, compatible with the brpc and thrift protocols while being faster and easier to use, and able to satisfy any rpc need. Address: https://github.com/sogou/srpc

    When does the server's stop() operation finish

    A server's stop() is a graceful shutdown, and all servers must be stopped before the program exits. stop() consists of shutdown() and wait_finish(); wait_finish waits for every series containing a running server task to end. In other words, you may keep appending tasks to the series in the callback after a server task's reply completes, and stop() will wait for those tasks to finish. Besides, if you run several servers at the same time, the best way to shut down is:

    int main()
    {
        // A server object cannot be started more than once, so a multi-port service needs multiple server objects
        WFRedisServer server1(process);
        WFRedisServer server2(process);
        server1.start(8080);
        server2.start(8888);
        getchar(); // press "Enter" to end
        // shut down all servers first, then wait
        server1.shutdown();
        server2.shutdown();
        server1.wait_finish();
        server2.wait_finish();
        return 0;
    }
    

    How to stop the server upon receiving a particular request

    Since stopping a server consists of shutdown() and wait_finish(), you can obviously call shutdown in process and wait_finish in main(), for example:

    #include <string.h>
    #include <atomic>
    #include "workflow/WFHttpServer.h"
    
    extern void process(WFHttpTask *task);
    WFHttpServer server(process);
    
    void process(WFHttpTask *task) {
        if (strcmp(task->get_req()->get_request_uri(), "/stop") == 0) {
            static std::atomic<int> flag;
            if (flag++ == 0)
                server.shutdown();
            task->get_resp()->append_output_body("<html>server stop</html>");
            return;
        }
    
        /* Server's logic */
        //  ....
    }
    
    int main() {
        if (server.start(8888) == 0)
            server.wait_finish();
    
        return 0;
    }
    

    The code above implements an http server that exits the program upon receiving a /stop request. The flag in process is necessary, because process runs concurrently and only one thread may call the shutdown operation.

    What if the server needs to call an asynchronous operation outside the Workflow framework

    Still use a counter. In the callback of the other asynchronous framework, perform the counter's count operation.

    void other_callback(server_task, counter, ...)
    {
        server_task->get_resp()->append_output_body(result);
        counter->count();
    }
    
    void process(WFHttpTask *server_task)
    {
        WFCounterTask *counter = WFTaskFactory::create_counter_task(1, nullptr);
        OtherAsyncTask *other_task = create_other_task(other_callback, server_task, counter); // a task outside the workflow framework
        other_task->run();
        series_of(server_task)->push_back(counter);
    }
    

    Note that in the code above, counter->count() may be called before the counter task starts. Whatever the timing, the program is completely correct.
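The "count before start is fine" guarantee can be modeled in plain single-threaded C++ (a hypothetical sketch; CounterModel is not WFCounterTask):

```cpp
#include <functional>
#include <utility>

// Hypothetical single-threaded model of the timing guarantee: a counter with
// target n completes once count() has been called n times AND the counter
// has been started, in either order.
class CounterModel {
public:
    CounterModel(int target, std::function<void()> callback)
        : remaining_(target), callback_(std::move(callback)) {}

    void count()
    {
        --remaining_;
        maybe_complete();
    }

    void start()
    {
        started_ = true;
        maybe_complete();
    }

private:
    void maybe_complete()
    {
        // Completion fires exactly once, whichever of count()/start() is last.
        if (started_ && remaining_ == 0 && callback_) {
            std::function<void()> cb = std::move(callback_);
            callback_ = nullptr;
            cb();
        }
    }

    int remaining_;
    bool started_ = false;
    std::function<void()> callback_;
};
```

Whichever of count() and start() happens last triggers completion, so the external framework's callback may fire before the counter is pushed into the series without breaking anything.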

    Why do a few https sites fail to fetch

    If the browser can access the site but fetching with workflow fails, it is most likely because the site requires SNI, a TLS extension. You can enable workflow's client-side SNI through the global configuration:

        struct WFGlobalSettings settings = GLOBAL_SETTINGS_DEFAULT;
        settings.endpoint_params.use_tls_sni = true;
        WORKFLOW_library_init(&settings);
    

    Enabling this feature has a cost: SNI is then enabled for all https sites, and accesses to different domain names on the same IP address can no longer reuse SSL connections. Alternatively, the upstream feature lets you enable SNI only for one specific domain name:

    #include "workflow/UpstreamManager.h"
    
    int main()
    {
        UpstreamManager::upstream_create_weighted_random("www.sogou.com", false);
        struct AddressParams params = ADDRESS_PARAMS_DEFAULT;
        params.endpoint_params.use_tls_sni = true;
        UpstreamManager::upstream_add_server("www.sogou.com", "www.sogou.com", &params);
        ...
    }
    

    The code above sets www.sogou.com as an upstream name, adds a server with the same name, and enables SNI on it.

    How to access http resources through a proxy server

    Method 1 (applies only to http tasks and cannot redirect): create the http task with the proxy server's address, then reset request_uri and the Host header. Suppose we want to access http://www.sogou.com/ through the proxy server www.proxy.com:8080:

    task = WFTaskFactory::create_http_task("http://www.proxy.com:8080", 0, 0, callback);
    task->set_request_uri("http://www.sogou.com/");
    task->set_header_pair("Host", "www.sogou.com");
    

    Method 2 (general, but some proxy servers only support HTTPS; method 1 is still recommended for plain HTTP): create the http task through the interface that takes a proxy_url:

    class WFTaskFactory
    {
    public:
        static WFHttpTask *create_http_task(const std::string& url,
                                            const std::string& proxy_url,
                                            int redirect_max, int retry_max,
                                            http_callback_t callback);
    };
    

    The format of proxy_url is: http://user:passwd@your.proxy.com:port/. The proxy url must begin with "http://", not "https://", and the default port is 80. This method works for proxying both http and https URLs, supports redirects, and keeps using the proxy server when redirecting.

    documentation 
    opened by Barenboim 40
  • [KafkaClient Bug] Setting the timestamp with set_offset_timestamp does not take effect - Not Bug

    After setting offset_timestamp, the messages fetched are not the data after that point in time.

    workflow version: 0.9.7

    opened by hiberabyss 34
  • Confused about the http server mechanism the framework provides

    Hi, I have read the three http server examples provided, but I still don't understand how a service answers a request: 1. After receiving an http request and setting up the response, does the framework send the reply automatically, or do I need to create an http request and send the reply manually? 2. How do I distinguish and handle different request paths — through the address sent up by the client?

    opened by xiaoshizijiayou 26
  • Windows version code

    When will the code of the Windows version be released?

    opened by MaybeShewill-CV 16
  • Error compiling the test code on mac: openssl not found, even though it was installed via homebrew

    Communicator.h:28:10: fatal error: 'openssl/ssl.h' file not found #include <openssl/ssl.h>

    opened by NeilStone123 15
  • Help: workflow support for long-connection push services

    Business requirements:

    1. Support tcp long connections; websocket support would be even better;
    2. The server can actively push messages while a client is online (not the client-heartbeat, one-request-one-reply model);
    3. When a client's network drops abnormally, workflow can actively detect the disconnected state when the server pushes a message to that client (the network layer actively reads the socket fd and asynchronously calls back to the application layer?);
    opened by cxxjava 15
  • How can the server wait for an asynchronous event outside this framework? -- Good Question

    Background: there are N (hundreds or more) clients that each request file resources 1.txt 2.txt ... N.txt in a similar way, and none of these files exist before the request. After workflow receives the url, it launches a python script containing the function that generates the file, then polls in a while loop until generation succeeds before continuing:

    Added code

    #include <sys/stat.h>
    /* function that checks whether a file exists */
    /* https://stackoverflow.com/questions/12774207/fastest-way-to-check-if-a-file-exist-using-standard-c-c11-c */
    inline bool exists_test3 (const std::string& name) {
      struct stat buffer;   
      return (stat (name.c_str(), &buffer) == 0); 
    }
    
    /*** logic added in process; it goes here because fd = open() is in this function, and the file must exist before open() can be called ***/
    
    void process(WFHttpTask *server_task, const char *root)
    {
    	HttpRequest *req = server_task->get_req();
    	HttpResponse *resp = server_task->get_resp();
    	const char *uri = req->get_request_uri();
    	const char *p = uri;
    
    	printf("Request-URI: %s\n", uri);
    	while (*p && *p != '?')
    		p++;
    
    	std::string abs_path(uri, p - uri);
    	abs_path = root + abs_path;
    	if (abs_path.back() == '/')
    		abs_path += "index.html";
    	/****************************************** added code ******************************************/
    	/* check whether the file exists */
    	std::string uri_str(uri);
    	if (uri_str.find(".txt") != uri_str.npos) {  /* if the .txt suffix matches, trigger generation */
    
    		// call a python script to generate abs_path
    		std::string cmd = "python ~/generate.py \""+uri_str+"\""+" para"+" &";
    		// printf("cmd: %s\n", cmd.c_str());
    		system(cmd.c_str());
    
                   
    		// poll in a loop; if nothing is generated within 6 seconds, send the response anyway
    		int cnt = 0;
    		while (! exists_test3(abs_path)) {
    			// printf("file does not exist: %s\n", abs_path.c_str());
    			// sleep(1); 
    			usleep(100000); // check every 0.1 seconds whether the file has been generated
    			cnt++; 
    			if (cnt == 60) break;
    		}
    	}
    	/****************************************** end of added code ******************************************/
    
    	resp->add_header_pair("Server", "Sogou C++ Workflow Server");
    	resp->add_header_pair("Access-Control-Allow-Origin", "*");
    
    	int fd = open(abs_path.c_str(), O_RDONLY); // reaching here means the file has been generated, so we can continue
    	if (fd >= 0)
    
            ......
    }
    

    Question
    The logic added in process sits here because fd = open() is in this function, and the file must exist before open() can be called.

    On the surface the control flow looks fine, because each request blocks in its own thread and finishes independently. Adding a busy loop in process does not seem to stop other url requests from coming in, so does that mean it is safe to put it here?

    In actual runs, however, a file sometimes arrives in 5 seconds and sometimes in 9, which seems slower than the actual file generation. I am not sure whether it is a network problem on either side, or whether, as the number of requests grows, this style of writing slows down the http server logic.

    opened by ghost 13
  • A question about the limit on concurrent tasks in a graph

    While building a graph flow I tried to test the concurrency, but in the example below the number of node tasks running concurrently seems to be limited by the number of machine cores. Is there a parameter to set this somewhere? Each run executes only as many nodes at a time as there are cpu cores, in several batches, when in fact they could all run concurrently. I expected 100 threads to be created, but only about as many threads as there are logical CPU cores were created.

    opened by zhaojinzhou 13
  • workflow v0.9.9 re-released!

    Improvements

    • Optimize Dns Cache's searching speed and memory size
    • Optimize route manager and dns resolver
    • Increase server task's performance by reducing some atomic operations
    • Increase global performance by removing some singletons
    • Always use dns resolver as name service policy when redirecting
    • Add WFServer::get_listen_addr() for server started on a random port
    • Support kafka kip 329

    Bug Fixes

    • Fix service governance's ref count bug
    • Fix mysql sequence id bug when retrying big request
    • Fix VNSWRR upstream bug
    • Fix dns client bug when host name has trailing dot
    • Fix URL parser fatal bug
    enhancement 
    opened by Barenboim 1
  • MYSQL client: inserting several rows in one transaction fails with "Bad message"

    Table schema

    mysql> desc test;
    +-------+--------------+------+-----+---------+-------+
    | field | type         | null | key | default | extra |
    +-------+--------------+------+-----+---------+-------+
    | id    | int(11)      | YES  |     | NULL    |       |
    | name  | varchar(256) | YES  |     | NULL    |       |
    | time  | bigint(20)   | YES  |     | NULL    |       |
    +-------+--------------+------+-----+---------+-------+
    

    The insert sql statements

        char query_str[] = "begin;"
                           "insert into test(id, name, time) values(1, 'name1',91848);"
                           "insert into test(id, name, time) values(2, 'name2',91848);"
                           "insert into test(id, name, time) values(3, 'name3',91848);"
                           "insert into test(id, name, time) values(4, 'name4',91848);"
                           "insert into test(id, name, time) values(5, 'name5',91848);"
                           "insert into test(id, name, time) values(6, 'name6',91848);"
                           "insert into test(id, name, time) values(7, 'name7',91848);"
                           "insert into test(id, name, time) values(8, 'name8',91848);"
                           "insert into test(id, name, time) values(9, 'name9',91848);"
                           "insert into test(id, name, time) values(10, 'name10',91848);"
                           "commit;";
    

    Result

    task packet_type=1, err_msg:Bad message
    
    opened by zhl0921 2
  • MySQL Connection Spike from Workflow

    Problem statement: we have multiple MySQL shards, and only one shard shows an abnormal connection spike (lasting 3 minutes) at a time.

    We also have our own client throttling logic, which limits the maximum number of on-the-fly concurrent requests and uses round robin for load balancing. If this were a bug in our throttle limit, all shards should show connection spikes at the same time, not just one.

    Please help on this issue.

    opened by redkongdong 4
  • About opening multiple HTTP servers in one WORKFLOW process

    One process creates several WFHttpServers on different ports, say the three servers 10000, 10002 and 10004. The process logic of the 10002 HTTPServer is particularly heavy, say more than 500ms of computation. Will it affect the efficiency of the other servers?

    documentation 
    opened by willreno1987 8
  • How to cross-compile workflow with the gcc-linaro-7.5.0 aarch64-linux-gnu toolchain

    I just wrote a detailed csdn post about it; see the blog: https://blog.csdn.net/a1054087304/article/details/121518116

    opened by Shawn-Tao 0
  • Asynchronous file IO based on iocp on Windows

    Workflow supports asynchronous file IO tasks. The current implementation uses the operating system's asynchronous IO on Linux, and multiple threads on non-Linux systems. The same need now exists on Windows, so contributors familiar with iocp development are welcome to help build it~

    For reference, here is a rough outline of the existing asynchronous file IO flow:

    1. The user-level interface; take create_pread_task() as an example:
    class WFTaskFactory                                                             
    {
        static WFFileIOTask *create_pread_task(const std::string& pathname,                               
                                               void *buf,                              
                                               size_t count,                           
                                               off_t offset,                           
                                               fio_callback_t callback);
        ...
    
    2. Everything inside Workflow uses behavior derivation, so the task the user receives is of type WFFileIOTask *, while internally a __WFFilepreadTask is created for the pread behavior:
    WFFileIOTask *WFTaskFactory::create_pread_task(const std::string& pathname,                            
                                                   void *buf,                          
                                                   size_t count,                       
                                                   off_t offset,                       
                                                   fio_callback_t callback)            
    {                                                                                  
        return new __WFFilepreadTask(pathname, buf, count, offset,                             
                                     WFGlobal::get_io_service(),                         
                                     std::move(callback));                          
    }
    
    3. __WFFilepreadTask needs to implement prepare(), called by the internal IOService, which performs the start-up operations related to asynchronous files:
    class __WFFilepreadTask : public WFFilepreadTask                          
    { 
    protected:                                                                         
        virtual int prepare()
        {
        // this calls prep_preadv() of the IOSession layer; implementations differ across systems
        }
    
    4. On Linux and Windows, the definitions of WFFileTask and IORequest are identical. The differences lie in the IOService and IOSession mentioned above. IOService is the service that takes over all asynchronous file IO; IOSession is the context of one IO request, and needs to be implemented according to the iocp mechanism on Windows. Although Linux uses the system's libaio, the work to be done is similar:
    class IOService                                                                    
    {                                                                                  
    public:   
        int request(IOSession *session); // for the user to submit a file io task
    
    private:                                                                           
        int event_fd;  // the eventfd hooked into the libaio mechanism; one fd serves many IO events, and on windows we likewise hope to use as few system resources as possible
                                                                    
    private:                                                                        
        struct list_head session_list; // a linked list managing the tasks currently in flight
                                                                   
    private:                                                                        
        static void *aio_finish(void *context); // the callback hooked into the libaio mechanism; event notifications arrive here
    
        ...
    };
    
    5. This IOService binds its eventfd and callback into the communicator via CommScheduler::io_bind(), and likewise needs io_unbind(). Other internal interfaces can be added as needed by the iocp mechanism. The goal is that when the system has an asynchronous event, the framework is notified through the mechanism registered with the communicator and invokes the handle() of the corresponding context, returning to the task's logic:
    void Communicator::handle_aio_result(struct poller_result *res) 
    {
        ...
        session->handle(state, error);
    
    
    6. If you want this asynchronous file service used by default, refer to the current __FileIOService: derive from IOService and provide an interface in the global singleton, which also guarantees that users who don't use asynchronous file IO won't create the related resources:
    class __CommManager
    {
        IOService *get_io_service()
        {
            if (!fio_flag_)
                fio_service_ = new __FileIOService(&scheduler_);
            ...
        }
    }
    

    The above is the basic flow of asynchronous file IO. We hope a Windows implementation will keep Workflow's usual extreme thrift with resources and rigor about high concurrency. If you know iocp and would like to try, feel free to reach out anytime.

    help wanted 
    opened by holmes1412 11
  • fix: task_unittest.cc compilation errors on windows

    fix: task_unittest.cc compilation errors on windows

    Update from master branch and skip linux-only test cases

    opened by wixom 1
  • WebSocket support is broken

    The WebSocket support is broken: the random masking key is not implemented, which makes the sample program websocket_cli crash and exit.

    First I wrote a simple WebSocket echo server with Node.js:

    const ws = require('ws');
    const server = new ws.Server({ port: 8080 });
    
    server.on('connection', (socket) => {
      socket.on('message', (data) => {
        console.log(data);
        socket.send(data);
      });
    });
    
    console.log(`Listening on ws://localhost:8080`);
    

    Then I compiled and ran the websocket_cli program in the tutorial. It crashed and exited while sending a frame, and the server reported the following error:

    $ node server.js
    Listening on ws://localhost:8080
    events.js:377
          throw er; // Unhandled 'error' event
          ^
    
    RangeError: Invalid WebSocket frame: MASK must be set
        at Receiver.getInfo (/Users/luangong/Documents/websocket/node_modules/ws/lib/receiver.js:289:16)
        at Receiver.startLoop (/Users/luangong/Documents/websocket/node_modules/ws/lib/receiver.js:136:22)
        at Receiver._write (/Users/luangong/Documents/websocket/node_modules/ws/lib/receiver.js:83:10)
        at writeOrBuffer (internal/streams/writable.js:358:12)
        at Receiver.Writable.write (internal/streams/writable.js:303:10)
        at Socket.socketOnData (/Users/luangong/Documents/websocket/node_modules/ws/lib/websocket.js:1116:35)
        at Socket.emit (events.js:400:28)
        at addChunk (internal/streams/readable.js:293:12)
        at readableAddChunk (internal/streams/readable.js:267:9)
        at Socket.Readable.push (internal/streams/readable.js:206:10)
    Emitted 'error' event on WebSocket instance at:
        at Receiver.receiverOnError (/Users/luangong/Documents/websocket/node_modules/ws/lib/websocket.js:1002:13)
        at Receiver.emit (events.js:400:28)
        at emitErrorNT (internal/streams/destroy.js:106:8)
        at emitErrorCloseNT (internal/streams/destroy.js:74:3)
        at processTicksAndRejections (internal/process/task_queues.js:82:21) {
      code: 'WS_ERR_EXPECTED_MASK',
      [Symbol(status-code)]: 1002
    }
    

    On macOS Catalina (10.15.7), running websocket_cli reports:

    $ ./websocket_cli ws://localhost:8080
    send callback() state=0 error=0
    opcode=8
    websocket_cli(20393,0x70000bd67000) malloc: *** error for object 0x7f98c58048e2: pointer being freed was not allocated
    websocket_cli(20393,0x70000bd67000) malloc: *** set a breakpoint in malloc_error_break to debug
    [1]    20393 abort      ./websocket_cli ws://localhost:8080
    

    A web search suggested the WebSocket masking key was not set, so I added one line before sending the message:

      WebSocketFrame *msg = task->get_msg();
    + msg->set_masking_key(0x12345678);
      msg->set_text_data("This is Workflow websocket client.");
    

    With that, the server can receive and send the first message fine, but the client crashes again when closing the connection:

    $ ./websocket_cli ws://localhost:8080
    send callback() state=0 error=0
    opcode=2
    libc++abi.dylib: terminating with uncaught exception of type std::__1::system_error: mutex lock failed: Invalid argument
    [1]    21025 abort      ./websocket_cli ws://localhost:8080
    

    The server still complained that no masking key was set. The WebSocket spec requires a different masking key for every client-to-server frame, so I manually added a line of msg->set_masking_key(...) to WebSocketClient::create_ping_task() and WebSocketClient::create_close_task() as well. Now closing the connection works and the server no longer crashes, but websocket_cli still crashes with the same error as before:

    $ ./websocket_cli ws://localhost:8080
    send callback() state=0 error=0
    opcode=2
    libc++abi.dylib: terminating with uncaught exception of type std::__1::system_error: mutex lock failed: Invalid argument
    [1]    22975 abort      ./websocket_cli ws://localhost:8080
    

    Another web search suggests this can happen when a mutex is locked in one thread but unlocked in another (https://stackoverflow.com/q/66773247). Since this touches workflow's thread management and mutex usage, I didn't dig any further. Could @holmes1412 and @Barenboim please take a look?

    opened by luangong 10
  • Support monitoring a file descriptor

    Support monitoring a file descriptor

    Could provide either a one-shot or a continuous monitoring interface

    opened by wtLaoFu 2
  • Support the gRPC protocol

    Support the gRPC protocol

    gRPC is widely used in microservices these days; strongly suggest supporting a gRPC client/server

    opened by fengyonglv 2
Releases(v0.9.9)
  • v0.9.9(Dec 3, 2021)

    Improvements

    • Optimize DNS cache lookup speed and memory usage
    • Optimize route manager and DNS resolver
    • Improve server task performance by reducing some atomic operations
    • Improve global performance by removing some singletons
    • Always use the DNS resolver as the name service policy when redirecting
    • Add WFServer::get_listen_addr() for servers started on a random port
    • Support Kafka KIP-329

    Bug Fixes

    • Fix service governance's ref count bug
    • Fix MySQL sequence id bug when retrying a big request
    • Fix VNSWRR upstream bug
    • Fix DNS client bug when host name has a trailing dot
    • Fix fatal URL parser bug
    Source code(tar.gz)
    Source code(zip)
  • v0.9.8(Sep 30, 2021)

    Improvements

    • Enable creating file IO tasks with a path name
    • Add server task's push() interface
    • Optimize poller speed and memory usage
    • Optimize URI parser; more than 50% faster
    • Optimize HTTP implementation

    Bug Fixes

    • Fix crash when resolv.conf is empty
    • Fix Kafka client's memory leak
    • Fix MySQL transaction checking
    • Fix bazel compiling problem
    Source code(tar.gz)
    Source code(zip)
  • v0.9.7(Aug 8, 2021)

    Improvements

    • Implement DNS protocol and add DNS asynchronous client.
    • Use asynchronous DNS as default.
    • Optimize load balancing.
    • Add bazel support and add selective compiling.
    • Support longer timer.
    • Add WFResourcePool.

    Bug fixes

    • Fix Redis double SELECTs problem.
    • Fix upstream_replace_server() bug.
    • Fix timerfd problem on some WSL platforms.
    Source code(tar.gz)
    Source code(zip)
  • v0.9.6(Jun 3, 2021)

    Improvements

    • Add SSLWrapper.
    • Support HTTP/HTTPS tasks through a proxy.
    • Support MySQL SSL client.
    • Add vnswrr upstream policy.

    Bug fixes

    • Fix upstream concurrency bug.
    • Fix MySQL multi-resultset handling for INSERTs.
    • Fix Kafka client sasl auth bug.
    • Add -no-rtti compiling flag for kafka to be compatible with snappy 1.1.9.
    Source code(tar.gz)
    Source code(zip)
  • v0.9.5(Apr 12, 2021)

    Improvements

    • Support TLS SNI on both client and server sides;
    • Upstream now skips selection history;
    • Kafka supports SASL auth;

    Bug Fixes

    • Fix default port bug;
    • Fix MySQL decode overflow bug;
    • Fix MySQL parsing of suffixed ok_packet;
    • Modify Kafka version API logic;
    Source code(tar.gz)
    Source code(zip)
  • v0.9.4(Mar 17, 2021)

    Improvements

    • Add WFNameService and refactor "Upstream" modules.
    • Update the definition of WFServer::stop()'s finish time.
    • Kafka client supports offset storage.
    • Redis supports cluster commands MOVED and ASK.
    • Support VCPKG.

    Bug fixes

    • Fix crash when dismissing a named counter.
    • Fix WFGoTask implementation.
    • Fix MySQL int/ulonglong length overflow.
    Source code(tar.gz)
    Source code(zip)
  • v0.9.3(Jan 13, 2021)

    Improvements

    • Add Kafka client.
    • Improve client task performance.

    Bug fixes:

    • Fix several MySQL parser bugs.
    • Fix iovcnt==0 problem on macOS.
    Source code(tar.gz)
    Source code(zip)
  • v0.9.2(Nov 13, 2020)

    Improvements:

    • Add WFGraphTask for building DAG.
    • Add WFDynamicTask.
    • Make SeriesWork derivable.
    • Improve MySQL client.

    Bug Fixes:

    • Fix MySQL protocol parsing bug.
    • Fix EncodeStream bug.

    Last release before kafka protocol.

    Source code(tar.gz)
    Source code(zip)
  • v0.9.1(Sep 30, 2020)

    Improvements:

    • Complete English documents.
    • Optimize kernel code; the message queue is now a standalone module.
    • Support MySQL character_set_results.
    • Add benchmark code and documents.

    Bug Fixes:

    • Fix crashing of MySQL client when the local host is disallowed.
    • Fix MySQL client's problem when using short connection.
    • Fix LRU cache bug when cache is full.
    • Fix upstream bug of division by zero.
    Source code(tar.gz)
    Source code(zip)
  • v0.9.0(Aug 17, 2020)

Owner
Sogou-inc